
Evolutionary Annealing

Evolutionary annealing is a martingale-based optimizer: it constructs an approximate martingale in order to sample the annealing distributions familiar from simulated annealing. The algorithm stores every point it has evaluated in a binary partition tree that segments the search domain into disjoint regions. It is an evolutionary method and proceeds in two phases, selection and variation.
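
To make the data structure concrete, here is a minimal Python sketch of one plausible partition tree over an axis-aligned box. The names (Node, insert_point, contains) and the split rule (cutting halfway between the old and new point along their most-separated coordinate) are illustrative assumptions for this sketch, not the construction from the thesis.

    import math

    class Node:
        """One region of the search domain; leaves hold a single sample."""
        def __init__(self, lower, upper, point=None, score=None):
            self.lower, self.upper = tuple(lower), tuple(upper)
            self.point, self.score = point, score
            self.left = self.right = None

        def volume(self):
            return math.prod(u - l for l, u in zip(self.lower, self.upper))

        def is_leaf(self):
            return self.left is None

    def contains(node, point):
        return all(l <= x <= u for l, x, u in zip(node.lower, point, node.upper))

    def insert_point(node, point, score):
        """Descend to the leaf whose region holds `point`, then split that
        leaf so the old sample and the new one occupy disjoint children."""
        while not node.is_leaf():
            node = node.left if contains(node.left, point) else node.right
        if node.point is None:          # empty leaf: just store the sample
            node.point, node.score = point, score
            return
        # Cut halfway between the two points, along the coordinate where
        # they are farthest apart (exact duplicates are ignored here).
        d = max(range(len(point)), key=lambda i: abs(point[i] - node.point[i]))
        cut = 0.5 * (point[d] + node.point[d])
        left_upper, right_lower = list(node.upper), list(node.lower)
        left_upper[d], right_lower[d] = cut, cut
        node.left = Node(node.lower, left_upper)
        node.right = Node(right_lower, node.upper)
        for p, s in ((node.point, node.score), (point, score)):
            child = node.left if p[d] <= cut else node.right
            child.point, child.score = p, s
        node.point = node.score = None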

In the selection phase, the algorithm probabilistically walks the partition tree to select a previously evaluated point. At each node it chooses either the left or the right branch depending on two factors: (1) the aggregate scores of the points spanned by the left and right children, and (2) the volume of the search domain spanned by each child. The scores are annealed with a cooling schedule, as in simulated annealing, so that the search eventually focuses on the best points. The volume factor forces exploration of sparsely sampled regions of the search space, so that the search does not converge prematurely, before the global optimum is found.
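
A hedged sketch of one such walk, reusing the Node sketch above: the branch weight exp(-score / T) * volume is one plausible way to combine annealed fitness (minimization assumed) with region volume, and aggregating a subtree by its best score is likewise an assumption of this sketch; the thesis defines the exact weighting.

    import math
    import random

    def best_score(node):
        """Aggregate a subtree by its best (lowest) score; a real
        implementation would cache this rather than recompute it."""
        if node.is_leaf():
            return node.score if node.score is not None else float("inf")
        return min(best_score(node.left), best_score(node.right))

    def select(tree, temperature):
        """Walk from the root to a leaf, branching randomly in proportion
        to annealed subtree score times region volume."""
        node = tree
        while not node.is_leaf():
            def weight(child):
                # Low scores and large volumes both increase the weight.
                return math.exp(-best_score(child) / temperature) * child.volume()
            w_left, w_right = weight(node.left), weight(node.right)
            total = w_left + w_right
            p_left = w_left / total if total > 0 else 0.5  # guard underflow
            node = node.left if random.random() < p_left else node.right
        return node.point, node    # the chosen sample and its leaf region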

In the variation phase, methods from evolutionary computation are used to perturb the selected point, proposing a new point to evaluate. The magnitude of the variation is usually scaled to the size of the selected partition region. Evolutionary annealing is computationally expensive, but with a proper cooling schedule it is a powerful and effective optimizer: on continuous domains of 5 to 25 dimensions it solves many optimization problems as well as or better than other state-of-the-art algorithms.
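
As a sketch, one simple instance of region-scaled variation is Gaussian perturbation with standard deviation proportional to the side lengths of the selected leaf's region; the scale factor and the clipping to the domain bounds are assumptions of this example, not the thesis's operator.

    import random

    def vary(point, leaf, domain_lower, domain_upper, scale=0.5):
        """Perturb each coordinate with Gaussian noise whose magnitude
        tracks the width of the leaf's region, clipped to the domain."""
        child = []
        for x, lo, hi, dl, du in zip(point, leaf.lower, leaf.upper,
                                     domain_lower, domain_upper):
            sigma = scale * (hi - lo)      # smaller region => finer steps
            child.append(min(max(x + random.gauss(0.0, sigma), dl), du))
        return tuple(child)

    # Hypothetical driver tying the phases together for an objective f
    # on the box [lower, upper], with some cooling schedule T(n):
    #
    #   tree = Node(lower, upper)
    #   insert_point(tree, x0, f(x0))            # seed with one sample
    #   for n in range(budget):
    #       point, leaf = select(tree, T(n))     # e.g. T(n) = 1 / log(n + 2)
    #       child = vary(point, leaf, lower, upper)
    #       insert_point(tree, child, f(child))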

The details of evolutionary annealing are presented in Chapters 11 and 12 of my thesis.

About Me

Alan J. Lockett

I am looking for an assistant professorship to research the theory of feedback controllers for complex autonomous systems, from smart homes to self-driving cars and humanoid robots. A CV and research statement can be found in the links to the left.

I have published on the theory of global optimization, humanoid robotics, neural networks for perception and control, and opponent modelling in games, and I am working on a book, under contract with Springer, that expands my Ph.D. thesis on the theory of global optimization.

I am currently a postdoctoral fellow at the Dalle Molle Institute for Artificial Intelligence Studies in Lugano, Switzerland, where I work with Juergen Schmidhuber on a US National Science Foundation postdoctoral grant. My Ph.D. is from the University of Texas, where I studied with Risto Miikkulainen. See my About page for contact information and more.