Beyond Gradients

Alan J. Lockett's Research Pages

Martingale Optimization

When an optimizer is run on an objective function, it collects information about the objective at the points it evaluates. This information can be used to build a model of the objective that becomes more accurate as more points are evaluated. Such a model can be implemented as a martingale, a stochastic process that models a random quantity by conditioning on a sequentially increasing body of information. An optimization method that does so is a martingale optimizer.
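In standard notation (the general definition, not anything specific to one optimizer): let $\mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots$ be a filtration, where $\mathcal{F}_n$ represents the information available after $n$ evaluations. A sequence of random variables $(X_n)$ adapted to this filtration is a martingale when

\[
\mathbb{E}\left[\, X_{n+1} \mid \mathcal{F}_n \,\right] = X_n \quad \text{almost surely.}
\]

In a martingale optimizer, $\mathcal{F}_n$ is generated by the first $n$ evaluated points and their objective values, and $X_n$ is the model of the objective built from that information.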

Intuitively, whereas many other methods discard previously evaluated points, a martingale optimizer retains every evaluated point and attempts to use all of the information gleaned from evaluating the objective function. Many other global optimization methods rely on the theory of Markov chains to achieve global asymptotic convergence; martingale optimizers instead rely on the martingale convergence theorems, specifically Lévy's zero-one law.
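Lévy's zero-one law, in its standard form, is the relevant statement: for an integrable random variable $X$ and $\mathcal{F}_\infty = \sigma\!\left(\bigcup_n \mathcal{F}_n\right)$,

\[
\mathbb{E}\left[\, X \mid \mathcal{F}_n \,\right] \;\longrightarrow\; \mathbb{E}\left[\, X \mid \mathcal{F}_\infty \,\right] \quad \text{almost surely and in } L^1.
\]

Informally: as the optimizer accumulates evaluations, its conditional model converges to the best model supported by the full information stream.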

Evolutionary Annealing

Evolutionary annealing is a martingale optimizer that constructs an approximate martingale in order to sample the annealing distributions familiar from simulated annealing. Evolutionary annealing stores the points it has evaluated in a binary partition tree that segments the search domain into disjoint sets. This partition tree is used to propose new points to evaluate, using a selection and variation scheme derived from evolutionary methods. The details of evolutionary annealing comprise Chapters 11 and 12 of my thesis.
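The following toy sketch conveys the flavor of the procedure in one dimension. It is a simplification for illustration, not the algorithm from the thesis: the function name, the flat leaf list standing in for the partition tree, the uniform-within-region variation, and the logarithmic cooling schedule are all assumptions made for this sketch.

```python
import math
import random

def evolutionary_annealing_1d(f, lo, hi, budget=200, seed=0):
    """Toy 1-D sketch of annealed, partition-based sampling (minimizes f).

    Leaves of an (implicit) binary partition of [lo, hi] are kept as a
    flat list of (left, right, x, fx) tuples, where (x, fx) is the
    evaluated point associated with the leaf.  Each step selects a leaf
    with probability proportional to exp(-(fx - f_best) / T_n), samples
    a new point uniformly inside it, and splits the leaf at the
    midpoint between the old and new points.
    """
    rng = random.Random(seed)
    x0 = rng.uniform(lo, hi)
    fx0 = f(x0)
    leaves = [(lo, hi, x0, fx0)]
    best = (x0, fx0)

    for n in range(1, budget):
        T = 1.0 / math.log(n + 1.0)         # illustrative logarithmic cooling
        f_best = min(fx for (_, _, _, fx) in leaves)
        weights = [math.exp(-(fx - f_best) / T) for (_, _, _, fx) in leaves]
        # Roulette-wheel selection over the annealed leaf weights.
        r, acc, idx = rng.uniform(0.0, sum(weights)), 0.0, 0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                idx = i
                break
        a, b, x_old, f_old = leaves[idx]
        x_new = rng.uniform(a, b)           # variation: resample the region
        f_new = f(x_new)
        if f_new < best[1]:
            best = (x_new, f_new)
        # Split the selected leaf between the old and new points,
        # keeping both evaluated points alive in the leaf list.
        m = 0.5 * (x_old + x_new)
        (xl, fl), (xr, fr) = sorted([(x_old, f_old), (x_new, f_new)])
        leaves[idx] = (a, m, xl, fl)
        leaves.append((m, b, xr, fr))
    return best

# Example: a multimodal objective on [-5, 5].
if __name__ == "__main__":
    print(evolutionary_annealing_1d(lambda x: x * x + 2.0 * math.sin(5.0 * x),
                                    -5.0, 5.0))
```

Keeping every evaluated point alive in the leaf list is the martingale ingredient: the sampling distribution at step $n$ depends on all information collected so far, not just a current population.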

Evolutionary Annealing Convergence

Under certain assumptions, evolutionary annealing asymptotically converges to the global optimum. These assumptions essentially require the search domain and the objective function to be non-pathological. Additionally, the annealing process must not proceed too quickly, or else the globally optimal regions can slip through the algorithm's grasp. The proof demonstrates that under these conditions, evolutionary annealing is an approximate Doob martingale that converges to a limit distribution that samples the true global optima with probability one. The current version of the proof can be found in Chapter 11 of my thesis.
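For orientation, the annealing distributions from simulated annealing concentrate on the global optima as the temperature drops: for a minimization problem,

\[
A_n(dx) \;\propto\; \exp\!\left( -\frac{f(x)}{T_n} \right) dx, \qquad T_n \downarrow 0.
\]

The requirement that cooling not proceed too quickly echoes the classic logarithmic schedules known from simulated annealing, of the form $T_n \ge c / \log n$ for a problem-dependent constant $c$. The precise condition used in the proof is the one stated in Chapter 11; the schedule above is only the familiar analogue.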

Neuroannealing

Neuroannealing is the application of evolutionary annealing to the space of neural networks. Neural networks encode a function from inputs to outputs and can be used to solve control problems. In order to apply evolutionary annealing to neural networks, neuroannealing uses a specialized partitioning method to generate the partition tree, and the principles of neuroevolution are applied in the variation phase to generate new networks. Neuroannealing effectively learns controllers for complex but moderately sized tasks. The details can be found in Chapter 13 of my thesis.
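As a rough illustration of neuroevolution-style variation, here is a generic mutation operator over a minimal network genome. Everything in this sketch (the genome dictionary format, the mutation probabilities, the name mutate_network) is hypothetical; it shows standard neuroevolution-style operators, not the specific partitioning or variation methods defined in Chapter 13.

```python
import copy
import random

def mutate_network(genome, rng, p_weight=0.8, p_conn=0.15):
    """Generic neuroevolution-style variation (illustrative only):
    perturb a weight, add a connection, or split an existing
    connection with a new hidden node.

    A genome is {'nodes': [ids...], 'conns': {(src, dst): weight}}.
    """
    child = copy.deepcopy(genome)
    roll = rng.random()
    if roll < p_weight:
        # Gaussian perturbation of one randomly chosen weight.
        edge = rng.choice(list(child['conns']))
        child['conns'][edge] += rng.gauss(0.0, 0.5)
    elif roll < p_weight + p_conn:
        # Add a new connection between two random nodes (no-op if present).
        src, dst = rng.choice(child['nodes']), rng.choice(child['nodes'])
        child['conns'].setdefault((src, dst), rng.gauss(0.0, 1.0))
    else:
        # Split a connection: insert a hidden node on an existing edge.
        (src, dst), w = rng.choice(list(child['conns'].items()))
        new_id = max(child['nodes']) + 1
        child['nodes'].append(new_id)
        del child['conns'][(src, dst)]
        child['conns'][(src, new_id)] = 1.0
        child['conns'][(new_id, dst)] = w
    return child

# Example genome: two inputs (0, 1), one output (2), fully connected.
rng = random.Random(1)
g = {'nodes': [0, 1, 2], 'conns': {(0, 2): 0.3, (1, 2): -0.7}}
print(mutate_network(g, rng))
```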

About Me

Alan J. Lockett

I am seeking an assistant professorship in order to research the theory of feedback control for complex autonomous systems, from smart homes to self-driving cars and humanoid robots. A CV and research statement can be found in the links to the left.

I have published on the theory of global optimization, humanoid robotics, neural networks for perception and control, and opponent modelling in games. I am also expanding my Ph.D. thesis on the theory of global optimization into a book under contract with Springer.

I am currently a postdoctoral fellow at the Dalle Molle Institute for Artificial Intelligence Studies in Lugano, Switzerland, where I work with Juergen Schmidhuber under a US National Science Foundation postdoctoral grant. My Ph.D. is from the University of Texas, where I studied with Risto Miikkulainen. See my About page for contact information and more.