Beyond Gradients

Alan J. Lockett's Research Pages

Job Search

I'm now on the job market looking for a tenure-track assistant professorship for the 2015-2016 school year. Specifically, I'm looking for a position where I can start a lab to study the theory of feedback controllers for complex and heterogeneous autonomous systems. My research involves optimization, machine learning, neural networks, control theory, state estimation, stochastic process theory, functional analysis, game theory, and robotics. As such, I could contribute well to departments ranging from Computer Science to Applied Mathematics and Engineering (with a focus on computation and autonomous systems). A curriculum vitae can be found here. For generic research and teaching statements, see the links to the left.

Several of my research topics, especially deep neural networks and robotic control, currently receive substantial attention and are thus good candidates for research grants. In fact, I am currently funded by a two-year US National Science Foundation grant to integrate perception and control tasks in a humanoid robot using deep neural networks. For this project, an iCub robot is being trained to play chess. The first publication from this research, co-authored with a Ph.D. student, was submitted in October 2014, and three more papers will follow in Spring 2015.

My work on the theory of stochastic global optimization has been well-received in the evolutionary computation community, including four peer-reviewed full-length conference papers (the most recent at GECCO 2014, which had a 33% acceptance rate) and the runner-up overall best paper award out of 445 peer-reviewed papers at the 2013 IEEE Congress on Evolutionary Computation. This work is being published as a monograph under contract with Springer-Verlag, and two new 30-page journal articles on the topic were submitted for peer review in October 2014.

My future work will continue to develop the theory behind controllers that integrate complex sensory functions with high-degree-of-freedom actuation. The potential applications of such a theory are not limited to robotics; it is relevant to any large autonomous system, including those deployed on critical computer networks such as medical, financial, or military defense systems. One particular application interest of mine is integrated home computing that coordinates infrastructure (heating/cooling, security, etc.) with domestic automata (robotic vacuums, lawn mowers, and maybe even a cook or a maid).

Please contact me about potential research positions at alan.lockett@gmail.com.

News and Events

October 21, 2014: My paper Insights from Adversarial Fitness Functions has been accepted for publication at FOGA 2015. A draft is available on my publications page, but it still needs some corrections and a bit of reorganization.
October 1, 2014: With Marijn Stollenga, I just submitted a paper on low-level control of humanoid robots that respects complex constraints by following the natural gradient on a parameterized control space. A draft is available on my publications page.
July 12-16, 2014: At GECCO 2014 to present the paper Model-Optimal Optimization by Solving Bellman Equations in the theory track.
March 12, 2014: Conference paper Model-Optimal Optimization by Solving Bellman Equations accepted at GECCO 2014, which has a 33% acceptance rate.
January 20, 2014: Journal article Evolutionary Annealing: Global Optimization in Measure Spaces officially published in the Journal of Global Optimization (2013 Impact Factor 1.355).
July 6-10, 2013: At GECCO 2013 to present the paper Neuroannealing: Martingale-Driven Optimization for Neural Networks.
June 20-23, 2013: At CEC-2013, Per Kristian Lehre presented my paper Measure-Theoretic Analysis of Performance in Evolutionary Algorithms on my behalf and also accepted the overall runner-up best paper award at the award ceremony. Thanks, Per Kristian! I was unable to attend due to visa issues after moving to Switzerland.
June 21, 2013: My paper Measure-Theoretic Analysis of Performance in Evolutionary Algorithms won runner-up overall best paper at CEC-2013. There were 445 peer-reviewed papers at the conference and only four awards: best paper and runner-up in the student and overall categories. Many thanks to the awards committee at CEC!
May 15, 2013: Moved to Switzerland to join Juergen Schmidhuber's group at the Dalle Molle Institute for Artificial Intelligence Studies on a two-year postdoc grant from the US National Science Foundation's International Research Fellows Program.
April 23, 2013: Conference paper Measure-Theoretic Analysis of Performance in Evolutionary Algorithms accepted as a full paper at CEC-2013.
March 27, 2013: Journal article Evolutionary Annealing: Global Optimization in Measure Spaces accepted for publication in the Journal of Global Optimization (2012 Impact Factor 1.307).
March 14, 2013: Conference paper Neuroannealing: Martingale-Driven Optimization for Neural Networks accepted as a full paper at GECCO-2013.
October 26, 2012: Just received a contract from Springer to expand my thesis, General-Purpose Optimization Through Information Maximization, into a monograph.
August 8, 2012: Just awarded a postdoctoral grant from the US National Science Foundation's International Research Fellows Program, which has a 30% acceptance rate. I'll receive $177,208 to train an iCub humanoid robot to play chess, using deep neural networks to integrate perception and control tasks.
April 18, 2012: Successfully defended my thesis, General-Purpose Optimization Through Information Maximization.

What This Site is About

In research disciplines such as machine learning or robotics, the functions being optimized may be non-smooth, non-convex, multimodal, discontinuous, or discrete. They might operate on binary codes, graphs, computer programs, or even neural networks. In these settings, taking derivatives may not be the best way to solve your problem, if the derivatives even exist. There is a large variety of non-gradient-based optimization methods that can be used in these situations. In the mathematical optimization community, these methods are referred to as heuristic search. This term is a misnomer; algorithms like differential evolution or evolutionary annealing have been proven to converge to the global optimum under certain conditions. These methods are no more heuristic than gradient descent, which can fail catastrophically even on functions such as the Weierstrass function, which is everywhere continuous and nowhere differentiable. By contrast, evolutionary annealing can be configured to locate the optima of the Weierstrass function reliably.
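As an illustrative sketch of the contrast above (my own example, not code from this site), here is a minimal derivative-free optimizer, a (1+1) evolution strategy with a decaying step size, applied to a truncated Weierstrass sum, where gradient information is useless:

```python
import math
import random

def weierstrass(x, a=0.5, b=3, terms=20):
    # Truncated Weierstrass sum. The infinite series is continuous
    # everywhere and, for ab >= 1 (Hardy's condition), differentiable
    # nowhere; here a*b = 1.5, truncated so it can be evaluated.
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(terms))

def one_plus_one_es(f, x0, sigma=0.5, decay=0.997, iters=2000, seed=0):
    # A (1+1) evolution strategy: propose a Gaussian perturbation of the
    # current point and keep it only if it improves f. No derivatives used.
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        y = x + rng.gauss(0.0, sigma)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
        sigma *= decay  # shrink the step size: coarse-to-fine search
    return x, fx

best_x, best_f = one_plus_one_es(weierstrass, x0=0.3)
# For these parameters the deepest valleys sit at odd integers, where
# every cosine term equals -1 and the truncated sum approaches -2.
print(best_x, best_f)
```

This is only a toy: the methods discussed on this site (e.g. evolutionary annealing) are considerably more sophisticated, but the basic point stands: a sampler that never asks for a derivative can still descend a nowhere-differentiable landscape.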

Isaac Newton

Optimization Beyond Gradients is about what to do when you need to optimize functions of a post-Newtonian character. Derivatives were the cutting edge of mathematics back in the 1700s, when the mere suggestion of non-analytic functions was enough to make any Oxford don shake in his stockings and ruffle his powdered coif. Nowadays, functions whose optima can be found analytically are a rare treat. 

Weierstrass Function

This site is about non-gradient methods: how to define them, how to configure them, and how to determine the best optimization method for your problem. In the Research section, you'll find a discussion of how optimization methods form a vector space, how optimization problems can be modeled as a distribution over functions, and how the best optimization method for your problem can be derived from the interplay of these two features. There are also pages describing Martingale Optimization, especially Evolutionary Annealing, a new optimization method for non-smooth, multimodal functions.
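One concrete payoff of the vector-space view is that optimizers can be blended. The sketch below is my own hypothetical illustration, not code from this site: `mix` forms a convex combination of two proposal generators, a global uniform restart and a local Gaussian step, inside a simple keep-the-best loop.

```python
import random

rng = random.Random(1)

def global_restart(x, lo=-10.0, hi=10.0):
    # Proposal generator A: sample uniformly anywhere in the domain.
    return rng.uniform(lo, hi)

def local_step(x, sigma=0.1):
    # Proposal generator B: small Gaussian perturbation of the current point.
    return x + rng.gauss(0.0, sigma)

def mix(gen_a, gen_b, alpha):
    # Convex combination of two optimizers: at each step, draw the next
    # candidate from gen_a with probability alpha, otherwise from gen_b.
    def proposal(x):
        return gen_a(x) if rng.random() < alpha else gen_b(x)
    return proposal

def minimize(f, proposal, x0, iters=2000):
    # Elitist loop: accept a proposal only when it improves the objective.
    x, fx = x0, f(x0)
    for _ in range(iters):
        y = proposal(x)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

f = lambda x: (x - 3.0) ** 2
x, fx = minimize(f, mix(global_restart, local_step, alpha=0.2), x0=-8.0)
print(x, fx)
```

Neither generator alone does well here: pure restarts never refine, and pure local steps from x0 = -8 crawl. The mixture, a single point in the space of optimizers between the two, both escapes and refines; choosing the mixing weight alpha is exactly the kind of question the vector-space formalism is meant to answer.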

The Software section contains links to open-source software that implements the methods and ideas discussed on this site, especially PyEC, a software package for evolutionary optimization in the Python language. Links to published research papers are provided on the Publications page, and you can find out more about who I am on the About page.

Enjoy the site! Send any comments or contact requests to Alan Lockett at alan.lockett@gmail.com.

About Me

Alan J. Lockett

I am looking for an assistant professorship to research the theory of feedback controllers for the control of complex autonomous systems, from smart homes to self-driving cars and humanoid robots. A CV and research statement can be found in the links to the left.

I have published on the theory of global optimization, humanoid robotics, neural networks for perception and control, and opponent modelling in games, and I am expanding my Ph.D. thesis on the theory of global optimization into a book under contract with Springer.

I am currently a postdoctoral fellow at the Dalle Molle Institute for Artificial Intelligence Studies in Lugano, Switzerland, working with Juergen Schmidhuber on a US National Science Foundation postdoc grant. My Ph.D. is from the University of Texas at Austin, where I studied with Risto Miikkulainen. See my About page for contact information and more.