When an optimizer is run on an objective function, it collects information about the objective at the points it evaluates. This information can be used to build a model of the objective that becomes more accurate as more points are evaluated. Such a model can be implemented as a type of martingale: a stochastic process that iteratively models a random quantity by filtering through a source of sequentially increasing information. An optimization method that does so is called a martingale optimizer.
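The idea of a model that is refined by sequentially increasing information can be illustrated with a minimal sketch. The class below is an assumption-laden toy, not the thesis's construction: it retains every evaluated point and predicts the objective at a query point by the value at the nearest evaluated point, so each new evaluation adds information and nothing is discarded.

```python
# Illustrative sketch (not the thesis's construction): a model of the
# objective that is refined as more points are evaluated.
class SequentialModel:
    def __init__(self):
        self.history = []          # all (point, value) pairs seen so far

    def observe(self, x, fx):
        # incorporate a new evaluation; old information is never discarded
        self.history.append((x, fx))

    def predict(self, x):
        # nearest-neighbor estimate given the information gathered so far
        return min(self.history, key=lambda p: abs(p[0] - x))[1]

f = lambda x: (x - 0.3) ** 2
model = SequentialModel()
for x in [0.0, 0.5, 1.0, 0.25]:
    model.observe(x, f(x))
```

As more points are observed, the nearest evaluated point to any query moves closer to it, so the model's predictions improve monotonically in this informal sense.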
Intuitively, whereas many other methods discard previously evaluated points, a martingale optimizer retains every evaluated point and attempts to utilize all of the information gleaned from evaluating the objective function. Many other global optimization methods rely on the theory of Markov chains to achieve global asymptotic convergence; martingale optimizers rely on the martingale convergence theorems, specifically Lévy's zero-one law.
Evolutionary annealing is a martingale optimizer that constructs an approximate martingale in order to sample the annealing distributions familiar from simulated annealing. Evolutionary annealing stores the points it has evaluated in a binary partition tree that segments the search domain into disjoint sets. This partition tree is used to propose new points to evaluate through a selection and variation scheme derived from evolutionary methods. The details of evolutionary annealing comprise Chapters 11 and 12 of my thesis.
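The partition-tree mechanics can be sketched as follows, under heavy simplifying assumptions: a one-dimensional domain [0, 1], leaves split at the midpoint between a leaf's stored point and a newly accepted point, Boltzmann-weighted selection over leaves, and Gaussian variation. The class and method names are illustrative, not the thesis's implementation.

```python
import math
import random

class Leaf:
    # one disjoint interval of the domain, with its evaluated point and score
    def __init__(self, lo, hi, point, score):
        self.lo, self.hi, self.point, self.score = lo, hi, point, score

class PartitionTree:
    def __init__(self, f):
        self.f = f
        x0 = random.random()
        self.leaves = [Leaf(0.0, 1.0, x0, f(x0))]

    def step(self, temperature):
        # selection: weight leaves by annealed fitness (minimization)
        weights = [math.exp(-leaf.score / temperature) for leaf in self.leaves]
        leaf = random.choices(self.leaves, weights)[0]
        # variation: perturb the selected leaf's point, clipped to the domain
        x = min(max(random.gauss(leaf.point, leaf.hi - leaf.lo), 0.0), 1.0)
        # split the leaf containing the new point into two disjoint intervals
        target = next(l for l in self.leaves if l.lo <= x <= l.hi)
        mid = 0.5 * (target.point + x)
        lo_pt, hi_pt = sorted([(target.point, target.score), (x, self.f(x))])
        self.leaves.remove(target)
        self.leaves.append(Leaf(target.lo, mid, *lo_pt))
        self.leaves.append(Leaf(mid, target.hi, *hi_pt))

    def best(self):
        return min(self.leaves, key=lambda l: l.score).point

random.seed(0)
tree = PartitionTree(lambda x: (x - 0.3) ** 2)
for _ in range(100):
    tree.step(temperature=0.5)
```

Every step replaces one leaf with two, so the leaves always partition the domain into disjoint intervals, and every evaluated point remains stored in exactly one leaf.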
Evolutionary Annealing Convergence
Under certain assumptions, evolutionary annealing asymptotically converges to the global optimum. These assumptions essentially require the search domain and the objective function to be non-pathological. Additionally, the annealing process must not proceed too quickly, or else the globally optimal regions can slip through the algorithm's grasp. The proof demonstrates that under these conditions, evolutionary annealing is an approximate Doob martingale that converges to a limit distribution that samples the true global optima with probability one. The current version of the proof can be found in Chapter 11 of my thesis.
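The requirement that annealing not proceed too quickly can be illustrated numerically. The schedules below are assumptions for this sketch (echoing the cooling schedules of simulated annealing theory), not the conditions from the proof: slow logarithmic cooling keeps some selection probability on every evaluated point, whereas fast geometric cooling collapses the annealed distribution onto the current best point almost immediately.

```python
import math

def log_cooling(n):
    # slow schedule: temperature decays logarithmically in the step count
    return 1.0 / math.log(n + math.e)

def geometric_cooling(n, rate=0.9):
    # fast schedule: temperature decays geometrically
    return rate ** n

def annealed_weights(scores, temperature):
    # Boltzmann selection weights over evaluated points (minimization)
    raw = [math.exp(-s / temperature) for s in scores]
    total = sum(raw)
    return [w / total for w in raw]

scores = [0.0, 0.5, 1.0]
slow = annealed_weights(scores, log_cooling(100))       # still spread out
fast = annealed_weights(scores, geometric_cooling(100)) # fully collapsed
```

After 100 steps, the geometric schedule assigns essentially all probability to the best point, so a region whose best evaluated point happens to score poorly can never be revisited; the logarithmic schedule still assigns every point nonzero probability.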
Neuroannealing

Neuroannealing is the application of evolutionary annealing to the space of neural networks. Neural networks encode a function from inputs to outputs and can be used to solve control problems. In order to apply evolutionary annealing to neural networks, neuroannealing uses a specialized partitioning method to generate the partition tree. The principles of neuroevolution are applied in the variation phase in order to generate new networks. Neuroannealing effectively learns controllers for complex but moderately-sized tasks. The details can be found in Chapter 13 of my thesis.
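A generic neuroevolution-style variation operator can be sketched as below. This is an illustrative stand-in, not neuroannealing's actual operators: a network is represented sparsely as a dict mapping (source, target) edges to weights, and variation perturbs weights with Gaussian noise while occasionally adding a new connection, in the spirit of structural mutation in neuroevolution methods such as NEAT.

```python
import random

def mutate(network, n_nodes, sigma=0.1, p_add=0.25):
    # weight mutation: jitter every existing connection weight
    child = {edge: w + random.gauss(0.0, sigma) for edge, w in network.items()}
    if random.random() < p_add:
        # structural mutation: occasionally add a new random connection
        edge = (random.randrange(n_nodes), random.randrange(n_nodes))
        child.setdefault(edge, random.gauss(0.0, sigma))
    return child

random.seed(0)
parent = {(0, 2): 0.5, (1, 2): -1.2, (2, 3): 0.8}
child = mutate(parent, n_nodes=4)
```

Because mutation only perturbs or adds connections, every edge of the parent survives in the child, which keeps offspring close to their parents in network space.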