
Beyond Gradients

Alan J. Lockett's Research Pages

Optimizers as Vectors

The space $\mathcal{PF}$ of optimizers and its subset $\mathcal{O}_{\mathrm{tr}}$ were introduced in Optimizer Spaces. Each of these sets is a closed, convex subset of a normed vector space. The definition of this space depends on the concept of a finite signed measure. A signed measure is like a probability measure, except that it can assign negative values to some (or all) sets, and the measure of the whole space can be any real number, not just 1. The space of all finite signed measures is a Banach space, that is, a complete normed vector space, under the total variation norm, which totals the absolute mass that a measure assigns across the positive and negative parts of the space.
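As a toy illustration: on a two-point space $X = \{a, b\}$, the set function $\mu$ with $\mu(\{a\}) = 2$ and $\mu(\{b\}) = -\tfrac{1}{2}$ is a finite signed measure, and its total variation norm is $$||\mu||_{\mathcal{M}} = |\mu(\{a\})| + |\mu(\{b\})| = \tfrac{5}{2},$$ whereas any probability measure on $X$ has total variation norm exactly 1.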

To obtain a vector space of pseudo-optimizers, the space $\mathcal{PF}$ can be expanded to include functions that return finite signed measures rather than just probability measures. If $\mathcal{M}[X]$ is the space of finite signed measures on $\left(X,\mathcal{B}_\tau\right)$, then a starting point for finding a normed vector space containing $\mathcal{PF}$ is: $$\mathcal{MF}_0 = \left\{\mathcal{G} : \mathcal{T}[X] \times \mathbb{R}^X \to \mathcal{M}[X]\right\}.$$ The space $\mathcal{MF}_0$ contains $\mathcal{PF}$ and is a vector space under pointwise vector addition and pointwise scalar multiplication. These operations are defined by $$\left(\alpha\mathcal{G}_1 + \mathcal{G}_2\right)[t,f](A) = \alpha\left(\mathcal{G}_1[t,f](A)\right) + \mathcal{G}_2[t,f](A)$$ for every measurable set $A$, with $\alpha\in\mathbb{R}$ and $\mathcal{G}_1,\mathcal{G}_2 \in \mathcal{MF}_0$.
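These pointwise operations are directly implementable. The sketch below is a minimal illustration of my own, not from the text above: all names are hypothetical, and the search domain is restricted to a small finite set so that a signed measure is just an array of per-point masses. It builds new pseudo-optimizers algebraically from two toy optimizers:

```python
import numpy as np

# Minimal sketch: a finite search domain X = {0, ..., 4}, so a finite signed
# measure is just an array of per-point masses. A pseudo-optimizer maps a
# trajectory t (points evaluated so far) and an objective f to such an array.

N = 5

def uniform_search(t, f):
    # A probability-valued optimizer: propose the next point uniformly.
    return np.full(N, 1.0 / N)

def greedy_search(t, f):
    # A probability-valued optimizer: all mass on the best point seen so far.
    if not t:
        return np.full(N, 1.0 / N)
    m = np.zeros(N)
    m[min(t, key=f)] = 1.0
    return m

# The vector-space operations, defined pointwise exactly as in the equation:
def scale(alpha, G):
    return lambda t, f: alpha * G(t, f)

def add(G1, G2):
    return lambda t, f: G1(t, f) + G2(t, f)

# A convex combination stays inside PF (masses still sum to 1) ...
mixed = add(scale(0.5, uniform_search), scale(0.5, greedy_search))

# ... while a difference leaves PF but remains in the larger space MF_0:
diff = add(uniform_search, scale(-1.0, greedy_search))

f = lambda x: (x - 3) ** 2
print(mixed([0, 4], f))  # [0.1 0.1 0.1 0.1 0.6]
print(diff([0, 4], f))   # [0.2 0.2 0.2 0.2 -0.8]; total mass is 0
```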

A norm $||\mathcal{G}||$ provides a notion of absolute vector magnitude that induces a distance metric $$d(\mathcal{G}_1,\mathcal{G}_2) = ||\mathcal{G}_1 - \mathcal{G}_2||.$$ One candidate for the norm is $$||\mathcal{G}||_{\mathcal{MF}} = \sup_{t\in\mathcal{T}[X],f\in\mathbb{R}^X} ||\mathcal{G}[t,f]||_{\mathcal{M}},$$ where $\sup$ denotes the supremum (least upper bound) over all trajectories and objective functions and $||\mathcal{G}[t,f]||_{\mathcal{M}}$ is the total variation norm on $\mathcal{M}[X]$. Essentially, $||\mathcal{G}||_{\mathcal{MF}}$ is the largest total mass that $\mathcal{G}$ can assign for any trajectory and objective. As such, it is not always a norm on $\mathcal{MF}_0$: a norm must be finite, and in continuous search domains there may be a sequence of trajectories and objectives along which $||\mathcal{G}[t,f]||_{\mathcal{M}}$ grows without bound, making the supremum infinite. This problem can be solved by restricting to the subset of $\mathcal{MF}_0$ on which the proposed norm is finite, $$\mathcal{MF} = \left\{\mathcal{G}\in\mathcal{MF}_0 : ||\mathcal{G}||_{\mathcal{MF}} < \infty\right\}.$$ Notice that for any optimizer in $\mathcal{PF}$, $||\mathcal{G}||_{\mathcal{MF}} = 1$, since probability measures have total variation 1, so trivially $\mathcal{PF}\subseteq\mathcal{MF}$. Furthermore, $\mathcal{MF}$ is a proper vector subspace of $\mathcal{MF}_0$, and so $\mathcal{MF}$ is a normed vector space under the proposed norm.
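On a finite domain this distance can actually be estimated. Continuing the sketch above (it reuses uniform_search and greedy_search from it), the fragment below approximates $d(\mathcal{G}_1,\mathcal{G}_2)$ by taking the maximum total variation over a small sample of trajectory and objective pairs; the true supremum ranges over all such pairs, so this yields only a lower bound, not the exact norm:

```python
import itertools
import numpy as np

# On a finite domain the total variation norm is the sum of the absolute
# per-point masses.
def tv_norm(measure):
    return np.abs(measure).sum()

# Approximate d(G1, G2) = ||G1 - G2||_MF by maximizing over sampled (t, f)
# pairs. The true supremum ranges over *all* trajectories and objectives,
# so this is a lower bound on the distance.
def approx_distance(G1, G2, pairs):
    return max(tv_norm(G1(t, f) - G2(t, f)) for t, f in pairs)

objectives = [lambda x: x, lambda x: -x, lambda x: (x - 2) ** 2]
trajectories = [[]] + [list(p) for p in itertools.permutations(range(5), 2)]
pairs = [(t, f) for t in trajectories for f in objectives]

print(approx_distance(uniform_search, greedy_search, pairs))  # 1.6 here
```

For probability-valued optimizers the distance can never exceed 2, the value attained when the two optimizers place all of their mass on disjoint regions for some trajectory and objective.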

This conclusion may seem rather abstract, but it has important practical consequences. For example, between any two computable optimizers there is a full spectrum of convex combinations, each of which is a well-defined optimizer and is itself computable. There is also an absolute measure of the difference between any two optimizers, say, between conjugate gradient descent and a genetic algorithm.
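The computability claim has a simple operational reading. In the illustrative sketch below (my own toy setting, matching the examples above), sampling from the convex combination $\alpha\mathcal{G}_1 + (1-\alpha)\mathcal{G}_2$ amounts to flipping an $\alpha$-weighted coin and delegating the proposal to one of the two optimizers, so any mixture of two computable optimizers can be run directly:

```python
import random

N = 5

def sample_uniform(t, f):
    # Sampling version of the uniform optimizer.
    return random.randrange(N)

def sample_greedy(t, f):
    # Sampling version of the greedy optimizer: repropose the best so far.
    return min(t, key=f) if t else random.randrange(N)

def sample_mixture(alpha, G1, G2, t, f):
    """Draw the next point from (alpha * G1 + (1 - alpha) * G2)[t, f]."""
    return G1(t, f) if random.random() < alpha else G2(t, f)

# Running the mixture: 30% uniform exploration, 70% greedy exploitation,
# which is recognizably an epsilon-greedy strategy with epsilon = 0.3.
f = lambda x: (x - 3) ** 2
t = []
for _ in range(10):
    t.append(sample_mixture(0.3, sample_uniform, sample_greedy, t, f))
print(t)
```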

The vector space $\mathcal{MF}$ provides pointwise addition and pointwise scalar multiplication, operations that allow new optimizers to be created algebraically. Several other operations on optimizers are possible as well, and these operations can be used to give a component-based description of evolutionary algorithms.
