Self-Guided Evolution Strategies with Historical Estimated Gradients (IJCAI 2020)

This blog post summarizes our work on self-guided evolution strategies with historical estimated gradients, presented at IJCAI 2020.

In our IJCAI 2020 paper, Self-Guided Evolution Strategies with Historical Estimated Gradients [1], a joint work with Fei-yu Liu and Chao Qian, we aim to improve the efficiency of evolution strategies (ES). ES is a class of black-box optimization algorithms that has been applied to a wide range of problems where only zeroth-order information (i.e., function values) is available. Typically, ES estimates the gradient from random search directions and then performs gradient-based updates. To improve the sample efficiency of ES, we develop an adaptive method that leverages historical gradient estimates to guide the search; see Figure 1 for the algorithm procedure.
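To make the vanilla ES baseline concrete, below is a minimal sketch of its gradient estimator with antithetic (mirrored) Gaussian sampling. The function and parameter names (es_gradient, sigma, n_pairs) are ours for illustration, not from the paper:

```python
import numpy as np

def es_gradient(f, x, sigma=0.1, n_pairs=10, rng=None):
    """Estimate the gradient of f at x via antithetic Gaussian sampling."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    grad = np.zeros(d)
    for _ in range(n_pairs):
        u = rng.standard_normal(d)  # random search direction
        # mirrored evaluations f(x + sigma*u) and f(x - sigma*u) reduce variance
        grad += (f(x + sigma * u) - f(x - sigma * u)) / (2.0 * sigma) * u
    return grad / n_pairs

# Usage: one gradient-descent step on a toy quadratic
f = lambda x: float(np.sum(x ** 2))
x = np.ones(5)
x = x - 0.1 * es_gradient(f, x)
```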

Figure 1. Algorithm of self-guided evolution strategies (SGES).

Concretely, our method constructs a low-dimensional subspace from historical estimated gradients, samples search directions from it, and adaptively adjusts the importance of this subspace according to fitness. For instance, if directions sampled from the subspace lead to larger reductions in function value, we increase the sampling weight of the subspace.
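The sketch below illustrates this idea under simplifying assumptions: a buffer of recent gradient estimates spans the subspace (orthonormalized via QR), a probability alpha controls how often directions are drawn from it, and alpha is adjusted by comparing average fitness reductions. The names and constants (alpha, step, bounds) are illustrative placeholders, not the exact update rule in the paper:

```python
import numpy as np

def sample_direction(grad_buffer, alpha, d, rng):
    """Draw a search direction from the gradient subspace with probability
    alpha, otherwise from the full d-dimensional space."""
    if grad_buffer and rng.random() < alpha:
        # orthonormal basis of the subspace spanned by recent gradient estimates
        Q, _ = np.linalg.qr(np.stack(grad_buffer, axis=1))
        u = Q @ rng.standard_normal(Q.shape[1])  # Gaussian within the subspace
    else:
        u = rng.standard_normal(d)               # isotropic Gaussian
    return u / np.linalg.norm(u)

def update_alpha(alpha, mean_drop_subspace, mean_drop_full,
                 step=1.6, bounds=(0.1, 0.9)):
    """Increase the subspace weight when subspace directions yield larger
    average fitness reductions; decrease it otherwise."""
    alpha = alpha * step if mean_drop_subspace > mean_drop_full else alpha / step
    return float(np.clip(alpha, *bounds))
```

In a full SGES loop, grad_buffer would be refreshed with each newly estimated gradient, so the subspace tracks the recent optimization trajectory.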

Our method is more computationally efficient than previous methods [2][3], and its performance on benchmark tasks is also competitive; see Figure 2.

Figure 2. Performance of various methods on the benchmark black-box functions.

[1] Fei-yu Liu, Ziniu Li, and Chao Qian. "Self-guided evolution strategies with historical estimated gradients." IJCAI 2020.

[2] Krzysztof Choromanski et al. "From complexity to simplicity: Adaptive ES-active subspaces for blackbox optimization." NeurIPS 2019.

[3] Nikolaus Hansen. "The CMA evolution strategy: A tutorial." arXiv:1604.00772, 2016.