This blog post summarizes our work on self-guided evolution strategies with historical estimated gradients, presented at IJCAI 2020.
In our IJCAI 2020 paper "Self-Guided Evolution Strategies with Historical Estimated Gradients", which is joint work with Fei-yu Liu and Chao Qian, we aim to improve the efficiency of evolution strategies (ES). ES is a class of black-box optimization algorithms that has been applied to a variety of problems where only zeroth-order information (i.e., function values) is available. Typically, ES estimates the gradient from random search directions and then performs gradient-based updates. To improve the efficiency of ES, we develop an adaptive method that leverages historical gradient information to guide the optimization dynamics; see Figure 1 for the algorithm procedure.
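To make the baseline concrete, here is a minimal sketch of vanilla ES gradient estimation with antithetic Gaussian sampling. The function name, step sizes, and sample counts are our own illustrative choices, not values from the paper.

```python
import numpy as np

def es_gradient(f, x, sigma=0.1, n_pairs=10, rng=None):
    """Estimate the gradient of f at x from random directions
    (antithetic sampling): an illustrative sketch, not the paper's code."""
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    grad = np.zeros(d)
    for _ in range(n_pairs):
        u = rng.standard_normal(d)
        # Finite difference of function values along a random direction,
        # averaged over antithetic pairs (x + sigma*u, x - sigma*u).
        grad += (f(x + sigma * u) - f(x - sigma * u)) / (2 * sigma) * u
    return grad / n_pairs

# Usage: plain gradient descent on the ES estimate of a quadratic.
f = lambda x: np.sum(x ** 2)
x = np.ones(5)
for _ in range(200):
    x = x - 0.05 * es_gradient(f, x)
```

For a quadratic, each antithetic pair contributes `2 * (x @ u) * u`, whose expectation is exactly the true gradient `2x`, so the iterate converges toward the minimizer despite the sampling noise.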
Concretely, our method constructs a low-dimensional subspace from historical estimated gradients to sample search directions, and adaptively adjusts the importance of this subspace according to the fitness. For instance, if sampling from the subspace yields larger reductions in function value, we increase the weight of this subspace for sampling.
Fei-yu Liu, Ziniu Li, and Chao Qian. "Self-guided evolution strategies with historical estimated gradients." IJCAI 2020.
Krzysztof Choromanski, et al. "From complexity to simplicity: Adaptive ES-active subspaces for blackbox optimization." NeurIPS 2019.
Nikolaus Hansen. "The CMA evolution strategy: A tutorial." arXiv:1604.00772 (2016).