# GECCO '21: Proceedings of the Genetic and Evolutionary Computation Conference

## SESSION: Evolutionary numerical optimization

### Saddle point optimization with approximate minimization oracle

A major approach to saddle point optimization min_{x} max_{y} *f*(*x, y*) is the gradient-based approach popularized by generative adversarial networks (GANs). In contrast, we analyze an alternative approach relying only on an oracle that solves a minimization problem approximately. Our approach locates approximate solutions *x*′ and *y*′ to min_{x′} *f*(*x*′, *y*) and max_{y′} *f*(*x, y*′) at a given point (*x, y*) and updates (*x, y*) toward these approximate solutions (*x*′, *y*′) with a learning rate *η*. On locally strongly convex-concave smooth functions, we derive conditions on *η* under which the scheme exhibits linear convergence to a local saddle point, which reveals a possible shortcoming of recently developed robust adversarial reinforcement learning algorithms. We develop a heuristic approach to adapt *η* in a derivative-free manner and implement it with zeroth-order and first-order minimization algorithms. Numerical experiments show the tightness of the theoretical results as well as the usefulness of the *η* adaptation mechanism.
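The oracle-based update described above can be sketched in a few lines. Everything here is illustrative: `min_oracle` stands in for any approximate minimization routine, and the quadratic demo function with an exact oracle is our own choice, not the paper's setup.

```python
def oracle_saddle_step(f, x, y, min_oracle, eta):
    """One update of the oracle-based scheme: move (x, y) toward the
    oracle's approximate solutions with learning rate eta.

    min_oracle(g, z0) is a hypothetical routine returning an approximate
    minimizer of g started at z0; it stands in for any zeroth- or
    first-order minimization algorithm."""
    x_new = min_oracle(lambda xx: f(xx, y), x)   # x' ~ argmin_x f(x, y)
    y_new = min_oracle(lambda yy: -f(x, yy), y)  # y' ~ argmax_y f(x, y)
    return x + eta * (x_new - x), y + eta * (y_new - y)

# Demo on the strongly convex-concave function f(x, y) = x^2 - y^2,
# whose unique saddle point is (0, 0); here the oracle is exact.
f = lambda x, y: x * x - y * y
exact_oracle = lambda g, z0: 0.0   # argmin of both x^2 and y^2 is 0
x, y = 1.0, -1.0
for _ in range(60):
    x, y = oracle_saddle_step(f, x, y, exact_oracle, eta=0.5)
# (x, y) contracts toward (0, 0) by a factor (1 - eta) per step
```

With an exact oracle the iterate contracts geometrically; the paper's analysis concerns the more delicate case where the oracle is only approximate.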

### Self-referential quality diversity through differential MAP-Elites

Differential MAP-Elites is a novel algorithm that combines the illumination capacity of CVT-MAP-Elites with the continuous-space optimization capacity of Differential Evolution. The algorithm is motivated by the observation that illumination algorithms, and quality-diversity algorithms in general, offer qualitatively new capabilities and applications for evolutionary computation, yet in their original versions are relatively unsophisticated optimizers. The basic Differential MAP-Elites algorithm, introduced here for the first time, simply combines the operators of Differential Evolution with the map structure of CVT-MAP-Elites. Experiments on 25 numerical optimization problems suggest that Differential MAP-Elites clearly outperforms CVT-MAP-Elites, finding better-quality and more diverse solutions.
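As a rough illustration of the idea, the following sketch combines DE/rand/1 mutation and binomial crossover with an elite archive. Note the simplifications: a regular grid replaces the CVT archive of CVT-MAP-Elites, and the sphere fitness and two-coordinate behavior descriptor are arbitrary toy choices, not anything from the paper.

```python
import numpy as np

def differential_map_elites(dim=5, bins=10, iters=3000, F=0.5, CR=0.9, seed=0):
    """Minimal grid-based sketch of the Differential MAP-Elites idea:
    parents are drawn from the archive of elites and varied with
    Differential Evolution operators."""
    rng = np.random.default_rng(seed)
    fitness = lambda x: float(np.sum(x * x))   # toy objective (minimize)
    descriptor = lambda x: x[:2]               # toy 2-D behavior descriptor
    archive = {}                               # grid cell -> (fitness, elite)

    def insert(x):
        c = tuple(np.clip(((descriptor(x) + 1) / 2 * bins).astype(int),
                          0, bins - 1))
        fit = fitness(x)
        if c not in archive or fit < archive[c][0]:
            archive[c] = (fit, x)              # keep the best per cell

    for _ in range(100):                       # random initialization
        insert(rng.uniform(-1, 1, dim))

    for _ in range(iters):
        elites = [e for _, e in archive.values()]
        if len(elites) < 4:
            insert(rng.uniform(-1, 1, dim))
            continue
        idx = rng.choice(len(elites), 4, replace=False)
        target, a, b, c = (elites[i] for i in idx)
        mutant = a + F * (b - c)               # DE/rand/1 mutation
        cross = rng.random(dim) < CR           # binomial crossover mask
        child = np.where(cross, mutant, target)
        insert(np.clip(child, -1, 1))
    return archive

archive = differential_map_elites()
best = min(fit for fit, _ in archive.values())
```

The archive simultaneously accumulates diversity (filled cells) and quality (the best fitness per cell), which is the quality-diversity behavior the abstract describes.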

### Explorative data analysis of time series based algorithm features of CMA-ES variants

In this study, we analyze the behaviour of the well-known CMA-ES by extracting time-series features from its dynamic strategy parameters. An extensive experiment was conducted on twelve CMA-ES variants and 24 test problems taken from the BBOB (Black-Box Optimization Benchmarking) testbed, where we used two different cutoff times to stop those variants. We utilized the tsfresh package to extract the features and performed feature selection using the Boruta algorithm, resulting in 32 features that distinguish either the CMA-ES variants or the problems. After measuring the number of predefined targets reached by those variants, we use the features to predict those measured values on each test problem. Our analysis shows that the features can classify the CMA-ES variants or the function groups decently, and show potential for predicting the performance of those variants. We conducted a hierarchical clustering analysis on the test problems and noticed a drastic change in the clustering outcome when comparing the longer cutoff time to the shorter one, indicating a substantial change in the search behaviour of the algorithm. In general, we found that with longer time series, the predictive power of the time-series features increases.
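To make the pipeline concrete, here is a tiny hand-rolled analogue of the feature-extraction step: a few time-series features computed from a single strategy-parameter trajectory (for instance, the step size per generation). These features and their names are our own illustrative stand-ins for the hundreds that tsfresh extracts.

```python
import numpy as np

def trajectory_features(series):
    """A few illustrative time-series features of a strategy-parameter
    trajectory, e.g. the CMA-ES step size per generation."""
    s = np.asarray(series, dtype=float)
    t = np.arange(len(s))
    # slope of log(series): geometric decay/growth rate of the parameter
    slope = np.polyfit(t, np.log(np.maximum(s, 1e-300)), 1)[0]
    return {
        "mean": float(s.mean()),
        "std": float(s.std()),
        "log_slope": float(slope),
        # count of strict local maxima in the trajectory
        "n_peaks": int(np.sum((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))),
    }

# A geometrically decaying step-size trajectory, as typically observed
# on well-behaved unimodal problems:
feats = trajectory_features([1.0 * 0.9**g for g in range(100)])
```

Feature vectors of this kind, one per run, are what the classifiers and performance predictors in the study operate on.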

### Augmented Lagrangian, penalty techniques and surrogate modeling for constrained optimization with CMA-ES

In this paper, we investigate a non-elitist Evolution Strategy designed to handle
black-box constraints by an adaptive Augmented Lagrangian penalty approach, AL-(*μ*/*μ_{w}*, *λ*)-CMA-ES, on problems with up to 28 constraints. Based on stability and performance
observations, we propose an improved default parameter setting. We exhibit failure
cases of the Augmented Lagrangian technique and show how surrogate modeling of the
constraints can overcome some difficulties. Several variants of AL-CMA-ES are compared
on a set of nonlinear constrained problems from the literature. Simple adaptive penalty
techniques serve as a baseline for comparison.
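For readers unfamiliar with the technique, the following sketch shows one common textbook formulation of the augmented-Lagrangian penalty for inequality constraints g_i(x) ≤ 0; the exact formulation and the coefficient-update rules used by AL-CMA-ES may differ in detail.

```python
import numpy as np

def augmented_lagrangian(f, gs, lam, omega):
    """Build an unconstrained fitness from objective f and constraints
    g_i(x) <= 0, with multipliers lam[i] and penalty factors omega[i].
    This is a standard textbook variant, given for illustration."""
    def AL(x):
        val = f(x)
        for g, l, w in zip(gs, lam, omega):
            gx = g(x)
            if gx >= -l / w:                 # constraint active or violated
                val += l * gx + 0.5 * w * gx * gx
            else:                            # far inside the feasible region:
                val -= 0.5 * l * l / w       # penalty term is constant
        return val
    return AL

# Sphere objective with one linear constraint x_0 >= 1, i.e. 1 - x_0 <= 0;
# the constrained optimum is x = (1, 0) with exact multiplier lambda = 2.
f = lambda x: float(np.sum(np.asarray(x) ** 2))
g = lambda x: 1.0 - x[0]
AL = augmented_lagrangian(f, [g], lam=[2.0], omega=[10.0])
```

With the exact multiplier, the unconstrained minimizer of `AL` coincides with the constrained optimum, which is what the adaptive multiplier updates aim to achieve.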

### Interaction between model and its evolution control in surrogate-assisted CMA evolution strategy

Surrogate regression models have been shown to be a valuable technique in evolutionary optimization for saving evaluations of expensive black-box objective functions. Each surrogate modelling method has two complementary components: the employed model and the control of when to evaluate the model and when the true objective function, known as evolution control. The two are often tightly interconnected, which makes it difficult to understand the impact of each component on algorithm performance. To contribute to such understanding, we analyse what constitutes the evolution control of three surrogate-assisted versions of the state-of-the-art algorithm for continuous black-box optimization --- the Covariance Matrix Adaptation Evolution Strategy. We implement and empirically compare all possible combinations of the regression models employed in those methods with the three evolution controls encountered in them. An experimental investigation of all those combinations allowed us to assess the influence of the models and their evolution control separately. The experiments are performed on the noiseless and noisy benchmarks of the Comparing-Continuous-Optimisers platform and a real-world simulation benchmark, all in the expensive scenario, where only a small budget of evaluations is available.
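One generic individual-based evolution control can be sketched as follows: rank the candidates with the surrogate and spend true evaluations only on the most promising ones. This is an illustrative scheme, not one of the three specific controls compared in the paper; all names and the noisy toy model are our own.

```python
import numpy as np

def preselection_control(candidates, surrogate, true_f, k):
    """Rank candidates by surrogate prediction (lower = better) and
    evaluate only the k most promising ones with the true objective.
    Returns a dict mapping candidate index to its true fitness."""
    preds = np.array([surrogate(c) for c in candidates])
    chosen = np.argsort(preds)[:k]
    return {int(i): true_f(candidates[i]) for i in chosen}

rng = np.random.default_rng(1)
pop = [rng.uniform(-1, 1, 3) for _ in range(20)]       # 20 offspring
sphere = lambda x: float(np.sum(x ** 2))               # expensive truth
noisy_model = lambda x: sphere(x) + 0.01 * rng.standard_normal()
evals = preselection_control(pop, noisy_model, sphere, k=5)
```

Here only 5 of 20 offspring cost a true evaluation; the paper's point is that such control choices interact strongly with the choice of regression model.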

### A matrix adaptation evolution strategy for optimization on general quadratic manifolds

An evolution strategy design is presented that allows for evolution on general quadratic manifolds. That is, it covers elliptic, parabolic, and hyperbolic equality constraints. The peculiarity of the presented algorithm design is that it is an interior point method: it evaluates the objective function only for feasible search parameter vectors and evolves on the nonlinear constraint manifold. This is achieved by a closed-form transformation of an individual's parameter vector, in contrast to iterative repair mechanisms. Results of different experiments are presented. A test problem consisting of a spherical objective function and a single hyperbolic/parabolic equality constraint is used. It is designed to be scalable in the dimension and is used to compare the performance of the developed algorithm with other optimization methods supporting constraints. The experiments show the effectiveness of the proposed algorithm on the considered problems.
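The closed-form transformation idea can be illustrated on the simplest case, an elliptic constraint x^T D x = 1, where radial scaling maps any suitable candidate onto the manifold in one step; the paper's transformation also covers parabolic and hyperbolic manifolds, which this toy projection does not.

```python
import numpy as np

def project_to_ellipsoid(z, D):
    """Closed-form mapping of a parameter vector z onto the quadratic
    manifold x^T D x = 1 (elliptic equality constraint) by radial
    scaling. Illustrates mapping every candidate onto the constraint
    before evaluation, instead of iteratively repairing it."""
    q = float(z @ D @ z)
    assert q > 0, "z must satisfy z^T D z > 0 for radial scaling"
    return z / np.sqrt(q)

D = np.diag([1.0, 4.0])                    # axis-aligned ellipse
x = project_to_ellipsoid(np.array([3.0, 3.0]), D)
# x satisfies x^T D x = 1 up to floating-point error
```

Because the transformation is exact, the objective is only ever evaluated at feasible points, which is the interior-point property the abstract highlights.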

### Towards exploratory landscape analysis for large-scale optimization: a dimensionality reduction framework

Although exploratory landscape analysis (ELA) has shown its effectiveness in various applications, most previous studies focused only on low- and moderate-dimensional problems. Thus, little is known about the scalability of the ELA approach to large-scale optimization. In this context, this paper first analyzes the computational cost of the features in the flacco package. Our results reveal that two important feature classes (ela_level and ela_meta) cannot be applied to large-scale optimization due to their high computational cost. To improve the scalability of the ELA approach, this paper proposes a dimensionality reduction framework that computes features in a space of lower dimensionality than the original solution space. We demonstrate that the proposed framework can drastically reduce the computation time of ela_level and ela_meta for large dimensions. In addition, the proposed framework can make the cell-mapping feature classes scalable to large-scale optimization. Our results also show that features computed by the proposed framework are beneficial for predicting the high-level properties of the 24 large-scale BBOB functions.
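The framework's core idea, computing features only after a dimensionality reduction, might be sketched as follows. The Gaussian random projection and the linear-model R² feature are illustrative stand-ins for the paper's actual reduction method and the flacco feature classes.

```python
import numpy as np

def reduced_space_features(X, y, k=2, seed=0):
    """Project n samples X (n x d) to k dimensions, then compute a cheap
    ela_meta-style feature there: the training R^2 of a linear model.
    Both the projection and the feature are illustrative choices."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    P = rng.standard_normal((d, k)) / np.sqrt(k)  # random projection matrix
    Z = X @ P                                     # samples in reduced space
    A = np.column_stack([Z, np.ones(n)])          # linear model w/ intercept
    coef = np.linalg.lstsq(A, y, rcond=None)[0]
    ss_res = float(np.sum((y - A @ coef) ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return {"lin_r2_reduced": 1.0 - ss_res / ss_tot}

rng = np.random.default_rng(1)
X = rng.uniform(-5, 5, (200, 50))                 # 50-dimensional samples
y = np.sum(X ** 2, axis=1)                        # sphere function values
feats = reduced_space_features(X, y)
```

The cost of the model fit scales with k rather than the original dimension d, which is what makes expensive feature classes such as ela_meta tractable for large d.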