Towards a theory-guided benchmarking suite for discrete black-box optimization heuristics

Abstract: Theoretical and empirical research on evolutionary computation methods complement each other by providing two fundamentally different approaches towards a better understanding of black-box optimization heuristics. In discrete optimization, the two streams have developed rather independently of each other, but today we observe an increasing interest in reconciling these two sub-branches. In continuous optimization, the COCO (COmparing Continuous Optimisers) benchmarking suite has established itself as an important platform that theoreticians and practitioners use to exchange research ideas and questions. No widely accepted equivalent exists in the research domain of discrete black-box optimization. Marking an important step towards filling this gap, we adjust the COCO software to pseudo-Boolean optimization problems, and obtain from this a benchmarking environment that allows a fine-grained empirical analysis of discrete black-box heuristics. In this work we demonstrate how this test bed can be used to profile the performance of evolutionary algorithms. More concretely, we study the optimization behavior of several (1 + λ) EA variants on the two benchmark problems OneMax and LeadingOnes. This comparison motivates a refined analysis of the optimization time of the (1 + λ) EA on LeadingOnes.
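For readers unfamiliar with the setting, the following minimal Python sketch illustrates the algorithm family and benchmark problems named in the abstract. It uses standard bit mutation with rate 1/n and elitist plus-selection, which are common textbook choices; the parameter names, defaults, and budget handling are illustrative assumptions, not necessarily the exact variants benchmarked in the paper.

```python
import random

def one_max(x):
    # OneMax: number of 1-bits in the bit string
    return sum(x)

def leading_ones(x):
    # LeadingOnes: length of the longest prefix consisting only of 1-bits
    k = 0
    for bit in x:
        if bit == 0:
            break
        k += 1
    return k

def one_plus_lambda_ea(f, n=20, lam=4, budget=10_000, seed=0):
    """Minimal (1 + lambda) EA with standard bit mutation at rate 1/n.

    Returns (best fitness reached, evaluations used). Names and defaults
    are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    f_parent = f(parent)
    evals = 1
    # both OneMax and LeadingOnes have optimum value n
    while evals < budget and f_parent < n:
        best_child, f_best = None, -1
        for _ in range(lam):  # create lambda offspring independently
            child = [1 - b if rng.random() < 1.0 / n else b for b in parent]
            f_child = f(child)
            evals += 1
            if f_child > f_best:
                best_child, f_best = child, f_child
        if f_best >= f_parent:  # elitist "plus"-selection, accepting ties
            parent, f_parent = best_child, f_best
    return f_parent, evals
```

Counting evaluations rather than generations, as above, is the convention that makes runtimes of (1 + λ) EA variants with different λ comparable on a benchmarking platform.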
Complete list of metadata
Contributor : Carola Doerr <>
Submitted on : Tuesday, November 13, 2018 - 3:54:36 PM
Last modification on : Friday, July 5, 2019 - 3:26:03 PM

Carola Doerr, Furong Ye, Sander van Rijn, Hao Wang, Thomas Bäck. Towards a theory-guided benchmarking suite for discrete black-box optimization heuristics. GECCO '18 - Genetic and Evolutionary Computation Conference, Jul 2018, Kyoto, Japan. pp.951-958, ⟨10.1145/3205455.3205621⟩. ⟨hal-01921076⟩
