Benchmark-Driven Algorithm Configuration Applied to Parallel Model-Based Optimization
This paper introduces a benchmarking framework that allows rigorous
evaluation of parallel model-based optimizers for expensive functions.
The framework relates the estimated cost of parallel function
evaluations on real-world problems to known sets of test problems.
Such real-world problems are not always readily available (e.g., due to
confidentiality constraints or proprietary software).
Therefore, new test problems are created by Gaussian process simulation.
The proposed framework is applied in an extensive benchmark study to
compare multiple state-of-the-art parallel optimizers with a novel
model-based algorithm, which combines an explorative search for global
model quality with parallel local searches to increase function-evaluation
efficiency.
The benchmarking framework is used to systematically configure
well-performing batch sizes for parallel algorithms based on landscape
properties.
Furthermore, we introduce a proof-of-concept for a novel automatic batch
size configuration method.
The predictive quality of the batch size configuration is evaluated on a
large set of test functions and on the functions generated by Gaussian
process simulation.
The introduced algorithm outperforms multiple state-of-the-art
optimizers, especially on multi-modal problems.
Additionally, it proves to be particularly robust across various problem
landscapes and performs well with all tested batch sizes.
Consequently, it is well-suited for black-box optimization problems.
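The idea of generating test problems by Gaussian process simulation can be sketched as follows. This is an illustrative example only, not the paper's exact setup: the RBF kernel, length scale, and one-dimensional domain are assumptions, and the hypothetical function `simulate_gp_test_function` simply draws one realization of a zero-mean Gaussian process, which can then be interpolated to serve as a cheap stand-in for an expensive objective.

```python
import numpy as np

def simulate_gp_test_function(n_points=200, length_scale=0.2, seed=0):
    """Draw one sample path of a zero-mean Gaussian process with an
    RBF (squared-exponential) kernel on [0, 1].

    Interpolating the returned (x, y) pairs yields a synthetic test
    function with a landscape controlled by the kernel parameters.
    """
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_points)

    # Pairwise distances and RBF covariance matrix.
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / length_scale) ** 2)

    # Small jitter keeps the Cholesky factorization numerically stable.
    L = np.linalg.cholesky(K + 1e-10 * np.eye(n_points))

    # A GP sample is L @ z for standard-normal z.
    y = L @ rng.standard_normal(n_points)
    return x, y

x, y = simulate_gp_test_function()
```

Smaller length scales produce more rugged, multi-modal sample paths, which is one way landscape properties can be varied systematically when building such test sets.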