How to Exploit Optimization Experience? Revisiting Evolutionary
Sequential Transfer Optimization: Part A - Benchmark Problems
Abstract
Evolutionary sequential transfer optimization (ESTO), which attempts to
enhance the evolutionary search of a target task using the knowledge
captured from several previously solved source tasks, has been receiving
increasing research attention in recent years. Despite the numerous
approaches developed, existing benchmark problems for ESTO are not well
designed: they are often simply
extended from other benchmarks in which the relationships between the
source and target tasks are not well analyzed. Consequently, the
comparisons conducted on these problems are not systematic and can only
provide numerical results without a deeper analysis of how an ESTO
algorithm performs on problems with different properties. Taking this
clue, this two-part paper revisits a large body of solution-based ESTO
algorithms on a group of newly developed test problems, to help
researchers and practitioners gain a deeper understanding of how to
better exploit optimization experience towards enhanced optimization
performance. Part A of the series designs a problem generator based on
several newly defined concepts to generate benchmark problems with
diverse properties that closely resemble those of real-world
problems. Part B of the series empirically revisits various algorithms
by answering five key research questions related to knowledge transfer.
The results demonstrate that the performance of many ESTO algorithms is
highly problem-dependent, which suggests the need for further research
on transferability measurement and enhancement in ESTO algorithm
design. The source code of the benchmark suite developed in Part A is
available at https://github.com/XmingHsueh/Revisiting-S-ESTOs-PartA.