On Asymptotic Convergence Rate of Evolutionary Algorithms

This paper presents theoretical studies of the Asymptotic Convergence Rate (ACR) for finite-dimensional optimization. Given the problem function (fitness function), the ACR measures how fast an iterative optimization method converges to the global solution as the number of iterations increases to infinity. If the ACR is less than one, the method converges exponentially fast (known as linear convergence in various contexts). The presented theory extends previous studies of the Average Convergence Rate, a related convergence rate measure. The main focus is on two questions: how a change of the problem function may influence the value of the ACR, and what the relation is between the convergence rate in the objective space and in the search space. It is shown, in particular, that the ACR is the maximum of two components, one of which does not depend on the problem function. This provides a lower bound on the convergence rate and implies that some algorithms cannot converge exponentially fast on any nontrivial continuous optimization problem. Furthermore, among other results, it is shown how the convergence rate in the search space is related to the convergence rate in the objective space when the problem function is dominated by some polynomial. We discuss various examples and numerical simulations using the (1+1) self-adaptive evolution strategy and other algorithms.
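As a rough illustration of the quantities discussed above, the following sketch runs a (1+1) evolution strategy with log-normal self-adaptation of the step size on the sphere function and computes a finite-time proxy for the convergence rate: the geometric-mean ratio (|f_t - f*| / |f_0 - f*|)^(1/t). This is an assumed, generic formulation for illustration only, not the paper's exact definition of the ACR or its experimental setup; the function names and parameters (`tau`, `sigma0`) are hypothetical choices.

```python
import math
import random

def one_plus_one_sa_es(f, x0, sigma0=1.0, iters=2000, seed=0):
    """(1+1) self-adaptive ES: each offspring first mutates the step size
    log-normally, then perturbs the parent point with it; the offspring
    replaces the parent only if it is at least as good (elitist selection)."""
    rng = random.Random(seed)
    n = len(x0)
    tau = 1.0 / math.sqrt(n)          # common learning-rate choice
    x, sigma, fx = list(x0), sigma0, f(x0)
    errors = [fx]                     # trajectory of objective values
    for _ in range(iters):
        s = sigma * math.exp(tau * rng.gauss(0, 1))   # mutate step size
        y = [xi + s * rng.gauss(0, 1) for xi in x]    # mutate point
        fy = f(y)
        if fy <= fx:
            x, sigma, fx = y, s, fy
        errors.append(fx)
    return errors

def empirical_rate(errors):
    """Geometric-mean error ratio over t iterations; values below one
    correspond to exponential (linear) convergence. Assumes f* = 0."""
    t = len(errors) - 1
    return (errors[-1] / errors[0]) ** (1.0 / t)

sphere = lambda x: sum(xi * xi for xi in x)   # global minimum f* = 0
errs = one_plus_one_sa_es(sphere, [5.0] * 5)
rate = empirical_rate(errs)
print(f"empirical convergence rate: {rate:.4f}")
```

A rate strictly below one on this run is consistent with the exponential convergence mode described in the abstract; for a plateau-producing algorithm the same estimate would approach one as the iteration budget grows.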