On Asymptotic Convergence Rate of Evolutionary Algorithms
  • Dawid Tarłowski
Jagiellonian University

Corresponding Author: [email protected]



The Asymptotic Convergence Rate (AsCR) measures how fast an iterative optimization method converges to the global solution as the number of iterations tends to infinity. When it is less than one, it indicates a fast, exponential mode of convergence. We provide a general theory for the AsCR for both discrete and continuous state space models. We begin by showing that the AsCR may be defined via the geometric mean of the successive one-step ratios of the expected fitness error; this paper thereby extends previous studies of the Average Convergence Rate, a recently introduced measure. We show that in discrete optimization these convergence measures do not depend on the values of the fitness function but only on the algorithm itself, and that the convergence rate in the solution space equals the convergence rate in the fitness space (no particular conditions on the optimization algorithm, such as the Markov property, are necessary). Next we focus on continuous optimization and analyse how a change of fitness function may influence the value of the AsCR. Surprisingly, even in the continuous case there are limits on how much a change of the fitness function can influence the convergence rate of the process, and we provide lower and upper bounds based on the asymptotic relation between the optimization process and the fitness functions. We also discuss examples and applications, including specific instances of Evolutionary Algorithms and Simulated Annealing. In particular, we show that some algorithms cannot converge exponentially fast for any nontrivial fitness function.
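The geometric-mean characterization mentioned above can be illustrated numerically. The sketch below is an assumption-laden toy (the function name `ascr_estimate` and the synthetic error sequence are ours, not from the paper): it estimates the rate as the geometric mean of the successive ratios e_k / e_{k-1} of the expected fitness error, which telescopes to (e_t / e_0)^(1/t).

```python
def ascr_estimate(errors):
    """Estimate the asymptotic convergence rate from a sequence of
    expected fitness errors e_0, e_1, ..., e_t.

    The geometric mean of the one-step ratios e_k / e_{k-1} telescopes
    to (e_t / e_0)^(1/t); a value below one signals exponential
    (geometric) convergence of the error.
    """
    e0, et = errors[0], errors[-1]
    t = len(errors) - 1
    return (et / e0) ** (1.0 / t)

# Hypothetical error sequence decaying geometrically with factor 0.5:
errors = [2.0 * 0.5 ** k for k in range(11)]
rate = ascr_estimate(errors)
# rate ≈ 0.5, i.e. below one: the exponential convergence mode
```

In practice the errors e_k = E[f(X_k)] - f* would come from averaging many runs of the algorithm; here a closed-form sequence stands in for that data.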