Mendelian evolutionary theory optimization algorithm

This study presents a new multi-species binary-coded algorithm, Mendelian evolutionary theory optimization (METO), inspired by plant genetics. The framework rests on three concepts: first, the "denaturation" of the DNA of two different species to produce a hybrid "offspring DNA"; second, the Mendelian theory of genetic inheritance, which explains how dominant and recessive traits appear in two successive generations; third, Epimutation, through which an organism resists natural mutation. These concepts are reconfigured in order to design a binary meta-heuristic evolutionary search technique. Based on this framework, four evolutionary operators, (1) Flipper, (2) Pollination, (3) Breeding, and (4) Epimutation, are created in the binary domain. In this paper, METO is compared with well-known evolutionary and swarm optimizers: (1) binary hybrid GA, (2) bio-geography-based optimization, (3) invasive weed optimization, (4) shuffled frog leaping algorithm, (5) teaching–learning-based optimization, (6) cuckoo search, (7) bat algorithm, (8) gravitational search algorithm, (9) covariance matrix adaptation evolution strategy, (10) differential evolution, (11) firefly algorithm, and (12) social learning PSO. The comparison is evaluated on 30- and 100-variable benchmark test functions, including noisy, rotated, and hybrid composite functions. The Kruskal–Wallis rank-based nonparametric H-test is used to determine statistically significant differences between the optimizers' output distributions over 100 independent runs. The statistical analysis shows that METO is a significantly better algorithm for complex and multi-modal problems with many local extremes.


Introduction
Optimization plays an essential role in achieving accuracy and increasing the efficiency of systems. Under the classes of real- and binary-coded schemes, the literature proposes a variety of meta-heuristic population-based Evolutionary Algorithms (EAs) [1][2][3], e.g., the Genetic Algorithm (GA) [4,5], the Memetic Algorithm (MA) [6], PSO [7], etc. Population-based EAs have been widely accepted by researchers and industries in different fields [8]. Yet the literature reveals the limitations of meta-heuristic algorithms, which find it hard to solve multi-modal functions [9][10][11]. These limitations drive researchers to look for better optimization techniques that can offer better results. In this regard, this paper makes an effort to present a better optimization technique to handle the multi-modality of objective functions.
Recent papers [29][30][31] show comparative evaluations of recent meta-heuristic optimizers. To add to the state of the art, we take inspiration from the evolutionary theory of plant genetics, based on Mendel's law of inheritance, to propose a genetically evolved optimization algorithm. In this algorithm, evolution takes place by interbreeding plants of different species [32]. To design this novel Evolutionary Algorithm (EA), we redefine the biologically inspired metaphors in the binary domain so that they can be implemented as a computer program. Due to its binary structure, the proposed five operations (flipping, pollination, breeding, discriminating, and Epimutation [33][34][35][36]) have encoding and decoding techniques like the Genetic Algorithm (GA) [20], but a different working structure: the ancestors' memory is transferred across two consecutive generations of offspring, F1 and F2, as METO produces two generations of offspring in sequence. This places METO in a different class from the GA. METO is a gradient-free method that does not require the function differential, and is thus well suited to discontinuous and multi-modal problems as well.
The evaluative study of METO performs an analytical comparison with twelve optimizers, including the state-of-the-art techniques briefly described in Table 2. For this, several classes of benchmark test functions are considered, including noisy, rotated, and hybrid composite functions [37][38][39]. The following distinctive features describe the METO algorithm: (i) It is a binary-coded optimizer and explores the transformed genome search space instead of the real one. (ii) Mendel deduced his theory by experimentation on pea plants, wherein genetic information exchange took place by breeding plants of different species.
Based on his experiments, METO employs multiple populations, where each population corresponds to a particular species. This is the base of the METO algorithm. (iii) Instead of producing one generation of offspring per epoch through crossover/breeding, as is usual in GA and other evolutionary optimizers, METO produces two consecutive generations of offspring in each evolution epoch. Inspired by Mendel's experiments, the first generation of offspring, F1, is produced by cross-breeding the F0 generation parents, and the second generation, F2, by self-breeding the F1 generation parents. (iv) An organism in the population is represented by complementary double strands of artificial DNA. (v) The DNA of an F1 generation offspring, resulting from cross-breeding, is formed by combining the opposite strands of DNA of the two parents [33]. Following Mendel's experiments, the breeder parents belong to two different species; that is why at least two species are required to observe Mendelian evolution in plants. (vi) In an evolution cycle of METO, parallel transmission of genes takes place across two consecutive generations. The first gene transmission is from F0 to F1 and then from F1 to F2; these genes appear in the next generation of descendants and are called Dominant Genes (DG). The second gene transmission is to alternate generations instead of the next one; these are Recessive Genes (RG) and transmit from the F0 to the F2 generation. (vii) Because recessive genes transmit to alternate generations (F0 to F2, without appearing in F1), they are subjected to mutation multiple times over an evolution cycle. This resembles nature's self-organizing rehabilitation process.
In the presented manuscript, Section 2 gives the biological phenomena behind the development of the optimizer. Section 3 presents the modeling of the biological metaphors as operators and illustrates the proposed METO algorithm step by step. Section 4 shows how points move in the phenotype space due to operations in the genotype search space. Experimental evaluation and simulation results, with limitations, are unveiled in Section 5; the effect of parameters on METO performance is discussed in the same section. Section 6 exhibits future research scope for further advancement of METO. Finally, the conclusion is drawn from the comparative results and statistical analysis of METO against the other optimizers.

METO: the biological inspiration
Three concepts from genetics are adopted and redefined in order to design a meta-heuristic evolutionary search technique. It is worth mentioning that a chromosome is typically an end-to-end, lined-up construction of many DNAs, which would run millions of miles long, as shown in Fig. 1. Following this structure of the chromosome, the first concept is the "denaturation and annealing of DNA" of two breeder species (we use "population" and "species" interchangeably) to produce the hybrid offspring, as shown in Fig. 2 [33]. As a result of natural breeding, the anti-parallel strands of DNA of the breeder parents are first denatured into a Sense strand (SS) and an Antisense strand (AS).
Thereafter, the SS of one breeder parent is annealed with the AS of the other to form the DNA of a new offspring. This concept can equally be applied to chromosomes for extracting the two strands of lined-up DNAs; moreover, in Fig. 1 we can observe that the 5′ end of one DNA is connected to the 3′ end of the next. In this way, multiple lined-up SS DNAs are considered a 3′-5′ SS chromosome strand. Similarly, lined-up AS DNAs are considered a 5′-3′ AS chromosome strand. In the context of METO, the length of a chromosome strand depends on the number of variables N_v in the problem space, which equals the number of DNAs in a chromosome.
The second concept is the evolutionary theory of Mendel [54], which reveals the transmission of hereditary characteristics from generation to generation as a result of breeding parents of different species. He traced the transmission of hereditary characteristics by sequential cross-breeding and self-breeding to produce the F1 and F2 offspring generations, respectively. He established that (1) the dominant genes appear in the successive generations of offspring, from the F0 to the F1 generation and from the F1 to the F2 generation, and (2) the recessive genes appear in the second generation of offspring, from the F0 to the F2 generation. This genetic transmission of heredity from generation to generation is attributed to the "selfish genes" coined by Dawkins in 1976 [51].
The third concept is biological Epimutation, which guides evolution in the organism by following nature's self-organizing cycles of sequential mutation and rehabilitation [36,55]. This phenomenon is supported by Joseph Heitman's statement [55] that Epimutation is a reversible phenomenon and therefore gives the organism flexibility: the abilities of maintenance, self-improvement, and recovery of physical strength, cognition, and mobility. In one word, it leads to 'rehabilitation'.
Long-standing hereditary characteristics are subject to change due to environmental factors, and such change may turn out either good or bad. A good mutation is accepted by the organism as an evolved trait; to a bad mutation, the organism responds by trying to recover. Nature has blessed the organism with this self-healing capability, through which an organism tries to rehabilitate itself to some extent. Moreover, pollination resembles the selection of breeder plants from different species for fertilization.
Here, we briefly explain the biological terminology and the technical tools it inspired, for the sake of clarity.

METO: the implementation
In this manuscript, we have established the METO algorithm on binary bit compositions, benefiting from their hardware-friendly operations, as it searches for the solution in genome space. To implement the biologically inspired phenomena described above, METO deploys four operators: (i) the Flipper [33], (ii) the Pollination [34], (iii) the Breeding [52], and (iv) the Epimutation [35,36]. This section illustrates the implementation of these operators. Before describing them, the encoding and decoding schemes used to implement the strands of the chromosome are presented; this is the base of METO on which all operators work.

Binary representation of the chromosome strand
In the genotype representation G of a plant, each gene in a particular strand of the chromosome, considered as lined-up DNAs, corresponds to a visible appearance, for example, the height, type, size, and color of the flower. A change in a gene's value changes these characteristics and thus results in movement in the genome search space.
The implementation of the genotype representation G of both lined-up DNA strands is a sequence of binary bits g[l] ∈ {0, 1} (Fig. 3), where each bit stands for a gene and its value for an allele. Here, l is the bit "locus", the position of the corresponding gene g[l] in the chromosome. The decoded value of G gives the genetic contribution to the phenotype, R = f(G). A certain pattern of binary bits g[l] in the chromosome string encodes specific information, and its decoded value represents a particular point in phenotype space. Thus, METO requires a coding-decoding system to map a binary-coded string to its corresponding state in real space, as shown in Fig. 3. The figure illustrates the genotype representation of a particular point in phenotype space.
In Fig. 3, one strand of the chromosome is shown representing variables, where each ten bits (genes) code one variable. The figure shows the corresponding phenotype appearance of the genotype representation of two variables x_1 and x_2. It is worth mentioning that the number of genes representing a variable in the chromosome strand is determined by the desired solution accuracy. For example, for two variables x_1 and x_2 with lower and upper bounds, denoted by subscript L and superscript U respectively, if each contains ten genes, the lower and upper bounds of the variables are represented by the chromosomes (0000000000, 0000000000) and (1111111111, 1111111111), which map to (x^L_1, x^L_2) and (x^U_1, x^U_2) in real space. Between these limits, all intermediate chromosomes in G, representing points in R, can be produced by changing one or more of the respective bits. Decoding proceeds by normalizing the binary value to [0, 1] and then mapping it between the lower and upper variable bounds x^L and x^U. For l bits representing a variable in the chromosome, there are 2^l possible distinct sub-strings; thus the accuracy error in the real search space is 1/2^l [20]. The usual number of bits N_b to represent a variable is calculated as N_b = min[10, round(0.1 × (x^U − x^L))].
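The coding-decoding step above can be sketched in a few lines; the function names are illustrative, and the normalization by 2^l − 1 (so that the all-ones string maps exactly onto the upper bound) is an assumption consistent with the bound mapping described in the text:

```python
def decode_variable(bits, x_l, x_u):
    """Decode one variable's gene sub-string: binary -> integer,
    normalize to [0, 1], then map linearly onto [x_l, x_u]."""
    l = len(bits)
    value = int("".join(map(str, bits)), 2)
    return x_l + (value / (2 ** l - 1)) * (x_u - x_l)

def num_bits(x_l, x_u):
    """Usual bits per variable, per the text:
    N_b = min[10, round(0.1 * (x_u - x_l))]."""
    return min(10, round(0.1 * (x_u - x_l)))
```

With ten genes per variable, the all-zeros string decodes to x^L and the all-ones string to x^U, matching the bound chromosomes given above.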

Population structure
As the Mendelian theory of evolution is based on breeding parents of two species, the proposed algorithm is a multi-population optimizer wherein each i-th population represents one species. Following the denaturing of DNA for breeding, the population of organisms/plants is represented by two sub-populations holding all sense strands (SS) and all anti-sense strands (AS), respectively. Each sub-population has n individuals, which we interchangeably call organisms, plants, and chromosomes (lined-up DNAs). Each pair of chromosome strands r_i,n and v_i,n is defined by the lined-up DNA strands shown in Fig. 1.
Here N_v and N_b represent the number of variables/DNAs and the number of bits representing a variable, respectively. The binary value of each gene g_r_i,n[l] is randomly initialized to either 0 or 1 and then evolved by the proposed algorithm. The first and last bits of the strands, represented by l_start and l_end respectively, are assigned according to the double-strand structure of DNA in Figs. 1 and 4. In this formulation, the d-th anti-sense strand (AS) is the opposite composition of the corresponding SS; it is defined by the Flipper operator F, as shown in Fig. 4. Section 3.4.1 defines the F operator in detail.
In his experiment, Mendel traced heredity transmission in two successive generations, F1 and F2, where the current parent population is F0 (see Fig. 5). Due to elitism, the best individuals are selected to breed the offspring; thus the new offspring contains fewer genes, but from the best characteristics of the parents. The resultant population of the epoch is acquired so as to have the same size as the F0 population, by taking the SS strands of the new populations of the j-th and k-th species. Note that the developed algorithm is based on heredity transfer from one generation to the next. This heredity can only be transferred through the parents to their successive offspring; thus a parent is replaced only by an evolved offspring at the same location. In other words, the n-th parent is changed only by the n-th offspring. In this manner, the heredity of the n-th parent transmits properly to the corresponding offspring, and the resulting new parent population for the i-th species is formed accordingly.
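The multi-population structure above can be sketched as follows; the dictionary layout and helper names are illustrative assumptions, not the paper's notation:

```python
import random

def make_flipper():
    """Illustrative Flipper: the AS is the SS with its bit order reversed."""
    return lambda chrom: chrom[::-1]

def init_species(M, n, N_v, N_b, flipper):
    """Initialize M species (populations). Each holds n SS chromosomes of
    N_v * N_b random bits; the AS sub-population is derived from the SS
    via the Flipper operator, mirroring DNA denaturation."""
    species = []
    for _ in range(M):
        ss = [[random.randint(0, 1) for _ in range(N_v * N_b)]
              for _ in range(n)]
        anti = [flipper(c) for c in ss]
        species.append({"SS": ss, "AS": anti})
    return species
```

Each species thus carries paired SS/AS sub-populations of equal size, as required for the breeding operators.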

Construction of heredity
This section illustrates the construction of the recessive genes (RG), an important part of the algorithm. According to the Mendelian theory of evolution, heredity transmits from one generation to the next (Fig. 6: heredity evolution by elite selection), and two types of heredity exist: Dominant Genes (DG) and Recessive Genes (RG). DG appear in the F1 generation offspring, and RG are intended to appear in the F2 generation offspring. Thus, we retain the RG as the heredity, to be available for the F2 generation offspring. Fig. 6 shows the construction of the heredity genes H by elite selection of genes from one generation to the next. After cross-breeding, the two breeder parents r_j,n and v_k,n have their own recessive heredity information. Since we are developing an evolutionary algorithm, the best-fitness genes are always intended to pass to the next generation, an implicit elitism property [53]. This elite selection of heredity is accomplished by three comparators, C_j, C_j,k, and C, as shown in Fig. 6. C_j selects the better of [r_j,n, v_j,n] based on fitness. The output is then compared with v_k,n using the comparator C_j,k, which provides the best heredity among [r_j,n, v_j,n, v_k,n]. This output goes to the comparator C for comparison with the reference old heredity H_old_j,n. The finally selected heredity H_j,n is then available for the n-th F2 generation offspring of the j-th species. Similarly, H_k,n is produced for the n-th F2 generation offspring of the k-th species.
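The three-comparator chain of Fig. 6 can be sketched directly; minimization is assumed here (lower fitness is better), and the function name is illustrative:

```python
def select_heredity(r_jn, v_jn, v_kn, h_old, fitness):
    """Elite selection of heredity through the three comparators of Fig. 6.
    C_j picks the better of [r_jn, v_jn]; C_{j,k} compares the winner with
    v_kn; C compares that result with the old heredity H_old."""
    best = min([r_jn, v_jn], key=fitness)      # comparator C_j
    best = min([best, v_kn], key=fitness)      # comparator C_{j,k}
    return min([best, h_old], key=fitness)     # comparator C
```

The chain guarantees the retained heredity is never worse than any of its four inputs, which is the implicit elitism property the text invokes.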

Basic operators
METO assembles the biologically inspired phenomena into four operators that accomplish four different tasks, as described in this section.

Flipper operator (F )
To create an AS from an SS, the Flipper operator, v_i,n = F(r_i,n), is introduced; it reverses the order of bits in the SS, as in Figs. 2 and 4. The gene g_v_i,n[l] ∈ v_i,n at location l is obtained by the Flipper operator F(.), which reverses the order of the g_r_i,n[l] bits. To introduce a stochastic nature in the Flipper operator, a binary variable X ∈ {0, 1} is defined, based on which equation (12) puts the bit at location Δl = |l_end × (1 − X) − l + 1 − X| into position l, where |.| denotes absolute value. If X = 1, the bits do not flip, since Δl = l; if X = 0, then Δl = l_end − l + 1, which results in flipping the bits. Thus, in equation (12), only those bits of F(r_i,n) are flipped for which Δl ≠ l. The Flipper operator is essential in the initial evolution epochs of the algorithm, since it explores the maximum problem space and reduces the risk of premature convergence or trapping at local extremes. In the long run, however, this hurts solution accuracy and opposes the exploitation concept, so it must be controlled as the evolution epochs grow. Therefore, a flipping probability δ is introduced as a control mechanism for the binary variable X in equation (12): δ_N_b is a randomly generated number, and δ is a function of the maximum number of function evaluations (α) and the current function evaluation (β), defined as an exponentially decreasing function. Moreover, inspired by errors in DNA replication, we introduce a mechanism whereby the Flipper operator is applied only to DNAs selected from the N_v DNAs, as per equation (15); candidate DNAs are selected randomly. Here ⌈.⌉ is the ceiling operation. To control the search strategy, the Flipper operator is applied to ⌈δ_N_v⌉ selected DNAs out of the N_v in the chromosome strand, based on a random number rand ∈ [0, 1]. Pseudocode for producing the AS by the Flipper operation is given in Algorithm 1.
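A minimal sketch of the stochastic Flipper follows; the paper only states that δ decreases exponentially with the evaluation count, so the exact decay law exp(−β/α) is an assumption, and the per-strand (rather than per-bit) flip decision is a simplification:

```python
import math
import random

def flip_probability(beta, alpha):
    """Flipping probability delta: exponentially decreasing in the current
    function evaluation beta relative to the maximum alpha (assumed law)."""
    return math.exp(-beta / alpha)

def flipper(ss, delta):
    """Stochastic Flipper F: with probability delta set X = 0 and reverse the
    bit order of the sense strand (Delta_l = l_end - l + 1); with X = 1 the
    strand is copied unchanged (Delta_l = l)."""
    X = 0 if random.random() < delta else 1
    return ss[::-1] if X == 0 else list(ss)
```

Early in the run δ is close to 1 and flips dominate (exploration); as β approaches α, δ decays and strands pass through unchanged (exploitation).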

Pollination operator
In the METO algorithm, two pollination schemes are used in sequence, called here random and sequential. First, random pollination randomly picks two breeder species/populations, the j-th and the k-th, from the even number of available species. Then, in sequential pollination, each individual of the j-th species is paired in bijection with the individual of the k-th species at the same position.
Here, P(.) is the pollination operator and M is the number of species; note that the two selected populations, the j-th and the k-th, are never the same. Sequential pollination is the simplest scheme and can be extended to other pollination schemes such as Roulette Wheel and Tournament selection [16].
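The two pollination steps can be sketched as follows; the function names are illustrative assumptions:

```python
import random

def random_pollination(M):
    """Random pollination P: pick two distinct breeder species j != k out of
    the M available species (sampling without replacement)."""
    j, k = random.sample(range(M), 2)
    return j, k

def sequential_pollination(pop_j, pop_k):
    """Sequential pollination: pair the n-th individual of species j with the
    n-th individual of species k (a bijection in order)."""
    return list(zip(pop_j, pop_k))
```

Sampling without replacement is what enforces j ≠ k; swapping `sequential_pollination` for a Roulette Wheel or Tournament pairing would change only the second function.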

Breeding operators
To implement the Mendelian theory, two breeding operators, Cross-breeding and Self-breeding, are defined to produce the two successive generations of offspring, F1 and F2, respectively. The definitions of both breeding operators are as follows:

Cross-breeding
To produce the n-th F1 generation offspring, the breeder parent chromosome strands r_j,n and v_k,n are selected from different species j and k using the pollination operator. The cross-breeding operator B(.) produces the SS for the species corresponding to r_j,n, while the corresponding AS for the same species is constructed by the F(.) operator. The AS is treated as a supporting strand to produce the SS, because in genetics the SS is the coding strand. The operator B(.) produces twin SS offspring; pseudocode is given in Algorithm 2. There, r_j and v_k are the sets of sense and anti-sense strands of the j-th and k-th species (of equal size), both twin offspring are initialized with the genes of r_j, and for each breeding pair, N_v × rand variables are randomly selected from the N_v variables for gene exchange.

Following a stochastic approach for selecting the dominant genes from the two breeder parents r_j,n and v_k,n, natural selection is adopted. A variable y with d elements is defined for cross-breeding; each element of y is set to either 1 or 0 according to a random number γ generated in the range [0, 1]. The twin offspring SS chromosomes are then generated gene by gene from the two parents according to y. From the twin offspring, we select the better one, based on fitness, as the resulting F1 generation offspring SS. (In Algorithm 3, any chromosome whose bits are all '0' is removed from the F2 population.)
Multiple offspring sense strands can also be produced, as shown in Fig. 7. Both twin SS offspring [r^F11_j,n, r^F12_j,n] are composed of dominant genes from either r_j,n or v_k,n, based on natural selection.
In a similar manner, offspring are produced for the k-th species, where the sense and anti-sense parent chromosomes are r_k,n and v_j,n, respectively. For generating multiple twin offspring, the common genes can be obtained in advance using an XNOR logical operation, and the above equation is used for the remaining gene loci. In the case of very long chromosomes, this helps decrease the time complexity of producing multiple offspring.
Here the overbar indicates the complement of a bit. The transfer of common characteristics is shown in Fig. 7, where the gray genes at the 3rd and 4th loci mark the common ones obtained by the XNOR operator; the remaining, undefined genes are represented by the allele value X.
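The gene-wise twin construction and the XNOR shortcut can be sketched as follows; the 0.5 threshold on γ is an assumption, since the text does not fix the probability split:

```python
import random

def cross_breed(r_jn, v_kn):
    """Cross-breeding B: each element of y is 1 or 0 from a random gamma in
    [0, 1]; the twin SS offspring take each gene from one parent or the
    other, in opposite patterns, so every gene comes from r_jn or v_kn."""
    y = [1 if random.random() < 0.5 else 0 for _ in r_jn]
    twin1 = [r if yi else v for r, v, yi in zip(r_jn, v_kn, y)]
    twin2 = [v if yi else r for r, v, yi in zip(r_jn, v_kn, y)]
    return twin1, twin2

def common_genes(r_jn, v_kn):
    """XNOR mask: 1 marks loci where both parents carry the same allele, so
    those genes can be copied once when producing multiple offspring."""
    return [1 - (a ^ b) for a, b in zip(r_jn, v_kn)]
```

At every locus flagged by `common_genes`, both twins necessarily inherit the shared allele, which is why precomputing the mask saves work on very long chromosomes.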

Self-breeding
The self-breeding operator O breeds the SS strands of the offspring generated in the F1 generation with themselves to produce the F2 offspring SS for the j-th species. Through this process, the RG dominate in the F2 offspring by replacing the same-locus genes of the F1 generation offspring if the fitness of the RG is better than that of the F1 generation offspring (the parent of the F2 generation offspring). On the other hand, if the fitness of the RG is not better, the genes of the F1 generation offspring dominate in the F2 generation offspring. This is controlled by a random number ρ in the range [ρ_L, ρ_U]; generally, ρ_L = 0.9 and ρ_U = 0.97 are selected. The effect of different values of ρ on the search strategy is discussed in Section 11 and in Fig. 4.
Similarly, r^F2_k,n[l] for the k-th species is produced by O(r^F1_k,n, r^F1_k,n), as in equation (26). The offspring of the F2 generation form the F2 population, whose AS strands are generated by the Flipper operator F; the F2 populations of the j-th and k-th species are formed accordingly. Pseudocode for producing the F2 generation offspring is given in Algorithm 3. It is interesting to observe in lines 14 and 18 of Algorithm 3 that if the heredity's fitness is better than that of the F1 generation offspring, the heredity is transferred to produce the F2 generation offspring; conversely, if the fitness of the F1 generation offspring is better than its heredity, the F1 offspring genes dominate in producing the F2 generation offspring. This transfer of genes to the F2 generation offspring is gated by ρ, the Mendelian probability.
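The strand-level rule above can be sketched as follows; minimization is assumed, and applying the Mendelian probability ρ as a single strand-level gate (rather than per locus) is a simplifying interpretation:

```python
import random

def self_breed(f1, heredity, fitness, rho_l=0.9, rho_u=0.97):
    """Self-breeding O: the retained recessive heredity replaces the genes of
    the F1 strand in the F2 offspring only if its fitness is better (lower);
    otherwise the F1 genes dominate. The replacement is gated by the
    Mendelian probability rho drawn from [rho_l, rho_u]."""
    rho = random.uniform(rho_l, rho_u)
    if fitness(heredity) < fitness(f1) and random.random() < rho:
        return list(heredity)    # recessive genes dominate in F2
    return list(f1)              # F1 genes dominate in F2
```

Because ρ stays close to 1, a fitter heredity almost always surfaces in F2, yet the small chance of keeping the F1 genes preserves some diversity.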

Algorithm 4 Epimutation in heredity
Require: H, the set of heredities of all N organisms in a species; τ_L and τ_U, the lower and upper limits of the mutation probability; ζ, the mutation rate for selecting individuals for mutation from the population; τ_Nv, the DNA mutation rate; N_v; N_b
1: s ← decoded variables of the heredity strands
2: ξ ← 0 (counter for rehabilitation attempts)
3: index ← [] (indexes of all organisms that have evolved after going through a mutation attempt)
4: while not empty(ζ) and ξ has not reached its limit do
5:   for each element i of ζ do
6:     m ← τ_Nv × N_v (number of heredity DNA strands selected for evolution)
7:     u ← m DNAs selected from the N_v strands (each element of u is the index of a selected DNA strand)
8:     for each element j of u do
9:       b ← the N_b bits of the j-th DNA strand of the chromosome
10:      r1 ← random vector of length N_b with entries between 0 and 1
11:      d ← logic vector built from r1, τ_L, and τ_U; a 1 in d marks a bit that should mutate (0 ⇔ 1)
12:      mutate the bits of the heredity DNA strand by XOR with d
13:     end for
14:     F_s ← fitness of s(i), the binary-to-decimal conversion of the DNA strands of s(i)
15:     if the mutated heredity is fitter, record i in index (evolution); otherwise restore it (rehabilitation)
16:   end for
17:   ξ ← ξ + 1
18: end while

Epimutation operator
Fig. 9 shows the Epimutation process on the heredity of an organism, where five states of the organism are shown as p1, p2, p3, p4, and p5. Here p1 is the current state of the organism, and the others are results of Epimutation. States p2, p3, and p4 are not better than the current state; thus the organism returns to its original state p1 by a rehabilitation process. The mutated state p5, however, is better than the current state; in this case the organism accepts it as evolution. This mechanism is Epimutation: it tries to find a better state than the pseudo-global optimum, a 'fine-tuning' of the global best point by iterative mutations followed by rehabilitation. We define the Epimutation factor ξ, which gives the number of chances the organism has to go through mutation in its life cycle. Here, the organism took four attempts to get a better solution, thus ξ = 4.
The E operator changes the gene values of the heredity g_H[l] based on a randomly generated Epimutation factor τ. For each gene, if τ lies between the lower and upper limits, the corresponding gene is changed (0 ⇔ 1). The lower and upper limits of τ are calculated as in equation (30). It is worth noting that the heredity is used to produce the F2 generation offspring from self-breeding of the F1 generation parent; thus only those heredities are subjected to Epimutation that are associated with F1 generation parents intended to self-breed.
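The mutate-then-rehabilitate cycle of Figs. 9 and Algorithm 4 can be sketched compactly; minimization is assumed, and the function name and argument layout are illustrative:

```python
import random

def epimutate(heredity, fitness, attempts, tau_l, tau_u):
    """Epimutation E: repeatedly mutate the heredity genes; a gene flips when
    its random factor tau falls between tau_l and tau_u. A mutant is kept
    only if its fitness improves (evolution); otherwise the organism returns
    to its former state (rehabilitation)."""
    current = list(heredity)
    for _ in range(attempts):            # xi chances over the life cycle
        candidate = list(current)
        for l in range(len(candidate)):
            tau = random.random()
            if tau_l < tau < tau_u:
                candidate[l] ^= 1        # flip the gene: 0 <-> 1
        if fitness(candidate) < fitness(current):
            current = candidate          # evolution: keep the better state
        # else: rehabilitation -- the mutated state is discarded
    return current
```

Since rejected mutants are discarded, the returned heredity is never worse than the input, which is exactly the reversibility Heitman's statement attributes to Epimutation.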

Algorithm 5 METO Algorithm
Require: M, the number of species; N_v; N_b; N, the number of individuals; TerminationCriteria; CostFunction (Iter, feval, and solution accuracy are the general TerminationCriteria; Fitness is calculated from CostFunction)
1: for j ← 1 to M do
2:   p ← N/M (each of the M species contains p organisms/plants/individuals)
3:   initialize the SS population of species j randomly
4:   form the heredity, taking the best of the SS and AS strands based on fitness
5:   S_best(j) ← best heredity of species j (Species best)
6: end for
7: G_best(1) ← best of S_best (best heredity over all species, Global best)
8: i ← 2; ρ_L ← 0.9; ρ_U ← 0.97 (i is the evolution epoch counter)
9: while the termination criteria are not met do
10:  for j ← 1 to M do
11:    k ← select another species by random pollination
12:    δ_N_b, τ_U, and τ_L ← from equations (14) and (30), respectively
13:    δ_N_v and τ_Nv ← random numbers in [0, 1]
14:    initialize the cross-breeding rate (how many organisms are ready for cross-breeding) and the self-breeding rate (selecting parent organisms from the F1 generation for self-breeding)
15:    E_F1 ← indexes of the best p_F1 strands of species j (implicit elitism: organisms for cross-breeding; through cross-breeding, the heredities of the two species evolve)
16:    produce F1 offspring by cross-breeding of species j and k
17:    E_F2 ← indexes of the best p_F2 strands of the F1 population (the better of the two offspring from the same parents is selected by fitness for self-breeding and Epimutation)
18:    replace the SS strands and heredity at positions E_F1 in the main population
19:    evolve the heredity of the selected organisms by Epimutation and produce F2 offspring by self-breeding
20:    replace parents in the main population if the F2 generation offspring fitness is better
21:    produce the AS strands of the F2 generation offspring; concatenate and sort the strands by fitness
22:    S_best(j) ← best heredity of species j (Species best)
23:  end for
24:  G_best(i) ← best heredity over all species (Global best); i ← i + 1
25: end while
Fitness is calculated for each new strand using the given CostFunction(x); each sub-segment of the binary strand in the genotype representation G, representing one variable, is converted into its equivalent real-domain (R) value before being placed in CostFunction(x). Epimutation is the self-organization mechanism by which an organism adjusts itself against environmental mutation via a rehabilitation process. Selfish genes preserve the recessive genes and carry them from generation to generation. Here, we assume that the heredity RG undergoes the mutation process multiple times over the plant's life cycle; if it gains a better mutation, it adopts it as evolution, otherwise the organism rehabilitates itself to its former state. Epimutation, the rehabilitation-against-mutation process, is shown as a function in Algorithm 4.

METO algorithm
A block diagram of the processes in one evolution of METO is presented in Fig. 10. Here we can observe two parallel processes of evolution, the inner and the outer. The inner process assembles the interactions of the pollination, cross-breeding, self-breeding, and flipper operators to produce two successive generations of offspring, while the outer process deals with heredity formation and its Epimutation operation, which is used to produce the F2 generation offspring by self-breeding. In each evolution epoch, each species has its own best solution, S_best. At the end of the evolution epoch, the best of all species is extracted to form the "Global Pseudo-best (G_best)". The term "pseudo" comes from the fact that the solutions improve their merit, coming closer to the global optimal solution, at each evolution. Algorithm 5 presents the pseudocode of METO, deploying the four operations described above in sequence. It is worth mentioning again that the current heredity is replaced with strands of F1 and F2 if they are better than the current RG. METO concludes the solution once one of the following termination criteria is met: (1) the maximum number of iterations is reached; (2) the individuals of all species have the same heredity, unchanged for N iterations; (3) the same answer recurs in m successive iterations; or (4) the error falls below 10^-5 or a desired value.
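One evolution epoch can be summarized in code; all operator callables are simplified stand-ins for the operators described above, minimization is assumed, and note that the n-th parent is replaced only by its own n-th offspring, as the population-structure section requires:

```python
def meto_epoch(species, fitness, pollinate, cross, self_breed):
    """One METO evolution epoch (sketch): pollination pairs two species,
    cross-breeding makes F1, self-breeding makes F2, and each n-th parent
    is replaced only by its own n-th F2 offspring when that offspring is
    fitter. Returns the global pseudo-best of the epoch."""
    j, k = pollinate(len(species))
    f1 = [cross(r, v) for r, v in zip(species[j], species[k])]
    f2 = [self_breed(c) for c in f1]
    species[j] = [new if fitness(new) < fitness(old) else old
                  for old, new in zip(species[j], f2)]
    s_best = [min(pop, key=fitness) for pop in species]   # species bests
    return min(s_best, key=fitness)                       # global pseudo-best
```

Iterating this epoch until a termination criterion fires, and tracking the returned pseudo-best, reproduces the outer loop of Algorithm 5 in miniature.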

Movement of points
This section presents the role of the different operators in the movement of points in hyperspace as a result of evolution.
The Flipper operation provides a stochastic search strategy that eliminates the effect of biases and prevents the optimizer from premature convergence or trapping at local minima. The effect of this operation can be seen in Fig. 11(a), where flipping the SS of a DNA results in an AS that spreads the points represented by S far across the search space. This yields an unbiased global search strategy, which is very important for multi-modal problems, and prevents the search from being trapped at local extremes. The search distracted by the above process is re-aligned in the F1- and F2-generation offspring. Due to cross-breeding, the F1-generation offspring explore the hyperspace between the two parents. The resulting movement of points can be observed in Fig. 11(b): the new offspring are produced between the two parents, in the region specified by them. This shows that F1-generation offspring are influenced only by their parents and not by ancestor characteristics. It is worth noticing that the number of offspring produced is a control variable, which may vary with the complexity of the problem to be solved. Self-breeding uses heredity memory based on the Mendelian probability; this is the second operation that distinguishes METO from GA procedures, since GA does not utilize heredity memory. A few papers do suggest GA with implicit memory [58] to store the chromosome efficiently in computer memory; this could be used with METO as well to improve computational efficiency for problems with many variables. The self-breeding operation pulls the points back towards the region of interest in the search space. In the successive generation, heredity memory provides a biasing mechanism to produce F2-generation offspring as neighbors of the best solution found so far. This is the pull mechanism that attracts the points towards the pseudo-best solution acquired by the organism. The quantity of recessive heredity transmitted to the F2-generation offspring depends on the Mendelian probability ρ, which ranges from τ_L to τ_U. This provides local exploitation of points neighboring the pseudo-global solution, as can be seen in Fig. 11(c)-(f) for different values of ρ: for higher ρ, most points converge to the pseudo-best point, whereas a low value yields a random location-selection strategy. Thus, an optimal selection of ρ can make the proposed optimizer better. This mechanism is backed by Epimutation, which results from the organism's survival instinct.
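A minimal sketch of the bit-complementing idea behind the Flipper operator (the function name and per-bit rate are assumptions; the paper's operator additionally selects which DNAs to flip via a rate δ_Nv):

```python
import random

def flipper(ss, delta_nb=0.5):
    """Sketch of the Flipper operator: derive an anti-sense strand (AS)
    by stochastically complementing bits of the sense strand (SS).
    `delta_nb` plays the role of the per-bit flipping rate."""
    return [1 - b if random.random() < delta_nb else b for b in ss]
```

With delta_nb = 1 the AS is the exact complement, scattering the point to the opposite corner of the binary hypercube; smaller rates give partial complements, which is what spreads points across the search space in Fig. 11(a).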

Experimental Evaluation: Benchmarking METO
In this section, an experimental evaluation is presented to benchmark METO. For this, we utilize complex test functions belonging to different categories of complexity. Details of the benchmark test functions are given in many publications, books, and online [37-39]. These benchmark functions belong to different categories, such as continuous, discrete, analytical, and non-analytical, as listed in Tables 3 and 4. Simulation outcomes and statistical results are collected over 100 independent runs on the 30- and 100-variable problems. The simulation results also assist us in setting the limitations of the METO algorithm and its parameters to achieve better performance. Functions F1-F20 and F81 are multi-modal test functions with many extremes. F21 is a deceptive function; F21-F23 and F70 are integrated functions; F19 has many global solutions; F58-F67 and F83 are noisy functions; F68 is a constrained function; F34-F37 are rotated-shifted functions; F39-F40 are non-continuous functions; and F41-F49 are hybrid and composite functions, as described in [38].
According to the literature, functions with many local minima and a single global optimum are hard to solve in more than twenty dimensions, where present algorithms show limitations. This motivates researchers to design better optimizers. Following this motivation, we focus here on solving the above class of functions with improved consistency and better results.
We simulate the thirty- and hundred-variable benchmark test problems with twelve different prominent evolutionary and swarm optimizers, as shown in Table-2. We have also tested more algorithms and more benchmark functions, and those results can be provided on reader demand; in this limited version of the manuscript, we present the results that best support METO's performance relative to other algorithms. The parameters of all optimizers are also given in Table-2; optimal parameters are adopted from the corresponding papers [19-24,45-50]. Each optimizer uses a population size of 100 and runs for 100,000 function evaluations. 100 independent runs of each optimizer are carried out on each benchmark function to obtain comparative performance. Based on the outputs of the 100 runs, three statistical measures are calculated to show the efficiency of the optimizers. The first is the average µ of the best values, i.e., the sum of all final values divided by the number of runs. The second is the standard deviation σ of the achieved solutions, showing the spread of the obtained results. The last attribute gives the robustness and consistency C of the algorithms, in percentage, representing the number of times an optimizer provides a solution below a particular threshold value; here, the mean of METO is selected as the threshold for all optimizers. Moreover, the best B and worst W values highlight the best and worst possible performance of each optimizer. For comparison, METO has the following parameter configuration: the number of species is 2, with 50 individuals in each species, and ρ_L and ρ_U are 0.9 and 0.97, respectively. The cross-breeding and self-breeding rates are max(0.3, rand) and max(0.5, rand), respectively. For producing the offspring, elite parents are always taken from the population. A good selection of the parameters δ_Nb, τ_L, and τ_U can improve the computational power of the algorithm. The Kruskal-Wallis non-parametric H-test is used to analyze the results [56]. This test is an extension of the Wilcoxon rank-sum test to more than two groups; here we have distributions of 13 optimizers (groups). It is also a non-parametric counterpart of classical one-way ANOVA and compares the medians of the distributions to differentiate them; it indicates whether the result distributions of two optimizers differ significantly. F18 On the deceptive function, both METO and DE give a noise-free solution, whereas CMAES provides an infeasible out-of-bound solution. F19 On this function, METO appears as the best optimizer.
Equal ψ and µ show a symmetric distribution of the solutions with low uncertainty and high consistency below −29.0. Its worst performance is also better than the others'. F20 For this function the solution distribution is also symmetric, with equal µ and ψ and low uncertainty σ = 0.2. The best value of the function, −0.93, is achieved by METO as well. As usual, the consistency below −0.72 is the highest, as can be seen in Table-6. F21 Deformed Schaffer-2 is a deformed version of F14 with higher complexity. METO performs best in all performance measures, with an uncertainty level of σ = 0.8. A smaller ψ = −13.8 than µ = −12.5 shows a left-skewed distribution, which is good for achieving high accuracy below −12.5, as shown in Table-6. F22 We also tested the Keane bump function, which is a constrained function. No other algorithm can compete with METO on this function, except BHGA to some extent. We can observe that METO is very consistent, with low noise (σ = 0.1), on this function. This is one of the hardest functions to solve, and METO shows its value here as the better evolutionary algorithm to adopt. Comparative results on this function are given in Table-6. Overall, the results shown in Tables 5 to 8 demonstrate the dominance of METO over the other optimizers for multi-modal problems with a single or few global extremes.

Kruskal Wallis statistical analysis of the results
We tested the distributions of all optimizers for significant differences using the KWT one-way ANOVA rank test, an extension of the Mann-Whitney test to more than two groups. For the test, we consider the null hypothesis H0 that "the distribution of METO is the same as the distribution of the other optimizers", where the distributions of all optimizers come from independent experiments. The test discriminates the distributions of the optimizers based on the calculated critical chi-square value χ² and the KWT statistic. If the KWT value is smaller than χ², H0 cannot be rejected; thus, to reject H0, the KWT value should be greater than χ². In this procedure, the p-value is used to test the significance of the difference between distributions at the 1% significance level.
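A stdlib-only sketch of the Kruskal-Wallis H statistic described above (this version omits the tie correction that statistical packages usually apply, and the names are illustrative):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic (no tie correction): pool all
    observations, rank them (average ranks for ties), and compare the
    per-group rank sums.  H greater than the chi-square critical value
    with df = K - 1 rejects H0 of equal medians."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_of = [0.0] * n
    i = 0
    while i < n:                      # assign average ranks to tied values
        j = i
        while j + 1 < n and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg = (i + j) / 2 + 1         # ranks are 1-based
        for k in range(i, j + 1):
            rank_of[k] = avg
        i = j + 1
    rank_sums = [0.0] * len(groups)
    counts = [0] * len(groups)
    for (v, gi), r in zip(pooled, rank_of):
        rank_sums[gi] += r
        counts[gi] += 1
    h = 12.0 / (n * (n + 1)) * sum(rs * rs / c
                                   for rs, c in zip(rank_sums, counts))
    return h - 3 * (n + 1)
```

For the paper's setting there would be K = 13 groups of 100 final values each, and H would be compared against the chi-square critical value with 12 degrees of freedom.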
ANOVA Tables-9 and 10 provide additional test results. The ANOVA result for each function has six attributes. Sr represents the source of the variability; three types of source are given. The first is Cl, representing the groups: the variability that exists due to differences among the distribution means. The second is e, the error: the variability due to differences between the data within a group and the group mean, also called within-group variability. The third is T (Total), representing the total variability. SS is the sum of squares for each Sr, and df is the associated degrees of freedom. For Cl, df is the degrees of freedom (DoF) between the distributions/groups, calculated as df = K − 1; here K = 13 is the number of optimizers. For e, df is the DoF within the distribution groups, defined as df = N − K, with N = 650 the number of observations. The total DoF is df = N − 1, which equals (N − K) + (K − 1). The next attribute, MS, is the mean square for each source, calculated as SS/df. The F-statistic ζ is reported for each Sr as a ratio of MS values. The last column is the p-value: the probability that χ² takes a value larger than the computed test statistic. ANOVA1 derives this probability from the CDF of the F-distribution [57]. In the ANOVA table, χ² and the p-value are the important quantities; the other parameters described above support their calculation.
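To make the bookkeeping concrete, the DoF identity df_total = df_between + df_within can be checked directly for K = 13 and N = 650:

```python
# Degrees of freedom for the ANOVA table described above.
K, N = 13, 650            # number of optimizers (groups), total observations
df_between = K - 1        # Cl: between-group DoF
df_within = N - K         # e: within-group (error) DoF
df_total = N - 1          # T: total DoF
assert df_total == df_between + df_within
print(df_between, df_within, df_total)   # 12 637 649
```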
Moreover, to show the significant difference between the distributions of solutions achieved by each optimizer, notched box plots are shown in Fig. 12. Each notched box is associated with an optimizer and has two sections divided by a center line, which is the median. The bottom and top edges of each notched box indicate the q1 = 25th and q3 = 75th percentiles, respectively. Outliers O in a distribution are plotted individually using the '+' symbol. In Fig. 12, we can observe that the METO results are free of any outlier, and there is a significant difference between METO and the other optimizers: the notches of the METO box plot do not overlap the others, which shows that the true medians differ at the 95% confidence level. Beyond the whisker length, the i-th solution in the distribution is displayed as an outlier O_i if O_i > q3 + w(q3 − q1) or O_i < q1 − w(q3 − q1), where w is the maximum whisker length. The horizontal-axis numbers in each sub-plot represent the optimizer number; 1 to 13 are, respectively, METO, BHGA, BBO, IWO, DE, CMAES, SFLA, FA, TLBO, CUCKOO, BA, GSA, and SLPSO.
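The whisker rule above can be sketched directly; the linear-interpolation quartile estimate is an assumption of this sketch (plotting libraries use similar but not identical defaults), and w = 1.5 is the conventional whisker length:

```python
def boxplot_outliers(xs, w=1.5):
    """Flag x as an outlier if x > q3 + w*(q3 - q1) or
    x < q1 - w*(q3 - q1), as in the box-plot convention above."""
    s = sorted(xs)
    def quantile(q):
        pos = q * (len(s) - 1)                 # linear interpolation
        lo, hi = int(pos), min(int(pos) + 1, len(s) - 1)
        return s[lo] + (pos - lo) * (s[hi] - s[lo])
    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return [x for x in xs if x > q3 + w * iqr or x < q1 - w * iqr]
```

An empty result, as claimed for METO in Fig. 12, means every final value lies within the whiskers.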

Statistical ranking
Due to page limitations, only the significant results are presented in this manuscript. It is worth discussing the features and limitations of METO based on the experimental results and the associated Kruskal-Wallis statistical test. The rank-sum scores of the optimizers for all functions are given in Table 11. From these scores we can determine for how many functions METO secures the best rank, and accordingly rank all optimizers from best to worst. In Table 12, we can see that on 24 functions METO secured the first position (P1), on 7 functions it is in the second position (P2), and so on. The rank counts are based on Table-11.
To obtain a numerical value, we adopt a scoring scheme based on average performance. We assign 13 points to the best optimizer and reduce the score by 1 for each successively worse performer, since we have K = 13 optimizers in this manuscript. The score is averaged over all benchmark functions, and the optimizer with the highest score is ranked best. In this paper, we show the results only for the functions presented in Tables 3 and 4. The aggregated score ω = ∑_{i=1}^{K} P_i × (14 − i) shows the overall performance of each optimizer; for example, METO has a score of 502 and is in 1st position, and similarly BHGA is in 2nd position with a score of 419, and so on. Finally, the average score is calculated as Average-ω = ω/K. The optimizer with the highest average is considered the winner.
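The scoring rule above can be sketched as follows (the function name and the position-count list are illustrative):

```python
def aggregate_score(position_counts, K=13):
    """Sketch of the scoring rule above: finishing at position i (1 = best)
    on a function earns K + 1 - i points (13 down to 1 for K = 13), and
    position_counts[i-1] is how often an optimizer finished at position i,
    so omega = sum_i P_i * (14 - i) when K = 13."""
    return sum(p * (K + 1 - i)
               for i, p in enumerate(position_counts, start=1))
```

Average-ω is then simply aggregate_score(position_counts) / K.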
From Table-12, we can see that on average METO is flagged as one of the best optimizers by achieving a high average score. The ranking of the optimizers for 30-variable problems, from high to low score, is: METO, BHGA, BBO, TLBO, FA, SLPSO, CMAES, CUCKOO, SFLA, GSA, BA, DE, and IWO.

Observation and discussion on 100 variables problems
We were motivated to test the performance of METO on problems with more variables to check its persistent quality. We solved the 100-variable benchmark problems and statistically tested them using the Kruskal-Wallis test. Tables 13 and 14 show the results on benchmark functions selected from Tables 3 and 4 on which METO's performance is significant. Based on the output distributions of several independent runs of all algorithms, the results of the Kruskal-Wallis test are presented in Table-15, which gives the chi-square value χ² and the calculated p-value; a p-value below the significance level shows that the solution distribution of METO is significantly different from those of the other optimizers. To see the performance of METO on a multi-objective problem, we include the 100-variable Kursawe function (F44) with a linear sum of all objectives. On this problem, METO gives the best performance in all metrics, highlighted in Table-14. Moreover, METO's performance is significant on two more functions: the noisy Langermann function, F45 = noisy(F6), and the Xin-She Yang function number 7. It is observed from Tables 13 and 14 that METO may be an alternative optimizer for the considered functions, with good average performance. On functions F3, F6, F9, F10, and F28, METO is the best in all aspects, achieving higher accuracy, the best value, and the minimum worst value with low uncertainty compared to the other algorithms. METO also shows reasonable and comparable consistency.
We have also tested against many variants of GA, as given in Ref. [40], and deduce from the simulation results that METO is a good replacement for GA on highly complex problems. It works best on problems with a single global solution and multiple local solutions. A significant difference can also be observed in Table-17. The GSA algorithm wins on twelve functions; however, METO secures second place on sixteen functions. Across all algorithms, we can observe that on average METO is the best. CMAES gives the best solution on six functions but on no others, and it has the worst performance on eleven functions, placing it in 13th position. All good results are highlighted in bold in the tables. On most of the functions METO secures second position, where METO is not the very best but is better than the rest; this motivates us to improve it by tuning its parameters, which we leave open for future research. The consistency C of METO is comparatively significant.
We can observe that even where METO is not the best on a function, its average performance is competitive, validating METO's ability to serve as a substitute optimizer for most functions. The significant difference between the distribution of METO and the other optimizers can be observed in the box plots of Fig. 15, where the first box plot belongs to METO.

Limitation of METO
As the "No free lunch" theorem [9] suggests, elevated performance of an optimizer on one class of problems costs performance on another type, and METO is no exception. From our observations, we can identify the limitations of METO: it underperforms on plate-shaped and bowl-shaped functions in higher dimensions. We observed that plate-shaped functions such as the Zakharov and Perm0db functions cannot be solved well by METO. The performance of METO is also not very efficient for the class of functions where more than one global point exists, such as the Griewank and Weierstrass functions. From experimentation on a vast set of problems, we observed that the best performance from METO cannot be expected for all classes of problems. Following the results, we found that for a few functions, such as Ackley, Rosenbrock, Leon, Dixon-Price, CF-3, and HCF-1, the optimizers CMAES, GSA, SLPSO, BBO, and TLBO give better solutions than METO.
For these functions, a near-optimal solution can still be found by tuning the parameters of METO. From the results, one can conclude that METO surpasses the other optimizers on multi-modal problems with a single global solution, as well as on non-linear, steep/ridge, flat-surface/step integer, and discrete problems. However, for bowl-shaped functions such as the sphere function, it gives comparable results but requires a larger number of function evaluations.

Parameter optimization
The parameters of METO are problem-specific. Using sensitivity analysis with multiple Monte Carlo runs on a particular problem, we can define a range for each parameter yielding better performance. The proposed algorithm is sensitive to the following parameters, whose tuning can improve solution accuracy.
1 - Number of bits N_b in the DNA representing a variable. As discussed in Section 3.1, the accuracy of the solution can be increased by a higher N_b, but this increases the size of the chromosome and thus the computational burden for larger numbers of variables. It can be increased according to the required computational accuracy.
2 - Lower and upper limits of the mutation probability τ, defined by Equation 30. In the initial iterations it is high, to explore the maximum search space, and it is reduced exponentially to a very low value to fine-tune the solution in later evolution epochs.
3 - Flipping probability δ, an important factor defined in Equation 14. Since it flips the SS and produces a completely different AS of the DNA, it needs to be defined carefully. A high value avoids local extremes in the initial phase of evolution but hurts the later stage, when fine-tuning of the solution is required; thus, in the later evolution iterations it should be low enough. However, for a large number of variables, instead of Equation 14, the flipping probability can be generated randomly in the range [0, 0.3] for each selected DNA of the chromosome, from the first iteration to the last, which may provide an efficient search. Its optimal value needs rigorous experimentation, which is within our future research scope.
4 - Cross-breeding, self-breeding, and Epimutation rates are again important factors. Appropriate selection of them reduces the number of function evaluations; their values should be neither too small nor too large. We recommend the values given in Algorithm-5 by ∆_F1 and ∆_F2; the Epimutation rate should be the same as ∆_F2.
5 - A smaller population size may give quicker convergence, but risks trapping at a local extreme; vice versa, a large population size can slow the process down. Similarly, a low number of bits N_b tends towards lower accuracy but provides faster computation. Thus, an appropriate selection can increase the computational power of the algorithm. An optimized trade-off between the number of species and species size may lead to better results, which is problem-specific: instead of increasing the population size, increasing the number of species gives considerably better results for large problem spaces.
6 - As per the discussion given earlier, it is expected that the best genes should dominate in the offspring. Simulation results and the effect of different values of ρ are shown in Fig. 14. Based on the experiments, we propose varying it within the range 0.9 to 0.97.
7 - The Epimutation factor ξ is also problem-dependent, and it increases the number of function evaluations. A large ξ may put an extra burden on computation time but can improve convergence, as shown in Fig. 14; thus it needs to be defined carefully. For large-variable problem spaces, it can be increased to five.

Conclusion
The proposed framework is inspired by Mendelian genetic experiments on multiple species of plants, where the recessive genes are transmitted to the next generation with some probability. Based on this evolutionary theory, five operators, namely the Flipper, the Pollination, the Breeding, the Discrimination, and the Epimutation, are introduced and sequentially employed.
The algorithm imitates a two-strand DNA structure rather than a single-chromosome one, formed by fertilization of the sense strand of one plant DNA with the anti-sense strand of another plant DNA. By assigning a recessive DNA corresponding to each plant as the pseudo-global point, the algorithm does not deviate from the optimum. The member with the best survival value in a plant is assigned as the recessive chromosome and focuses the exploration on neighboring points based on the Mendelian gene-transfer probability. To benchmark the proposed optimizer, we have compared it with twelve well-known optimizers of different natures on a diverse set of test problems. Simulation results and statistical analysis on thirty- and hundred-variable test problems rank METO higher than the other optimizers. From the presented results, we can observe the better consistency of METO on multi-modal, steep-ridge, noisy, and deceptive functions, with a clear-cut distinction between the algorithms on various types of test functions. The limitation of METO on plate-shaped and bowl-shaped problems is nevertheless observable, where optimizers such as BBO and TLBO perform better. In conclusion, METO is a nature-inspired genetic evolutionary algorithm that utilizes the standard genotype and phenotype structure of a double-strand DNA; thus, it can be used on a broad range of problems without specific guidelines. Future work is expected on parametric control, complexity analysis, convergence analysis, reducing the number of function evaluations, testing the algorithm on real-life problems, hardware implementation, and extension to the multi-objective optimization case.

Fig. 2 :
Fig. 2: Denaturing and annealing of breeder DNAs to produce the offspring

Fig. 3 :
Fig. 3: Genotype and phenotype representation of a point locus using the mapping rule x_d = x_d^L + ((x_d^U − x_d^L)/(2^{l_d} − 1)) · x̃_d, where x̃_d is the decoded value of the binary string, calculated as x̃_d = ∑_{d = l_d^{start}}^{l_d^{end}} 2^d g[d], and l_d equals l_d^{end} − l_d^{start}. In the lined-up DNA strands (chromosome), the first and last bit/gene locations of each DNA strand are:
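A minimal sketch of the Fig. 3 decoding (the MSB-first bit order and function name are assumptions of this sketch):

```python
def decode(genes, x_low, x_high):
    """Sketch of the Fig. 3 mapping rule: a binary sub-segment `genes`
    (most significant bit first, length l) maps to the real value
    x = x_low + (x_high - x_low) / (2**l - 1) * decoded_integer."""
    l = len(genes)
    xt = sum(g << i for i, g in enumerate(reversed(genes)))  # decoded value
    return x_low + (x_high - x_low) / (2 ** l - 1) * xt
```

The all-zeros segment maps to the lower bound and the all-ones segment to the upper bound, so longer segments (larger N_b) simply refine the grid between them.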

Fig. 4 :
Fig. 4: Breeder Population of denatured SS and AS strands

Fig. 5 :
Fig. 5: Illustration of production of F1 and F2 generation offspring and new population

Fig. 10 :
Fig. 10: Block diagram of the processes in an evolution

Fig. 11 :
Fig. 11: Movement of points in an evolution

Fig. 12:
Fig. 12: Statistical Analysis of the Results based on KWT for 30 variables test functions.

Fig. 13:
Fig. 13: Statistical Analysis of the Results based on KWT for 100 variables test functions.

Table 1 :
Table of Acronyms, Variables, and Notations. Algorithm 1: Producing AS using the Flipper operator. Require: r_i, the SS offspring sub-population of the i-th species; N_v and N_b, respectively the number of variables and the associated number of bits forming a DNA; N, the number of individuals in the i-th species; δ_Nb and δ_Nv, the bit-flipping rate and the DNA selection rate for flipping, respectively.
Require: F_1, the F1-generation offspring; H, the heredity; f_H, the fitness of H; f_F1, the fitness of F_1; ρ_L and ρ_U, the lower and upper limits of the Mendelian probability ρ, respectively. ... 6: H ← all chromosomes from H for which I_H = 1

Table 3 :
Test Benchmark Functions

Table 5 :
Results of 30 variables test functions

Table 6 :
Results of 30 variables test functions

Table 7 :
Results of 30 variables test functions

Table 8 :
Results of 30 variables test functions.
The associated results for the thirty-variable test problems are shown in Tables 5 to 8 and explained below. OB in the tables, associated with the CMAES algorithm, represents an out-of-bound solution provided by that algorithm.
F1 The F1 function is best solved by METO, with B = −26.037, µ = −23.6, and median ψ = −23.5. The approximately equal µ and ψ indicate a symmetrical distribution with spread σ = 1.2. METO achieves a high consistency of C = 46%, and its worst value W is the best among the optimizers.
F2 On this function, METO outperforms the other optimizers, ending with B = 1, µ = 6.1, ψ = 5.7, and σ = 2.2. The distribution of final values over the 100 individual runs is left-skewed (ψ < µ), which helps achieve a high consistency of C = 54%. The worst performance W of METO is also better than that of the other optimizers.
F3 On Schwefel function No. 2.26, the distribution obtained by METO is left-skewed (ψ = 1.6 < µ = 24.8), so the achieved consistency is the highest, at C = 60%. The best solution B = 1 is achieved by METO with minimum uncertainty σ = 37.2, and its worst solution is better than those of the other optimizers.
F4 On this function, METO also outperforms, with C = 52%, ψ = µ = 1.5E−02, and B = 5.6E−12 with very low uncertainty σ = 9.5E−03.
F5 On the Langermann function, METO gives the solution with uncertainty σ = 0.4, lower than the others, and µ = −28.4.
F6 The Lunacek bi-Rastrigin function is solved best by METO, with the highest consistency (above 52%) and approximately equal ψ = −230.3 and µ = −229.8. The worst performance of METO is better than the others' too.
F7 On this function, METO is again best, with µ = 33.6 and ψ = 40.3; the higher median than mean indicates a right-skewed distribution. Due to this, the consistency is lower, but still higher than that of the other optimizers, as can be seen in Table-5. The worst performance is also better than the others'.
F8 On this function, METO is the winning optimizer, with comparatively low uncertainty σ = 1.1 and ψ = −444.3, µ = −443.8; a lower median is preferable as it indicates a better optimizer. The worst result by METO is better than the others'.
F9 METO is the best optimizer for this function, with a normal distribution of the results (ψ = µ), as can be seen in Table-5. Moreover, METO has the least uncertainty and the highest consistency below the value −385.2. Its worst performance is also better than the others'.
F10 The Bird function is extended to multiple variables, on which METO's performance is comparable to the other optimizers; CMAES gives the best solution on this function.
F11 On the Periodic function, the four optimizers METO, BBO, GSA, and SLPSO show equivalent performance, as can be seen in Table 5.
F12 The Gramacy & Lee function is extended to multiple variables, as shown in Table 3. The results of all optimizers are given in Table 6, where we can observe that ψ = −85.5 < µ = −84.8 with a better consistency of C = 56%. METO has the best performance, and its worst performance is also comparatively better than the others'.
F13 As is usual for METO on multimodal functions, on the Schaffer-6 (F16) function it has the best response, with minimum µ = ψ = −12.5 and the best worst-case performance. METO achieves B = −13.4 with a low distribution spread and the highest C = 50%.
F14 METO is the winner on this function in all aspects shown in Table-6. For this function, ψ is slightly lower than µ, which is a good sign for achieving high consistency below the threshold value of 12.3.
F15 As shown in Table-6, with equal ψ = −4 and µ = −4, METO has the highest consistency and the best solution B = −4.8 compared to the others.
On the next function, ψ = −1.95E+4 is greater than µ = −2E+4.
F18 On the Deceptive function, only DE competes with METO, where both algorithms are very consistent.

Table 11 :
Kruskal Wallis Rank sum score

Table 12 :
Rank of all optimizers based on Kruskal-Wallis rank-sum scores: 30 variables. All algorithms are run a hundred times with the same parameters as above. Based on the output distributions of all algorithms, a comparison table is formed from their mean µ, standard deviation σ, best value achieved B, worst performance W, and consistency C.
Table-16 shows the Kruskal-Wallis rank-sum scores for the selected functions, and Table-17 summarizes them. The rows of Table-17 represent the algorithms, and each column is a position P_K, where K = 1, 2, ..., 13. We can observe that METO is in position P1 11 times and in position P2 11 times, showing that METO's performance is good compared to the other optimizers. Overall, METO has ω = 275, whose average is Average-ω = 21.2, the maximum of all; thus METO is placed in 1st position. Similarly, BBO is in 2nd position with ω = 208, and so on. From Table-17, we can observe that METO is flagged as one of the best optimizers, with 11 best and 11 second-best performances. The ranking of the optimizers from high to low Average-ω score is: METO, BBO, IWO, FA, CUCKOO, TLBO, SLPSO, GSA, SFLA, BA, CMAES, BHGA, and DE. We can observe that for higher-variable problems BHGA's performance degrades.

Table 13 :
Statistical results on 100 variables problems

Table 14 :
Statistical results on 100 variables problems

Table 15 :
Kruskal Wallis Test Table

Table 17 :
Rank of all optimizer based on Kruskal Wallis Rank Sum score: 100 variables