A Distributed Bi-behaviors Crow Search Algorithm for Dynamic Multi-Objective Optimization and Many-Objective Optimization

Dynamic Multi-Objective Optimization Problems (DMOPs) and
Many-Objective Optimization Problems (MaOPs) are two classes of the optimization
field with potential applications in engineering. Hybrid approaches based on modified
Multi-Objective Evolutionary Algorithms (MOEAs) seem well suited
to deal effectively with such problems. However, the Crow Search Algorithm (CSA) has
not yet been considered for either DMOPs or MaOPs. This paper proposes a Distributed Bi-behaviors Crow Search Algorithm (DB-CSA) with two
different mechanisms, one corresponding to the search behavior and another to
the exploitative behavior, coupled through a dynamic switch mechanism. The bi-behaviors CSA
chasing profile is defined based on a large Gaussian-like Beta-1 function, which
ensures diversity enhancement, while a narrow Gaussian Beta-2 function is
used to improve solution tuning and convergence. The DB-CSA
approach is developed to solve several types of DMOPs and a set of MaOPs with
2, 3, 5, 7, 8, 10 and 15 objectives. The Inverted Generational Distance (IGD), the Mean
Inverted Generational Distance (MIGD) and the Hypervolume Difference (HVD) are the main
metrics used to compare the DB-CSA approach to
state-of-the-art MOEAs. All quantitative results are analyzed using the
nonparametric Wilcoxon signed rank test at the 0.05 significance level,
confirming the efficiency of the proposed method on all 44 tested DMOPs and
MaOPs.


Introduction
During the last decade, a wide range of metaheuristics have been designed to solve complex problems, based on Evolutionary Algorithms (EAs) such as the Genetic Algorithm (GA) [1] and on Swarm Intelligence (SI) such as the Particle Swarm Optimization (PSO) approach [2]-[5].
Different Multi-Objective Evolutionary Algorithms (MOEAs) have been employed to solve static single and multi-objective optimization problems, where the main challenge is to find the best global solutions through a compromise between convergence and diversity on the search space. However, this process becomes more challenging when solving Dynamic Multi-Objective Optimization Problems (DMOPs) characterized by several types of time-varying Pareto Optimal Set (POS) and Pareto Optimal Front (POF) [6].
Generally speaking, MOEAs are designed to track and react effectively to changes that may affect the POS and the POF while preserving both convergence and diversity [7], [8]. Evolutionary Dynamic Optimization (EDO) approaches should therefore include explicit or implicit mechanisms to detect changes and react to them correctly. A change detection mechanism can rely on detectors drawn from the feasible search population, such as the current best solutions, a memory of optimal solutions or a predefined subpopulation. It can also operate separately from the search space using a set of randomly selected solutions, a fixed point, a regular grid of solutions or a set of predetermined points. In addition, observing the algorithm's behavior has been considered a robust detection strategy, based on the average of the best-found solutions, the time-varying observation of different sub-swarms, the diversity of solutions relative to the success rate, time-varying distributions and statistical methods.
Five groups of EDO methods are available in the literature to solve DMOPs: diversity-based techniques, memory-based approaches, prediction methods, parallel systems and transfer learning-based algorithms. Increasing the mutation rate (hyper-mutation), adding randomly generated new members and relocating some useful solutions are the main mechanisms used to manage diversity in dynamic optimization, although such techniques may miss undetected regions of interest. Diversity-based approaches [1] have shown their ability to solve dynamic problems with continuous and small time-varying parameters, but show their limits on problems with severe environmental changes. Furthermore, many DMOPs present periodical or recurrent changes, making the storage of historical solutions useful for preserving diversity.
Memory-based approaches use a redundant representation of an evolutionary algorithm with extra memory components that help detect future changes [9]. This category of approaches is very effective for DMOPs with periodically time-varying properties. Such mechanisms, however, slow down convergence while strengthening diversity in EDO approaches, and their main disadvantage is the ineffectiveness of redundant solutions stored in the archive. Prediction-based methods, on the other hand, tend to predict changes from a limited set of patterns. Such systems can detect the global best solution quickly, but they fail when the changes are stochastic, which increases their relative training error rates. Parallel approaches distribute the optimization process over multiple sub-swarms that may handle the problem on separate parts of the search space; they are recommended for multi-modal problems but are computationally expensive, and a key challenge is finding the appropriate number of sub-swarms and their sizes. Last but not least, transfer learning-based methods [5], [10]-[12] have the advantage of re-using previous computational experience to improve the efficiency of the newly generated populations after each change detection, although the added transfer learning mechanisms make this a time-consuming process.
The efficiency of MOEAs decreases significantly when dealing with MaOPs, in which the number of objectives to satisfy is generally equal to or higher than 3. Three main issues arise when solving MaOPs: (i) the ineffectiveness of the dominance operator for a large number of objectives, (ii) the lack of convergence and diversity and (iii) the limited population size relative to an objective space whose dimension increases exponentially. Many Pareto-based approaches have shown their limits in dealing with the increasing number of non-dominated solutions under the dominance operator, causing poor convergence induced by the Active Diversity Promotion (ADP) phenomenon [13].
As a solution, a variety of enhancements have been made to the original MOEAs for solving MaOPs, including decomposition-based and indicator-based approaches. Decomposition mechanisms combine multiple objectives into a single one or into sub-problems. Popular techniques of this type are Pareto sampling [14], improved Pareto sampling (MSOPS-II) [15] and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [16].
The decomposition-based approach becomes more effective with a set of sub-MOPs, as presented in the reference vector-guided evolutionary algorithm (RVEA) [17], MOEA/D-M2M [18], NSGA-III [19], MOEA/DD [20] and MOEA/D-ROD [21]. In addition, a set of performance metrics is used to guide the optimization process in different indicator-based approaches, such as the fast hypervolume-based evolutionary algorithm (HypE) [22], the S-metric selection-based evolutionary multi-objective algorithm (SMS-EMOA) [23], the indicator-based evolutionary algorithm (IBEA) [24], the evolutionary many-objective optimization algorithm based on the IGD indicator with region decomposition [25] and the MaOEA/IGD [26].
A set of new techniques has been proposed to deal with the ineffectiveness of the dominance operator in Pareto-based methods, such as L-optimality [27], ε-dominance [28], fuzzy dominance [29], the Grid-based Evolutionary Algorithm (GrEA) [30], the θ-Dominance-based Evolutionary Algorithm (θ-DEA) [31] and preference order ranking [32]. Diversity management techniques have been proposed to strike a good balance between convergence and diversity when solving MaOPs. In [30], three grid-based criteria were proposed to maintain diversity: the grid crowding distance, the grid coordinate point distance and the grid ranking. A diversity promotion mechanism, DM, is introduced in [33] to activate or deactivate the diversity of the population based on the spread and the crowding distance of solutions.
In the NSGA-III algorithm [19], a reference point-based strategy is used to solve MaOPs. The shift-based density estimation (SDE) strategy [34] has been utilized to replace the dominance operators of MOEAs. Also, the knee point-driven evolutionary algorithm (KnEA) [35] has been developed using both knee point-based selection and dominance-based selection. Three groups of preference-based approaches, namely a priori, interactive and a posteriori algorithms, are employed to deal with the limitation of the population size with regard to the large dimension of the objective space. The best-known a posteriori approaches are the Preference-Inspired Coevolutionary Algorithm (PICEA-g) [36], the novel two-archive algorithm (TAA) [37] and its improved version (Two_Arch2) [38].
In addition, the Particle Swarm Optimization (PSO) algorithm has received great attention for MaOPs. The Control of Dominance Area of Solutions (CDAS) [39] has been used with SMPSO and SigmaMOPSO for MaOPs. Indicator-based PSO systems have been proposed to guide leader selection using the R2 indicator, as in H-MOPSO [40], or the hypervolume metric, as in S-MOPSO [41]. A two-stage strategy and a parallel cell coordinate system are adopted in MaOPSO/2s-pccs [42]. A preference-based PSO method focusing on solutions around the knee point, called knee-driven particle swarm optimization (KnPSO), is proposed in [43]. In [44], the MaPSO method selects leaders from a certain number of historical solutions using scalar projection. In addition, the HGLSS-MOPSO algorithm [45] adopts the Hybrid Global Leader Selection (HGLSS) with two global leader selection mechanisms, the first for exploration and the second for exploitation. A recently published paper [46] presents an adaptive localized decision variable analysis approach under the decomposition-based framework to solve large-scale multi-objective optimization problems and multi-tasking optimization problems among MaOPs. In conclusion, all the mentioned Many-Objective Evolutionary Algorithms (MaOEAs) are highly complex and time-consuming systems, especially when decomposition-based mechanisms and/or quality indicators are used to deal separately with convergence and diversity.
The Crow Search Algorithm (CSA) [47] is a meta-heuristic simulating the social organization of crow flocks, essentially their food-search procedure. Crows are characterized by their ability to memorize the food sources they found, but also sources that other members of the flock may hold or hide. The CSA algorithm was first proposed as a mono-objective optimization technique. The MOEA/D framework, for its part, divides the population into several sub-populations and solves many sub-problems separately and simultaneously, which makes the MOEA/D system slow and time-consuming.
Transfer-learning-based techniques are reliable alternatives for DMOPs with the MOEA/D as a baseline system. In 2020, the memory-driven manifold transfer learning-based evolutionary algorithm (MMTL-MOEA/D) [51] was proposed. This approach combines a memory mechanism that preserves the previous best solutions with a manifold transfer learning feature that estimates the best solutions, so that the best solutions are conserved and set as the initial population of the next generation.
In addition, a random reinitialization mechanism (RI-MOEA/D) [51] is applied to 10% of the selected population after each change to maintain diversity. A combination of the PPS [50] and the MOEA/D is considered in the PPS-MOEA/D algorithm to solve DMOPs.
Also, the support vector regression-based evolutionary algorithm (SVR-MOEA/D) proposed in [52] is designed to capture the nonlinear correlation between two historical optimization processes; the SVR is used to predict a new population after each change in the search space. A transfer learning-based dynamic multi-objective evolutionary algorithm (Tr-MOEA/D) is proposed in [53], aiming to solve the issue of non-independent and identically distributed data in a dynamic environment. The Tr-MOEA/D system implements a transfer learning mechanism to reuse the past historical population after each change, which speeds up the optimization process. In the KF-MOEA/D [54] system, a Kalman filter (KF) is used to predict the new population so as to improve convergence.

Many-Objective Optimization Methods
Generally speaking, many-objective algorithms are designed to optimally manage the couple of exploitation and exploration. A vector angle-based evolutionary algorithm (VaEA) [55] has been proposed for unconstrained MaOPs; it uses the maximum vector angle as a selection mechanism to guarantee a good distribution and approximation of the POF, while the worst solutions are replaced with newly generated ones. The θ-DEA [31] system is based on NSGA-III but with a new θ-nondominance concept, different from the original dominance operator used in Pareto-based methods; it employs a set of reference points to cluster the solution set in order to enhance the exploration phase. The NSGA-II/SDR, a modified version of NSGA-II with a Strengthened Dominance Relation (SDR), is presented in [56] for solving MaOPs; it adopts the angle and the niching mechanism to select the best-converged solutions. MOEA/DD, an MOEA based on dominance and decomposition [20], is a hybridization of MOEA/D [16] and NSGA-III [19], where the many objectives are decomposed into sub-problems and a dominance criterion is then used to aggregate the global solution. Different grid-based criteria, such as the grid crowding distance (GCD), the grid ranking (GR) and the grid coordinate point distance (GCPD), are integrated into MOEAs to evaluate the fitness function on MaOPs. In addition, the GrEA system [30] is designed to maintain a good balance between convergence and diversity, using both the grid dominance and the grid difference to evaluate the fitness function and push the system toward the best optimal solutions. Two variants of the Pareto-based evolutionary algorithm with a penalty mechanism (PMEA) are presented in [57]: the PMEA-MA and the PMEA*-MA. The PMEA-MA is developed using the Manhattan distance and the cosine distance as convergence and distribution metrics, and includes a population preprocessing step to enhance diversity.
The second variant, PMEA*-MA, is a simplified one that does not adopt the preprocessing step.
The AnD algorithm [58] is a non-Pareto-based method that maintains the diversity of the population using an angle-based selection technique, then removes, among members sharing the same search direction as an already-selected solution, the less promising ones. A hybridization of the Strength Pareto Evolutionary Algorithm (SPEA) with the shift-based density estimation (SDE) strategy, denoted SPEA/SDE [34], estimates the density of the population and then eliminates non-converging individuals, enhancing diversity among the divergent solutions only. In [59], SPEA/R leverages a reference direction-based density estimator within the standard SPEA algorithm for multi/many-objective optimization problems. The knee point-driven evolutionary algorithm (KnEA), proposed in [35], evolves a population and then selects non-dominated solutions based on a knee point criterion, which can be regarded as a Pareto strategy.
Furthermore, the two-stage evolutionary algorithm (TSEA) is developed in [60]: in the first stage, several sub-populations are optimized to converge to different regions of the Pareto front; the nondominated solutions of each sub-population are then taken as the individuals to optimize in the second stage. In indicator-based methods, several quality metrics are used to drive the optimization process; for example, a Monte Carlo simulation is used in the HypE algorithm [22] to reduce the computation cost by approximating the results. Preference-based approaches use different adaptation mechanisms to steer the decision toward the true Pareto front. In [36], the PICEA-g algorithm integrates coevolution as a posteriori adaptation mechanism with a set of candidate solutions to help decision making and approximate the entire POF. Two archives are used in the Two_Arch2 [38] system, the first for convergence (CA) and the second for maintaining diversity (DA); a crossover operator between CA and DA acts as the selection mechanism, and a mutation operator is applied in the CA memory.

Existing Crow Search-based Methods
The Crow Search Algorithm (CSA) [47] was first proposed in 2016 to solve constrained engineering optimization problems. Furthermore, two binary versions of the CSA algorithm are proposed in [64] and [65]. The first, BCSA [64], uses a V-shaped transfer function to obtain a binary representation of continuous data, with application to feature selection. The second [65] applies a sigmoid transformation and was used to solve the 2D bin packing problem. Several modified versions of CSA manage diversity based on the Gaussian distribution and diversity information of the population, as in [66] for electromagnetic optimization and in the usability-factors hierarchical model for feature extraction and prediction [67]; a priority-based technique is used in [68] to determine a sufficient flight length for each crow to update its position based on other crows for the economic load dispatch problem; and modifications of the CSA parameters, such as the awareness probability and the random perturbation of each crow, are proposed in [69].
A set of mechanisms has been used to improve the CSA algorithm, including a search-bound management strategy [70], adding an archive component [71] and restructuring the awareness probability [72] to enhance the random perturbation and the dynamic probability of the CSA system. Several operators have been added to achieve a good balance between convergence and diversity, such as the Roulette wheel selection tool, the inertia weight, the Lévy flight and adaptive adjustment factors. In addition, crossover and mutation operators were proposed in [73] to hybridize CSA intrinsically, with application to a hybrid renewable energy PV/wind/battery system. Many hybridization methods combine the CSA algorithm with the Grey Wolf Optimizer (GWO) [74], the Cat Swarm Optimization (CSO), the Crow PSO [75] and the Crow Search Mating-based Lion Algorithm [76].

The proposed Distributed Bi-behaviors Crow Search Algorithm
MOEAs designed to solve DMOPs should be able to detect changes in the problem patterns and to respond accordingly. However, many modified evolutionary

The Standard Crow Search Algorithm
The Crow Search Algorithm (CSA) was proposed by Askarzadeh in 2016 [47] as a metaheuristic for solving constrained engineering optimization problems. Crows are known to be social birds with the ability to memorize and use food source positions when needed; those sources may be the result of a personal search or of the crow group's social activity. The CSA algorithm mimics the crow flock's search mechanisms and uses them for optimization purposes.
The search process is detailed in Figure 1: each crow memorizes the position of its hiding place and should stay alert in case other crows discover it. Assume that the i-th crow decides to visit a position previously memorized by crow j, m(j, iter), at iteration iter, and that crow i is thus following its congener j. Two opposite behaviors may then occur, each represented by a state:
- The first state is when crow j ignores being followed, so crow i simply continues searching toward what crow j previously found, m(j, iter).
- The second state is when crow j is aware of being followed; in this case, crow j will simply hide its food source and crow i undergoes a fully random search.
These two position updates are detailed in equation (1). In the CSA algorithm, the balance between exploration and exploitation during the optimization process is controlled by the flight length (Fl) of the i-th crow during the position update. The memory m(i, iter + 1) of each crow i is then updated using equation (2). The whole optimization process is executed until a predefined maximum number of iterations is reached.
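The update rules of equations (1) and (2) can be sketched as follows; this is a minimal mono-objective sketch assuming the standard CSA forms, with illustrative values for the bounds, the flight length Fl and the awareness probability AP:

```python
import numpy as np

def csa_step(X, M, fitness, fl=2.0, ap=0.1, lb=-5.0, ub=5.0, rng=None):
    """One iteration of the standard CSA for a flock of N crows.

    X : (N, D) current positions, M : (N, D) memorized positions.
    fl (flight length) balances exploration/exploitation and ap is the
    awareness probability; lb/ub are illustrative search bounds.
    """
    rng = rng if rng is not None else np.random.default_rng()
    N, D = X.shape
    X_new = X.copy()
    for i in range(N):
        j = rng.integers(N)               # crow i follows a random crow j
        if rng.random() >= ap:            # crow j unaware: move toward m(j)
            X_new[i] = X[i] + rng.random() * fl * (M[j] - X[i])   # Eq. (1)
        else:                             # crow j aware: fully random search
            X_new[i] = lb + rng.random(D) * (ub - lb)
        if fitness(X_new[i]) < fitness(M[i]):                     # Eq. (2)
            M[i] = X_new[i].copy()        # keep the better position in memory
    return X_new, M
```

Because the memory update only ever replaces a memorized position with a better one, the best memorized fitness never worsens across iterations.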

A General Presentation of the new DB-CSA Approach
The Distributed Bi-behaviors Crow Search Algorithm (DB-CSA) is based on a couple of Beta distribution profiles for exploration and exploitation enhancement, as presented in the flowchart in Figure 2 and detailed in the pseudo-code in Figure 3. The new DB-CSA system follows the same optimization process as the standard CSA algorithm [47]; the main difference lies in how convergence and diversity are treated when updating the position of each crow i, where each crow represents a potential solution in the search space. The key processing steps of the proposed approach, see Figure 2, are detailed as follows. In the standard CSA algorithm, the crow position is updated according to Equation (1), where the convergence and diversity stages are treated separately, causing premature convergence. The new DB-CSA system addresses this issue by using bi-behaviors Beta distribution profiles to ensure a dynamic and good balance between both stages. The two Beta distribution profiles, denoted Beta-1 and Beta-2 and used respectively for exploration and exploitation, are presented in equation (6); they modify the original update of equation (1), executed at each iteration for each crow i. The two profiles are based on the Beta function proposed by Alimi [77] and presented in equations (3), (4) and (5). The main advantage of using Beta functions here is their capacity to produce several forms and configurations of distributions, including the normal Gaussian one. The one-dimensional Beta function is defined in equation (3).
Here p, q, x0 and x1 are real-valued parameters with x0 < x1, as detailed in equation (4). The multi-dimensional version, given in definition (5), is the product of one-dimensional Beta functions of the form (3).
The dynamic switch between the bi-behaviors Beta-1 and Beta-2 profiles is driven by a comparison between the fitness f(X_i) of each crow and the average fitness of the flock. If the fitness f(X_i), computed as the sum of the objective values of crow i, is greater than the mean value, the crow enters an exploration stage and the Beta-1 behavior in Equation (6) is used to update its position. Otherwise, the Beta-2 behavior in Equation (6) is considered, pushing the solution into the exploitation stage.
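The switch can be sketched as follows; this is a minimal interpretation assuming each crow draws a step factor from one of two Beta distributions, with illustrative (p, q) pairs rather than the paper's tuned settings:

```python
import numpy as np

def select_step_factors(fitness_vals, rng=None):
    """Sketch of the DB-CSA dynamic switch: crows whose summed fitness
    exceeds the flock average draw their step factor from a wide Beta-1
    distribution (exploration), the rest from a narrow Beta-2
    distribution (exploitation). The (p, q) pairs are illustrative
    assumptions, not the paper's exact settings."""
    rng = rng if rng is not None else np.random.default_rng()
    f = np.asarray(fitness_vals, dtype=float)
    explore = f > f.mean()                     # Beta-1 branch of Eq. (6)
    steps = np.empty(f.size)
    steps[explore] = rng.beta(2.0, 2.0, size=int(explore.sum()))
    steps[~explore] = rng.beta(50.0, 50.0, size=int((~explore).sum()))
    return steps
```

Crows with worse-than-average fitness thus take broadly scattered steps, while the better half takes tightly concentrated steps around the distribution center.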
As illustrated in Figure 2, the two Beta distribution profiles are detailed as follows:
✓ The first, large Gaussian-like Beta-1 exploration profile is characterized by a large standard deviation, pushing the population toward good diversity in the search space, with the p and q variables of the Beta function in equation (3) set to 50.
✓ The second, narrow Gaussian Beta-2 exploitation profile adopts a limited standard deviation through a different setting of p and q in equation (3). Beta-2 is a beta random distribution over [0, 1] assimilated to a fine search step around the optimal solution, while Beta-1 acts as a random exploration mechanism performed away from the previously memorized optimum m(i, iter). Both Beta-1 and Beta-2 values are obtained from equation (3) with different configurations of the two parameters p and q.
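The contrast between a wide and a narrow profile can be illustrated with standard Beta sampling over [0, 1]. Note that for a standard Beta distribution, larger symmetric (p, q) values narrow the bell; the paper's profiles are built on Alimi's Beta function of equations (3)-(5), which may parameterize the width differently, so the values below are demonstration assumptions only:

```python
import numpy as np

# Larger symmetric (p, q) values concentrate Beta samples around 0.5,
# smaller ones spread them out over [0, 1]. Illustrative values only.
rng = np.random.default_rng(42)
wide = rng.beta(2.0, 2.0, size=100_000)      # wide, Gaussian-like bell
narrow = rng.beta(50.0, 50.0, size=100_000)  # tight bell around 0.5

print(round(float(wide.std()), 3), round(float(narrow.std()), 3))
```

The sampled standard deviation of the wide profile is several times that of the narrow one, which is the property the bi-behaviors mechanism exploits.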
The mutation operators of [78] are added to maintain more diversity in the flock of N crows.
The nonuniform and boundary mutation operators in equations (7) and (8) are applied to modify the variables X = (x1, x2, ..., xD) of each crow with a mutation probability equal to 1/D, where D is the dimension of the search space and X ∈ [Lb, Ub], Lb and Ub being the lower and upper bounds respectively. The nonuniform mutation in equation (7) is applied when the crow index i modulo three is equal to zero; if the remainder is equal to one, the boundary mutation in equation (8) is used. Otherwise, the variables are left unmutated.
Here r1 and r2 are random values between 0 and 1. The tail of the pseudo-code in Figure 3 reads:
6.5. Update the crow position using Equation (6) with the Beta-1 exploration profile
6.6. Else:
6.7. Update the crow position using Equation (6) with the Beta-2 exploitation profile
6.8. End If
6.9. Update the memory using Equation (2)
7. End For
8. Apply the mutation operators using equations (7) and (8)
9. Update the archive of non-dominated solutions
10. End While
11. Return the archive of non-dominated solutions
An advantage of the proposed DB-CSA algorithm is its simplicity in terms of complexity, which is equal to O(N × log(N)). The dynamic Beta distribution profiles are the main feature of the DB-CSA algorithm, providing high flexibility to produce several forms and configurations of distributions. Using both the large Beta-1 and the narrow Beta-2 functions gives the standard CSA a new mechanism ensuring a good distribution of the population toward the best approximated results.
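Assuming equations (7) and (8) take the classical nonuniform and boundary mutation forms, a minimal sketch is (the shape parameter b is an illustrative assumption):

```python
import random

def nonuniform_mutation(x, lb, ub, t, t_max, b=5.0):
    """Classical nonuniform mutation (assumed form of Eq. (7)): the
    perturbation magnitude shrinks as iteration t approaches t_max;
    b is an illustrative shape parameter."""
    r = random.random()
    delta = lambda y: y * (1.0 - r ** ((1.0 - t / t_max) ** b))
    if random.random() < 0.5:
        return x + delta(ub - x)     # push toward the upper bound
    return x - delta(x - lb)         # push toward the lower bound

def boundary_mutation(x, lb, ub):
    """Boundary mutation (assumed form of Eq. (8)): the variable is
    reset to one of the two search-space bounds."""
    return lb if random.random() < 0.5 else ub
```

The nonuniform operator explores widely in early iterations and degenerates into fine local tuning as t approaches t_max, while the boundary operator re-injects extreme values to preserve diversity.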

Experimental Study
The experimental study presented in this section is conducted on a personal computer with 8 GB of RAM and an Intel i7 processor. A Java implementation of the proposed method is built on the jMetal framework [79]. Results are presented in two comparative studies, as detailed in Table 5:
- The first compares the proposed DB-CSA to a set of MOEAs designed for Dynamic Multi-Objective Optimization Problems (DMOPs).
- The second addresses Many-Objective Optimization Problems (MaOPs).
The algorithm configuration and parameters are listed in Table 4.

Quality Indicators
The performance of all tested systems is measured using the minimum values of three quality indicators (QIs): the Inverted Generational Distance (IGD), the Mean Inverted Generational Distance (MIGD) and the Hypervolume Difference (HVD), presented in equations (9), (10) and (11) respectively. All these metrics measure both the convergence and the diversity of the tested MOEAs.
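As a reference, the IGD of equation (9) and its dynamic mean MIGD of equation (10) can be sketched with their standard definitions (lower is better):

```python
import numpy as np

def igd(ref_front, approx_front):
    """Inverted Generational Distance, Eq. (9): mean Euclidean distance
    from each reference (true POF) point to its nearest approximated
    solution. Captures both convergence and diversity; lower is better."""
    ref = np.asarray(ref_front, dtype=float)
    approx = np.asarray(approx_front, dtype=float)
    # pairwise distance matrix of shape |ref| x |approx|
    d = np.linalg.norm(ref[:, None, :] - approx[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

def migd(ref_fronts, approx_fronts):
    """Mean IGD over the time steps of a dynamic run, Eq. (10)."""
    return float(np.mean([igd(r, a) for r, a in zip(ref_fronts, approx_fronts)]))
```

Because IGD averages over the reference front, an approximation that converges well but covers only part of the POF is still penalized, which is why it is used here as a combined convergence/diversity measure.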

Tested Benchmarks
Forty-four benchmarks are used to evaluate the relative performance of the proposed method in the two scenarios. The twenty-one DMOP test beds are: five FDA [6], three dMOP [49], seven UDF [80] and six F (ZJZ) [81] functions. The twenty-three MaOPs comprise seven MaF test-suite functions (MaF1-7), seven DTLZ functions (DTLZ1-7) and nine WFG problems (WFG1-9). The test configurations are detailed in Table 4 according to the number of variables (D) and objectives (M).
For dynamic multi-objective optimization, Farina et al. [6] classified DMOPs into three categories according to the time-varying behavior of the POF and the POS. In type I, the POS changes while the POF remains the same; in type II, both POS and POF change; in type III, the POF is time-varying while the POS is unchanged. The main properties of all tested problems, covering the variation of both POS and POF, are reported in Table 3.

A-Comparative study (1) for DMOPs:
The first comparative test is done for DMOPs using the FDA, dMOP, UDF and F (ZJZ) benchmarks with 2 and 3 objectives. Five standard MOEAs [9] and six transfer learning-based methods [51] are compared to the proposed DB-CSA system. All compared algorithms use the same parameter settings as in the original publications [9] and [51].
All DMOPs are characterized by a dynamic POS and/or POF, driven by a time-varying parameter t that changes at each time instance as in equation (12), where n_t, τ and τ_t are the severity of change, the iteration counter and the frequency of change respectively. Three categories of environmental change are considered in this study, differentiated by the value of n_t, fixed to 10, and by the frequency τ_t, which is equal to 5, 10 and 20 for severe, moderate and slight environmental changes respectively.
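Assuming equation (12) takes the usual DMOP form t = (1/n_t)·floor(τ/τ_t), a minimal sketch is:

```python
import math

def time_param(tau, n_t=10, tau_t=10):
    """Time-varying DMOP parameter (assumed standard form of Eq. (12)):
    t = (1/n_t) * floor(tau / tau_t), where n_t is the severity of
    change, tau the iteration counter and tau_t the change frequency."""
    return (1.0 / n_t) * math.floor(tau / tau_t)
```

With n_t fixed to 10, a smaller τ_t (e.g. 5) makes t jump more often, i.e. a more severe environmental change, consistent with the 5/10/20 settings above.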
As summarized in Table 4, the swarm and archive sizes are set to 100, as fixed in [9] and in Table 3.

B-Comparative study (2) for MaOPs:
The second experimental test concerns many-objective optimization and refers to the contributions [57] and [58], comparing the proposed DB-CSA approach to seven and thirteen Many-Objective Evolutionary Algorithms (MaOEAs) respectively, with the settings mentioned in Table 4.

Results Analysis and Discussion
In this sub-section, a comparative result analysis is conducted for the experimental studies on DMOPs and MaOPs, using the nonparametric Wilcoxon signed rank test [82], while some qualitative results are shown as box plots of the one-way ANOVA test [83]. These statistical methods estimate the p-value used to determine whether there is a statistically significant difference between the compared methods: if the p-value is less than or equal to 0.05, the difference is considered statistically significant. All quantitative results are presented in the appendices, Tables 9 to 17.
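As an illustration of this protocol, the Wilcoxon signed-rank statistic W = min(W+, W−) for paired indicator values can be computed as follows; the IGD values below are made-up demonstration data, not results from the paper:

```python
import numpy as np

def wilcoxon_w(a, b):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired
    samples: zero differences are dropped and tied |d| values receive
    their average rank."""
    d = np.asarray(a, float) - np.asarray(b, float)
    d = d[d != 0]
    order = np.argsort(np.abs(d))
    ranks = np.empty(d.size)
    ranks[order] = np.arange(1, d.size + 1)
    for v in np.unique(np.abs(d)):       # average ranks over ties
        mask = np.abs(d) == v
        ranks[mask] = ranks[mask].mean()
    w_plus = ranks[d > 0].sum()
    w_minus = ranks[d < 0].sum()
    return min(w_plus, w_minus)

# Made-up paired IGD values of two algorithms over eight benchmarks.
igd_a = [0.012, 0.034, 0.021, 0.045, 0.019, 0.027, 0.031, 0.015]
igd_b = [0.018, 0.041, 0.029, 0.052, 0.025, 0.033, 0.038, 0.022]
print(wilcoxon_w(igd_a, igd_b))  # → 0.0
```

Here one algorithm wins every pairing, so W = 0; for n = 8 pairs this falls below the commonly tabulated two-sided 0.05 critical value (3), so such a difference would be declared significant.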

A-Analysis of the comparative study (1) for FDA and dMOP problems
For the comparative study (1), based on the MIGD results reported in Table 9, the efficiency of the new DB-CSA system is remarkable: it obtains the best mean and standard deviation values for all test suites under different environmental changes, compared to six transfer learning-based approaches.
The statistical results of the Wilcoxon signed rank test are given in Table 6, and the results for the FDA and dMOP test suites over the IGD and HVD metrics can be seen in Tables 10 and 11 respectively. Based on the IGD metric in Table 10, we can argue the superiority of the DB-CSA method compared to five standard MOEAs designed for dynamic multi-objective optimization. The Wilcoxon signed rank test results in Table 7 indicate that DB-CSA is the best method over IGD at the 0.05 significance level compared to the other MOEAs.
The same conclusion is confirmed by the box plots of the one-way ANOVA test in Figure 5. Based on Table 7 and comparing the negative and positive ranks, DB-CSA is also the best method over the HVD quality indicator, although this advantage is not statistically significant, the p-value being greater than 0.05. The one-way ANOVA results in Figure 6 show the competitive performance of DNSGA-II, dCOEA, PPS, MOEA/D and SGEA for solving the FDA and dMOP test functions with 2 and 3 objectives under different environmental changes when using the HVD metric.

B-Analysis of the comparative study (1) for UDF and F problems
Considering the quantitative results for the Unconstrained Dynamic Functions (UDF1-UDF7) in Table 12, the DB-CSA obtains the best values for all UDF functions. From Table 7, we can conclude that DB-CSA is the best method, although this advantage is not highly statistically significant, the p-values being greater than 0.05 compared to the five MOEAs over the IGD metric.
Based on the HVD results reported in Table 13, DB-CSA performs well on the majority of the UDF benchmarks and fails only on the disconnected UDF6 compared to the DNSGA-II system. We can also note the strength of the PPS system for solving F5, F7 and F10 and of SGEA for F6 and F9. The Wilcoxon signed rank test detailed in Table 7 shows that the differences with DNSGA-II, dCOEA, PPS, MOEA/D and SGEA come with p-values exceeding the 0.05 significance level. Figure 7 reports the one-way ANOVA results as box plots for the six MOEAs over the IGD and HVD metrics.

C-Analysis of the comparative study (2) for MaF and WFG problems with 2, 3 and 7 objectives
For the second comparative study, thirteen many-objective evolutionary approaches are compared using the settings of Table 3. The IGD results reported in Table 14 cover the 14 compared Many-Objective Evolutionary Algorithms on the nine MaOPs (WFG1-WFG9), characterized by a POF whose shape changes from convex to concave. DB-CSA ranks first on seven of the nine WFG test suites, namely WFG1, WFG3, WFG4, WFG5, WFG6, WFG8 and WFG9; it fails only on WFG2 compared to HypE and θ-DEA, and achieves almost the same mean IGD values on WFG7 when the number of objectives is 2. As the number of objectives increases to 3 and 7, the WFG problems become more complex and the lack of convergence and diversity becomes the main challenge. Based on the IGD values for the tri-objective WFG functions in Table 14, we can conclude that the proposed DB-CSA approach copes well with the increasing number of objectives; Table 14 also shows the best values for the MaOPs with 7 objectives.
In addition, Table 15 shows the mean and standard deviation values of the IGD metric for the MaF test suite (MaF1-MaF7) with 2, 3 and 7 objectives, and Figure 12 presents the approximated POFs for that suite. The new DB-CSA proves to be a good method for solving the MaF test suite compared to the thirteen state-of-the-art MaOEAs. Tables 16 and 17 show the efficiency of the new DB-CSA approach over the IGD metric on the complex sets of nine WFG1-9 problems and seven DTLZ1-7 functions respectively. This difference is statistically very significant under the Wilcoxon signed rank test at the 0.05 significance level, as detailed in Table 8, where all computed p-values are less than 0.05. Figure 8 presents the box plots of the one-way ANOVA test for the WFG test suite with 3, 5 and 15 objectives, where DB-CSA is the best method.

Conclusions and perspectives
In this paper, a new Distributed Bi-behaviors Crow Search Algorithm (DB-CSA) is proposed for the dynamic treatment of both convergence and diversity, based on two new mechanisms: distributed bi-behaviors profiles, characterized by a large Gaussian Beta-1 function and a narrow Gaussian Beta-2 function for exploration and exploitation enhancement respectively, and a dynamic switch mechanism between them.
All quantitative results are analyzed using the nonparametric Wilcoxon signed rank test at the 0.05 significance level. The experiments showed that the proposed DB-CSA is significantly better than the key similar techniques used for comparison. DB-CSA is found to be effective in solving dynamic multi-objective problems characterized by different time-varying behaviors of both POS and POF with 2 and 3 objectives. It is also a powerful solver for the many-objective optimization problems tested.

Acknowledgment
The research leading to these results has received funding from the Ministry of Higher Education and Scientific Research of Tunisia under the grant agreement number LR11ES48.
Table 9. MIGD results (Mean and Standard Deviation) for FDA and dMOP functions. The symbols "+", "≈" and "−" denote that the performance of the compared algorithm is statistically better than, equivalent to, and worse than DB-CSA.