A novel Dynamic Pareto bi-level Multi-Objective Particle Swarm Optimization (DPb-MOPSO) algorithm

Abstract—Distributed-architecture-based Particle Swarm Optimization has proven very useful for static optimization but has not yet been explored for solving complex dynamic multi-objective optimization problems. This study proposes a novel Dynamic Pareto bi-level Multi-Objective Particle Swarm Optimization (DPb-MOPSO) algorithm with two optimization levels. In the first level, all solutions are optimized in the same search space, while the second level relies on a distributed architecture using the Pareto ranking operator for dynamic multi-swarm subdivision. The proposed approach adopts a dynamic handling strategy using a set of detectors to track changes in the objective functions caused by the problem's time-varying parameters at each level. To ensure timely adaptation during the optimization process, a dynamic response strategy re-evaluates all non-improved solutions, while the worst particles are replaced with newly generated ones. The convergence and diversity performance of the DPb-MOPSO algorithm is assessed through Friedman Analysis of Variance, and Lyapunov's theorem is used for stability analysis over the Inverted Generational Distance (IGD) and Hypervolume Difference (HVD) metrics. Compared to other evolutionary algorithms, the novel DPb-MOPSO is shown to be the most robust for solving complex problems over a range of changes in both the Pareto Optimal Set and the Pareto Optimal Front.


I. INTRODUCTION
RECENTLY, the main issue facing optimization has been the challenging field of Dynamic Multi-Objective Problems (DMOPs), with two or three conflicting functions characterized by dynamic objectives, parameters, and/or constraints. A great deal of research on Multi-Objective Evolutionary Algorithms (MOEAs) for solving static single- and multi-objective problems (SOPs and MOPs, respectively) has been adapted for DMOPs [9], [32], [33] using a powerful range of bio-inspired intelligent techniques such as the Genetic Algorithm (GA) [15] and Particle Swarm Optimization (PSO) [6], [28]-[30]. These algorithms are designed to resolve evolutionary stagnation issues. In dynamic optimization, any MOEA should be able not only to find the best set of compromise solutions over the Pareto Optimal Set (POS) and Pareto Optimal Front (POF) but also to track the changes that can affect them. Several variants of the PSO algorithm have been modified for solving dynamic SOPs (DSOPs), but the standard PSO cannot be used for DMOPs without modification because of the dual issues of outdated memory and loss of diversity, which require crucial actions such as change detection, memory update, and diversity enhancement. Four types of DMOPs have been classified by Farina et al. [22]: in type I, the POS changes and the POF remains the same; in type II, both the POS and POF change over time; in type III, the POF changes while the POS remains constant; and in type IV, both the POS and POF remain unchanged despite environmental changes.

A. Hussain is with Edinburgh Napier University, School of Computing, Edinburgh EH10 5DT, Scotland, U.K. (e-mail: a.hussain@napier.ac.uk).
Several detection techniques have been developed to tackle time-dependent issues. Re-evaluating the objective and constraint values of a certain percentage, or of randomly selected outdated population members, is very useful. Different response strategies using explicit actions, such as the re-initialization or hyper-mutation of populations, are considered for ensuring fast convergence and diversity [1], [26]. PSO-based approaches are widely used for solving DSOPs, while only a few methods are designed to solve DMOPs [2]. Nonetheless, distributed methods using genetic and population-based strategies have been shown to be important for static optimization [7] but have not yet been applied to handling DMOPs.
To the best of our knowledge, no approach has been developed using multi-objective particle swarm optimization (MOPSO) with a distributed architecture for solving DMOPs. For that reason, the main motivation underlying this study is the proposed Dynamic Pareto bi-level Multi-Objective Particle Swarm Optimization (DPb-MOPSO), based on a distributed architecture, for handling DMOPs with various types of changes in the POS and POF. The distributed architecture of DPb-MOPSO spans an upper and a lower level for distribution and diversity improvement. At the upper level, all individuals are optimized in the same search space, while the lower level provides a distributed architecture using a dynamic multi-swarm subdivision based on the Pareto Ranking operator. This fills the contradictory gap between convergence and diversity that leads an algorithm to lose its ability to react efficiently to changes. The DPb-MOPSO includes a dynamic handling strategy to effectively detect and react to change. The detection strategy is carried out using ten percent of the solutions as detectors. The response strategy re-evaluates all solutions with negative change so they can be replaced with newly generated ones, while solutions with positive progress are retained to accelerate optimization after each change.
The rest of this paper is organized as follows. Section II presents an overview of dynamic multi-objective optimization methods. Section III details the proposed DPb-MOPSO. Section IV introduces the preliminaries of the experimental study. Section V describes the experimental results through Friedman Analysis of Variance over all quantitative results; furthermore, a stability analysis using Lyapunov's theorem is presented in Section VI. Finally, Section VII concludes this paper and suggests some perspectives for future work.

II. OVERVIEW OF DYNAMIC MULTI-OBJECTIVE OPTIMIZATION METHODS
In dynamic optimization, we aim to optimize a dynamic problem f^t with m objective functions by an algorithm G over a given period [t_begin, t_end], as presented in the following mathematical definition.
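The referenced definition does not survive in this excerpt; a standard DMOP formulation consistent with the notation above (the feasible region Ω is an assumption) reads:

```latex
\min_{x \in \Omega} \; f^{t}(x) = \big( f_{1}(x, t), \, f_{2}(x, t), \, \ldots, \, f_{m}(x, t) \big)^{T},
\qquad t \in [t_{begin}, \, t_{end}],
```

where x is the decision vector, m the number of conflicting objectives, and t the time variable that drives the changes in the POS and/or POF.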
Several existing studies [26] have focused on three research directions: the development of exhaustive sets of challenging dynamic benchmarks, the development of new algorithms for handling various dynamic problems, and the development of efficient quality indicators to measure the performance of MOEAs. Many investigations have been carried out to encourage researchers to address DMOPs instead of DSOPs, for example, the continuous FDA functions [22] with dynamic shifts of the POF and POS, the five ZJZ functions (F5 to F10) [34] with non-linear correlations between the decision variables, and the nine Unconstrained Dynamic Functions (UDFs) [27]. The challenge of dynamic optimization is not limited to the development of generic benchmarks; it also extends to solving dynamic real-world problems [23]. In the literature, a powerful set of MOEAs designed to address static MOPs [11], [15], [16] has been used for solving DMOPs. During the last decade, several challenging approaches have been developed for solving a wide range of DMOPs. These approaches are classified into four categories, as summarized in Table I and detailed as follows. Firstly, diversity-based methods are used to preserve population diversity through re-initialization or hyper-mutation after each transition, or throughout the execution time, like the Dynamic NSGA-II variants (DNSGA-II-A and DNSGA-II-B) [14], which identify a random solution to be re-evaluated to monitor environmental change or replace a percentage of the new population with new randomly generated solutions. Also, the DC-NSGA-II [24] algorithm presents a self-adaptive penalty function to deal with time-varying constrained problems, and the Dynamic-MOPSO system [2] adopts a dynamic handling strategy to detect changes by controlling the evolution of the objective function over time and re-initializes all solutions with negative changes to maintain diversity.
In addition, several memory-based techniques [8], [13], [25], [27] rely on an extra memory mechanism to implicitly or explicitly store useful information and outdated solutions for potential reuse. Furthermore, change-prediction-based approaches exploit past information and anticipate the new position of the POS using a prediction model to reduce the number of function evaluations [3]. In [10], the D-QMOO algorithm was proposed based on the Feed-forward Prediction Strategy (FPS) to study the size and distribution of the anticipatory solutions over time. Also, the Population Prediction Strategy (PPS) [17] was developed to predict a whole population of solutions instead of using some isolated points. Finally, parallel methods are implemented using multiple sub-population mechanisms like the master-slave method in [20], [21], independent runs, the island model, and cellular EAs. The DMOEA [33] adapts the sub-population technique to solve a single-objective problem. In this case study, a dynamic version of the MOPSO method is considered over a distributed architecture. The standard MOPSO algorithm suffers from stagnation at local optima and loses diversity over time. Generally, the re-initialization strategy is the most useful technique for addressing the issue of population diversity after environmental changes. In addition, there is no existing approach using MOPSO with a distributed architecture for solving DMOPs.

III. THE PROPOSED DPb-MOPSO

A. DPb-MOPSO: Upper Level (L1)
The upper level (L1) is detailed in Algorithm 1 and starts with a random initialization of all particles p_i in the swarm S and of the leaders archive A1. Each particle is a candidate solution x_i in an n-dimensional search space. For each particle p_i, we select a leader from A1 based on the crowding-distance operator between two random particles; the particle with the lowest value is selected as leader. We then calculate the fitness function F_i of each particle and update its position (X) and velocity (V) based on its self-experience (pbest) as a local component and the experience of its neighbors (gbest) as a global component, using Eqs. (2) and (3) respectively. This level is executed until 50% of T_max.
Here, w is the inertia weight, and c1 and c2 are acceleration coefficients; r1 and r2 are two random parameters controlling the influence of the cognitive and social components of each particle. In addition, all changes are known to the algorithm and are checked according to the frequency τ_t, assuming a time-dependent parameter t for each problem, where t = (1/n_t)⌊τ/τ_t⌋, ∀ t ∈ T, with n_t, τ, and τ_t denoting the severity of change, the iteration counter, and the frequency of change, respectively. If a change is detected in the objective values, the dynamism handling strategy detailed in subsection C and Algorithm 3 is executed. Furthermore, all the optimal solutions x*_i are saved into the archive A1 using the archiving operator to retain the best approximation of the true POF at the end of (L1).
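As a concrete illustration, the update rules referenced as Eqs. (2)-(3) and the time variable above can be sketched as follows. This is a minimal sketch assuming the canonical PSO form; the parameter values shown are illustrative defaults, not the paper's tuned settings:

```python
import numpy as np
from math import floor

def pso_update(x, v, p_best, g_best, w=0.4, c1=1.5, c2=1.5, rng=None):
    """Canonical velocity (Eq. 2) and position (Eq. 3) update for one particle.

    x, v, p_best, g_best: arrays of length n (decision-space dimension);
    w: inertia weight; c1, c2: acceleration coefficients.
    """
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)  # stochastic weights
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new

def time_parameter(tau, n_t=10, tau_t=10):
    """t = (1/n_t) * floor(tau / tau_t): tau is the iteration counter,
    n_t the severity and tau_t the frequency of change."""
    return (1.0 / n_t) * floor(tau / tau_t)

# t stays constant for tau_t iterations, then steps by 1/n_t:
# time_parameter(9) == 0.0, time_parameter(10) == 0.1, time_parameter(25) == 0.2
```

Smaller τ_t therefore means the environment changes more often (severe changes), which is consistent with the τ_t ∈ {5, 10, 20} settings used later in the experiments.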

B. DPb-MOPSO: Lower Level (L2)
At half the maximum number of iterations (T_max/2) of (L1), the switched level (L2), built over a distributed architecture, starts with the extraction of the intermediate population P'(POS). The Pareto Ranking operator is then applied to subdivide P'(POS) into N sub-swarms representing the set of fronts denoted by F_0, F_1, ..., F_n. Each sub-swarm is then optimized until 25% of T_max. When T_max is achieved, all the non-dominated solutions from both A1 and A2 are regrouped in the global archive A using the Epsilon-dominance operator.
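The Pareto Ranking subdivision into fronts F_0, F_1, ..., F_n can be sketched as follows. This is a simple repeated-peeling implementation for a minimization problem, not necessarily the authors' exact procedure:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_fronts(objectives):
    """Peel a population of objective vectors into fronts F_0, F_1, ...:
    F_0 is the non-dominated set, F_1 the non-dominated set of the rest,
    and so on. Returns lists of indices into `objectives`."""
    remaining = list(range(len(objectives)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

Each resulting front then seeds one sub-swarm of (L2), so better-ranked solutions are grouped and evolved together.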

C. Dynamism Handling Strategy
To fill the gap whereby pushing for convergence during the run may cause a lack of diversity in a dynamic environment, the dynamism handling strategy comprises two steps, summarized in Algorithm 3 and detailed as follows:

• Step 1: Environment change detection strategy. The environmental change is checked at the frequency τ_t, which is set to 5, 10, and 20 for severe, moderate, and slight environmental changes respectively. The comparison procedure starts after the evaluation of the fitness function F_i by selecting 10% of the previous POF(t-1) and of the current POF(t) as detectors; then, 10% of their mapped POS(t-1) and POS(t) are selected for controlling their evolution over time. The strong dominance operator of the Epsilon-Dominance method is used to compare the dynamic behavior of particles between iterations.

Algorithm 3: Dynamism Handling Strategy (minimization case)
Input: POS(t-1) (mapped solutions of the previous POF(t-1)), POF(t) (population of the current iteration);
Output: POS(t+1) (new population of iteration t+1);
Detection strategy:
1: if t = τ_t then
2: Extract detectors: the first 10% of the current non-dominated solutions from POF(t);
3: Extract 10% of their image POS(t);
4: Extract 10% of their projection from POS(t-1);
5: Compare the fitness of POS(t) and POS(t-1) using the strong dominance operator of the Epsilon-Dominance method;
Response strategy:
6: if current fitness(t) > previous fitness(t-1) then
7: Regroup all solutions with negative change from POS(t);
8: Re-initialize the regrouped solutions;
9: Select the set of solutions with positive changes POS+(t-1);
10: end if
11: Re-evaluate the archives A1 and A2;
12: end if
13: return the set of the new population POS(t+1);

• Step 2: Response change strategy. If a change is identified, the DPb-MOPSO re-initializes all solutions with negative changes POS−(t), while all detectors with positive changes POS+(t) are integrated into the new population to ensure good diversity and convergence using only solutions with positive effects. The leaders archive is also re-evaluated to update all non-dominated solutions and to avoid deterioration of the search ability.
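The detection and response steps can be sketched as follows. The `evaluate` and `reinit` callables and the scalarized fitness are illustrative assumptions for a minimization problem, not the authors' exact interface:

```python
import numpy as np

def handle_change(pop, fitness_prev, evaluate, reinit, detector_frac=0.10):
    """Sketch of the detection/response strategy (minimization case).

    pop: (N, n) array of decision vectors; fitness_prev: (N,) fitness values
    from the previous environment. `evaluate` maps a decision vector to a
    scalar fitness; `reinit` produces a fresh random decision vector.
    """
    n_detect = max(1, int(detector_frac * len(pop)))
    detectors = np.arange(n_detect)                 # first 10% act as detectors
    fitness_now = np.array([evaluate(p) for p in pop])
    # Detection: a change has occurred if the detectors' fitness moved
    changed = not np.allclose(fitness_now[detectors], fitness_prev[detectors])
    if changed:
        # Response: re-initialize solutions whose fitness degraded
        worse = fitness_now > fitness_prev
        for i in np.where(worse)[0]:
            pop[i] = reinit()
            fitness_now[i] = evaluate(pop[i])
    return pop, fitness_now, changed
```

Solutions whose fitness did not degrade are kept as-is, matching the idea of retaining the detectors with positive progress in the new population.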

IV. EXPERIMENTAL STUDY: PRELIMINARIES
In this section, we present the evolutionary algorithms, the DMOPs, and the quality indicators considered for this case study. The empirical study is discussed with reference to the contribution of Jiang et al. [13].

A. Evolutionary and Swarm Intelligence Multi-Objective Algorithms
Our main case study consists of comparing the performance of the proposed DPb-MOPSO against several algorithms, grouped as follows:
• Five static optimization methods applied to DMOPs, namely:
- dCOEA [8], a hybrid competitive-cooperative method that adapts an additional archive to save outdated individuals for seeding the new optimization step.
- MOEA/D [31], a decomposition-based approach aiming to replace a limited number of individuals with newly created solutions.
- dMOPSO [18], a decomposition-based PSO approach used for static MOPs.
- MOPSO [5], the standard version of PSO designed for static MOPs.
- pbMOPSO [7], a MOPSO-based approach using a distributed architecture for static MOPs.
• Two diversity-based approaches:
- DNSGA-II [14], using randomly created or mutated individuals after a change.
- Dynamic-MOPSO [2], a dynamic version of the standard MOPSO approach with a dynamism handling strategy for solving type I DMOPs.
• The prediction-based approach PPS [34], using a univariate auto-regression model to predict the new location of the POS.
• The memory-based approach SGEA [13], aiming to reuse some outdated, well-distributed solutions to relocate solutions closer to the new POF.
All algorithms are executed for 30 independent runs on each DMOP. Each run is stopped when the maximum number of evaluations T_max is reached, computed as (3 × n_t × τ_t + 50), where τ_t is the frequency of change set to 5, 10, or 20, and n_t is the severity of change fixed to 10. All experimental settings are summarized in Table II.
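The stopping rule above can be made concrete with a small sketch; the resulting budgets follow directly from the stated formula:

```python
def run_budget(n_t, tau_t):
    """Stopping rule from the experimental setup: T_max = 3 * n_t * tau_t + 50."""
    return 3 * n_t * tau_t + 50

# With severity n_t = 10 and the three tested frequencies:
budgets = {tau_t: run_budget(10, tau_t) for tau_t in (5, 10, 20)}
# -> {5: 200, 10: 350, 20: 650}
```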

B. Dynamic Multi Objective Problems
Twenty-one DMOPs [13], [27] are used for testing the algorithms. Table III presents all parameter settings and the nature of the changes in the POS and POF.

C. Quality Indicators
The Inverted Generational Distance (IGD) and the Hypervolume Difference (HVD) are used to measure both convergence and diversity, with smaller values being better. The IGD is computed using Eq. (4), based on the minimum Euclidean distance d_i between the ith point in the generated POF (of size |POF|) and the true POF*. In addition, the HVD is calculated using Eq. (5) to measure the difference between the obtained POF and the true POF*.
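Since Eq. (4) is not reproduced in this excerpt, the sketch below follows the conventional IGD definition, averaging the minimum distances over the points of the true front:

```python
import numpy as np

def igd(pof_true, pof_approx):
    """Inverted Generational Distance: average, over the points of the true
    POF*, of the minimum Euclidean distance d_i to the obtained POF
    (smaller is better)."""
    t = np.asarray(pof_true, dtype=float)
    a = np.asarray(pof_approx, dtype=float)
    # Pairwise distances: shape (|POF*|, |POF|)
    dists = np.linalg.norm(t[:, None, :] - a[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())
```

An approximation identical to the true front yields an IGD of 0; both poor convergence and poor spread increase the value, which is why IGD captures both aspects at once.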
V. EXPERIMENTAL RESULTS AND STATISTICAL ANALYSIS

The concepts of inferential statistics over the Friedman two-way Analysis of Variance (Friedman two-way ANOVA) are devoted to discussing all Mean (M.) and Standard Deviation (SD) values in Tables VIII, IX, XII and XV. An analysis using only the IGD and HVD metrics is insufficient to evaluate the performance of swarm intelligence algorithms and to establish the significance level [12]. In this study, the Friedman ANOVA procedure, which does not assume a normal distribution, is used to compare multiple treatments and decide the significance level between algorithms.

A. Procedure of Friedman two-way Analysis of Variance
The Friedman two-way ANOVA test is performed using the following steps:
• Step 1: The normality test is performed through Q-Q plots for all algorithms over the IGD and HVD metrics, with both τ_t and n_t equal to 10. Taking the example of the Q-Q plot of DPb-MOPSO in Figure 2, the data points do not fall on the straight line of the ideal normal distribution curve and thus do not fit a normal distribution. Because of this non-normality, the Friedman two-way ANOVA by ranks is used as a non-parametric statistical procedure to decide the significance level between algorithms.
• Step 5: Determine the degrees of freedom (df = k − 1); with the number of algorithms k equal to 10, df equals 9.
• Step 6: State the decision rule: with nine degrees of freedom and α = 0.05, the critical value (CV) is read from the Chi-Square (χ²) table and equals 16.92; if the computed χ² is greater than the CV, the null hypothesis H0 is rejected.
• Step 7: The Friedman two-way ANOVA by ranks proceeds as follows: 1) Assign rank values r_ij for each problem i, from 1 (best result) to k (worst result), where 1 ≤ i ≤ P and 1 ≤ j ≤ k; in case of ties, assign the average of the ranks. 2) Compute the average rank obtained for each algorithm over all problems using Eq. (6).
3) Hypothesis testing: to decide on the null hypothesis, the Friedman statistic F_f is computed according to Eq. (7) based on the χ² probability distribution with nine degrees of freedom.
Rejecting the null hypothesis under the Friedman ranking test implies only that at least two algorithms differ significantly; the Friedman ranks and their p-value cannot identify where the variance between algorithms lies. This leads to Friedman 1 × N ANOVA multiple comparisons, with DPb-MOPSO as the control method, using the different post-hoc procedures [12] described in Step 8.
• Step 8: The post-hoc procedures classified in Table IV aim to compute the Adjusted P-Value (APV) and identify the significance level of variation between algorithms. For the one-step Bonferroni-Dunn procedure, APV = min{v, 1}, where v = (k − 1)·p_i, k is the number of tested algorithms, and i ∈ {1, ..., k − 1}.
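The ranking and adjustment steps above can be sketched as follows. This is a minimal self-contained implementation of the rank assignment (Step 7) and the one-step Bonferroni-Dunn adjustment (Step 8); Eqs. (6)-(7) are not reproduced in this excerpt, so the statistic uses the standard Friedman form:

```python
import numpy as np

def rank_row(values):
    """Ranks 1..k (smaller value = better rank), with average ranks for ties."""
    order = np.argsort(values, kind="stable")
    ranks = np.empty(len(values), dtype=float)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2.0 + 1.0  # average 1-based position
        i = j + 1
    return ranks

def friedman_test(results):
    """results: (P problems x k algorithms) matrix of metric values
    (smaller = better). Returns the average ranks R_j and the Friedman
    statistic chi2 = 12P/(k(k+1)) * (sum_j R_j^2 - k(k+1)^2/4)."""
    P, k = results.shape
    ranks = np.vstack([rank_row(row) for row in results])
    R = ranks.mean(axis=0)
    chi2 = (12.0 * P / (k * (k + 1))) * (np.sum(R ** 2) - k * (k + 1) ** 2 / 4.0)
    return R, chi2

def bonferroni_dunn_apv(p_values, k):
    """One-step Bonferroni-Dunn adjustment (Table IV): APV_i = min((k-1)*p_i, 1)."""
    return [min((k - 1) * p, 1.0) for p in p_values]
```

With k = 10 algorithms, the computed chi2 is compared against the χ² critical value 16.92 at df = 9; values above it reject H0.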

B. Results Analysis through Friedman two-way ANOVA Test
Different configurations are considered for the MOEAs to demonstrate the impact of the frequency of change when solving problems with different natures of change in the POS and POF. These configurations are obtained by keeping the severity of change n_t constant at 10 and varying the frequency τ_t over 5, 10, and 20, representing severe, moderate, and slight environmental changes respectively. All significant results are reported in Tables VIII, IX, XII and XV and highlighted in bold face for the twenty-one problems (P = 21), comprising five FDA, three dMOP, six F (ZJZ) and seven UDF functions, with the number of algorithms K equal to ten (K = 10). We can conclude that the five MOPSO-based algorithms (dMOPSO, MOPSO, pbMOPSO, Dynamic-MOPSO and DPb-MOPSO) are the most competitive methods over both the IGD and HVD metrics. Mean values alone are not sufficient to confirm whether the newly proposed DPb-MOPSO yields a significant improvement over the other algorithms. For that reason, three groups of analysis are considered for the Friedman two-way ANOVA ranking test and the post-hoc procedures:
• Friedman two-way ANOVA test for moderate environmental changes (τ_t = n_t = 10).
• Friedman two-way ANOVA test for severe environmental changes (τ_t = 5 and n_t = 10).
• Friedman two-way ANOVA test for slight environmental changes (τ_t = 20 and n_t = 10).
1) Friedman ranking test: Let us first consider the Friedman two-way ANOVA by ranks over the group of twenty-one problems with moderate environmental changes. The average rankings are obtained over the IGD and HVD metrics, assigning rank one to the best algorithm and rank k to the worst. Under the null hypothesis, the test statistic of the Chi-Square (χ²) distribution with nine degrees of freedom and the p-value are computed. Based on Table V, we can conclude that the DPb-MOPSO has the best ranks compared to the other algorithms.
In addition, the proposed DPb-MOPSO is the winner in solving the group of eight problems (five FDA and three dMOP functions) with severe environmental changes, as demonstrated in Table VI. Table VII presents the average rankings for slight environmental changes, where the DPb-MOPSO has the best rank over IGD but is outperformed over HVD by the MOPSO algorithm. In all cases, the computed χ² ≥ 16.92, confirming the existence of a significant difference between the algorithms. Friedman's ranking test detects a significant difference over the whole multiple comparison but cannot detect when one method performs better than another. This issue leads to the Friedman 1×N ANOVA multiple comparisons, with DPb-MOPSO as the control method, using the four post-hoc comparison procedures summarized in Table IV and detailed in the following subsection.
2) Friedman 1×N ANOVA Multiple Comparisons: The 1 × N comparisons are made to evaluate the k−1 hypotheses and calculate the unadjusted and adjusted P-Values (APV) for each hypothesis, with DPb-MOPSO as the control method. The average rankings of all methods are transformed into unadjusted p-values computed using the normal approximation. The test statistics compare the ith algorithm with the jth algorithm, ordered from the smallest to the largest ranks. Applying the four post-hoc procedures then yields the APVs, determining the probability of incorrectly rejecting a true null hypothesis [12]. For moderate environmental changes, the results over the HVD metric are reported in Table XI. For severe environmental changes, Table XIII presents the significant difference between the DPb-MOPSO and DNSGA-II, MOEA/D and PPS for all post-hoc procedures, and dCOEA under the Li procedure, with p-value ≤ 0.05 for the IGD metric. In addition, we can infer that there is no significant difference (p-value > 0.05) for SGEA, dMOPSO, MOPSO, pbMOPSO, Dynamic-MOPSO, and dCOEA under the Bonferroni-Dunn, Holm, and Hochberg procedures.
All APVs indicate that the null hypothesis is retained at the significance level α = 0.05, suggesting equal performance between DPb-MOPSO and MOPSO over HVD.

VI. STABILITY ANALYSIS USING LYAPUNOV THEOREM
Stability analysis is an important aspect when analyzing the robustness of dynamic systems. Lyapunov's theorem, developed by Alexandr Mikhailovich Lyapunov [19], is the best-known mechanism for stability analysis. It measures the growth over time of small differences in initial values; the rate of this growth is called the Lyapunov Exponent (LE). The LE indicates the speed with which two initially close dynamics diverge or converge: a positive LE indicates divergence, whereas a negative LE indicates convergence in phase space. In [4], two trajectories starting at points t_i and t_j are denoted by t_i, t_{i+1}, t_{i+2}, ..., t_{i+n} and t_j, t_{j+1}, t_{j+2}, ..., t_{j+n}. The distance d(k) between the two sequences after k steps is computed as d(k) = |t_{i+k} − t_{j+k}| to measure convergence and divergence through the Lyapunov Exponent in a time-varying space. If the system is chaotic, d(k) initially rises exponentially with k, so we can plot ln d(k) versus k to estimate the LE. When the LE values are very small and remain close over time, the system is stable and reaches the same solution over time. In this study, we consider the sets of IGD values obtained during 30 independent runs of the five MOPSO-based algorithms over several DMOPs, with n_t and τ_t equal to 10. The independent IGD values are computed using the minimum Euclidean distance d_i between the ith point in the generated POF (of size |POF|) and the true POF* over time t. The stability analysis through the Lyapunov Exponent for the MOPSO-based systems is computed using ln(IGD(t)). Figure 3 shows the spectrum of the Lyapunov Exponent over the FDA, dMOP, F and UDF functions. We can conclude that the Lyapunov exponents of all MOPSO-based algorithms over FDA1, FDA2, FDA4, dMOP1, dMOP2 and dMOP3 are below zero, with negative values indicating that the systems converge over time.
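The d(k) and ln d(k) construction above can be sketched as follows; fitting a line to ln d(k) versus k is one common way to estimate the exponent (an assumption here, since the paper does not specify its fitting procedure):

```python
import numpy as np

def log_divergence(seq_i, seq_j):
    """ln d(k), with d(k) = |t_{i+k} - t_{j+k}| the distance between two
    trajectories after k steps; zeros are floored to avoid log(0)."""
    d = np.abs(np.asarray(seq_i, float) - np.asarray(seq_j, float))
    return np.log(np.maximum(d, np.finfo(float).tiny))

def lyapunov_exponent(seq_i, seq_j):
    """Slope of the least-squares fit of ln d(k) against k: a positive slope
    means divergence (chaos), a negative slope means convergence."""
    ln_d = log_divergence(seq_i, seq_j)
    k = np.arange(len(ln_d))
    return np.polyfit(k, ln_d, 1)[0]
```

Applied to per-run ln(IGD(t)) traces, a consistently negative slope corresponds to the "below zero" spectra reported for the FDA and dMOP problems.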
For the UDF problems, the spectrum of the Lyapunov exponent indicates the stability of DPb-MOPSO, showing the best convergence to stability with negative LE values. The points for F5, F9 and F10 indicate that the Lyapunov exponent diverges to large positive values for the MOPSO, dMOPSO, pbMOPSO and Dynamic-MOPSO systems, compared to the spectrum of DPb-MOPSO, which converges faster for all F (ZJZ) problems. Overall, the proposed DPb-MOPSO algorithm has the best convergence spectrum over most of the tested problems.

VII. CONCLUSION
This study has presented a novel Dynamic Pareto bi-level Multi-Objective Particle Swarm Optimization (DPb-MOPSO) algorithm for solving different types of DMOPs (I, II and III). The proposed DPb-MOPSO features a distributed architecture over two optimization levels, including a dynamism handling strategy to detect and respond efficiently to changes. The Friedman two-way ANOVA test was used to evaluate the proposed DPb-MOPSO against several MOEAs. Based on the statistical analysis (at the 0.05 significance level), the DPb-MOPSO is shown to be a more robust algorithm in terms of its ability to ensure a good trade-off between convergence and diversity in a time-dependent environment. Furthermore, the spectrum of Lyapunov exponents has been shown to be an efficient tool for analyzing periodic motions and stability in dynamical systems. The superiority of the proposed method is clearly observed when handling DMOPs with two or three objectives involving different types of changes in the POS and POF. In conclusion, our study has demonstrated the novelty and robustness of the DPb-MOPSO for dynamic multi-objective problems. However, these characteristics have yet to be explored for problems with more than three objectives. In future work, we will further enhance the proposed approach for Many-Objective Optimization Problems (MaOOPs) as well as for pertinent feature selection in a real-world problem.

Ahlem Aboud is currently a PhD student in computer science at ISITCom, University of Sousse, and a PhD member of the REGIM-Lab (REsearch Group on Intelligent Machines) at the National School of Engineers of Sfax (ENIS). She received her engineering degree in 2015 from the Higher Institute of Computing and Multimedia of Sfax. She has been an IEEE student member since October 2015. Her research interests include dynamic multi-objective optimization problems, evolutionary computation, and collective intelligence methods.