Multi-dimensional particle swarm optimization in dynamic environments

https://doi.org/10.1016/j.eswa.2010.08.009

Abstract

Particle swarm optimization (PSO) was proposed as an optimization technique for static environments; however, many real problems are dynamic, meaning that the environment and the characteristics of the global optimum can change over time. In this paper, we adapt recent techniques that successfully address several major problems of PSO and exhibit significant performance gains over multi-modal and non-stationary environments. In order to address the premature convergence problem and improve the rate of PSO’s convergence to the global optimum, the Fractional Global Best Formation (FGBF) technique is used. FGBF collects the best dimensional components in the swarm and fractionally creates an artificial Global Best particle (aGB) that has the potential to be a better “guide” than the PSO’s native gbest particle. To track local optima, we then introduce a novel multi-swarm algorithm, which enables each swarm to converge to a different optimum and to apply the FGBF technique distinctively. Finally, for multi-dimensional dynamic environments where the optimum dimension also changes over time, we utilize a recent PSO technique, multi-dimensional (MD) PSO, which re-forms the native structure of the swarm particles in such a way that they can make inter-dimensional passes with a dedicated dimensional PSO process. Therefore, in a multi-dimensional search space where the optimum dimension is unknown, swarm particles can seek both positional and dimensional optima. This eventually pushes the frontier of optimization problems in dynamic environments towards a global search in a multi-dimensional space, where a possibly multi-modal problem exists in each dimension. We investigated both standalone and mutual applications of the proposed methods over the moving peaks benchmark (MPB), which originally simulates a dynamic environment in a unique (fixed) dimension. MPB is appropriately extended to simulate a multi-dimensional dynamic system, which contains dynamic environments active in several dimensions. An extensive set of experiments shows that in the traditional MPB application domain, the FGBF technique applied with multi-swarms exhibits an impressive speed gain and tracks the global peak with the lowest error so far achieved among competitive PSO-based methods. When applied over the extended MPB, MD PSO with FGBF can find the optimum dimension and provide the (near-) optimal solution in that dimension.

Research highlights

  • Multi-dimensional particle swarm optimization.

  • Fractional global-best formation.

  • Optimization in dynamic environments.

  • Global optimum tracking.

Introduction

Many real-world problems are dynamic and thus require systematic re-optimization due to system and/or environmental changes. Even though it is possible to handle such dynamic problems as a series of individual processes by restarting the optimization algorithm after each change, this may lead to a significant loss of useful information, especially when the change is not too drastic. Since most such problems have a multi-modal nature, which further complicates dynamic optimization, the need for powerful and efficient optimization techniques is evident. In the last decade, efforts have focused on evolutionary algorithms (EAs) (Bäck & Schwefel, 1993) such as Genetic Algorithms (GA) (Goldberg, 1989), Genetic Programming (GP) (Koza, 1992), Evolution Strategies (ES) (Bäck & Kursawe, 1995) and Evolutionary Programming (EP) (Fayyad, Shapire, Smyth, & Uthurusamy, 1996). The common point of all EAs is their population-based nature, which allows them to avoid becoming trapped in local optima; they may therefore find the optimum solution, although this is never guaranteed.

Conceptually speaking, particle swarm optimization (PSO) (Engelbrecht, 2005, Kennedy and Eberhart, 1995, Omran et al., 2006), which has obvious ties with the EA family, lies somewhere between GA and EP. Yet unlike GA, PSO has no complicated evolutionary operators such as crossover, selection and mutation, and it is highly dependent on stochastic processes. PSO originated from computer simulations of individuals (particles or living organisms) in a bird flock or fish school (Wilson, 1975), which exhibit a natural behavior when searching for a target (e.g. food). The goal is, therefore, to converge to the global optimum of a possibly non-linear function or system. Similarly, in a PSO process, a swarm of particles (or agents), each of which represents a potential solution to an optimization problem, navigates through the search space. The particles are initially distributed randomly over the search space with random velocities. Each particle keeps track of its position in the search space and of the best solution it has achieved so far. This is the personal best value (the so-called pbest in Kennedy & Eberhart (1995)); the PSO process also keeps track of the global best solution achieved so far by the swarm, by remembering the index of the best particle (the so-called gbest in Kennedy & Eberhart (1995)). During their journey over discrete time iterations, the velocity of each agent in the next iteration is affected by the best position of the swarm (the position of the gbest particle, as the social component), the best personal position of the particle (pbest, as the cognitive component), and its current velocity (the memory term). Both the social and the cognitive components contribute randomly to the agent’s velocity in the next iteration.
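For concreteness, the update just described can be sketched in a few lines of code. This is a minimal single-iteration sketch with typical parameter values, not the exact configuration used in this paper:

```python
import numpy as np

def bpso_step(x, v, pbest, gbest, w=0.72, c1=1.49, c2=1.49):
    """One bPSO iteration. x, v: (S, N) arrays of particle positions and
    velocities; pbest: (S, N) personal best positions; gbest: (N,) global
    best position; w, c1, c2: inertia weight and acceleration coefficients."""
    r1 = np.random.rand(*x.shape)  # random weight of the cognitive component
    r2 = np.random.rand(*x.shape)  # random weight of the social component
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # memory + cognitive + social
    return x + v, v
```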

There have been some efforts to simulate dynamic environments in a standard and configurable way. Some early works (Angeline, 1997, Bäck, 1998, Eberhart and Shi, 2001) use the experimental setup introduced by Angeline (1997). In this setup the minimum of the three-dimensional parabolic function, f(x, y, z) = x² + y² + z², is moved along a linear or circular trajectory, or randomly. Three different update frequencies (200, 1000 and 2000 evaluations) and change severities (0.01, 0.1, 0.5) are used. However, this setup enables testing only in a uni-modal environment. Branke (2008) has provided a publicly available Moving Peaks Benchmark (MPB) to enable different dynamic optimization algorithms to be tested in a standard way in a multi-modal environment. MPB allows the creation of different dynamic fitness functions consisting of a number of peaks with varying location, height and width. The primary measure for performance evaluation is the offline error, the average difference between the optimum and the best evaluation since the last environment change. This value is always non-negative, and it is zero only for perfect tracking. Several PSO methods have been developed and tested using MPB (Blackwell and Branke, 2004a, Blackwell and Branke, 2004b, Li et al., 2006, Mendes and Mohais, 2005). In particular, Blackwell and Branke (2004a) proposed a successful multi-swarm approach. The idea behind it is that different swarms can converge to different peaks and track them when the environment changes. The swarms interact only by mutual repulsion, which keeps any two swarms from converging to the same peak.
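To make the offline error concrete, the following sketch shows the bookkeeping it implies. The evaluation log and optimum values are hypothetical inputs; MPB itself does not expose this exact interface:

```python
def offline_error(evaluations, optima):
    """evaluations: sequence of (fitness, change_id) tuples, one per function
    evaluation; optima: dict mapping change_id -> optimum (highest peak) value.
    Averages the gap between the optimum and the best fitness found since
    the last environment change, over all evaluations."""
    total, best, last_change = 0.0, float("-inf"), None
    for fitness, change_id in evaluations:
        if change_id != last_change:        # environment changed: reset best-so-far
            best, last_change = float("-inf"), change_id
        best = max(best, fitness)           # MPB is a maximization problem
        total += optima[change_id] - best   # non-negative; zero only for perfect tracking
    return total / len(evaluations)
```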

Similar to the aforementioned EAs, PSO might exhibit some major problems and severe drawbacks such as parameter dependency (Lovberg & Krink, 2002) and loss of diversity (Riget & Vesterstrom, 2002). The latter phenomenon in particular increases the probability of being trapped in a local optimum, and it is the main source of premature convergence, especially when the dimensionality of the search space is large (Van den Bergh, 2002) and the problem to be optimized is multi-modal (Esquivel and Coello, 2003, Riget and Vesterstrom, 2002). Another reason for premature convergence is that particles are flown through a single point, which is (randomly) determined by the gbest and pbest positions, and this point is not even guaranteed to be a local optimum (Van den Bergh & Engelbrecht, 2002). Since PSO was proposed for static problems in general, the effects of such drawbacks become much more severe in dynamic environments. Various modifications and PSO variants have been proposed to address these problems, such as (Abraham et al., 2007, Chen and Li, 2007, Chen et al., 2007, Christopher and Seppi, 2004, Clerc, 1999, Eberhart et al., 1996, Higashi and Iba, 2003, Ince et al., 2009, Janson and Middendorf, 2005, Kaewkamnerdpong and Bentley, 2005, Krohling and Coelho, 2006, Liang and Qin, 2006, Li et al., 2006, Lovberg, 2002, Lovberg and Krink, 2002, Mendes et al., 2004, Peng et al., 2003, Peram et al., 2003, Ratnaweera et al., 2003, Ratnaweera et al., 2002, Riget and Vesterstrom, 2002, Richards and Ventura, 2003, Shi and Eberhart, 1998, Shi and Eberhart, 2001, Van den Bergh and Engelbrecht, 2002, Van den Bergh and Engelbrecht, 2004, Xie et al., 2002a, Xie et al., 2002b, Xie et al., 2002c, Yasuda et al., 2003, Zhang and Xie, 2003). Such methods usually try to improve the diversity among the particles and the search mechanism, either by changing the update equations towards more diversified versions, by adding more randomization to the system (to particle velocities, positions, etc.), or simply by resetting some or all of them randomly when certain conditions are met. However, their performance improvement can be quite limited even in static environments, and most of them introduce more parameters and/or thresholds to accomplish this, making the PSO variant even more parameter dependent. Therefore, they do not yet yield a reliable solution for dynamic environments, which usually have a multi-modal nature and high dimensionality.

Another major drawback of the basic PSO and the aforementioned variants is that they can only be applied to a search (solution) space with a fixed dimensionality. However, in many optimization problems, the optimum dimension is also unknown (e.g. data clustering, object extraction, optimization of dynamic functions, etc.) and should thus be determined within the PSO process. Take, for example, color-based image segmentation as a data clustering application, where the optimum dimension of the solution space corresponds to the true number of clusters (segments) in the data (color) space, which cannot be known beforehand. In such a case the PSO process should perform a multi-dimensional search in order to determine both the true (optimum) number of clusters and the optimum centroid location for each cluster. The problem becomes even more challenging when it is applied over a dynamic environment such as video, where both the number of clusters (segments) and their centroids (dominant colors) change over time. Yet since the change between consecutive frames is minor rather than drastic, instead of performing a new clustering in the color domain via a multi-dimensional search for each frame, the PSO process can, for a new (next) frame, establish a follow-up mechanism to track the optimum (number of) clusters (segments) from the previous frame. Therefore, the past history of global and local optima becomes crucial information in the search for the current optima (both dimension and location).

In this paper, we shall first introduce a recent technique that significantly improves the global convergence performance of PSO by fractionally forming an artificial Global Best particle (aGB). This algorithm, the so-called Fractional Global Best Formation (FGBF), collects the best dimensional components from each swarm particle and fractionally creates the aGB, which replaces gbest as a guide for the swarm if it turns out to be better than the swarm’s native gbest particle. We then propose a novel multi-swarm algorithm, which combines multi-swarms with the FGBF technique so that each swarm can apply FGBF distinctively. By applying the proposed techniques to the conventional MPB, we shall show that they can find and track the global peak efficiently, usually in the early stages. For multi-dimensional dynamic environments where the optimum dimension also changes over time, we shall then introduce the multi-dimensional (MD) PSO technique, which re-forms the native structure of swarm particles in such a way that they can make inter-dimensional passes with a dedicated dimensional PSO process. Therefore, in a multi-dimensional search space where the optimum dimension is unknown, swarm particles can seek both positional and dimensional optima. This eventually pushes the frontier of optimization problems in dynamic environments towards a global search in a multi-dimensional space, where the problems in each dimension are possibly multi-modal and dependent on each other in a certain manner. Since the conventional MPB is created for a unique (fixed) dimensionality, we shall propose a multi-dimensional extension of the benchmark, in which there exists a unique optimum dimension where the global (highest) peak is located. The optimum dimension can also change over time within a dimension range. In order to provide a certain degree of dependency among individual dimensions, peaks in different dimensions share the common coordinates of the peak locations. This is accomplished by subtracting a penalty term, whose magnitude depends on a dimensional error, from the landscape height in non-optimal dimensions. As a result, over the extended MPB, MD PSO can simultaneously seek both the optimum dimension and the global peak in it. FGBF is an add-on to both multi-swarms and MD PSO that can enhance their performance. We shall show that when a multi-dimensional search is needed, the best performance is achieved by their mutual operation. In recent works, both MD PSO and FGBF have been successfully applied to static problems such as general data clustering (Kiranyaz, Ince, Yildirim, & Gabbouj, 2010) and evolutionary artificial neural networks (Ince et al., 2009), respectively.
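The core FGBF step described above can be sketched as follows. Since picking the “best dimensional component” requires a problem-specific per-dimension score, the dim_score function below is an assumed surrogate, and the whole snippet is an illustrative skeleton rather than the paper’s exact procedure:

```python
import numpy as np

def fgbf_step(positions, dim_score, fitness, gbest, gbest_fit):
    """positions: (S, N) swarm positions; dim_score(j, value): lower-is-better
    score of a single jth-dimensional component (problem-specific surrogate);
    fitness: full fitness function (higher is better); gbest, gbest_fit:
    current global best position and its fitness."""
    S, N = positions.shape
    aGB = np.empty(N)
    for j in range(N):
        # pick the jth component from the particle whose component scores best
        scores = [dim_score(j, positions[a, j]) for a in range(S)]
        aGB[j] = positions[int(np.argmin(scores)), j]
    aGB_fit = fitness(aGB)
    # aGB replaces gbest as the swarm's guide only if it is actually better
    return (aGB, aGB_fit) if aGB_fit > gbest_fit else (gbest, gbest_fit)
```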

The rest of the paper is organized as follows. Section 2 surveys related work on PSO and the MPB. The proposed techniques, MD PSO and multi-swarms with FGBF, and their applications over the (extended) MPB are presented in detail in Section 3. Section 4 presents the experiments conducted and discusses the results. Finally, Section 5 concludes the paper.


The basic PSO algorithm

In the basic PSO method (bPSO), a swarm of particles flies through an N-dimensional search space, where each particle represents a potential solution to the optimization problem. Each particle with index a in the swarm, ξ = {x_1, …, x_a, …, x_S}, is represented by the following characteristics:

  • x_{a,j}(t): jth dimensional component of the position of particle a, at time t.

  • v_{a,j}(t): jth dimensional component of the velocity of particle a, at time t.

  • y_{a,j}(t): jth dimensional component of the personal best (pbest) position of particle a, at time t.
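With this notation, the bPSO velocity and position updates take the standard form given in Kennedy and Eberhart (1995), with the inertia weight of Shi and Eberhart (1998). Here \hat{y}_j(t) denotes the jth component of the gbest position, w the inertia weight, c_1 and c_2 the acceleration coefficients, and r_{1,j}(t), r_{2,j}(t) random numbers drawn uniformly from [0, 1]:

$$v_{a,j}(t+1) = w\,v_{a,j}(t) + c_1\,r_{1,j}(t)\,\bigl(y_{a,j}(t) - x_{a,j}(t)\bigr) + c_2\,r_{2,j}(t)\,\bigl(\hat{y}_j(t) - x_{a,j}(t)\bigr)$$

$$x_{a,j}(t+1) = x_{a,j}(t) + v_{a,j}(t+1)$$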

The proposed optimization technique for dynamic environments

In this section, we first introduce the multi-swarms with FGBF technique and its application to the MPB. The multi-dimensional extension of PSO, the so-called MD PSO, is detailed next. Finally, we show their mutual application over the (extended) MPB.
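As a sketch of how the multi-swarm mechanism keeps swarms on distinct peaks, following the mutual-repulsion idea of Blackwell and Branke (2004a) cited above, the bookkeeping might look as follows; the exclusion radius and reinitialization policy are illustrative assumptions, not the paper’s exact settings:

```python
import numpy as np

def apply_exclusion(swarms, r_excl, reinit):
    """swarms: list of swarm objects with .gbest (ndarray position) and
    .gbest_fit (float); r_excl: exclusion radius; reinit(swarm): re-randomizes
    a swarm's particles. If two swarms' global bests come closer than r_excl,
    the worse swarm is reinitialized so each swarm tracks a distinct peak."""
    for i in range(len(swarms)):
        for k in range(i + 1, len(swarms)):
            if np.linalg.norm(swarms[i].gbest - swarms[k].gbest) < r_excl:
                worse = min(swarms[i], swarms[k], key=lambda s: s.gbest_fit)
                reinit(worse)  # evict the colliding (worse) swarm from the peak
```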

Experimental results

An extensive set of experiments was conducted over both the conventional (uni-dimensional) MPB and the extended (multi-dimensional) MPB; the results are presented in the following sub-sections.

Conclusions

In this paper, we presented two PSO techniques, namely FGBF with multi-swarms and MD PSO, for efficient and robust optimization over dynamic systems. Both MD PSO and FGBF have been successfully used in static optimization problems, particularly as a cure for common drawbacks of the family of PSO methods, such as the a priori fixation of the search-space dimension and premature convergence to local optima. MD PSO efficiently addresses the former drawback by defining a new particle structure and a dedicated dimensional PSO process through which particles can make inter-dimensional passes.

References (53)

  • Abraham, A., Das, S., & Roy, S. (2007). Swarm intelligence algorithms for data clustering. In Soft computing for...
  • Angeline, P. J. (1997). Tracking extrema in dynamic environments.
  • Bäck, T. (1998). On the behaviour of evolutionary algorithms in dynamic environments. In Proceedings of the IEEE...
  • Bäck, T., & Kursawe, F. (1995). Evolutionary algorithms for fuzzy logic: A brief overview. Fuzzy logic and soft...
  • Bäck, T., & Schwefel, H.-P. (1993). An overview of evolutionary algorithms for parameter optimization. Evolutionary Computation.
  • Blackwell, T. M. (2007). Particle swarm optimization in dynamic environments. Evolutionary Computation in Dynamic and Uncertain Environments, Studies in Computational Intelligence.
  • Blackwell, T. M., & Branke, J. (2004a). Multi-swarm optimization in dynamic environments. Applications of evolutionary...
  • Blackwell, T. M., & Branke, J. (2004b). Multi-swarms, exclusion, and anti-convergence in dynamic environments. IEEE Transactions on Evolutionary Computation.
  • Branke, J. (2008). Moving Peaks Benchmark. http://www.aifb.uni-karlsruhe.de/∼jbr/MovPeaks/, viewed...
  • Chen, X., et al. (2007). A modified PSO structure resulting in high exploration ability with convergence guaranteed. IEEE Transactions on Systems, Man, and Cybernetics, Part B.
  • Chen, Y.-P., et al. (2007). Particle swarm optimization with recombination and dynamic linkage discovery. IEEE Transactions on Systems, Man, and Cybernetics, Part B.
  • Christopher, K. M., & Seppi, K. D. (2004). The Kalman swarm: A new approach to particle motion in swarm optimization....
  • Clerc, M. (1999). The swarm and the queen: Towards a deterministic and adaptive particle swarm optimization. In...
  • Eberhart, R., & Shi, Y. (2001). Tracking and optimizing dynamic systems with particle swarms. In Proceedings of...
  • Eberhart, R., et al. (1996). Computational intelligence: PC tools.
  • Engelbrecht, A. P. (2005). Fundamentals of computational swarm intelligence.
  • Esquivel, S. C., & Coello, C. A. (2003). On the use of particle swarm optimization with multimodal functions. IEEE Transactions on Evolutionary Computation.
  • Fayyad, U. M., et al. (1996). Advances in knowledge discovery and data mining.
  • Goldberg, D. (1989). Genetic algorithms in search, optimization and machine learning.
  • Higashi, H., & Iba, H. (2003). Particle swarm optimization with Gaussian mutation. In Proceedings of the IEEE swarm...
  • Ince, T., et al. (2009). A generic and robust system for automated patient-specific classification of electrocardiogram signals. IEEE Transactions on Biomedical Engineering.
  • Janson, S., & Middendorf, M. (2005). A hierarchical particle swarm optimizer and its adaptive variant. IEEE Transactions on Systems, Man, and Cybernetics, Part B.
  • Kaewkamnerdpong, B., & Bentley, P. J. (2005). Perceptive particle swarm optimization: An investigation. In Proceedings...
  • Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In Proceedings of the IEEE international conference on...
  • Kiranyaz, S., et al. (2010). Fractional particle swarm optimization in multi-dimensional search space. IEEE Transactions on Systems, Man, and Cybernetics, Part B.
  • Koza, J. (1992). Genetic programming: On the programming of computers by means of natural selection.

¹ This work was supported by the Academy of Finland, project No. 213462 (Finnish Centre of Excellence Program, 2006–2011).
