Abstract

The artificial immune system is one of the more recently introduced computational intelligence methods, inspired by the biological immune system. Most immune-system-inspired algorithms are based on the clonal selection principle and are known as clonal selection algorithms (CSAs). When coping with complex optimization problems characterized by multimodality, high dimensionality, rotation, and composition, traditional CSAs often suffer from premature convergence and unsatisfactory accuracy. To address these issues, a recombination operator inspired by biological combinatorial recombination is proposed first. The recombination operator generates promising candidate solutions that enhance the search ability of the CSA by fusing information from randomly chosen parents. Furthermore, a modified hypermutation operator is introduced to construct more promising and efficient candidate solutions. A set of 16 commonly used benchmark functions is adopted to test the effectiveness and efficiency of the recombination and hypermutation operators. Comparisons among the classic CSA, the CSA with the recombination operator (RCSA), and the CSA with both the recombination and modified hypermutation operators (RHCSA) demonstrate that the proposed algorithm significantly improves the performance of the classic CSA. Moreover, comparison with state-of-the-art algorithms shows that the proposed algorithm is quite competitive.

1. Introduction

Optimization techniques play a very important role in engineering design, commercial manufacture, financial markets, information science, and related areas. An optimization problem can be expressed as

min (or max) f(x),  x = (x_1, x_2, ..., x_D) ∈ Ω,  (1)

where x is a D-dimensional vector of decision variables in the feasible region Ω. The optimization could be either a minimization problem or a maximization problem. Traditional optimization algorithms often fail on multimodal, nonconvex, nondifferentiable problems, since most of them rely on gradient information [1]. In the past few decades, inspired by nature, people have developed many optimization methods to solve complicated optimization problems, including the genetic algorithm (GA), differential evolution (DE), particle swarm optimization (PSO), the artificial immune system (AIS), and some newer nature-inspired algorithms [2-7].
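
To make the class of problems concrete, a multimodal benchmark of the kind discussed in this literature can be coded in a few lines. The sketch below implements the well-known Rastrigin function in Python; it is our own illustrative example and is not claimed to be one of the paper's 16 benchmarks.

```python
import numpy as np

def rastrigin(x):
    """Rastrigin function: highly multimodal, with a regular lattice of
    local minima and the global minimum f(0, ..., 0) = 0."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

print(rastrigin(np.zeros(10)))  # -> 0.0 (the global optimum at the origin)
```

Gradient-based methods started anywhere but near the origin tend to stall in one of the many local minima, which is why population-based methods such as CSAs are attractive here.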

Among these, the artificial immune system (AIS) is a newly emerging computational paradigm inspired by the fundamentals of the immune system. It abstracts the structure and function of the biological immune system and exploits them to solve computational problems. In the area of optimization, numerical comparisons have demonstrated that the performance of AIS is competitive with that of other nature-inspired algorithms [8]. The clonal selection algorithm is one of the best-known AIS models and has been applied to solve many practical optimization problems [9-11]. It is inspired by the behavior of B cells in secreting antibodies to bind with invading antigens. In view of the good performance of clonal selection algorithms (CSAs) in computational optimization [12], our work in this paper concentrates on CSA.

CSAs have attracted a lot of attention since they were developed [13-15]. However, as the complexity of optimization problems increases, the deficiencies of CSA are gradually exposed. One of the main shortcomings is that hypermutation is the main operator modifying the construction of candidate solutions in traditional CSAs, and generally both global search and local search rely on adjusting the step size of the hypermutation, which is barely satisfactory for balancing diversity and convergence. In some improved versions, new randomly generated solutions are introduced by receptor editing or other mechanisms to increase diversity. However, the mechanism of randomly introduced solutions, together with step-size-controlled hypermutation, leads to a blind search for optima. As a result, premature convergence and diversity loss occur when dealing with high-dimensional, multimodal optimization problems.

In view of the above analysis, we are motivated to explore the untapped potential of clonal selection theory in this paper. Inspired by the immune response of B cells, a combinatorial recombination operator is introduced to share responsibility with hypermutation. In the proposed algorithm, the recombination operator enhances the search ability of the CSA, and the hypermutation operator is modified to generate more promising candidate solutions.

The rest of this paper is organized as follows. In Section 2, the general framework of the CSA is presented, and research on improved CSAs in the field of numerical optimization is reviewed. In Section 3, the RHCSA algorithm is proposed, based on the newly introduced recombination operator and the modified hypermutation operator. Section 4 presents and discusses the experimental results. Finally, the conclusion is drawn in Section 5.

2. CSA Framework

AIS has grown in popularity since the late 1990s, and several books, journal articles, and conference papers have been published in the past few years. de Castro and Timmis [16] describe the main models in AIS, such as the clonal selection, immune network, and negative selection theories. More detailed reviews of AIS and its applications can be found in [15, 17-19]. Here we give a brief literature review of contemporary research efforts on clonal selection based algorithms for numerical optimization.

2.1. Clonal Selection Algorithms (CSAs)

The main idea of clonal selection theory lies in the phenomenon whereby a B cell reacts to an invading antigen by modifying its receptor, called the antibody. The general version, named CLONALG [20], is one of the representative clonal selection algorithms. There are mainly three operations involved: cloning, hypermutation, and selection.

The general framework of clonal selection algorithm for optimization is presented as follows.

Framework of CSA

Step 1 (initialization). Randomly initialize antibody population.

Step 2 (evaluation). Evaluate the objective values of the antibody population as their fitness.

Step 3 (cloning). Generate copies of the antibodies.

Step 4 (hypermutation). Mutate all the generated copies.

Step 5 (selection). Select the one with the highest fitness to survive.

Step 6. Repeat Steps 2-5 until a termination criterion is met.
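
The steps above can be sketched in a few dozen lines. The implementation below is a minimal illustrative version for a minimization problem, not the exact CLONALG procedure: the Gaussian mutation with affinity-dependent amplitude, the 0.1 scale factor, and the parameter defaults are all our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def csa_minimize(f, dim, lower, upper, pop_size=30, n_clones=4,
                 rho=3.0, max_iter=100):
    """Minimal CLONALG-style loop: clone, hypermutate, select."""
    pop = rng.uniform(lower, upper, size=(pop_size, dim))      # Step 1
    for _ in range(max_iter):                                  # Step 6
        fitness = np.array([f(x) for x in pop])                # Step 2
        span = float(fitness.max() - fitness.min()) or 1.0
        affinity = 1.0 - (fitness - fitness.min()) / span      # best -> 1
        for i in range(pop_size):
            clones = np.repeat(pop[i][None, :], n_clones, 0)   # Step 3
            # Step 4: mutation amplitude inversely proportional to affinity.
            sigma = np.exp(-rho * affinity[i]) * 0.1 * (upper - lower)
            clones += rng.normal(0.0, sigma, size=clones.shape)
            clones = np.clip(clones, lower, upper)
            # Step 5: the fittest of {parent, clones} survives.
            cand = np.vstack([pop[i][None, :], clones])
            pop[i] = cand[np.argmin([f(x) for x in cand])]
    fitness = np.array([f(x) for x in pop])
    return pop[np.argmin(fitness)], float(fitness.min())
```

On a simple sphere function this sketch steadily improves every individual, since each antibody is only ever replaced by a fitter mutated clone of itself.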

The main features of the CSA framework are (i) the clone number, usually proportional to the affinity of the antibody with respect to the antigens; (ii) the mutation rate, normally inversely proportional to the affinity; and (iii) the absence of recombination operators (such as crossover in GAs). These characteristics expose deficiencies when facing high-dimensional, nonconvex multimodal, and multiobjective optimization problems. First, the cloning operator obviously consumes considerable computational resources. Second, hypermutation alone cannot bear the burden of balancing global search and local search, which leads to premature convergence and unsatisfactory accuracy. Third, the lack of interaction among individuals in the population may leave the search without global awareness, that is, overly searching one or a few areas of the search space while leaving the others unvisited.

2.2. Improved CSAs

To overcome the abovementioned shortcomings, a number of improved algorithms based on CSA have been proposed. These research works can be briefly classified into three categories.

2.2.1. Modification of the Operators or Introduction of New Operators within the CSA Framework

Cutello et al. introduced a real-coded clonal selection algorithm for global optimization involving a cloning operator, inversely proportional hypermutation, and an aging operator [21]. Three versions of the somatic contiguous hypermutation operator have been analyzed and proven to perform better than standard bit mutation on some types of optimization problems [22]. Khilwani et al. [23] proposed a fast clonal algorithm (FCA), which designs a parallel mutation operator comprising Gaussian and Cauchy mutation strategies. Lu and Yang [24] introduced Cauchy mutation in the improved CSA (IMCSA). Two versions of an immune algorithm, named OPT-IMMALG01 and OPT-IMMALG, combining the cloning operator, the inversely proportional hypermutation operator, the aging operator, and the selection operator under binary-coded and real-coded representations, respectively, have been discussed; the experimental results confirm the effectiveness of these algorithms in handling high-dimensional global numerical optimization problems [12]. A randomized clonal expansion strategy is proposed to solve high-dimensional global optimization problems in [25].

2.2.2. Combining CSA with Immune Network Theory

Immune network theory considers that immune cells have relations with each other, so that the cells, molecules, and other related substances construct a network. CSA combined with immune network theory is more suitable for handling multimodal function optimization.

The opt-AINet algorithm is an early version of CSA with artificial immune networks, proposed to solve multimodal continuous function optimization [26]; it introduces a uniform cloning operator, affinity-based Gaussian mutation, and a similarity-based suppressor. To enhance parameter adaptation, an improved adaptive artificial immune network called IA-AIS is proposed, where an affinity-based cloning operator, controlled affinity-based Gaussian mutation, and a dynamic suppressor are introduced [27]. By imitating the social behaviors of animals, a social learning mechanism is introduced into the immune network, yielding an algorithm called AINet-SL. Its population is separated into an elitist swarm (ES) and a common swarm (CS), and nonlinear affinity-based cloning, self-learning mutation, and social learning mutation are employed [28]. Based on the affinity measure, the candidate solution space is divided into the elitist space, the common space, and the poor space, and different hypermutation strategies are applied, respectively, in MPAINet [29]. The Concentration-Based Artificial Immune Network, where the concentration of antibodies is introduced to stimulate and maintain the diversity of the population, has been applied to continuous optimization [30], combinatorial optimization [31], and multiobjective optimization [32].

2.2.3. Hybrid CSA

CSA has also been hybridized with PSO, DE, and other evolutionary strategies. A hill-climbing local search operator is combined with an immune algorithm in [33]. The differential immune clonal selection algorithm (DICSA) was put forward by Gong et al. [34], where differential mutation and differential crossover operators are introduced. Orthogonal initialization and a neighborhood orthogonal cloning operator are proposed in the orthogonal immune algorithm (OIA) [35]. A CSA with a nondominated neighborhood selection strategy (NNIA) [36] was proposed for multiobjective optimization. Shang et al. [37] proposed an immune clonal algorithm (NICA) for multiobjective optimization problems, which improves on four aspects of the traditional clonal selection computing model.

The Baldwin effect has been introduced into CSA to formulate the Baldwinian clonal selection algorithm (BCSA), which guides the evolution of each antibody by the differential information of other antibodies in the population [38]. Gong et al. [39] substitute the mutation in CSA with a local search technique in the Lamarckian clonal selection algorithm (LCSA) and adopt a recombination operator and a tournament selection operator for numerical optimization. An enhanced CSA, the hybrid learning CSA (HLCSA) [40], introduces two learning mechanisms: Baldwinian learning and orthogonal learning.

Although many improvements on immune inspired algorithms have been realized, the abovementioned shortcomings of artificial immune algorithms, such as premature convergence, high computational cost, and unsatisfactory accuracy, which can negatively affect their application, remain a problem. The search ability of immune algorithms is limited when dealing with high-dimensional complex optimization problems with different characteristics. In this paper, we propose an improved CSA that introduces a recombination operator and modifies the hypermutation operator to cope with complex optimization problems with high-dimensional, nonconvex, rotated, and composite characteristics.

3. CSA with Combinatorial Recombination and Modified Hypermutation

3.1. Combinatorial Recombination

In immunology, the presence of both recombination and somatic mutation is responsible for the diversification of antibody genes. The recombination of the immunoglobulin gene segments is the first step when the cells are first exposed to an antigen. In the past few decades, recombination has mostly been discussed as crossover in GAs. However, as depicted in [41], the recombination of immunoglobulin genes involved in the production of antibodies differs from the recombination (crossover) of parental genes in sexual reproduction: in the former, nucleotides can be inserted and deleted randomly from the recombined gene segments, while in the latter the genetic mixture is generated from parental chromosomes.

The basic unit of the antibody molecule has a Y-shaped structure containing two identical light chains and two identical heavy chains, as shown in Figure 1(a). Variable regions located at the tips of the Y are primarily responsible for antigen recognition, and within these variable regions some polypeptide segments show exceptional variability. An individual synthesizes antibody molecules by encoding the amino acid sequences of the variable domains into DNA chains and by randomly selecting and recombining gene segments, as shown in Figure 1(b). During the development of a B cell, the gene segments in the libraries are combined and rearranged at the level of the DNA. Through this recombination of gene segments, a population of cells is created that varies widely in specificity; in this way, a limited number of immune cells can match a wide variety of antigens. After the bases of the antibody genes are altered, mutations fine-tune the lymphocyte receptor to better match the antigen [41, 42].

From the perspective of optimization, recombination functions as coarse-grained exploration, while hypermutation functions as fine-grained exploitation. Inspired by this, a combinatorial recombination operator is proposed as follows.

To avoid truncation error and coding complexity, our algorithm is real-coded, and each dimension of a solution is viewed as a gene segment. The whole population forms the gene fragment library. According to recombination in immunology, any orderly rearrangement of gene segments generates a new B cell. As presented in Figure 2, the recombination could be (a) between two individuals, as in crossover, or (b) among several individuals, as a combination of randomly selected gene segments. With the help of normalization, from a computational perspective, the combination of gene segments can follow a specific order, as in Figure 2(a), or be randomly arranged, as shown in Figure 2(b).

The former is similar to SBX recombination in GAs [42], where the crossover of parents swaps gene segments at one or many sites. To better simulate the arrangement of gene segments, line recombination, DE-inspired recombination [43], and intelligent recombination [44] have been proposed. There are many ways to perform the combination with optional joined gene segments, and the randomness of the arrangement and combination introduces diversity; however, too much diversity can harm performance. Based on experimental experience, our work focuses on recombination between two parents. A new combinatorial recombination operator combined with line recombination is proposed as follows.

Randomly choose two individuals from the population, denoted by x_a and x_b, and then randomly choose m dimensions from each of them, recording the chosen dimension indices as vectors p and q, respectively. The new individuals are generated by

x'_a(p_j) = r · x_a(p_j) + (1 − r) · x_b(q_j),
x'_b(q_j) = r · x_b(q_j) + (1 − r) · x_a(p_j),  j = 1, ..., m,  (2)

where r is a randomly produced number between 0 and 1. It should be noted that the range of each dimension of the decision variables should be normalized first. The newly proposed recombination operator is illustrated in Figure 3.

As shown in Figure 3, two new individuals are generated through the combinatorial recombination. In the example, m equals 3. Note that the index vector chosen from one parent can differ from the index vector chosen from the other, so a gene segment of one parent may recombine with a different segment of the other, as long as the normalization has been done.

Then, the fitness of the newly generated individuals is evaluated. Together with the original individuals, the two with the highest fitness survive and the other two are deleted; that is, two individuals with high fitness are chosen from the set {x_a, x_b, x'_a, x'_b} of parents and offspring. By selecting within this small set instead of comparing with the whole population and reserving the elite, better diversity is maintained.
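
Assuming real-coded, normalized variables and a minimization objective, the recombination and survivor-selection procedure described above can be sketched as follows; the function name, the index-vector handling, and the tie-breaking are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def combinatorial_recombination(xa, xb, m, f):
    """Sketch of the combinatorial recombination step (illustrative).

    Assumes all decision variables share a normalized range, so that gene
    segments (dimensions) from different indices can be recombined, and
    that lower f means higher fitness (minimization).
    """
    dim = xa.size
    ka = rng.choice(dim, size=m, replace=False)  # segment indices from parent a
    kb = rng.choice(dim, size=m, replace=False)  # segment indices from parent b
    r = rng.random()                             # blend factor in (0, 1)
    ya, yb = xa.copy(), xb.copy()
    # Line recombination on the chosen segments only.
    ya[ka] = r * xa[ka] + (1.0 - r) * xb[kb]
    yb[kb] = r * xb[kb] + (1.0 - r) * xa[ka]
    # Survivor selection: keep the two fittest of {parents, offspring}.
    cand = [xa.copy(), xb.copy(), ya, yb]
    cand.sort(key=f)
    return cand[0], cand[1]
```

Because survivors are chosen only within this set of four, a mediocre offspring can still survive alongside a strong parent, which is exactly the diversity-preserving effect described above.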

3.2. Modified Hypermutation

The hypermutation operator brings diversity to the population by introducing a perturbation into each clone. Although there are several ways to implement this operator [22], an inversely proportional strategy is almost always the basis. The concept of the operator proposed in [12] is adopted in this work, where each candidate solution is subject to M mutations without explicitly using a mutation probability. The inversely proportional law is used to determine the number of mutations M:

M = ⌊α · D⌋ + 1,  α = e^(−ρ · f̂(x)),  (3)

where f̂(x) is the normalized fitness of x, ρ is the decay constant which determines the shape of the mutation rate, and ⌊·⌋ returns the lower bound integer. Then, mutation is performed on M randomly chosen dimensions of each candidate solution:

v_{i,j} = x_{i,j} + φ_{i,j} · (x_{r1,j} − x_{r2,j}),  (4)

where x_{i,j} is the jth dimension of the ith individual, r1 and r2 are randomly chosen indices without repetition, and φ_{i,j} is a random number in the range [−1, 1]. In this way, the amplitude of the hypermutation is controlled automatically by the difference of randomly selected individuals in the population. The mutation equation (4) can be considered a variant of the differential evolution mutation; it was introduced in [45] to modify the original food-source update of the artificial bee colony (ABC) algorithm, where it was shown to enhance the search ability of ABC. In our proposed algorithm, the equation is combined with the inversely proportional M strategy of the hypermutation as depicted above. The M strategy controls the direction, that is, along how many dimensions to mutate, while the equation controls the distance of the mutated clones from their parents. With the combination of both, the amplitude of the hypermutation is automatically controlled according to the distribution of the population.
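
A sketch of this modified hypermutation for a single clone might look like the following: the inversely proportional law chooses how many dimensions to mutate, and the difference-based update sets the amplitude. The decay constant value and the clamping of M to the dimensionality are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def hypermutate(pop, i, norm_fitness, rho=3.0):
    """Sketch of the modified hypermutation for individual i (illustrative).

    norm_fitness holds fitness normalized to [0, 1], where 1 is best, so
    fitter individuals receive fewer mutated dimensions.
    """
    dim = pop.shape[1]
    alpha = np.exp(-rho * norm_fitness[i])    # inversely proportional law
    M = int(alpha * dim) + 1                  # at least one dimension mutates
    dims = rng.choice(dim, size=min(M, dim), replace=False)
    clone = pop[i].copy()
    for j in dims:
        # Amplitude from the difference of two randomly chosen individuals.
        r1, r2 = rng.choice(pop.shape[0], size=2, replace=False)
        phi = rng.uniform(-1.0, 1.0)
        clone[j] = pop[i, j] + phi * (pop[r1, j] - pop[r2, j])
    return clone
```

Note the self-adaptive effect: as the population converges, the differences pop[r1, j] − pop[r2, j] shrink, so the hypermutation automatically changes from global exploration to local fine-tuning.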

3.3. Framework of the Proposed Algorithm

Combined with the proposed combinatorial recombination and hypermutation operator, the framework of the proposed algorithm is as follows.

Step 1 (initialization). Randomly initialize a population of N individuals, where N denotes the size of the initial population. Each initial solution is produced randomly within the boundaries of the decision space:

x_{i,j} = L_j + rand(0, 1) · (U_j − L_j),  (5)

where i = 1, ..., N and j = 1, ..., D, where D is the dimension of the decision variables, and L_j and U_j are the lower and upper bounds for dimension j, respectively.

Step 2 (evaluation). Evaluate the objective value of each solution as its fitness.

Step 3 (recombination). Randomly choose two individuals from the population and implement the combinatorial recombination as described in Section 3.1. Recombination is executed with a probability given by the recombination rate (set to 0.7 in this paper; see Section 4.1).

Step 4 (clonal selection). (i) Cloning: each individual generates the same number of copies; the clone number is a user-defined constant.
(ii) Hypermutation: each clone goes through the hypermutation described in Section 3.2, generating the hypermutated clones.
(iii) Selection: for each individual, select the one with the highest fitness among the original individual and its hypermutated clones.

Step 5. If the stopping condition is not met, go to Step 2. Otherwise, output the best individual of the population.

It can be observed that, instead of being proportional to the fitness, the clone number is a constant, which not only saves computational resources but also prevents individuals from concentrating in some areas of the decision space while leaving others unvisited when handling multimodal problems.
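
Putting Steps 1-5 together, the whole loop can be condensed into a short sketch. It is simplified to one recombination event per generation, the number of recombined segments is fixed to half the dimensionality, and the parameter defaults are illustrative rather than the paper's tuned settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def rhcsa(f, dim, lower, upper, pop_size=30, n_clones=4,
          pr=0.7, rho=3.0, max_iter=80):
    """Compact sketch of the RHCSA loop (Steps 1-5), for minimization."""
    pop = rng.uniform(lower, upper, size=(pop_size, dim))   # Step 1
    fit = np.array([f(x) for x in pop])                     # Step 2
    m = max(1, dim // 2)         # recombined segments (assumed setting)
    for _ in range(max_iter):
        # Step 3: combinatorial recombination with rate pr.
        if rng.random() < pr:
            a, b = rng.choice(pop_size, size=2, replace=False)
            ka = rng.choice(dim, size=m, replace=False)
            kb = rng.choice(dim, size=m, replace=False)
            r = rng.random()
            ya, yb = pop[a].copy(), pop[b].copy()
            ya[ka] = r * pop[a, ka] + (1 - r) * pop[b, kb]
            yb[kb] = r * pop[b, kb] + (1 - r) * pop[a, ka]
            cand = sorted([pop[a].copy(), pop[b].copy(), ya, yb], key=f)
            pop[a], pop[b] = cand[0], cand[1]               # two fittest of four
            fit[a], fit[b] = f(pop[a]), f(pop[b])
        # Step 4: constant-size cloning plus modified hypermutation.
        span = float(fit.max() - fit.min()) or 1.0
        norm = 1.0 - (fit - fit.min()) / span               # normalized affinity
        for i in range(pop_size):
            M = int(np.exp(-rho * norm[i]) * dim) + 1       # mutated dimensions
            best, best_f = pop[i].copy(), fit[i]
            for _ in range(n_clones):
                clone = pop[i].copy()
                for j in rng.choice(dim, size=min(M, dim), replace=False):
                    r1, r2 = rng.choice(pop_size, size=2, replace=False)
                    clone[j] += rng.uniform(-1, 1) * (pop[r1, j] - pop[r2, j])
                clone = np.clip(clone, lower, upper)
                cf = f(clone)
                if cf < best_f:        # selection: fittest of parent + clones
                    best, best_f = clone, cf
            pop[i], fit[i] = best, best_f
    return pop[np.argmin(fit)], float(fit.min())            # Step 5 output
```

On a simple sphere function this sketch converges toward the origin; it is meant only to make the control flow concrete, not to reproduce the paper's reported results.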

4. Experimental Studies on Function Optimization Problems

In this section, experiments are conducted to evaluate the performance of RHCSA on 16 commonly used global optimization benchmark problems. These functions comprise unimodal functions, unrotated multimodal functions, rotated multimodal functions, and composite functions. Table 1 gives the expressions of the benchmark functions. The detailed characteristics of these functions can be found in [1, 46].

The introduced parameters are analyzed first, and then the proposed algorithm is compared with the traditional CSA on the benchmark functions. Finally, comparisons between the proposed algorithm and state-of-the-art evolutionary computing models are presented, together with some discussion of the performance of RHCSA. It should be noted that the same experimental setup is used for all the compared algorithms; that is, all algorithms in this paper are compared under the same configuration.

4.1. Experimental Parameters Settings

The experiments were conducted on the 16 benchmark functions. The maximum number of function evaluations (mFES) is set to 10,000 × D for the 10-D and 30-D cases. Each test is run 30 independent times.

Only a few parameters are introduced in the proposed algorithm. For a fair comparison among CSAs, the same settings are used for all of them: the population size is set to 30 and the clone number to 4, as in [40]. Furthermore, two new parameters are introduced: m, which controls the direction of the combinatorial recombination, and the recombination rate. Tables 2 and 3 give the experimental results of varying m on the 16 benchmark functions with 10-D and 30-D. For convenience, the experiment uses four proportions of the dimension D as candidate values of m.

The recombination operator introduces diversity into the population by fusing information between randomly chosen parents. When m is small, the generated offspring is very similar to one of the chosen parents, and only a few dimensions obtain information from both parents. When m is large, the generated offspring tends to be a fusion of the chosen parents. That is to say, m controls whether the offspring resembles one of the chosen parents or a fusion of both. From Tables 2 and 3, we find that the differences among the tested values of m are quite small. On closer observation, the larger settings of m compare favorably with the others for both D = 10 and D = 30. It can be observed that the experimental results are not sensitive to the parameter m and that a slightly larger m may be beneficial for the algorithm. In view of this observation, the larger setting of m is chosen in our experiments.

Another parameter is the recombination rate, which controls how often the recombination process is executed: the higher the rate, the more recombination operations are performed. In the early stage, the population is uniformly distributed in the search space, and recombination of randomly chosen individuals brings diversity to the population. Diversity is beneficial for enhancing the search ability of the algorithm, but too much diversity slows convergence. Tables 4 and 5 present the experimental results of varying the recombination rate.

From Tables 4 and 5, it can be observed that RHCSA performs quite robustly across the varying recombination rates. By comparison, a recombination rate of 0.7 achieves the best performance and is adopted in this paper.

4.2. Comparisons with Classic CSA

These experiments check whether the proposed operators benefit the performance of the algorithm. First, the recombination operator is added to the original CSA, yielding RCSA; then the modified hypermutation is also introduced, yielding RHCSA.

Table 6 shows the statistical results (mean and standard deviation) of CLONALG, RCSA, and RHCSA in optimizing the 16 test problems with D = 10, based on 30 independent runs. From the table, we can observe that, for all these test instances, the introduced recombination operator clearly improves the performance of the CSA. The reason is that the recombination operator brings diversity, which enhances the search ability and helps avoid being trapped in local optima. The results on the multimodal functions in particular demonstrate the effect of the introduced recombination operator. With the modified hypermutation operator, the performance of the algorithm improves further. This is because both recombination and hypermutation use the difference of chosen individuals to generate candidate solutions in the proposed algorithm: in the early stage, the population is spread over the search space and the differences are relatively large, which benefits global search; then, as the iterations proceed, the population converges toward the optima and the differences between individuals become smaller and smaller, so the search turns into a local search. It can be observed that RCSA surpasses CSA and is surpassed by RHCSA on all the tested functions. It can be concluded that the proposed recombination and hypermutation operators are effective in improving the ability of the CSA.

Table 7 shows the experimental results of the algorithms with D = 30. Similar results are observed, which indicates that RHCSA is able to handle high-dimensional optimization problems as well.

4.3. Comparisons with the State-of-the-Art Algorithms

To compare RHCSA with the state-of-the-art algorithms, experimental results of six representative evolutionary algorithms are listed alongside those of RHCSA in Tables 8 and 9. These algorithms are the Baldwinian clonal selection algorithm (BCSA) [38], the hybrid learning clonal selection algorithm (HLCSA) [40], orthogonal crossover based differential evolution (OXDE) [47], self-adaptive differential evolution (SaDE) [48], the global and local real-coded genetic algorithm (GL-25) [49], and comprehensive learning particle swarm optimization (CLPSO) [1].

Table 8 presents the mean values and standard deviations of the seven algorithms on the 16 test functions with D = 10, where the best results are shown in boldface. First of all, RHCSA performs best on the six unrotated multimodal functions and exactly locates the global optima of seven of the test functions. In particular, RHCSA surpasses all the other algorithms with great superiority on two of the functions. Moreover, RHCSA, HLCSA, BCSA, SaDE, and CLPSO can all obtain the global minimum of 0 on several functions; RHCSA, HLCSA, BCSA, SaDE, and OXDE reach the global optimum of one further function, and RHCSA, HLCSA, SaDE, OXDE, and GL-25 find the optimum of another. Though RHCSA does not obtain the best results on a few functions, it still finds solutions comparable with the best results obtained by the other algorithms. On the composite problems, RHCSA reaches the global best, as do HLCSA, OXDE, SaDE, and GL-25, and on one composite function RHCSA surpasses all the other algorithms except HLCSA.

Similar results can be observed from Table 9 when D equals 30. RHCSA consistently obtains good results, which are even slightly better than those on the 10-D problems. RHCSA achieves the best performance on ten of the functions compared with the other algorithms. Even on the three functions where RHCSA cannot find the best results, its results are very close to the best obtained by the other algorithms; for example, on one of them the best result is found by OXDE while RHCSA obtains a comparable result. RHCSA greatly improves the results on three functions and obtains comparatively good results on three others. It can be observed that RHCSA scales well when dealing with high-dimensional optimization problems.

5. Conclusion

This paper presents a new clonal selection based algorithm in which combinatorial recombination and a modified hypermutation are introduced to enhance the search ability. The proposed algorithm is tested on 16 commonly used benchmark functions with unimodal, multimodal, rotated, and composite characteristics. First, the CSAs with and without the recombination operator are compared; the experimental results indicate that the proposed recombination operator improves the performance of the CSA, and the modified hypermutation yields a further improvement. Finally, the proposed algorithm is compared with state-of-the-art algorithms, and the experimental results show the competitiveness of RHCSA.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (no. 61403349, no. 61402422, no. 61501405, no. 61401404, and no. 61302118), Science and Technology Research Key Project of Basic Research Projects in Education Department of Henan Province (no. 15A520033, no. 14B520066, and no. 17B510011), and Doctoral Foundation of Zhengzhou University of Light Industry (no. 2013BSJJ044) and in part by the Program for Science and Technology Innovation Talents in Universities of Henan Province, under grant no. 17HASTIT022.