"An overview of evolutionary computation 001.ps.gz" - читать интересную книгу автора



An Overview of Evolutionary Computation


Xin Yao

Computational Intelligence Group, Department of Computer Science, University College, The University of New South Wales, Australian Defence Force Academy, Canberra, ACT, Australia 2600

Email: [email protected]

Abstract

This paper presents a brief overview of the field of evolutionary computation. Three major research areas of evolutionary computation are discussed: evolutionary computation theory, evolutionary optimisation and evolutionary learning. The state of the art and open issues in each area are addressed. It is argued that while evolutionary computation techniques have enjoyed great success in many engineering applications, progress in theory has been rather slow. The paper also gives a brief introduction to parallel evolutionary algorithms. Two models of parallel evolutionary algorithms, the island model and the cellular model, are described.

1 Introduction

The field of evolutionary computation has grown rapidly in recent years [1, 2, 3]. Engineers and scientists with quite different backgrounds have come together to tackle some of the most difficult problems using a very promising set of stochastic search algorithms: evolutionary algorithms (EAs). There are several different types of EAs: genetic algorithms (GAs) [4, 5], evolutionary programming (EP) [6, 7] and evolution strategies (ESs) [8, 9]. Each type has numerous variants arising from different parameter settings and implementations. The answer to the question of which EA is best is problem dependent; there is no universally best algorithm that achieves the best result for all problems. One of the challenges to researchers is to identify and characterise which EA is best suited to which problem.

EAs have two prominent features which distinguish them from other search algorithms. First, they are all population-based. Second, there are communication and information exchange among individuals in a population. Such communication and information exchange are the result of selection, competition and recombination. A simple EA can be summarised by Figure 1, where the search operators are often called genetic operators in GAs. Different representation (encoding) schemes, selection schemes, and search operators define different EAs. For example, GAs normally use crossover and mutation as search operators, while EP only uses mutation. GAs often emphasise genetic evolution, while EP puts more emphasis on the evolution of behaviours.
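As an illustration, the loop in Figure 1 can be sketched in a few lines of Python. The bit-string encoding, the one-max fitness function, roulette-wheel selection and all parameter values below are assumptions made for this sketch only; they are not prescribed by the paper.

import random

POP_SIZE, GENOME_LEN, GENERATIONS = 30, 20, 50
CROSSOVER_RATE, MUTATION_RATE = 0.7, 0.02

def fitness(individual):
    # Toy "one-max" fitness: count the number of 1-bits (an assumption).
    return sum(individual)

def select(population):
    # Fitness-proportional (roulette-wheel) selection of one parent.
    total = sum(fitness(ind) for ind in population)
    pick = random.uniform(0, total)
    running = 0.0
    for ind in population:
        running += fitness(ind)
        if running >= pick:
            return ind
    return population[-1]

def crossover(p1, p2):
    # One-point crossover applied with probability CROSSOVER_RATE.
    if random.random() < CROSSOVER_RATE:
        point = random.randint(1, GENOME_LEN - 1)
        return p1[:point] + p2[point:]
    return p1[:]

def mutate(individual):
    # Bit-flip mutation applied gene by gene.
    return [1 - g if random.random() < MUTATION_RATE else g
            for g in individual]

# Step 1: generate the initial population P(0) at random.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

# Steps 2(a)-(c): evaluate fitness, select parents, apply search operators.
for generation in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

print("best fitness:", max(fitness(ind) for ind in population))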

* This work is partially supported by a University College Special Research Grant. The paper is based on an invited tutorial given at ICYCS'95. To be published in Chinese Journal of Advanced Software Research (Allerton Press, Inc., New York, NY 10011), Vol. 3, No. 1, 1996.


1. Generate the initial population P(0) at random, and set i = 0;
2. REPEAT
   (a) Evaluate the fitness of each individual in P(i);
   (b) Select parents from P(i) based on their fitness;
   (c) Apply search operators to the parents to obtain the next generation P(i + 1), and set i = i + 1;
3. UNTIL the population converges or the time is up

Figure 1: A Simple Evolutionary Algorithm.

1.1 Evolutionary Algorithms as Generate-and-Test Search

Although EAs are often introduced from the point of view of survival of the fittest and the analogy to natural evolution, they can also be understood through the framework of generate-and-test search. The advantage of introducing EAs as a type of generate-and-test search algorithm is that the relationships between EAs and other search algorithms, such as simulated annealing (SA), tabu search (TS), hill-climbing, etc., can be made clearer and thus easier to explore. Under the framework of generate-and-test search, different search algorithms investigated in artificial intelligence, operations research, computer science, and evolutionary computation can be unified. Such cross-disciplinary studies are expected to generate more insights into the search problem in general. A general framework of generate-and-test search is shown in Figure 2.

1. Generate the initial solution at random and denote it as the current solution;
2. Generate the next solution from the current one by perturbation;
3. Test whether the newly generated solution is acceptable;
   (a) Accept it as the current solution if yes;
   (b) Keep the current solution unchanged otherwise.
4. Go to Step 2 if the current solution is not satisfactory; stop otherwise.

Figure 2: A General Framework of Generate-and-Test.

It is quite clear that various hill-climbing algorithms can be described by Figure 2 with different strategies for perturbation. They all require the new solution to be no worse than the current one in order to be acceptable. SA does not have such a requirement: it accepts a worse solution with a certain probability. The difference among classical SA [10], fast SA [11], very fast SA [12], and a new SA [13] lies mainly in their perturbations, i.e., the methods used to generate the next solution.
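To make the distinction concrete, the following sketch instantiates the generate-and-test loop of Figure 2 with two interchangeable acceptance rules: strict hill-climbing acceptance and a Metropolis-style probabilistic acceptance in the spirit of SA. The one-dimensional objective, the Gaussian perturbation and the cooling schedule are illustrative assumptions, not details taken from the paper.

import math
import random

def objective(x):
    # Toy objective to be minimised; purely illustrative.
    return x * x + 10.0 * math.sin(x)

def perturb(x, step=0.5):
    # Step 2: generate the next solution from the current one by
    # perturbation (here a small Gaussian move, an assumption).
    return x + random.gauss(0.0, step)

def accept_hill_climbing(current_cost, new_cost, _temperature):
    # Hill-climbing: the new solution must be no worse than the current one.
    return new_cost <= current_cost

def accept_simulated_annealing(current_cost, new_cost, temperature):
    # SA: always accept improvements; accept a worse solution with
    # probability exp(-(new_cost - current_cost) / temperature).
    if new_cost <= current_cost:
        return True
    return random.random() < math.exp(-(new_cost - current_cost) / temperature)

def generate_and_test(accept, iterations=10_000, temperature=1.0, cooling=0.999):
    # Step 1: generate the initial solution at random.
    current = random.uniform(-10.0, 10.0)
    for _ in range(iterations):
        # Steps 2-3: generate a candidate and test whether it is acceptable.
        candidate = perturb(current)
        if accept(objective(current), objective(candidate), temperature):
            current = candidate
        temperature *= cooling  # only relevant to the SA acceptance rule
    return current, objective(current)

print("hill-climbing:", generate_and_test(accept_hill_climbing))
print("simulated annealing:", generate_and_test(accept_simulated_annealing))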