What applications of EAs are there?
How can their results be applied to Software Engineering?

In principle, EAs can compute any computable function, i.e. everything a normal digital computer can do.

EAs are ill-suited for problems where efficient solution methods are already known (unless these problems are intended to serve as benchmarks). Special-purpose algorithms, i.e. algorithms that have a certain amount of problem-domain knowledge hard-coded into them, will usually outperform EAs, so there is no black magic in EC.

EAs should be used when there is no other known, efficient problem-solving strategy and the problem is NP-complete or otherwise intractable. That's where EAs come into play: heuristically finding solutions where all else fails.

Following is an incomplete list of successful EA applications:

Timetabling - This has been addressed quite successfully with GAs. A very common manifestation of this kind of problem is the timetabling of exams or classes in Universities, etc.

The first application of GAs to the timetabling problem was to build the schedule of the teachers in an Italian high school. The research, conducted at the Department of Electronics and Information of Politecnico di Milano, Italy, showed that a GA was as good as Tabu Search, and better than simulated annealing, at finding teacher schedules satisfying a number of hard and soft constraints. The software package developed is now in use in some high schools in Milano. (Colorni et al. 1990)

At the Department of Artificial Intelligence, University of Edinburgh, timetabling the MSc exams is now done using a GA (Corne et al. 93, Fang 92). An example of the use of GAs for timetabling classes is (Abramson & Abela 1991).

In the exam timetabling case, the fitness function for a genome representing a timetable involves computing degrees of punishment for various problems with the timetable, such as clashes, instances of students having to take consecutive exams, instances of students having (e.g.) three or more exams in one day, the degree to which heavily-subscribed exams occur late in the timetable (which makes marking harder), overall length of the timetable, etc.

The modular nature of the fitness function holds the key to the main potential strength of using GAs for this sort of problem, as opposed to using conventional search and/or constraint programming methods. The power of the GA approach is the ease with which it can handle arbitrary kinds of constraints and objectives; all such things can be handled as weighted components of the fitness function, making it easy to adapt the GA to the particular requirements of a very wide range of possible overall objectives. Very few other timetabling methods, for example, deal with such objectives at all, which shows how difficult it is (without GAs) to graft the capacity to handle arbitrary objectives onto the basic "find shortest-length timetable with no clashes" requirement. The proper way to weight/handle different objectives in the fitness function in relation to the general GA dynamics remains, however, an important research problem!
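The weighted, modular penalty structure described above can be sketched as follows. This is a minimal illustration, not the code of the cited systems: the timetable representation, constraint set, and weights are all our assumptions.

```python
# Sketch of a modular, weighted-penalty fitness for exam timetabling.
# A timetable maps each exam to an integer time slot; `students` lists
# the exams each student takes. All names and weights are illustrative.

def timetable_fitness(timetable, students, weights):
    clashes = consecutive = 0
    for exams in students:
        slots = sorted(timetable[e] for e in exams)
        for a, b in zip(slots, slots[1:]):
            if a == b:
                clashes += 1          # two exams in the same slot (hard constraint)
            elif b - a == 1:
                consecutive += 1      # back-to-back exams (soft constraint)
    length = max(timetable.values()) - min(timetable.values()) + 1
    penalty = (weights["clash"] * clashes
               + weights["consecutive"] * consecutive
               + weights["length"] * length)
    return -penalty                   # the GA maximizes fitness
```

Adding a new objective is just a matter of adding one more weighted term to the penalty sum, which is exactly the flexibility argued for above.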

GAs thus offer a combination of simplicity, flexibility & speed which competes very favorably with other approaches, but are unlikely to outperform knowledge-based (etc) methods if the best possible solution is required at any cost. Even then, however, hybridisation may yield the best of both worlds; also, the ease (if the hardware is available!) of implementing GAs in parallel enhances the possibility of using them for good, fast solutions to very hard timetabling and similar problems.

Job-Shop Scheduling - The Job-Shop Scheduling Problem (JSSP) is a very difficult NP-complete problem which, so far, seems best addressed by sophisticated branch and bound search techniques. GA researchers, however, are continuing to make progress on it. Davis started off GA research on the JSSP; Whitley reports on using the edge recombination operator (designed initially for the TSP) on JSSPs. More recent work includes Nakano, and others. These report increasingly better results on using GAs on fairly large benchmark JSSPs; none consistently outperforms branch & bound search yet, but they seem well on the way. A crucial aspect of such work (as with any GA application) is the method used to encode schedules. An important aspect of some of the recent work is that better results have been obtained by rejecting the conventional wisdom of using binary representations in favor of more direct encodings. For example, a genome may directly encode operation completion times, while in other research genomes represent implicit instructions for building a schedule. The success of these latter techniques, especially since their applications are very important in industry, should eventually spawn advances in GA theory.

Concerning the point of using GAs at all on hard job-shop scheduling problems, the same goes here as suggested above for `Timetabling': The GA approach enables relatively arbitrary constraints and objectives to be incorporated painlessly into a single optimization method. It is unlikely that GAs will outperform specialized knowledge-based and/or conventional OR-based approaches to such problems in terms of raw solution quality, however GAs offer much greater simplicity and flexibility, and so, for example, may be the best method for quick high-quality solutions, rather than finding the best possible solution at any cost. Also, of course, hybrid methods will have a lot to offer, and GAs are far easier to parallelize than typical knowledge-based/OR methods.
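One well-known direct (non-binary) JSSP encoding of the kind mentioned above is the operation-based encoding: the genome is a sequence of job indices, and a decoder builds a schedule from it greedily. The sketch below is illustrative only; the job data and decoding policy are our assumptions, not taken from the cited work.

```python
# Operation-based JSSP encoding sketch: each job index appears in the
# genome once per operation of that job; reading left to right, the
# decoder schedules that job's next operation at the earliest feasible time.
# jobs[j] is a list of (machine, duration) pairs in processing order.

def decode(genome, jobs):
    n_jobs = len(jobs)
    next_op = [0] * n_jobs            # next unscheduled operation per job
    job_ready = [0] * n_jobs          # time each job's last operation ends
    machine_ready = {}                # time each machine becomes free
    for j in genome:
        machine, dur = jobs[j][next_op[j]]
        start = max(job_ready[j], machine_ready.get(machine, 0))
        job_ready[j] = machine_ready[machine] = start + dur
        next_op[j] += 1
    return max(job_ready)             # makespan of the decoded schedule
```

Because any permutation with the right multiplicities decodes to a feasible schedule, ordinary GA operators never produce invalid genomes.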

Other scheduling problems

In contrast to job shop scheduling some maintenance scheduling problems consider which activities to schedule within a planned maintenance period, rather than seeking to minimise the total time taken by the activities. The constraints on which parts may be taken out of service for maintenance at particular times may be very complex, particularly as they will in general interact.

Human and Computer Immunology - GAs can be used to evolve behaviors for detecting foreign bit streams, such as viruses, by mimicking the human immune system and the manner in which T-cells attack viruses. Stephanie Forrest of the University of New Mexico is working on both human and computer immunology:

"Immune systems are adaptive systems in which learning takes place by evolutionary mechanisms similar to biological evolution. Their major function is to provide a defense mechanism for the body that can identify dangerous foreign material and eliminate it. Understanding the immune system is important, both because of its role in complex diseases such as AIDS and because of potential applications to computational problems. The mechanisms of the immune system are remarkably complex and poorly understood, even by immunologists.

Our immune system models are based on a universe in which antigens (foreign material) and lymphocytes (the cells that do the recognition of foreign material) are represented by strings of discrete symbols. The strings represent receptors on B cells and T cells and epitopes on antigens---the regions of the cells in which binding takes place. The complex chemistry of molecular binding is modeled in our system by string matching."

This model was used to study several different aspects of the immune system, including its ability to detect common patterns (schemas) in noisy environments, its ability to discover and maintain coverage of diverse pattern classes, and its ability to learn effectively, even when not all antibodies are expressed and not all antigens are presented. Current research directions include modeling original antigenic sin and location-specific behavior (immune cells that behave differently depending on their physical location in the body). This work is in collaboration with Alan Perelson.

The research direction for computer immunology was to build an artificial immune system for a computer. "This immune system would have much more sophisticated notions of identity and protection than those afforded by current operating systems, and it would provide a general-purpose protection system to augment current computer security systems."

She and her colleagues developed a computer virus detection system based on immunological principles. The change-detection algorithm defines a set in terms of its complement. This allows them to create a detection system with the following properties: it is probabilistic and tunable (the probability of detection can be traded off against CPU time and space), it can be distributed (providing high system-wide reliability at low individual cost), and it can detect novel viruses that have not previously been identified.
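The "define a set in terms of its complement" idea can be sketched as negative selection: generate random detectors, discard any that match "self" data, and flag anything a surviving detector matches as foreign. The r-contiguous-bits matching rule and all parameters below are illustrative assumptions, not the published system.

```python
import random

def matches(a, b, r):
    """True if bit-strings a and b agree on r contiguous positions."""
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        if run >= r:
            return True
    return False

def generate_detectors(self_set, n, length, r, rng):
    """Keep random candidates only if they match no self string."""
    detectors = []
    while len(detectors) < n:
        cand = "".join(rng.choice("01") for _ in range(length))
        if not any(matches(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

def is_foreign(string, detectors, r):
    return any(matches(string, d, r) for d in detectors)
```

By construction, self strings are never flagged; detection of novel strings is probabilistic and tunable via n and r, mirroring the trade-off described above.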

She is currently working on anomaly-detection methods with the ability to detect several common intrusions among processes.

Game Playing - GAs can be used to evolve behaviors for playing games. Work in evolutionary game theory typically concerns the evolution of a population of players who meet randomly to play a game in which each must adopt one of a limited number of moves (Maynard Smith 1982). Let's suppose it is just two moves, X and Y. The players receive a reward, analogous to Darwinian fitness, depending on which combination of moves occurs and which move they adopted. In more complicated models there may be several players and several moves.

The players iterate such a game a series of times, and then move on to a new partner. At the end of all such moves, the players will have a cumulative payoff, their fitness. This fitness can then be used as a means of conducting something akin to Roulette-Wheel selection to generate a new population.
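Roulette-wheel (fitness-proportionate) selection, as mentioned above, can be sketched in a few lines. The population and fitness values in the test are illustrative.

```python
import random

# Roulette-wheel selection sketch: each individual is chosen with
# probability proportional to its share of the total fitness.

def roulette_select(population, fitnesses, rng):
    total = sum(fitnesses)
    pick = rng.uniform(0, total)      # spin the wheel
    acc = 0.0
    for individual, f in zip(population, fitnesses):
        acc += f
        if pick <= acc:
            return individual
    return population[-1]             # guard against floating-point drift
```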

The real key in using a GA is to come up with an encoding to represent players' strategies, one that is amenable to crossover and to mutation. One possibility is to suppose that at each iteration a player adopts X with some probability (and Y with one minus that probability). A player can thus be represented as a real number, or as a bit-string, by interpreting the decimal value of the bit-string as a probability.

An alternative characterisation is to model the players as Finite State Machines, or Finite Automata (FA). These can be thought of as a simple flow chart governing behaviour in the "next" play of the game depending upon previous plays. For example:

  100 Play X
  110 If opponent plays X go to 100
  120 Play Y
  130 If opponent plays X go to 100 else go to 120

This represents a strategy that does whatever its opponent did last, and begins by playing X; it is known as "Tit-For-Tat" (Axelrod 1982). Such machines can readily be encoded as bit-strings. Consider the encoding "1 0 1 0 0 1" to represent TFT. The first three bits, "1 0 1", are state 0. The first bit, "1", is interpreted as "Play X." The second bit, "0", is interpreted as "if opponent plays X, go to state 0"; the third bit, "1", is interpreted as "if the opponent plays Y, go to state 1." State 1 has a similar interpretation. Crossing over such bit-strings always yields valid strategies.

Simulations of these machines playing the iterated Prisoner's Dilemma have been undertaken (Axelrod 1987, Fogel 1993, Miller 1989).
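The 6-bit encoding above can be decoded and played directly. The following sketch uses 1 for move X and 0 for move Y, and assumes (as the text does) that both machines start in state 0; the decoding convention is ours.

```python
# Decode a 6-bit FSM genome: two states of (move, next_if_X, next_if_Y),
# where moves are 1 = X and 0 = Y, then play two machines against each other.

def decode(genome):
    return [(genome[0], genome[1], genome[2]),
            (genome[3], genome[4], genome[5])]

def play(genome_a, genome_b, rounds):
    a, b = decode(genome_a), decode(genome_b)
    sa = sb = 0                       # both machines start in state 0
    history = []
    for _ in range(rounds):
        move_a, move_b = a[sa][0], b[sb][0]
        history.append((move_a, move_b))
        sa = a[sa][1] if move_b == 1 else a[sa][2]
        sb = b[sb][1] if move_a == 1 else b[sb][2]
    return history
```

Two TFT machines cooperate forever (all-X), while TFT retaliates with Y from the second round onward against an all-Y opponent.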

Traveling Salesman Problem and Graph Theory - Many researchers have worked on generating solutions for this particular NP-complete problem in graph theory. No polynomial-time algorithm is known for NP-complete problems, so the cost of exact methods explodes as instances grow. The TSP can be solved exactly on a fast computer when the number of nodes is up to, say, 50; if the graph is larger, a heuristic approach is needed, and GAs are one alternative. Some researchers have formulated solutions by mapping the TSP to the Hamiltonian Circuit or Boolean Satisfiability problems. An on-line TSP solver can demonstrate one application of GAs to a particular instance of the problem.

The Traveling Salesman Problem (TSP) is a classical graph-theoretical problem. It involves a traveling salesman who has to visit each of the cities in a given network before returning to his starting point, at which time his trip is complete. The algorithmic problem asks for the shortest route, namely a closed tour that passes through each of the nodes in the graph and whose total cost (i.e. the sum of the distances labeling the edges) is minimal.
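For a GA on the Euclidean TSP, the objective is simply the tour length, and a simple mutation swaps two cities in a tour. The city coordinates and operators below are an illustrative sketch, not a complete GA.

```python
import math
import random

# Tour-length objective for the Euclidean TSP: a tour is a permutation
# of city indices; cost includes the closing edge back to the start.

def tour_length(tour, cities):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# Swap mutation: exchange two randomly chosen positions.  The result is
# always a valid permutation, so no repair step is needed.

def swap_mutation(tour, rng):
    i, j = rng.sample(range(len(tour)), 2)
    child = list(tour)
    child[i], child[j] = child[j], child[i]
    return child
```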

Its variants arise in the design of telephone networks and integrated circuits, in the planning of construction lines, and in the programming of industrial robots, to mention but a few applications. In all of these, the ability to find inexpensive tours of the graphs in question can be quite crucial.

Evolving Cellular Automata -

"The EVCA Project - How does evolution produce sophisticated emergent computation in systems composed of simple components limited to local interactions? In this project we are constructing models of such processes, using genetic algorithms to evolve cellular automata to perform computational tasks requiring globally-coordinated information processing. We are attempting to understand in detail the evolutionary mechanisms giving rise to the discovery of global coordination in such systems. We are also developing methods for analyzing the emergent computation that takes place in the evolved systems, using the "Computational Mechanics" framework as a starting point. A longer-term goal is to develop methods for automatic programming for large-scale decentralized multiprocessor systems." Programming in the large is one of the central themes of software engineering, and maintaining control and coordination of tasks in a decentralized environment presents unique concerns.

Who is concerned with EAs?

Evolutionary computation attracts researchers and people of quite dissimilar disciplines.
EC is an interdisciplinary research field:

Computer scientists: Want to find out about the properties of sub-symbolic information processing with EAs and about learning, i.e. adaptive systems in general.

They also build the hardware necessary to enable the application of future EAs (precursors are already beginning to emerge) to huge real-world problems; the term "massively parallel computation" [HILLIS92] springs to mind.

Engineers: (of varied disciplines) Want to exploit the capabilities of EAs in many areas to solve their applications, especially optimization, decision, and search problems.

Roboticists: Want to build MOBOTs (MOBile ROBOTs, i.e. R2D2's and #5's cousins) that navigate through uncertain environments, without using built-in "maps". The mobots thus have to adapt to their surroundings, and learn on their own, by "trial-and-error", what they can do ("move-through-door") and what they can't ("move-through-wall").

Cognitive scientists: Might view classifier systems (CFS) as a possible apparatus to describe models of thinking and cognitive systems.

Physicists: Use EC hardware, e.g. Hillis' (Thinking Machines Corp.'s) Connection Machine, to model real-world problems which include thousands of variables, that run "naturally" in parallel, and thus can be modelled more easily and especially "faster" on a parallel machine than on a serial one.

Biologists: In fact many working biologists are hostile to modeling, but an entire community of population biologists arose with the 'evolutionary synthesis' of the 1930s created by the polymaths R.A. Fisher, J.B.S. Haldane, and S. Wright. Wright's idea of selection in small populations (thereby avoiding local optima) is of current interest to both biologists and ECers -- populations are naturally parallel.

Philosophers: (and some other really curious people) may also be interested in EC for various reasons.

What about all this Optimization stuff?

Just think of an optimization problem as a black box. A large black box. As large as, for example, a Coca-Cola vending machine. Now, we don't know anything about the inner workings of this box, but we see that there are some regulators to play with, and of course we know that we want to have a bottle of the real thing...

Putting this everyday problem into a mathematical model, we proceed as follows:
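A minimal formalization (the notation is our assumption, since the original text leaves the model implicit): the box computes an unknown objective function of the n regulator settings, and we seek settings that maximize it.

```latex
% Black-box optimization: x is the vector of n regulator settings,
% f the unknown objective computed by the box.
f : M \subseteq \mathbb{R}^n \to \mathbb{R}, \qquad
\text{find } x^* \in M \text{ such that } f(x^*) = \max_{x \in M} f(x)
```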

What can we do in order to solve this problem? We can either try to gain more knowledge or exploit what we already know about the interior of the black box. If the objective function turns out to be smooth and differentiable, analytical methods will produce the exact solution.

If this turns out to be impossible, we might resort to the brute-force method of enumerating the entire search space. But with the number of possibilities growing exponentially in n, the number of dimensions (inputs), this method soon becomes infeasible, even for spaces of moderate dimension.
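The blow-up is easy to make concrete: discretizing each of n inputs into k values yields a grid of k**n points, every one of which brute-force enumeration must evaluate. The sketch below is illustrative.

```python
import itertools

# Brute-force enumeration over a discretized search space: with k grid
# points per dimension and n dimensions, the grid has k**n points, so
# the evaluation count grows exponentially in n.

def grid_search(f, points_per_dim, n):
    """Exhaustively evaluate f on {0, ..., k-1}^n; return the best point
    found and the number of evaluations performed."""
    best, evaluations = None, 0
    for x in itertools.product(range(points_per_dim), repeat=n):
        evaluations += 1
        if best is None or f(x) > f(best):
            best = x
    return best, evaluations
```

Even a coarse 10-point grid over 20 inputs already needs 10**20 evaluations, which is why the text dismisses enumeration.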

Consequently, mathematicians have developed theories for certain kinds of problems leading to specialized optimization procedures. These algorithms perform well if the black box fulfils their respective prerequisites. For example, Dantzig's simplex algorithm probably represents the best known multidimensional method capable of efficiently finding the global optimum of a linear, hence convex, objective function in a search space limited by linear constraints.

Gradient strategies are no longer tied to these linear worlds, but they smooth their world by exploiting the objective function's first partial derivatives, which one has to supply in advance. Therefore, these algorithms rely on a locally linear internal model of the black box.

Newton strategies additionally require the second partial derivatives, thus building a quadratic internal model. Quasi-Newton, conjugate gradient and variable metric strategies approximate this information during the search.

The deterministic strategies mentioned so far cannot cope with deteriorations, so the search will stop if anticipated improvements no longer occur. In a multimodal environment these algorithms move "uphill" from their respective starting points. Hence, they can only converge to the next local optimum.

Newton-Raphson methods might even diverge if a discrepancy between their internal assumptions and reality occurs. But of course, these methods turn out to be superior if a given task matches their requirements. The polyeder strategy, pattern search and rotating coordinate search, which do not rely on derivatives, should also be mentioned here because they represent robust non-linear optimization algorithms.

Dealing with technical optimization problems, one will rarely be able to write down the objective function in a closed form. We often need a simulation model in order to grasp reality. In general, one cannot even expect these models to behave smoothly. Consequently, derivatives do not exist. That is why optimization algorithms that can successfully deal with black box-type situations have been developed. The increasing applicability is of course paid for by a loss of "convergence velocity," compared to algorithms specially designed for the given problem. Furthermore, the guarantee to find the global optimum no longer exists!
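A simple optimizer in the black-box spirit just described is the (1+1) evolution strategy: mutate the current point with Gaussian noise and keep the child if it is no worse. It needs only function values, no derivatives. The fixed step size, iteration count, and test function below are illustrative choices.

```python
import random

# (1+1) evolution strategy sketch for black-box minimization: one parent,
# one Gaussian-mutated child per generation, greedy survivor selection.

def one_plus_one_es(f, x0, sigma=0.1, iterations=2000, seed=0):
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iterations):
        child = [xi + rng.gauss(0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:                  # keep the child if it is no worse
            x, fx = child, fc
    return x, fx
```

Because the strategy never accepts deteriorations, it shares the local-optimum limitation discussed above; practical ESs add step-size adaptation and populations. There is also no guarantee of finding the global optimum, only steadily non-worsening function values.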

But why turn to nature when looking for more powerful algorithms? In the attempt to create tools for various purposes, mankind has copied, more often instinctively than ingeniously, solutions invented by nature. Nowadays, one can prove in some cases that certain forms or structures are not only well adapted to their environment but have even reached the optimum (Rosen 67). This is due to the fact that the laws of nature have remained stable during the last 3.5 billion years. For instance, at branching points the measured ratio of the diameters in a system of blood-vessels comes close to the theoretical optimum provided by the laws of fluid dynamics (2^-1/3). This, of course, only represents a limited, engineering point of view on nature. In general, nature performs adaptation, not optimization.

The idea of imitating basic principles of natural processes for optimum-seeking procedures emerged more than three decades ago. Although these algorithms have proven to be robust and direct optimization tools, it is only in the last five years that they have caught the researchers' attention. This is due to the fact that many people still look at organic evolution as a giant-sized game of dice, thus ignoring the fact that this model of evolution cannot have worked: a human germ-cell comprises approximately 50,000 genes, each of which consists of about 300 triplets of nucleic bases. Although the four existing bases only encode 20 different amino acids, 20^15,000,000, i.e. circa 10^19,500,000, different genotypes would have had to be tested in only circa 10^17 seconds, the age of our planet. So, simply rolling the dice could not have produced the diversity of today's complex living systems.

Accordingly, taking random samples from the high-dimensional parameter space of an objective function in order to hit the global optimum must fail (Monte-Carlo search). But by looking at organic evolution as a cumulative, highly parallel sieving process, the results of which pass on, slightly modified, into the next sieve, the amazing diversity and efficiency of life on earth no longer appears miraculous. When building a model, the point is to isolate the main mechanisms which have led to today's world and which have been subjected to evolution themselves. Inevitably, nature has come up with a mechanism allowing individuals of one species to exchange parts of their genetic information (recombination or crossover), thus being able to meet changing environmental conditions in a better way.

