Learning by Evolution

As mentioned in the introduction, this tutorial will consider the genetic algorithm as a big black box. This will allow us to understand its purpose from a conceptual point of view, by describing the overall approach to the problem.


Genetic algorithms can be considered an optimisation tool, allowing us to find an optimal set of values for the weights of the neural network. The quality of a solution is judged by a fitness function. This defines the fitness space for our problem: with two parameters, for example, it would be a landscape, with high mountains corresponding to the best solutions and the lowest crevasses representing unfit ones. The process of computing the fitness of a solution is called evaluation. As the problem gets larger, this becomes an increasingly expensive task. It is also slightly tricky in a noisy environment, as the trials need to be longer to prevent lucky breaks from having too much impact on the final fitness value!
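One common way to tame that noise is to average the fitness over several independent trials rather than trusting a single run. A minimal sketch of this idea (the `run_trial` callback and the noise model are assumptions for illustration, not part of the tutorial's code):

```python
import random

def evaluate(candidate, run_trial, n_trials=5):
    """Average fitness over several independent trials so that a single
    lucky (or unlucky) run does not dominate the final score."""
    scores = [run_trial(candidate) for _ in range(n_trials)]
    return sum(scores) / len(scores)

# Hypothetical noisy trial: the candidate's true fitness plus simulation noise.
def noisy_trial(candidate):
    return candidate["true_fitness"] + random.uniform(-1.0, 1.0)

random.seed(0)
bot = {"true_fitness": 10.0}
print(evaluate(bot, noisy_trial, n_trials=50))  # close to 10.0
```

The trade-off is exactly the one mentioned above: more trials give a more trustworthy fitness value, but each extra trial makes evaluation more expensive.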

Fitness Landscape

Figure 8: Top down view of a fitness landscape, showing poor solutions in black and good solutions in white.

The task of the optimisation algorithm is to find the best solution, i.e. the highest point on that landscape, while using the minimal number of evaluations. This process becomes more challenging as the interaction between the variables increases (i.e. with a non-linear problem).

It will suffice to say for the moment that genetic algorithms are well suited to this, since they can cope efficiently even with rugged, non-linear fitness spaces. We will consider genetic algorithms in more detail in the next tutorial. This one is getting too crowded anyway, as there's already enough interesting information to keep you busy for a while!

Fitness Function

As mentioned, the key to the optimisation process is the fitness function, as it implicitly defines the behaviour of the solution. This allows us to control the outcome of the evolution by high-level parameters, which are conceptually simple. Admittedly, this can be fairly tedious, just like any other incremental design process. However, the behaviour can be tweaked to provide any desired characteristics, so it seems worth having such power available.

The main task of our fitness function is to reward quality obstacle avoidance behaviour! Other fitness components will be added on top of this. As mentioned in the design section, we must attempt not to constrain the scope of the solution, allowing it to evolve to something we may not have thought of. This can be achieved by using an implicit fitness function: rather than telling the optimisation algorithm exactly what we want by rewarding similarity to that behaviour, we simply punish it when it makes fatal mistakes. For obstacle avoidance, this means inflicting a penalty based on collision with objects.
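An implicit fitness function of this kind can be sketched very simply: reward survival time and subtract a penalty per collision, without describing the desired behaviour itself. The penalty weight here is a hypothetical value to be tuned, not one prescribed by the tutorial:

```python
def implicit_fitness(collisions, trial_length, penalty=5.0):
    """Implicit fitness: start from the trial length (time survived) and
    subtract a fixed penalty for each collision.  Nothing here tells the
    bot *how* to avoid obstacles -- only fatal mistakes are punished."""
    return trial_length - penalty * collisions

print(implicit_fitness(collisions=0, trial_length=100.0))  # 100.0
print(implicit_fitness(collisions=4, trial_length=100.0))  # 80.0
```

Because the function never rewards similarity to a reference behaviour, the evolved controller is free to find any collision-free strategy, including ones we did not anticipate.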

If you try evolving the solution with just this component in the fitness function, you will notice a few things. Firstly, the final bot may spin like crazy, getting nowhere since its forward movement cancels out any progress in a particular direction. Secondly, the bot may oscillate left and right, displaying unrealistic behaviour. The solution is to modify the fitness function to mimic the human attitude: punish oscillatory movement, and reward laziness.
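Those two extra components can be bolted onto the collision penalty. In the sketch below, oscillation is measured as the number of left/right sign flips in successive turn commands, and laziness is rewarded by a bonus for low total turning effort; all the weights and helper names are assumptions for illustration:

```python
def refined_fitness(collisions, turns, trial_length,
                    collision_penalty=5.0,
                    oscillation_penalty=0.5,
                    laziness_reward=0.1):
    """Refined fitness: the implicit collision penalty, plus a penalty
    for oscillation and a bonus for lazy (low-effort) steering."""
    # Oscillation: count left/right reversals between consecutive turns.
    flips = sum(1 for a, b in zip(turns, turns[1:]) if a * b < 0)
    # Laziness: the less the bot turns overall, the bigger the bonus.
    effort = sum(abs(t) for t in turns)
    bonus = laziness_reward * max(0.0, trial_length - effort)
    return (trial_length
            - collision_penalty * collisions
            - oscillation_penalty * flips
            + bonus)

steady  = [0.1] * 10        # gentle, consistent steering
twitchy = [0.5, -0.5] * 5   # rapid left-right oscillation
print(refined_fitness(0, steady, 10.0) > refined_fitness(0, twitchy, 10.0))  # True
```

As with any incremental design process, the weights would need tweaking until the evolved behaviour looks right; the key point is only that smooth, economical movement must score higher than twitchy movement.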

Remember, you can visit the Message Store to discuss this tutorial. Comments are always welcome! There are already replies in the thread, so why not join in?