Overview & Motivation

Modern research in artificial intelligence strives as much for realism in behaviour as for intelligence itself; the aim is to reproduce human-like intelligence, rather than just generic computer intelligence. One field with realism at heart is animat research, which focuses on creating virtual animals, or even human-like synthetic creatures.

This essay describes a major trend in this field: embodied agents. They are also known as embedded agents, but the potential confusion with portable electronics is a reason to shy away from that terminology. That said, throughout this essay the words embeddedness and embodiment will be used interchangeably.

We'll start by defining what's so special about this kind of agent. Then we'll look at the motivation behind creating such agents and discuss the challenges they entail. Finally, we'll wrap up by looking at the applications of this technology, and how everything relates to game AI projects.


Some of you may be familiar with the word ``agent'' in the context of computer science. It usually applies to a smart piece of software that can perform tasks in a somewhat intelligent way. This includes web spiders, virtual assistants like that annoying Word paperclip, or IRC bots. What do these entities have in common? They are purely virtual; they do not have a body of any kind -- or if they do, it has no use. In that sense, they have more freedom, since they do not obey the fundamental rules of physics (only electronic ones).

In virtual worlds, there are artificial animals, synthetic creatures, animats. Like their biological counterparts, they have a body to deal with. Admittedly, in real life those bodies are subject to genuine physical constraints, whereas in a simulation the constraints are just programmed rules. Fundamentally, though, it's the same thing.

An embodied agent is an autonomous living creature, subject to the constraints of its environment.

In effect, this is just a consequence of giving the agent a body to control! The agent is the piece of software, the brain if you will. The body is the interface between the brain and the world; it provides sensations, and can execute actions. In many respects, it can be considered a limitation of the agent's capabilities, but I prefer to see it as the definition of the agent's purpose.
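The brain/body split described above can be sketched in a few lines of code. This is a minimal illustration, not any particular engine's API; all the names (`Body`, `Brain`, `sense`, `act`, `max_speed`) are hypothetical.

```python
class Body:
    """The interface between the brain and the world: it provides
    sensations and executes actions, within physical limits."""

    def __init__(self, max_speed=2.0):
        self.position = 0.0
        self.max_speed = max_speed  # a physical constraint of this body

    def sense(self):
        # Sensations: only what the body can perceive is exposed.
        return {"position": self.position}

    def act(self, velocity):
        # The body, not the brain, enforces its own constraints.
        clamped = max(-self.max_speed, min(self.max_speed, velocity))
        self.position += clamped


class Brain:
    """The agent proper: pure decision-making, no direct world access."""

    def think(self, sensations):
        # Head towards the origin, as fast as it believes possible.
        return -sensations["position"]


# One tick of the simulation loop.
body, brain = Body(), Brain()
body.position = 10.0
body.act(brain.think(body.sense()))  # brain asks for -10, body clamps to -2
print(body.position)  # 8.0
```

Note that the brain is free to request impossible actions; the body silently limits them, which is exactly the sense in which it defines the agent's purpose rather than merely restricting it.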

So, in essence, embedding is about actively enforcing these constraints. There are more and less proactive ways of doing this, depending on how much realism is involved. As you may expect, it is quite lax in computer games, whereas academic research tends to take a more authentic approach.



The major reason for choosing embodied agents is realism. You're actually using a biologically inspired, physically accurate simulation of the animat's body, so many of the behaviours observed will appear authentic. This is an intrinsic consequence of embeddedness, and it has many practical examples. Here are two quick ones to whet your appetite:

These properties are what some game AI developers are striving for, without necessarily realising the grand scheme of things. Picking out and simulating a small subset of "embeddedness" is probably slightly more efficient, but the nasty idiosyncrasies and artefacts that often result make them pay the price.


When you actively define what an animat's body is capable of, you're essentially defining its behaviour. There will still be room for individual differences, just as humans exhibit unique characteristics despite very similar physiology. For embodied agents, such a specification is a first step towards standardisation. Not only would this make the task of researchers and developers much simpler, it would also allow underlying AI modules to be tested on equal terms.

If you lurk in gamers' (or even developers') forums after a new game with good AI comes out, you'll often hear requests for AI hardware. While I'm not sure time will change my mind on this topic, I believe the debate itself is about three years premature. Why? There is no robust specification for the underlying AI; many techniques have proven themselves valuable, yet none has a clear advantage over the others. On the other hand, the interface with the environment will change very little over the next few years. Eventually, when it does require extending, defining a backwards-compatible specification should be no problem.


When developing complex programs, software models that promote abstraction, black boxes and object-oriented design tend to shine through. AI is no different. The AI in current game simulations remains trivial enough for this not to be an issue yet, but as games scale up in complexity and behavioural realism, the underlying code can be expected to grow as well.

The embodied animat concept is ideally equipped to expose such modularity: there's the body, and there's the brain. The task of designing and implementing these modules is naturally split between the engine coders, who provide the body, and the AI developers, who provide the brain.
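One way to picture this division of labour is an agreed body interface that both sides program against. The sketch below uses Python's standard `abc` module; all other names (`BodyInterface`, `GridBody`, `SeekerBrain`) are hypothetical, purely for illustration.

```python
from abc import ABC, abstractmethod


class BodyInterface(ABC):
    """The contract agreed between engine coders and AI developers."""

    @abstractmethod
    def sense(self) -> dict: ...

    @abstractmethod
    def act(self, command: str) -> None: ...


class GridBody(BodyInterface):
    """Engine side: one concrete body, here in a toy 1D grid world."""

    def __init__(self):
        self.x = 0

    def sense(self):
        return {"x": self.x}

    def act(self, command):
        self.x += {"left": -1, "right": 1}.get(command, 0)


class SeekerBrain:
    """AI side: knows nothing about the engine, only BodyInterface."""

    def __init__(self, goal):
        self.goal = goal

    def think(self, sensations):
        return "right" if sensations["x"] < self.goal else "left"


# The same brain could drive any BodyInterface implementation.
body, brain = GridBody(), SeekerBrain(goal=3)
for _ in range(3):
    body.act(brain.think(body.sense()))
print(body.x)  # 3
```

Because the brain only ever touches `BodyInterface`, porting it to another engine means writing a new body, not a new brain.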

This paradigm, like modular programming in general, is ideal for portability. Bots can be exported from current projects and live beyond a single game engine; the major part of the code can thereby be reused.

Remember you can visit the Message Store to discuss this essay. Comments are always welcome! There are already replies in the thread, why not join in?