This section discusses a few things to keep in mind when designing agents for the kinds of worlds that Asimulator simulates. An action that is based solely on the current percept is called a reflex. A reflex may be good enough in some cases, such as grabbing an item upon finding it. An agent could be purely reflexive, wandering around randomly and grabbing items. It could also have a certain probability of shutting off when perceiving Home.
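A purely reflexive agent can be sketched as a single function from percept to action. This is a minimal sketch, not Asimulator's actual API: the percept strings ("item", "home") and action strings ("grab", "shutoff", "forward", "left", "right") are hypothetical names chosen for illustration.

```python
import random

def reflex_action(percept, rng=random.Random()):
    """Map the current percept directly to an action -- no memory at all."""
    if percept == "item":
        return "grab"            # always grab an item when standing on one
    if percept == "home" and rng.random() < 0.1:
        return "shutoff"         # small chance to shut off when perceiving Home
    # Otherwise wander: mostly go forward, sometimes turn.
    return rng.choice(["forward", "forward", "left", "right"])
```

Because the function consults nothing but its argument (and a random source), the agent has no way to remember where it has been, which is exactly the limitation the next paragraph addresses.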
But to really succeed in the world, agents need to act upon more information than the current percept. So all but the simplest agents will need some memory, called state. What this state should be depends on the class of worlds that the agent is designed to operate in. A regular class of worlds can be explored in a fixed pattern. For example, the class of worlds where all interior tiles are clear, all border tiles are walls, and the agent starts in a known corner can be explored by going in a spiral (see the sample agent Einar). In that case the state can be very simple. But for an agent to operate in irregular classes of worlds (for example, where interior tiles may be walls), the state must be more complex.
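For that regular class of worlds, the spiral pattern can be sketched as a plain coordinate walk. This is an assumption-laden illustration, not Einar's actual code: it models an n-by-n clear interior with the agent starting in corner (0, 0), and turns right whenever the tile ahead is a border wall or has already been visited.

```python
def spiral_path(n):
    """Visit every tile of an n-by-n clear interior in an inward spiral,
    starting at corner (0, 0). Returns the list of (x, y) tiles in order."""
    visited = set()
    path = []
    x, y = 0, 0
    dx, dy = 1, 0                       # start heading along the first row
    for _ in range(n * n):
        path.append((x, y))
        visited.add((x, y))
        nx, ny = x + dx, y + dy
        # Turn when the tile ahead is outside the interior or already visited.
        if not (0 <= nx < n and 0 <= ny < n) or (nx, ny) in visited:
            dx, dy = -dy, dx            # rotate the heading 90 degrees
            nx, ny = x + dx, y + dy
        x, y = nx, ny
    return path
```

The state needed while walking this pattern is tiny: the current position, heading, and the set of visited tiles, with no map of walls at all, which is why such a regular world class keeps the agent so simple.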
Let us assume that no tiles are known to the agent when the simulation begins. Then all the information that the agent receives comes as percepts. If the agent remembers all the percepts it has received and all the actions it has performed, it knows everything it could know. (In fact, storing the actions is redundant: the agent could infer them from the percepts.) All the information that was ever available to the agent is in the percept/action sequence. But this raw data, let us call it level 0 information, is not suitable for inferring which action should be performed next.
The information has to be organized better. Most people would find it obvious to store the information in the form of a map and a variable that tells the coordinates and direction of the agent. Updating this information, let us call it level 1 information, is somewhat more complicated than simply adding a percept/action pair to the sequence. But now we have really usable information. The agent knows that tiles that it has been on are clear and tiles that it has bumped into are walls. It also knows which tiles it has not yet explored, let us call them unknowns.
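A sketch of such level 1 state might look as follows. The percept and action names ("bump", "forward", "left", "right") are again hypothetical stand-ins for whatever the simulator actually provides; the point is the shape of the update: fold one action and the resulting percept into a map plus a pose.

```python
DIRS = [(0, -1), (1, 0), (0, 1), (-1, 0)]    # north, east, south, west

class Level1State:
    def __init__(self):
        self.tiles = {(0, 0): "clear"}       # known tiles; everything else is unknown
        self.pos = (0, 0)
        self.heading = 0                     # index into DIRS

    def update(self, action, percept):
        """Fold one action and the resulting percept into map and pose."""
        if action == "left":
            self.heading = (self.heading - 1) % 4
        elif action == "right":
            self.heading = (self.heading + 1) % 4
        elif action == "forward":
            dx, dy = DIRS[self.heading]
            ahead = (self.pos[0] + dx, self.pos[1] + dy)
            if percept == "bump":
                self.tiles[ahead] = "wall"   # we hit a wall and stayed put
            else:
                self.pos = ahead
                self.tiles[ahead] = "clear"  # we moved, so the tile is clear

    def unknowns(self):
        """Unknown tiles adjacent to known clear tiles -- the frontier."""
        return {(x + dx, y + dy)
                for (x, y), kind in self.tiles.items() if kind == "clear"
                for dx, dy in DIRS
                if (x + dx, y + dy) not in self.tiles}
```

Note how the three facts from the paragraph fall out directly: tiles the agent has been on are recorded as clear, tiles it has bumped into as walls, and `unknowns()` enumerates the tiles it has not yet explored.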
But this is not all there is. While the map is a matrix, it is possible to see the world from a yet higher level, which we will call level 2 information. For example, instead of just knowing which tiles are unknown, it is possible to think of contiguous areas of unknown tiles. See the example agent Gunnar, which does this and uses the information to explore small areas of unknowns first.
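Grouping unknown tiles into contiguous areas is a standard flood fill. The sketch below is one way to do it, not necessarily how Gunnar does: it partitions a set of unknown coordinates into 4-connected areas and returns the smallest first, so that small areas can be explored (and finished off) before large ones.

```python
from collections import deque

def unknown_areas(unknown):
    """Partition a set of unknown tiles into 4-connected contiguous areas,
    returned smallest first."""
    unknown = set(unknown)               # work on a copy
    areas = []
    while unknown:
        seed = unknown.pop()             # start a new area from any tile
        area, queue = {seed}, deque([seed])
        while queue:
            x, y = queue.popleft()
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in unknown:        # neighbour belongs to the same area
                    unknown.remove(nb)
                    area.add(nb)
                    queue.append(nb)
        areas.append(area)
    return sorted(areas, key=len)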
But contiguous areas of unknown tiles are not the only example of level 2 information that can be seen in the example agent Gunnar. The known tiles are also thought of as contiguous areas, called rooms, delimited by walls and something called passages (in short, narrow places, with walls on both sides, where the agent can pass through). The agent marks all tiles in a room as such, which keeps the pathfinding from searching through the rooms using the raw tile matrix. Instead the pathfinding uses precalculated routes, called tunnels, through the rooms. This speeds up pathfinding, but updating the tunnel information is much more complicated than simply updating the map.
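A first building block for such room decomposition is recognizing passage candidates. The predicate below is a simplified guess at the idea described above (a clear tile with walls on both sides), not Gunnar's actual rule; `tiles` is assumed to map coordinates to "clear" or "wall", with absent entries meaning unknown.

```python
def is_passage(tiles, x, y):
    """True if (x, y) is a known clear tile squeezed between two walls,
    either north/south or east/west -- a passage candidate."""
    if tiles.get((x, y)) != "clear":
        return False
    ns = tiles.get((x, y - 1)) == "wall" and tiles.get((x, y + 1)) == "wall"
    ew = tiles.get((x - 1, y)) == "wall" and tiles.get((x + 1, y)) == "wall"
    return ns or ew
```

Cutting the known area at such tiles yields the rooms; the tunnels are then precomputed shortest routes between the passages of each room, which is where the extra bookkeeping cost comes from.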
The step from level 1 to level 2 information is somewhat like the step from raster data to vector data in computer graphics.