Agent Architectures
[Gat, 1991-1993] ATLANTIS (A Three-Layer Architecture for Navigating Through Intricate Situations). Gat's main thesis is that the argument of planning versus reactivity is really an argument about the use of internal state, which is an argument about making predictions about the world. He proposes that internal state should be maintained only at a high level of abstraction, and that it should be used to guide a robot's actions but not to control them directly.
The fundamental problem of state is that every piece of internal state carries with it an implicit prediction that the information contained in this state will continue to be valid for some time to come. These predictions, coupled with the unpredictability of the world, are the root of the problem. The constructive observation is that state should only be used to record facts that are likely to be at least usually correct.
Abstraction is simply a description of something. The level of an abstraction is simply an inverse measure of the precision of the description. The more abstract, the more broadly applicable the information. Gat gives the example of the location of his house (he knows roughly, but not exactly, where it is). By combining abstract state information with local sensors, he is able to find the house or the mustard in his refrigerator.
Actions based on stored internal state sometimes fail. In humans, occasional errors of this sort are not a problem for two reasons: First, people use the information in their world models to guide their actions but not to control them directly. Second, people can tell when things go wrong and can take corrective action. We carefully engineer our environment to eliminate the opportunity to make the sort of mistakes that are difficult to recover from -- falling off cliffs, for example.
The ATLANTIS action model is based on operators whose execution consumes negligible time; they therefore do not themselves bring about changes in the world but instead initiate processes which then cause change. These processes are called activities, and the operators which initiate (and terminate) them are called decisions.
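The decision/activity split above can be sketched in a few lines. This is a hypothetical illustration, not Gat's implementation: the class and method names (Activity, DecisionLayer, initiate, terminate) are invented for the example.

```python
class Activity:
    """A long-running process that causes change in the world."""
    def __init__(self, name):
        self.name = name
        self.running = False

class DecisionLayer:
    """Decisions consume negligible time: they only start or stop activities."""
    def __init__(self):
        self.active = {}

    def initiate(self, activity):
        activity.running = True
        self.active[activity.name] = activity

    def terminate(self, name):
        act = self.active.pop(name, None)
        if act:
            act.running = False
        return act

layer = DecisionLayer()
wander = Activity("wander")
layer.initiate(wander)      # decision: instantaneous, just flips state
assert wander.running       # the activity is what runs over time
layer.terminate("wander")   # decision: stop it again
assert not wander.running
```

The point of the split is that the decision itself never blocks: all elapsed time belongs to the activity.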
The architecture consists of three components: the controller, the sequencer, and the deliberator.
The controller is responsible for moment-by-moment control of the robot's actuators in response to the current value of the robot's sensors. The controller is a purely reactive control system.
To support the kinds of transfer functions needed to control the reactive robot, the ALFA (A Language For Action) language was developed. ALFA is similar in spirit to REX, but the two languages provide very different abstractions. ALFA programs consist of computational modules which are connected to each other and to the outside world by means of communications channels. Both the computations performed and their interconnections are specified within module definitions, allowing modules to be inserted and removed without having to restructure the communications network.
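The module/channel idea can be sketched as follows. This is a toy illustration in Python, not real ALFA syntax; the channel names and combining functions are assumptions made for the example.

```python
class Channel:
    """A communications channel that combines the outputs of the
    modules writing to it, so modules can be added or removed
    without rewiring the rest of the network."""
    def __init__(self, combine=max):
        self.inputs = {}          # module name -> latest value written
        self.combine = combine

    def write(self, module, value):
        self.inputs[module] = value

    def read(self):
        return self.combine(self.inputs.values()) if self.inputs else None

# Hypothetical example: two modules contend for a speed channel,
# and the safest (slowest) request wins.
speed = Channel(combine=min)
speed.write("cruise", 1.0)
speed.write("avoid-obstacle", 0.2)
assert speed.read() == 0.2

# Removing a module leaves the communications network intact.
speed.inputs.pop("avoid-obstacle")
assert speed.read() == 1.0
```

Because each module only names the channels it reads and writes, the interconnections never have to be restated elsewhere.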
The sequencer is responsible for controlling sequences of primitive activities and deliberative computations. Controlling sequences is difficult primarily because the sequencer must be able to deal effectively with unexpected failures. This requires careful maintenance of a great deal of state information because the sequencer must be able to remember what has been done in the past to decide what to do now.
The fundamental design principle underlying the sequencer is the notion of cognizant failure. A cognizant failure is a failure which the system can detect somehow. Rather than design algorithms which never fail, we instead use algorithms which (almost) never fail to detect a failure.
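Cognizant failure can be made concrete with a small sketch, assuming a pretend primitive action; the names (CognizantFailure, grasp, try_once) and the success condition are all invented for illustration.

```python
class CognizantFailure(Exception):
    """A failure the system detects and reports, rather than hides."""
    pass

def try_once(i):
    # Stub for a real sensing/actuation attempt: succeeds on the third try.
    return i == 2

def grasp(attempts=3):
    """A primitive that (almost) never fails to *detect* its failure:
    it either succeeds or raises, so the sequencer can try another method."""
    for i in range(attempts):
        if try_once(i):
            return "grasped"
    raise CognizantFailure("grasp: no success after %d attempts" % attempts)

assert grasp() == "grasped"          # enough attempts: succeeds

detected = False
try:
    grasp(attempts=2)                # too few attempts: fails cognizantly
except CognizantFailure:
    detected = True
assert detected
```

The contract is that a caller never continues under the false belief that the action worked; failure is an explicit, catchable outcome.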
The sequencer initiates and terminates primitive activities by activating and deactivating sets of modules in the controller. It can also send parameters to the controller via channels.
The sequencer is modeled after Firby's Reactive Action Package (RAP) system. The system maintains a task queue, which is simply a list of tasks the system must perform. Each task contains a list of methods for performing that task, together with annotations describing under what circumstances each method is applicable. A method is either a primitive action or a list of sub-tasks to be installed on the task queue. The system works by successively expanding tasks on the queue until they either finish or fail. When a task fails, an alternate method is tried.
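The expand-until-finish-or-fail loop above can be sketched as follows. This is a simplified illustration, not Firby's RAP code: it uses recursive expansion rather than an explicit task queue, omits the applicability annotations, and all task and primitive names are invented.

```python
def run(task, methods, primitives):
    """Try each method for `task` in turn; return True on success.
    A method is either a primitive action (a string) or a list of
    sub-tasks. When a method fails, the next alternate is tried."""
    for method in methods.get(task, []):
        if isinstance(method, str):                  # primitive action
            if primitives[method]():
                return True                          # task finished
        else:                                        # list of sub-tasks
            if all(run(sub, methods, primitives) for sub in method):
                return True
    return False                                     # cognizant failure

methods = {
    "deliver": [["fetch", "carry"]],
    "fetch":   ["grab-left", "grab-right"],          # alternate methods
    "carry":   ["roll"],
}
primitives = {
    "grab-left":  lambda: False,                     # this attempt fails...
    "grab-right": lambda: True,                      # ...the fallback works
    "roll":       lambda: True,
}
assert run("deliver", methods, primitives)
```

The key behavior is in the "fetch" task: the first method fails, so the alternate is tried, and the failure never propagates upward.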
The main difference between the original RAP system and the ATLANTIS sequencer is that the latter controls activities rather than atomic actions. This requires a few modifications to the RAP system. First, the system must ensure that two activities that can interfere with one another are not active simultaneously. Second, if a primitive activity must be interrupted, the system must be sure it is cleanly terminated (since conflicts are controlled by locking resources).
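The resource-locking modification can be sketched as below, assuming invented activity and resource names; this is an illustration of the idea, not the ATLANTIS code.

```python
class ResourceConflict(Exception):
    pass

class Sequencer:
    """Conflicts are controlled by locking resources: two activities
    that need the same resource cannot be active at once."""
    def __init__(self):
        self.locks = {}              # resource -> activity holding it

    def start(self, activity, resources):
        for r in resources:
            if r in self.locks:
                raise ResourceConflict(r)
        for r in resources:
            self.locks[r] = activity

    def terminate(self, activity):
        # Clean termination: release every lock the activity held,
        # so an interrupted activity never leaves resources wedged.
        self.locks = {r: a for r, a in self.locks.items() if a != activity}

seq = Sequencer()
seq.start("wander", ["wheels"])
try:
    seq.start("dock", ["wheels", "gripper"])   # conflicts on "wheels"
except ResourceConflict:
    seq.terminate("wander")                    # interrupt it cleanly
seq.start("dock", ["wheels", "gripper"])       # now the locks are free
assert seq.locks["wheels"] == "dock"
```

Clean termination matters precisely because of the locks: an activity that dies without releasing them would block every future activity needing the same resource.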
The deliberator is responsible for maintaining world models and constructing plans. It performs all manner of time-consuming computations at a high level of abstraction. The results are placed in a database where they are used as an additional source of input data by the sequencer, rather than directly controlling the robot.
Note that all deliberative computations are initiated (and may be terminated before completion) by the sequencer. The deliberator typically consists of a set of LISP programs implementing traditional AI algorithms. The function of the sequencer and controller is to provide an interface which connects to physical sensors and actuators on one end, and to classical AI algorithms on the other.
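The deliberator/sequencer split described above can be sketched in a few lines. All names here (the database key, the stubbed planner) are invented for illustration.

```python
# Shared database: the deliberator writes results here, and the
# sequencer reads them as advice rather than being directly
# controlled by the plan.
database = {}

def deliberator_plan_path():
    """Stand-in for a slow, abstract LISP-style planning computation."""
    database["suggested-route"] = ["hall", "door", "kitchen"]

def sequencer_next_step():
    """The sequencer treats the plan as one input among others,
    and keeps acting sensibly even when no plan exists yet."""
    route = database.get("suggested-route")
    return route[0] if route else "explore"

assert sequencer_next_step() == "explore"   # no plan yet: fall back
deliberator_plan_path()
assert sequencer_next_step() == "hall"      # plan available: take advice
```

The fallback branch is the essential part: because plans only advise, a missing or stale plan degrades behavior instead of halting the robot.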
[Hayes-Roth, 1985-present] BB1 is a blackboard system for general intelligent agent control.
The central issue in agent design is the control problem: which of its potential actions should a system perform next in the problem-solving process? In solving this problem, an agent system will decide what problems to solve, what knowledge to bring to bear, how to evaluate alternative solutions, when problems are solved, and when to change its focus of attention. In solving the control problem an agent determines its own cognitive behavior.
AI should approach the control problem as a real-time planning problem. BB1 operationalizes intelligent problem solving as achievement of the following goals:
Notice that, in contrast with SOAR, this approach does not consider learning an integral part of intelligent action (it does not purport to address intelligence, just the control problem), and it also makes an explicit distinction between domain and control activity.
The BB1 architecture achieves these goals. Its important features include
The blackboard approach is a problem-solving framework originally developed for the HEARSAY-II speech understanding system. It entails three basic assumptions:
BB1 extends the basic assumptions of a blackboard system as follows: