Reflections after The First International Round-Table on Abstract Intelligent Agent (AIA93)
In " Abstract Intelligent Agent, 2". Printed by ENEA, Rome 1995, ISSN/1120-558X
(the Proceedings of the Second International Round-Table on Abstract Intelligent Agent , 23-25 February 1994)
We review the perspectives on artificial intelligent agents that
have been discussed at the Round Table in Rome in January 1993. An abstract intelligent agent (AIA) is a hypothetical model that
captures the essence of all systems which we accept as "intelligent"
in the common sense. We review different perspectives on agents discussed
at the Round Table, including the difference between agents and physical
objects, agent behavior, observation and action capacities, communication
with other agents, architecture and reasoning. We define the concepts of
agent, abstraction, autonomy, and intelligence, and we discuss how the behavioral,
social, and functional contexts contribute to these notions. We envision
AIA as an algorithm which can be shared among many carriers and which is
unspecific about its concrete links with the real world. We compare several
concepts of autonomy, which emphasize the agent's means, its self-reliance,
and its capability of acting independently of other agents. We analyze different
notions of intelligence and their compatibility with the notion of a universal
abstract intelligent agent. Finally we summarize the AIA research program
which calls for an abstract dynamic architecture capturing the foundation
of intelligence, and for the specification of physical systems which can
be carriers and activators of this structure.
|...one of the central goals of AI is to develop artificial agents that embody all the components of intelligence ...|
|An abstract intelligent agent can be viewed as a common model that captures the essence of all systems which we accept as "intelligent" in the common sense.|
He claimed that it is impossible to build a complete model of human
perceptual processing, and that we can only build models of restricted
tasks. As different commentators pointed out, from the engineering point
of view, "to build a complete model of human perceptual processing"
is neither necessary nor fully defined. The restricted tasks,
however, can be defined on different levels of abstraction and for many
large classes of problems.
In the behavioral context, an agent is mainly viewed as an interactive physical system, whose reactions to stimuli, acting as Shannon information, depend not only on the state of its environment but also on the agent's internal states, which are usually unknown to an external observer.
In the social context, an agent is mainly viewed and recognized by message exchanges. This point of view, rooted in sociology, has been represented by Castelfranchi.
The third, functional context represents the engineering point of view and is concerned with the agent's ability to perform a set of external and internal functions. This point of view has been stressed by Rzevski and Zytkow.
On the meta-conceptualization level, Gadomski distinguished between functional (goal-oriented) and processual representations of agents. All three perspectives employ the idea of a system whose reactions cannot be derived directly from its physico-chemical properties, and which must be symbolically coded.
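The behavioral view above, in which the same stimulus may trigger different reactions depending on internal states hidden from the observer, can be sketched as a minimal state machine. This is a toy illustration only; all names ("rested", "task", and so on) are hypothetical, not taken from the Round Table discussion:

```python
# Minimal sketch of the behavioral view of an agent: the same stimulus
# can produce different reactions, depending on a hidden internal state.

class BehavioralAgent:
    def __init__(self):
        self.state = "rested"  # internal state, invisible to the observer

    def react(self, stimulus):
        # The reaction depends on both the stimulus and the internal state.
        if stimulus == "task":
            reaction = "work" if self.state == "rested" else "refuse"
        else:
            reaction = "ignore"
        # Acting also updates the internal state.
        self.state = "tired" if reaction == "work" else "rested"
        return reaction

agent = BehavioralAgent()
print(agent.react("task"))  # work   (state was "rested")
print(agent.react("task"))  # refuse (state is now "tired")
```

An external observer who sees only stimulus-reaction pairs cannot, from a single observation, reconstruct the rule: the internal state must be inferred or postulated, which is exactly what makes such a system more than a physical stimulus-response device.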
For example, according to Rzevski, we can distinguish three types of agents:
- programmed agents; conventional robots and computer-controlled machine tools are examples of such systems.
- proto-intelligent agents; they can be artificial or biological, and react to the state of their environments. Thermostats and auto-pilots are extreme examples of such systems.
- intelligent agents; these agents also have the capability of coping with uncertainty.
All of Rzevski's agents can be considered abstract, because they can be physically and computationally realized in different ways. For example, the same classification features of a proto-intelligent agent can be obtained by symbol processing, by the subsymbolic behavior of a neural network, or by the physical properties of mechanical interactions. The term "autonomous intelligent agents" is used in the domain of robotics. How are abstract intelligent agents different? Why are they important?
The term "abstract intelligence" used in the name of the Round Table raised plenty of controversy. The concept of abstract agent suggests capturing the essence of intelligence while abstracting from some properties of the real agents. From the cognitive perspective, the big question is what can be abstracted away, while the resultant agent remains intelligent. From the constructive perspective, the problem is, what minimum of internal functionality, goals, and external interaction is needed to represent intelligence.
For Gadomski, a program as listed by its source code is abstract, because to become a concrete physical process in the real world it requires two types of links with the world. First, via translation to the machine language, the program can be carried by hardware which becomes its physical carrier. Second, the operation of sensors and manipulators linked to the program makes each application concrete and makes the knowledge collected by the computer empirically interpretable by humans. Zytkow's machine discovery system, which can be augmented by different sets of sensors and manipulators, illustrates this point. Each set of sensors and manipulators, networked to the discovery system on one end and connected to a particular experimental setup on the other, creates an application in a different domain. Numerous applications have demonstrated that the same abstract repertoire of discovery techniques can work in different domains.
It has been argued that
|interaction with the world is a necessary element of intelligence, because only the real world is a continuous source of input information.|
Everybody agrees that an artificial intelligent agent must be goal-oriented, goal-driven or goal-directed, but the attempts at goal definition brought more ideas than solutions.
In the common sense, a goal is a subjective, desired state of the world. In humans, a goal can be desirable, needed, or intended (M.P. Georgeff).
In artificial intelligence, goals are possible states of the agent's activity domains, real or abstract, which activate reasoning that leads, typically through decision making or planning, to actions that bring the agent closer to the goal state.
Zytkow's discoverer is such an agent, based on a network of generic goals and plans for goal execution. Discovery goals require search, so most of the generic plans are search schemes.
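The pattern of a generic goal served by a search plan can be sketched as follows. This is a toy reconstruction for illustration only, not Zytkow's actual system; the function name, the example domain, and the operators are all hypothetical:

```python
from collections import deque

# Toy sketch of a goal-driven agent whose generic plan is a search scheme:
# the goal is a predicate on states; the plan is breadth-first search over
# actions until a state satisfying the goal predicate is reached.

def search_plan(start, goal, actions):
    """Generic plan: BFS from `start` until `goal(state)` holds."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal(state):
            return path  # sequence of action names reaching the goal
        for name, step in actions.items():
            nxt = step(state)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [name]))
    return None  # goal unreachable with these operators

# Example domain: reach a number >= 10 starting from 1, using two operators.
actions = {"double": lambda x: 2 * x, "inc": lambda x: x + 1}
plan = search_plan(1, lambda x: x >= 10, actions)
print(plan)  # ['double', 'double', 'double', 'double']
```

The point of the sketch is the separation: the goal and the operators define the task, while the plan (here, breadth-first search) is generic and can be reused across tasks, which is the sense in which most of the discoverer's generic plans are search schemes.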
In Gadomski's theory, what is called a goal of a system X depends on the intelligent agent who interprets X, for instance, on the designer of X or on the system X itself.
In the first case, he called the goal of X a design-goal of X, while in the second case he proposed the term intervention goal.
From his engineering perspective, each goal is always conceptualized in terms of a description of the agent's preselected domain of activity, and every intervention-goal is the product of the agent's preference system.
Castelfranchi analyzed goals, in the context of social interactions, by tuples composed of
(1) a description of how the goal was established, and
(2) the registered results of the goal-oriented action recognized by an intelligent agent (the observer) as successful.
- According to one notion of autonomy, represented by Castelfranchi: "The more I can do, the more autonomous I am." An agent is autonomous in what it can do without help, "pressure", or commands from its environment, including other agents. Autonomy depends on the individual "power" of an agent.
As a result, he distinguished different types of autonomy and reactivity of agents.
- According to another notion (Zytkow), autonomy is measured not so much by the means of the agent as by its self-reliance. The less external help is needed for an agent to work, the more autonomous the agent is. The opposite of autonomy is an agent that must be externally adapted to perform each task, for instance, one whose search parameters, such as depth or breadth of search, search operators, acceptance thresholds, and the like, must be adjusted to the task. Because of its autonomy in knowledge seeking, a good machine discoverer is a role model for an autonomous intelligent agent. A discoverer, in contradistinction to a learner, cannot rely on somebody else who knows better.
- Still another notion of autonomy, based on interaction within a society of agents, has been offered by Rzevski. He treats the autonomy of an agent as the capability of acting independently of other agents.
Considering that autonomy refers to an agent's activity, it may be possible to reach a precise definition of autonomy after a consensus is reached on the specifications of the basic activities and functions required for an AIA.
- For Gadomski, to be intelligent, an agent must be autonomous, but sometimes an autonomous agent can be "convinced" to execute orders, depending on its own preferences.
In humans, capability for discovery, that is, for autonomous construction of knowledge, is strongly related to intelligence.
According to Zytkow's argument, if intelligence depends on the autonomous transfer of problem-solving and cognitive skills to a new domain, then a discoverer is indeed a role model of an intelligent agent. Just as a discovery can be granted only if the discovered objects or properties have not been known to the agent, intelligence is a capability to act favorably towards the agent's goals in new situations. In both cases the capability to explore and represent new situations is critical.
Gadomski argued that if an abstract program is "stiff", that is, if it does not change, it cannot be intelligent. Since intelligence can be measured by the capability to solve new problems, in contrast to solving similar problems by a known algorithm in a fixed problem space, he argued that programs are intelligent only if they are able to change themselves. If discovery is performed according to a "stiff" algorithm, then no matter how complex it is, it cannot be intelligent, because it does not adjust.
In his response, Zytkow acknowledged that in AI we are far from understanding the adjustment to new circumstances. In one sense of "new circumstances", no program is intelligent. This is when we define "new circumstances" as the complement of the closure of all situations which can be effectively explored by a given discovery system.
But are we sure that we humans can really adjust beyond the closure over all elementary capabilities of our mind and body? A constructive approach to this question is explored by discovery system builders, when they add new skills to a discovery system in order to capture human discovery capabilities. At each stage of construction we can ask how far we got in matching human discovery skills.
This question can be answered empirically by extensive testing, and theoretically by the analysis of the closure. It is hard to say whether we can ever reach a complete system which misses no skills, but we will certainly build increasingly powerful and useful intelligent agents. Zytkow argued that a fixed program does not necessarily imply lack of flexibility. We should consider not only the algorithm, but also all the data available to the running program. A fixed program can behave very differently as new knowledge is discovered and made available to the program. New knowledge may change goal selection, yield new search operators which expand the search space, and so forth, all within a fixed program.
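Zytkow's point, that a fixed program can behave very differently as its accumulated knowledge grows, can be sketched as a fixed forward-chaining loop over a mutable rule base. This is an illustrative sketch under assumed names, not any system discussed at the Round Table:

```python
# A fixed program (forward chaining over a rule base) whose behavior
# changes as its knowledge grows: adding a "discovered" rule expands
# what the same, unchanged algorithm can derive.

def derive(facts, rules):
    """Fixed algorithm: forward-chain until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [(("wet", "cold"), "ice")]
print(derive({"wet", "cold"}, rules))   # derives 'ice'

# "Discovering" a new rule changes behavior without changing the program.
rules.append((("ice",), "slippery"))
print(derive({"wet", "cold"}, rules))   # now also derives 'slippery'
```

The algorithm text never changes between the two runs; only the data (the rule base) does, which is the sense in which a "fixed program" can still exhibit new behavior as knowledge accumulates.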
To continue this discussion, the term "fixed abstract program" must be defined in a more explicit way.
The third approach to intelligence was developed by Rzevski. His implicit assumption was that a human organization may be considered an intelligent agent, and may be more intelligent than separate humans. For Rzevski, intelligence is a capability of a system to achieve a goal or sustain desired behavior under conditions of uncertainty. For him, intelligent agents have a capability of coping with poorly structured and changing environments, learning from others and from their own experience.
High intelligence is based on the ability to create one's own goals, concepts, and theories, and on self-understanding that leads to self-reproduction.
Summarizing, many thoughts on intelligence at AIA93 can be categorized into two contexts, which we call behavioral and structural.
In the behavioral context, intelligence can be considered a capacity of a system to execute a set of actions which are recognized in humans as the necessary symptoms of "intelligence". According to the behavioral criteria, intelligence can be defined by a necessary minimal set of such actions. But until the architecture of intelligence is captured, the construction of intelligent systems will focus on limited tasks. In this sense, of course, a sufficiently complex but invariant algorithmic program which deals effectively with a well-defined domain can be viewed as "intelligent". For another program, applied in another domain, the intelligence can be the consequence of completely different internal mechanisms. Here, one system, intelligent in a domain A, is not intelligent in a domain B. We may argue that the number of different intelligent behaviors in different contexts is unbounded, so that a definition based on their enumeration is not feasible. If this is true, then from the behavioral point of view a "general intelligence" does not exist, each intelligence is "local", and the hypothesis of one universal abstract intelligent agent is wrong.
The engineering point of view of a designer, which we call the structural context, represented by Gadomski in TOGA and supported by Straszak, views intelligence as a property of both the functional architecture and the reasoning mechanisms of an abstract system. The system's complex internal processes cause numerous, externally observed intelligent behaviors. These symptoms depend on the observational capacity of the observer, and, in general, their number can be infinite.
On the other hand, the intelligence of an agent can remain unrecognized by its external observer; such cases are possible if the knowledge or preferences of the agent are false or inadequate to the agent's task. From the structural perspective, behaviors such as cooperation and negotiation, with influence, power, autonomy, interdependence, and reactivity, analyzed by Castelfranchi, are only complex consequences of the abstract architectures of an intelligent agent, and should have architecture-based definitions.
|if we accept the hypothesis of abstract intelligent agent then we ought to understand intelligence in the structural context.|
1. Gadomski A.M. (editor). The Proceedings of the First International Round Table on Abstract Intelligent Agent, Rome, January 1993. Printed by ENEA, 1994.
2. Borello L.M., Gadomski A.M. AIA93: First International Round Table on Abstract Intelligent Agent. AIIA Notizie, Anno VI, N. 3, September 1993.
3. Laird J., Rosenbloom P.S. AI Magazine, Winter 1991.
4. Kanade T., Groen F.C., Hertzberger L.O. (editors). Intelligent Autonomous Systems: Proceedings of an International Conference held in Amsterdam, December 1989.