Adam Maria GADOMSKI

ABSTRACT INTELLIGENT AGENTS:
PARADIGMS, FOUNDATIONS AND
CONCEPTUALIZATION PROBLEMS

          Reflections after the First International Round-Table on Abstract Intelligent Agent (AIA93)

In  " Abstract Intelligent Agent, 2".  Printed by ENEA, Rome 1995, ISSN/1120-558X

(the Proceedings of the Second International Round-Table on Abstract Intelligent Agent, 23-25 February 1994)


Adam M. Gadomski, ENEA, Italy
Jan M. Zytkow, Wichita State University, USA 


Abstract


We review the perspectives on artificial intelligent agents discussed at the Round-Table in Rome in January 1993. An abstract intelligent agent (AIA) is a hypothetical model that captures the essence of all systems which we accept as "intelligent" in the common sense. The perspectives discussed include the difference between agents and physical objects, agent behavior, observation and action capacities, communication with other agents, architecture, and reasoning. We define the concepts of agent, abstraction, autonomy, and intelligence, and we discuss how the behavioral, social, and functional contexts contribute to these notions. We envision an AIA as an algorithm which can be shared among many carriers and which is unspecific about concrete links with the real world. We confront several concepts of autonomy, which emphasize the means of the agent, the agent's self-reliance, and its capability of acting independently of other agents. We analyze different notions of intelligence and their compatibility with the notion of a universal abstract intelligent agent. Finally, we summarize the AIA research program, which calls for an abstract dynamic architecture capturing the foundation of intelligence, and for the specification of physical systems which can be carriers and activators of this structure.


1. Introduction


More and more frequently, the complexity of industrial and social systems under human management leads to serious unexpected consequences, caused by human errors or by deficiencies in planning. To alleviate these faults, it becomes increasingly realistic to support human reasoning and human execution of complex tasks by intelligent computer systems. Models of artificial intelligent agents can provide a theoretical basis for the construction of such systems.
Problems of artificial intelligent agents were discussed by an international group of scientists and engineers at the Round-Table in Rome in January 1993. Many perspectives were brought together. AI specialists confronted their views with thinkers motivated by philosophy and psychology. Practitioners who work with computer systems (programmers, designers, and experimenters) sought a common language with theoreticians and with those who prefer intuitive thinking about intelligent agents. During the three-day brainstorming, many presentations provoked heated reactions from the participants. Finally, the closing debate session ended only because time ran out. The organizers were motivated by their assessment of the situation in artificial intelligence, cognitive science, and other sciences engaged in the problems of intelligence, human and artificial.
Currently, more than 130 conferences, symposia, and workshops related to these subjects are held every year. The explosive and fragmented development of AI technologies, and the increasing number of research communities which work on seemingly similar problems but build their own hermetic perspectives, create the need for a common theoretical foundation.
In seeking a solution, one should consider the great success of physics at the beginning of the twentieth century, when one coherent conceptualization and representation system was created and accepted by all physicists, laying the base for enormously sophisticated and successful applications of physics in other sciences and in engineering. Critical to this success was the assumption of the great physicists that a really useful theory must be elegant, beautiful, and simple to understand.
The need for a similar theory in the area of artificial intelligent agents motivated the AIA Round-Table. In this paper we compare the perspectives on intelligent agents discussed at AIA93, and reflect on their differing terminologies and conceptualization systems. Rather than describe the presented papers and debates, we summarize several main controversies and review the contributions to the general concept of an abstract intelligent agent. To this end we have sometimes reconceptualized the presented ideas and merged similar points of view from different papers and from contributions to the debate sessions. References are given by name only. The papers have been published in the proceedings [1]. A review of the Round-Table contributions was given in [2].
We will focus on a number of questions that recurred in the discussions at AIA93:

- What can be called "intelligence of the agent"?
- What distinguishes an abstract intelligent agent among intelligent agents?
- What is an agent's autonomy?
- What are the most important features of an intelligent agent?
- What conceptual frameworks are most useful in AIA modeling?

2. The Hypothesis of AIA and AIA Paradigms

At AIA93, abstract agents were discussed from two dominant points of view. The first was based on science's focus on the phenomena of the real world. This cognitive-science approach, based on the identification and analysis of behavioral features of humans, was represented by the majority of participants, especially by psychologists and sociologists. The second was the engineering perspective, in which knowledge is applied to the construction of useful artifacts. This leads to questions about the utility and generality of the proposed intelligent problem-solving mechanisms. This approach was represented by the AI researchers with physics and engineering backgrounds.
According to Laird and Rosenbloom [3]:
...one of the central goals of AI is to develop artificial agents that embody all the components of intelligence ...

Although "all the components of intelligence" are notoriously undefined, in the first approximation we can see the goal of AIA in the same way.
An abstract intelligent agent can be viewed as a common model that captures the essence of all systems which we accept as "intelligent" in the common sense.

That such a model exists is the hypothesis of the AIA. A model of intelligent systems would benefit both the cognitive and the engineering perspective, enabling us to organize the study of human rational behavior, to formalize it, and to design artificial systems with human-like capacities for real-world problem solving. The vision of numerous future AIA applications was a strong motivation for many discussants. Real-world applications make it important to consider how intelligent agents interact with the environment, which includes other intelligent agents as well as physical objects. Unlike physical objects, which merely interact, agents act on their goals, and they also use mental representations of the environment and of other agents (for simplicity, we will use the term 'agent' as equivalent to 'intelligent agent').
 
The relationships between an agent, its physical environment, and other agents, which were the focus of many discussions during AIA93, fall into a number of categories:
A. environment centered, related to the representation of physical dynamic objects and other physical agents;
B. agent-behavior centered, related to the agent's observation and action capacities, including communication with other agents;
C. architecture centered, focused on architectural elements and reasoning products described in such terms as knowledge, beliefs, goals, and plans;
D. reasoning centered, focused on mental processes and functions which connect and transform all the concepts in A-C.

Several advanced frameworks presented at AIA93 varied in their treatment of categories A-D. For instance, C. Castelfranchi considered a multi-agent social environment in which many intelligent agents cooperate and negotiate using their influence, power, autonomy, and interdependence. He focused on autonomous reactive reasoning at a high abstraction level and on behavior meaningful to other agents.
G. Rzevski took another view of multi-agent systems. He considered an architecture of perception, cognition, and execution distributed among agents and applied to technological and social domains. This was a vision of an organization composed of intelligent individuals, but we can interpret it as one distributed intelligent agent.
 In his approach, called TOGA (Top-down Object-based Goal-oriented Approach), A.M. Gadomski proposed a general functional theory of interactions between an abstract intelligent agent and its domain of activity.
This domain is an "image" of abstract (mental) or physical systems, as well as other intelligent agents. TOGA is composed of the frameworks for top-down specification of the agent - environment couple, where the agent architecture is based on conceptual triangles composed of three basic systems:
- preferences,
- knowledge, and
- abstract domain of activity.
 
Preferences and knowledge from one triangle are domains of activity for another, on a higher abstraction level. Reasoning processes, activated by the information input, rely on continuous modifications of these elements until all the goals produced by the preference systems have been achieved.
He illustrated how some features of intelligent-system behavior can be derived from such an architecture. In general, his approach emphasized the need for a goal-oriented reconceptualization of our knowledge and for more precise terminology, adequate to the task of AIA modeling. It can be located between the cognitive and engineering conceptualizations.
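
To make the conceptual triangle more tangible, the following toy Python sketch renders a single abstraction level. All names (Triangle, preferences, knowledge, domain) and the numeric toy domain are our illustrative assumptions; TOGA itself is a conceptual framework, not code.

    # A minimal, hypothetical rendering of one TOGA triangle: a preference
    # system produces goals from the current domain model, and knowledge
    # turns a goal into a modification of that model.
    class Triangle:
        def __init__(self, preferences, knowledge, domain):
            self.preferences = preferences  # domain -> list of goals
            self.knowledge = knowledge      # (domain, goal) -> updated domain
            self.domain = domain            # abstract model of the activity domain

        def react(self, new_information):
            """Reasoning loop: absorb input, then modify the domain model
            until the preference system produces no further goals."""
            self.domain.update(new_information)
            while goals := self.preferences(self.domain):
                self.domain = self.knowledge(self.domain, goals[0])
            return self.domain

    # Toy instantiation: drive a numeric level up to a preferred target.
    level_keeper = Triangle(
        preferences=lambda d: ["raise"] if d["level"] < d["target"] else [],
        knowledge=lambda d, g: {**d, "level": d["level"] + 1},
        domain={"level": 0, "target": 3},
    )
    print(level_keeper.react({}))  # {'level': 3, 'target': 3}

In a fuller rendering, the preferences and knowledge of one such triangle would themselves be the activity domain of another triangle on the higher abstraction level, as described above.
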
Still another perspective was introduced by J.M. Zytkow. His approach, engineering in nature, is based on concrete software and hardware which can make discoveries in many physical domains. He described the physical and abstract components of a robot-discoverer, and the mechanism of interaction with the real world via manipulations and measurements, which leads to scientific data acquisition. Discovery methods are computer-executable procedures arranged in a dynamic network of goals and plans that controls the creation of concrete goals and instantiates plans for goal satisfaction, which leads to new discoveries.
Other contributions focused on specific subjects. For example, P.L. Marconi spoke on the exploration of flexibility of thinking within the framework of expert systems. His work was motivated by psychopathology, in particular by the analysis of schizophrenia. He pointed out a striking analogy between psychopathological descriptors and the terms used to describe the mismatch between expert systems' and human experts' performance. He argued that a similar flexibility of thinking is desired both for treating psychopathologies and for correcting the fallacies of expert systems. While making expert systems more flexible is difficult, in his research Marconi found that working with expert systems increases mental flexibility in humans.
Eliano Pessa presented a concrete iconic model of the perception of rotation as a human perceptual task, and discussed evidence for that model. He was pessimistic, however, about the feasibility of the broader task of capturing the entire human perceptual system.

He claimed that it is impossible to build a complete model of human perceptual processing, and that we can only build models of restricted tasks. As different commentators pointed out, from the engineering point of view "to build a complete model of human perceptual processing" is neither necessary nor fully defined. The restricted tasks, however, can be defined on different levels of abstraction and for many large classes of problems.

3. Problems of Basic Concepts

In general, throughout AIA93 one could notice many vague terms and a terminological redundancy. In this section we analyze the meanings of the most fundamental terms: agent, abstract, and intelligent.

3.1 Why "Abstract" Agents?

In Distributed AI, agents are studied in three main contexts:

C1. behavioral, that is, in the context of physical interactions with the environment;
C2. social, that is, in the context of communication, understood as symbolic interaction with other agents;
C3. functional, that is, in the context of goals and solutions programmed by the agent's designer.

In the behavioral context, an agent is mainly viewed as an interactive physical system whose reactions to stimuli, acting as Shannon information, depend not only on the state of its environment but also on the agent's internal states, usually unknown to its external observer.
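
This dependence on hidden state can be shown with a minimal sketch (our own illustration, not a model from any AIA93 paper): the same stimulus yields different reactions because the agent's internal state, invisible to the observer, changes between calls.

    # Illustrative behavioral agent: reaction = f(stimulus, internal state).
    class BehavioralAgent:
        def __init__(self):
            self._state = "rested"  # hidden from the external observer

        def react(self, stimulus):
            if stimulus == "ping" and self._state == "rested":
                self._state = "tired"   # the internal state also changes
                return "respond"
            if stimulus == "ping" and self._state == "tired":
                self._state = "rested"
                return "ignore"
            return "no-op"

    agent = BehavioralAgent()
    print(agent.react("ping"))  # respond
    print(agent.react("ping"))  # ignore: same stimulus, different reaction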

In the social context, an agent is mainly viewed and recognized by message exchanges. This point of view, rooted in sociology, was represented by Castelfranchi.

The third, functional context represents the engineering point of view and is concerned with the agent's ability to perform a set of external and internal functions. This point of view was stressed by Rzevski and Zytkow.

On the meta-conceptualization level, Gadomski distinguished between functional (goal-oriented) and processual representations of agents.

All three perspectives employ the idea of a system whose reactions cannot be derived directly from its physico-chemical properties, and which must be symbolically coded.

 

For example, according to Rzevski, we can distinguish three types of agents (contrasted in the sketch after this list):

- programmed agents: conventional robots and computer-controlled machine tools are examples of such systems.

- proto-intelligent agents: they can be artificial or biological, and react to the state of their environments. Thermostats and auto-pilots are extreme examples of such systems.

- intelligent agents: these agents also have the capability of coping with uncertainty.
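
The Python sketch below contrasts the three types. The class names, the thermostat behavior, and the treatment of uncertainty are our illustrative assumptions, not Rzevski's formulation.

    # Hypothetical rendering of the three agent types as degrees of
    # coupling to the environment.
    class ProgrammedAgent:
        """Executes a fixed action sequence; ignores the environment."""
        def act(self, env):
            return "next step of the stored program"

    class ProtoIntelligentAgent:
        """Reacts to the observed state of the environment (a thermostat)."""
        def __init__(self, setpoint):
            self.setpoint = setpoint
        def act(self, env):
            return "heat" if env["temperature"] < self.setpoint else "idle"

    class IntelligentAgent(ProtoIntelligentAgent):
        """Also copes with uncertainty, here by falling back on an
        estimate when the sensor reading is missing."""
        def act(self, env):
            reading = env.get("temperature")
            estimate = reading if reading is not None else self.setpoint - 1
            return "heat" if estimate < self.setpoint else "idle"

    print(ProtoIntelligentAgent(20).act({"temperature": 18}))  # heat
    print(IntelligentAgent(20).act({}))  # heat, despite the missing reading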

All of Rzevski's agents can be considered abstract, because they can be physically and computationally realized in different ways. For example, the same classification features of a proto-intelligent agent can be obtained by symbol processing, by the subsymbolic behavior of a neural network, and by the physical properties of mechanical interactions. The term "autonomous intelligent agents" is used in the domain of robotics [4]. How are abstract intelligent agents different? Why are they important?

The term "abstract intelligence" used in the name of the Round Table raised plenty of controversy. The concept of abstract agent suggests capturing the essence of intelligence while abstracting from some properties of the real agents. From the cognitive perspective, the big question is what can be abstracted away, while the resultant agent remains intelligent. From the constructive perspective, the problem is, what minimum of internal functionality, goals, and external interaction is needed to represent intelligence.

For Gadomski, a program as listed by its source code is abstract, because to become a concrete physical process in the real world it requires two types of links with the world. First, via translation to machine language, the program can be carried by hardware, which becomes its physical carrier. Second, the operation of sensors and manipulators linked to the program makes each application concrete and makes the knowledge collected by the computer empirically interpretable by humans. Zytkow's machine discovery system, which can be augmented by different sets of sensors and manipulators, illustrates this point. Each set of sensors and manipulators, networked to the discovery system on one end and connected to a particular experimental setup on the other, creates an application in a different domain. Numerous applications have demonstrated that the same abstract repertoire of discovery techniques can work in different domains.
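
A minimal sketch of this separation follows. The AbstractDiscoverer class, the linear "law" it fits, and the simulated world are invented for illustration and do not reproduce Zytkow's actual system; they only show how the same abstract core acquires meaning through whatever sensors and manipulators are plugged into it.

    # Hypothetical carrier-independent core: it proposes settings, records
    # responses, and fits a simple linear law y = a*x + b; only the
    # manipulate/sense links touch a concrete world.
    class AbstractDiscoverer:
        def __init__(self, manipulate, sense):
            self.manipulate = manipulate  # concrete link: act on the world
            self.sense = sense            # concrete link: measure the world
            self.data = []

        def experiment(self, settings):
            for x in settings:
                self.manipulate(x)
                self.data.append((x, self.sense()))
            (x0, y0), (x1, y1) = self.data[0], self.data[-1]
            a = (y1 - y0) / (x1 - x0)
            return a, y0 - a * x0  # slope and intercept of the found "law"

    # One possible "world" plugged into the abstract core; a different
    # sensor/manipulator pair would yield an application in another domain.
    world = {"x": 0.0}
    core = AbstractDiscoverer(
        manipulate=lambda x: world.update(x=x),
        sense=lambda: 2.0 * world["x"] + 1.0,  # simulated physical response
    )
    print(core.experiment([0.0, 1.0, 2.0]))    # (2.0, 1.0)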

It has been argued that interaction with the world is a necessary element of intelligence, because only the real world is a continuous source of input information. This is increasingly better understood through the construction of AI systems and the analysis of their real-world performance. If we accept the necessary role of development through concrete systems, we do not want "abstract" to be opposed to "concrete". We can understand "abstract" as an algorithm, abstracted from any concrete implementation, which can be shared among many carriers (as in TOGA) and which is unspecific about concrete links with the real world and about important details such as the selection of parameter values for a particular application. The interaction loop with the environment provides the interpretation of the information and knowledge available to an agent. It is necessary to emphasize that the separation of an abstract intelligent agent from a particular carrier system, such as a biological organism or a computer, does not deprive the agent of the abstract functions that represent its interaction with a physical environment.
On the contrary, from the reuse perspective, software modules constructed on the basis of an abstract intelligent agent model should have a wide range of possible applications in various information systems.


3.2 Agent's goals

Everybody agrees that an artificial intelligent agent must be goal-oriented, goal-driven, or goal-directed, but the attempts at a goal definition brought more ideas than solutions.

In the common sense, a goal is a subjective, desired state of the world. In humans, a goal can be desired, needed, or intended (M.P. Georgeff).

In artificial intelligence, goals are possible states of the agent's activity domains, real or abstract, which activate reasoning that leads, typically through decision making or planning, to actions that bring the agent closer to the goal state.

Zytkow's discoverer is such an agent, based on a network of generic goals and plans for goal execution. Discovery goals require search, so most of the generic plans are search schemes.
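
As a toy rendering of this picture (illustrative only; the function name and operators below are our assumptions, not Zytkow's implementation), a goal can be given as a desired domain state and a generic plan as a search scheme instantiated for that goal:

    from collections import deque

    # Generic plan as a search scheme: find an operator sequence that
    # transforms the start state into one satisfying the goal test.
    def breadth_first_plan(start, goal_test, operators):
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, path = frontier.popleft()
            if goal_test(state):
                return path
            for name, op in operators.items():
                nxt = op(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
        return None

    # Goal: reach domain state 10 from 0 with the available actions.
    operators = {"add3": lambda s: s + 3, "double": lambda s: max(1, 2 * s)}
    print(breadth_first_plan(0, lambda s: s == 10, operators))
    # ['double', 'add3', 'add3', 'add3']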

In Gadomski's theory, what is called a goal of a system X depends on the intelligent agent who interprets X, for instance, on the designer of X or on the system X itself.

In the first case, he called the goal of X a design goal of X, while in the second case he proposed the term intervention goal.

From his engineering perspective, each goal is always conceptualized in terms of the description of the preselected agent's domain of activity, and every intervention goal is the product of the agent's preference system.

Castelfranchi analyzed goals, in the context of social interactions, as tuples composed of

(1) the description of how the goal was established, and

(2) the registered results of the goal-oriented action, recognized by an intelligent agent (the observer) as successful.


3.3 Agent's autonomy

- According to one notion of autonomy, represented by Castelfranchi: "The more I can do, the more autonomous I am." An agent is autonomous in what it can do without help, "pressure", or commands from its environment, including other agents. Autonomy depends on the individual "power" of an agent.

As a result, he distinguished different types of autonomy and reactivity of agents.

- According to another notion (Zytkow), autonomy is measured not so much by the means of the agent as by its self-reliance. The less external help is needed in the working of an agent, the more autonomous the agent is. The opposite of autonomy is an agent that must be externally adapted to perform each task, for instance, whose search parameters, such as the depth or breadth of search, search operators, acceptance thresholds, and the like, must be adjusted to the task. Because of its autonomy in knowledge seeking, a good machine discoverer is a role model for an autonomous intelligent agent. A discoverer, in contradistinction to a learner, cannot rely on somebody else who knows better.
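
This contrast can be sketched in Python; iterative deepening serves here as our own hypothetical stand-in for a system that adjusts its search parameter by itself instead of relying on external tuning:

    # Depth-limited search whose depth bound must be supplied from outside.
    def depth_limited_search(state, goal_test, operators, limit):
        if goal_test(state):
            return [state]
        if limit == 0:
            return None
        for op in operators:
            found = depth_limited_search(op(state), goal_test, operators,
                                         limit - 1)
            if found:
                return [state] + found
        return None

    # Self-reliant variant: it widens its own depth bound until the task
    # is solved, so no externally tuned parameter is needed.
    def autonomous_search(state, goal_test, operators, max_limit=20):
        for limit in range(max_limit + 1):
            found = depth_limited_search(state, goal_test, operators, limit)
            if found:
                return found
        return None

    ops = [lambda s: s + 1, lambda s: s * 2]
    print(depth_limited_search(1, lambda s: s == 9, ops, limit=3))  # None
    print(autonomous_search(1, lambda s: s == 9, ops))  # [1, 2, 4, 8, 9]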

- Still another notion of autonomy, based on interaction within a society of agents, was offered by Rzevski. He treats the autonomy of an agent as the capability of acting independently of other agents.

Considering that autonomy refers to the agent's activity, it may be possible to reach a precise definition of autonomy after a consensus is reached on the specifications of the basic activities and functions required of an AIA.

- For Gadomski, to be intelligent, an agent must be autonomous, but sometimes an autonomous agent can be "convinced" to execute orders, depending on its own preferences.


3.4 Intelligence

In humans, the capability for discovery, that is, for autonomous construction of knowledge, is strongly related to intelligence.

According to Zytkow's argument, if intelligence depends on the autonomous transfer of problem-solving and cognitive skills to a new domain, then a discoverer is indeed a role model of an intelligent agent. Just as a discovery can be granted only if the discovered objects or properties were not previously known to the agent, intelligence is a capability to act favorably towards the agent's goals in new situations. In both cases the capability to explore and represent new situations is critical.

Gadomski argued that if an abstract program is "stiff", that is, if it does not change, it cannot be intelligent. Since intelligence can be measured by the capability to solve new problems, in contrast to solving similar problems by a known algorithm in a fixed problem space, he argued that programs are intelligent only if they are able to change themselves. If discovery is performed according to a "stiff" algorithm, then no matter how complex the algorithm is, it cannot be intelligent, because it does not adjust.

In his response, Zytkow acknowledged that in AI we are far from understanding the adjustment to new circumstances. In one sense of "new circumstances", no program is intelligent. This is when we define "new circumstances" as the complement of the closure of all situations which can be effectively explored by a given discovery system.

But are we sure that we humans can really adjust beyond the closure over all the elementary capabilities of our mind and body? A constructive approach to this question is explored by discovery-system builders when they add new skills to a discovery system in order to capture human discovery capabilities. At each stage of construction we can ask how far we have got in matching human discovery skills.

This question can be answered empirically, by extensive testing, and theoretically, by analysis of the closure. It is hard to say whether we can ever reach a complete system which misses no skills, but we will certainly build increasingly powerful and useful intelligent agents. Zytkow argued that a fixed program does not necessarily imply a lack of flexibility. We should consider not only the algorithm, but also all the data available to the running program. A fixed program can behave very differently as new knowledge is discovered and becomes available to the program. New knowledge may change goal selection, yield new search operators which expand the search space, and so forth, all within a fixed program.
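
A toy sketch of this point (our invention, not code from Zytkow's systems): the loop below is textually fixed, yet the facts it discovers feed new operators back into it, so the space it can reach grows with its knowledge.

    # Fixed control loop over a growing knowledge base: newly discovered
    # even facts become new increment operators on the next pass.
    def run_fixed_program(facts, increments, steps):
        for _ in range(steps):
            facts |= {f + n for f in facts for n in increments}
            increments |= {f for f in facts if f % 2 == 0}
        return facts

    print(sorted(run_fixed_program({1}, {1}, steps=2)))  # [1, 2, 3, 4]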

To continue this discussion, the term "fixed abstract program" must be defined in a more explicit way.

The third approach to intelligence was developed by Rzevski. His implicit assumption was that a human organization may be considered an intelligent agent, and may be more intelligent than separate humans. For Rzevski, intelligence is the capability of a system to achieve a goal or sustain desired behavior under conditions of uncertainty. For him, intelligent agents have the capability of coping with a poorly structured and changing environment, learning from others and from their own experience.

High intelligence is based on the ability to create one's own goals, concepts, and theories, and on self-understanding that leads to self-reproduction.

Summarizing, many thoughts on intelligence at AIA93 can be categorized into two contexts, which we call behavioral and structural.

In the behavioral context, intelligence can be considered a capacity of a system to execute a set of actions which are recognized in humans as the necessary symptoms of "intelligence". According to the behavioral criteria, intelligence can be defined by a necessary minimal set of such actions. But until the architecture of intelligence is captured, the construction of intelligent systems will focus on limited tasks. In this sense, of course, a sufficiently complex but invariant algorithmic program which deals effectively with a well-defined domain can be viewed as "intelligent". For another program, applied in another domain, the intelligence can be the consequence of completely different internal mechanisms. Here, one system, intelligent in a domain A, is not intelligent in a domain B. We may argue that the number of different intelligent behaviors in different contexts is unbounded, so that a definition based on their enumeration is not feasible. If this is true, then from the behavioral point of view a behavioral "general intelligence" does not exist, each intelligence is "local", and the hypothesis of one universal abstract intelligent agent is wrong.

The engineering point of view of a designer, which we call the structural context, represented by Gadomski in TOGA and supported by Straszak, views intelligence as a property of both the functional architecture and the reasoning mechanisms of an abstract system. The system's complex internal processes cause numerous externally observed intelligent behaviors. These symptoms depend on the observational capacity of the observer, and, in general, their number can be infinite.

On the other hand, the intelligence of an agent can remain unrecognized by its external observer; such cases are possible if the knowledge or preferences of the agent are false or inadequate to the agent's task. From the structural perspective, behaviors such as cooperation and negotiation with influence, power, autonomy, interdependence, and reactivity, analyzed by Castelfranchi, are only complex consequences of the abstract architectures of an intelligent agent, and should have architecture-based definitions.


Summarizing, if we accept the hypothesis of the abstract intelligent agent, then we ought to understand intelligence in the structural context.

4. Conclusions

At the AIA93 Round-Table, the problem of the AIA was attacked from many perspectives. The results were fruitful but preliminary, pointing at many fundamental but open questions.
The AIA research program calls for an abstract dynamic architecture which captures the foundation of intelligence, and for the specification of physical systems which can be carriers and activators of this abstract structure (Gadomski).
We wish to understand better how the abstract intelligent agent should be defined and conceptually separated from its physical carrier system. Such a carrier must be specific, but its high-level functions should be independent of their physical realization (Zytkow, Rzevski).
The psychological background of human intelligence (M. Olivetti Belardinelli) can make, in the future, a significant contribution to the identification of the interrelations between an AIA and its biological carrier. The 1993 Round-Table conclusions were by and large focused on the identification and initial specification of the problem. The approaches ranged from concrete yet narrow implementations (Zytkow), to a general theory which captures the essential aspects of the AIA but requires prototypical implementation and experimental verification (Gadomski), to an intuitive classification supported by many interesting examples (Rzevski), and to a large panorama of reasoning mechanisms in social contexts which requires better formalization and integration (Castelfranchi). Georgeff contributed a formal model of rational reasoning, which may work if the knowledge and preferences are understood and correctly structured by the agent's programmer. We hope these approaches will be further expanded towards the common goal.
An effective AIA should use a goal-oriented representation of physical systems, which links what is with what is sought to be.

We hope that future AIA Round-Tables will bring progress on these issues.

References

1. Gadomski A.M. (editor), The Proceedings of the First International Round Table on Abstract Intelligent Agent, Rome, January 1993. Printed by ENEA, 1994.

2. Borello L.M., Gadomski A.M., AIA93: First International Round Table on Abstract Intelligent Agent, AIIA Notizie, Anno VI, N. 3, September 1993.

3. Laird J., Rosenbloom P.S., AI Magazine, Winter 1991.

4. Kanade T., Groen F.C., and Hertzberger L.O. (editors), Intelligent Autonomous Systems, Proceedings of an International Conference held in Amsterdam, December 1989.

