Adam Maria GADOMSKI, ENEA, C.R. Casaccia, ERG-ING-TISGI


AGENTS and INTELLIGENCE


The concepts of Agent and Intelligent Agent

There is still considerable confusion in the understanding, realization and application of agents. They are conceptually located in various contexts, from the cognitivistic modeling of human behavior, through robots and software functional components, to autonomous software agents employed in selected classes of tasks, for instance information agents or mobile INTERNET agents (see the Intelligent Agents Repository). The concept of agent is used in the subject matter literature in two basic contexts.

Software World

The first context is software engineering, where an agent is a "softagent" interacting only with software entities in a computer software world. This world can be distributed among different types of computers. The software agents execute, in a more or less autonomous and more or less intelligent manner, the tasks of humans and other softagents, according to the more or less human-like roles/functions designed by software specialists. In this way the softagents "live" in abstract symbolic worlds composed of programs, files, directories and servers, all carried by computers. They require human users to communicate through protocols expressed in terms of this world's representation. In this context, human-computer interactions are usually formalized in a classical software manner. Information agents, various INTERNET agents and database management agents are examples of the concept of agent understood in this way.

Physical World

The second context of the term agent is a cognitive and engineering attempt at the explanation, modeling and simulation of human mental functions. An agent or intelligent agent is considered as an abstraction from the human person to the specification of various professional, social and psychological roles. Usually these agents' environment is a vision composed of preselected aspects of the real world. The agents need to act autonomously or to support human interventions, for instance in domain-oriented decision-making processes. They "live" in various simulations of the real world or act indirectly in the physical, never completely describable world.

- In both cases they must model their domain of activity and choose/plan actions.

Agents have two faces. One is for their potential users; it represents expectations, promises, ... external functions. The second is for their developers; it represents how an agent is constructed, how it interacts with other software entities, ... this face represents the agent's internal functions, its architecture and other agent properties, usually not visible to the end-user.
In general, there exists no known, unique and formal relation between these two faces, as is easy to see in the subject matter literature. We have rather a separate set of many-to-many relations. - Such a situation is characteristic of every technology in its early phases of development.

As we learn from physics, clear answers can only be given either in the context of a clear theory or in the context of particular, concrete real-world situations.

Without a sufficiently general theory, some general questions have only "pseudo-sense". - Why pseudo? - Because it is possible to obtain a quasi-infinite number of more or less incongruent answers. - Of course, our final choice of some of them depends on OUR goals.

But where is the science here?

- Of course, theories in computer science are rather goal-oriented. - I think we need a consensus on some sufficiently general ontology (not only on many local, particular standardizations). - We need a sufficiently abstract, common conceptualization platform, something analogous to Newton's classical physics. Afterwards, we will be able to answer such questions [softagent newsgroup messages] as:

>1. How come an agent has beliefs, intention, or other mentalistic notions, after all it is a software ?

>2. How to program those mentalistic notions ?

>3. With what an agent communicate with other agent ?

>4. How do they communicate ?

- The terms beliefs, intention, communication and agent need commonly accepted definitions for software engineers, psychologists, physicists, mathematicians, ... and various managers.

Let us illustrate the various points of view on the agent concept:

- Agent is “ ... an autonomous, self-contained, reactive, pro-active computer system, typically with central locus of control, that is able to communicate with other agents via some ACL (Agent Communication Language). More specific usage is to mean a computer system that is either conceptualized or implemented in terms of concepts more usually applied to humans (such as beliefs, desires, and intentions).” [M.Wooldridge, 1995].

- Agent-Oriented Programming - An approach to building agents which proposes programming them in terms of mentalistic notions such as beliefs, desires and intentions. [M.Wooldridge, 1995].

- An autonomous agent is a system situated within and a part of an environment that senses that environment and acts in it, over time, in pursuit of its own agenda and so as to effect what it senses in the future. [Franklin and Graesser, 1995]

- Agent-Based Programming - An approach to building software systems using various agent frames as basic functional components of the designed system architecture (the so-called MAS architecture).

In the cognitivistic perspective, because of the many desired but not yet achieved properties of intelligent agents, they are conceptualized either
 

1) as software systems executing some functions which can substitute or support selected human mental functions, or 
2) as abstract systems of requested mental functions which are implemented on the computer.

The basic difference between these two points of view lies in the starting-point conceptualization system, i.e. in the different ontology assumed.

Unfortunately, cognitivistic ontologies are chronically ill-defined (see, for example, the various versions of the BDI agent), and software engineering ontologies suffer from their too concrete embedding in various software environments (Ilog, ART, ...) or specialized languages (KQML, TCL, Telescript, KIF ... and many, many others).

An important property of agents is their capability to control the acquisition/perception of information from their environment, which includes a capability to communicate with other agents. This aspect is stressed in the applications of the agent concept in software engineering.

- Apart from the various generic approaches available in the subject matter literature, we can observe a large cloud of intermediate terminological and conceptual noise.
In general, no existing definition of agent or intelligent agent is commonly accepted yet, first of all because all the diffused definitions suffer from the lack of a common and sufficiently formally specified ontological context.

Definition-making is an arbitrary act and its validity depends on the definition's utility. Therefore we would like to stress our intuitive but pragmatic understanding of these terms.

An agent may be considered as an entity/object which is able to execute some class of symbolic external tasks and reacts autonomously to some changes of its environment. [Gadomski, 1993].

Such a definition is intuitively in agreement with the agent-metaphor concept: "software agent is a computer program that functions as a 'cooperating personal assistant' to the user by performing tasks autonomously or semiautonomously as delegated by the user" [Harmon, 1995], and with the common understanding of an agent as something more than a task perception and execution program.

In the TOGA theory, an agent is defined in the context of the definitions of the concepts of goal and domain of activity.

A goal is a specification of a hypothetical state of the domain of activity of an agent; from an observer's perspective, an agent "tends" to reach this state (a more formal definition of the goal concept is given in TOGA).

Now we can say: an agent is every goal-driven system with the capability to control its own input-output (perception/communication/intervention).
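As a minimal, purely illustrative sketch of this definition (the class and method names below are hypothetical, not part of TOGA), such a system selects both what it perceives and how it intervenes, always relative to its goal:

```python
# Minimal sketch of a goal-driven agent (hypothetical names, not TOGA's).
class GoalDrivenAgent:
    def __init__(self, goal, attended_aspects):
        self.goal = goal                        # hypothetical state of the domain of activity
        self.attended = set(attended_aspects)   # the agent controls its own input

    def perceive(self, environment: dict) -> dict:
        # Input control: only attended aspects of the environment are acquired.
        return {k: v for k, v in environment.items() if k in self.attended}

    def step(self, environment: dict):
        percept = self.perceive(environment)
        if percept == self.goal:
            return None                         # goal state reached: no intervention
        return self.choose_action(percept)      # intervention directed by the goal

    def choose_action(self, percept: dict):
        ...                                     # domain-dependent choice/planning of actions
```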

An abstract intelligent agent (AIA) is a formal, integrated model of intelligent systems which is independent of their application domains and possible physical realizations, i.e. an AIA represents only those properties which we are disposed to recognize or accept as necessary for intelligent systems.

In the engineering perspective, the idea of the AIA leads to the model-based design of reasoning modules for highly autonomous robots and other knowledge-based systems, such as intelligent information management systems, intelligent plant operator support systems, intelligent management decision support, intelligent CIM/CASE conceptual design support, intelligent tutoring systems, and intelligent groupware/cooperation/coordination software tools. All these systems autonomously support humans in the specification and solving of complex, incomplete and ill-defined problems referring to the real world. A model of the AIA could be a key theoretical basis for the performance of such tasks.

In the socio-cognitive perspective, the AIA can be considered as an abstract conceptual frame for modeling the behavior of living organisms recognized, arbitrarily, as intelligent. It can provide the conceptual and terminological "bridge" between natural (evolution) and artificial (designed) systems, for their simulation as well as for teaching, diagnosis and treatment. These applications should be interesting for psychology, sociology, organization research and economics. We can remark that the human AIA developer can be viewed as a specific physical realization of an AIA.

Contrary to physical systems, which behave according to physical laws and externally observed initial states, the behavior of an intelligent agent also depends on its unobserved and variable "mental" states, represented by such abstract concepts as information, goal, preference and knowledge.

A most general definition of an agent could be:
A goal-driven system capable of acting on and monitoring its environment, where an intervention goal is a representation of a hypothetical state of the environment in the agent's memory. This goal should be modifiable by the agent itself.

In this sense, agent behavior and the agent's internal states are not uniquely determined by input information, as is the case for physical bodies.
 

- If we assume that "a system is reactive if it always reacts in the same way to the same information", then
"an agent is not a reactive system".

If we assume that "a system is reactive if it reacts to all input information", then "an agent is not a reactive system", because it is goal-driven.
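A toy contrast (entirely hypothetical) may make the point concrete: a reactive system is a pure function of its input, while an agent's reaction to the same input also depends on its internal, goal-related state:

```python
# A reactive system: the same input always produces the same output.
def reactive_system(signal: float) -> str:
    return "alarm" if signal > 0.5 else "idle"

# An agent: the reaction to the same input depends on its current goal.
class ToyAgent:
    def __init__(self):
        self.goal = "monitor"            # internal, modifiable "mental" state

    def react(self, signal: float) -> str:
        if self.goal != "monitor":
            return "ignore"              # goal-driven: this input is not relevant now
        return "alarm" if signal > 0.5 else "idle"

agent = ToyAgent()
print(agent.react(0.9))                  # -> "alarm"
agent.goal = "maintenance"
print(agent.react(0.9))                  # -> "ignore": same input, different reaction
```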

If we use the term agent without the indication of a particular context, its meaning refers to any software or real-world agent/intelligent agent.

As a consequence of the above generic definitions, an agent can communicate with other agents, can search for selected types of information and, of course, can be mobile.

I distinguish the following generic functional components of an intelligent agent:

[architecture figure]

- Such an architecture is independent of agent roles.
- According to the above structure, agents without the kernel are not yet intelligent.

- According to my working hypothesis, it will be possible to obtain other functions of intelligent agents either by aggregation or by decomposition of the above components.

We should notice that if agent knowledge, preferences and information are separated, in adequate "generalized data bases", from domain-independent reasoning mechanisms, then we may speak about a structural intelligence. Such intelligence is a domain/problem-independent property of an agent shell.

Intelligent agents with structural intelligence are intelligent independently of the quantity and quality of their specific domain knowledge and of their observable behavior [Gadomski, Zytkow, 1994]. We can imagine that many architectures (based on different ontologies) of abstract intelligent agents (AIA) can be constructed. For this reason, I introduced the term personoid as the name of one kind of AIA which is based on the generic interrelations between the concepts information, preferences and knowledge; their suggested particular structure is called the IPK architecture.
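A minimal sketch of this separation, assuming nothing about the actual TOGA data structures (all names below are illustrative only): the three IPK bases are kept apart from a domain-independent reasoning shell, so the "intelligence" resides in the shell's structure, not in the bases' content.

```python
from dataclasses import dataclass, field

# Illustrative IPK bases: "generalized data bases" kept apart from the reasoner.
@dataclass
class IPKBases:
    information: dict = field(default_factory=dict)  # data about the current domain state
    preferences: list = field(default_factory=list)  # ordering over candidate goals
    knowledge: dict = field(default_factory=dict)    # goal -> known way of reaching it

# Domain-independent shell: structural intelligence lives here, not in the IPK content.
class PersonoidShell:
    def __init__(self, ipk: IPKBases):
        self.ipk = ipk                               # any domain can be plugged in

    def choose_goal(self):
        # Purely structural choice: the most preferred goal the agent knows how to reach.
        for goal in self.ipk.preferences:
            if goal in self.ipk.knowledge:
                return goal
        return None
```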

Using the IPK ontology [TOGA theory], definitions of the basic mental, human-like agent properties seem to be possible.

In my opinion, the personoid definition seems to be intuitively congruent with and complementary to the various agent definitions being investigated in "Agentology" (soft agents, intelligent agents, cognitive agents, softbots, human agents, etc.; see for instance [S.Franklin, A.Graesser, 1995/6]).

In the approach assumed here, I initially suggest the following general functional definitions of software agent and intelligent agent.
A software agent is a functional software module that is able to execute some predefined class of external tasks and has autonomy during the realization of these tasks. It reacts to predefined states of its environment according to acquired information and its own built-in preferences and knowledge.
An intelligent agent is an agent with the capability to change and to evaluate its own preferences and knowledge, i.e. it has the ability to learn and to change goals if the initial intervention goal is not reachable.

 

[Gadomski, 1990-93]
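The functional difference between the two definitions above can be sketched as follows (a hedged illustration; the method names are mine, not TOGA's): the intelligent agent has exactly the capabilities the software agent lacks.

```python
# Software agent: fixed, built-in preferences and knowledge.
class SoftwareAgent:
    def __init__(self, knowledge: dict, preferences: list):
        self.knowledge = knowledge
        self.preferences = preferences

    def execute(self, task):
        ...  # reacts to predefined environment states using fixed K and P

# Intelligent agent: can evaluate and change its own knowledge, preferences and goals.
class IntelligentAgent(SoftwareAgent):
    def learn(self, verified_experience: dict):
        self.knowledge.update(verified_experience)   # modifies its own knowledge

    def replan(self, goal, reachable):
        if reachable(goal):
            return goal
        # The initial intervention goal is unreachable: switch to the next
        # preferred goal instead of failing.
        return next((g for g in self.preferences if reachable(g)), None)
```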

Some Remarks: Structural and Behavioral Intelligence

The TOGA concept of intelligence is founded on a structural pattern of an abstract simple agent [Gadomski, 1994]. This assumption is contrary to the behavior-based definitions of intelligence (more frequent in the subject matter literature), but it should be more efficient for intelligent systems design.

The behavioral intelligence is always visible; the structural one, if knowledge or preferences are wrong or not sufficient for task execution, can be invisible to the agent's external observers.

An important advantage of the structural intelligence approach is that all role-agents can be constructed using the same personoid shell. Therefore a structural definition of intelligence seems to be more efficient for the design and reuse of intelligent software architectures.
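Continuing the earlier IPK sketch, this reuse claim means that role-agents differ only in the content of their IPK bases, while the shell (the reasoning kernel) is shared (the role names below are, again, purely hypothetical):

```python
# Two hypothetical role-agents built on the same personoid shell;
# only the IPK content differs (uses IPKBases / PersonoidShell from the sketch above).
operator = PersonoidShell(IPKBases(
    preferences=["plant_safe", "plant_productive"],
    knowledge={"plant_safe": "shutdown_procedure"},
))
librarian = PersonoidShell(IPKBases(
    preferences=["query_answered"],
    knowledge={"query_answered": "search_catalogue"},
))
print(operator.choose_goal())    # -> "plant_safe"
print(librarian.choose_goal())   # -> "query_answered"
```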

We should notice that TOGA is an intelligent-agent-based rather than an agent-oriented approach. The discussed approach has evolved from the TOGA theory (1990). TOGA includes a general functional architecture of an abstract intelligent agent (AIA), and a clear distinction between an abstract simple agent and an abstract intelligent agent is made there. The AIA architecture is the "essence" of a personoid; it does not depend on its physical realization.
The TOGA software simple agents are called monads.

In such a context:

beliefs - this term represents our relation to particular data. Its formal context in the literature is rather poor. Usually it is related to another ill-defined term, "knowledge". Of course, in specific contexts, beliefs may have concrete notions, as in the BDI agent [Georgeff, Rao, ...], but in my opinion, beliefs are always those data and algorithms used by agents without sufficient experimental verification.
Belief represents only a relative point of view on the IPK concepts and, consequently, this point of view depends on the particular/current observer's (an intelligent agent's) IPK. For example, in a concrete situation, what is a belief for an intelligent agent A can be seen as knowledge/information by an intelligent agent B.
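For illustration only (a hypothetical simplification of this relativity, not a formalism of TOGA): the same datum is classified differently by different observers, depending only on whether it has been verified within their own IPK.

```python
# Hypothetical simplification: a datum counts as "knowledge" for an agent that has
# experimentally verified it within its own IPK, and only as a "belief" otherwise.
def classify(datum: str, verified: set) -> str:
    return "knowledge" if datum in verified else "belief"

datum = "pump_7_is_degraded"
print(classify(datum, verified={"pump_7_is_degraded"}))  # agent B: "knowledge"
print(classify(datum, verified=set()))                   # agent A: "belief"
```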


Meta-remarks

In general, one selected piece of a program can be considered as an expression in a certain language, as agent knowledge, or as information/knowledge/belief/preference/intention for different intelligent agents. Therefore, the key element in agent-based system development is a predefined and explicit establishing of the 'points of view' from which the above terms are going to be used. ...

... sorry, I'll continue later; your comments are always welcome.





Previous modification: Nov. 17, 1997. Last modification: July 20, 1998.