Adam Maria GADOMSKI, ENEA, C.R. Casaccia, ERG-ING-TISGI
AGENTS and INTELLIGENCE
The concepts of Agent and Intelligent Agent
There is still considerable confusion in the understanding, realization and application of agents. They are conceptually located in various contexts, from the cognitivistic modeling of human behavior, through robots and software functional components, to autonomous software agents employed in selected classes of tasks, for instance information agents or mobile INTERNET agents (see the Intelligent Agents Repository). The concept of agent is used in the subject-matter literature in two basic contexts.
The first context is software engineering, where an agent is a "softagent" interacting only with software entities in a computer software world. This world can be distributed among different types of computers. The software agents execute, in a more or less autonomous and more or less intelligent manner, the tasks of humans and of other softagents, according to the more or less human-like roles/functions designed by software specialists. In this way the softagents "live" in abstract symbolic worlds composed of programs, files, directories and servers, all carried by computers. They require from human users communication protocols expressed in terms of this world's representation. In this context, human-computer interactions are usually formalized in a classical software manner. Information agents, various INTERNET agents and data-base management agents are examples of this understanding of the agent concept.
The second context of the term agent is a cognitive and engineering attempt at the explanation, modeling and simulation of human mental functions. An agent or intelligent agent is considered as an abstraction from the human person to the specification of various professional, social and psychological roles. Usually these agents' environment is a vision composed of preselected aspects of the real world. The agents need to act autonomously or to support human interventions, for instance in domain-oriented decision-making processes. They "live" in various simulations of the real world or act indirectly in the physical, never completely describable world.
- In both cases they must model their domain of activity and choose/plan actions.
Agents have two faces. One is for their potential users: it represents expectations, promises, ... external functions. The second is for their developers: it represents how an agent is constructed, how it interacts with other software entities, ...; this face represents the agent's internal functions, its architecture and other agent properties, usually not visible to the end-user.
In general, there is no known, unique and formal relation between these two faces, as is easy to see in the subject-matter literature. We have rather a separate set of many-to-many relations. - Such a situation is characteristic of every technology in its early phases of development.
As we learn from physics, clear answers can be given only in the context of a clear theory or in the context of particular, concrete real-world situations.
Without a sufficiently general theory, some general questions have only "pseudo-sense". - Why pseudo? - Because it is possible to obtain a quasi-infinite number of more or less incongruent answers. - Of course, our final choice among them depends on OUR goals.
But where is the science here?
- Of course, theories in computer science are rather goal-oriented. - I think we need a consensus on some sufficiently general ontology (not only on many local, particular standardizations). - We need a sufficiently abstract, common conceptualization platform, something analogous to Newton's classical physics. Afterwards, we will be able to answer such questions [softagent newsgroup messages] as:
>1. How come an agent has beliefs, intention, or other mentalistic notions, after all it is a software ?
>2. How to program those mentalistic notions ?
>3. With what an agent communicate with other agent ?
>4. How do they communicate ?
- The terms beliefs, intention, communication and agent need to have commonly accepted definitions for software engineers, psychologists, physicists, mathematicians, ... and various managers.
Let us illustrate the various points of view on the agent concept:
- Agent is “ ... an autonomous, self-contained, reactive, pro-active computer system, typically with central locus of control, that is able to communicate with other agents via some ACL (Agent Communication Language). More specific usage is to mean a computer system that is either conceptualized or implemented in terms of concepts more usually applied to humans (such as beliefs, desires, and intentions).” [M.Wooldridge, 1995].
- Agent-Oriented Programming - An approach to building agents, which proposes programming them in terms of mentalistic notions such as beliefs, desire and intentions. [M.Wooldridge, 1995].
- An autonomous agent is a system situated within and a part of an environment that senses that environment and acts in it, over time, in pursuit of its own agenda and so as to effect what it senses in the future. [Franklin and Graesser, 1995]
- Agent-Based Programming - An approach to building software systems using various agent frames as basic functional components of the designed system architecture (the so-called MAS architecture).
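As a hedged illustration of this last, agent-based view (my own sketch, not taken from any of the cited authors), the fragment below treats agents as basic functional components of a system that exchange ACL-like messages; the Message/Agent/MAS names, the performatives and the toy routing logic are assumptions chosen only for readability.

    # Illustrative sketch: agents as functional components of a MAS,
    # communicating via simple ACL-like messages. All names are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Message:
        sender: str
        receiver: str
        performative: str   # e.g. "request", "inform" (KQML/ACL-style speech act)
        content: str

    class Agent:
        def __init__(self, name):
            self.name = name
            self.inbox = []

        def receive(self, msg):
            self.inbox.append(msg)

        def step(self, mas):
            # Default behavior: answer every "request" with an "inform".
            for msg in self.inbox:
                if msg.performative == "request":
                    mas.send(Message(self.name, msg.sender, "inform",
                                     "done: " + msg.content))
            self.inbox.clear()

    class MAS:
        """The multi-agent system: a container that routes messages between agents."""
        def __init__(self, agents):
            self.agents = {a.name: a for a in agents}

        def send(self, msg):
            self.agents[msg.receiver].receive(msg)

        def run(self, steps=1):
            for _ in range(steps):
                for agent in list(self.agents.values()):
                    agent.step(self)

    # Usage: a "user" agent delegates a task to an "assistant" agent.
    mas = MAS([Agent("user"), Agent("assistant")])
    mas.send(Message("user", "assistant", "request", "find documents about TOGA"))
    mas.run(steps=1)
    print(mas.agents["user"].inbox)   # one "inform" message from the assistant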
In the cognitivistic perspective, because of the many desired but not yet achieved properties of intelligent agents, they are conceptualized in terms of human mental attributes.
The basic difference between these two points of view lies in the starting-point conceptualization system, i.e. in the different ontology assumed.
Unfortunately, cognitivistic ontologies are chronically ill-defined (see, for example, the various versions of the BDI agent), and software-engineering ontologies suffer from their too concrete embedding in various software environments (Ilog, ART, ...) or specialized languages (KQML, TCL, Telescript, KIF ... and many, many others).
- Apart from the various generic approaches available in the subject-matter literature, we can observe a large cloud of intermediate terminological and conceptual noise.
In general, no existing definition of agent or intelligent agent is commonly accepted yet, first of all because all the diffused definitions suffer from the lack of a common and sufficiently formally specified ontological context.
Definition-making is an arbitrary act and its validity depends on the definition's utility. Therefore we would like to stress our intuitive but pragmatic understanding of these terms.
An agent may be considered as an entity/object which is able to execute some class of symbolic external tasks and which reacts autonomously to some changes of its environment [Gadomski, 1993].
Such a definition is intuitively in agreement with the agent-metaphor concept: "software agent is a computer program that functions as a 'cooperating personal assistant' to the user by performing tasks autonomously or semiautonomously as delegated by the user" [Harmon, 1995], and with the common understanding of an agent as something more than a task-perception-and-execution program.
In the TOGA theory, an agent is defined in the context of the definitions of the goal and domain-of-activity concepts.
A goal is a specification of a hypothetical state of the domain of activity of an agent; in an observer's perspective, the agent "tends" to obtain this state (a more formal definition of the goal concept is given in TOGA).
An abstract intelligent agent (AIA) is a formal, integrated model of intelligent systems which is independent of their application domains and possible physical realizations, i.e. an AIA represents only those properties which we are disposed to recognize or accept as necessary for intelligent systems.
In the engineering perspective, the idea of the AIA leads to the model-based design of reasoning modules for highly autonomous robots and other knowledge-based (KB) systems, such as intelligent information-management systems, intelligent plant-operator support systems, intelligent management decision support, intelligent CIM/CASE conceptual-design support, intelligent tutoring systems, and intelligent groupware/cooperation/coordination software tools. All these systems support humans, in an autonomous way, in the specification and solving of complex, incomplete and ill-defined problems referred to the real world. A model of the AIA could be a key theoretical base for the performance of such tasks.
In the socio-cognitive perspective, the AIA can be considered as an abstract conceptual frame for modeling the behavior of living organisms arbitrarily recognized as intelligent. It can provide a conceptual and terminological "bridge" between natural (evolved) and artificial (designed) systems, for their simulation as well as for teaching, diagnosis and treatment. These applications should be of interest to psychology, sociology, organization research and economics. We can remark that the human AIA developer can be viewed as a specific physical realization of an AIA.
Contrary to physical systems, which behave according to physical laws and externally observed initial states, the behavior of an intelligent agent also depends on its unobserved and variable "mental" states, represented by such abstract concepts as information, goal, preference and knowledge.
A most general definition of an agent could be:
A goal-driven system capable of acting on and monitoring its environment, where an intervention goal is a representation of a hypothetical state of the environment in the agent's memory. This goal should be modifiable by the agent itself.
In this sense, agent behavior and its internal states are not uniquely determined by input information, as is the case for physical bodies. If we assume that "a system is reactive if it reacts to all input information", then "an agent is not a reactive system", because it is goal-driven.
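A minimal sketch of this "goal-driven" reading (my illustration only; the toy numeric environment and all names are assumptions, not TOGA constructs) is given below: the intervention goal is kept in the agent's memory as a representation of a hypothetical environment state, the agent monitors and acts on its environment, and the goal itself remains modifiable by the agent.

    # Illustrative sketch of a goal-driven agent; names and the toy
    # environment are assumptions made for this example only.
    class GoalDrivenAgent:
        def __init__(self, goal_state):
            self.goal_state = goal_state          # hypothetical state of the environment

        def perceive(self, environment):
            return environment["level"]           # monitoring the environment

        def choose_action(self, observed):
            # The same observation leads to different actions for different goals:
            # behavior is not uniquely determined by the input information.
            if observed < self.goal_state:
                return +1
            if observed > self.goal_state:
                return -1
            return 0

        def maybe_revise_goal(self, observed):
            # The goal is modifiable by the agent itself, e.g. if it is too far away.
            if abs(observed - self.goal_state) > 100:
                self.goal_state = observed

        def step(self, environment):
            observed = self.perceive(environment)
            self.maybe_revise_goal(observed)
            environment["level"] += self.choose_action(observed)   # acting on the environment

    env = {"level": 3}
    agent = GoalDrivenAgent(goal_state=7)
    for _ in range(10):
        agent.step(env)
    print(env["level"])   # converges to the agent's goal state (7)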
If we use the term agent without indicating a particular context, its meaning refers to any software or real-world agent/intelligent agent.
I distinguish the following generic functional components of an intelligent agent:
- Such an architecture is independent of agent roles.
- According to the above structure, agents without the kernel are not intelligent yet.
- According to my working hypothesis, other functions of intelligent agents can be obtained either by aggregation or by decomposition of the above components.
We should notice that if agent knowledge, preferences and information are separated, in adequate "generalized databases", from domain-independent reasoning mechanisms, then we may speak about a structural intelligence. Such intelligence is a domain/problem-independent property of an agent shell.
Intelligent agents with structural intelligence are intelligent independently of the quantity and quality of their specific domain knowledge and of their observable behavior [Gadomski, Zytkow, 1994]. We can imagine that many architectures (based on different ontologies) of abstract intelligent agents (AIA) can be constructed. For this reason, I introduced the term personoid as the name of one kind of AIA which is based on generic interrelations between the concepts information, preferences and knowledge; their suggested particular structure is called the IPK architecture.
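Read in this way, a structurally intelligent agent can be sketched roughly as follows (my illustration only; the class names and the trivial reasoning cycle are assumptions, not the IPK formalism itself): the I, P and K repositories are separate and replaceable, while the kernel that uses them knows nothing about the domain.

    # Illustrative sketch of the IPK reading of structural intelligence;
    # all names and the trivial reasoning cycle are assumptions.
    class Information:
        """I: the agent's current data about its domain of activity."""
        def __init__(self, state):
            self.state = state

    class Preferences:
        """P: an ordering over hypothetical domain states (candidate goals)."""
        def __init__(self, ranked_goals):
            self.ranked_goals = ranked_goals          # best goal first

        def preferred_goal(self, state):
            for goal in self.ranked_goals:
                if goal != state:                     # first goal not yet achieved
                    return goal
            return state

    class Knowledge:
        """K: rules mapping (current state, chosen goal) to an action."""
        def __init__(self, rules):
            self.rules = rules                        # {(state, goal): action}

        def action_for(self, state, goal):
            return self.rules.get((state, goal), "no-op")

    class PersonoidShell:
        """Domain-independent kernel: the same reasoning cycle for every domain."""
        def __init__(self, information, preferences, knowledge):
            self.i, self.p, self.k = information, preferences, knowledge

        def decide(self):
            state = self.i.state
            goal = self.p.preferred_goal(state)       # goal selected from preferences
            return self.k.action_for(state, goal)     # action selected from knowledge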
Using the IPK ontology [TOGA theory], definitions of the basic human-like mental properties of an agent seem to be possible.
In my opinion, the personoid definition seems to be intuitively congruent with and complementary to the various agent definitions being investigated in "Agentology" (soft agents, intelligent agents, cognitive agents, softbots, human agents, etc.; see for instance [S. Franklin, A. Graesser, 1995/6]).
Some Remarks: Structural and Behavioral Intelligence
The TOGA concept of intelligence is founded on a structural pattern of an abstract simple agent [Gadomski, 1994]. This assumption is contrary to the behavior-based definitions of intelligence (more frequent in the subject-matter literature), but it should be more efficient for intelligent-systems design.
Behavioral intelligence is always visible; structural intelligence, if knowledge or preferences are wrong or insufficient for task execution, may not be visible to the agent's external observers.
An important advantage of the structural-intelligence approach is that all role-agents can be constructed using the same personoid shell. Therefore a structural definition of intelligence seems to be more efficient for the design and reuse of intelligent software architectures.
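For instance, continuing the illustrative sketch given above (the role content is invented for the example), two different role-agents reuse the same shell, and an agent whose knowledge base is empty remains structurally intelligent even though no intelligent behavior is visible:

    # The same PersonoidShell reused for different role-agents.
    operator = PersonoidShell(
        Information("alarm"),
        Preferences(["plant safe", "plant productive"]),
        Knowledge({("alarm", "plant safe"): "start shutdown procedure"}),
    )
    librarian = PersonoidShell(
        Information("query received"),
        Preferences(["query answered"]),
        Knowledge({("query received", "query answered"): "search the catalogue"}),
    )
    novice = PersonoidShell(          # structurally intelligent, but its knowledge
        Information("alarm"),         # is empty, so no intelligent behavior is visible
        Preferences(["plant safe"]),
        Knowledge({}),
    )
    print(operator.decide())    # -> "start shutdown procedure"
    print(librarian.decide())   # -> "search the catalogue"
    print(novice.decide())      # -> "no-op"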
In such a context:
I . . .
... yet, sorry, I'll continue later; I look forward to your comments.
Previous modification: Nov. 17, 1997. Last modification: July 20, 1998.