DARPA WORKSHOP ON SELF-AWARE COMPUTER SYSTEMS 2004

STATEMENTS OF POSITION

Edited from originals with minimal re-formatting, hyperlinked, with images where available and some additions. The notation (...) indicates that text is omitted, such as personal greetings from emails. March 28 2005: some broken links updated, images brought up to date, etc. No changes in content or text.

Aaron Sloman
Bernard Baars
Brian Williams and Greg Sullivan
Danny Bobrow and Markus Fromhertz
Deborah McGuinness
Drew McDermott
Don Perlis, Mike Anderson and Tim Oates
Eyal Amir
James Van Overschelde
John McCarthy
Ken Forbus and Tom Hinrichs
Len Schubert
Lokendra Shastri
Michael Cox
Michael Whitbrock
Mike Anderson
Owen Holland
Push Singh
Raghu Ramakrishnan
Ricardo Sanz
Richard Scherl
Richard Gabriel
Richard Thomason
Robert Stroud
Sheila McIlraith
Stan Franklin
Stuart Shapiro
Yaron Shlomi

Aaron Sloman

MY BACKGROUND AND INTERESTS First degree mathematics and physics (Cape Town 1956), DPhil Philosophy (Oxford 1962), worked in Cognitive Science and AI since about 1971. Self-knowledge is a topic on which I have been thinking (and writing) for a long time, as part of a larger project to understand architectural requirements for human-like systems. E.g. my IJCAI 1971 paper on logical and non-logical varieties of representation, chapters 6, 8 and 10 of The Computer Revolution in Philosophy (1978); the IJCAI 1981 paper on Why robots will have emotions, my IJCAI 1985 and ECAI 1986 papers on semantics and causality, and more recent work on architectures, for instance: Invited talk at 7th Conference of Association for the Scientific Study of Consciousness: What kind of virtual machine is capable of human consciousness? Virtual machines and consciousness, A. Sloman and R.L. Chrisley (2003), Journal of Consciousness Studies, 10, 4-5, pp. 113-172; The architectural basis of affective states and processes, A. Sloman, R.L. Chrisley and M. Scheutz (to appear) in Who Needs Emotions?: The Brain Meets the Machine, Eds. M. Arbib and J-M. Fellous, Oxford University Press.

-- GENERALISING THE QUESTION The stated purpose of the workshop is to discuss a special subset (computer systems) of a class of systems that can be made aware of themselves. I would like to try to relate this to a broader class (information-processing systems), by combining biological and engineering viewpoints, partly because I think a more general theory provides deeper understanding and partly because biological examples may extend our ideas about what is possible in engineered systems. I have been trying for about 30 years to do a collection of related things including:

1. Provide a unifying conceptual framework based on the design stance for talking about 'the space of possible minds', including both natural and artificial minds.

2. Use that conceptual framework for developing both:

2.1. scientific explanations of various kinds of natural phenomena, in many kinds of animals, including humans at different stages of development, humans in different cultures, and humans with various kinds of brain damage or other abnormalities;

2.2. solutions to engineering problems where the solutions are theoretically well-founded, and it is possible to explain why they are solutions, and not just the result of searching for designs that pass various tests.

Using the design stance includes:

A. Developing an ontology for requirements as well as for designs, and also a way of describing relationships between requirements and designs that is deeper and richer than the now commonplace use of fitness functions. (For natural systems the analogue of a set of requirements is a biological niche.)

B. Developing ontologies for characterising designs at different levels of abstraction, including the physical level, physiological levels, and the kinds of functional levels familiar in AI and software engineering. This includes the description of virtual-machine architectures in terms of their components, the kinds of functions they have, the ways the components interact (types of causal interactions in virtual machines), the kinds of information they process and the forms of representation they use. See a picture of design space and niche space, and relationships between sub-regions --

RELATIONSHIPS BETWEEN DESIGN SPACE AND NICHE SPACE In this framework, the workshop task (to discuss ways in which computer systems can be made to be aware of themselves, and what forms of self-awareness will be useful for systems with various functions) can be seen as just a special case of the investigation of the relationship between designs and requirements. However, there are many different sets of requirements, and often the solutions that satisfy the requirements are not unique, unless further constraints are added to the requirements. --

CONSEQUENCES OF ADDING CONSTRAINTS TO REQUIREMENTS E.g. if a requirement is specified in purely behavioural terms, then meeting the requirement over any finite time period can always be done (in principle) by a design with a sufficiently large collection of shallow condition-action rules (where the conditions include histories). However, if further constraints are added, such as that the memory should not exceed a certain size, then it may be necessary to replace the set of shallow rules with something more abstract, with generative power, in order to produce the same (potentially huge) set of behaviours in the same environment. (I.e. making the same set of counter-factual conditionals true of the agent.) If an additional constraint is that the environment can change in ways that cannot be predicted by the designer, or, in the case of a natural system, if the environment changes in ways that produce requirements not covered by the specific evolutionary history of a species, then that requires a higher level of abstraction in the system, namely a learning capability. (Something can be generative without learning, like a parser for a fixed grammar.) If different niches require different sorts of learning capabilities, then the architecture may be provided with different learning mechanisms and, in addition, the ability to detect when to switch between them. However, if the sorts of learning capabilities needed are not themselves known in advance by a designer, nor acquired in the evolutionary history of a species, then a higher-level ability to learn new learning abilities will be needed. --
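To make the trade-off concrete, here is a minimal sketch (all names, rules and rewards are invented, not taken from any system discussed in this paper): a shallow history-indexed rule table can meet a finite behavioural specification but grows with it, while a compact generative rule, or a learning agent, covers cases the designer never enumerated.

```python
# Sketch only (hypothetical names): three designs meeting the same behavioural requirement.

# 1. Shallow design: every (history, percept) pair mapped to an action explicitly.
#    Memory grows with the number of situations that must be covered.
rule_table = {
    (("start",), "obstacle"): "turn_left",
    (("start", "turn_left"), "clear"): "move_forward",
    # ... one entry per anticipated history ...
}

def shallow_agent(history, percept):
    return rule_table.get((tuple(history), percept), "do_nothing")

# 2. Generative design: a small rule with generative power covers the same
#    (potentially huge) set of cases without enumerating them.
def generative_agent(history, percept):
    return "turn_left" if percept == "obstacle" else "move_forward"

# 3. Learning design: parameters adjusted from experience, for environments
#    the designer could not predict.
class LearningAgent:
    def __init__(self):
        self.value = {}  # (percept, action) -> estimated utility

    def act(self, percept):
        actions = ["turn_left", "move_forward"]
        return max(actions, key=lambda a: self.value.get((percept, a), 0.0))

    def learn(self, percept, action, reward):
        key = (percept, action)
        self.value[key] = self.value.get(key, 0.0) + 0.1 * (reward - self.value.get(key, 0.0))
```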

LEARNING TO LEARN Conjecture: Humans, and possibly some other animals, somehow evolved an ability to learn to learn, though it requires each new-born individual to go through a period in which older individuals systematically increase the difficulties of the challenges set, either through explicitly designed tasks or by steadily reducing the amount of protection and help provided to the learner, or some combination. There are many other sub-divisions among sets of requirements (niches) that impose constraints on designs that can meet the requirements. --

DEVELOPING AN ONTOLOGY FOR INTERNAL STATES For example, if the environment contains other individuals that process information then it is helpful to be able to predict or explain their actions. There are many ways this could be done, e.g. by using very large numbers of shallow correlations between observed contexts and observed behaviours. However, it may be possible to achieve far greater generality more economically if the description of other agents uses an ontology that refers not only to their behaviours, but also to information-processing mechanisms within them, e.g. mechanisms for forming beliefs, generating desires, dealing with conflicts of desires, forming and executing plans, acquiring information by perception and reasoning, etc., where the notions of beliefs, desires, etc. are not simply abstractions from behaviour but defined in terms of assumed internal architectures and mechanisms. I.e. evolution may have given some organisms the ability to use the 'design stance' in dealing with others: this is much deeper and more general than using the intentional stance (or Newell's 'Knowledge Level'), since the latter presupposes rationality, and therefore cannot be applied to animals that are not rational, since they do not reason, but nevertheless process information, e.g. insects, mice, frogs, chickens, human infants, etc. --

OTHER/SELF SYMMETRY Many of the benefits of being able to take the design stance directed at information-processors in dealing with others can also come from taking it in relation to oneself, e.g. learning to anticipate desires and beliefs one is going to have, being able to reason about percepts one would have as a result of certain actions without actually performing the actions, being able to notice and either compensate for or in some cases remedy flaws in one's own reasoning, planning, preferences, conflict-resolution strategies, emotional relations, or learning procedures. [JMC has a useful list.] This is sometimes referred to as having a 'reflective' layer in an architecture, though my students, colleagues and I have been using the label 'meta-management' since (a) in common parlance reflection is a rather passive process and (b) some members of the research community use the label 'reflective' to cover a narrower range of processes than we do. The label is not important as long as we understand the variety of possible types of functions and mechanisms that can support them. There are some people who argue that inward-directed reflective or meta-management capabilities require no other mechanisms than the mechanisms that suffice for adopting the outward-directed design stance. For just as one can observe the behaviour of others, form generalisations, make predictions, construct explanations, in terms of the internal information-processing of other individuals, so also can an agent use exactly the same resources in relation to itself, though there will be differences. E.g. there are differences arising out of the difference of viewpoint, which will sometimes make it easier to infer the state of another than to infer one's own state (e.g. because facial expressions of others can more easily be seen), and sometimes make it easier to infer one's own state because one's own behaviour is more continuously observable and sometimes in more detail. The above line of reasoning is sometimes used to claim that self-awareness requires no special architectural support. This claim is especially to be found in the writings of certain positivist or behaviourist philosophers, and those who like to repeat the quotation: 'How can I know what I think unless I hear what I say?' (Attributed variously to E.M.Forster, Graham Wallas, Tallulah Bankhead, and possibly others....) --

WHEN DO EXTRA ARCHITECTURAL FEATURES HELP? However, from a design stance it is clear that (a) not necessarily all internal states and processes of information-processing systems will be externally manifested (e.g. because available effectors may not have sufficient bandwidth, or for other reasons), (b) it is in principle possible for internal states and processes to be internally monitored if the architecture supports the right sort of concurrency and connectivity, and (c) in some cases the additional abilities produced by the architectural extensions may have benefits for the individual or the species, e.g. supporting high-level self-debugging and learning capabilities, or supporting the ability to short-circuit ways of asking for and providing help (e.g. when an oculist asks the patient to describe visual experiences instead of simply using behavioural tests of accuracy in catching, throwing, discriminating, etc.). In my case, for instance, I believe that the reason I have not suffered from RSI, despite spending a huge amount of time at a terminal each week, is that I earlier learnt to play musical instruments and discovered there (with the help of teachers) the importance of sensing the onset of stress and deliberately relaxing in order to produce a better tone, or smoother phrasing, or even simply in order to be able to play faster pieces at the required speed. Because I developed that internal monitoring capability when playing a flute or violin I also use it when typing, and at the first sign of stress am able to adjust various aspects of how I type. This requires architectural support for internal self-monitoring, not just the ability to observe my own behaviour as I would observe others. [I can give many other examples.] --
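As an illustration of the architectural point only (hypothetical class names; this is not the CogAff or SimAgent code), the sketch below shows a deliberative layer exposing internal state that never appears in external behaviour, and a concurrent meta-management layer that monitors that state directly through hooks the architecture provides.

```python
# Architectural sketch (hypothetical names): internal self-monitoring needs the
# right concurrency and connectivity, not just observation of outward behaviour.

class DeliberativeLayer:
    def __init__(self):
        self.current_plan = []
        self.replan_count = 0          # internal state, not externally visible

    def next_action(self, goal):
        if not self.current_plan:      # toy "planner": one-step plans, so it
            self.current_plan = [f"step_towards({goal})"]
            self.replan_count += 1     # replans every cycle (a kind of thrashing)
        return self.current_plan.pop(0)

class MetaManagementLayer:
    """Reads the deliberative layer's internal variables via architectural hooks."""
    def __init__(self, deliberative):
        self.deliberative = deliberative

    def monitor(self):
        if self.deliberative.replan_count > 3:          # notice internal thrashing
            self.deliberative.current_plan = ["ask_for_help"]
            self.deliberative.replan_count = 0          # and intervene

d = DeliberativeLayer()
m = MetaManagementLayer(d)
for _ in range(6):
    print(d.next_action("recharge"))
    m.monitor()
```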

ONTOLOGIES AND FORMS OF REPRESENTATION FOR INFORMATION-PROCESSING What I've written so far is very schematic and abstract. There is a lot more to be said about the different sorts of ontologies and different forms of representation and reasoning required for different kinds of self-awareness, and the different architectural options to be considered. One of the questions to be addressed is how good the 'folk psychology' ontology is for the purposes of a machine that thinks about machines that think. There is not just one FP ontology but several, found at different stages of individual human development, and in different cultures. FP ontologies (plural), like 'naive' conceptions of physical reality, are products of biological and social evolution, and, like all such products, they need to be understood in relation to the niches they serve. I claim that there is a core of those FP ontologies which is a useful precursor to a more refined, scientific, ontology for mental phenomena which is based on explicit recognition of the nature and functions of virtual machines. This useful core includes notions like: perception, desires, beliefs, preferences, learning, hopes, fears, attitudes, moods, puzzlement, understanding, unintentional action, surprise, pleasure, pain, introspection, etc. etc. But we can re-interpret all of those on the basis of an architectural theory. (Much work of that kind is going on in the Cognition and Affect project at Birmingham.) I.e. by adopting the design stance, we, and intelligent machines of the future, can improve on the core FP ontology in the same way as adopting something like the design stance to the physical world enabled us to extend and refine the 'folk physics' (for kinds of physical stuff, properties of physical stuff, processes involving physical stuff, etc.). That required a new theory of the architecture of matter. Likewise we'll need new theories of the architectures of minds: not 'architecture of mind', as some philosophers think, because there are many possible types of mind, with different architectures, whereas one architecture exists for matter (though there are different architectures in the same physical system, at different levels of abstraction, e.g. as studied by chemistry, materials science, geology, astrophysics, etc.). In some cases it may be possible to program an 'improved' ontology for mental phenomena and the appropriate forms of representation for expressing it, directly into some self-aware machines. In other cases it may be necessary for the ontology to be bootstrapped through processes in which the machine interacts with information processors (including itself), for instance using some kind of self-organising introspective mechanism, just as self-organising perceptual mechanisms can develop ontologies that effectively classify certain classes of sensory inputs (e.g. Kohonen nets). A self-organising self-perceiver can, in principle, develop an ontology for self-description that is a product of its own unique internal history and the particular initial parameters of its internal sensors, etc. Such agents may be incapable of expressing, in any kind of language used to communicate with others, the precise semantics of their self-descriptors. (At least those developed in this manner: others may be defined more by their structural and functional relationships, and those can be communicated.)
The 'non-communicable' (private) concepts developed in self-organising self-perceivers are 'causally indexical', in the sense of J. Campbell (1994) Past, Space and Self, as explained in my 2003 JCS article with Ron Chrisley. This causal indexicality suffices to explain some of the features of the notion of 'qualia' that have caused so much confusion among philosophers and would-be philosophers in other disciplines. (Some intelligent artifacts may end up equally confused.) --
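One toy way to picture the bootstrapping idea (a sketch under invented assumptions about the internal sensors, not a claim about any existing system) is a Kohonen-style self-organising map trained on vectors of internal readings; the map units that emerge play the role of private, causally indexical self-descriptors whose meaning depends on the agent's own history.

```python
import random

# Toy self-organising self-perceiver: a small Kohonen-style map clusters vectors
# of internal readings (invented here: cpu_load, queue_length, error_rate) into
# self-descriptive categories shaped by this agent's own history.

random.seed(0)
UNITS = 4                       # number of emergent self-descriptor categories
DIM = 3                         # number of internal sensors
weights = [[random.random() for _ in range(DIM)] for _ in range(UNITS)]

def best_matching_unit(sample):
    dists = [sum((w - s) ** 2 for w, s in zip(unit, sample)) for unit in weights]
    return dists.index(min(dists))

def train(samples, epochs=50, lr=0.3):
    for _ in range(epochs):
        for sample in samples:
            bmu = best_matching_unit(sample)
            for i, unit in enumerate(weights):
                influence = lr if i == bmu else lr * 0.1   # winner adapts most
                for d in range(DIM):
                    unit[d] += influence * (sample[d] - unit[d])

internal_states = [(0.9, 0.8, 0.1), (0.85, 0.9, 0.2), (0.1, 0.2, 0.0),
                   (0.2, 0.1, 0.05), (0.5, 0.5, 0.9)]
train(internal_states)
print("self-descriptor for current state:", best_matching_unit((0.88, 0.82, 0.15)))
```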

PERCEPTUAL SHORT-CUTS In general, getting from observed behaviour of something to a description of its internal information-processing may require convoluted and lengthy reasoning. ('It must have known X, and wanted Y, and disliked Z, because it did A and B, in circumstances D and E' etc.) (Compare debugging complex software systems.) However if getting the description right quickly is important, organisms can evolve or learn extra perceptual techniques for rapidly and automatically interpreting certain combinations of cues in terms of internal states, e.g. seeing someone as happy or intending to open the door, provided that there are certain regularities in the designs and functions of the systems being observed, which are revealed in patterns of behaviour. Those patterns can then be learnt as powerful cues (e.g. the pattern of behaviour indicating that someone is trying hard to see what's happening in a particular place.) It may also be useful for organisms to evolve expressive behaviours that make it easier for others to infer their mental state. Once that starts, co-evolution of expressive behaviours and specialised perceptual mechanisms can lead to highly expert perceptual systems. (I argued in a paper published in 1992 that if there were no involuntary expressions of behaviour, the costs to intelligent species would be too high, e.g. too much hedging of bets would be necessary: http://www.cs.bham.ac.uk/research/cogaff/0-INDEX81-95.html#10 ) This suggests that cooperating 'high level' expressive and perceptual capabilities co-evolved, as manifested in our immediate reaction to these pictures: http://www.cs.bham.ac.uk/~axs/fig/postures.gif http://www.cs.bham.ac.uk/~axs/fig/faces.gif Conjecture: both high level action mechanisms and high level perceptual mechanisms linked to the ontology and forms of representation of a meta-management (reflective) architectural layer evolved in nature, and will be needed in intelligent machines interacting with other intelligent machines, e.g. humans. (Note: this is not the same thing as training low level neural nets etc. to label faces as 'happy', 'sad', etc. as many researchers now do, for such systems have no idea what happiness and sadness are.)--

SUMMARY The ontology and forms of representation that are useful in thinking, reasoning and learning about the information-processing going on in other agents can also be useful in relation to oneself. To some extent the application of such an ontology to oneself can use the same perceptual mechanisms as suffice for its application to others. (Self/Other symmetry works up to a point.) However, the development of special-purpose architectural features can support additional self-referential capabilities that may produce designs specially suited to certain sets of requirements (niches). We need to understand the trade-offs. It is possible to produce very long lists of examples of ways in which self-awareness of various kinds can be useful. However, it will be useful if we can put such examples in the context of a general conceptual and theoretical framework in which trade-offs between different design options can be discussed in relation to different sets of requirements and design constraints. The analysis of such trade-offs may be far more useful in the long run than the kinds of arguments often found in the literature as to whether one design is better than another, or whether everything can be achieved with some particular class of mechanisms, where people are often merely attempting to support their own design preferences. I.e. we need to understand trade-offs when there are no right answers. In particular, such an analysis will not only clarify engineering design requirements, but may also give us a better understanding of how various kinds of self-awareness evolved in nature. We can thereby become more self-aware. [Apologies for length.]

[PS] As Ron Brachman (DARPA-IPTO) has put it: "A truly cognitive system would be able to ... explain what it was doing and why it was doing it. It would be reflective enough to know when it was heading down a blind alley or when it needed to ask for information that it simply couldn't get to by further reasoning. And using these capabilities, a cognitive system would be robust in the face of surprises. It would be able to cope much more maturely with unanticipated circumstances than any current machine can."

One of our MSc students recently used my SimAgent toolkit to extend a simulated, purely reactive sheepdog to a hybrid reactive/deliberative one, which switches between deliberative and reactive modes, e.g. when it forms a plan and, whilst acting on the plan, discovers that the world has changed and the plan must be abandoned, or can be improved or modified locally. (Compare Nilsson's teleo-reactive systems.) There's a movie of it here (example 5): There's also a very shallow toy demo of agents with 'self-knowledge' of their 'emotional' state. The methods used could work for more sophisticated examples. I could bring such demos to the workshop. By then I'll also know whether a large grant proposal to the European Commission relevant to all this has been successful.
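The demo itself is written in the SimAgent (Pop-11) toolkit; the fragment below is only an illustrative Python paraphrase of the control structure described, with invented action and predicate names: a teleo-reactive style loop that executes a plan while continuously checking whether the world has invalidated it, and switches to reactive recovery and re-deliberation when it has.

```python
# Illustrative paraphrase only (not the SimAgent/Pop-11 demo): a hybrid agent
# that switches between deliberative and reactive modes when its plan breaks down.

def make_plan(goal):
    return ["approach(sheep)", "steer(sheep, pen)"]      # placeholder deliberation

def plan_still_valid(world):
    return not world.get("sheep_scattered", False)

def control_loop(goal, world, max_steps=6):
    plan = make_plan(goal)
    actions = []
    for step in range(max_steps):
        if not plan_still_valid(world):                  # world changed under us
            actions.append("regroup(sheep)")             # reactive recovery
            world["sheep_scattered"] = False
            plan = make_plan(goal)                       # re-deliberate
        elif plan:
            actions.append(plan.pop(0))
        else:
            break
        if step == 0:                                    # simulate a surprise:
            world["sheep_scattered"] = True              # the flock scatters
    return actions

print(control_loop("sheep_in_pen", {"sheep_scattered": False}))
```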

Return to Top of Page

Bernard J. Baars

The Neurosciences Institute, San Diego Institute of Intelligent Systems, University of Memphis
baars@nsi.edu; www.nsi.edu/users/baars (For regular mail, please use BJB home address: 3616 Chestnut St., Apt. 3, Lafayette, Calif. 94549. 925-283-2601.)

(Co-author: Stan Franklin Institute of Intelligent Systems University of Memphis
franklin@memphis.edu)

Human self-awareness is deictically and mismatch-driven: Some implications for autonomous agents.

Abstract

A striking fact about human self-awareness is how little of it we seem to have. If we simply ask people about themselves, they typically give incorrect, biased or very fragmentary answers. A large empirical literature supports the notion that we simply do not know ourselves very well. What self-awareness most people do have is rarely explicit or voluntarily reportable. Yet there is much evidence for the existence of massive “self” (executive) functions in the brain, especially prominent in prefrontal, parietal, motor, limbic, and anterior cingulate cortices, and in subcortical regions like the PAG, MRF, hypothalamus and basal ganglia. These large areas are almost entirely unconscious at any given time. By comparison, what can become conscious at any time is a very small amount of information indeed, and heavily dependent upon object-specific regions of the sensory (posterior) cortex.

Yet at times we need to become conscious of our own patterns of behavior, especially when we make errors or try to achieve a new goal. Self-awareness is often mismatch-driven. That is, we become conscious of events when our expectations and intentions do not match the feedback we receive from the world. Emotionally painful experiences have long been conceived as mismatch-driven, and often lead to self-corrective learning, including improved metacognition. But in children even positive emotional experiences, like chase play, peek-a-boo, tickling, and positive self-display make use of surprise and uncertainty as an essential component.

Mismatch-driven self-awareness may optimize the tradeoff between a very complex executive system and a very narrow bottleneck of consciousness; in effect, it focuses conscious limited capacity on those aspects of ourselves that need repair in any given situation. Thus the default state of the executive appears to be unconscious, but specific executive functions may be “called” by the contents of consciousness when they lead to a perceived mismatch.

In addition to mismatch, human self-awareness is driven deictically, i.e., by the pointing out of our patterns of behavior by caretakers, teachers, and ultimately, ourselves. Human children spend much more time being socialized than any other species, and much social learning depends on idealized models and corrective information provided by others. “Deixis” is a linguistic term that applies to language-based strategies for referring to things, but it is not limited to language. For example, finger-pointing and gaze-following are forms of deixis in action, and human infants are distinctive in spontaneously following the gaze of caretakers. Children also enjoy pointing things out to others. Because we internalize perceived social standards, “autodeixis” --- pointing to something in ourselves --- is a particularly powerful predictor of human behavior. An entire class of emotions, the “self-conscious emotions,” like shame, embarrassment, guilt, and pride, are known to be extremely powerful in directing efforts at self-control.

These points have implications for artificial agents. Global Workspace Theory (GWT) and its detailed IDA implementation have addressed the puzzle of conscious limited capacity for a number of years (Baars, 1988; Franklin & Graesser, 2000). Compared to matched unconscious processes, conscious events show (1) radically limited capacity, (2) mandatory seriality, and (3) internal consistency. These properties are consistent with a global workspace (GW) architecture, that is, a massively parallel society of specialized processors that is endowed with a central information exchange, or global workspace. Cognitive scientists have studied such architectures for some decades. As a first approximation, if we assume that representations in the global workspace are conscious and the rest of the parallel architecture is not, we can account for the three empirical features listed above.
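A minimal sketch of that first approximation (invented names and salience numbers; this is not the IDA implementation): many specialist processors run in parallel, the most salient content wins serial access to the limited-capacity workspace, and that content alone is broadcast back to every specialist.

```python
# Toy global-workspace cycle (not IDA): parallel specialists post bids; only the
# winning content enters the limited-capacity workspace and is broadcast to all.

class Specialist:
    def __init__(self, name):
        self.name = name
        self.received = []               # what this specialist heard via broadcasts

    def propose(self, inputs):
        # return (salience, content) or None; salience here is just a lookup
        if self.name in inputs:
            return inputs[self.name], f"{self.name}: {inputs[self.name]}"
        return None

    def receive(self, broadcast):
        self.received.append(broadcast)

def workspace_cycle(specialists, inputs):
    bids = [p for p in (s.propose(inputs) for s in specialists) if p]
    if not bids:
        return None
    _, winner = max(bids)                # serial, limited-capacity access
    for s in specialists:
        s.receive(winner)                # global broadcast back to the society
    return winner

specialists = [Specialist(n) for n in ("vision", "audition", "error_monitor")]
print(workspace_cycle(specialists, {"error_monitor": 0.9, "vision": 0.4}))
```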

Executive controls (“self systems”) can operate by way of the GW in order to exert control over a large set of semi-autonomous specialized processors. This view is consistent with the observation that all conscious events are attributed to a coherent but unconscious “self” in normal human functioning. Conscious perceptual events, for example, are always attributed to the same executive observer, and consciously-mediated voluntary actions are always attributed to the self as observer. This is even true when the self-system can be shown to be self-contradictory, neurally split, or functionally divided. Indeed, consciousness itself may be defined as a flow of brain information that is interpreted by coordinated executive regions, notably in parietal and prefrontal cortex (but not limited to it). In addition, this flow probably must be source-attributed to an object-like event in order to be conscious. Input that is not so interpreted is usually not treated as conscious.

Our working hypothesis has been that the limited capacity of conscious events is not a dysfunctional by-product of evolution, but reflects a functional tradeoff faced by very large, parallel “societies” of specialized processors in large brains. In computer science, such an architecture helps to solve the “relevance problem,” the question of how to access a specific knowledge source, or set of knowledge sources, that is capable of solving a problem posed in the global workspace. Global workspace architectures have therefore been used to solve difficult practical problems for some decades (Newell, 19xx; Franklin, 2000; Franklin et al, under review).

Artificial agents. If this functional tradeoff is generally true of massively parallel biological “societies” of specialized systems it may apply not just to humans, but also to artificial agents. We will discuss how this applies to the IDA implementation of GWT.

Recent brain imaging studies have given considerable support to GWT, for example in demonstrating widespread activation due to conscious (but not unconscious) sensory input (Baars, 2002). Four types of unconscious states show striking metabolic deficits in regions of the brain associated with executive functions (Baars et al, 2003). In addition, we have recently shown how the relationship of consciousness to cognitive working memory can be accounted for in these terms (Baars & Franklin, 2003). We are currently developing ways to integrate human memory systems in this theoretical framework (Franklin et al, under review).

Return to Top of Page

Brian Williams and Greg Sullivan

MIT CSAIL

Our past research has focused on languages, execution architectures and algorithms for enabling self-aware, reactive, and robust space systems. Our current work extends these capabilities to a wider range of hardware and software systems, from mobile robots, to cooperative air vehicles. We are excited to participate in the creation of the next generation of self-aware systems.

Model-based Programming: Our core research has focused on designing and implementing algorithms that enable autonomous systems to continuously monitor their internal behavior from available sensors, to detect when these behaviors deviate from intended behavior, and to plan actions in order to recover from unplanned or undesired situations. In order to dynamically detect these deviations and to either repair or work around problems, a system needs to be self-aware. In particular, a system must be aware of its evolving goals, as well as methods and constraints available for achieving those goals.

A unifying theme of our work is Model-Based Programming [AI Mag Fall 03]. In model-based programming, application development consists of creating control programs, which specify a system’s intended state evolution over time, without specifying how these states are either deduced or achieved. Model-based programming assumes that the executive of the control program is state aware, in that it continuously knows the system’s state, at the appropriate level of abstraction, and continuously acts to achieve the intended state, as is specified in the control program. The executive is fault aware, in that it is able to continuously detect and compensate for unintended behaviors. A model-based program also includes a specification of component models of the system, which the executive reasons about in order to control and monitor with respect to the intended state. Several components of model-based programming and the model-based execution architecture stand out as directly enabling the development of self-aware computer systems:

1. System Models: Models of the system under control identify how the components of a system interact, and how component behaviors evolve over time through probabilistic transitions between states. Importantly, models include specifications of faulty behavior as well as normal operation. The transitions and behaviors are specified using a logical constraint language that is also used as the language of goals, plans, and contingencies.

2. Uncertainty and Belief States: Traditionally, applications are assumed to have direct access to and control over their internal state. That is, a program can read and write available memory. Model-based programs broaden the notion of state to include states outside of the computer, for example, hardware states internal to a robot, or states contained by the external environment. Model-based programming assumes only indirect control over state. The value of each state variable is estimated as a belief state – a distribution over possible values. Likewise, when an application attempts to modify a system state variable, that new assignment (e.g. projector1 := On) is treated as a goal that is to be achieved.

3. Goals and Plans: Control programs specify imperative actions of the application through abstract specifications of intended system state as it evolves over time. The model-based executive tracks this specification by continuously constructing and executing plans.

4. Model-based Executive: Model-based programming assumes a runtime component, called the model-based executive, which has two primary subsystems: (1) Mode Estimation is responsible for calculating the belief state of the system, given all observations and interactions with the system; (2) Mode Reconfiguration is responsible for forming plans based on the current goals and the current belief state, and for executing those plans effectively. Mode estimation and reconfiguration are continually active, so that once a plan is constructed, the executive monitors progress of the plan and reacts accordingly to new information, including a continuous update and modification of the plan.

5. Model-Predictive Dispatch: Traditionally, a program will have one function or method that implements a particular algorithm or task (e.g. turn_on_projector()), while decision making is implemented using conditionals in the core language. Dynamic dispatch in object-oriented languages abstracts a common conditional idiom: that of choosing an implementation based on the types of function arguments. Model-based programming abstracts the decision making process even further, by using model-predictive dispatch. It is assumed that there may be many different functionally redundant ways to achieve a given task. Different methods (subplans) may have postconditions that promise achievement of a current goal, but the methods may have different applicability contexts (preconditions) as well as different performance profiles (time, space), resource requirements, or reliability, which affect the (in)feasibility and utility of using different method combinations. Model-predictive dispatch is an element of a model-based executive that plans into the future the selection of a set of methods that feasibly and optimally achieve the intended behavior, specified within the control program.

Together, these five aspects of model-based programming support the creation of robust, fault-aware systems that incorporate reasoning into the execution process and are aware of their own goals, plans, resources, and constraints. Model-based systems are built to handle uncertainty, changing demands, and both internal and external failures. Over the past decade we have developed and deployed a range of state and fault aware systems, starting with the NASA Deep Space 1 mission in 1999.
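The sketch below is a drastically simplified illustration of the mode estimation / mode reconfiguration split described in item 4 (all component models, probabilities and mode names are invented; this is not the deployed executive): estimation updates a belief state over a component's modes from a command and an observation, and reconfiguration picks an action when the most likely mode differs from the goal mode.

```python
# Simplified sketch of a model-based executive (not the deployed MIT executive):
# mode estimation tracks a belief state over a component's modes; mode
# reconfiguration proposes a repair action when the goal mode is not the most likely.

# Component model: P(next_mode | mode, command) -- all numbers invented.
TRANSITIONS = {
    ("off", "turn_on"):  {"on": 0.95, "failed": 0.05},
    ("on",  "turn_on"):  {"on": 0.99, "failed": 0.01},
    ("failed", "reset"): {"off": 0.9, "failed": 0.1},
}
# Observation model: P(observation | mode).
OBSERVATIONS = {"on":     {"light": 0.9,  "dark": 0.1},
                "off":    {"light": 0.05, "dark": 0.95},
                "failed": {"light": 0.0,  "dark": 1.0}}

def mode_estimation(belief, command, observation):
    predicted = {}
    for mode, p in belief.items():
        for nxt, q in TRANSITIONS.get((mode, command), {mode: 1.0}).items():
            predicted[nxt] = predicted.get(nxt, 0.0) + p * q
    posterior = {m: p * OBSERVATIONS[m][observation] for m, p in predicted.items()}
    total = sum(posterior.values()) or 1.0
    return {m: p / total for m, p in posterior.items()}

def mode_reconfiguration(belief, goal_mode):
    likely = max(belief, key=belief.get)
    if likely == goal_mode:
        return None                          # intended state believed achieved
    return "reset" if likely == "failed" else "turn_on"

belief = {"off": 1.0}                        # goal: projector on (cf. projector1 := On)
belief = mode_estimation(belief, "turn_on", "dark")
print(belief, mode_reconfiguration(belief, "on"))
```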

Model-based Programming “all the way down”: Our goal is to evolve complex self-aware systems over time, through the creation of many layers of self-aware systems. Any given system, software or hardware, can be modeled as having some observations and some methods of control. As such, we can model the system and then run a model-based executive that can monitor and influence the system in terms of goals, constraints, and performance criteria.

This point of view also gives us an incremental adoption strategy. Current systems can be viewed as degenerate model-based systems, with the operating system and CPU serving as the executive. The belief states are unitary and have no notion of uncertainty, and goals are given as state updates and are assumed to be immediately satisfied.

However, we can layer model-based executives, models, and model-based control programs on top of existing software and hardware components. A trivial model-based program can simply start and stop the system under control, and observe whether or not the system is running. We likely will want to instrument the underlying systems to expose more behavior and control mechanisms. At that point, we can add more precise models and exert more interesting control.
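As a concrete illustration of the trivial end of this spectrum (a hypothetical wrapper, not an existing tool), the class below treats an existing program as a two-mode component: the only observation is whether it is running, and the only controls are start and stop, issued to achieve a goal state rather than as procedure calls.

```python
import subprocess

# Hypothetical minimal model-based wrapper around an existing program: the only
# observation is "running or stopped", and the only controls are start and stop.

class TrivialModelBasedWrapper:
    def __init__(self, command):
        self.command = command           # e.g. ["python", "legacy_service.py"]
        self.process = None

    def observe(self):
        running = self.process is not None and self.process.poll() is None
        return "running" if running else "stopped"

    def achieve(self, goal):
        """The goal is a desired state ('running' or 'stopped'), not a procedure."""
        if goal == "running" and self.observe() == "stopped":
            self.process = subprocess.Popen(self.command)
        elif goal == "stopped" and self.observe() == "running":
            self.process.terminate()
        return self.observe()
```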

Research Directions / Challenges: Model-based programming addresses head-on several key aspects of self-aware systems: uncertainty, abstract decision making, explicit goals and plans, and active monitoring and reconfiguration. Trying to incorporate these elements into reactive, reflective, and self-aware systems immediately poses some serious long term research challenges:

1. Once we acknowledge only indirect control and observation of our world, we immediately encounter a state space explosion. That is, if we must track the likelihood of all possible (however unlikely) state configurations, we immediately run out of time and memory. Thus we have pursued research into optimal constraint satisfaction algorithms that deal with uncertainty and that prune the explored state space in a principled manner.

2. The real world has many continuous (as opposed to discrete) valued objects, and we are researching the problems inherent in reasoning over hybrid (continuous and discrete) systems.

3. To function at reactive timescales, planning and estimation must be anytime, anyspace algorithms. That is, they must be as accurate as possible for any given limits on time or memory.

We feel that we bring to the table significant experience relevant to creating the next generation of self-aware systems. We are happy to present our own experience, our understanding of the current state of the art in related research areas, and to help devise an ambitious research program in self-aware cognitive systems.

Return to Top of Page

Daniel Bobrow and Markus Fromherz

Palo Alto Research Center

Compositional Self-awareness

In this brief paper we discuss self-awareness from the point of view of the design of modular, real-time systems. In particular, we want to consider properties of a system that performs real-time actions in the world, perhaps in coordination with other self-aware agents, and that is compositionally self-aware, meaning that it is composed of tightly coupled, self-aware parts that are together achieving the goals of the system. We have been engaged in the construction of systems that demonstrate some of the capabilities we describe, where semi-independent modules participate in the flexible manufacture of objects whose specification is not known until the job is requested, and the state and capabilities of the modules can change over time.

For a system to be considered self-aware, we insist that an agent interacting with the system be able to request and receive an appropriate description of its internal state – at the moment, perhaps with some history of past states, and possibly predictions of future states. A compositional system has at least one “top-level” agent that provides its status information to the outside world, while others report to agents within the system. In addition, we require that each agent at least (see the interface sketch after the list below):

1) accept information from its context that it can use to derive goals that it will strive to achieve;

2) provide feedback about whether it achieved its goal - the lowest level of self-awareness is to know the difference between an expected and actual state/behavior;

For a system composed out of tightly coupled modules interacting with each other and the environment in real time, we expect that an agent controlling a module

3) exhibit several levels of awareness in its communication, demonstrated by:

a. capability feedback: knowledge of what it (and its constituents) can do, with cost and constraints;
b. resource feedback: resources available for current task and prediction of future availability (expectation over multiple goals);
c. intention feedback: whether or not it “intends” to try to achieve its goal;
d. progress feedback: how far it is in achieving its goal, including agent state and indication of how well it is doing on achieving its goal;
e. diagnostic feedback: what is standing in the way of it achieving its goal;

4) respond to changes of internal state and external demands by changing its behavior in a fluent and dynamic fashion.
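A minimal sketch of such an agent interface (method and field names are ours, not the PARC system's): each requirement and feedback kind above becomes something a parent controller can query.

```python
# Sketch of a self-aware module interface (hypothetical names): the numbered
# requirements and feedback kinds above become queryable methods.

class SelfAwareModule:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities   # e.g. {"print_page": {"cost": 1.0}}
        self.goal = None
        self.progress = 0.0
        self.fault = None

    # 1) accept contextual information from which goals are derived
    def set_goal(self, goal):
        self.goal = goal
        self.progress = 0.0

    # 3a) capability feedback and 3b) resource feedback
    def describe_capabilities(self):
        return self.capabilities

    def describe_resources(self):
        return {"available": self.fault is None}

    # 3c) intention feedback and 3d) progress feedback
    def intends_to_achieve(self):
        return self.goal is not None and self.goal in self.capabilities

    def report_progress(self):
        return {"goal": self.goal, "fraction_done": self.progress}

    # 2) expected vs. actual state and 3e) diagnostic feedback
    def report_diagnosis(self):
        return self.fault or "nominal"
```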

Issues in designing a composable self-aware real-time system

In this section we will reflect on issues that we have found in building a primitive example of such a composable system. The system that we have in mind is an intelligent, reconfigurable, yet tightly integrated production system with a hierarchical control structure, with a number of agents that communicate across multiple levels and with “neighbors” in the process flow. This system is given an overlapping series of goals to manufacture a stream of objects, where the objects are specified by their output characteristics, and not by how they are to be made. The production system is to perform this series of manufacturing tasks, while maximizing system productivity, minimizing use of resources, and dealing with errors, interruptions, changes of properties of internal module capabilities, etc.

In a compositional system, the capabilities of the system arise out of the capabilities of its constituent modules. As the modules go, so goes the system. In a compositional self-aware system, modules advertise their capabilities, which are integrated at multiple levels to determine and plan the actions that will achieve the goals. While this approach is becoming possible in model-based systems, it is much harder to also integrate knowledge about deviations from the advertised behavior.

A fundamental issue in such a system is the tension between each module’s local knowledge and its global impact: a module, to a first order, primarily has its local sensors to monitor system behavior (and it is also good design to isolate and encapsulate knowledge of the working of a module); at the same time, in a tightly coupled system, any deviation from a module’s contracted behavior has immediate and potentially serious impact on the overall system state. Delays in one module may lead to violated assumptions in another module; compensation for local deviations may be out of sync with the actions of neighboring modules. As a consequence, any unplanned interaction has to be understood immediately in context in order to take appropriate action to correct or contain the behavior. This leads to another tension: any feedback in a self-aware real-time system has to be time-bounded, but constant communication between modules is expensive.

Architecture design principles for compositional self-aware systems

We have started to design and implement a control system for the production system outlined above that addresses the above tensions in order to enable a robust self-aware system. We used the following design principles as guidelines in the architecture of the control system.

Real time: divide control roles according to different time horizons and cycle times (e.g., separate control of a module’s actuators from coordination of multiple modules), abstracting knowledge to the right granularity as necessary. Awareness of state is defined in terms of the time granularity of the operations.

Closed loop: allow for feedback throughout the system, degrade gracefully (e.g., enable feedback at all levels). Closed loops are a mark of self-awareness within a time level.

Coordination facilitation: where multiple modules interact in an immediate sense and require integrated feedback of their actions, introduce controllers that facilitate the coordination of these modules. Such floating controllers may exist only temporarily and expressly for the purpose of a particular task, in a sense adding a wider “consciousness” to the self-awareness of an individual module.

Encapsulation: keep knowledge together, and act where the knowledge is (e.g., all models for planning and scheduling are in a planner/scheduler (P/S), so all decisions about plans, rerouting, exception handling, etc. have to be made by the P/S; a downside is that many changes in execution may have to be escalated up to the P/S, hence the need for delegation).

Delegation: when dividing up a task onto multiple controllers, provide context to give the controllers some freedom to react to disturbances (e.g., the P/S can give a first level module controller (FLMC) some abstracted information about other such FLMC’s and provide it generic goal boundaries to allow it to make changes within those boundaries). Similarly for each level down in the time hierarchy.

Autonomy: react locally while behavior is within bounds; keep changes local if possible. Also act at the appropriate time level.

Escalation: report feedback when behavior is out of bounds; react at the appropriate awareness level. Autonomy and escalation determine a trade-off between locally fast and globally appropriate action.

Explicit contracts: in reconfigurable systems, modules are not designed with the (unknown) whole system in mind, so represent information about capabilities and context explicitly in order to enable reasoning about module capabilities and their interactions (e.g., module models are the contract from the FLMC to the P/S of what behaviors can be executed, and goal boundaries (envelopes) are the contract from the P/S to FLMCs about what they are allowed to change and what not, etc.).

These principles together yield a compositionally self-aware system in which knowledge about states and capabilities is shared as appropriate and no more, in turn leading to a system that is efficient in its separation of concerns and still robust in the face of real-time distributed actions of tightly coupled modules.
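To illustrate the delegation/autonomy/escalation trade-off in isolation (a sketch with invented names and numbers, not the control system itself): a first level module controller acts locally while a deviation stays inside the goal envelope delegated to it by the planner/scheduler, and escalates when the envelope is violated.

```python
# Sketch only (invented names): delegation gives a module controller an envelope
# of allowed deviation; it reacts locally inside the envelope, escalates outside it.

class ModuleController:
    def __init__(self, name, envelope, escalate):
        self.name = name
        self.envelope = envelope          # max tolerated deviation, set by the P/S
        self.escalate = escalate          # callback up to the planner/scheduler

    def handle_deviation(self, expected, actual):
        deviation = abs(actual - expected)
        if deviation <= self.envelope:                                     # autonomy
            return f"{self.name}: local compensation of {deviation:.2f}"
        return self.escalate(self.name, deviation)                         # escalation

def planner_scheduler(module, deviation):
    return f"P/S: replanning around {module}, deviation {deviation:.2f}"

flmc = ModuleController("feeder", envelope=0.05, escalate=planner_scheduler)
print(flmc.handle_deviation(expected=1.00, actual=1.03))   # stays local
print(flmc.handle_deviation(expected=1.00, actual=1.20))   # escalates to the P/S
```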

Return to Top of Page

Deborah L. McGuinness

Associate Director Knowledge Systems Laboratory Gates Computer Science Building, 2A Room 241 Stanford University, Stanford, CA 94305-9020 email: dlm@ksl.stanford.edu URL: (voice) 650 723 9770 (stanford fax) 650 725 5850 (computer fax) 801 705 0941

(...) I propose that a strong theme in self aware systems is that the system should be able to explain itself. It should be able to explain what sources were used to provide an answer, what manipulations, if any, were done to the information in the sources, what answer(s) it returned and why, any tradeoffs that were made during the process, any assumptions that were used, and any issues that impacted its "decisions", such as resource limitations cutting off computation. It should also be able to explain its plan if it made one, whether it executed successfully, etc.
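One way to picture that requirement (a hypothetical record layout of our own, not the Proof Markup Language schema) is a justification object attached to every answer, carrying exactly the items listed above: sources, manipulations, assumptions, trade-offs, and resource-limit notes.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical justification record (not PML itself): the things the paragraph
# above says a self-aware system should be able to explain about an answer.

@dataclass
class Justification:
    answer: str
    sources: List[str] = field(default_factory=list)        # where information came from
    manipulations: List[str] = field(default_factory=list)  # inference/aggregation steps
    assumptions: List[str] = field(default_factory=list)    # defaults relied upon
    tradeoffs: List[str] = field(default_factory=list)      # choices made along the way
    resource_notes: List[str] = field(default_factory=list) # e.g. "search cut off at 5 s"

# Purely illustrative values:
j = Justification(
    answer="flight AA123 departs 09:40",
    sources=["airline_timetable_v2"],
    manipulations=["timezone conversion to local time"],
    assumptions=["timetable is current"],
    resource_notes=["only one source consulted due to time limit"],
)
```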

As you know, explanation is the main goal of my research on the Inference Web. This work is aimed at increasing trust in answers from systems by enabling those systems to explain themselves. Inference Web and our Proof Markup Language for enabling interoperability of proofs are currently being driven by some government funding and programs - most notably currently by IPTO's PAL program, IXO's DAML program, and ARDA's NIMD program.

Your web page asks how I might contribute to the workshop - I would be happy to run a session on explainability of self-aware systems, provide background reading and examples, and address the questions on your page. In particular -I would like to actively participate in a discussion about representations and architectural features required for self-awareness. I would propose that something like our Proof Markup Language and Inference Web infrastructure at least enables self-aware systems and may be required in the face of needing interoperability of proofs. I further claim that without interoperability of some kind of proof or justification, we will not enable interoperability of trusted agents. (...)

Return to Top of Page

Drew McDermott

Yale University

My main interest in self-aware systems is from a philosophical point of view. I am the author of the book Mind and Mechanism, (MIT Press, 2001), which lays out a theory of consciousness applicable to both robots and humans. I believe that we can understand consciousness with techniques that are available today, but that consciousness is useful, or even recognizable, only in an intelligent system. So the main obstacle to having conscious systems is that we don’t have truly intelligent systems yet. By that I mean that our programs are so specialized and so ineffective that they are unable to inhabit the “real” world, and so don’t have to think about their role in it. Deep Blue was not conscious because the world it thought about was the world of chess, and in that world there are pawns and bishops, but there is no Deep Blue. It wasn’t able to notice the fact that it analyzed ply on which it had the move very differently from the way it analyzed ply on which its opponent had the move, so it never had to cope with ramifications of that fact.

Is “self awareness” the same as consciousness? There’s probably no answer to this question, because the terms on both sides of the equation are not terribly well defined. There have been many suggestions about the usefulness of self awareness, but they often turn out to use the term “self awareness” to mean things like “knowing what you don’t know.” This line of inquiry then leads to nonmonotonic inference or belief revision. Other definitions lead to other subfields of AI, most not as glamorous as the phrase “self awareness” might promise.

A workshop in this area should, in my opinion, focus on these unglamorous topics, and try to indicate where progress has been made and where it is most likely in the short-term and intermediate-term future.

Return to Top of Page

Don Perlis, Mike Anderson, Tim Oates

Comp Sci Dept, U of Maryland, College Park MD 20742 phone 301.405.2685 fax: 301.405.6707

THE METACOGNITIVE LOOP

Motivation


Common sense, as we understand it, is different from expert or special cleverness; instead, it is a kind of general-purpose reasoning ability that serves agents to 'get along' (learning as they go) in a wide and unpredicted range of environments. This kind of reasoning requires the ability to recognize, and initiate appropriate responses to, novelty, error, and confusion. Examples of such responses include learning from mistakes, aligning action with reasoning and vice versa, and seeking (and taking) advice.

People tend to do this well. Is this ability an evolutionary hodgepodge, a holistic amalgam of countless parts with little or no intelligible structure? Or might there be some few key modular features that provide this capability? We think there is strong evidence for the latter, and we have a specific hypothesis about it and how to build it in a computer. In a nutshell, we propose what we call the metacognitive loop (MCL) as the essential distinguishing feature of commonsense reasoning. And we claim that the state of the art is very nearly where it needs to be, to allow this to be designed and implemented.

Idea and Rationale

Our postulated 'metacognitive loop' in both human and machine commonsense reasoning allows humans (and should allow machines) to function effectively in novel situations, by noting anomalies and adopting strategies for dealing with them. The loop has three main steps:

i. monitor events, to note a possible anomaly
ii. assess its type and possible strategies for dealing with it
iii. guide one or more strategies into place while continuing to monitor (looping back to step i) for new anomalies that may arise either as part of the strategy underway or otherwise

People clearly use something very like MCL to keep an even keel in the face of a confusing, shifting world. This is obvious at the level of individual personal experience: we often notice things amiss and take appropriate action. In addition there is empirical evidence for this in studies of human learning strategies, where, for instance, an individual tasked with memorizing a list of foreign-language word-meaning pairs will make judgments of relative difficulty along the way, in framing study strategies.

We suspect that MCL is profoundly involved in many human behaviors well beyond formal learning situations, and that there is a specialized MCL module that carries out such activity on a nearly continual basis, without which we would be everyday incompetents (although perhaps idiot savants) as opposed to everyday commonsense reasoners. However, while we are interested in gathering additional information about MCL behaviors in humans, our main focus is on taking this as motivation for building a similar capability into computers.

MCL is largely a system-self-monitoring process, in that what it monitors centrally includes the system's own evolving knowledge base and own evolving history of activity. In particular, a major way that anomalies are noted is as formal direct contradictions in the system's KB. For instance, the system expects E and yet finds -E: both E and -E are in the KB, one perhaps as a default (what is expected) and one perhaps as a perceptual input. This is the noting; something is amiss. Next comes the assessing: what sort of anomaly might it be (perceptual error, mistaken default, etc.) and what might be done (check sensors, gather more data, ask for help, etc.). Lastly, do one or more of those and check progress. Might this very process get in its own way, e.g. by taking up so many resources that the original goals are never achieved? This is a research issue, but our studies so far suggest that MCL need not be computationally expensive.
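A toy rendering of the note-assess-guide cycle on a contradiction of this kind (names and the crude anomaly typology are ours, not the Maryland implementation):

```python
# Toy metacognitive loop (not the UMD system): note direct contradictions in the
# KB, assess what kind of anomaly they signal, and guide a response into place.

def note_anomalies(kb):
    """A direct contradiction: both E and ('not', E) are present in the KB."""
    return [fact for fact in kb if ("not", fact) in kb]

def assess(anomaly, provenance):
    # crude typology: stale default vs. expectation violated by perception
    if provenance.get(anomaly) == "default":
        return "retract_default"
    return "recheck_sensor"

def guide(strategy, anomaly, kb):
    if strategy == "retract_default":
        kb.discard(anomaly)
    elif strategy == "recheck_sensor":
        kb.discard(("not", anomaly))     # placeholder for re-sensing

kb = {"door_open", ("not", "door_open"), "light_on"}
provenance = {"door_open": "default", ("not", "door_open"): "percept"}

for anomaly in note_anomalies(kb):           # i. monitor
    strategy = assess(anomaly, provenance)   # ii. assess
    guide(strategy, anomaly, kb)             # iii. guide, then keep monitoring
print(kb)
```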

Applications

Natural Language Human-Computer Dialog

Miscommunication is prevalent in dialog, especially human-computer dialog. Our work to date has indicated that the metacognitive loop can be a powerful tool for helping automated systems detect and recover from miscommunications. For instance, if a word is used that the system does not know, this can be noted, allowing the system to ask for specific help ('What does that word mean?'). Other examples include reasoning about ambiguous words, and appropriately retracting invalidated implicatures and presuppositions. The long-range aim is a computer system that, via dialog with humans, learns a much larger vocabulary in the process of noting and correcting its misunderstandings, akin to a foreigner learning a new language.

Reason-Guided Q-Learning

We have shown that this standard reinforcement-learning algorithm, which tends to perform poorly (recover slowly) when there is a sudden change in reward structure, can be significantly enhanced when coupled with even a minimal form of MCL.
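The flavour of that coupling can be sketched as follows (a toy, one-state bandit simplification of our own, not the published experiments): a Q-style learner whose metacognitive monitor watches recent reward-prediction errors and, when they spike after the reward structure changes, responds by raising the exploration rate so recovery is fast.

```python
import random

# Toy MCL-enhanced learner (illustration only): the monitor notes a jump in
# prediction error and guides a response -- temporarily boosting exploration.

random.seed(1)
ACTIONS = [0, 1]
q = {a: 0.0 for a in ACTIONS}
alpha, epsilon = 0.2, 0.1
recent_errors = []

def reward(action, phase):                 # the reward structure flips at step 200
    best = 0 if phase == "before" else 1
    return 1.0 if action == best else 0.0

for step in range(400):
    phase = "before" if step < 200 else "after"
    action = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    r = reward(action, phase)
    error = r - q[action]
    q[action] += alpha * error

    # metacognitive loop: note an anomaly (error spike), assess, guide a response
    recent_errors = (recent_errors + [abs(error)])[-5:]
    if len(recent_errors) == 5 and sum(recent_errors) / 5 > 0.3:
        epsilon = 0.5                      # anomaly noted: explore more
    else:
        epsilon = max(0.1, epsilon * 0.95) # no anomaly: settle back down

print(q, "epsilon:", round(epsilon, 2))
```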

Return to Top of Page

Eyal Amir

Computer Science Department University of Illinois at Urbana-Champaign Urbana, IL 61801, USA eyal@cs.uiuc.edu

Self-Aware Software (Position Statement) March 2, 2004

1. Agents Acting in Rich and Complex Worlds

Text-based adventure games have many challenging properties for cognitive agents, including incompletely specified goals, an environment revealed only through exploration, actions whose preconditions and effects are not known a priori, and the need for commonsense knowledge for determining what actions are likely to be available or effective. These qualities require an agent that is able to use commonsense knowledge, make assumptions about unavailable knowledge, revise its beliefs, and learn what actions are appropriate. At the same time, more traditional robotics problems arise, including sensing, object classification, focusing on relevant features of a situation, reasoning within context, and decision-making, all within a large state space.

In my work on adventure games I develop methods for controlling players (software agents). This involves

1. representing the information that the game presents to the user using logic and probabilities,
2. updating the belief state of the user (the software agent),
3. learning the behavior of the world from the available partial information,
4. exploring the domain using commonsense information, and
5. decision making with the aid of the above and commonsense knowledge.

Computational advances that I’ve made together with Stuart Russell [Amir and Russell, 2003] show promise for an exact, polynomially computable solution to the first problem in the settings that we are interested in. Such results are crucial if we want to act effectively in a world that has many (>100) objects, locations, and relationships between them. Only a few of these are observable at any one time, and many of them are necessary for solving the problem at hand. Recent work on the second problem shows promise for solving this problem in a tractable fashion as well.

I believe that this domain (adventure games) is a rich environment for testing and evaluating self-aware agents. For example, the agent must explore the world, making decisions about actions without available plans (one cannot plan to the goal in adventure games). For this exploration it must use common sense knowledge, such as that keys are sometimes under mats, or that doors can be opened, but trees usually cannot. I plan to use a commonsense knowledge base derived from several sources (e.g., the WWW, OpenMind [Singh et al., 2002], others).

One of the main uses of self-awareness for this task is for determining the context for commonsense. Every piece of knowledge that we will use is surrounded by its context (e.g., a sentence is surrounded by its page), and our agent’s current knowledge and estimate for relevant information serves as the context, a search pointer into this contextualized knowledge base. With this context we can determine what knowledge is relevant and correct in our situation, and include this knowledge in making decisions.

2 Software that Executes with Some Variables Unknown

Recent advances in logical filtering technology [Amir and Russell, 2003] allow the creation of software that executes with only some of the variables fully specified. For example, assume that x is a variable taking integers from an unknown source in the range -32K to 32K. We may be able to execute our program without knowing x’s value, possibly coming out with the output “if x > 0, then the output is A; otherwise, the output is B”. This idea may be generalized to compute the output of programs with only partial input, and also to execute programs that give advice to the user on the effects his/her input will have.
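One way to picture the idea is a tiny evaluator that carries the unknown input through the program and returns a conditional answer rather than a single value. The program and the UNKNOWN marker below are invented for illustration; this is not the logical-filtering algorithm of [Amir and Russell, 2003].

# Sketch: execute a simple conditional program with the input only partially
# specified, producing a symbolic ("if ... then ... else ...") answer.

UNKNOWN = object()

def run(x):
    """Program: output 'A' when x > 0, otherwise 'B'."""
    if x is UNKNOWN:
        # Cannot decide the branch; return both outcomes, guarded by the condition.
        return "if x > 0, then the output is A; otherwise, the output is B"
    return "A" if x > 0 else "B"

print(run(7))        # concrete input  -> "A"
print(run(UNKNOWN))  # unknown input   -> conditional answer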

3 Emotional Systems

Recent advances in cognitive psychology and neuro-biology suggest that emotions are essential for decision making (e.g., [Goleman, 1995]). This is in contrast to traditional work in AI, which leaves aside the sources of utility and values and assumes that rationality is the only needed concept for intelligent, successful behavior. Behavior-based systems that incorporate both knowledge and reactivity (e.g., [Amir and Maynard-Zhang, 2004]) offer a basis for building a system that combines emotions and rationality in a fashion similar to that of biological systems.

References

[Amir and Maynard-Zhang, 2004] Eyal Amir and Pedrito Maynard-Zhang. Logic-based subsumption architecture. Artificial Intelligence, 153:167–237, 2004.
[Amir and Russell, 2003] Eyal Amir and Stuart Russell. Logical filtering. In Proc. Eighteenth International Joint Conference on Artificial Intelligence (IJCAI ’03), pages 75–82. Morgan Kaufmann, 2003.
[Goleman, 1995] Daniel Goleman. Emotional Intelligence. Bantam Books, 1995.
[Singh et al., 2002] Push Singh, Thomas Lin, Erik T. Mueller, Grace Lim, Travell Perkins, and Wan Li Zhu. Open mind common sense: Knowledge acquisition from the general public. In Proceedings of the First International Conference on Ontologies, Databases, and Applications of Semantics for Large Scale Information Systems, LNCS. Springer-Verlag, 2002.

Return to Top of Page

James P. Van Overschelde

I am currently a post-doctoral fellow working at the University of Maryland in College Park jimvano@psyc.umd.edu

I am currently investigating metacognitive processes (i.e., one's awareness of internal cognitive states and processes). My research focuses on the information used by metacognitive judgments and how the accuracy of these judgments is affected by factors like the amount of time between study and judgment and between the judgment and final test, the amount of knowledge learned previously about the items being studied, and the encoding processes performed during learning. I also do more general research on the effects of prior knowledge on learning and memory. Prior to becoming a psychologist, I worked for 10 years in the computer industry and have extensive experience with general-purpose computer programming and computer systems.

Requirements for a Self-Aware Computational System

For a computerized system to be self-aware it must develop a definition or concept of "self." This concept of self would then enable the system to define things and experiences as associated with self or associated with not-self. The definition of self would probably begin with a perception of (and the ability to monitor) its physical boundaries and its physical and internal states. Like very young children (pre-2 years), a system must start by being aware of its physical form, and it must learn its ability to control its form and the form's restrictions. This physical definition of self is not taught, but would be extracted from the experiences (for how, see below).

Then, along with the ability to perceive and represent objects in its environment, it would begin to be able to classify experiences as perceptions of objects in the environment (not self) or experiences occurring within the system's boundaries (self). These internal states and experiences could be computational (e.g., goal states), physical, or perceptual (e.g., red ball).

Over time this initial concept of self, which is relatively crude by comparison with human beings, would give rise to a more elaborate self concept. Although computationally challenging given current technology, for this self concept to emerge I believe that all experiences have to be stored separately (episodic memory), and that patterns have to be able to be extracted (semantic memory) from these episodes to form more general concepts. This extraction of patterns from memory can be automatic or intentional. As a system's states are monitored, some states may "match" with stored episodes (of states) and this match may enhance the processing of certain inputs and inhibit the processing of other inputs. For example, a new experience of a soda can would include the visual information (e.g., color, patterns, size, shape) and the tactile information (e.g., weight, curvature). A second experience of the can would include different, but similar visual experience, and say, the auditory information occurring when it hits the floor. Many of these experiences would occur, shared features would reinforce each other, and a concept of "can" would emerge from these experiences. As a result of this concept, the system may then enhance the fine motor control and monitoring so as not to crush the can when attempting to pick it up. Furthermore, the extracted general concepts then become the information for new episodes, episodes from which future generalizations (more elaborate general concepts) are possible. Thus, new concepts can build on prior concepts.
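A toy illustration of the extraction step just described, in which features shared across stored episodes are promoted into a general concept; the feature representation and the threshold are assumptions made only for the sake of the example.

# Sketch: promote features shared across episodes of "can" into a semantic concept.
episodes = [
    {"shape": "cylinder", "color": "red", "weight": "light", "sound": None},
    {"shape": "cylinder", "color": "silver", "weight": "light", "sound": "clang"},
    {"shape": "cylinder", "color": "red", "weight": "light", "sound": "clang"},
]

def extract_concept(episodes, threshold=0.8):
    concept = {}
    for feature in episodes[0]:
        values = [e[feature] for e in episodes if e[feature] is not None]
        # Keep a feature only if one value recurs in most episodes.
        if values and values.count(values[0]) / len(episodes) >= threshold:
            concept[feature] = values[0]
    return concept

print(extract_concept(episodes))   # e.g. {'shape': 'cylinder', 'weight': 'light'}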

The more elaborate conceptual self would be extracted from the many episodes in which the physical self interacts with things outside of the physical self, and it would be a very elaborate knowledge structure. As this conceptual self further develops, it becomes an "I". The system would be able to describe its experience (to us) of moving a soda can from a desk to a garbage can. It would have an I-based concept of self. The can would be outside of the boundary of self, outside of the concept of self, and it would be classified as not-self. The system and its environment would then constitute two elaborate and separate definitions/concepts, and the self concept evolves. We humans would probably describe/interpret the system as self-aware after the system has developed these two concepts (self, not-self) and the ability to classify experiences/things as different.

Finally, I believe the system would have to have a limited-capacity monitoring system. Maybe the system can monitor only 10 systems at one time. Then the allocation of monitoring resources would have to occur. This would limit the amount of information stored in the episodic memory and the extraction of patterns from episodic memory would be influenced by the allocation of monitoring resources.

In summary, the system must be able to be given some basic abilities/systems (sensory systems, internal state monitoring systems, episodic and semantic memory systems, pattern recognition, etc), it must have the ability to selectively monitor different internal systems, and it must be able to interact with its environment. (...)

Return to Top of Page

John McCarthy

Computer Science Department Stanford University Stanford, CA 94305 jmc@cs.stanford.edu

NOTES ON SELF-AWARENESS (@steam.stanford.edu:/u/jmc/f03/selfaware.tex: begun Sun Oct 26 12:45:55 2003, latexed January 13, 2004 at 5:04 p.m. )

[Editor's note: some of the formulas have been transcribed into plaintext notation, e.g. inverted-A => 'forall', etc.. The original can be viewed here in PDF.]

1 Introduction.

Developing self-aware computer systems will be an interesting and challenging project. It seems to me that the human forms of self-awareness play an important role in humans achieving our goals and will also be important for advanced computer systems. However, I think they will be difficult to implement in present computer formalisms, even in the most advanced logical AI formalisms. The useful forms of computer agent self-awareness will not be identical with the human forms. Indeed many aspects of human self-awareness are bugs and will not be wanted in computer systems. (McCarthy 1996) includes a discussion of this and other aspects of robot consciousness.

Nevertheless, for now, human self-awareness, as observed introspectively, is the best clue. Introspection may be more useful than the literature of experimental psychology, because it gives more ideas, and the ideas can be checked for usefulness by considering programs that implement them. Moreover, at least in the beginning of the study of self-awareness, we should be ontologically promiscuous, e.g. we should not identify intentions with goals. Significant differences may become apparent, and we can always squeeze later. [Note. Some philosophers who emphasize qualia may be inclined to regard self-awareness as a similar phenomenon—in which a person has an undifferentiated awareness of self, like the qualia-oriented notion of the pure sensation of red as distinct from blue. This is not at all what is needed for AI. Rather we study the specific aspects of self and its activity of which it is useful to be aware.]

Some human forms of self-awareness are conveniently and often linguistically expressed and others are not. For example, one rarely has occasion to announce the state of tension in one's muscles. However, something about it can be expressed if useful. How the sensation of blue differs from the sensation of red apparently cannot be verbally expressed. At least the qualia-oriented philosophers have put a lot of effort into saying so. What an artificial agent can usefully express in formulas need not correspond to what humans ordinarily say, or even can say. In general, computer programs can usefully be given much greater powers of self-awareness than humans have, because every component of the state of the machine or its memory can be made accessible to be read by the program.

A straightforward way of logically formalizing self-awareness is in terms of a mental situation calculus with certain observable fluents. The agent is aware of the observable mental fluents and their values. A formalism with mental situations and fluents will also have mental events including actions, and their occurrence will affect the values of the observable fluents. I advocate the form of situation calculus proposed in (McCarthy 2002).

Self-awareness is continuous with other forms of awareness. Awareness of being hot and awareness of the room being hot are similar. A simple fluent of which a person is aware is hunger. We can write Hungry(s) about a mental situation s, but if we write Holds(Hungry, s), then Hungry can be the value of bound variables. Another advantage is that now Hungry is an object, and the agent can compare Hungry with Thirsty or Bored. I’m not sure where the object Hunger comes in, but I’m pretty sure our formalism should have it and not just Hungry. We can even use Holds(Applies(Hunger, I), s) but tolerate abbreviations, especially in contexts.

[Notes. The English grammatical form "I am hungry" has no inevitability to it. French has "J'ai faim", literally "I have hunger", and German has "Es hungert mich", literally "It hungers me". In French the noun "faim", meaning "hunger", is used, whereas in English an adjective "hungry" is used. In logic we have both; I don't see a use for a logical version of the German form.
Holds(Hungry(I), s) might be written if our agent needs to compare its hunger with that of other agents. However, if we use formalized contexts (McCarthy 1993) we can get by with Holds(Hungry, s) in an inner context in which the sentences are about the agent’s self. We won’t use formalized contexts in these notes, but an informal notion of context can avoid some worries. For example, some discussions are carried out entirely in contexts in which the fact that John McCarthy is a professor at Stanford is permanent. However, when needed, this context can be transcended. Likewise there are useful time-limited contexts in which George W. Bush is permanently President of the United States.
In spite of the fact that English has an enormous vocabulary, the same word is used with diverse meanings. I don’t speak of simple homonyms like ”lock on a door” and ”lock of hair”. These can be ruthlessly eliminated from our computer language, e.g. by having words lock1 and lock2. A more interesting example is that one can speak of knowing a person, knowing a fact, and knowing a telephone number. German uses kennen for the first and wissen for the second; I don’t know about the third. In my (McCarthy 1979), ”First order theories of individual concepts and propositions”, I use different words for the different concepts. I suspect that it will be useful to tolerate using the same term in related senses, e.g. using the same word for the bank as an institution and as a building, because too many related meanings will arise.
]
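Returning to the Holds(Hungry, s) proposal above: the practical gain from reifying fluents is that Hungry becomes an ordinary object the program can quantify over and compare. A rough sketch of that idea (the fluent names and the situation record are assumptions made for illustration):

# Sketch: reified fluents, so that a fluent can be the value of a bound variable.
class Fluent:
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

Hungry, Thirsty, Bored = Fluent("Hungry"), Fluent("Thirsty"), Fluent("Bored")

def holds(fluent, situation):
    return fluent in situation["true_fluents"]

s = {"true_fluents": {Hungry, Bored}}

# Because fluents are objects, we can quantify over them and compare them:
active = [f for f in (Hungry, Thirsty, Bored) if holds(f, s)]
print(active)                       # [Hungry, Bored]
print(Hungry is Thirsty)            # False: the agent can compare Hungry with Thirsty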

Our goal in this research is an epistemologically adequate formalism in the sense of (McCarthy and Hayes 1969) for representing what a person or robot can actually learn about the world. In this case, the goal is to represent facts of self-awareness of a system, both as an internal language for the system and as an external language for the use of people or other systems. Basic entities, e.g. automaton states as discussed in (McCarthy and Hayes 1969) or neural states may be good for devising theories at present, but we cannot express what a person or robot actually knows about its situation in such terms.

2 Of what are we aware, and of what should computers be aware?

Humans are aware of many different aspects of their minds. Here are samples of kinds of self-awareness—alas not a classification.

1. Permanent aspects of self and their relations to each other and aspects of other persons. Thus I am human like other humans. [I am a small child, and I am "supposed to do" what the others are doing. This is innate or learned very early on the basis of an innate predisposition to learn it.] [Note. Autistic children may be deficient in this respect.] What might we want an artificial agent to know about the fact that it is one among many agents? It seems to me that the forms in which self-awareness develops in babies and children are likely to be particularly suggestive for what we will want to build into computers.
2. I exist in time. This is distinct from having facts about particular time, but what use can we make of the agent knowing this fact—or even how is the fact to be represented?
3. I don’t know Spanish but can speak Russian and French a little. Similarly I have other skills. It helps to organize as much as possible of a system’s knowledge as knowledge of permanent entities.
4. I often think of you. I often have breakfast at Caffe Verona.
5. Ongoing processes: I am driving to the supermarket. One is aware of the past of the process and also of its future. Awareness of its present depends on some concept of the “extended now”.

Temporary phenomena

6. Wants, intentions and goals: Wants can apply to both states and actions. I want to be healthy, wealthy and wise. I want to marry Yumyum and plan to persuade her guardian Koko to let me.
7. I intend to walk home from my office, but if someone offers me a ride, I’ll take it. I intend to give X a ride home, but if X doesn’t want it, I won’t.
8. If I intend to drive via Pensacola, Florida, I’ll think about visiting Pat Hayes. I suppose you can still haggle, and regard intentions as goals, but if you do you are likely to end up distinguishing a particular kind of goal corresponding to what the unsophisticated call an intention.
9. Attitudes
Attitudes towards the future: hopes, fears, goals, expectations, anti-expectations, intentions; towards future action: predict, want to know, promises and commitments.
Attitudes toward the past: regrets, satisfactions, counterfactuals. I’m aware that I regret having offended him. I believe that if I hadn’t done so, he would have supported my position in this matter. It looks like a belief is a kind of weak awareness.
Attitudes to the present: satisfaction. I see a dog. I don’t see the dog. I wonder where the dog has gone.
There are also attitudes toward timeless entities, e.g. towards kinds of people and things. I like strawberry ice cream but not chocolate chip.
10. Hopes: A person can observe his hopes. I hope it won’t rain tomorrow. Yesterday I hoped it wouldn’t rain today. I think it will be advantageous to equip robots with mental qualities we can appropriately call hopes.
11. Fears: I fear it will rain tomorrow. Is a fear just the opposite of a hope? Certainly not in humans, because the hormonal physiology is different, but maybe we could design it that way in robots. Maybe, but I wouldn’t jump to the conclusion that we should. Why are hopes and fears definite mental objects? The human brain is always changing but certain structures can persist. Specific hopes and fears can last for years and can be observed. It is likely to be worthwhile to build such structures into robot minds, because they last much longer than specific neural states.
12. An agent may observe that it has incompatible wants.

2.1 Mental actions

The companion of observation is action. A theory of self-awareness, i.e. of mental observation, is complemented by a theory of mental action.

(McCarthy 1982) discusses heuristics for coloring maps with four colors. A form of self-awareness is involved. In coloring a map of the United States, the goal of coloring California can be postponed to the very end, because it has only three neighbors and therefore no matter how the rest of the map is colored, there will always be a color for California. Once California is removed from the map, Arizona has only three neighbors. The postponement process can be continued as long as possible. In the case of the US, all states get postponed and then can be colored without backtracking. In general it is often possible to observe that in planning a task, certain subtasks can be postponed to the end. Thus postponement of goals is a mental action that is sometimes useful.
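The postponement heuristic sketched above is easy to state as code: repeatedly set aside any region with fewer remaining neighbours than available colours, then colour the postponed regions in reverse order, where a free colour is guaranteed. The small map below is an invented example, not the full United States, and the fallback when nothing can be postponed is a simplification rather than real backtracking.

# Sketch of the postponement heuristic for k-coloring (k = 4 here).
def color_map(neighbors, colors=("red", "green", "blue", "yellow")):
    remaining = dict(neighbors)                       # region -> set of neighbors
    postponed = []
    while remaining:
        # Postpone any region with fewer remaining neighbors than colors;
        # no matter how the rest is colored, a color will be left for it.
        easy = [r for r, ns in remaining.items()
                if len(ns & remaining.keys()) < len(colors)]
        if not easy:                                  # would need backtracking in general
            easy = [next(iter(remaining))]
        for r in easy:
            postponed.append(r)
            del remaining[r]
    assignment = {}
    for r in reversed(postponed):                     # color in reverse postponement order
        used = {assignment[n] for n in neighbors[r] if n in assignment}
        assignment[r] = next(c for c in colors if c not in used)
    return assignment

neighbors = {"CA": {"OR", "NV", "AZ"}, "OR": {"CA", "NV", "WA"},
             "NV": {"CA", "OR", "AZ"}, "AZ": {"CA", "NV"}, "WA": {"OR"}}
print(color_map(neighbors))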

A human-level agent, and even an agent of considerably lower level, has policies. These are creatable, observable both in their structure and in their actions, and changeable.

Useful actions: decide on an intention or a goal. Drop an intention.

Clearly there are many more mental actions and we need axioms describing their effects.

3 Machine self-awareness

Self-awareness is not likely to be a feasible or useful attribute of a program that just computes an answer. It is more likely to be feasible and useful for programs that maintain a persistent activity.

What kind of program would be analogous to Pat deciding while on the way to his job that he needed cigarettes? See formula (4) below. Here are some possibilities.

It’s not clear what event in a computer program might correspond to Pat’s sudden need for cigarettes. The following examples don’t quite make it.

A specialized theorem-proving program T1 is being operated as a subprogram by a reasoning program T. Assume that the writer of T has only limited knowledge of the details of T1, because T1 is someone else’s program. T might usefully monitor the operation of T1 and look at the collection of intermediate results T1 has produced. If too many of these are redundant, T may restart T1 with different initial conditions and with a restriction that prevents sentences of a certain form from being generated.

An operating system keeps track of the resources a user is using, and checks for attempts to use forbidden resources. In particular it might check for generation of password candidates. In its present form this example may be bad, because we can imagine the checking being done by the programs that implement supervisor calls rather than by an inspector operating with clock interrupts. While the programs called by clock interrupts exhibit a simple form of self-awareness, the applications I know about are all trivial.

The main technical requirement for self-awareness of ongoing processes in computers is an interrupt system, especially a system that allows clock interrupts. Hardware supporting interrupts is standard on all computers today but didn’t become standard until the middle 1960s. [Historical note: The earliest interrupt system I know about was the “Real time package” that IBM announced as a special order instigated by Boeing in the late 1950s. Boeing wanted it in order to control a wind tunnel using an IBM 704 computer. At M.I.T. we also needed an interrupt system in order to experiment with time-sharing on the IBM 704. We designed a simple one, but when we heard about the much better “Real time package” we started begging for it. Begging from IBM took a while, but they gave it to us. The earliest standard interrupt system was on the D.E.C. PDP-1 and was designed by Ed Fredkin, then at Bolt, Beranek and Newman, who persuaded D.E.C. to include it in the machine design. Again the purpose was time-sharing.] The human brain is not a computer that executes instructions in sequence and therefore doesn’t need an interrupt system that can make it take an instruction out of sequence. However, interruption of some kind is clearly a feature of the brain.

With humans the boundary between self and non-self is pretty clear. It’s the skin. With computer based systems, the boundary may be somewhat arbitrary, and this makes distinguishing self-awareness from other awareness arbitrary. I suppose satisfactory distinctions will become clearer with experience.

3.1 Interrupts, programming languages and self-awareness

Consider a persistent program driving a car that is subject to observation and modification by a higher level program. We mentioned the human example of noticing that cigarettes are wanted and available. The higher level program must observe and modify the state of the driving program. It seems that a clock interrupt activating the higher level program is all we need from the hardware.
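A rough sketch of the arrangement described here, using a periodic software timer in place of a hardware clock interrupt; the driving loop, the state variables, and the "cigarette" condition are stand-ins invented for illustration.

import threading, time

# Low-level persistent activity: a "driving" loop with observable state.
state = {"position": 0, "destination": 100, "errand": None}

def drive():
    while state["position"] < state["destination"]:
        state["position"] += 1                 # keep driving toward the destination
        time.sleep(0.01)

# Higher-level program, activated periodically, that observes and modifies
# the driving program's state (the analogue of a clock-interrupt handler).
def monitor():
    if state["errand"] is None and 40 <= state["position"] <= 60:
        state["errand"] = "stop for cigarettes"   # awareness leads to a change of plan
        print("monitor: inserting errand at position", state["position"])
    if state["position"] < state["destination"]:
        threading.Timer(0.05, monitor).start()    # re-arm the periodic "interrupt"

threading.Timer(0.05, monitor).start()
drive()
print("arrived; errand noted:", state["errand"])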

We need considerably more from the software and from the programming languages. A cursory glance at the interrupt handling facilities of C, Ada, Java, and Forth suggests that they are suitable for handling interrupts of high level processes by low level processes that buffer the transfer of information.

Lisp and Smalltalk can handle interrupts, but have no standard facilities.

My opinion, subject to correction, is that self-awareness of the kinds proposed in this note will require higher level programming language facilities whose nature may be presently unknown. They will be implemented by the present machine language facilities.

However, one feature of Lisp, that programs are data, and their abstract syntax is directly represented, is likely to be necessary for programs that examine themselves and their subprograms. This feature of Lisp hasn’t been much used except in macros and has been abandoned in more recent programming languages—in my opinion mistakenly.
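Lisp makes this trivial because a program's abstract syntax is itself a Lisp data structure. A pale Python analogue, using only the standard inspect and ast modules, is a function that reads and walks its own syntax tree; the particular check made here (listing the calls it contains) is just for illustration.

import ast, inspect

def examine_self():
    """A program that inspects its own abstract syntax."""
    source = inspect.getsource(examine_self)
    tree = ast.parse(source)
    # Collect the names of all calls made inside this very function.
    calls = [node.func.attr if isinstance(node.func, ast.Attribute) else node.func.id
             for node in ast.walk(tree) if isinstance(node, ast.Call)]
    return calls

print(examine_self())   # e.g. ['getsource', 'parse', 'walk', 'isinstance', ...]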

4 Formulas

Formalized contexts as discussed in (McCarthy 1993) will be helpful in expressing self-awareness facts compactly.

Pat is aware of his intention to eat dinner at home.

c(Awareness(Pat)) : Intend(I, Mod(At(Home), Eat(Dinner)))
or
Ist(Awareness(Pat), Intend(I, Mod(At(Home), Eat(Dinner))))

Here Awareness(Pat) is a certain context. Eat(Dinner) denotes the general act of eating dinner, logically different from eating Steak7642. Mod(At(Home), Eat(Dinner)) is what you get when you apply the modifier “at home” to the act of eating dinner. I don’t have a full writeup of this proposal for handling modifiers like adjectives, adverbs, and modifier clauses. Intend(I, X) says that I intend X. The use of I is appropriate within the context of a person’s (here Pat’s) awareness.

We should extend this to say that Pat will eat dinner at home unless his intention changes. This can be expressed by formulas like

¬Ab17(Pat, x, s) & Intends(Pat, Does(Pat, x), s) implies (exists s' > s)Occurs(Does(Pat, x), s'). (2) in the notation of (McCarthy 2002). [editorially corrected]

Here’s an example of awareness leading to action.

Pat is driving to his job. Presumably he could get there without much awareness of that fact, since the drive is habitual. However, he becomes aware that he needs cigarettes and that he can stop at Mac’s Smoke Shop and get some. Two aspects of his awareness, the driving and the need for cigarettes are involved. That Pat is driving to his job can be expressed with varying degrees of elaboration. Here are some I have considered.

Driving(Pat, Job, s)

Doing(Pat, Drive(Job), s)

Holds(Doing(Pat, Mod(Destination(Job), Drive)), s)

Holds(Mod(Ing, Mod(Destination(Job), Action(Drive, Pat))), s). (3)

They use a notion like that of an adjective modifying a noun. Here’s a simple sentence giving a consequence of Pat’s awareness. It uses Aware as a modal operator. This may require repair or it may be ok in a suitably defined context.

Aware(Pat, Driving(Job, s), s) & Aware(Pat, Needs(Cigarettes), s) & Aware(Pat, About-to-pass(CigaretteStore, s), s) implies Occurs(StopAt(CigaretteStore), s). (4)

The machine knows that if its battery is low, it will be aware of the fact.

Knows(Machine, (forall s')(LowBattery(s') implies Aware(LowBattery(s' ))), s) (5)

The machine knows, perhaps because a sensor is broken, that it will not necessarily be aware of a low battery.

Knows(Machine, ¬(forall s')(LowBattery(s') implies Aware(LowBattery(s' ))), s) (6)

The positive sentence “I am aware that I am aware . . . ” doesn’t seem to have much use by itself, but sentences of the form “If X happens, I will be aware of Y” should be quite useful.
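A minimal sketch of how conditionals like (4)-(6) might be operationalized: awareness is a set of fluents the agent can read, and rules fire actions when the relevant fluents co-occur. The fluent names and the single rule below mirror the Pat example and are purely illustrative.

# Sketch: awareness as readable fluents, with rules of the form
# "if these things are in awareness, this action occurs" (cf. formula (4)).
awareness = {"Driving(Job)", "Needs(Cigarettes)"}
actions = []

def sense(percept):
    awareness.add(percept)                      # sensing makes the agent aware of a fact
    run_rules()

def run_rules():
    # Analogue of formula (4): driving + needing cigarettes + about to pass the store
    # leads to the action of stopping at the store.
    if {"Driving(Job)", "Needs(Cigarettes)", "AboutToPass(CigaretteStore)"} <= awareness:
        actions.append("StopAt(CigaretteStore)")

sense("AboutToPass(CigaretteStore)")
print(actions)                                  # ['StopAt(CigaretteStore)']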

5 Miscellaneous

Here are some examples of awareness and considerations concerning awareness that don’t yet fit the framework of the previous sections.

I am slow to solve the problem because I waste time thinking about ducks. I’d like Mark Stickel’s SNARK to observe, “I’m slow to solve the problem, because I keep proving equivalent lemmas over and over”.

I was aware that I was letting my dislike of the man influence me to reject his proposal unfairly.

Here are some general considerations about what fluents should be used in making self-aware systems.

1. Observability. One can observe one's intentions. One cannot observe the state of one's brain at a more basic level. This is an issue of epistemological adequacy as introduced in (McCarthy and Hayes 1969).

2. Duration. Intentions can last for many years, e.g. ”I intend to retire to Florida when I’m 65”. ”I intend to have dinner at home unless something better turns up.”

3. Forming a system with other fluents. Thus beliefs lead to other beliefs and eventually actions.

Is there a technical difference between observations that constitute self-observations and those that don't? Do we need a special mechanism for self-observation? At present I don't think so.

If p is a precondition for some action, it may not be in consciousness, but if the action becomes considered, whether p is true will then come into consciousness, i.e. short term memory. We can say that the agent is subaware of p.

What programming languages provide for interrupts?

References

McCarthy, J. 1979. Ascribing mental qualities to machines. In M. Ringle (Ed.), Philosophical Perspectives in Artificial Intelligence. Harvester Press. Reprinted in (McCarthy 1990).
McCarthy, J. 1982. Coloring maps and the Kowalski doctrine. Technical Report STAN-CS-82-903, Dept. of Computer Science, Stanford University, April. AIM-346.
McCarthy, J. 1990. Formalizing Common Sense: Papers by John McCarthy. Ablex Publishing Corporation.
McCarthy, J. 1993. Notes on Formalizing Context. In IJCAI-93.
McCarthy, J. 1996. Making Robots Conscious of their Mental States. In S. Muggleton (Ed.), Machine Intelligence 15. Oxford University Press. Appeared in 2000. The web version is improved from that presented at Machine Intelligence 15 in 1995.
McCarthy, J. 2002. Actions and other events in situation calculus. In B. S. A.G. Cohn, F. Giunchiglia (Ed.), Principles of knowledge representation and reasoning: Proceedings of the eighth international conference (KR2002). Morgan-Kaufmann.
McCarthy, J., and P. J. Hayes. 1969. Some Philosophical Problems from the Standpoint of Artificial Intelligence. In B. Meltzer and D. Michie (Eds.), Machine Intelligence 4, 463–502. Edinburgh University Press. Reprinted in (McCarthy 1990).

Return to Top of Page

Kenneth D. Forbus and Thomas R. Hinrichs

 

Qualitative Reasoning Group, Northwestern University 1890 Maple Avenue, Evanston, IL, 60201, USA

Self-Modeling in Companion Cognitive Systems: Current Plans

Background: We are developing Companion Cognitive Systems, a new architecture for software that can be effectively treated as a collaborator. Here is our vision: Companions will help their users work through complex arguments, automatically retrieving relevant precedents, providing cautions and counter-indications as well as supporting evidence. Companions will be capable of effective operation for weeks and months at a time, assimilating new information, generating and maintaining scenarios and predictions. Companions will continually adapt and learn, about the domains they are working in, their users, and themselves.

The ideas we are using to try to achieve this vision include:

Analogical learning and reasoning: Our working hypothesis is that the flexibility and breadth of human common sense reasoning and learning arises from analogical reasoning and learning from experience. Within-domain analogies provide rapid, robust predictions. Analogies between domains can yield deep new insights and facilitate learning from instruction. First-principles reasoning emerges slowly, as generalizations are created incrementally from examples via analogical comparisons. This hypothesis suggests a very different approach to building robust cognitive software than is typically proposed. Reasoning and learning by analogy are central, rather than exotic operations undertaken only rarely. Accumulating and refining examples becomes central to building systems that can learn and adapt. Our cognitive simulations of analogical processing (SME for analogical matching, MAC/FAC for similarity-based retrieval, and SEQL for generalization) form the core components for learning and reasoning.

Distributed agent architecture: Companions will require a combination of intense interaction, deep reasoning, and continuous learning. We believe that we can achieve this by using a distributed agent architecture, hosted on cluster computers, to provide task-level parallelism. The particular distributed agent architecture we are using evolved from our RoboTA distributed coaching system, which uses KQML as a communication medium between agents. A Companion will be made up of a collection of agents, spread across the CPUs of a cluster. We are assuming roughly ten CPUs per Companion, so that, for instance, analogical retrieval of relevant precedents proceeds entirely in parallel with other reasoning processes, such as the visual processing involved in understanding a user’s sketched input.

Robustness will be enhanced by making the agents “hot-swappable”, i.e., the logs maintained by the agents in operation will enable another copy to pick up (at a very coarse granularity) where a previous copy left off. This will enable an Agent whose memory is clogging up (or crashes) to be taken off-line, so that its results can be assimilated while another Agent carries on with the task. This scheme will require replicating the knowledge base and case libraries as necessary to minimize communication overhead, and broadcasting working memory state incrementally, using a publish/subscribe model, as well as disk logging. These logs will also be used for adaptation and knowledge reformulation. Just as a dolphin sleeps with only half of its brain at a time, our Companions will use several CPUs to test proposed changes by “rehearsing” them with logged activities, to evaluate the quality and performance payoffs of proposed learned knowledge and skills.
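One simple way to picture the hot-swap scheme: each agent appends its working-memory updates to a durable log, and a replacement agent rebuilds its state by replaying that log before carrying on. The log format and agent interface below are assumptions made for illustration, not the Companion architecture itself.

import json

LOG = "agent_state.log"

class Agent:
    def __init__(self):
        self.working_memory = {}

    def assert_fact(self, key, value):
        self.working_memory[key] = value
        with open(LOG, "a") as f:                      # append-only state log
            f.write(json.dumps({"key": key, "value": value}) + "\n")

    @classmethod
    def hot_swap(cls):
        """Start a fresh copy that resumes (coarsely) where the previous one left off."""
        agent = cls()
        try:
            with open(LOG) as f:
                for line in f:                          # replay the predecessor's log
                    entry = json.loads(line)
                    agent.working_memory[entry["key"]] = entry["value"]
        except FileNotFoundError:
            pass
        return agent

a = Agent()
a.assert_fact("current-task", "analyze-sketch-17")
b = Agent.hot_swap()                                    # replacement picks up the task
print(b.working_memory)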

Modeling: We are using the DARPA subset of Cyc KB contents, augmented with our own work on qualitative and spatial representations and reasoning, as the starting point for this effort. Declarative models will be used whenever possible, to enhance opportunities for incremental adaptation and learning. Logs of user interactions will be kept and used for experiments in learning dialogue, user, and task models. Logs of internal operations will be kept for experiments in learning self-models.

Status: Our initial domains for Companion experiments include everyday physical reasoning and tactical decision games. The interface will use our sketching systems, combined with a concept map system, textual inputs, and specialized dialogues. At this writing (1/31/04) a 10 CPU cluster has been installed and the first-cut agent shells are being debugged on it. The bootstrap reasoning facilities and initial domain representations are being assembled from our prior work on projects supported by DARPA, ONR, and NSF. Corpora and examples needed for our domain learning experiments are being selected and formalized.

Issues: Our Companion architecture raises a number of issues for building self-aware systems, including:

• How should resource reallocation decisions be made, given changing demands in terms of interaction and background assignments?
• What contents and level of detail are needed in the agent logs to support efficient hot-swapping?
• How should the agent logs be organized to support effective improvement of operations and self-modeling?

Acknowledgements: This research is supported by DARPA IPTO.

Return to Top of Page

Len Schubert

Computer Science Department , University of Rochester schubert@cs.rochester.edu Fax: (585) 273-4556 http://www.cs.rochester.edu/u/schubert/

My interest in general is in building systems with common sense, with competence in natural language, and with some degree of autonomy/initiative (a motivational system and planning capability). Most of my research since about 1975 has been concerned in one way or another with this general interest. Focal areas have been knowledge representations sufficiently rich to capture everything we can express in words, knowledge organization for efficient associative access, specialist (e.g., taxonomic, partonomic and temporal) inference methods to support a general reasoner, general knowledge acquisition, natural language parsing and interpretation, and efficient planning. Some representative publications are listed at the end.

Self-awareness has always seemed to me to be an essential feature of an agent with human-like common sense and NL capabilities, and one that will be viewed as useful and adaptive when helping with the accomplishment of "everyday" tasks involving people (including their attitudes and plans), things, events, situations, and time. Such an agent should know what its own capabilities and limitations are, and should have a sense of its own "personal" history, and in particular its recent interactions with users, and an awareness of the current situation that enables coherent, natural, context-dependent dialogue. With these ideas in mind, I wrote an NSF proposal in 1999, whose summary read in part as follows:

The Conscious Machine (COMA) Project

The goal of this project is to equip an intelligent agent with a model of itself - some knowledge of its own characteristics, knowledge, abilities, limitations, preferences, interests, goals, plans, past and current activities and experiences, etc. A degree of self-awareness, in this sense, would be intrinsically interesting in a computerized agent, as well as potentially useful. The intrinsic interest lies in the fact that self-awareness is something still largely lacking in machines, yet is regarded as a very significant aspect of intelligent, deliberate activity at least in humans. The COMA project will provide a vehicle for exploring the representational, architectural and operational requirements for overt self-awareness (i.e., overtly displayed in interactions with users), and the role of self-awareness in intelligent agency and communication. The practical significance of the project lies in its potential for showing how to endow specialized applications or agents with a realistic sense of what they can and cannot do, what they know and don't know, and what they have done, are doing, and intend to do. Such an ability would make specialized agents more life-like, user-friendly and transparent, and would relieve users of the need to learn and keep in mind a lot of functional and contextual details about them. The existing intelligent system infrastructure at U. of Rochester will provide a decisive head start on this project. The EPILOG system, which will be used to build a self-aware core agent, offers a `language-like' comprehensive knowledge representation with associative access, input-driven and goal-driven inference, and support by multiple specialists. The core agent will then be integrated with the TRIPS system, a robust, speech- and GUI-based interactive planning assistant...

The central themes of the proposal concerned (1) the notion of "well-founded self-awareness" (the notion that self-knowledge should be in a logical form that allows its use by the general inference machinery, and that can be overtly expressed in language); (2) methods of instilling knowledge about knowledge categories (e.g., knowledge about the appearance, behaviors, etc., of a thing); (3) methods of instilling knowledge about self and the world; and (4) keeping track: instilling knowledge of past and present eventualities.

The proposal included the following subsection:

Sample session -- a preliminary attempt To further clarify what we mean by overtly displayed self-awareness in artifacts, we ... show a simple preliminary example of the sort of behavior we have in mind. The example is necessarily at a rather primitive level (given the immature state of the art in this area), but it is real. It is an actual transcript of a brief session with the PI's EPILOG system, a general knowledge-based system aimed at supporting commonsense inference and language understanding (e.g., Schubert & Hwang, 1999). Inputs to the system are currently supplied using a representation called EL (Episodic Logic) that is tightly and systematically related to natural language, and outputs of the system are expressed in EL and, for simple outputs, in English (produced by a rudimentary English generator). The only editing done to the transcript is the deletion of the verbose `stream of thought' between inputs and outputs, and insertion of explanatory glosses and comments, signalled by `;;'.

EPI(9): (new-session)
;; The user initiates a new session

EPI(10): (query '(wh a1 (a1 action-type) (E e1_ep (now during e1) ((epilog-system do a1) ** e1))))
;; The user asks, "What are you doing right now?", i.e.,
;; "What type of action are you doing such that `now' is during
;; your doing it?"

Answer(s): ((KA (L VAR1 (VAR1 CONVERSE-WITH YOU))))
;; The system answers that it is conversing with the user
;; KA is an action-reifying operator and L is lambda

EPI(11): (query '(wh x (x = (system-knowledge-about me nil))))
;; The user asks, "What do you know about me?" (without topical constraint)

Answer(s):
;; (EL representations omitted)
;; That I am conversing with you;
;; That you are conversing with me;
;; That you asked me what I know about you;
;; That you asked me what I was doing;
;; That you are human.

EPI(12): (query '(wh x (x = (system-knowledge-about (K train) nil))))
;; The user asks "What do you know about trains?" (without topical constraint)

Answer(s): (($ 'SET (THAT ((K (PLUR TRAIN)) MEANS-OF-TRANSPORTATION))))
;; The system answers that trains are a means of transportation

EPI(15): (query '(you human))
;; The user asks, "Are you human?"

Answer: NO
No, EPILOG-SYSTEM is a computer-system.
;; The longer reiteration of the answer is produced by the English
;; generator, which also makes use of the inference system's proof

EPI(16): (query '(you believe (that (not you human))))
;; The user asks, "Do you believe that you are not human?"

Answer: YES
Yes, EPILOG-SYSTEM is a computer-system.

EPI(17): (assertion '(me = aaron))
;; The user asserts "I am Aaron"
;; The system responds by printing inferences (omitted)

EPI(18): (query '(wh x (x = you)))
;; The user asks, "Who are you?"

Answer(s): (EPILOG-SYSTEM)
;; This is the list of answers satisfying the question, i.e.,
;; the system answers "I am the EPILOG-system"

EPI(19): (query '(wh x (x = me)))
;; The user asks, "Who am I?"

Answer(s): (AARON)
;; The system answers, "You are Aaron"

EPI(20): (query '(wh x (you able x)))
;; The user asks, "What kinds of things can you do?"

Answer(s):
((KA (L VAR2 (E VAR1 (VAR1 HUMAN) (VAR2 CONVERSE-WITH VAR1)))))
((KA (L VAR4 (E VAR3 (VAR3 HUMAN) (E VAR1 (VAR1 TRANSPORTATION-PROBLEM) (VAR4 HELP VAR3 (KA (L VAR2 (VAR2 SOLVE VAR1)))))))))
;; The system answers
;; "I can converse with humans"
;; "I can help solve transportation problems"

The above dialogue is not the result of hacks or templates, but the result of adding some purely declarative knowledge, some `generators' that posit propositions in response to input (e.g., that a human is at the other end, whenever a dialogue is initiated), and a simple belief specialist that allows EPILOG to perform ``simulative inference" about the beliefs of other agents (Kaplan & Schubert 2000).

The NSF proposal was not funded (neither the general thrust, nor the facetious acronym, seemed to sit well with the reviewers). So this effort has had to remain on the back burner, but I am keenly interested in resuming work on it. The stated themes of the workshop -- representations/architecture for self-awareness, kinds of self-knowledge, and useful behaviors facilitated by self-awareness -- seem to me just right. My contribution would be on specific representational and reasoning issues.

As far as the workshop is concerned, I view my potential contribution as a response to John McCarthy's call for comments on/additions to his list of the kinds of self-awareness that an AI system would need as part of a human-like capacity for commonsense reasoning and interaction. I think his list covers most of what is needed, and his emphasis on basing self-awareness on *sentences* about one's body, mental faculties, abilities, intentions, personal history, etc., is crucial (and is reflected in theme (1) of my proposal). Some commentators on self-awareness use that notion in a weak sense, namely access by one part of a system to the contents of another part. That seems trivially satisfied by any CPU+memory+I/O system. The stronger, more interesting sense is that of being able to bring a fact about oneself into the general "reasoning mill", allowing its use for inference, verbal communication, planning, etc.

This point impinges on McCarthy's call for observable, not just obeyable goal stacks. I would understand "observable" in the strong sense of enabling formation of sentences about it that can be reasoned with. Winograd's SHRDLU (followed by many later systems with "explanation" abilities) was able to observe its goal stack and "report" on it, but not reason with the observed facts. For instance, I believe it would have been stumped by the second question in the exchange

FRIEND: Why did you do that?
SHRDLU: TO CLEAR OFF THE BLUE BLOCK.
FRIEND: But wasn't it clear already?

Concerning the notion of knowing whether one knows the value of a certain term, I think an explication in terms of inferrability is on the right track, but I don't think that the inferrability of a fact by a practical reasoner should be modelled as the existence of a proof of that fact from the reasoner's KB. The human practical reasoning mechanism is surely very complex and employs highly specialized subsystems (e.g., for taxonomies and for spatio-temporal conceptualization). Therefore to say I can't infer something is to say something about that complex mechanism and the knowledge at its disposal. No realistic notion of knowing or being-able-to-infer can neglect the mechanism. This doesn't necessarily make inference of (not) knowing or (not) being able to infer very complex. How do I know that I don't know a certain telephone number? By running my belief-retrieval mechanism (which certainly does more than just check my explicit beliefs, but by definition has a fast turn-around time) and finding it comes up empty-handed. (In his on-line paper, McCarthy refers to interpreting not-knowing p as failure to prove p as a 'perfunctory approach'. I think he is right if failure-to-prove is implemented in some arbitrary way, such as the method of negation-as-failure in Prolog that he mentions. But it is shown in Kaplan & Schubert, AIJ 120(1) that a conceptualization of (not) knowing in terms of a self-query algorithm leads to a very interesting and nontrivial theory of belief/knowledge, sharing some properties with traditional belief logics but, like Konolige's deduction model, avoiding logical omniscience.) How do I know that I can't infer the number? Well, I may not know this immediately, for instance in a case where someone has given me a set of clues about or an encryption of the number. However, I imagine that the first thing we do when we try to determine whether we can infer something that we don't know is to initiate the inference attempt. If this immediately reaches an impasse, i.e., nothing relevant comes to mind that we can reason with (I think that's a detectable condition), we conclude that we can't infer it. If we don't immediately reach an impasse, then as long as we haven't reached a positive or negative answer, we don't know whether we can infer it. (Note that we might reach a negative answer by a route other than an impasse, e.g., we know an encrypted version, and after pondering the encryption scheme, realize that we're trying to solve a nontrivial instance of an NP-hard problem.) McCarthy's speculative answer involves model-building (relativized to an assumption that the rest of the KB has a model), and I certainly think model building (and model elimination) are important inference strategies, but I think we need theories of knowing and being-able-to-infer that abstract from particular strategies and apply to classes of self-query and inference algorithms.
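The self-query picture sketched here can be made concrete with a retrieval procedure that has a fixed, short time budget: "I don't know X" is the report that the procedure came back empty-handed within that budget, and "I can't infer X" is the report that the inference attempt immediately hit an impasse. The knowledge base, budget, and impasse test below are illustrative assumptions, not the Kaplan & Schubert model.

import time

KB = {"phone(alice)": "555-0199"}          # explicit beliefs
RULES = []                                  # no inference rules in this toy KB

def retrieve(query, budget=0.05):
    """Bounded self-query: answer, or report not knowing, within the time budget."""
    deadline = time.time() + budget
    if query in KB:
        return ("KNOWN", KB[query])
    # Inference attempt: if nothing relevant comes to mind, that impasse is
    # itself detectable, and we conclude we cannot infer the answer.
    relevant = [r for r in RULES if query in r]
    while relevant and time.time() < deadline:
        pass                                # (a real reasoner would chain on the rules here)
    if not relevant:
        return ("NOT_KNOWN", "no relevant beliefs or rules came to mind")
    return ("UNDECIDED", "ran out of time without an answer")

print(retrieve("phone(alice)"))             # ('KNOWN', '555-0199')
print(retrieve("phone(bob)"))               # ('NOT_KNOWN', ...)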

Some types of self-knowledge I'm inclined to add to McCarthy's list (motivated by the proposal above) are the following:

(i) Knowledge about knowledge categories (theme (2) of the proposal). Suppose that someone who is going to pick you up at the airport but doesn't know you asks over the phone, "What do you look like?" To answer that, you need to selectively retrieve "appearance propositions" about yourself (claims about your height, facial features, eye-wear, hair style and color, etc.). In other contexts you might be asked about your personality, job skills, hobbies, personal history, political attitudes, etc. Knowing (explicitly) that certain propositions one believes about oneself are appearance propositions, etc., is a kind of higher-level self-knowledge. Mind you, the same sort of knowledge-categorizing knowledge is also needed for knowledge about others. There's really nothing special about this kind of self-knowledge, except that we know more about ourselves than anyone else.

(ii) Knowledge about how one comes to believe things, through perception, reasoning, and being told (or reading). For instance, if you ask me "Were there any telephone calls while you were in your office?", a "no"-answer would be based in part on my belief that if I am within earshot of a phone and am conscious, and the phone rings, I'll hear it. (I also believe that if I was conscious over some time period, I'll recall that.) Furthermore, if I hear it (and it wasn't too long ago) I'll remember it. So, since I don't remember a call though I was in earshot of the phone and conscious, there was none. Similarly (to borrow a famous example from McCarthy and Hayes), I know that if I look up a telephone number in a telephone book (or a fact in an encyclopedia, etc.), I'll then know it; if I know a person, and encounter that person face-to-face, I'll recognize that person (well, most of the time!); etc. McCarthy mentions "Observing how [an agent] arrived at its current beliefs" and "... answering `Why do I believe p'", but seems to have in mind record-keeping about pedigree, rather than general knowledge about how one comes to know things.

(iii) Summary self-knowledge. I once heard an interviewee on radio being asked, "What was your childhood like?" In response, the interviewee said things like "My childhood was fairly normal, though a bit turbulent since my parents moved many times. Much of it was spent in the country, and I very much enjoyed playing in the fields and woods", etc. I find it quite mysterious how we come up with such summary self-knowledge (which by the way is nothing like the summaries produced by current "summarization" programs). How does one "find out", from the particulars of one's personal history, and other people's histories, that one had "a fairly normal childhood" (etc)? Again, this summarization or abstraction ability extends to knowledge other than self-knowledge.

References
L.K. Schubert and M. Tong, ``Extracting and evaluating general world knowledge from the Brown Corpus", Proc. of the HLT-NAACL Workshop on Text Meaning, May 31, 2003, Edmonton, Alberta, pp. 7-13.
A. Gerevini, L. Schubert, ``DISCOPLAN: an Efficient On-line System for Computing Planning Domain Invariants", 6th European Conference on Planning (ECP-01), Toledo, Spain, September 12-14, 2001.
L.K. Schubert, ``The situations we talk about", in J. Minker (ed.), Logic-Based Artificial Intelligence, (Kluwer International Series in Engineering and Computer Science, Volume 597), Kluwer, Dordrecht, 2000, pp. 407-439.
A.N. Kaplan and L.K. Schubert, ``A computational model of belief", Artificial Intelligence 120(1), June 2000, pp. 119-160.
L.K. Schubert and C.H. Hwang, "Episodic Logic meets Little Red Riding Hood: A comprehensive, natural representation for language understanding", in L. Iwanska and S.C. Shapiro (eds.), Natural Language Processing and Knowledge Representation: Language for Knowledge and Knowledge for Language, MIT/AAAI Press, Menlo Park, CA, and Cambridge, MA, 2000, pp. 111-174.
A. Gerevini and L.K. Schubert, "Efficient algorithms for qualitative reasoning about time", Artificial Intelligence 74(2), pp. 207-248, 1995.
L.K. Schubert, "Monotonic solution of the frame problem in the situation calculus: An efficient method for worlds with fully specified actions," in H. Kyburg, R. Loui and G. Carlson (eds.), Knowledge Representation and Defeasible Reasoning, Kluwer, Dortrecht, pp. 23-67, 1990.
L.K. Schubert and F.J. Pelletier, "Generically speaking, or, using discourse representation theory to interpret generics", in G. Chierchia, B. Partee, and R. Turner (eds.). Properties, Types, and Meanings II. Reidel, Dortrecht, pp. 193-268, 1989.

Return to Top of Page

Lokendra Shastri

ICSI, Berkeley; Homepage

My research focuses on understanding the neural basis of knowledge representation, reasoning, and memory, and I will draw upon this work to address the themes of the workshop. In this context, the work on computational modeling of episodic memory formation via cortico-hippocampal interactions seems particularly relevant; it examines the neural basis of knowledge that is accessible to consciousness, that includes source information (here "source information" refers to how and where the information was acquired) and autobiographical knowledge, and that, to a large extent, forms the basis of our sense of self.

There is a good deal of evidence that in the human brain, different sorts of knowledge are represented in different memory systems. While these systems are interdependent and interact with one another, they have distinct anatomical loci and functional properties. In some cases, there is a remarkable match between the architecture and local circuitry of a brain region and the functional properties of the memory system encoded by the region.

Knowledge encoded in only some of the memory systems is accessible to conscious processes. Moreover, only some of the consciously available knowledge is explicitly linked to individual experiences, specific sources, and spatio-temporal contexts. It is this last sort of knowledge that is required for self-awareness.

Distinct types of memory/knowledge in the human brain (the ones in bold letters play an important role in self-awareness):

Declarative memory (explicit, accessible to conscious thought)

a. Episodic memory (memories of specific instances, situated in a particular spatio-temporal context; often includes source knowledge; includes autobiographical memory)
b. Prospective memory (ability to remember a future intention, i.e., remember to do something specific at a particular time and place)
c. Working memory (short-term, low capacity memory of items that are currently active and that are being manipulated)
d. Semantic memory

i. Entities
ii. Concepts (categories)
iii. Generalizations and abstractions over many instances (incl. “rules”)
iv. Statistical knowledge

Utile memory (utilities associated with situations)

a. Specific (?) general (?)

Procedural memory

a. motor schemas (walking, playing the piano, riding a bike). This knowledge is encapsulated and not *directly* available to the rest of the system; non-declarative/implicit.

Specific aspects of a system that it should be aware of

1. The system should be capable of monitoring its own actions and evaluating its own performance

2. The system should be aware of its capabilities and limitations (How much weight can I lift? Can I solve differential equations? Can I ski downhill?)

3. The system should be aware of its beliefs and the sources of some of its critical beliefs. The specification of a source need not be overly specific; it may suffice to know the nature of the source (I read it in an encyclopedia)

4. The system should be aware of its own desires and intentions and the grounding of its intentions in utility (an intention is grounded in some condition from which it derives utility. The intention of walking to the bakery derives its utility from “satisfaction of hunger” - walking to the store will enable me to buy a muffin, eating the muffin will satisfy my hunger, and the satisfaction of hunger carries utility.) A good explanation of a system’s intentions must be grounded in conditions that carry utility.

Recursion: In my opinion, the recursion implicit in the use of ‘self’ is unlikely to pose a major technological problem, since a shallow bound on the depth of such recursion is likely to suffice in dealing with most real-world situations. Let me draw an analogy from natural language. While it is possible to construct sentences with arbitrary levels of center-embedding, center-embedding of depth greater than three renders most sentences incomprehensible (and hence such sentences are uncommon). Similarly, while one can construct examples that require arbitrarily deep nested structures of know(know(believe(want …))) etc., it is unlikely that arbitrarily deep nested structures would occur in real-world situations encountered by a system.
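The shallow-recursion point can be illustrated with an evaluator for nested attitude expressions that simply refuses to reason beyond a small fixed depth; the representation and the cutoff below are assumptions chosen only to make the point concrete.

MAX_NESTING = 3     # shallow bound on know/believe/want nesting

def depth(expr):
    """Nesting depth of expressions like ('know', 'A', ('believe', 'B', 'p'))."""
    return 1 + depth(expr[2]) if isinstance(expr, tuple) else 0

def consider(expr, attitude_base):
    # Refuse to reason about attitudes nested beyond the bound; within the
    # bound, just look the whole expression up in the agent's attitude base.
    if depth(expr) > MAX_NESTING:
        return "not considered (nested too deeply)"
    return expr in attitude_base

base = {("know", "A", ("believe", "B", "raining"))}
print(consider(("know", "A", ("believe", "B", "raining")), base))                         # True
print(consider(("know", "A", ("believe", "B", ("want", "C", ("know", "D", "p")))), base)) # refused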

Return to Top of Page

Michael Cox

Wright State University

My interests include computational introspection and representations and the role they play in learning and object-level performance. My doctoral research at Georgia Tech examined the claim that introspection is necessary for effective learning in cognitive systems. The research evaluated a natural language understanding system with and without a portion of its introspective apparatus. My postdoctoral research at Carnegie Mellon involved mixed-initiative planning between active human participants and a nonlinear state-space planner. Because the planner is based upon derivational analogy, the system can present to the user a trace of its own reasoning in the form of the rationale used to make planning decisions. This justification structure can then be used by humans when making subsequent planning decisions. Since I joined the faculty at Wright State University, I have continued both lines of research. Additionally I was co-chair of the 1995 AAAI Spring Symposium on Representing Mental States and Mechanisms(ftp://ftp.cc.gatech.edu/pub/ai/symposia/aaai-spring-95/home_page.html).

An unaddressed theme the workshop might add to its agenda is that of evaluation. One of the problems of research into self-aware systems is the lack of an empirical basis to claims made with respect to reflection, metareasoning, introspection, metaknowledge, and related topics. Introspection is no panacea and indeed has been shown to lead to degraded performance in both humans and machines.

IPTO also faces an evaluation problem in its investigation of self-aware systems. Unfortunately DARPA wishes to evaluate self-aware systems in terms of strict object-level performance, so evaluation is indirect. The result will be research teams that fine-tune their performance system and include reflective attributes after the fact. In any case, self-awareness will not be the main focus, and I believe that research dollars may be wasted. Evaluation methodologies might instead include analyses of the contribution due to self-awareness itself. However, how best to develop such methodologies remains an open question. If self-awareness is not clearly operationalized, how can we measure it?

Selected Publications:

Cox, M. T. (1996a). An empirical study of computational introspection: Evaluating introspective multistrategy learning in the Meta-AQUA system. In R. S. Michalski & J. Wnek, (Eds.), Proceedings of the Third International Workshop on Multistrategy Learning (pp. 135-146). Menlo Park, CA: AAAI Press / The MIT Press.
Cox, M. T. (1996b). Introspective multistrategy learning: Constructing a learning strategy under reasoning failure. (Tech. Rep. No. GIT-CC-96-06). Doctoral dissertation, Georgia Institute of Technology, College of Computing, Atlanta.
Cox, M. T., & Ram, A. (1999). Introspective multistrategy learning: On the construction of learning strategies. Artificial Intelligence, 112, 1-55.
Lee, P., & Cox, M. T. (2002). Dimensional indexing for targeted case-base retrieval: The SMIRKS system. In S. Haller & G. Simmons (Eds.), Proceedings of the 15th International Florida Artificial Intelligence Research Society Conference (pp. 62-66). Menlo Park, CA: AAAI Press.
Veloso, M. M., Pollack, M. E., & Cox, M. T. (1998). Rationale-based monitoring for continuous planning in dynamic environments. In R. Simmons, M. Veloso, & S. Smith (Eds.), Proceedings of the Fourth International Conference on Artificial Intelligence Planning Systems (pp. 171-179). Menlo Park, CA: AAAI Press.

Return to Top of Page

Michael Witbrock

Cycorp Inc 3721 Executive Center Drive, Suite 100 Austin, TX 78731-1615 witbrock@cyc.com Tel: 1-512-342-4000

Self Aware Computer Systems
Note 2004-02-2 from Cycorp Inc.

After two decades of work, the Cyc knowledge base is a substantial artifact, with more than a hundred thousand concepts represented by more than two million assertions and tens of thousands of rules. However, representing and manipulating a body of knowledge similar to that held by a human adult remains a daunting challenge. It is not clear how large the requisite knowledge base will be, but it is clear that producing it entirely by human effort would be a Herculean enterprise. It is our hypothesis, in the Cyc project, that a knowledge base of about the current size of Cyc can provide an effective inductive bias for semi-automated and then automated acquisition of the requisite knowledge in much the same way as evolution has provided each of us with the inductive bias required to gain knowledge from experience and from interactions with human culture.

Early experiments in the DARPA RKF project showed that it was possible for Cyc to acquire knowledge from cooperative human interlocutors, by using its existing knowledge to formulate new knowledge-seeking goals. It was not, however, aware of what it was doing, and this showed in its behaviour; the Kraken RKF system has two modes of operation - it asks knowledge-elicitation questions in either FIFO or LIFO fashion. In short, it lacks impulse control. We attribute at least some of this inadequacy to a lack of self awareness - the system does not know why it is asking questions; it does not know how it is asking questions; it doesn't substantially distinguish the current discourse context from the general knowledge base; it is only dimly aware of the person with whom it is communicating.

These gaps substantially reduce the effectiveness of the knowledge formation process. They prevent the system from deciding whether asking a particular question best satisfies its goals – it has no goals. They prevent the system from choosing the best way to approach asking the question – the question has no context. They prevent the system from framing the question in terms of the preceding and subsequent interactions – it doesn’t really know what they are or will be. They prevent the system from recognizing that its questions may be annoying the user – it does not really know what its questions are, or the effect they might have.

We believe that one path towards giving Kraken (and other Cyc-based systems) a useful degree of self awareness is to represent, and reason about, the system and its operation in the same way that the system reasons about the knowledge in the KB. We are taking a first step in this direction by adopting a discourse modeling system based on the Cyc microtheory structure that represents and stores user utterances, and the currently licensed interpretations for those utterances, in CycL, in the KB. This record will include a CycL representation of the fact that the user input some text to the system, and when. It will include a representation of the possible parses for the text, and of the fact that they were produced by a particular parser. It will include a formal representation of the information the system presents to the user, and of the interface that is used to present it. In short, it will include a formal representation of the entire interaction, subject to formal reasoning. We are also in the process of integrating the Bugzilla bug-tracking system with Cyc, with the intention of allowing the system to reason about its operational status. These small steps should allow us, for example, to have the system apply automated planning to structure its interactions with users, asking only “Yes/No” questions or relying on a parameterized NL system if it knows that a bug is currently afflicting its ability to use the parsers, or deciding that there are no questions important enough to ask the user if it is three o’clock in the morning.
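
The following Python fragment is only a rough illustration of that kind of bookkeeping (in Python rather than CycL, with every name invented here): interaction events are recorded in a discourse context, and the system consults a record of its own known faults before choosing how to ask the next question.

    from datetime import datetime

    class DiscourseContext:
        """Toy stand-in for a microtheory-like record of an interaction."""
        def __init__(self, user):
            self.user = user
            self.events = []          # what was said, when, and by whom
            self.open_bugs = set()    # known faults in the system's own components

        def record(self, speaker, content, produced_by=None):
            self.events.append({
                "speaker": speaker, "content": content,
                "produced_by": produced_by, "time": datetime.now(),
            })

        def choose_question_mode(self):
            # A self-aware choice: fall back to Yes/No questions if the
            # system knows its parser is currently broken.
            if "parser" in self.open_bugs:
                return "yes-no"
            return "free-text"

        def should_ask_now(self):
            # Don't bother the user in the middle of the night.
            return 8 <= datetime.now().hour <= 22

    ctx = DiscourseContext(user="analyst-1")
    ctx.record("user", "Tell me about aluminum smelting.")
    ctx.open_bugs.add("parser")
    print(ctx.choose_question_mode(), ctx.should_ask_now())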

Under the ARDA-funded Ginko and DARPA Seedling BUTLER projects, we are doing initial work on fully automated knowledge acquisition from text. Such efforts are even more critically dependent on the system having a representation of its knowledge acquisition goals, methods, and limitations. The system must know, for example, that it is pointless to persist in pursuing a goal of acquiring a kind of knowledge for which passage extraction appears to succeed, but parsing never does. Nor should it pursue knowledge that is so infrequently used in inference that acquiring and storing sufficient quantities of it would exceed system processing or storage capacity. (It might, for example, occasionally be useful to know the street address of small businesses; acquiring the street address of every small business, prospectively, by reading the web, would exceed any reasonable processing capacity.)
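
A minimal sketch of such a knowledge-acquisition goal filter, with made-up fields and numbers (nothing here reflects the actual Cyc machinery):

    # Illustrative filter over knowledge-acquisition goals; all fields and
    # thresholds are assumptions invented for this sketch.
    def worth_pursuing(goal):
        # Pointless if passages can be found but never parsed into assertions.
        if goal["extraction_rate"] > 0 and goal["parse_rate"] == 0:
            return False
        # Not worth storing if expected use in inference doesn't repay the cost.
        expected_benefit = goal["uses_per_month"] * goal["value_per_use"]
        storage_cost = goal["instances"] * goal["cost_per_instance"]
        return expected_benefit > storage_cost

    goals = [
        {"name": "street addresses of all small businesses",
         "extraction_rate": 0.9, "parse_rate": 0.7,
         "uses_per_month": 2, "value_per_use": 1.0,
         "instances": 10_000_000, "cost_per_instance": 0.001},
        {"name": "capitals of countries",
         "extraction_rate": 0.9, "parse_rate": 0.8,
         "uses_per_month": 500, "value_per_use": 1.0,
         "instances": 200, "cost_per_instance": 0.001},
    ]
    for g in goals:
        print(g["name"], worth_pursuing(g))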

There do not appear to be immediate threats of crippling representational difficulties in pursuing this path of moving from simple program execution to planning for and reasoning about program behaviour. What does present a challenge is reasoning efficiently enough to build responsive systems in this way, especially if they would benefit from using common sense knowledge in reasoning about appropriate system behaviour.

Of course, the Kraken RKF system, being a quasi-conversational system that relies heavily on background knowledge, is a good illustration of the virtues of self-aware systems, and the deficits of programs that lack awareness. The same observations apply to other systems, however. A planetary rover that represented, and could reason about, its sub-systems, for example, might notice that its flash memory file load had exceeded its highest previous level, and institute strict post-condition tests on the operation of subsystems, to determine why the memory was filling and to predict when the capacity (also represented) would be exceeded.
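
The rover example, reduced to a few illustrative lines (hypothetical telemetry and a crude linear extrapolation; a real system would reason over much richer models):

    # Sketch of the rover example: track flash-memory load over time, flag a
    # new high-water mark, and extrapolate when capacity would be exceeded.
    def analyse_memory(samples, capacity):
        """samples: list of (time_in_hours, bytes_used), assumed roughly linear."""
        high_water = max(used for _, used in samples[:-1])
        t0, u0 = samples[0]
        t1, u1 = samples[-1]
        rate = (u1 - u0) / (t1 - t0)           # bytes per hour
        alerts = []
        if u1 > high_water:
            alerts.append("new high-water mark: enable post-condition tests")
        if rate > 0:
            hours_left = (capacity - u1) / rate
            alerts.append(f"capacity predicted to be exceeded in {hours_left:.1f} h")
        return alerts

    samples = [(0, 100), (10, 180), (20, 260), (30, 350)]
    print(analyse_memory(samples, capacity=500))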

Summary: In order to succeed in automating knowledge acquisition in Cyc, it will be important to represent and reason about this knowledge acquisition process, and the means that are used to pursue it. Although KA seems to us to be an ideal context in which to investigate self aware systems, such systems are far more widely applicable. The primary barriers to effective self awareness, of the sort described, seem to be less representational than computational: how shall we efficiently reason about the large quantities of knowledge required? These issues are similar to the open issues involved in large scale reasoning in general, and common sense reasoning in particular.

Return to Top of Page

Michael L. Anderson

Institute for Advanced Computer Studies University of Maryland College Park, MD 20742 anderson@cs.umd.edu

As an undergraduate at the University of Notre Dame I majored in an integrated natural science program, a modified pre-medical curriculum designed for those planning graduate school in an interdisciplinary subject. I followed this up with a Ph.D. in philosophy from Yale University, where I concentrated on the philosophy of mind and cognitive science, and studied in particular the role of real-world behavior in fixing the referents of abstract mental symbols. Since 2001, after time spent both as a college professor, and as a programmer with two different scientific organizations, I have been a post-doctoral research associate with Don Perlis’ Active Logic, Metacognitive Computation and Mind research group at the University of Maryland.

My work at the University of Maryland has concentrated on demonstrating the performance improvements—especially in the area of perturbation tolerance, the ability of a system to smoothly handle unexpected or anomalous situations—that can be achieved by enhancing AI systems with metacognitive monitoring and control components. We have shown, in several very different projects, that the often relatively simple expedient of getting a system to monitor itself and its performance, note any anomalies, assess the anomalies, and guide into place a targeted response, can significantly improve the perturbation tolerance of AI systems.

For instance, one long-running project has involved enhancing natural language human-computer interaction systems with metacognitive monitoring and control, including the ability to produce and interpret metalanguage, that is, dialog about the ongoing linguistic interaction or its components. We have designed and built systems that can note and resolve apparent contradictions in user intentions (by itself or with help from the user), as well as realize that a problem in interpreting a user’s command is the result of the system not knowing a certain word, and ask about, and learn, the meaning of the unknown word.

More recently we have shown that the perturbation tolerance of some machine learning techniques (such as Q-learning, a kind of reinforcement learning) can be improved with metacognitive monitoring and control. In fact, we showed that just noticing a problem, and taking the relatively drastic step of throwing out a learned policy and starting over from scratch, can improve performance in response to high-degree perturbations. Preliminary results from ongoing work suggest (not surprisingly) that a more nuanced assessment of the perturbation, and more careful choice of response, can improve performance for a much broader range of perturbations.
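
A minimal sketch of that "drastic" response, not the authors' actual system (the window size, threshold and reset policy are invented): a monitor watches a sliding window of reward, and if it collapses relative to its baseline, the learned Q-table is simply discarded.

    import random
    from collections import defaultdict, deque

    class MetaQLearner:
        """Tabular Q-learning plus a crude metacognitive monitor."""
        def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.actions, self.alpha, self.gamma, self.epsilon = actions, alpha, gamma, epsilon
            self.q = defaultdict(float)                 # (state, action) -> value
            self.recent = deque(maxlen=100)             # sliding window of rewards
            self.baseline = None

        def act(self, state):
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def learn(self, s, a, r, s2):
            best_next = max(self.q[(s2, a2)] for a2 in self.actions)
            self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])
            self.recent.append(r)
            self._monitor()

        def _monitor(self):
            # Note an anomaly: average reward has fallen far below its baseline.
            if len(self.recent) < self.recent.maxlen:
                return
            avg = sum(self.recent) / len(self.recent)
            if self.baseline is None:
                self.baseline = avg
            elif avg < 0.5 * self.baseline:             # perturbation detected
                self.q.clear()                          # drastic response: start over
                self.baseline = None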

In addition to such technical work, we have been trying to move forward on theoretical issues, as well. The most relevant recent work is probably “The roots of self-awareness”, forthcoming from Phenomenology and the Cognitive Sciences, in which Don Perlis and I try to sketch a ground-up and computationally feasible approach to self-reference in autonomous agents. This is a very exciting research area, in which contributions promise to have far-reaching effects.

(...)

Some relevant publications:

“Enhancing Q-learning with metacognitive monitoring and control for improved perturbation tolerance” Michael L. Anderson, Tim Oates and Don Perlis. Under review.
“The roots of self-awareness” Michael L. Anderson and Don Perlis. Phenomenology and the Cognitive Sciences 4, forthcoming.
“Embodied Cognition: A field guide” Michael L. Anderson. Artificial Intelligence 149(1): 91-130, 2003.
“Representations, symbols and embodiment” Michael L. Anderson. Artificial Intelligence 149(1): 151-6, 2003.
“Representations of Dialogue State for Domain and Task Independent Meta-Dialogue” David Traum, Carl Andersen, Yuan Chong, Darsana Josyula, Michael O'Donovan-Anderson, Yoshi Okamoto, Khemdut Purang and Don Perlis. Electronic Transactions on AI 6, 2002.
“Talking to Computers” Michael L. Anderson, Darsana Josyula and Don Perlis. Workshop on Mixed Initiative Intelligent Systems, IJCAI-2003.
“Time Situated Agency: Active Logic and Intention Formation” Michael L. Anderson, Darsana Josyula, Yoshi Okamoto, and Don Perlis. Cognitive Agents Workshop, German Conference on Artificial Intelligence, September 2002.
“The Use-Mention Distinction and its Importance to HCI” Michael L. Anderson, Yoshi Okamoto, Darsana Josyula, and Don Perlis. EDILOG: Sixth Workshop on the Semantics and Pragmatics of Dialog, September 2002.
“Seven Days in the Life of a Robotic Agent” Waiyian Chong, Michael O'Donovan-Anderson, Yoshi Okamoto and Don Perlis. GSFC/JPL Workshop on Radical Agent Concepts, January 2002, NASA Goddard Space Flight Center, Greenbelt, MD, USA
“Handling Uncertainty with Active Logic” M. Anderson, M. Bhatia, P. Chi, W. Chong, D. Josyula, Y. Okamoto, D. Perlis and K. Purang. Proceedings of the AAAI Fall Symposium, 2001

Return to Top of Page

Owen Holland

Department of Computer Science University of Essex Wivenhoe Park CO3 3NR UK

I began working on machine consciousness in 1999 when I was appointed Senior Principal Research Scientist at the Cyberlife Research Institute, Cambridge, UK. I continued while working as visiting faculty on ONR/DARPA and ARO/DARPA projects at Caltech in 2000 (Co-PI since 1998), and submitted two (unfunded) project proposals for machine consciousness to NSF jointly with Christof Koch and Rod Goodman from Caltech. In 2001 Christof, Rod, myself, and David Chalmers (Director of the Center for Consciousness Studies, University of Arizona) organised the first international workshop on machine consciousness at the Banbury Center, Cold Spring Harbor Laboratory (funded by the Swartz Foundation – see http://www.swartzneuro.org/banbury.asp). One of the follow-up activities was an edited book, Machine Consciousness (ed. O. Holland, Imprint Academic, 2003, http://www.imprint.co.uk/books/holland.html). After Caltech I went to Starlab in Brussels, Belgium, in 2001 as Chief Scientist for the Machine Consciousness Project, and then on to the University of Essex where in 2003 I was awarded a grant of over $900,000 for a project on ‘Machine consciousness through internal modelling’ (UK Engineering and Physical Science Research Council, PI O. Holland). I am also a member of the UK government Foresight Cognitive Systems Project.

My broader background is diverse. Since 1972, I have held faculty appointments in Psychology, Electronics Engineering, Electrical Engineering, and Computer Science, and I have also worked in private research laboratories and in industry.

Proposed contribution

Let’s be honest: The existence proof for self-awareness, and for the claimed benefits of self-awareness, is the human being, and self-awareness in humans is rooted (perhaps inextricably) in what we call consciousness. I propose to offer two perspectives to the meeting:

(1) If self-awareness in computers is only achievable through the creation of full-blown machine consciousness (and we cannot yet be sure that this is not the case) then we should think very carefully before entrusting the control of military action to any such system, because human consciousness exhibits a wide range of undesirable functional characteristics. (In addition, it is far from clear exactly what functional benefits drove the evolution and development of consciousness in humans.) The science writer Tor Norretranders put it quite succinctly:

“Consciousness is a peculiar phenomenon. It is riddled with deceit and self-deception; there can be consciousness of something we were sure had been erased by an anaesthetic; the conscious I is happy to lie up hill and down dale to achieve a rational explanation for what the body is up to; sensory perception is the result of a devious relocation of sensory input in time; when the consciousness thinks it determines to act, the brain is already working on it; there appears to be more than one version of consciousness present in the brain; our conscious awareness contains almost no information but is perceived as if it were vastly rich in information. Consciousness is peculiar.” (Norretranders T. [1998] The User Illusion: Cutting Consciousness down to Size. Trans. J. Sydenham. Allen Lane, Penguin Press.)

(2) At a more technical level, I will present an analysis of the problems of an autonomous mobile robot charged with executing a long-term mission in a hostile, uncertain, and dynamic environment, and will show that the development of an adequate scheme for the selection and control of action may have to involve the development of separate but interacting internal models of relevant aspects of the robot (I prefer to call this the Internal Agent Model [IAM] rather than the self-model) and relevant aspects of the environment (the world model). I will discuss the nature of these models, and the kind of architecture in which they must be embedded, showing the relationships between the IAM and self-awareness, and making the case that many of the apparent problems of consciousness may also occur within these systems. I will also deal with some key differences between an embodied system such as a robot, and an unembodied system such as a computer, and the likely effects of these differences on the nature, contents, and functional benefits of any self-awareness.

Return to Top of Page

Push Singh

 

MIT

(...) I've been trying to formulate an ontology of "reflective critics" that see problems in deliberations, for example:

- Credit was assigned to one action for producing an outcome, when the outcome was really produced by another action.
- Actions are failing, and the system realizes that preconditions it had assumed for those actions did not in fact hold.
- Several methods seem to apply to the current problem, but the system has failed to decide which to select.
- While formulating a plan of action, the system realizes that the situation had changed and the problem had taken care of itself.
- The system has had only a few experiences dealing with this problem.
- The system had expected a particular outcome from a given action, and in fact a different outcome had ensued because it interacted with an object that had previously not been noticed.
- The system had been depending on certain conditions to hold for a period of time, but in fact those conditions held only more briefly.
- The system's memory of an event was revealed not to have been an accurate description of the original happening.

It seems to me an important problem to determine how to represent such reflective critics. Expressing these criticisms requires a suitable ontology of mental objects, situations, and events (such as the mental situation calculus [John McCarthy] describe[s] in [his] consciousness paper.) Some initial thoughts about how to represent reflective critics are in my paper here: There is also some discussion of reflective critics in chapter 7 of Marvin's new book: Figure 4 in this upcoming AI magazine article is also relevant, as it sketches an architecture with not one but several reflective layers.(...)
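
As a purely illustrative sketch (not the representation proposed above), two of the critics in the list can be written as pattern-matchers over a trace of mental events; all field names here are invented:

    # Toy representation of two reflective critics, as pattern-matchers over
    # a trace of mental events.
    def expectation_violation_critic(trace):
        """Fires when an action's actual outcome differs from the expected one."""
        return [e for e in trace
                if e["type"] == "action"
                and e.get("expected") is not None
                and e.get("outcome") != e["expected"]]

    def inexperience_critic(trace, problem, min_cases=3):
        """Fires when the system has dealt with this kind of problem only rarely."""
        seen = sum(1 for e in trace if e["type"] == "problem" and e["kind"] == problem)
        return seen < min_cases

    trace = [
        {"type": "problem", "kind": "stuck-door"},
        {"type": "action", "name": "push-door", "expected": "open", "outcome": "closed"},
    ]
    print(expectation_violation_critic(trace))
    print(inexperience_critic(trace, "stuck-door"))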

Return to Top of Page

Raghu Ramakrishnan

raghu@cs.wisc.edu Computer Sciences Department University of Wisconsin, Madison, USA

I’ve worked on database systems, logic programming, and data mining over the past 20 years. My most relevant current research is the EDAM project on exploratory data analysis and monitoring, which is funded by an NSF ITR and is a collaboration with atmospheric aerosol chemists at Carleton College and UW-Madison. Links and more information can be found off my web page at www.cs.wisc.edu/~raghu.

1. Introduction

Let me begin by making it clear that I have not thus far thought explicitly about self-aware systems, and that I have put this position paper together after receiving your invitation. Initially, I wasn’t sure I wanted to undertake research in another, radically different, direction. My first reaction was that the vision of a completely general-purpose theory and architecture for self-aware systems was a tumbleweed, always just out of grasp and heading deeper into the desert. I must confess that this continues to be my current belief, and so I might well represent the contrarian point of view at the workshop. Nonetheless, in thinking about this, I came to the conclusion that this is an excellent long-term vision in that it idealizes a strong thread of activity that I find emerging in one systems area after another: self-managing systems. If we can chart out one or more paths towards this distant goal of general-purpose self-aware systems, and identify achievable milestones along these paths, we could see real progress. In particular, I believe the low-hanging fruit lies in self-aware systems aimed at specific tasks; a general theory and architecture will perhaps emerge, and the attempt to abstract will in any case deepen our understanding of the underlying issues.

 

2. Themes

I think the central issues are what the system’s objectives are, what information it should be aware of, and the reasoning capabilities that connect the two. The first two themes in the CFP capture the second point (what information the system should be aware of), but not the other two. (The discussion paragraph that follows does mention the user’s knowledge and goals, but I think these deserve more emphasis.) In fact, my main point is that rather than focus solely on the theory and architecture of a general purpose self-aware system, we should identify specific tasks for which we should aim to build self-aware systems, and work towards the long-term vision of general purpose systems by abstracting from these more specific systems.

Where does the specificity come from? From the objectives, or tasks, that the system is intended to address. To me, this is the starting point. What information to represent, how, and how to reason with it, are all issues that flow from the strategic objectives of the self-aware system. I would therefore like to see some discussion of potential “killer applications” of self-aware systems. I propose self-managing systems of various flavors as the first wave of such killer apps. By extending the scope of the systems that are managed, and what we mean by “manage”, I think we will see a continuum emerge, leading from current systems to general self-aware systems.

The rest of this short statement describes ideas specific to self-aware data analysis and monitoring systems. I offer the following tentative desiderata for a Self-Aware Data Analysis and Monitoring system (and I carefully abstain from an acronym that brings to mind erstwhile dictators):

• Should allow users to describe their strategic objective, rather than just the tactical queries or individual analysis steps one at a time. A strategic objective is implicit in a history of user activity, and a long term goal would be to learn this, in addition to enabling users to describe it.

• Should be able to observe, abstract, and represent history, in terms of changes in observed data and user activity, and in the context of (currently believed) strategic objectives.

• Should be able to learn from history in order to enhance performance or to implement capabilities that augment the user’s efforts to reach strategic goals. In particular:

o Should be able to learn what available data is most relevant, and what missing (but obtainable) data would be most useful, and be intelligent in gathering data and reasoning strategically.

One might say that these criteria are not specific to data exploration systems. While that may be true, some points of emphasis should be noted. First, I think explicitly in terms of a user’s long term or strategic objectives; this is one way to narrow the scope of what the system is expected to be aware of. As I noted earlier, I think building self-aware systems with specific objectives is in itself a hard challenge, and fully general systems are a longer-term vision. Second, I’m limiting my attention to the components that direct data gathering, representation, and analysis. I’m excluding consideration of mechanisms for gathering data; for example, I’m excluding interesting challenges such as how to capture sensory perceptions.

Depending on how we interpret the terms in my characterization, it could be argued that any number of existing systems either do or do not satisfy these criteria. For example, is an adaptive structure such as a B+ tree index self-aware? Is a database system with a component that observes query and update traces and automatically tunes the selection of indexes self-aware? This illustrates my point that there is a continuum from existing systems through ongoing research on self-managing systems to the long-term vision of an overarching theory and architecture for self-aware systems in general. In the next section, I briefly describe a couple of examples of issues that are in the spirit of self-aware systems, but clearly fall beyond the capabilities of current systems and require us to develop new ways to think about the problem.

3. Examples of Research Challenges in Self-Aware Mining and Monitoring

3.1 Subset Mining

Exploratory data analysis is typically an iterative, multi-step process in which data is cleaned, scaled, integrated, and various algorithms are applied to arrive at interesting insights. Most algorithmic research has concentrated on algorithms for a single step in this process, e.g., algorithms for constructing a predictive model from training data. However, the speed of an individual algorithm is rarely the bottleneck in a data mining project. The limiting factor is usually the difficulty of understanding the data, exploring numerous alternatives, and managing the analysis process and intermediate results. The alternatives include the choice of mining techniques, how they are applied, and to what subsets of data they are applied, leading to a rapid explosion in the number of potential analysis steps.

An important direction for research is how to provide better support for multi-step analyses. One approach that we are pursuing in the context of atmospheric aerosol data analysis in the EDAM project at Wisconsin is called subset mining. As the name suggests, the goal is to identify subsets of data that are “interesting”, or have interesting relationships between them. Initially, we want to devise ways to enable users to specify what they consider interesting, in terms of atmospheric science; ultimately, we’d like to infer what is interesting.

We illustrate the subset mining paradigm through an example taken from the application domain that we are currently investigating, analysis of atmospheric aerosols. Consider a table of hourly wavelength-dependent light absorption coefficients, with one row for each reading, and another table of ion-concentration readings, also measured hourly. Consider the following query, which is of great interest in atmospheric studies: Are certain ranges of light absorption levels strongly correlated to unusually high concentrations of particulate metal ions? This is an example of a subset mining query, and the complexity arises from the lack of restriction on the ranges of light absorption coefficients and ion concentrations to be considered. We must consider all possible light absorption coefficient ranges, and for each, carry out (at a minimum) a SQL query that calculates corresponding ion concentrations. In addition, like typical “data mining” questions, this query involves inherently fuzzy criteria (“strong correlation”, “unusually high”) whose precise formulation gives us some latitude. As another example, if we have identified clusters based on the location of each reading, we can readily refine the previous query to ask whether there are such correlations (for some ranges of reactive mercury levels) at certain locations.

To summarize, there are three main parts to a subset mining query:

(1) A criterion that generates several subsets of a table,

(2) A correspondence—e.g., a relational expression—that generates a subset of a second table for each of these subsets, and

(3) A measure of interestingness for the second subset (that indirectly serves as a similar measure for the original subset).

To see why subset mining queries are especially useful for integrated analysis of multiple datasets, using a combination of mining techniques, observe that Steps (1) and (3) could both be based on the results of (essentially any) mining techniques, rather than just simple SQL-style selections and aggregation. Further, we can analyze a single table by simply omitting Step (2); intuitively, this allows us to compare an arbitrary number of “groups” in the data to see if they are correlated to, or deviate from, other groups in terms of high-level properties. Examples include the dominant sources of certain elements in observed particles or the correlation between temperatures and light-absorption of atmospheric aerosols. Thus, subset mining offers a promising approach to the broader challenge of describing and optimizing multi-step mining tasks.
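
A toy sketch of this three-part structure, instantiated on the light-absorption / ion-concentration example (the data layout, thresholds and interestingness measure are all invented for illustration, not EDAM code):

    # Generic three-part skeleton for a subset mining query.
    def subset_mine(table_a, generate_subsets, correspond, interestingness):
        results = []
        for description, subset_a in generate_subsets(table_a):
            subset_b = correspond(subset_a)                     # step (2), optional
            results.append((description, interestingness(subset_b)))
        return sorted(results, key=lambda r: r[1], reverse=True)

    # Toy data: (hour, light-absorption coefficient) and hourly metal-ion conc.
    absorption = [(h, 0.1 * h) for h in range(24)]
    ions = {h: (5.0 if h >= 18 else 1.0) for h in range(24)}

    def ranges(table, width=6):                                  # step (1)
        for lo in range(0, 24, width):
            rows = [(h, v) for h, v in table if lo <= h < lo + width]
            yield f"hours {lo}-{lo + width - 1}", rows

    def matching_ions(rows):                                     # step (2)
        return [ions[h] for h, _ in rows]

    def frac_unusually_high(values, threshold=4.0):              # step (3)
        return sum(v > threshold for v in values) / len(values) if values else 0.0

    print(subset_mine(absorption, ranges, matching_ions, frac_unusually_high))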

A significant technical challenge is to optimize subset mining, at least for several particular instances (i.e., query classes). This will require research into cost estimation for the data mining techniques used as components of the subset mining instance, as well as ways to “push” abstract constraints derived from the subset-generation component into the interestingness-measure component (and vice-versa). While there are parallels in database query optimization and evaluation for relational algebra operations, we expect that these issues will have to be tackled on a case-by-case basis for different data mining techniques when they are used to instantiate the subset mining framework.

3.2 Phenomenological Mining

The goal of identifying strategic analysis objectives that go beyond the application of specific techniques such as clustering or finding associations is also connected to a proposal by John McCarthy that called for “phenomenological mining”. For example, a frequent itemset analysis might tell us that beer and diapers are purchased together relatively often, but a phenomenological analysis should tell us that a customer who purchases both is likely to be a new father. McCarthy’s “phenomena” can be seen as a declarative representation of patterns that are of strategic interest to the user, and we can frame the problem in terms of how to (semi-automatically) compose existing analysis techniques to discover such patterns.

Our current work on analyzing streams of mass spectra offers an interesting concrete example. Recently developed “time-of-flight” mass spectrometers are capable of generating spectra describing individual atmospheric aerosol particles. The spectra reflect the presence of one or more compounds, some of which are of interest to scientists and drawn from an underlying lexicon of compounds, and some of which are background noise and might perhaps be outside the available lexicon. The labeling task is to identify what compounds of interest are present in a given particle, given the mass spectrum (a plot of intensity versus the ratio of mass to charge) for the particle. As a result of labeling, each particle is associated with a set of compounds, which is essentially the market-basket abstraction of a customer purchasing a set of items together. One can, of course, now apply standard frequent itemset analysis on the resulting stream of labeled particles. In addition, one can do phenomenological analysis by representing interesting phenomena as mass spectra, and labeling each particle with the phenomena that characterize it, rather than the compounds that it contains!
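
A toy sketch of the labeling-plus-itemset idea (the lexicon, peak values and overlap rule are invented; real spectra would of course need real signal processing):

    from itertools import combinations
    from collections import Counter

    # Label each particle's spectrum with the compounds (or phenomena) whose
    # reference peaks it contains, then count co-occurrences of labels.
    lexicon = {"sulfate": {96, 97}, "nitrate": {62}, "soot": {12, 24, 36}}

    def label(spectrum_peaks, min_overlap=1):
        return {name for name, peaks in lexicon.items()
                if len(peaks & spectrum_peaks) >= min_overlap}

    particles = [{96, 62, 40}, {96, 97, 12, 24}, {62, 12, 36}]
    baskets = [label(p) for p in particles]

    pair_counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(basket), 2):
            pair_counts[pair] += 1
    print(baskets)
    print(pair_counts.most_common())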

3.3 Intelligent Data Gathering

Database systems have looked extensively at how to manage, index, retrieve, and process data. There is no work that I am aware of that considers what data to gather. This is a fundamental new challenge in monitoring systems, and it is intimately connected to self-awareness: almost by definition, intelligent data gathering relies upon an awareness of what we seek to monitor.

Consider the following example. Epidemiologists are deeply concerned with the impact of environmental pollutants on quality of life and life expectancy. They’d like to understand the correlations between where people have been and, for example, trends in cancer or mortality. Knowing where people have been, at the appropriate level of granularity to respect privacy, can be accomplished in a variety of ways. But how do we maintain a continuous model of the ambient conditions in, say, Chicago over the next five years? We can put a variety of data gathering devices to work at fixed points, on dedicated mobile devices such as geosynchronous satellites or a graduate student driving a truck with instruments (we’ve done this, it’s not a joke!), or opportunistically on mobile devices with other primary objectives (such as city buses!).

The question to consider is this: Given certain objectives (e.g., our desire to reason about a certain collection of expected trajectories for the people in our study), and certain resource constraints (a fixed budget, inability to re-route city buses, unreasonable demands from graduate students for time off to sleep), what is the best schedule for deploying our data gathering resources?
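
One crude way to make the question concrete is a greedy budgeted-coverage heuristic, sketched below with invented trajectories, placements and costs; it is meant only to illustrate the shape of the optimization problem, not to propose a solution:

    # Greedy sketch: choose data-gathering placements that touch the most
    # expected trajectories within a budget ("covered" here simply means at
    # least one chosen placement intersects the trajectory's cells).
    trajectories = {"t1": {"A", "B"}, "t2": {"B", "C"}, "t3": {"C", "D"}, "t4": {"D"}}
    placements = {"fixed-A": ({"A"}, 3), "bus-route-BC": ({"B", "C"}, 5),
                  "truck-D": ({"D"}, 4)}

    def greedy_schedule(budget):
        covered, chosen, remaining = set(), [], dict(placements)
        while remaining:
            def gain(item):
                cells, cost = item[1]
                newly = sum(1 for needed in trajectories.values()
                            if needed & cells and not (needed & covered))
                return newly / cost
            name, (cells, cost) = max(remaining.items(), key=gain)
            if cost > budget or gain((name, (cells, cost))) == 0:
                break
            chosen.append(name)
            covered |= cells
            budget -= cost
            del remaining[name]
        return chosen

    print(greedy_schedule(budget=8))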

Return to Top of Page

Ricardo Sanz

Universidad Politécnica de Madrid   Ricardo.Sanz@etsii.upm.es

I am professor of Systems Engineering and Automatic Control at the Automatic Control Department of the Universidad Politécnica de Madrid, Spain, where I lead the recently created Autonomous Systems Laboratory. I obtained a degree in electrical engineering and a PhD in robotics and artificial intelligence from the same university in 1987 and 1991, respectively. My research interests are focused on complex control systems, control system architecture, intelligent control, software-intensive controllers, complex controller engineering processes, modular control, object-based distributed realtime systems and philosophical relations between control and intelligence. My research experience has been focused on large scale distributed intelligent control and applications in large process control, and on software architectures and technologies for distributed intelligent control systems (cement, oil, chemical and electrical utilities). I am a senior member of the Control Systems Society of the IEEE, chairman of the IFAC Technical Committee on Computers and Control and chairman of the OMG Control Systems Working Group. I am also an associate editor of the IEEE Control Systems Magazine in charge of topics at the intersection of control, software and artificial intelligence.

Conscious Control Systems

The evolution observed in controller engineering shows a trend towards complexity due to the strong requirements imposed on new controllers (increased performance, robustness, maintainability, etc.). These requirements can be seen in any type of complex control system, from flight management systems and integrated X-by-wire automotive controllers to plant-wide strategic controllers.

Complexity is becoming a barrier to the engineering of these controllers, and while some technologies have been successfully employed (for example, the use of hot replication in fault-tolerant control), some of the problems still lack an appropriate solution.

At the same time, we have seen an increase in the use of reflection and integration technologies in this type of controller for different purposes. Controllers are exploiting perceptions and representations of themselves to perform new tasks and manage increased levels of uncertainty (for adaptation, self-configuration, fault tolerance, system-wide optimisation, preventive maintenance, etc.). The examples are countless. When these representations and the associated action generation systems are integrated, as the control requirements demand, a primitive form of self emerges.

To my understanding, this trend is leading to a vision of complex, integrated, reflective controllers that converges with what is being proposed by recent neuroscience models of biological intelligence and consciousness.

I perceive an extremely interesting convergence between theoretical artificial intelligence, cognitive robotics, human neuroscience and the practice of real control systems. This is particularly remarkable because all these disciplines pursued very different objectives, and yet they are now arriving at the same conclusions.

I’m interested in the workshop because “selves” are appearing in complex controllers, and I want a scientific understanding of them so as to be able to engineer selves for real controllers if they can provide new functionalities (as appears to be the case).

In relation to this need, I have recently been involved in several activities:

• The organisation (with Aaron Sloman and Ron Chrisley) of the workshop “Models of Consciousness” held in September last year (see here).
• The organisation (with Igor Aleksander and Mercedes Lahnstein) of the workshop “Machine Consciousness: Complexity Aspects” held in September last year (see here).
• The promotion of research funding from the European Commission and the preparation of the “Self-aware Control Systems” white paper for the future Bioinspired Intelligent Information Systems Call (here).

My contribution to the workshop will be:
• The presentation of the state of the art in complex control systems related to the topics of the workshop.
• The presentation of a control-based model of system awareness and consciousness.
• The presentation of the “Self-aware Control Systems” focus point of the Bioinspired Intelligent Information Systems call, in search of trans-Atlantic cooperation.
• The presentation of a rationale for a scientific theory of scalable awareness applicable to the construction of intelligent embedded systems.

Return to Top of Page

Richard Scherl

Computer Science Department Monmouth University West Long Branch, N.J. 07764 email: rscherl@monmouth.edu phone: 732-571-4457

(...) One of my research interests has been in the logical specification of agents and the development of logic-based agent programming languages. A couple of relevant publications are as follows:

Richard Scherl and Hector Levesque, "Knowledge, Action, and the Frame Problem." Artificial Intelligence, pp. 1-39, vol. 144, 2003.
Yves Lesperance, Hector Levesque, Fangzhen Lin, and Richard Scherl, "Ability and Knowing How in the Situation Calculus." In Studia Logica, vol. 66(1), pp. 165-186, 2000.
Hector Levesque, Ray Reiter, Yves Lesperance, Fangzhen Lin, and Richard Scherl.
"GOLOG: A Logic Programming Language for Dynamic Domains." In the Journal of Logic Programming, vol 31(1-3), May 1997.

Some relevant questions that come to mind are: How does self awareness differ from (if at all) or relate to meta-reasoning? What language features are useful for writing self-aware programs? A potential research project of interest to me is the development of a reflective or self-aware version of GoLog. It is not clear that I would find time to make significant progress on this prior to the workshop, though.

Return to Top of Page

Richard P. Gabriel

Distinguished Engineer, Sun Microsystems, Inc.

My CV and bio can be found at my website. I have been a researcher in artificial intelligence systems, programming languages (including exotic programming languages), parallel computing, object-oriented languages, complexity and biologically inspired computing, programming environments, and large systems. In the 1970s I designed and implemented a multi-layer, introspective programming system with some of the characteristics talked about here. I believe the unique perspective I could bring to the workshop stems from my diverse background and interests.

Self-Sustaining Systems

I have a small laboratory at Sun Microsystems which is just embarking on a research program on what we call “self-sustaining systems.” A self-sustaining system is any combination of hardware and software which is able to notice errors, inefficiencies, and other problems in its own behavior and resources, and to repair them on the fly. We imagine that such systems will be quite large and that they might exhibit both hardware and software failures. This means that some of the failures will be intermittent or based on coincidences—of placement, of timing, and of naming.

We plan to explore a number of approaches, some inspired by good engineering principles, some inspired by biological mechanisms and mechanisms or principles from other disciplines, and some deriving from a reëxamination of accepted practices in design, coding, and language design. For example, we expect to examine the following:

• there is an existing practice in high-reliability systems, especially in telecom software and equipment, in real-time and embedded systems, and in distributed real-time and fault-tolerant systems, which uses resource estimation, checkpointing, and rollback—among other techniques—to ensure proper operation. We plan to examine the literature, mostly in the form of software patterns and pattern languages, to determine what can be used for self-sustainability.

• most programming languages are designed for running (correct) algorithms embodied in correct programs, the design center being the easy expression of precise calculations. But this is perhaps contrary to the needs of some parts of self-sustaining systems, which must be able to express the observation of running programs and alter their running behavior, and programs written to do this must be highly tolerant of errors of their own. Such robustness of an observational layer can perhaps be achieved by operating only on “safe” data or by being declarative or reactive. We expect to look at programming constructs aimed at expressing temporal reasoning about and operations on programs more intuitively than is done now with low-level synchronization primitives.

• our intuition is that one key to self-sustaining programs is an observational, programmable layer which not only observes, interrupts, and alters, but which consists of decaying persistent memory. This intuition is hard to express. We imagine that this secondary layer is “soupy” and uses gradients to convey and combine information in an almost geometrical manner. For example, suppose there is a distributed system in which patterns of information flow vary over time. If a particular data structure is subject to frequent repair, it might be correlated with information flow which might persist or repeat, or which might be transitory. The traces of these events, if kept forever, might prove too complex to analyze; but with repetition, the pattern of causation might be reinforced and the amount of data to plow through would be greatly reduced. Another example is a tightly packed multiprocessor which can suffer heat-related failures when heavy-duty computations take place on it. Imagine an observational layer that is monitoring heat and memory or computation failure rates. If the observational memory were to hold data about the heat or failure rate and treat it like dissipating heat or dissolving chemicals, then adjacent memory and computational elements would “inherit” some of the characteristics of the problems and computations would not be scheduled onto them as frequently, thereby reducing errors by moving computations away from overheated areas.
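
A minimal sketch of that decaying observational layer (the graph, decay and spread parameters are invented; this is only meant to make the intuition concrete):

    # Failures add "heat" to a node, heat leaks a little to neighbours and
    # decays over time, and the scheduler prefers the coolest node.
    class HeatMap:
        def __init__(self, nodes, neighbours, decay=0.9, spread=0.2):
            self.heat = {n: 0.0 for n in nodes}
            self.neighbours = neighbours
            self.decay, self.spread = decay, spread

        def record_failure(self, node, amount=1.0):
            self.heat[node] += amount
            for n in self.neighbours.get(node, []):
                self.heat[n] += amount * self.spread      # geometric "leakage"

        def tick(self):
            for n in self.heat:
                self.heat[n] *= self.decay                # old evidence fades away

        def coolest(self):
            return min(self.heat, key=self.heat.get)

    hm = HeatMap(["n0", "n1", "n2"], {"n0": ["n1"], "n1": ["n0", "n2"], "n2": ["n1"]})
    hm.record_failure("n0")
    hm.tick()
    print(hm.coolest())   # schedule the next computation on the coolest node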

Part of this intuition is that the observational layers need to be simpler to program than the real computational layers, so that correctness of code running there is easier to show or if possible, irrelevant. Another part is that the stuff of which this layer is made must be fundamentally more robust or at least less fragile than the tightly wound stuff that makes up the base layer. One way to accomplish this is to make this layer not computationally complete, but highly limited and based on rudimentary operations.

• after reading the work of Martin Rinard and his students at MIT, we believe that repairing deviations from “acceptable” behavior is a good approach for certain types of robustness. One way to think of this is that the supposedly tight bounds on what would be considered correct behavior are often artificially restrictive. For example, in triangle-based rendering, algorithms in which degenerate triangles cause division by 0 can probably skip expensive checking for degeneracy and use a quick exception-handling mechanism to return a fixed value, since all that might happen is that a handful of pixels in a gigantic image might have a slightly wrong color. What is acceptable to a self-sustaining system might be much more diverse (or forgiving) than a single execution path.
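
The triangle example, recast as a few lines of illustrative Python (the colour model and numbers are placeholders, not anyone's actual renderer):

    # Instead of guarding every division, catch the failure and return an
    # acceptable default colour: a few pixels slightly wrong, at most.
    def shade(triangle_area, light=0.8, default_colour=(128, 128, 128)):
        try:
            intensity = light / triangle_area     # blows up for degenerate triangles
            level = int(min(255, 255 * intensity))
            return (level, level, level)
        except ZeroDivisionError:
            return default_colour

    print(shade(2.0))    # normal case
    print(shade(0.0))    # degenerate triangle: repaired, not crashed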

• many programming languages shortchange exception handling, treating exceptions, well, like exceptions. In living systems, a large fraction of the mechanisms at various levels are concerned with preservation, conservation, and repair, while in an exceptionally careful software system, perhaps 5% of the mechanisms are devoted to such things. We believe that many exception handling systems are not only unsuitable for programming self-sustainability, but the mechanisms themselves break modularity, causing additional errors due to programming mistakes.

Self-sustainability and self-awareness appear to be related, perhaps only at the implementation level, but perhaps conceptually as well. A self-sustaining system performs some degree of reflection and can alter its own behavior based on what it sees, and this is what a self-aware system does.

 

Return to Top of Page

Richard Thomason

University of Michigan, Dept. of Philosophy http://www.eecs.umich.edu/~rthomaso/

(...) I'm primarily a logician, but I have strong secondary interests in computer science, linguistics and philosophy, and have worked in all three of these areas. I haven't published much that bears directly on the topic of this workshop, but last fall I read John McCarthy's position statement on consciousness and commented on it. Since then I have thought (a little) about the issues John raises about how to formalize the reasoning processes that could lead us to conclude that we don't know something. I think I have something to say about these issues and could have more to say by the time of the workshop. The following summarizes the ideas.

1. I don't think it's a good idea to formalize reasoning about knowledge (or its absence) using syntactic representations of propositions. This leads directly to Goedel numbering and the semantic paradoxes, and in my opinion needlessly confuses the core reasoning issues by entangling them in insolubilia. And there is no direct linguistic or semantic evidence for such an approach to propositional attitudes. If nonclosure of knowledge under logical consequence is wanted, then it is best to treat propositions as unanalyzed primitives.

2. On the other hand, modal approaches to knowledge and belief -- even ones like Levesque's that explicitly formalize "only knowing" -- do not seem to provide a very helpful account of the reasoning that could justify conclusions of the form "I don't know P" (NKCs).

3. If you are a Bayesian, nothing could be easier than a NKC. Consult your probability function and select the propositions with values less than 1. But I take it, we are concerned here with the pretheoretic, commonsense reasoning that would be used in setting up a Bayesian model of some moderately small domains, or that may be an essential component of any very large and complex domain.

4. Conditionals of the form -KP-->-KQ provide one way to justify such conclusions. Symmetry considerations can justify such conditionals. E.g., to know that a, b, and c are arranged in a line, but not how they are arranged, is to know a large number of conditionals such as "If abc is possible for all I know, so is bac".

5. Reflecting on the conditions under which I could have evidence for a conclusion is another way of justifying NKCs. If we can form generalizations about all the possible justifications that are available to us, we may be able to show that P can have no such justification, or -- more generally -- that we could only have a justification of P if we had one of Q. This would support a conditional -KQ-->-KP. Maybe there are some propositions which we would remember if we knew them. (Moore's account of how he knows he doesn't have a brother is a case in point.) If Remember(P) is a performable test whose failure guarantees -KP, we have a potentially rich source of NKCs.
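
Restating points 4 and 5 schematically (the arrangement predicate Arr and the Remember test are placeholder names for this restatement, not a proposed formalization):

    \begin{align*}
    \neg K\,\neg \mathrm{Arr}(abc) &\;\rightarrow\; \neg K\,\neg \mathrm{Arr}(bac) && \text{(symmetry: if $abc$ is possible for all I know, so is $bac$)}\\
    \neg K Q &\;\rightarrow\; \neg K P && \text{(when any justification of $P$ would require one of $Q$)}\\
    \neg \mathrm{Remember}(P) &\;\rightarrow\; \neg K P && \text{(when $P$ is a proposition I would remember if I knew it)}
    \end{align*}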

6. Mathematicians engage in two primary forms of reasoning; they prove results and [they] construct examples. Logicians have mainly concentrated on formalizing the first activity, and have paid less attention to the second.

7. To develop a logic that does justice to the reasoning involved in positive and negative knowledge attributions, we need to add a mechanism of example construction to an epistemic logic. Constructions that preserve what is known can provide a direct source of NKCs. Relative consistency results (e.g., Goedel's proof of the relative consistency of the Continuum Hypothesis) rely on this method, but for common-sense reasoning we need to look at simpler constructions.

8. In a little-known paper that I think may be relevant, Theodore Hailperin axiomatized the formulas invalid in some finite domain. This paper actually succeeds in formalizing reasoning to an NKC, but as you might expect the "axioms" and rules are extremely peculiar. I believe that Hailperin's techniques can be generalized and I intend to look into this, but I don't yet have any results along these lines that are worth reporting.

REFERENCE

@article{ hailperin:1961a, author = {Theodore Hailperin}, title = {A Complete Set of Axioms for Logical Formulas Invalid in Some Finite Domain}, journal = {{Z}eitschrift {f}\"ur {M}athematische {L}ogik und {G}rundlagen {d}er {M}athematik}, year = {1961}, volume = {7}, pages = {84--96}, xref = {Review: Mostowski in JSL 27, pp. 108--109.} }
@article{ levesque:1990a, author = {Hector J. Levesque}, title = {All {I} Know: A Study in Autoepistemic Logic}, journal = {Artificial Intelligence}, volume = {42}, number = {3}, year = {1990}, pages = {263--309} }
@article{ moore_rc:1985a1, author = {Robert C. Moore}, title = {Semantical Considerations on Nonmonotonic Logic}, journal = {Artificial Intelligence}, year = {1985}, volume = {25}, number = {1}, pages = {75--94} }

Return to Top of Page

Robert Stroud

Centre for Software Reliability School of Computing Science University of Newcastle upon Tyne

Towards ‘self-aware’ intrusion-tolerant computing systems

My background is in dependability, fault tolerance and distributed systems rather than Artificial Intelligence. However, I have a long-standing interest in the use of reflection as a mechanism for separating functional and non-functional concerns. In particular, I have explored the use of reflective programming techniques, in the form of metaobject protocols, as a way of enhancing the dependability properties of an application, without modifying the application itself. Instead, reflection is used to customise and extend the behaviour of the underlying interpreter for the application, for example, by performing each operation in triplicate and voting on the result. Thus, the kinds of non-functional concerns that these techniques can be applied to are those that can be mapped onto functional changes to the underlying interpreter. The approach can also be used to separate mechanism from policy – policy enforcement takes place at the meta level, and is transparent to the application. For example, a meta level could monitor the behaviour of an application for safety violations, or impose additional security constraints, and thus support flexible security policies. Policy enforcement mechanisms can be programmed at the meta level as meta level objects, and this set of meta level abstractions effectively defines the vocabulary in which policies can be expressed. The particular policy that is to be applied to an application is defined by the binding between application objects and meta level policy objects, and this binding can be separated from the application and expressed dynamically, and perhaps even adaptively.
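
As a rough illustration of the triplicate-and-vote idea (a Python sketch only; a decorator stands in here for the metaobject protocol, which in a real system would intercept calls transparently, and the sensor function is invented):

    from collections import Counter
    import functools

    def triplicate_and_vote(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Run in triplicate and take a majority vote (results must be hashable).
            results = [func(*args, **kwargs) for _ in range(3)]
            value, votes = Counter(results).most_common(1)[0]
            if votes < 2:
                raise RuntimeError("no majority among replicated results")
            return value
        return wrapper

    @triplicate_and_vote
    def read_sensor():
        # Stand-in for an operation that might occasionally return a bad value.
        return 42

    print(read_sensor())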

The starting point for this position paper is a question that I was posed during the viva of a PhD candidate whose thesis I was examining, namely “what is the nature of ‘self’ in a fault tolerant system?”. I found the thesis interesting because it viewed fault tolerance from a cognitive science perspective, and therefore gave me a different way of thinking about my own discipline. It hadn’t occurred to me before to think about fault tolerance in terms of a ‘self’/’non-self’ distinction, but this might be one way of thinking about error detection, which is concerned with detecting erroneous states in the system before they lead to failures. Indeed, there is a whole body of work on computational immunology that is concerned with building secure systems using insights gained from the way in which the human immune system works. But are such systems ‘self-aware’ in any real sense? I’m not a philosopher, but I think the answer is ‘no’ because I don’t think that providing a system with a model of correct (or abnormal) behaviour and programming it to monitor its own behaviour against this reference model amounts to self-awareness. Nor do I believe that a system that incorporates some form of dynamic learning to construct its own model of normal behaviour, and detect anomalous behaviour is necessarily any more ‘self-aware’ than a system that is provided with the model as input data. In my view, in order to be truly ‘self-aware’, such a system should incorporate some sort of diagnostic capability, and should be able in some sense to ‘step outside itself’ in order to reflect upon the correctness of its decisions or judgements about its own behaviour or that of its environment, and perhaps modify its own behaviour accordingly.

For the last few years, I have been working on the problem of intrusion tolerance, i.e. fault tolerance with respect to deliberate faults such as attacks on the security of a system. In particular, I was the director of the MAFTIA project (see www.maftia.org), which was a three-year European collaborative research project that investigated the use of fault tolerance techniques to build intrusion-tolerant secure distributed systems.

Dependability terminology makes a distinction between fault, error, and failure. A failure occurs when a system deviates from its intended behaviour. An error is an erroneous part of the system state that could lead to a subsequent failure, and a fault is the adjudged or hypothesized cause of the error. Fault tolerance is generally implemented by error detection followed by system recovery. System recovery consists of two parts: error handling, whose purpose is to eliminate errors from the system’s state, and fault handling, whose purpose is to prevent the underlying faults that caused the error from being reactivated. If the system state contains enough redundancy, then it is possible to use error compensation and fault masking techniques to achieve recovery without the need for explicit error detection. However, unless faulty components are diagnosed and replaced, the system will progressively lose redundancy over time. Furthermore, fault masking only works if failures occur independently, and this is a dubious assumption to make in a hostile environment. Thus, it would seem inevitable that some form of error detection and fault diagnosis capability is required, and this is where I believe that some element of ‘self-awareness’ will be necessary. Unfortunately, it is not possible to build a reliable fault tolerance mechanism using an unreliable error detection mechanism, and the current generation of intrusion detection systems are woefully inadequate to the task because of their high rate of false positives and false negatives.

To illustrate how self-awareness could be both a help and a hindrance in building an intrusion-tolerant system, consider a security system that is responsible for monitoring the output of an intrusion detection system, which in turn is monitoring a system that might or might not be under attack. The security system must decide whether the error signals it receives from the intrusion detection system are ‘normal’ false positives that can be safely ignored, false positives that can be eliminated by tuning the performance of the intrusion detection system itself, or true positives that must be acted upon by addressing the vulnerabilities in the system that are enabling it to be successfully attacked (and similarly for negative signals). Furthermore, based on the subsequent behaviour of the intrusion detection system, the security system must reflect upon whether it made the correct decision, bearing in mind that an attacker might be deliberately concealing their attacks by exploiting vulnerabilities in this decision-making process. So it is not sufficient to use self-awareness to build an intrusion-tolerant system; the self-aware component of the system must itself be made intrusion-tolerant. Indeed, this is perhaps an argument against using self-awareness to build an intrusion-tolerant system, unless the self-awareness can be distributed throughout the system in a way that makes it resilient to attack. So perhaps ‘self-awareness’ should in some sense be an emergent property of the overall behaviour of the system rather than a specific function located in a specific component of the system.
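A hedged sketch of the triage decision described above (the thresholds and parameter names are purely illustrative assumptions, not part of any real intrusion detection interface):

    # Illustrative triage of an alert received from an intrusion detection system.
    def triage(false_positive_rate, confidence):
        """Classify an IDS alert into one of the three responses described above."""
        if confidence < 0.2 and false_positive_rate > 0.9:
            return "ignore"            # a 'normal' false positive
        if confidence < 0.5:
            return "retune_ids"        # a false positive worth eliminating by tuning the IDS
        return "fix_vulnerability"     # treat as a true positive and act on the vulnerability

    # As the text notes, this decision logic is itself an attack surface, so its
    # judgements must later be reviewed against the IDS's subsequent behaviour.
    print(triage(false_positive_rate=0.95, confidence=0.1))   # ignore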

In conclusion, fault tolerance provides an important but challenging application area for the concept of ‘self-awareness’, and my contribution to the workshop would be as an expert in this particular application domain.

Return to Top of Page

Sheila McIlraith

Dept of Computer Science, U. Toronto

My research program is primarily in the area of artificial intelligence (AI), but with significant overlap to aspects of software engineering. The overarching theme of my research is the development of self-describing, self-diagnosing and self-(re)configuring autonomous or automated systems.

We are seeing a proliferation of complex, interconnected hardware and software systems, including smart buildings, power distribution systems, micro-electro-mechanical systems (MEMS), interacting mobile devices, Web-accessible programs, and autonomous space systems. These systems are often component-based, and sometimes distributed, often the result of networked software and/or connected component devices. Developing robust, fault-adaptive software for these systems is difficult and time-consuming. Our approach to addressing these problems is based on artificial intelligence techniques in model-based systems, influenced by work in software engineering and control theory. We build declarative models of system components, thus making them self-describing. We combine these models with automated reasoning techniques that enable these systems to be reconfigurable, self-diagnosing and adaptive, in response to failures and changing environments.
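As a toy illustration of this style of approach (the component names and models are invented for the example; this is not the actual system), a declarative model of each component's normal behaviour can be searched for a minimal set of components whose failure would explain the observations:

    # Minimal sketch in the spirit of model-based diagnosis over declarative
    # component models (hypothetical component names).
    from itertools import combinations

    # Declarative models: component name -> function giving its expected output.
    MODELS = {
        "sensor_a": lambda inputs: inputs["temp"],
        "sensor_b": lambda inputs: inputs["temp"],
    }

    def consistent(assumed_faulty, inputs, observations):
        """True if observations match the models of all components assumed healthy."""
        return all(MODELS[c](inputs) == observations[c]
                   for c in MODELS if c not in assumed_faulty)

    def diagnose(inputs, observations):
        """Return a minimal set of components whose failure explains the observations."""
        for size in range(len(MODELS) + 1):
            for candidate in combinations(MODELS, size):
                if consistent(set(candidate), inputs, observations):
                    return set(candidate)

    # sensor_b disagrees with its model, so it is the minimal diagnosis.
    print(diagnose({"temp": 20}, {"sensor_a": 20, "sensor_b": 35}))  # {'sensor_b'}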

My research contributes to the theoretical foundations of AI and other relevant disciplines, by developing formal principles and computational techniques, informed and motivated by real-world problems. Two problem domains that have motivated my work in recent years are:

Analysis of interoperating programs & devices, as exemplified by Semantic Web Services, and

Analysis of Complex Physical Systems, as exemplified by NASA space systems.

These problem domains are instances of a broad array of seemingly disparate problems that share core technical challenges. All are appropriately modeled as compositional discrete and hybrid (discrete + continuous) dynamical systems. Broad shared research challenges include a) how to specify the behaviour of these systems so that they are self-describing, and b) how to develop automated reasoning techniques that manipulate these specifications to address tasks such as system configuration, automated composition, system diagnosis, and system reconfiguration or repair.
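For the discrete case, a small sketch of what challenges a) and b) might look like (the mode and command names are invented): a component's behaviour is specified declaratively as a transition system, and a simple search over that specification finds a reconfiguration plan.

    # Sketch: a self-describing behaviour specification and a search over it.
    from collections import deque

    TRANSITIONS = {                       # declarative behaviour specification
        "failed":  {"reset": "standby"},
        "standby": {"enable": "nominal"},
        "nominal": {},
    }

    def reconfigure(start, goal):
        """Breadth-first search for a sequence of commands reaching the goal mode."""
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            mode, plan = frontier.popleft()
            if mode == goal:
                return plan
            for command, nxt in TRANSITIONS[mode].items():
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [command]))

    print(reconfigure("failed", "nominal"))   # ['reset', 'enable']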

In the interest of submitting this statement of interest in a timely fashion, I’ll end with five of my publications that are related to the research issues raised above.

1 Lerner, U., Moses, B., Scott, M., McIlraith, S., and Koller, D. “Monitoring a Complex Physical System using a Hybrid Dynamic Bayes Net”. Eighteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI2002), August 2002.

2 McIlraith, S. and Son, T. “Adapting Golog for Composition of Semantic Web Services”. Eighth International Conference on Knowledge Representation and Reasoning (KR2002), pages 482-493, April, 2002.

3 McIlraith, S. “Modeling and Programming Devices and Web Agents.” In Rash, J.L., Rouff, C.A., Truszkowski, W., Gordon, D., and Hinchey M.G. (Eds.), Formal Approaches to Agent-Based Systems, First International NASA Goddard Workshop, Lecture Notes in Artificial Intelligence 1871:63–77, Springer-Verlag, Greenbelt, MD, April 2000.

4 McIlraith, S., Son, T.C. and Zeng, H. “Semantic Web Services.” IEEE Intelligent Systems, Special Issue on the Semantic Web, 16(2):46–53, March/April, 2001.

5 McIlraith, S. “Explanatory Diagnosis: Conjecturing Actions to Explain Observations.” Sixth International Conference on Principles of Knowledge Representation and Reasoning (KR’98), pp. 167–177, Trento, Italy, June 1998.

Return to Top of Page

Stan Franklin

Dunavant Professor of Computer Science, Institute for Intelligent Systems, The University of Memphis, Memphis, TN 38152, franklin@memphis.edu, www.cs.memphis.edu/~franklin

Mostly funded by sizable grants from the Office of Naval Research, for the past six years my research team and I have developed the IDA technology (http://csrg.cs.memphis.edu/CSRG/index.html). IDA (Intelligent Distribution Agent) is an intelligent, autonomous software agent (Franklin 2001). At the end of each sailor's tour of duty, the sailor is assigned to a new billet. This assignment process is called distribution. The Navy employs some 280 people, called detailers, to effect these new assignments. IDA's task is to facilitate this process by completely automating the role of detailer.

IDA communicates with sailors via email in natural language, by understanding the English content of messages and producing life-like responses. Sometimes she will initiate conversations. She must access several databases, again understanding the content. She must see that the Navy's needs are satisfied by adhering to some ninety policies and seeing that job requirements are fulfilled. She must hold down moving costs, but also cater to the needs and desires of the sailor as well as is possible. This includes negotiating with the sailor via an email correspondence in natural language. At this writing an almost complete version of IDA is up and running and has been demonstrated and tested to the satisfaction of the Navy.

IDA is a “conscious” software agent in that she implements a large part of Baars’ global workspace theory (Baars 1988). In his global workspace theory, Baars, along with many others, postulates that human cognition is implemented by a multitude of relatively small, special purpose processes, almost always unconscious. (It's a multiagent system.) Communication between them is rare and over a narrow bandwidth. Coalitions of such processes find their way into a global workspace (and into consciousness). This limited capacity workspace serves to broadcast the message of the coalition to all the unconscious processors, in order to recruit other processors to join in handling the current novel situation, or in solving the current problem. Thus consciousness, in this theory, allows us to deal with novel or problematic situations that can’t be dealt with efficiently, or at all, by habituated unconscious processes. In particular, it provides access to appropriately useful resources, thereby solving the relevance problem.
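The broadcast-and-recruit step of this theory can be sketched schematically as follows (a toy illustration only, not IDA's implementation; the processor and topic names are invented):

    # Sketch of a global-workspace-style broadcast: the most active coalition
    # wins the limited-capacity workspace, and its message is broadcast to all
    # unconscious processors so that relevant ones can volunteer to help.

    class Processor:
        def __init__(self, name, relevant_topics):
            self.name = name
            self.relevant_topics = relevant_topics

        def receive_broadcast(self, message):
            # A processor joins in only if the broadcast is relevant to it.
            return self.name if message["topic"] in self.relevant_topics else None

    def conscious_broadcast(coalitions, processors):
        """Select the highest-activation coalition and broadcast its message."""
        winner = max(coalitions, key=lambda c: c["activation"])
        recruits = [p.receive_broadcast(winner["message"]) for p in processors]
        return winner, [r for r in recruits if r is not None]

    processors = [Processor("route_planner", {"navigation"}),
                  Processor("spell_checker", {"text"})]
    coalitions = [{"activation": 0.9, "message": {"topic": "navigation"}},
                  {"activation": 0.4, "message": {"topic": "text"}}]
    print(conscious_broadcast(coalitions, processors)[1])  # ['route_planner']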

Processing in IDA is, for the most part, a continuing iteration of a cognitive cycle of activities involving modules called perception, working memory, episodic memory, long-term associative memory, consciousness, action selection and motor activity (Baars & Franklin 2003, Franklin et al. in review).
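Schematically, one pass through such a cycle might look like the following sketch (the module names follow the text; the bodies are placeholders, not IDA's code):

    # One schematic pass through the cognitive cycle described above.
    def cognitive_cycle(stimulus, working_memory, episodic_memory):
        percept = stimulus                                   # perception
        working_memory.append(percept)                       # working memory holds the percept
        cues = [e for e in episodic_memory if e == percept]  # episodic / associative recall
        broadcast = (percept, cues)                          # "conscious" broadcast
        action = f"respond_to:{broadcast[0]}"                # action selection
        episodic_memory.append(percept)                      # the cycle itself is remembered
        return action                                        # motor activity

    wm, em = [], []
    print(cognitive_cycle("email from sailor", wm, em))      # respond_to:email from sailor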

Though IDA is functionally “conscious,” she’s not self-aware. However, self-awareness of several different sorts can be added to the IDA architecture with no major modifications, that is, with no additional modules. Her current cognitive cycle will suffice.

As a first step, we intend to give IDA the capability of answering questions about her current state of “consciousness,” of what she’s aware of in the world. This would be the beginning of a so-called narrative self (Gazzaniga 1998). The design of this addition to IDA is currently in progress. The next step will be to add Damasio’s proto-self (Damasio 1999) to IDA, giving her some opportunity to act in self-preservation through a minimal (core) self (Ramamurthy & Franklin in review). An autobiographical self for IDA is already in the works (Ramamurthy et al. submitted). All this, and more, was the subject of a talk given to the Workshop on Machine Consciousness at Torino, Italy, during October of 2003. A PowerPoint show for that talk is available on request. (...)

References

Baars, B. J. 1988. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
Baars, B. J., and S. Franklin. 2003. How conscious experience and working memory interact. Trends in Cognitive Science 7:166–172.
Damasio, A. R. 1999. The Feeling of What Happens. New York: Harcourt Brace.
Franklin, S. 2001. Conscious Software: A Computational View of Mind. In Soft Computing Agents: New Trends for Designing Autonomous Systems, ed. V. Loia, and S. Sessa. Berlin: Springer (Physica-Verlag).
Franklin, S., B. J. Baars, U. Ramamurthy, and M. Ventura. in review. The Role of Consciousness in Memory.
Gazzaniga, M. S. 1998. The Mind's Past. Berkeley: University of California Press.
Ramamurthy, U., and S. Franklin. in review. Self-preservation mechanisms for Cognitive Software Agents.
Ramamurthy, U., S. D'Mello, and S. Franklin. submitted. Modified Sparse Distributed Memory as Transient Episodic Memory for Cognitive Software Agents.

Return to Top of Page

Stuart C. Shapiro

Department of Computer Science and Engineering and Center for Cognitive Science, 201 Bell Hall, University at Buffalo, The State University of New York, Buffalo, NY 14260-2000, shapiro@cse.buffalo.edu

I am interested in self-aware, natural language (NL) competent autonomous agents/cognitive robots. This interest derives from my long-term interest in knowledge representation and reasoning in support of NL competence, and the grand goal of AI to produce systems that have human-level intelligence. To me, human intelligence largely exhibits itself through people’s abilities to communicate in NL, not only about intellectual topics, but also about behavior, including giving and accepting instructions about how to perform various tasks.

For the past 16 years, I have been interested in the interaction of reasoning and acting, where both, of course, are subject to NL discussions. For the past 12 years, this has expanded to include questions of what beliefs come from interactions with other agents, what from reasoning, and what from an embodied agent’s own body. These latter beliefs contribute to a sense of self-awareness. Cassie [1, 6, 7, 8, 9, 12, 13, 14, 15, 17, 18, 19] is the name of our cognitive robot. She is implemented in the GLAIR architecture [4, 5, 16] and the SNePS knowledge representation and reasoning system [18, 20, 21]. The following shows an interaction with Cassie in the role of a FEVAHR (Foveal Extra-Vehicular Activity Helper-Retriever) robot [2, 3, 15]. She is in a room with several people and several other robots. This is a transcription of an actual interaction, slightly reformatted, and with labels added to show the source of each sentence.

Stu: Come here.
Cassie: I came to you, Stu.
Cassie: I am near you.

Stu: John went to the green robot.
Cassie: I understand that John went to the green robot.

Stu: Go to the green robot and then to Bill.
Cassie: I found the green robot.
Cassie: I am looking at the green robot.
Cassie: I went to the green robot.
Cassie: I am near the green robot.
Cassie: I found Bill.
Cassie: I am looking at Bill.
Cassie: I went to Bill.
Cassie: I am near Bill.

Stu: Where are you?
Cassie: I was near you and I was near the green robot and I am near Bill.

Stu: Help Bill.
Cassie: I am talking to you, Bill.
Cassie: I came to you.
Cassie: I am near you.
Cassie: I am following you.

Bill: Who are you?
Cassie: I am the FEVAHR and my name is ‘Cassie’.

Bill: Who went to the green robot?
Cassie: John went to the green robot and I went to the green robot.

Bill: Who have you talked to?
Cassie: I talked to Stu and I am talking to you.

Notice that Cassie has beliefs about herself and others, knows whom she is talking with, can remember and report on her actions using appropriate tense and aspect, can be informed about the actions of others, and can understand and use certain deictic terms such as I, you, and here.

Some of the features of SNePS/GLAIR/Cassie that facilitate Cassie’s current level of self-awareness are:

• a SNePS term represents Cassie herself, and is used in all her beliefs about herself;

• a mind/body distinction is made in the GLAIR architecture; most beliefs enter the mind via NL interaction or other sense organs, but beliefs about what Cassie herself is doing are entered directly by the body level as it performs primitive actions;

• there is a temporal model, consisting of temporally related time-denoting terms; each event Cassie has beliefs about is associated with a time, and events are thus related to each other in time;

• a set of registers located at the body level contains mind-level terms that give Cassie a point of view, including an I register containing Cassie’s self-term, a YOU register containing the term representing the individual Cassie is conversing with, and a NOW register containing the term representing the current time;

• when the body level inserts a belief into the mind that Cassie is performing some action, the NOW register is advanced to a new time term, that term is temporally related to old time terms, and the event that this action is being performed is associated with the new time; the sequence of time terms that have been the value of NOW forms the temporal spine of an episodic memory (see the sketch after this list);

• the representation of acts that are to be done, that have been done, that are expressed in imperative NL sentences, and that are expressed as predicates of NL sentences is the same, facilitating NL discussion of acts including NL instructions about new acts.
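The register mechanism and the advancing NOW described in the list above can be sketched as follows (all identifiers are hypothetical; this is not SNePS/GLAIR code):

    # Sketch: point-of-view registers and an advancing NOW forming the
    # temporal spine of an episodic memory.
    class Agent:
        def __init__(self):
            self.I, self.YOU, self.NOW = "Cassie", None, 0   # point-of-view registers
            self.beliefs, self.timeline = [], [0]

        def body_reports(self, action):
            """Body level inserts a belief and advances NOW to a new time term."""
            self.NOW += 1
            self.timeline.append(self.NOW)                   # temporal spine
            self.beliefs.append((self.I, action, self.NOW))  # belief uses the self-term

    cassie = Agent()
    cassie.body_reports("go(green_robot)")
    cassie.body_reports("go(Bill)")
    print(cassie.beliefs)    # [('Cassie', 'go(green_robot)', 1), ('Cassie', 'go(Bill)', 2)]
    print(cassie.timeline)   # [0, 1, 2]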

In some recent versions of Cassie [1, 13], the mind and upper body level run on one computer while lower body levels run on other machines. They are connected by IP sockets: different sockets for different modalities, so that, for example, Cassie can talk and move at the same time. One of these versions [1] has a self-perception socket from the lower levels to the upper levels that allows Cassie to hear herself talking. This is important for timing sequences of actions, so her mind doesn’t outrun her mouth.
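Schematically (with in-process queues standing in for the IP sockets, and invented function names), the self-perception channel lets the mind level wait for confirmation that an utterance has actually been spoken before starting the next act:

    # Sketch: one channel per modality so speech and motion can proceed in
    # parallel, plus a self-perception channel feeding completed speech back
    # to the mind level so her mind doesn't outrun her mouth.
    from queue import Queue

    speech_out, motion_out, self_perception = Queue(), Queue(), Queue()

    def body_speak(utterance):
        speech_out.put(utterance)
        self_perception.put(("heard_self_say", utterance))   # Cassie hears herself

    def mind_wait_until_spoken(utterance):
        """Block the next act until the self-perception channel confirms the speech."""
        while True:
            event, content = self_perception.get()
            if event == "heard_self_say" and content == utterance:
                return

    body_speak("I am near Bill.")
    mind_wait_until_spoken("I am near Bill.")
    print("speech confirmed; next act may begin")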

(...)

References

[1] J. Anstey, D. Pape, S. C. Shapiro, and V. Rao. Virtual drama with intelligent agents. In H. Thwaites, editor, Hybrid Reality: Art, Technology and the Human Factor, Proceedings of the Ninth International Conference on Virtual Systems and MultiMedia (VSMM 2003). International Society on Virtual Systems and MultiMedia, 2003.
[2] C. Bandera, S. Shapiro, and H. Hexmoor. Foveal machine vision for robots using agent based gaze control. Final Technical Report #613-9160001, Amherst Systems Inc., Buffalo, NY, September 1994.
[3] H. Hexmoor and C. Bandera. Architectural issues for integration of sensing and acting modalities. In Proceedings of the IEEE International Symposium on Intelligent Control: ISIC/CIRA/ISIS Joint Conference, pages 319–324. IEEE Press, 1998.
[4] H. Hexmoor, J. Lammens, and S. C. Shapiro. Embodiment in GLAIR: a grounded layered architecture with integrated reasoning for autonomous agents. In D. D. Dankel II and J. Stewman, editors, Proceedings of The Sixth Florida AI Research Symposium (FLAIRS 93), pages 325–329. The Florida AI Research Society, April 1993.
[5] H. Hexmoor and S. C. Shapiro. Integrating skill and knowledge in expert agents. In P. J. Feltovich, K. M. Ford, and R. R. Hoffman, editors, Expertise in Context, pages 383–404. AAAI Press/MIT Press, Cambridge, MA, 1997.
[6] H. O. Ismail. Reasoning and Acting in Time. Ph.D. dissertation, Technical Report 2001-11, University at Buffalo, The State University of New York, Buffalo, NY, August 2001.
[7] H. O. Ismail and S. C. Shapiro. Cascaded acts: Conscious sequential acting for embodied agents. Technical Report 99-10, Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY, November 1999.
[8] H. O. Ismail and S. C. Shapiro. Conscious error recovery and interrupt handling. In H. R. Arabnia, editor, Proceedings of the International Conference on Artificial Intelligence (IC-AI’2000), pages 633–639, Las Vegas, NV, 2000. CSREA Press.
[9] H. O. Ismail and S. C. Shapiro. Two problems with reasoning and acting in time. In A. G. Cohn, F. Giunchiglia, and B. Selman, editors, Principles of Knowledge Representation and Reasoning: Proceedings of the Seventh International Conference (KR 2000), pages 355–365, San Francisco, 2000. Morgan Kaufmann.
[10] F. Lehmann, editor. Semantic Networks in Artificial Intelligence. Pergamon Press, Oxford, 1992.
[11] F. Orilia and W. J. Rapaport, editors. Thought, Language, and Ontology: Essays in Memory of Hector-Neri Casta˜neda. Kluwer Academic Publishers, Dordrecht, 1998.
[12] W. J. Rapaport, S. C. Shapiro, and J. M. Wiebe. Quasi-indexicals and knowledge reports. Cognitive Science, 21(1):63–107, January–March 1997. Reprinted in [11, pp. 235–294].
[13] J. F. Santore and S. C. Shapiro. Crystal Cassie: Use of a 3-D gaming environment for a cognitive agent. In R. Sun, editor, Papers of the IJCAI 2003 Workshop on Cognitive Modeling of Agents and Multi-Agent Interactions, pages 84–91. IJCAII, 2003.
[14] S. C. Shapiro. The CASSIE projects: An approach to natural language competence. In J. P. Martins and E. M. Morgado, editors, EPIA 89: 4th Portugese Conference on Artificial Intelligence Proceedings, Lecture Notes in Artificial Intelligence 390, pages 362–380. Springer-Verlag, Berlin, 1989.
[15] S. C. Shapiro. Embodied Cassie. In Cognitive Robotics: Papers from the 1998 AAAI Fall Symposium, Technical Report FS-98-02, pages 136–143. AAAI Press, Menlo Park, California, October 1998.
[16] S. C. Shapiro and H. O. Ismail. Anchoring in a grounded layered architecture with integrated reasoning. Robotics and Autonomous Systems, 43(2–3):97–108, May 2003.
[17] S. C. Shapiro, H. O. Ismail, and J. F. Santore. Our dinner with Cassie. In Working Notes for the AAAI 2000 Spring Symposium on Natural Dialogues with Practical Robotic Devices, pages 57–61, Menlo Park, CA, 2000. AAAI.
[18] S. C. Shapiro and W. J. Rapaport. SNePS considered as a fully intensional propositional semantic network. In N. Cercone and G. McCalla, editors, The Knowledge Frontier, pages 263–315. Springer-Verlag, New York, 1987.
[19] S. C. Shapiro and W. J. Rapaport. Models and minds: Knowledge representation for natural-language competence. In R. Cummins and J. Pollock, editors, Philosophy and AI: Essays at the Interface, pages 215–259. MIT Press, Cambridge, MA, 1991.
[20] S. C. Shapiro and W. J. Rapaport. The SNePS family. Computers & Mathematics with Applications, 23(2–5):243–275, January–March 1992. Reprinted in [10, pp. 243–275].
[21] S. C. Shapiro and The SNePS Implementation Group. SNePS 2.6 User’s Manual. Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY, 2002. Available as http://www.cse.buffalo.edu/sneps/Manuals/manual26.ps.

Return to Top of Page

Yaron Shlomi

(no photo found) (no homepage found)

U. Maryland

I am a first year graduate student in the Cognitive Psychology program. My advisor is Tom Nelson, whose research is focused on human memory and metacognition. I will detail some of my interests below.

Whether humans or machines, agents are expected to monitor their status and their progress towards their pre-determined goals. As a student, I need to assess whether I really know the material for tomorrow’s exam, and whether going out to the movies is a good idea given my mastery of the material. Likewise, the autonomous vehicle traveling to Mars should detect deviations from its flight path. I am interested in how such agents monitor their progress.

When inadequate progress towards the goal has been detected, it may be necessary for the agents to ask for help from another agent, whether a professor, a peer, or a computer. A crucial question concerns the timing of the request for help. Do agents ask for help when they should? This is an important question: in highly stressed military contexts, asking too early may burden a super-ordinate agent, while asking too late may lead to life-threatening situations.

Asking for help from another agent poses additional questions. The agent asking for help must have a model of the capabilities of other agents, so that it is possible to turn to the agent with the required capabilities. In this context, I think that the question of modeling another agent’s capabilities is intriguing: as one’s knowledge about another agent becomes comprehensive, what’s left of that other agent’s privacy, or the other agent’s “privileged” knowledge?

I think that the questions posed above only begin to describe a common thread of awareness in humans and computers. The workshop on Self-Aware Computers provides a unique opportunity for people from diverse backgrounds to explore these commonalities. By attending the workshop, I hope to share my curiosity for metacognition. After learning more about the computer-oriented perspectives that will be represented in the workshop, I am sure that some of my questions will be answered and new ones will emerge.

Return to Top of Page

This document last edited March 28, 2005 by Pat Hayes