It has become commonplace for critics of AI to note that far
fewer expert systems are in widespread use than was predicted in
the giddy atmosphere surrounding their initial development and early
deployment in domains such as medicine. One reason typically given
for the limited range of application of expert systems is their
failure to gain acceptance by users. In particular, the inadequate
approach to explanation found in most expert systems can be a significant
impediment to user acceptance.
It stands to reason that a machine offering expert judgment in a given
domain is more likely to find acceptance by those seeking its advice
if it can explain its recommendations. Accordingly, an explanation
capability should enable a user to get a complete, understandable
answer to any sort of relevant question about the knowledge explicitly
and implicitly embodied in a system's implementation formalism (e.g.,
rules, frames, or whatever). Unfortunately, the capacity of most
current expert systems to explain their behavior (e.g., conclusions)
is limited to either causal descriptions of the behavior of the
performance environment's reasoning mechanism or to the display
of canned text and graphics files. In this paper we will describe
a new approach to explanation, which we will call participatory explanation.
Why is Explanation so Hard?
A typical article about explanation begins like this one; that is,
it offers some compelling reasons for why an adequate explanation
facility is crucial to the success of an expert system, and then
laments the inadequacies of current explanation schemes (e.g., Jackson,
1986; Partridge, 1987). Much of the remainder of the paper is usually
devoted to bemoaning the current state of the explanation art and
speculating about what future AI (artificial intelligence) advances
will be necessary before expert systems can offer explanations that
take into account "the user's aims, objectives, hopes and fears,
all with respect to the particular expertise of the AI system" (Partridge, 1987).
Any system able to meet the above goals, that is, to take into account
the user's "aims, objectives, hopes and fears," will in essence be
able to pass the Turing test for explanation -- thus requiring
a solution to the general AI problem before substantially improving
the explanation capabilities of our systems. The sorts of issues
commonly raised in such discussions seem roughly equivalent
to the general AI question (i.e., the whole ball of wax), and include
such worthy foes as the problems of context, relevance, and sufficiency
(i.e., when to stop explaining) and other fundamentally difficult
bugbears. Solve the AI problem, and you will also have solved the
explanation problem, along with many others. In the meantime, however,
expert systems are coming into widespread use without adequate capacities
for explaining their findings. This view of explanation places us
in the somewhat peculiar position of waiting for the future advent
of strong AI to make palatable the advice offered by the first broadly
useful fruit (i.e., expert systems) of the AI research program.
Empowering Users to Construct Explanations
Our doubts concerning the near-term feasibility of meeting all the practical
challenges facing traditional approaches to explanation do not imply
that we assume that a fully intelligent explanation system embodying
a passive mode of learning would resolve the problem. On the contrary,
even if we were to postulate the existence of such a system, the
epistemological objections raised here, as well as in the literatures
of education and psychology, would remain unanswered. In short,
we favor the design of explanation facilities that acknowledge and
exploit the active role played by the learner/user in the process
of meaning making. In response to many of the current and envisioned
approaches to explanation, we are inclined to offer the slogan,
"Too much instruction -- not enough human construction."
As we have noted elsewhere (Bradshaw, Ford, Adams-Webber, & Boose,
1993), designers of most expert systems leave little room for the
active participation of users in their efforts to make meaning (i.e.,
construct an explanation) from the data presented to them in response
to a request for explanation. The user is seen as a relatively passive
receiver of the information intended to comprise an explanation.
In contrast, participatory explanation puts the user in an environment
where he or she can assume an active role in the process of constructing
his or her own explanation by freely exploring the domain model.
A maxim that may help clarify our view of explanation is:
What makes an utterance an explanation takes place in the ear (actually
the mind) of the receiver of the explanation, not in the mouth
of the provider of the explanation.
From this constructivist perspective, the user of any knowledge-based
system is engaged in an ongoing cycle of observation, prediction,
feedback and control. This outlook casts the designer
of an explanation facility in the role of devising 'cognitive prosthetics'
by means of which the system's users can stand on the shoulders
of giants (i.e., the domain experts) in order to better understand
the domain. Thus, a crucial question for knowledge engineers is
to what extent can our explanation facilities support the users
in their effort toward meaning making?
Foundations for Participatory Explanation
Elsewhere (Ford, Petry, Adams-Webber, & Chang, 1991; Ford & Adams-Webber,
1991; Ford, Bradshaw, Adams-Webber, & Agnew, 1992) we have elaborated
a constructivist epistemology that can serve as a foundation for
our efforts toward the development of user-centered participatory
explanation systems. For the sake of brevity we will offer only
a brief discussion here and direct interested readers to the above
references. Although it is logically possible that there are an
infinite number of different ways of interpreting some aspect of
reality, some conceptual models are more useful than others. Thus,
in a sense, "experts" can be said to have developed significantly
more useful models of the "reality" underlying a specific domain
than has the ordinary practitioner. For example, domain experts
can be viewed as having built up repertories of working hypotheses,
or "rules of thumb" (i.e., functional but fallible anticipations
held with high confidence and uncertain validity) that guide their
expert performance. We advocate attempting to make the expert's
relatively superior model available to the community of users as
a basis upon which the latter may directly improve their own performances,
and also construct more useful explanations of relevant events (Pope
& Gilbert, 1985). Concept maps (see Section 4.1) were developed
for just this purpose.
4.1 Assimilation Theory and Concept Maps
Assimilation theory is a cognitive learning theory that has been widely applied
to education (Ausubel, 1963; Ausubel, Novak, & Hanesian, 1978).
Like Kelly's (1955) personal construct theory, it is based on a
constructivist model of human cognitive processes. Specifically,
it describes how concepts are acquired and organized within a learner's
cognitive structure. The concept map is assimilation theory's major
methodological tool for
ascertaining and representing what is known. In educational settings,
concept mapping techniques have helped people of all ages to examine
many fields of knowledge. Much of the assimilation theoretic research
to date has involved and exploited concept mapping (Novak & Gowin,
1984). In addition, concept maps are of increasing interest to those
engaged in the process of knowledge acquisition for the construction
of knowledge-based systems (Ford, Stahl et al., 1991; Snyder, McNeese,
Zaff, & Gomes, 1992). Essentially, concept maps provide context-dependent
representations of a specific domain of knowledge within a set of
concepts constructed so that the interrelationships among the included
concepts are evident. In fact, concept maps have been shown to help
students "learn how to learn" by making explicit their personally
constructed knowledge and providing a structure for linking in new
information. Concept maps offer a flexible framework for eliciting,
representing, and communicating the emerging domain model. In this
way, they are well suited to a participatory explanation paradigm
in which the user explores the domain model built through the collaboration
of the knowledge engineer, domain expert, and the modeling environment.
Concept maps structure a set of concepts into a hierarchical framework.
More general, inclusive concepts are found at the highest levels,
with progressively more specific and less inclusive concepts arranged
below them. In this way, concept maps display Ausubel's notion of
subsumption, namely that new information is often relatable to and
subsumable under more inclusive concepts. All concepts at any given
level in the hierarchy will tend to have a similar degree of generality.
Relationships between concepts in a map represent propositions.
Propositions form semantic units by linking together two or more
concepts. Figure 1 shows a portion of a concept map produced by
an expert in nuclear cardiology.
Figure 1. A portion of a concept map from the domain of nuclear cardiology.
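To make this structure concrete, the sketch below (in Python, purely for illustration; the class names and representation are our own assumptions, not ICONKAT's internals) models a concept map as labeled concepts at explicit hierarchy levels, joined by linking phrases into propositions:

    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        """A node in a concept map; level 0 is the most general."""
        label: str
        level: int

    @dataclass
    class Proposition:
        """A semantic unit: two concepts joined by a linking phrase."""
        source: Concept
        linking_phrase: str
        target: Concept

        def as_text(self) -> str:
            return f"{self.source.label} {self.linking_phrase} {self.target.label}"

    @dataclass
    class ConceptMap:
        """A hierarchically organized set of concepts and propositions."""
        name: str
        concepts: list = field(default_factory=list)
        propositions: list = field(default_factory=list)

        def add(self, src: Concept, phrase: str, dst: Concept) -> None:
            self.propositions.append(Proposition(src, phrase, dst))

    # Toy fragment; the labels are invented, not taken from Figure 1.
    heart = Concept("heart wall motion", level=0)
    abnormality = Concept("wall motion abnormality", level=1)
    cmap = ConceptMap("nuclear cardiology (fragment)")
    cmap.concepts += [heart, abnormality]
    cmap.add(heart, "may exhibit", abnormality)
    print(cmap.propositions[0].as_text())

Representing linking phrases explicitly keeps each proposition readable as a sentence, which is precisely the property that makes the interrelationships among concepts evident.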
In our approach to participatory explanation (see ICONKAT discussion
in section 5.1), concept maps are an important mediating representation
used to provide a hierarchically ordered, conceptual overview of
the domain model arising from the collaborative efforts of the expert
and knowledge engineer. The concept maps provide "knowledge landscapes"
(essentially topographical maps) of the domain that comprise the
organizational structure for the entire domain model. It is into this
semantic structure that other mediating representations (e.g., repertory
grids, video, text, etc.) are linked during the knowledge acquisition
phase. The ICONKAT approach to participatory explanation, described
in Section 5.1, involves users constructing their own explanations
while navigating their way through the linkages and nodes of the
hierarchically organized concept maps comprising the domain model.
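As a sketch of how such linking might be realized (our hedged reconstruction, not the actual ICONKAT mechanism; the resource kinds and locators are assumptions), each concept node can carry a set of typed mediating representations attached during knowledge acquisition:

    from dataclasses import dataclass, field
    from enum import Enum

    class Kind(Enum):
        TEXT = "text"
        IMAGE = "image"
        VIDEO = "video"
        REPERTORY_GRID = "repertory grid"
        CONCEPT_MAP = "concept map"

    @dataclass
    class Resource:
        kind: Kind
        locator: str  # e.g., a file path inside the domain model (hypothetical)

    @dataclass
    class ConceptNode:
        label: str
        resources: list = field(default_factory=list)

        def attach(self, kind: Kind, locator: str) -> None:
            """Link a mediating representation to this concept."""
            self.resources.append(Resource(kind, locator))

    # During knowledge acquisition, supporting material is linked to
    # the concept it elaborates; the paths here are invented examples.
    node = ConceptNode("first pass study")
    node.attach(Kind.VIDEO, "clips/first_pass_demo.mov")
    node.attach(Kind.TEXT, "notes/first_pass_overview.txt")
    for r in node.resources:
        print(r.kind.value, "->", r.locator)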
Participatory explanation, like most approaches to providing advanced
explanation facilities, requires access to an explicit model of the domain under
consideration. This raises the issue of where the domain model comes
from. Fortunately, it is possible to provide a modeling environment
(e.g., ICONKAT) that supports the expert and knowledge engineer's
efforts to collaborate in the construction of a domain model in
such a way that the resulting model will be well-suited to participatory
explanation. Rather than arduously constructing a model of human
expertise and then throwing it away (upon translation into the
syntax of the performance environment), an explanation facility
should exploit the model formed during the knowledge acquisition
process. If not, then the implicit connections that establish the
"logical" structure of the domain model may be lost, and as a result,
much effort will be required to "put Humpty Dumpty back together
again." This is essentially the task (i.e., reassembling Humpty)
confronting those knowledge engineers who do not treat explanation
as part of the knowledge acquisition process itself, and consequently,
lack an adequate model in their attempts to construct explanation
systems post hoc. Thus, one key to the design of explanation subsystems
that are capable of more than shallow and/or mechanistic accounts
is to recognize that the development of an explanation facility
is a fundamental aspect of the knowledge acquisition process.
In Section 5.1, we discuss the specific explanation facilities of the
ICONKAT (Ford, Stahl et al., 1991) knowledge acquisition tool. The
ICONKAT approach to explanation reflects recent research in cognitive
science (Yager & Ford, 1990), psychology (Kelly 1955; Pope & Gilbert,
1985) and education (Novak, 1977), which emphasizes an active and
participatory approach to learning.
5.1 Participatory Explanation and ICONKAT
ICONKAT (Integrated Constructivist Knowledge Acquisition Tool) is a knowledge
acquisition and representation system under evolutionary development
at the University of West Florida. ICONKAT incorporates principles
and techniques from both personal construct theory (Kelly, 1955)
and assimilation theory (see Section 4.1). ICONKAT provides extensive
interactive assistance to the domain expert and knowledge engineer
in collaboratively modeling expertise. ICONKAT was used in the design
and construction of NUCES: Nuclear Cardiology Expert System (Ford,
Cañas, Coffey, Andrews, Schad, & Stahl, 1992). This is a large-scale
expert system for the diagnosis of first pass cardiac functional
images, a noninvasive radionuclide technique used to evaluate heart
wall motion abnormalities.
In particular, ICONKAT supports the participatory explanation paradigm
in which the domain model that emerges from the knowledge acquisition
process is subsequently exported from the development environment
to the delivery environment -- where it serves as the foundation
of the explanation capability for the deployed system. ICONKAT's
collaborative modeling environment exploits the expressiveness of
concept maps to assist users in hierarchically organizing the various
mediating representations (e.g., other concept maps, repertory grids,
images, audio, video, documents) into browseable hypermedia domain
models (see Figure 2). Interestingly, concept maps play a twin role
in this process. First, concept maps are one of the principal means
by which the expert and knowledge engineer represent knowledge about
the domain. In particular, concept maps have proven excellent in
eliciting and representing what the participants see as the knowledge
landscape or topology at a given level of abstraction. Second, concept
maps furnish a rich organizational framework that can serve as the
interface to the domain model. Thus, while the expert and knowledge
engineer collaborate in using concept maps to model the former's
problem solving knowledge, they are also in essence building the
structure of the interface that subsequent users will employ to
explore the model when desiring an explanation. A session from NUCES
(a medical expert system built in the ICONKAT environment) illustrates
the participatory explanation approach (see Figure 2). When a user
requests an explanation, the performance environment is interrupted,
and the user is switched into the context-sensitive explanation
subsystem and conveyed to an appropriate location within the multi-dimensional
space representing the model. From there, the user can assume an
active role in the process of constructing his or her own explanation
by freely exploring the conceptual model and browsing among a wealth
of supporting objects (e.g., audio, video, documents, images, repertory
grids, concept maps, rules, etc.). Users end their browsing as soon
as they are confident that they have constructed an adequate explanation
from the available information. Participatory explanation engages
the user in an interactive process of observation, interpretation,
prediction, and control.
Figure 2. A NUCES session illustrating the notion of participatory explanation.
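One way this context-sensitive hand-off might work is sketched below; the index and function names are our invention, offered only to make the idea concrete, and do not reflect the NUCES source:

    # Assumed design sketch: the performance environment names the
    # concept behind its current conclusion, and the explanation
    # subsystem opens the domain model at the matching node.

    DOMAIN_MODEL_INDEX = {
        # conclusion topic -> (concept map, node) at which browsing starts
        "septal wall motion abnormality": ("ventricular function", "septal wall"),
    }

    def request_explanation(current_conclusion: str) -> None:
        """Interrupt performance and convey the user to a location
        appropriate to the current context."""
        map_name, node = DOMAIN_MODEL_INDEX.get(
            current_conclusion, ("top-level map", "root"))
        print(f"Opening map '{map_name}' at node '{node}'; browse freely.")

    request_explanation("septal wall motion abnormality")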
The navigation problem, an important concern in participatory explanation,
is largely ameliorated by the use of concept maps as a guide to traversing
the logical linkages among clusters of related objects (see the
"Concept Map" window in Figure 2). Concept maps provide an elegant,
easily understood interface to the domain model. A system of concept
maps is interrelated by generalization and specialization relationships
among concepts, which lead to a hierarchical organization. The explanation
subsystem in NUCES provides a window that shows the hierarchical
ordering of the various maps, highlights the current location of
the user in the hierarchy, and permits movement to any other map
by clicking on the desired map in the hierarchy (see the window
"Concept Map Hierarchy" in Figure 2).
Depending on the location of the user in the domain model, he or
she has different options to explore. At each node, the user can
select from a menu of icons as shown in Figure 3. These correspond
to text (a textual document), images, a popup menu of concept maps,
repertory grids or video (implemented using QuickTime) related to
the topic of the selected node. These icons will appear in various
combinations depending on what information is available for a given
concept. The "Concept Map" window in Figure 2 shows how the concepts
(nodes) are populated with the icon menus illustrated in Figure
3. At any time, the user can backtrack by clicking on the "back-arrow"
icon, as shown in the "Concept Map" window.
Figure 3. Close-up of the explanation icons.
This scheme provides the user with great flexibility in navigating through
related concepts, as well as guideposts in moving among the various
sources of information available for a specific concept.
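The icon scheme and back-arrow could be realized along the lines of the sketch below (a reconstruction under stated assumptions, not the NUCES code): a node's menu shows only those resource types that actually have content, and a simple history stack implements backtracking:

    def icon_menu(node_resources: dict) -> list:
        """Icons to display at a node: one per resource type with content."""
        return [kind for kind, items in node_resources.items() if items]

    history = []  # visited nodes; the back-arrow pops this stack

    def visit(node_name: str, node_resources: dict) -> None:
        history.append(node_name)
        print(node_name, "icons:", icon_menu(node_resources))

    def back() -> None:
        """Back-arrow: return to the previously visited node."""
        if len(history) > 1:
            history.pop()
        print("now at:", history[-1])

    visit("first pass study", {"text": ["overview"], "video": [], "grid": ["g1"]})
    visit("wall motion", {"text": [], "video": ["clip"], "grid": []})
    back()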
The main focus of this paper is to introduce participatory explanation
-- a new approach to the design of explanation facilities for knowledge-based
systems. This paper elaborates a constructivist theoretical foundation
for participatory explanation. ICONKAT is presented as an example
of a knowledge acquisition system that directly supports this approach
to explanation. Further, a discussion of the explanation subsystem
of NUCES, an expert system for nuclear cardiology, is presented
as an example of a full-scale expert system that embodies the participatory
approach to explanation.
References

Ausubel, D.P. (1963). The Psychology of Meaningful Verbal Learning. New York: Grune and Stratton.
Ausubel, D.P., Novak, J.D., & Hanesian, H. (1978). Educational Psychology: A Cognitive View (2nd Ed.). New York: Holt, Rinehart and Winston. Reprinted (1986). New York: Werbel and Peck.
Bradshaw, J.M., Ford, K.M., Adams-Webber, J.R., & Boose, J.H. (to appear Jan.
1993). Beyond the repertory grid: New approaches to constructivist
knowledge acquisition tool development. In K.M. Ford & J.M. Bradshaw
(Eds.), special issue on knowledge acquisition of International
Journal of Intelligent Systems. Also to appear in K.M. Ford
& J.M. Bradshaw (Eds.), Knowledge Acquisition as Modeling,
New York: Wiley (1993).
Ford, K.M. & Adams-Webber, J.R. (1991). Knowledge acquisition and constructivist
epistemology. In R.R. Hoffman (Ed.), The Psychology of Expertise:
Cognitive Research and Empirical AI (pp. 121-136). New York: Springer-Verlag.
Ford, K.M., Cañas, A.J., Coffey, J., Andrews, E.J., Schad, N.,
& Stahl, H. (1992). Interpreting functional images with NUCES: Nuclear
Cardiology Expert System. In M.B. Fishman (Ed.), Proceedings
of Fifth Annual Florida AI Research Symposium (pp. 85-90). Ft.
Lauderdale, FL: FLAIRS.
Ford, K.M., Petry, F., Adams-Webber, J.R., & Chang, P.J. (1991). An approach
to knowledge acquisition based on the structure of personal construct
systems. IEEE Transactions on Knowledge and Data Engineering.
Ford, K.M., Bradshaw, J.M., Adams-Webber, J.R., & Agnew, N.M. (to appear
Jan. 1993). Knowledge acquisition as a constructive modeling activity.
International Journal of Intelligent Systems.
Ford, K.M., Stahl, H., Adams-Webber, J.R., Cañas, A.J., Novak,
J.D., & Jones, J.C. (1991). ICONKAT: An integrated constructivist
knowledge acquisition tool. Knowledge Acquisition.
Jackson, P.J. (1986). Introduction to Expert Systems. Reading, MA: Addison-Wesley.
Kelly, G.A. (1955). The Psychology of Personal Constructs, Vols. 1 & 2. New York: W.W. Norton.
Novak, J.D. (1977). A Theory of Education. Ithaca, NY: Cornell University Press.
Novak, J.D. & Gowin, D.B. (1984). Learning How to Learn. Ithaca, NY: Cornell University Press.
Partridge, D. (1987). The scope and limitations of first generation expert
systems. Future Generation Computer Systems, 3, 1-10.
Pope, M. & Gilbert, J. (1985). Constructive science education. In F. Epting
& A.W. Landfield (Eds.), Anticipating Personal Construct Psychology
(pp. 111-127). Lincoln: University of Nebraska Press.
Snyder, D.E., McNeese, M.D., Zaff, B.S., & Gomes, M. (1992). Knowledge acquisition
of tactical air-to-ground mission information using concept mapping.
In Proceedings of the AAAI Cognitive Aspects of Knowledge Acquisition
Session of the Spring Symposium (pp. 228-234). Stanford, CA: AAAI.
Yager, R.R. & Ford, K.M. (1990). A formal constructivist model of knowledge
revision. In M.B. Fishman (Ed.), Proceedings of Third Florida
Artificial Intelligence Research Symposium (pp. 154-158). Cocoa
Beach, FL: FLAIRS.