TRIPS Demonstration Videos

Here is a list of some TRIPS demonstration systems we have built over the years. Each system demonstrates specific capabilities. While each application required some domain-specific customization, the underlying capabilities (natural language understanding, collaborative problem solving, mixed-initiative interaction, the use of multiple modalities for input and output, dynamic conceptual learning, and so on) are all supported by the same general framework for building dialogue systems.

Each entry below gives the year, the demonstration video, and the relevant references, followed by a description of the system.

1998: TRIPS_Pacifica [1]
This six-and-a-half-minute movie shows TRIPS assisting a user in constructing a plan to evacuate the residents of the fictitious island of Pacifica ahead of an approaching hurricane. Note in particular TRIPS’ support for human decision-making, including hypothetical (“what-if”) analyses and both top-down and bottom-up planning. TRIPS also supports intelligent plan revision in response to changes in the situation.

2002: TRIPS_Chester [3]
This video of the Chester medication advisor demonstrates how the system uses intention recognition and system initiative to engage in natural and helpful dialogue. The system uses an avatar to communicate its state to the user (e.g., idle, attentive, thinking, speaking).

2003: TRIPS_Lou [2]
This movie shows TRIPS being used to build and execute plans in collaboration with semi-autonomous robotic agents. The system used multiple modalities, including a video display showing what the underwater robot was observing. The user could use language not only to communicate about plans, give commands, and obtain information from the robots, but also to manipulate the GUI (alongside other modalities such as mouse and keyboard).

2005: TRIPS_Plow_v1 [4,5,6]
The PLOW system performs collaborative learning and execution. It can learn a parameterized task from a single demonstration combined with step-by-step instruction, much as people teach one another. It also uses language to learn robust representations for identifying objects of interest on webpages (a sketch of this idea follows the list below). This demo shows a session of learning and executing a procedure for buying a book online.

2006: TRIPS_Plow_v2 [4,5,6]
An overview of PLOW demonstrating the use of active learning for one-step learning of iterations, the use of corrections, and inference over knowledge gathered from webpages.

2007: TRIPS_CoOps [7]
This version of the system showed natural and effective use of TRIPS to support complex human-robot teamwork. Two users communicate with five robots via their own TRIPS-based assistants; one user is mobile and wears a head-mounted display. The system has a rich multi-modal interface and can interpret deictic references, point-of-view descriptions, and low-level teleoperation commands, as well as high-level commands and qualitative attributes. It can dynamically learn and ground new descriptions of physical objects, and use them for reference during the planning and execution of the task.

2008: TRIPS_Plow_v3 [4,5,6]
This version of PLOW demonstrates learning complex tasks via task composition: tasks (which may be shared among users) can be retrieved from natural descriptions of goals, and PLOW then combines their inputs and outputs appropriately (a sketch of this chaining idea follows the list below). This version also shows how the conversation can be carried out via email or via a lightweight chat interface.

2009: TRIPS_PLOT [9]
PLOT was a version of the PLOW system that worked on a text-based terminal interface. This video demonstrates learning collaborative procedures, in which some steps may need to be executed by the user while others can be executed automatically by the system. A small-scale evaluation showed that Navy corpsmen with no programming experience could, with very little training, successfully teach the system a procedure for making appointments.

2009: TRIPS_Cardiac [8]
CARDIAC is a prototype of an intelligent conversational assistant that provides health monitoring for chronic heart failure patients. CARDIAC supports user initiative through its ability to understand natural language and connect it to intention recognition. The system is designed to understand information that arises spontaneously in the course of the interview, such as when the patient gives more detail than necessary to answer a question. CARDIAC was intended to demonstrate the possibility of developing cost-effective, customizable, automated in-home conversational assistants that help patients manage their care and monitor their health using natural language. Note: this demo is audio only, and is a segment of an actual session with a user.

2010: TRIPS_Sense
Sense was a follow-up to PLOW that showcased the system’s ability to reason about goals and find tasks that could be chained to accomplish them with minimal involvement from the user (the chaining sketch below applies here as well). It also demonstrated learning of user- and task-specific ontologies, so that, for example, the same procedure for finding small houses could be used by two users even when they have different ideas of what counts as a small house.

2012: TRIPS_Tegus [10]
Tegus (ThE Geospatial language Understanding System) demonstrated point and path geolocation from natural language descriptions of street-level features (“tell me what you see and I’ll tell you where you are”); a sketch of the underlying idea follows this list.

2014: ASMA [11]
The ASMA (Asthma Self-Management Aid) system was designed to interact with adolescents suffering from asthma via unconstrained text messages on mobile phones. Its goal was to facilitate symptom monitoring, treatment adherence, and adolescent–parent partnership. The system was developed in partnership with the School of Nursing at the University of Rochester, where it is currently undergoing a second phase of evaluations with real patients.

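To make the PLOW entries above more concrete, here is a minimal, hypothetical sketch of identifying a web object from a natural language description: it scores HTML elements by keyword overlap between the description and the elements’ attributes. PLOW’s actual learned representations are far richer (see [4,6]); the page snippet, function names, and scoring rule here are invented purely for illustration.

    # Hypothetical illustration only; not PLOW's actual method (see refs [4,6]).
    from html.parser import HTMLParser

    class ElementCollector(HTMLParser):
        """Collect each start tag together with a bag of words from its attributes."""
        def __init__(self):
            super().__init__()
            self.elements = []  # list of (tag, attrs, word_bag)

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            words = set()
            for value in attrs.values():
                if value:
                    words.update(value.lower().replace("-", " ").split())
            self.elements.append((tag, attrs, words))

    def find_object(html, description):
        """Return the element whose attribute words best overlap the description."""
        collector = ElementCollector()
        collector.feed(html)
        desc_words = set(description.lower().split())
        return max(collector.elements, key=lambda e: len(e[2] & desc_words))

    page = ('<form><input name="title" placeholder="search by title"/>'
            '<input type="submit" value="go"/></form>')
    print(find_object(page, "the search by title box"))  # picks the title input

A raw keyword-overlap heuristic like this is brittle, which is the kind of limitation the dialogue-based learning of object descriptions in [4] is meant to address.
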
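Next, a minimal sketch of the task-composition idea behind the 2008 PLOW demo and Sense: backward-chaining from a goal by matching task outputs to task inputs. The task library, names, and types below are invented for illustration; the real systems retrieve tasks from natural language descriptions of goals.

    # Hypothetical illustration of input/output chaining; the task library is invented.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Task:
        name: str
        inputs: frozenset   # types this task consumes
        outputs: frozenset  # types this task produces

    LIBRARY = [
        Task("find-book", frozenset({"book-title"}), frozenset({"book-url"})),
        Task("buy-item", frozenset({"book-url", "credit-card"}), frozenset({"receipt"})),
        Task("email-receipt", frozenset({"receipt"}), frozenset({"confirmation"})),
    ]

    def plan(goal_type, have, library=LIBRARY):
        """Backward-chain from goal_type; 'have' is what the user can supply directly.

        Returns an ordered list of tasks, or None if the goal is unreachable.
        (No cycle detection; fine for this tiny acyclic library.)
        """
        if goal_type in have:
            return []
        for task in library:
            if goal_type in task.outputs:
                steps = []
                for needed in task.inputs:
                    sub = plan(needed, have, library)
                    if sub is None:
                        break
                    steps.extend(sub)
                else:
                    return steps + [task]
        return None

    steps = plan("confirmation", have={"book-title", "credit-card"})
    print([t.name for t in steps])  # ['find-book', 'buy-item', 'email-receipt']
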
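Finally, a toy sketch of the geolocation-by-description idea behind Tegus: intersect the sets of map locations consistent with each street-level feature the user reports seeing. The grid and feature names are invented; the real system interprets full natural language descriptions against geospatial databases [10].

    # Toy illustration only; not the Tegus algorithm (see ref [10]).
    TOY_MAP = {
        (0, 0): {"church", "traffic light"},
        (0, 1): {"gas station"},
        (1, 0): {"church"},
        (1, 1): {"gas station", "traffic light"},
    }

    def locate(observed_features):
        """Return the map cells consistent with everything the user reports seeing."""
        candidates = set(TOY_MAP)
        for feature in observed_features:
            candidates &= {cell for cell, feats in TOY_MAP.items() if feature in feats}
        return candidates

    print(locate({"gas station", "traffic light"}))  # {(1, 1)}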

References

  1. Ferguson, G., and Allen, J. (1998). TRIPS: An Intelligent Integrated Problem-Solving Assistant. In Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98), 567-573. Menlo Park, CA: AAAI Press. [pdf]
  2. Chambers, N., Allen, J., Galescu, L., and Jung, H. (2005). A Dialogue-Based Approach to Multi-Robot Team Control. In L. E. Parker, F. E. Schneider, and A. C. Schultz (Eds.), Multi-Robot Systems. From Swarms to Intelligent Automata, Volume III, pp. 257-262. Springer Netherlands. [doi:10.1007/1-4020-3389-3_21]
  3. Allen, J., Ferguson, G., Blaylock, N., Byron, D., Chambers, N., Dzikovska, M., Galescu, L., and Swift, M. (2006). Chester: Towards a Personal Medication Advisor. Journal of Biomedical Informatics 39(5):500-513. Elsevier. [pdf]
  4. Chambers, N., Allen, J., Galescu, L., Jung, H., and Taysom, W. (2006). Using Semantics to Identify Web Objects. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-06), Boston, MA. AAAI Press. [pdf]
  5. Allen, J., Chambers, N., Ferguson, G., Galescu, L., Jung, H., Swift, M., and Taysom, W. (2007). PLOW: A Collaborative Task Learning Agent. In Proceedings of the Twenty-Second Conference on Artificial Intelligence (AAAI-07), Vancouver, Canada, July 22-26. Outstanding paper award winner. [pdf]
  6. Jung, H., Allen, J., Galescu, L., Chambers, N., Swift, M., and Taysom, W. (2008). Utilizing Natural Language for One-Shot Task Learning. Journal of Logic and Computation 18(3):475-493. Oxford University Press. [doi:10.1093/logcom/exm071]
  7. Johnson, M., Intlekofer Jr., K., Jung, H., Bradshaw, J. M., Allen, J., Suri, N., and Carvalho, M. (2008). Coordinated Operations in Mixed Teams of Humans and Robots. In Proceedings of the First IEEE Conference on Distributed Human-Machine Systems (DHMS 2008), Athens, Greece. [pdf]
  8. Ferguson, G., Allen, J., Galescu, L., Quinn, J., and Swift, M. (2009). CARDIAC: An Intelligent Conversational Assistant for Chronic Heart Failure Patient Health Monitoring. In AAAI Fall Symposium on Virtual Healthcare Interaction (VHI’09). [pdf]
  9. Blaylock, N., de Beaumont, W., Galescu, L., Jung, H., Allen, J., Ferguson, G., and Swift, M. (2012). Play-by-Play Learning for Textual User Interfaces. In McCarthy, P. M., and Boonthum-Denecke, C. (Eds.), Applied Natural Language Processing: Identification, Investigation, Resolution, pp. 351-364. IGI Global. [doi:10.4018/978-1-60960-741-8]
  10. Blaylock, N., Allen, J., de Beaumont, W., Galescu, L., and Jung, H. (2012). Street-Level Geolocation from Natural Language Descriptions. Traitement Automatique des Langues 53(2):177-205. [pdf]
  11. Rhee, H., Allen, J., Mammen, J., and Swift, M. (2014). Mobile Phone-Based Asthma Self-Management Aid for Adolescents (mASMAA): A Feasibility Study. Patient Preference and Adherence 8:63-72. [doi:10.2147/PPA.S53504]