Augmentics

IHMC researchers Anil Raj and Sergey Drakunov work to improve the interactivity and situation awareness between individuals and the technological systems with which they interact. Their methods use integrated multisensory, multimodal and neural interfaces both to help people understand the behavior and state of a device or system and to enable the automation in those systems to dynamically optimize the assistance it provides. The resulting augmentic solutions can improve human-machine team performance on both simple and complex tasks. We have developed augmentic displays to support aerospace and motorsports applications, dismounted soldiers, diving, teleoperated robots, control of swarms of unmanned aerial vehicles (UAVs) and sensorimotor assistive devices for individuals with impairments of vision, balance or hearing, or with musculoskeletal weakness (using powered and passive wearable exoskeletons). Testing has shown decreased cognitive workload and training time together with increased task performance.

Multimodal interfaces provide or receive information in more than one way using a single sense or sensor type. For example, visual displays can provide high-precision information via high-resolution central vision as well as lower-precision information using low-resolution peripheral vision. Multisensory displays present information to individuals using multiple sensory channels or, like a polygraph, measure multiple kinds of sensors simultaneously. In our work, we typically use visual, audio and tactile displays to present information. These multimodal, multisensory displays can provide high- and low-resolution visual; monaural, stereo and spatial audio; and two- and three-dimensional tactile representations of system data. We measure psychophysiologic changes using neural interfaces, which include non-invasive and invasive measurements of the electrochemical activity of neural tissue (for example, through brainwave recordings), the physiologic effects of that activity (for example, changes in heart rate), or physical changes (for example, looking in a particular direction), including movement of control input devices and verbal communications processed through natural language processing (NLP) with TRIPS.
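To make the idea of a multisensory, multimodal display concrete, the following is a minimal sketch of how one piece of system data might be rendered simultaneously on visual, audio and tactile channels. The datum, channel encodings and tactor layout are hypothetical illustrations, not the actual IHMC display implementations.

```python
# Hypothetical sketch: rendering one system datum across several sensory channels.
# Names, encodings and the 8-tactor belt layout are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BearingDatum:
    """Example system datum: a direction of interest, in degrees from north."""
    degrees: float

def render_visual(datum: BearingDatum) -> str:
    # High-precision central vision: exact numeric readout.
    return f"visual: bearing {datum.degrees:.1f} deg"

def render_audio(datum: BearingDatum) -> str:
    # Spatial audio: pan a cue toward the bearing (coarse left/right here).
    side = "right" if 0 < datum.degrees < 180 else "left"
    return f"audio: tone panned {side}"

def render_tactile(datum: BearingDatum) -> str:
    # Tactile belt: activate the tactor nearest the bearing.
    tactor = round(datum.degrees / 45.0) % 8
    return f"tactile: tactor {tactor} active"

datum = BearingDatum(degrees=135.0)
for render in (render_visual, render_audio, render_tactile):
    print(render(datum))
```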

Creating an effective augmentic solution requires integration of the multisensory, multimodal and neural interfaces, along with contextual information about both the task or system and the current environmental conditions. We often apply Cognitive Task Analysis (CTA) and Concept Maps (Cmaps), built with CmapTools, to specific domains in order to model work practices and create knowledge models that identify which information to present, and when, to maximize user understanding. By using sensors that track real-time changes in the optical, acoustic, or vibratory environment, as well as location, motion and movement, the system can infer the environmental context of the user's actions, decisions and responses.
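The sketch below illustrates the kind of environmental context inference described above: mapping raw sensor readings to coarse labels that the interface can act on. The sensor names, units and thresholds are assumptions chosen for the example, not the actual IHMC context model.

```python
# Illustrative sketch: inferring environmental context from simple sensor readings.
# Thresholds and units are assumed values for demonstration only.
def infer_context(ambient_noise_db: float, illuminance_lux: float,
                  vibration_g: float) -> dict:
    """Map raw readings to coarse context labels the interface can act on."""
    return {
        "audio_usable": ambient_noise_db < 85.0,    # loud environments mask audio cues
        "visual_usable": illuminance_lux > 10.0,    # low light degrades visual displays
        "tactile_usable": vibration_g < 0.5,        # strong vibration masks tactors
    }

# Example: a loud, well-lit, low-vibration environment favors visual or tactile channels.
print(infer_context(ambient_noise_db=95.0, illuminance_lux=300.0, vibration_g=0.1))
```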

This context awareness allows adaptive algorithm components in the augmentic interface to dynamically reallocate data flow among the available sensory interfaces based on estimates of various dimensions of cognitive state, maximizing the likelihood that the user understands the system information and improving performance of the human-machine team. By monitoring human actions and psychophysical measures in context, the augmentic system can also reallocate tasking between the user and the automation. During times of high workload or cognitive stress, the automation can adaptively increase its role to offload tasks from the user. Conversely, when the user becomes drowsy, distractible or bored due to lack of stimulation, the adaptive automation can modulate or shift tasks back to the user to keep him or her engaged and avoid automation surprises.
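A minimal sketch of this adaptive task reallocation follows. It assumes an upstream cognitive-state estimator produces a scalar workload score in [0, 1]; the thresholds, task names and one-task-per-update policy are illustrative assumptions rather than the actual IHMC adaptation logic.

```python
# Minimal sketch of adaptive task reallocation driven by an estimated workload score.
# The thresholds and task names are hypothetical, for illustration only.
from dataclasses import dataclass, field

HIGH_WORKLOAD = 0.75   # above this, shift tasks to the automation
LOW_WORKLOAD = 0.25    # below this, shift tasks back to keep the user engaged

@dataclass
class TaskAllocator:
    user_tasks: list = field(default_factory=lambda: ["navigate", "monitor sensors"])
    automation_tasks: list = field(default_factory=lambda: ["hold heading"])

    def update(self, workload: float) -> None:
        """Reallocate at most one task per update based on the estimated workload."""
        if workload > HIGH_WORKLOAD and self.user_tasks:
            # Offload work from an overloaded user to the automation.
            self.automation_tasks.append(self.user_tasks.pop())
        elif workload < LOW_WORKLOAD and self.automation_tasks:
            # Hand a task back to an under-stimulated user to avoid disengagement.
            self.user_tasks.append(self.automation_tasks.pop())

allocator = TaskAllocator()
for score in (0.8, 0.9, 0.1):   # simulated workload estimates over time
    allocator.update(score)
    print(score, allocator.user_tasks, allocator.automation_tasks)
```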

To connect these various components, IHMC has developed a flexible integration framework that extends the KAoS ontology-based agent infrastructure. This adaptive multi-agent integration (AMI) architecture allows the heterogeneous elements (e.g., sensors, displays, algorithms, automation, environmental data) to share high-volume, high-data-rate signals in a cohesive fashion, with reliable, predictable, deterministic, real-time (at psychophysiologic time scales) performance. We have developed a control-theoretic machine learning approach to rapidly learn the state of the different components in the human-machine system and to identify changes in state, driving the adaptation of the augmentic system and improving human-machine cooperative control of complex systems.
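As a rough illustration of how detected state changes can drive adaptation, the sketch below applies a generic CUSUM-style change detector to a monitored signal (for example, a psychophysiologic measure) and flags the points where adaptation would be triggered. This is a textbook-style detector written for this article, with assumed drift and threshold values; it is not the control-theoretic machine learning approach described above.

```python
# Illustrative CUSUM-style detector for state changes in a monitored signal.
# Drift and threshold values are assumptions; this is not IHMC's actual learner.
class ChangeDetector:
    def __init__(self, drift: float = 0.05, threshold: float = 1.0):
        self.mean = None        # running estimate of the signal's baseline
        self.cusum = 0.0        # accumulated deviation from the baseline
        self.drift = drift      # slack that ignores small fluctuations
        self.threshold = threshold

    def update(self, sample: float) -> bool:
        """Return True when accumulated deviation indicates a state change."""
        if self.mean is None:
            self.mean = sample
            return False
        self.cusum = max(0.0, self.cusum + abs(sample - self.mean) - self.drift)
        self.mean = 0.95 * self.mean + 0.05 * sample   # slow baseline update
        if self.cusum > self.threshold:
            self.cusum = 0.0
            return True
        return False

detector = ChangeDetector()
signal = [0.5] * 20 + [0.9] * 10   # simulated step change in the monitored measure
changes = [i for i, x in enumerate(signal) if detector.update(x)]
print(changes)                     # sample indices where adaptation would fire
```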