Humanoid Robots as Human Avatars
There are many scenarios where human skills are the ideal response, but actual human presence is simply too dangerous. Developing robots to function as human surrogates offers an ideal compromise: the robot acts as a human avatar in an otherwise inhospitable environment. Such robots can then be used for everything from emergency response to colonizing Mars.
While the human form is not required for robots to function as human avatars – for example, a Double Bot provides good telepresence – it does have major advantages. The humanoid form maximizes the robot’s utility and mobility in human-centered environments, allows it to use human tools, and provides a direct map to functioning as a human surrogate. However, planning, control, and interfacing with humanoids pose significant challenges, making the development of human avatars a daunting task.
To tackle this problem, we are building on our past experiences such as the DARPA Robotics Challenge, and actively pursuing four main research thrusts: mobility, manipulation, planning, and interfacing.
Humanoid Mobility and Balance
To get to all the places they may need to go, these robots must be highly mobile. We have been focusing on improving humanoid mobility, including walking across line contacts, recovering from pushes, and using angular momentum to help balance. To do this, we are relying on many proven walking concepts developed at IHMC, as well as exploring new techniques made possible by advances in hardware and computing.
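The push-recovery work described above builds on the capture point, a balance concept developed at IHMC. The sketch below is a minimal illustration using the linear inverted pendulum model; the function name and interface are our own for this example, not IHMC's controller code.

```python
import math

def capture_point(com_pos, com_vel, com_height, g=9.81):
    """Instantaneous capture point of a linear inverted pendulum model.

    Stepping to this ground point lets the robot come to a stop.
    com_pos, com_vel: horizontal CoM position/velocity (x, y), in m and m/s.
    com_height: CoM height above the ground, in m.
    """
    omega = math.sqrt(g / com_height)  # natural frequency of the pendulum (1/s)
    return tuple(p + v / omega for p, v in zip(com_pos, com_vel))

# A forward push increases forward CoM velocity, which moves the capture
# point ahead of the feet, telling the robot where to step to recover:
icp = capture_point(com_pos=(0.0, 0.0), com_vel=(0.5, 0.0), com_height=1.0)
```

The key property is that the capture point moves with the square root of CoM height over gravity, so a taller stance gives the robot more time, and a longer reachable step, to recover from the same push.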
While simple planning approaches, such as walking in a straight line or having an operator select individual footholds, have worked well so far, they either cannot handle complicated terrain or require significant downtime while the operator plans the path. Instead, we are focusing on intelligent, automated footstep planning to tackle these complex scenarios, allowing the robot to automatically walk to a desired location over rough terrain, around obstacles, and through occlusions.
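As a rough illustration of automated footstep planning, the sketch below runs A* search over a grid of candidate footholds, treating obstacle cells as untraversable. A real planner would also score foothold quality, step reachability, and terrain slope; the grid abstraction and all names here are illustrative, not IHMC's planner.

```python
import heapq

def plan_footsteps(start, goal, obstacles, grid_size=20):
    """A* search over a grid of candidate footholds.

    Cells in `obstacles` are untraversable; each step moves to one of the
    8 neighboring cells. Returns the foothold sequence from start to goal,
    or None if the goal is unreachable.
    """
    def h(c):  # straight-line distance, an admissible heuristic
        return ((c[0] - goal[0]) ** 2 + (c[1] - goal[1]) ** 2) ** 0.5

    frontier = [(h(start), start)]
    came_from, cost = {start: None}, {start: 0.0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nxt = (cur[0] + dx, cur[1] + dy)
                if nxt == cur or nxt in obstacles:
                    continue
                if not (0 <= nxt[0] < grid_size and 0 <= nxt[1] < grid_size):
                    continue
                step = (dx * dx + dy * dy) ** 0.5  # 1 straight, ~1.41 diagonal
                if nxt not in cost or cost[cur] + step < cost[nxt]:
                    cost[nxt] = cost[cur] + step
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (cost[nxt] + h(nxt), nxt))
    return None

path = plan_footsteps((0, 0), (5, 5), obstacles={(2, 2), (3, 3)})
```

The same search structure carries over to real footstep planning; what changes is the action set (feasible steps for the robot's kinematics rather than grid neighbors) and the cost function (terrain quality rather than pure distance).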
Whole-Body Manipulation
The humanoid form has a huge potential workspace: it can use its entire body to reach for things, and it can move to a new location to get a better grasp on objects. However, exploiting all of these degrees of freedom is very hard, and robots typically use only the arm joints for manipulation. To make our robots as effective as possible, we are working on using the entire body, including stepping, to maximize the effective workspace.
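One simple way to see how stepping enlarges the workspace: if a target lies beyond arm reach, plan a pelvis displacement that brings it back inside. The sketch below is purely geometric, assumes a single fixed reach radius, and uses names of our own choosing; it is not the whole-body controller itself.

```python
import math

ARM_REACH = 0.8  # assumed maximum arm reach from the pelvis (m)

def plan_whole_body_reach(pelvis_xy, target_xy, reach=ARM_REACH):
    """Decide whether a target is reachable in place, or compute a step.

    If the target lies outside the arm workspace, return a new pelvis
    position one reach-length short of the target along the approach
    direction, so the arm can finish the motion after stepping.
    Returns None when no step is needed.
    """
    dx, dy = target_xy[0] - pelvis_xy[0], target_xy[1] - pelvis_xy[1]
    dist = math.hypot(dx, dy)
    if dist <= reach:
        return None  # reachable without stepping
    scale = (dist - reach) / dist
    return (pelvis_xy[0] + dx * scale, pelvis_xy[1] + dy * scale)

step = plan_whole_body_reach((0.0, 0.0), (1.5, 0.0))  # one reach short of target
```

A full system would replace the single reach radius with the arm's actual reachable set and fold balance constraints into the choice of step, but the decomposition (step to reposition the workspace, then reach) is the same.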
Human-Robot Interfacing
Teaming robots with human partners in a system that optimizes their complementary strengths can enhance their performance. For instance, a human at a remote monitoring post, surveying the scene through the robot's "eyes," might be called on to make rapid judgments a robot is poor at, such as quickly determining the best path through a debris field. The robot, equipped with precise scanning tools, can quickly provide accurate measurements of distance or size in situations where humans could only make rough estimates. However, this requires appropriately interfacing the human operator with the machine. We are exploring both traditional computer-based human interfaces, such as the ones we used during the DRC, and methods made possible by new Virtual Reality technology.
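As a small example of the kind of precise measurement a robot can hand back to a remote operator, the sketch below computes the axis-aligned extent of a scanned object from a handful of lidar returns. The data and function name are illustrative.

```python
def bounding_extent(points):
    """Extent (width, depth, height) of a scanned object.

    `points` is an iterable of (x, y, z) lidar returns in meters; the
    result is the axis-aligned bounding-box size, a quick size estimate
    an operator could request through the interface instead of eyeballing
    the scene through the robot's cameras.
    """
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

scan = [(0.1, 0.0, 0.0), (0.9, 0.2, 0.0), (0.5, 0.1, 0.4)]
extent = bounding_extent(scan)  # roughly (0.8, 0.2, 0.4) meters
```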
This work was funded in part by the National Science Foundation (NSF) through the National Robotics Initiative, NASA Grant No. NNX12AP97G.