Multi-modal Interface for Natural Operator Teaming with Autonomous Robots (MINOTAUR)
The Armed Forces need unmanned ground vehicles (UGVs) that can autonomously accompany a Warfighter or vehicle during maneuvers through complex environments. These UGVs will help solve logistical problems by transporting the equipment and supplies that individual soldiers must now carry in their backpacks. A common problem for unmanned platforms is the need for active remote control, or teleoperation, even for mundane tasks such as long-distance travel. Teleoperation is undesirable in these situations because it demands “heads down” attention from Warfighters, which both reduces their situational awareness and causes fatigue. It also makes it difficult for the Warfighter to focus on other tasks, such as watching for threats or walking without tripping over obstacles underfoot.
While various semi-autonomous leader-follower prototypes have been developed, current systems remain unreliable, cumbersome, and unable to adapt to changing conditions. In practice, the reduction in the Warfighter’s physical burden is traded for increased cognitive workload and decreased trust in the system.
The Charles River Analytics Solution
Charles River Analytics and our teammate, 5D Robotics, developed an intuitive soldier-machine interface for controlling robotic leader-follower systems in small team operations. The Multi-modal Interface for Natural Operator Teaming with Autonomous Robots, or MINOTAUR, fuses multiple proven leader-tracking and robot control technologies to provide a reliable, hands-free interface for Warfighters operating in challenging environments. The system integrates multiple complementary leader-follower technologies to allow operation in inclement weather, poor lighting, and non-line-of-sight scenarios. It draws on a novel model of operator intent that enables context-sensitive control and feedback based on multiple asynchronous sensor inputs. The system includes a lightweight, wearable operator control unit for rapid control and assessment of a robotic teammate.
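To illustrate the idea of combining asynchronous inputs from complementary tracking modalities, the sketch below shows one simple way such fusion could work. This is a hypothetical example, not the MINOTAUR implementation: the sensor names, confidence values, and exponential recency weighting are all assumptions made for illustration.

```python
import math

def fuse_leader_estimates(readings, now, time_constant=1.0):
    """Fuse asynchronous leader-position estimates from several trackers.

    Each reading is (x, y, confidence, timestamp). Readings are weighted
    by confidence and decayed by age, so the follower keeps a usable
    estimate when any single sensor (e.g., vision in poor lighting)
    drops out or lags. Returns a fused (x, y), or None if no reading
    carries usable weight.
    """
    wx = wy = total = 0.0
    for x, y, conf, ts in readings:
        age = max(0.0, now - ts)
        weight = conf * math.exp(-age / time_constant)  # stale readings decay
        wx += weight * x
        wy += weight * y
        total += weight
    if total < 1e-9:
        return None
    return (wx / total, wy / total)

# Example: a fresh, high-confidence ranging fix and an older camera fix
# disagree slightly; the fused estimate leans toward the fresher reading.
now = 100.0
readings = [
    (2.0, 0.0, 0.9, 99.9),   # hypothetical UWB ranging fix, 0.1 s old
    (2.5, 0.2, 0.6, 98.5),   # hypothetical vision tracker fix, 1.5 s old
]
print(fuse_leader_estimates(readings, now))
```

The decay weighting is one design choice among many; a fielded system would more likely use a proper state estimator (e.g., a Kalman filter) over the same asynchronous inputs.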
5D Robotics’ Segway RMP following a human operator through a shanty town at the Intuitive Robotic Operator Control (IROC) Challenge at the Muscatatuck Urban Training Center
MINOTAUR will let operators use voice and hand signals to send messages to the robot, and it may also be used to communicate with, or relay directions to, other squads or assets. The robotic platform can thus become a true support agent in a human-robot team, requiring direct teleoperation only for specific tasks that demand the most human skill, such as disarming or detonating an IED. The system’s gesture recognition capability will facilitate the deployment of UGVs as squad support platforms. Instead of consuming a squad member’s full cognitive capacity to directly control a UGV during transit or while on patrol, MINOTAUR will enable natural and reliable control of mule-like UGV systems. This reduces the cognitive burden on Warfighters in the field, increases trust in human-robot teams, accelerates the adoption of useful mule-like robots, and keeps Warfighters out of harm’s way.
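A minimal sketch of how voice and gesture commands might both map onto a shared set of follower behaviors appears below. The command tables, token names, and dispatch function are hypothetical placeholders, not the fielded MINOTAUR interface; the point is that either modality can drive the same hands-free control vocabulary.

```python
# Hypothetical command tables: recognizer outputs -> robot commands.
VOICE_COMMANDS = {
    "follow me": "FOLLOW",
    "halt": "STOP",
    "return to base": "RTB",
}

GESTURE_COMMANDS = {
    "beckon": "FOLLOW",
    "raised fist": "STOP",   # the standard halt hand signal
    "point away": "RTB",
}

def interpret(modality, token):
    """Map a recognized voice phrase or gesture to a robot command.

    Returns the command string, or None if the token is unrecognized,
    leaving the robot to continue its current behavior.
    """
    table = VOICE_COMMANDS if modality == "voice" else GESTURE_COMMANDS
    return table.get(token.lower())

print(interpret("voice", "Halt"))      # STOP
print(interpret("gesture", "beckon"))  # FOLLOW
```

Routing both modalities through one command set is what lets the interface degrade gracefully: if noise drowns out speech, the same order can be given by hand signal.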
View of the world from the perspective of the MINOTAUR system. Both gesture and voice-based navigation commands are supported.
This material is based upon work supported by the United States Army under Contract No. W56HZV-13-C-0286. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Army.
UNCLASSIFIED: Distribution Statement A. Approved for public release.