Context-aware Hands-free Multimodal Interfaces for Medical Information Documentation
To be presented at the Military Health System Research Symposium (MHSRS), Kissimmee, FL (August 2018)
Background: Military and civilian medical personnel across all echelons of medical care play a critical role in evaluating, caring for, and treating Warfighters. Accurate medical documentation is critical to patient outcomes, yet it remains a complex, time-consuming task poorly suited to the operational context. Traditional electronic health record systems are plagued by user interfaces (UIs) that are overly complex and unwieldy. These eyes- and hands-intensive UIs force the provider to break away from active patient engagement at inopportune, highly stressful, and critical times, adding unnecessary physical and cognitive workload to an already overburdened medical provider. Because these tools require focused attention and protracted manual interaction, many providers cannot capture key information as it is ascertained and processed, or while performing complex procedures, and must instead rely on memory when documenting medical information at a later time. This post-care documentation is inefficient, labor intensive, and error-prone due to the challenges of prospective memory (remembering to do something in the future) and recall (Holbrook et al., 2005; Gawande et al., 2003).
Methods: To address this need, we designed, prototyped, and demonstrated a set of multimodal heads-up, hands-free interfaces and context-aware tools that support streamlined medical information capture. These interfaces enable providers to flexibly capture data across modalities, with real-time prompting or semi-automated data capture support. Providers can augment, correct, or otherwise flag semi-automatically captured information through hands-free interaction techniques and context-sensitive interfaces; relevant information is earmarked for later, making post-care review quick and easy once the provider has more time for dedicated interaction and review.
In designing these tools, we applied mature, grounded cognitive systems engineering approaches. We analyzed the work domain to understand medical care provider workflows, identified key tasks and electronic health record interaction requirements to be supported, and assessed current capabilities to drive the design of hands-free multimodal user interfaces and support tools. This analysis focused on the Tactical Combat Casualty Care Card (TCCC card) to identify key medical information for capture. We also investigated medical documentation across levels of care and requirements for effective patient handoffs between medical care providers. The outputs of this analysis drove the design of the multimodal hands-free interfaces and context-aware support tools.
Results: Using outputs from the work domain analysis and a user-centered iterative design approach, we designed a set of multimodal interfaces and context-aware support tools that more efficiently and effectively support information capture activities of medical care personnel across operational contexts and echelons of care. These interfaces included augmented-reality wearable glasses for visual image capture and heads-up peripheral information display, advanced natural language processing technologies, and a range of voice-based input methods for natural and more robust audio capture in noisy contexts. These designs will inform extended design, development, and integration work and directly support demonstration and evaluation under follow-on efforts.
In designing voice-based methods, we identified Systemic Functional Grammar (SFG; Halliday, 2003) as a means to support natural, conversational, voice-based interactions. SFG addresses the limits of traditional prescriptive grammars and their serial, often time-consuming interaction flows. Instead of limiting the medical care provider to a few key command phrases, an SFG-based approach leverages efficient natural language processing methods for context-aware semantic analysis, inferring speaker intent from disfluent, noisy, or partial voice input common in operational environments. This robustness to nonstandard syntax and partial input makes the approach well suited to the tactical combat casualty care documentation setting.
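As an illustrative sketch only (not the project's actual implementation of SFG), intent inference over disfluent voice input can be approximated by scanning for known semantic units anywhere in the utterance rather than requiring fixed command phrases. The field names and vocabulary below are hypothetical:

```python
import re

# Hypothetical TCCC-card vocabulary; field names and phrases are
# illustrative assumptions, not the project's actual schema.
INTERVENTIONS = ["tourniquet", "chest seal", "morphine", "iv"]
LOCATIONS = ["left leg", "right leg", "left arm", "right arm", "chest"]

def _find(phrases, text):
    # Match a known phrase anywhere in the input, on word boundaries.
    for phrase in phrases:
        if re.search(r"\b" + re.escape(phrase) + r"\b", text):
            return phrase
    return None

def infer_intent(utterance: str) -> dict:
    """Map a free-form, possibly disfluent utterance to candidate fields.

    Rather than requiring a fixed command phrase, we scan for known
    semantic units anywhere in the input, tolerating fillers ("uh"),
    repetitions, and nonstandard word order.
    """
    text = re.sub(r"\b(uh|um|like)\b", " ", utterance.lower())
    intent = {}
    if (hit := _find(INTERVENTIONS, text)):
        intent["intervention"] = hit
    if (hit := _find(LOCATIONS, text)):
        intent["location"] = hit
    if (m := re.search(r"(\d+)\s*(?:milligrams|mg)\b", text)):
        intent["dose_mg"] = int(m.group(1))
    return intent

# Disfluent, repeated input still resolves to the same fields:
print(infer_intent("uh tourniquet, tourniquet on the, the left leg"))
# {'intervention': 'tourniquet', 'location': 'left leg'}
```

A production SFG parser would model clause structure and context rather than spot keywords, but even this naive sketch shows why free-form matching tolerates disfluency better than a fixed command grammar.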
We also designed a set of context-aware workflow support tools that increase the efficiency and effectiveness of medical care providers. Typical information capture systems require a serial, step-by-step, highly prescribed process of data input. Instead, we designed interaction methods that support: (1) non-linear, context-based information capture; (2) automated support for information capture; and (3) post-care documentation support tools. To minimize the documentation burden on the medical provider, we designed methods for semi-automated information capture, in which the user can augment and correct captured data in real time or later during post-care documentation. Our interface methods support partial input, allowing the provider to verify suggested data and to recall those capture points from context. During post-care review, related image, audio, text, and other files are integrated into a holistic review, so the provider can view multiple forms of data (text, audio recordings, videos, stills) for supporting context and be cued to key events. We also identified cutting-edge computer vision techniques to support semi-automated information capture, such as parsing a patient's information from a dog tag and leveraging pose estimation for wound localization.
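The non-linear, multimodal capture and post-care review flow described above can be sketched as a simple record structure. This is a hypothetical illustration under our own assumptions; the field names, modality labels, and file reference are not the project's actual data model:

```python
from dataclasses import dataclass, field
from typing import Optional
import time

@dataclass
class Entry:
    value: str
    source: str                      # e.g. "voice", "vision", "manual"
    confirmed: bool = False          # semi-automated entries start unconfirmed
    timestamp: float = field(default_factory=time.time)
    media_ref: Optional[str] = None  # link back to audio/image for review

class CasualtyRecord:
    """Fields may arrive in any order, from any modality; unconfirmed
    semi-automated entries are earmarked for post-care review."""

    def __init__(self):
        self.entries: dict = {}

    def capture(self, field_name, value, source, media_ref=None):
        # Non-linear: any field, any time, any modality.
        self.entries[field_name] = Entry(value, source, media_ref=media_ref)

    def confirm(self, field_name):
        # Provider verifies a suggested/captured value, now or later.
        self.entries[field_name].confirmed = True

    def pending_review(self):
        # Everything the provider still needs to verify post-care.
        return {k: e for k, e in self.entries.items() if not e.confirmed}

rec = CasualtyRecord()
# "dogtag_0142.jpg" is an invented file name for illustration.
rec.capture("name", "DOE, JOHN", source="vision", media_ref="dogtag_0142.jpg")
rec.capture("intervention", "tourniquet left leg", source="voice")
rec.confirm("intervention")
print(sorted(rec.pending_review()))  # ['name']
```

Keeping a media reference on each entry is what makes the holistic post-care review possible: each unconfirmed field can cue the provider back to the audio clip or image from which it was captured.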
To support the rapid prototyping and demonstration of these hands-free multimodal interfaces and context-aware support tools, we developed a software and hardware prototyping and demonstration environment (PDE) to rapidly assess the feasibility of these tools within a representative operational domain. Finally, we developed a formal human-in-the-loop evaluation plan for evaluating this approach under follow-on efforts.
Conclusion: We designed and demonstrated a set of multimodal interface components and context-aware support tools. These components enable medical personnel to more efficiently and effectively input critical medical information across a range of operational environments. These interfaces minimize interaction costs by combining multimodal data capture methods, vision technologies, and efficient hands- and eyes-free interaction mechanisms. These context-aware support tools provide operationally and medically tailored information so that care providers can seamlessly document information without interruption to their medical care workflow. This approach allows medical providers, particularly those in combat casualty care situations, to provide care more effectively and efficiently, thereby reducing the risk of both treatment and documentation errors and improving patient outcomes. Future work includes refinement and development of the proof-of-concept prototype, and human-in-the-loop evaluation with representative users and mission tasks.
This work is supported by the US Army Medical Research and Materiel Command under Contract No. W81XWH-17-C-0180. The views, opinions and/or findings contained in this report are those of the authors and should not be construed as an official Department of the Army position, policy or decision unless so designated by other documentation.
For More Information
To learn more or request a copy of a paper (if available), contact Stephanie Kane.
(Please include your name, address, organization, and the paper reference. Requests without this information will not be honored.)