Virtual and Mixed Reality Environments for Surgery
Current Graduate Students: Adam Rankin, John Baxter, Uditha Jayarathne and Utsav Pardasani
Minimally invasive procedures require a fundamentally new set of tools to provide the interventionalist with visual information that would otherwise be acquired through direct vision. This information can come from a variety of sources: registered pre-operative images, intra-operative imaging such as ultrasound, or tracked tools, depending on the precise requirements of the procedure.
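As one concrete illustration, a tracking system typically reports a tool's pose as a rigid 4x4 transform that must be applied to a virtual tool model in the rendered scene. The sketch below shows this in VTK's Python bindings; the placeholder cylinder geometry, the callback name, and the pose format are assumptions for illustration, not any specific tracker's API.

```python
# Hypothetical sketch: applying a tracker-reported rigid transform
# (a 4x4 matrix, e.g. from an optical or magnetic tracking system)
# to a surgical tool model in a VTK scene.
import vtk

# Placeholder tool geometry; a real system would load a CAD model.
tool = vtk.vtkCylinderSource()
tool.SetRadius(1.0)
tool.SetHeight(100.0)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(tool.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)

def on_new_pose(pose_rows):
    """pose_rows: 4x4 tool-to-world transform as nested lists (assumed format)."""
    m = vtk.vtkMatrix4x4()
    for i in range(4):
        for j in range(4):
            m.SetElement(i, j, pose_rows[i][j])
    actor.SetUserMatrix(m)  # re-poses the tool without copying geometry

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
```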
However, conveying this information to the interventionalist in a usable and effective manner remains a difficult problem, one that relies on insight from human factors. The cognitive and perceptual limitations of the interventionalist must be treated as fundamental constraints on the design of environments for any aspect of the intervention.
Our Research Objectives
Our objectives in researching virtual and mixed environments for surgery are diverse. We are interested in answering questions at every level, from the most applied, such as the design and evaluation of intra-operative environments, to the most fundamental, such as the perceptual requirements of an interventional task. To address these questions, our group integrates imaging data with various virtual elements to provide more context to the intra-operative images, or to provide an adequate substitute for direct vision.
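Bringing pre-operative images into the intra-operative scene hinges on registration between coordinate frames. Below is a minimal sketch of one common approach, rigid landmark-based registration, using VTK's landmark transform; the fiducial coordinates are illustrative placeholders, not measured data.

```python
# Minimal sketch, assuming paired fiducial points: rigidly registering
# pre-operative image space to the intra-operative (tracker) space.
import vtk

source = vtk.vtkPoints()  # fiducials in pre-operative image space
target = vtk.vtkPoints()  # same fiducials localized intra-operatively
for s, t in [((0, 0, 0), (10, 2, 1)),
             ((50, 0, 0), (58, 4, 2)),
             ((0, 40, 0), (11, 44, 0)),
             ((0, 0, 30), (9, 3, 32))]:
    source.InsertNextPoint(s)
    target.InsertNextPoint(t)

reg = vtk.vtkLandmarkTransform()
reg.SetSourceLandmarks(source)
reg.SetTargetLandmarks(target)
reg.SetModeToRigidBody()  # rotation + translation only
reg.Update()

# The resulting transform can be attached to pre-operative model actors
# so they appear in the same coordinate frame as tracked tools.
print(reg.GetMatrix())
```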
Our Contribution
We have developed a large suite of software based on the Visualization Toolkit (VTK) covering every aspect of image-guided intervention. Building on our previous contributions in virtual and mixed environments for surgery, we have developed echelon, an extensible framework that allows real-time ultrasound, pre-operative models, and tracking data to be combined and presented to the interventionalist on highly configurable displays. This framework has since been specialized for beating heart mitral valve repair.
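The sketch below conveys the kind of scene composition such a framework performs, combining a live ultrasound frame with a pre-operative model in a single renderer. It is an illustrative VTK sketch only, not echelon's actual API; the image dimensions and the placeholder sphere stand in for a streamed ultrasound frame and a segmented anatomical model.

```python
# Illustrative sketch only (not echelon's API): composing a tracked
# ultrasound frame and a pre-operative model in one VTK scene.
import vtk

# Stand-in for a live ultrasound frame; a real system would stream
# frames from the scanner and update this image on each callback.
us_image = vtk.vtkImageData()
us_image.SetDimensions(256, 256, 1)
us_image.AllocateScalars(vtk.VTK_UNSIGNED_CHAR, 1)

us_actor = vtk.vtkImageActor()
us_actor.SetInputData(us_image)  # tracked probe pose would set its user matrix

# Pre-operative anatomy, e.g. a segmented valve model; sphere as placeholder.
model = vtk.vtkSphereSource()
model_mapper = vtk.vtkPolyDataMapper()
model_mapper.SetInputConnection(model.GetOutputPort())
model_actor = vtk.vtkActor()
model_actor.SetMapper(model_mapper)
model_actor.GetProperty().SetOpacity(0.4)  # keep the US plane visible through it

renderer = vtk.vtkRenderer()
renderer.AddActor(us_actor)
renderer.AddActor(model_actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
```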
We have also developed a series of utilities for augmented reality systems based on the commercially available Vuzix 920AR hardware, allowing us to construct augmented reality environments for neurosurgical planning.
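In a video see-through configuration like the 920AR's, augmentation amounts to projecting virtual content into the camera image through a calibrated camera model. The sketch below, in Python with OpenCV, projects a simple planning annotation onto a captured frame; the intrinsics, pose, and annotation geometry are assumed values for illustration.

```python
# Hedged sketch of a video see-through overlay. Camera intrinsics and
# the model-to-camera pose are assumed to come from a prior calibration.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],     # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                      # assume negligible lens distortion
rvec = np.zeros(3)                      # model-to-camera rotation (assumed)
tvec = np.array([0.0, 0.0, 400.0])      # model 400 mm in front of camera

# A virtual planning annotation: a square outline, in millimetres.
square = np.array([[-20, -20, 0], [20, -20, 0],
                   [20, 20, 0], [-20, 20, 0]], dtype=np.float64)

cap = cv2.VideoCapture(0)               # head-worn camera stream (index assumed)
ok, frame = cap.read()
if ok:
    pts, _ = cv2.projectPoints(square, rvec, tvec, K, dist)
    cv2.polylines(frame, [pts.astype(np.int32)], True, (0, 255, 0), 2)
    cv2.imshow("AR overlay", frame)
    cv2.waitKey(0)
cap.release()
```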
Key Questions
- What are the fundamental cognitive and perceptual limitations of interventionalists, and how can they be used to better design interventional environments?
- Under what conditions can virtual and augmented environments improve surgical interventions, training, and planning?
- What methods of pre-operative image visualization are usable during an intervention?
- What forms of tracking are optimal for a given intervention?
- Can vision-based tracking provide a robust and usable tracking framework for augmented-reality environments? (A sketch of this idea follows below.)
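On the last question, a minimal sketch of vision-based tracking with printed fiducial markers is given below, using the ArUco module from opencv-contrib. The detection functions shown follow the older-style API (the module's interface changed around OpenCV 4.7); the camera intrinsics, marker size, and camera index are placeholder assumptions.

```python
# Minimal sketch of marker-based vision tracking (assumed parameters).
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# 3D corner positions of a 30 mm marker in its own frame
# (top-left, top-right, bottom-right, bottom-left).
half = 15.0
obj = np.array([[-half, half, 0], [half, half, 0],
                [half, -half, 0], [-half, -half, 0]], dtype=np.float64)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is not None:
        for marker_corners, marker_id in zip(corners, ids.ravel()):
            # Recover the marker-to-camera pose from the 2D-3D pairs.
            _, rvec, tvec = cv2.solvePnP(
                obj, marker_corners.reshape(4, 2), K, dist)
            print(marker_id, tvec.ravel())
cap.release()
```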