Virtual and Mixed Reality

Virtual and Mixed Reality Environments for Surgery

Current Graduate Students: Adam Rankin, John Baxter, Uditha Jayarathne and Utsav Pardasani

Minimally invasive procedures require a fundamentally new set of tools to provide the interventionalist with visual information that would otherwise be acquired through direct vision. This information can come from a variety of sources, including registered pre-operative images, intra-operative imaging such as ultrasound, and tracked tools, depending on the precise requirements of the procedure.

However, conveying this information to the interventionalist in a usable and effective manner remains a difficult problem, one that draws on insight from human factors. The cognitive and perceptual limitations of the interventionalist must be treated as fundamental constraints on the design of environments for any aspect of the intervention.

Our Research Objective

Are head-mounted displays appropriate for surgical navigation in the absence of direct vision, or would a monitor in the operating room be sufficient? Can stereoscopic endoscopes provide enough depth discrimination to find structures that are not visible monoscopically?

Our objectives in researching virtual and mixed environments for surgery are as diverse as these questions. We are interested in answering questions at the most applied level, such as the design and evaluation of intra-operative environments, through to the most fundamental, such as the perceptual requirements of an interventional task. To address these questions, our group integrates imaging data with various virtual elements, either to provide more context for the intra-operative images or to provide an adequate substitute for direct vision.

Our Contribution

We have developed a large suite of software based on the Visualization Toolkit (VTK) covering every aspect of image-guided interventions. Our group has developed flexible and extensible libraries for virtual and mixed reality surgical environments across a range of interventions, from cardiac to neurosurgical to laparoscopic procedures.
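To give a concrete flavour of the kind of pipeline these libraries are built on, the following is a minimal sketch in Python using VTK's standard rendering classes. It is a simplified illustration rather than code from our libraries, and the model file name is hypothetical.

    # Minimal VTK pipeline: load a pre-operative surface model and render it.
    # "preop_model.stl" is a hypothetical file name used for illustration.
    import vtk

    reader = vtk.vtkSTLReader()
    reader.SetFileName("preop_model.stl")

    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(reader.GetOutputPort())

    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)
    renderer.SetBackground(0.1, 0.1, 0.2)

    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)

    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)

    window.Render()
    interactor.Start()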

Building on our previous contributions in virtual and mixed environments for surgery, we have developed echelon, an extensible framework that combines real-time ultrasound, pre-operative models, and tracking data and presents them to the interventionalist on highly configurable displays. This framework has since been specialized for beating-heart mitral valve repair as the NeoNav system.
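The sketch below illustrates the core idea behind such a framework using generic VTK calls, not the echelon or NeoNav API: a pre-operative model and a single ultrasound frame are placed in a common tracker coordinate frame and rendered together. The file name, image dimensions, and the source of the tracking matrix are all assumptions for illustration.

    import vtk

    # Pre-operative model (hypothetical file), assumed to already sit in
    # tracker coordinates after a prior registration step.
    model_reader = vtk.vtkPolyDataReader()
    model_reader.SetFileName("mitral_annulus.vtk")
    model_mapper = vtk.vtkPolyDataMapper()
    model_mapper.SetInputConnection(model_reader.GetOutputPort())
    model_actor = vtk.vtkActor()
    model_actor.SetMapper(model_mapper)

    # Placeholder for one ultrasound frame; in a live system this buffer
    # would be refreshed from the scanner on every frame.
    us_frame = vtk.vtkImageData()
    us_frame.SetDimensions(256, 256, 1)
    us_frame.AllocateScalars(vtk.VTK_UNSIGNED_CHAR, 1)

    us_actor = vtk.vtkImageActor()
    us_actor.SetInputData(us_frame)

    # Pose of the ultrasound image in tracker coordinates, e.g. the probe
    # pose from the tracker composed with a probe calibration matrix.
    image_to_tracker = vtk.vtkMatrix4x4()  # updated from tracking data
    us_actor.SetUserMatrix(image_to_tracker)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(model_actor)
    renderer.AddActor(us_actor)

    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)
    window.Render()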

We have also developed a series of utilities for augmented reality systems based on the commercially available Vuzix 920AR hardware. These allow us to construct augmented reality environments for neurosurgical planning of tumour resection, in which patient data is virtually overlaid on a physical model, making the data easier for both novices and experts to interact with.
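As a simplified illustration of the overlay principle, and not our actual utility code, the sketch below drives a VTK camera from a tracked camera-to-world pose so that virtual patient data rendered with that camera lines up with the physical scene in the video feed. The matrix convention (camera looking down -Z with +Y up) and all names here are assumptions.

    import vtk

    def apply_tracked_pose(camera, camera_to_world):
        # Set a vtkCamera from a 4x4 camera-to-world matrix, assuming the
        # camera looks down -Z with +Y up (OpenGL convention).
        position = [camera_to_world.GetElement(i, 3) for i in range(3)]
        view_dir = [-camera_to_world.GetElement(i, 2) for i in range(3)]
        view_up = [camera_to_world.GetElement(i, 1) for i in range(3)]
        camera.SetPosition(*position)
        camera.SetFocalPoint(*[p + d for p, d in zip(position, view_dir)])
        camera.SetViewUp(*view_up)

    camera = vtk.vtkCamera()
    camera_to_world = vtk.vtkMatrix4x4()  # updated from the tracker each frame
    apply_tracked_pose(camera, camera_to_world)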

Key Questions

  • What are the fundamental cognitive and perceptual limitations of interventionalists, and how can they be used to better design interventional environments?
  • Under what conditions can virtual and augmented environments improve surgical interventions, training, and planning?
  • What methods of pre-operative image visualization are usable during an intervention?
  • What forms of tracking are optimal for any given intervention?
  • Can vision-based tracking provide a robust and usable tracking framework for augmented-reality environments?