Augmented Reality Interfaces for Robot Surgery

Using robots to navigate through the body allows for smaller, less conspicuous incisions and a quicker recovery for the patient. However, the surgeon can become disoriented as the robot's location and orientation change while it moves through the body. To use an analogy, my goal was to help the surgeon feel like they were behind the wheel of a car instead of controlling a remote-control toy.
For more on research conducted in parallel with design, please see my portfolio.

Process

  • Objective: Test adaptive controls in virtual surgery.
  • Hypothesis: Adaptive controls will cause less disorientation for surgeons.
  • Method: Virtual laparoscopic surgery used to test standard and adaptive controls.
  • Synthesis: Adaptive controls found to help.

 

My Contributions

  • Wrote simulation software.
    • C++, OpenGL
  • Conducted user testing.
    • A/B timed tests
    • 24 participants

 

Navigation Through a Virtual Colon

User testing, searching for polyps

Guiding an endoscope through the colon can cause significant disorientation because the controls for yaw and pitch change as the endoscope twists. A robot can sense its orientation and adapt the controls accordingly (b). A study was conducted to see whether this information would help surgeons or introduce new problems. The surgeon guides the robot through the colon (a) in search of polyps (c). Eye tracking and motor control were used to measure disorientation: normally the gaze tracks just ahead of where the robot will move, but when the surgeon is disoriented, gaze and robot movement lose their correlation. The task was to guide a small circle to the polyps to remove them (d). The heat map (e) shows the average across all participants, with the majority gazing at the target and then looking further down the tunnel. Participants found the updating controls less disorienting, and the controls are now being adapted for the iSnake robot.
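The gaze-versus-movement measure above can be sketched as a simple correlation between two position streams. This is a minimal illustration of the idea, not the study's actual analysis code; the function name and the one-axis simplification are my own assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Pearson correlation between gaze positions and robot-tip positions along
// one axis. A value near 1 means the gaze is tracking just ahead of the
// robot's path; a drop toward 0 suggests disorientation.
// (Illustrative sketch only; the real metric may use both axes and a lag.)
double correlation(const std::vector<double>& a, const std::vector<double>& b) {
    const std::size_t n = a.size();
    double ma = 0, mb = 0;
    for (std::size_t i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
    ma /= n; mb /= n;
    double cov = 0, va = 0, vb = 0;
    for (std::size_t i = 0; i < n; ++i) {
        cov += (a[i] - ma) * (b[i] - mb);
        va  += (a[i] - ma) * (a[i] - ma);
        vb  += (b[i] - mb) * (b[i] - mb);
    }
    return cov / std::sqrt(va * vb);
}
```

A windowed version of this statistic, computed over the last second or two of samples, would flag the moments when gaze and robot movement decouple.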

Using Two Images to Create False 3D

double vision for laparoscopic tool

Motion parallax is the brain's use of 3D cues from foreground objects moving faster than background objects as the head moves from side to side. The problem is that current 3D displays simply show a different view to each eye, so all motion-parallax cues are lost. To re-create motion parallax for the surgeon, two 3D cameras were placed at 90 degrees to each other. Using infra-red beams, 3D information about the scene could be obtained, and from this information a virtual camera could be reconstructed at any point between the two 3D cameras. A Microsoft Kinect tracked the participant's head, which in turn controlled where the virtual camera was positioned. All participants felt the added information was valuable; however, as this was a prototype, a few artifacts in the 3D information still need to be resolved.
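The head-driven virtual camera can be sketched as a clamped interpolation between the two fixed camera positions. This is a minimal sketch under my own assumptions (names, a normalised head coordinate, straight-line interpolation); the real system also re-projected the infra-red depth data to synthesise the view.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Place a virtual camera between two fixed 3D cameras, driven by the viewer's
// head position (e.g. from Kinect head tracking). headT is the head's
// normalised lateral position: 0 selects camera A, 1 selects camera B,
// values outside [0, 1] are clamped so the view never leaves the captured arc.
Vec3 virtualCameraPos(const Vec3& camA, const Vec3& camB, double headT) {
    const double t = headT < 0.0 ? 0.0 : (headT > 1.0 ? 1.0 : headT);
    return { camA.x + t * (camB.x - camA.x),
             camA.y + t * (camB.y - camA.y),
             camA.z + t * (camB.z - camA.z) };
}
```

Sliding the head left and right then sweeps the viewpoint along the arc between the cameras, restoring the relative motion of foreground and background that the brain reads as depth.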

Process

  • Objective: Test artificial 3D vision.
  • Hypothesis: Artificial 3D vision will give more depth perception than two individual camera images.
  • Method: Artificial 3D vision tested against standard two-camera vision.
  • Synthesis: Artificial 3D vision found to help, but causes problems that need to be resolved.

 

Deliverables

Conference Submission

  • Introducing Motion Parallax to Assist Spatial Awareness

 

My Contributions

  • Wrote software for experiment.
    • C++, OpenGL
  • Conducted user testing.
    • 22 participants, 10 doctors.
    • A/B timed tasks
    • Qualitative questionnaire
  • Wrote and submitted paper for conference.

 

Process

  • Objective: Test whether the yaw and roll of the robot can be transmitted to the surgeon.
  • Hypothesis: Using vibration, the surgeon will be able to tell the orientation of the robot.
  • Method: A chest strap with vibration actuators was attached to the surgeons and tested against no strap.
  • Synthesis: Surgeons did not feel it was worth the extra discomfort.

 

Deliverables

Conference Submission

 

  • Linking the Vestibular System to Robot Orientation

 

My Contributions

  • Research into surgical-tool noise.
  • Vibrating-strap fabrication.
  • Wrote simulation software.
    • C++
  • Conducted user testing.
    • 14 participants.
    • A/B error-rate measurement.
    • Qualitative questionnaire
  • Wrote and submitted paper for conference.

 

Linking the Surgeon's Sense of Balance to the Robot

flexible robot arm

The idea is to send the robot's orientation information through the cutaneous sense of the skin, helping the surgeon keep track of the robot's orientation in situ during surgery. Using the cutaneous sense to relay body orientation is already done to help people with vestibular and balance problems. Piezo-electric disks were built into a pad to mimic the orientation of a virtual robot while participants made their way through a 3D maze. Sending robotic movement information to the surgeon's vestibular system decreased disorientation but raised discomfort, since participants had to stay relatively still. For the prototype to work, it would need to be built in a more portable form.
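One way to picture the strap's mapping is to assign each yaw angle to the nearest of N actuators spaced evenly around the torso. This is a hypothetical sketch of that mapping (function name and even spacing are my assumptions), not the lab's firmware.

```cpp
#include <cassert>
#include <cmath>

// Map the robot's yaw angle to one of numActuators vibration elements spaced
// evenly around a chest strap, so the vibration "points toward" the robot's
// heading. Angles are in degrees and may be negative or exceed 360.
// (Hypothetical sketch of the mapping described above.)
int actuatorForYaw(double yawDegrees, int numActuators) {
    double wrapped = std::fmod(yawDegrees, 360.0);
    if (wrapped < 0) wrapped += 360.0;               // normalise to [0, 360)
    const double sector = 360.0 / numActuators;      // angular span per element
    return static_cast<int>(std::floor(wrapped / sector + 0.5)) % numActuators;
}
```

With eight actuators, for example, each element covers a 45-degree sector, and a roll channel could drive a second ring of elements the same way.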

How the Brain Interprets Multiple Camera Views

dual surgical displays

The experiment set out to find which felt more natural for surgeons when switching views between cameras: head movement or eye movement. To test eye movement, a Tobii eye tracker was used; when the eyes moved to a small inset of the other camera, that view would take over the screen. To track head movement, a magnetometer (digital compass) was wired to Bluetooth and fitted to a headband, so the head's position could be tracked as the participant turned. The task was to try to touch small spheres hidden behind colored cones: the spheres could be seen in the top-down camera, but the virtual robotic arms could only be seen in the front-view camera. Surgeons familiar with robotic surgery received three minutes in randomized trials to collect as many spheres as possible. Head movement was the preferred modality for 90% of participants in the questionnaire. It produced almost twice as many switches between screens as eye tracking and, on average, over 30% more spheres collected. With eye tracking, surgeons under stress would forget to switch to the other screen to check the positions shown there.
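The head-movement condition can be sketched as a simple threshold rule on the compass heading: turn the head past some angle from "forward" and the display switches to the other camera. This is a minimal sketch under my own assumptions (names, a fixed threshold, no hysteresis); the actual experiment software may have differed.

```cpp
#include <cassert>
#include <cmath>

// Which camera view to show, based on a head-mounted digital compass.
// If the head turns more than thresholdDeg away from the forward heading,
// switch to the secondary (top-down) view; otherwise show the front view.
// Headings are in degrees and wrap at 360.
// (Hypothetical sketch of the switching rule described above.)
enum View { FRONT, TOP_DOWN };

View viewForHeading(double headingDeg, double forwardDeg, double thresholdDeg) {
    // Smallest angular difference between the two headings, in [0, 180].
    const double diff =
        std::fabs(std::fmod(headingDeg - forwardDeg + 540.0, 360.0) - 180.0);
    return diff > thresholdDeg ? TOP_DOWN : FRONT;
}
```

A production version would add hysteresis (a slightly smaller angle to switch back) so the view does not flicker when the head hovers near the threshold.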

Process

  • Objective: Compare head turning to eye tracking for multiple camera views.
  • Hypothesis: The surgeon will feel that head turning is a more natural motion.
  • Method: A digital compass strapped to the head was compared against eye tracking for signalling the computer to change camera images.
  • Synthesis: Surgeons preferred head turning for camera changes.

 

My Contributions

  • Wrote simulation software.
    • C++, OpenGL
  • Conducted user testing.
    • 32 participants.
    • A/B tests measuring time and (error) deviation from path.
    • Qualitative questionnaire
  • Wrote and submitted conference paper.