Head-worn 3D-Visualization of the Invisible for Surgical Intra-Operative Augmented Reality

ATTRACT stories: H3D-VISIONAIR project

Project at a glance

Pending information.

Public summary

During surgery, surgeons must identify vital anatomical structures (nerves and lymph nodes) to prevent unnecessary damage to healthy tissue. Correct intraoperative identification by normal eyesight alone remains enormously challenging due to natural anatomical variation. Surgeons therefore require high-tech intra-operative imaging for reliable, high-resolution visual discrimination of critical anatomical structures.

H3D-VISIONAIR offers a breakthrough, disruptive head-worn augmented reality (AR) system that lets surgeons see beyond normal eyesight. It consists of two multi-spectral cameras (combining the visual range with NIR visualization), a belt computer for real-time data processing, and a high-end stereoscopic head-mounted display with a wireless connection to the operating-room infrastructure. The spectral signatures of specific annotated tissues form the input for machine-learning models that allow automatic segmentation.
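As a rough illustration of this segmentation idea, the sketch below trains a minimal nearest-centroid model on per-pixel spectra and applies it to a multispectral frame. All names, shapes, and the classifier itself are illustrative assumptions, not the project's actual machine-learning models:

```python
import numpy as np

def train_tissue_model(spectra, labels):
    """Nearest-centroid model: the mean spectrum of each annotated tissue class.

    spectra: (n_pixels, n_bands) reflectance samples from annotated regions
    labels:  (n_pixels,) integer tissue class per sample
    Returns (classes, centroids).
    """
    classes = np.unique(labels)
    centroids = np.stack([spectra[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def segment_frame(model, frame):
    """Assign each pixel the class of its nearest mean spectrum.

    frame: (H, W, n_bands) multispectral image -> (H, W) class map
    """
    classes, centroids = model
    h, w, b = frame.shape
    pixels = frame.reshape(-1, b)
    # squared distance from every pixel spectrum to every class centroid
    d = ((pixels[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)].reshape(h, w)
```

In practice the project's models would be far richer than class means, but the input/output contract — annotated spectra in, a per-pixel class map out — is the same.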

The models are used to generate the AR overlays on top of the normal surgical field of view. In ATTRACT 1, we demonstrated a TRL 3 proof-of-concept of H3D-VISIOnAiR by showing head-mounted 2D real-time spectral+RGB acquisition, processing and display of AR overlays for a simulated surgical task, with a maximum delay of 30 ms for AR imaging. Within this project, our extended consortium brings all the expertise required to achieve a demonstrator at TRL 6.

Therefore, we will acquire far more spectral data of both adipose and nerve tissue in vivo in the operating rooms of MUMC and AUMC, using the high-end contactless hyperspectral cameras available at both centres. These data form the input for selecting up to 5 relevant wavelength bands to discriminate nerve from adipose tissue. Hardware development consists of implementing the chosen wavelength bands in the Quest Medical Imaging sensor cameras, allowing the combination of 5 sensors/bands in one optical path, final integration in the 3D head-mounted display 'Digital Surgical Loupe' of i-Med, and realization of continuous real-time image acquisition, processing and display within a maximum delay of 30 ms. Software development consists of AI algorithms to segment nerve tissue from the images; the models will initially be translated to real-time GPU code and finally to efficient dedicated FPGA programming for implementation in the demonstrator.
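The band-selection step could, for instance, be prototyped along the lines below: a greedy forward search that scores each wavelength with a Fisher-style separability criterion and penalizes redundancy with already-chosen bands. This is a sketch under assumed data shapes, not the project's actual selection method:

```python
import numpy as np

def select_bands(X, y, n_bands=5):
    """Greedily pick up to n_bands wavelengths separating two tissue classes.

    X: (n_pixels, n_wavelengths) hyperspectral reflectance samples
    y: (n_pixels,) labels (0 = adipose, 1 = nerve)
    Returns a list of selected wavelength indices.
    """
    selected = []
    remaining = list(range(X.shape[1]))
    for _ in range(n_bands):
        best, best_score = None, -np.inf
        for b in remaining:
            col = X[:, b]
            # Fisher discriminant ratio: between-class over within-class spread
            m0, m1 = col[y == 0].mean(), col[y == 1].mean()
            v0, v1 = col[y == 0].var(), col[y == 1].var()
            score = (m0 - m1) ** 2 / (v0 + v1 + 1e-12)
            # penalize bands highly correlated with those already selected
            if selected:
                r = max(abs(np.corrcoef(col, X[:, s])[0, 1]) for s in selected)
                score *= (1.0 - r)
            if score > best_score:
                best, best_score = b, score
        selected.append(best)
        remaining.remove(best)
    return selected
```

A selection like this is what would ultimately be frozen into the camera hardware as the 5 sensors/bands sharing one optical path.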

The unique software we will develop determines the footprint of selected tissues at a staggering 60 times per second, per pixel, in 3D. Thus, within 20 ms an AR overlay of the detected tissues is projected onto the RGB image — e.g. bright green for nerves — and seen in near real time. Because the wireless headset is connected to the operating-room image routing, the team can see in real time what the surgeon sees, and it also enables real-time remote peer support. This is like a dream for a surgeon. With proper IMDD documentation facilitated by UM, METC approval will be acquired, and in vivo imaging will be performed with the H3D-VISIOnAiR demonstrator during surgery in the operating room.