Multi-Modal Registration Techniques in Neurosurgery

Outline

Image-guided surgery (IGS) refers to a surgical operation that is guided by a tracked surgical instrument based on pre-operative and/or intra-operative images. Integrating information from different image data sources, in particular pre-operative and intra-operative ones, is helpful to the surgeon, who can intra-operatively confirm the borders of the surgical field on a pre-operative 3D scan and identify neighboring anatomical structures of importance. 3D pre-operative data can be extracted from MRI or CT, while intra-operatively usually only 2D/2.5D data are available, because different image acquisition devices are available before and during surgery. For this reason, registration of images of different dimensionality is necessary.
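As an illustration of how data of different dimensionality can be brought into a common coordinate frame, the following minimal sketch rigidly pre-aligns an intra-operative 2.5D point cloud to a surface mesh extracted from the pre-operative volume using ICP. It assumes the Open3D library, hypothetical file names (cortex_preop.ply, kinect_scan.ply), and coordinates in millimetres; it is a rigid pre-alignment only, not the non-rigid registration developed in this project.

    import numpy as np
    import open3d as o3d

    # Hypothetical inputs: a cortical surface mesh extracted from the
    # pre-operative MRI/CT volume and a 2.5D point cloud from the depth sensor.
    preop_mesh = o3d.io.read_triangle_mesh("cortex_preop.ply")
    intraop_cloud = o3d.io.read_point_cloud("kinect_scan.ply")

    # Sample the pre-operative surface so that both datasets are point clouds.
    preop_cloud = preop_mesh.sample_points_uniformly(number_of_points=50000)

    # Rigid ICP pre-alignment (identity initialization, 5 mm correspondence radius).
    result = o3d.pipelines.registration.registration_icp(
        intraop_cloud, preop_cloud,
        max_correspondence_distance=5.0,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

    print("Estimated rigid transform:\n", result.transformation)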

The aim of this project is to develop robust and automatic surface-to-volume registration algorithms, tailored to the needs of cranial neuronavigation, i.e. IGS for brain surgery. In particular, the project focuses on registering 2.5D intra-operative cortical surfaces to 3D pre-operative ones, extracted for example from MRI or CT. This problem is particularly challenging because of the so-called brain shift, i.e. the deformation the brain undergoes after the dura mater is opened. For this reason, pre-operative and intra-operative surfaces must be registered using a non-rigid approach. In [2], a non-rigid registration approach for photographs and MRI was proposed that accounts for brain shift and uses the positions of the sulci (the fissures on the cortex) for the alignment. The alignment was attained by finding a 3D deformation defined on the pre-operative cortical surface. The approach was formulated as an energy minimization problem, with a data term that combines an automatic moment-based classification of the sulci on the cortical surface with a user-guided marking of the sulci in the photographs, and a second-order regularizer. The latter accounts for the fact that the sulci only provide matching information on a low-dimensional subset of the surface. The main limitation of this approach is the lack of a robust classification of the sulci in the photographs.
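Schematically, and with symbols chosen here for illustration rather than taken from [2], an energy of this type can be written as

\[
E[\phi] \;=\; \sum_{k} \operatorname{dist}^{2}\!\bigl(\phi(s_k),\, \sigma_k\bigr) \;+\; \lambda \int_{\Gamma} \lVert D^{2}\phi \rVert^{2}\, \mathrm{d}a ,
\]

where \phi is the deformation defined on the pre-operative cortical surface \Gamma, the s_k are points on the pre-operative sulci, the \sigma_k are the corresponding sulcus curves identified intra-operatively, and \lambda > 0 weights the second-order (bending-type) regularizer that propagates the sparse sulcus correspondences to the rest of the surface.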

For this project, new intra-operative imaging modalities that contain more depth information than standard photographs are considered. To acquire 2.5D intra-operative data, the Microsoft Kinect v2 depth sensor is used. The Kinect v2 is a low-cost sensor that delivers a stream of 2D color images and a stream of depth images, using the time-of-flight principle to reconstruct the distance of the scene from the sensor. The applicability of range images for image-guided radiation therapy has been previously investigated in [3].
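To give an idea of how the 2.5D data are obtained from such a sensor, the following sketch back-projects a depth image to a 3D point cloud with the standard pinhole camera model. The intrinsic parameters (fx, fy, cx, cy) are placeholders and would in practice come from a calibration of the depth camera; the function name is hypothetical.

    import numpy as np

    def depth_to_points(depth_m, fx, fy, cx, cy):
        """Back-project a depth image (in metres) to an N x 3 point cloud
        using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_m
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]   # discard pixels without a depth reading

    # Example with a synthetic 512 x 424 depth frame (the Kinect v2 depth resolution)
    # and placeholder intrinsics.
    depth = np.full((424, 512), 0.8)      # 0.8 m everywhere
    cloud = depth_to_points(depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0)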

The project is done jointly with the group of Prof. Schaller at the University Hospital of Geneva and the group of Prof. Rumpf at the University of Bonn.