
Automatic brain tissue segmentation in fetal MRI using convolutional neural networks

Khalili, N., Lessmann, N., Turk, E., Claessens, N., de Heus, R., Kolk, T., Viergever, M. A., Benders, M. J. N. L., Isgum, I.

DOI: https://doi.org/10.1016/j.mri.2019.05.020

Magnetic Resonance Imaging, vol. 64, pp. 77–89

Abstract

MR images of fetuses allow clinicians to detect brain abnormalities at an early stage of development. The cornerstone of volumetric and morphologic analysis in fetal MRI is segmentation of the fetal brain into different tissue classes. Manual segmentation is cumbersome and time-consuming; automatic segmentation could therefore substantially simplify the procedure. However, automatic brain tissue segmentation in these scans is challenging owing to artifacts, including intensity inhomogeneity caused in particular by spontaneous fetal movements during the scan. Unlike methods that estimate the bias field to remove intensity inhomogeneity as a preprocessing step before segmentation, we propose to perform segmentation with a convolutional neural network that exploits images with synthetically introduced intensity inhomogeneity as data augmentation. The method first uses a CNN to extract the intracranial volume. Thereafter, another CNN with the same architecture is employed to segment the extracted volume into seven brain tissue classes: cerebellum, basal ganglia and thalami, ventricular cerebrospinal fluid, white matter, brain stem, cortical gray matter, and extracerebral cerebrospinal fluid. To make the method applicable to slices showing intensity inhomogeneity artifacts, the training data was augmented by applying a combination of linear gradients with random offsets and orientations to image slices without artifacts. To evaluate the performance of the method, the Dice coefficient (DC) and mean surface distance (MSD) per tissue class were computed between automatic and manual expert annotations. When the training data was enriched with simulated intensity inhomogeneity artifacts, the average DC over all tissue classes and images increased from 0.77 to 0.88, and the MSD decreased from 0.78 mm to 0.37 mm. These results demonstrate that the proposed approach can potentially replace or complement preprocessing steps, such as bias field correction, and thereby improve segmentation performance.
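
The following Python snippet is a minimal sketch of the kind of augmentation described in the abstract: a linear intensity gradient with a random orientation and random offset is multiplied into an artifact-free slice to simulate intensity inhomogeneity. The function name, parameter ranges, and the choice of a multiplicative field are illustrative assumptions; the abstract does not specify the exact augmentation parameters used in the paper.

import numpy as np

def augment_with_linear_inhomogeneity(image_slice, rng=None,
                                       max_slope=0.5, offset_range=(0.7, 1.3)):
    """Simulate an intensity inhomogeneity artifact on a 2D slice.

    A linear intensity field with a random orientation and offset is
    multiplied into the slice. Parameter ranges are illustrative
    assumptions, not the published values.
    """
    rng = rng or np.random.default_rng()
    h, w = image_slice.shape

    # Random orientation of the gradient in the image plane.
    theta = rng.uniform(0.0, 2.0 * np.pi)

    # Normalized coordinates centred on the slice, projected onto the
    # gradient direction, so the field varies linearly across the slice.
    ys, xs = np.mgrid[0:h, 0:w]
    proj = ((xs - w / 2) / w) * np.cos(theta) + ((ys - h / 2) / h) * np.sin(theta)

    # Linear multiplicative field: random offset plus random slope.
    offset = rng.uniform(*offset_range)
    slope = rng.uniform(-max_slope, max_slope)
    field = offset + slope * proj

    return image_slice * field

A combination of such gradients, as mentioned in the abstract, could be obtained by composing several fields with independently drawn orientations and offsets before multiplying the result into the slice.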