Slice-propagated 3D Gastric Tumour Segmentation From A Single 2D Annotation

Published in ECR 2021 (Oral), 2021

Recommended citation: Chen, Zifan, et al. "Slice-propagated 3D Gastric Tumour Segmentation From A Single 2D Annotation." European Congress of Radiology (ECR) 2021. https://connect.myesr.org/course/ai-in-abdominal-imaging/

Zifan Chen, Jiazheng Li, Yiting Liu, Jie Zhao, Li Zhang, Lei Tang, Bin Dong

PURPOSE

To develop a deep learning model on 3D CT images to generate the volumetric segmentation of a gastric tumor from the annotation of its largest cross-section.

METHOD AND MATERIALS

In this work, 232 CT volumes of venous, arterial, or delayed phases are obtained from 128 gastric cancer patients, of which 98 patients (173 CT scans) are randomly chosen as the training set and the remaining 30 patients (59 CT scans) as the test set. Two radiologists manually annotate the largest cross-section of each gastric tumor. A two-stage deep learning model is established to convert the annotation of a single 2D slice (randomly selected during training) into the complete 3D tumor segmentation. In the first stage, the proposing stage, the model iteratively segments the slices adjacent to the annotated 2D slice, propagating outward until a coarse-grained segmentation of the entire tumor is obtained. In the second stage, the refining stage, difficult-to-segment pixels are assigned higher weights during training to achieve more accurate predictions. In both stages, the popular U-Net is used as the network backbone. For evaluation, the annotation of the largest cross-section of the tumor is used to generate its 3D segmentation; all slices of the tumors in the test set are annotated by two radiologists, and the Dice and Kappa coefficients are used to compare the segmentation results with the manual annotations. The inference time of our model and the time required for manual annotation are also compared.
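To make the slice-propagation idea concrete, the snippet below is a minimal, hypothetical PyTorch sketch of the two stages. It is not the authors' implementation: the tiny stand-in network (replacing the U-Net), the use of the previous slice's mask as a second input channel, and the confidence-based pixel weighting in the refining-stage loss are all assumptions made for illustration only.

```python
# Minimal sketch of the two-stage idea (NOT the authors' exact implementation).
# Assumptions: a PyTorch binary-segmentation backbone that takes the current CT
# slice plus the previous slice's predicted mask as a second input channel; the
# real model uses a U-Net, replaced here by a tiny stand-in network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyBackbone(nn.Module):
    """Stand-in for the U-Net backbone: 2-channel input (slice, previous mask)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        return self.net(x)  # logits, shape (B, 1, H, W)


@torch.no_grad()
def propose_volume(model, volume, seed_index, seed_mask, threshold=0.5):
    """Proposing stage: propagate the seed annotation slice by slice in both
    directions, using the previously predicted mask to guide the next slice."""
    D, H, W = volume.shape
    masks = torch.zeros_like(volume)
    masks[seed_index] = seed_mask
    for direction in (+1, -1):
        prev_mask = seed_mask
        z = seed_index + direction
        while 0 <= z < D and prev_mask.sum() > 0:  # stop when the tumor ends
            inp = torch.stack([volume[z], prev_mask]).unsqueeze(0)  # (1, 2, H, W)
            pred = (torch.sigmoid(model(inp))[0, 0] > threshold).float()
            masks[z] = pred
            prev_mask = pred
            z += direction
    return masks


def hard_pixel_weighted_bce(logits, target, gamma=2.0):
    """Refining-stage loss sketch: up-weight difficult pixels. 'Difficult' is
    approximated here by low predicted confidence (an assumption, not the
    paper's exact weighting scheme)."""
    prob = torch.sigmoid(logits)
    p_t = prob * target + (1 - prob) * (1 - target)  # probability of true class
    weight = (1 - p_t).detach() ** gamma             # larger for hard pixels
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (weight * bce).mean()


if __name__ == "__main__":
    model = TinyBackbone()
    ct = torch.randn(32, 64, 64)     # toy CT volume (D, H, W)
    seed = torch.zeros(64, 64)
    seed[24:40, 24:40] = 1.0         # toy annotation of the largest cross-section
    seg3d = propose_volume(model, ct, seed_index=16, seed_mask=seed)
    print("proposed 3D mask voxels:", int(seg3d.sum()))
```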

RESULTS

The Dice (Kappa) coefficients between the segmentation results and the manual annotations of the two radiologists are 0.7778 (0.7772), P=0.2821, and 0.7747 (0.7741), P=0.1774, respectively. These values are comparable to the inter-observer variability of 0.7971 (0.7966) between the radiologists. Our method takes about 1.15 seconds to process a CT volume on a standard GPU (annotating the largest cross-section takes about 40 seconds), whereas full manual annotation takes about 7 minutes.
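For reference, the Dice and Cohen's Kappa coefficients reported above can be computed voxel-wise between two binary masks with the standard formulas below; this is a generic sketch, not the exact evaluation code used in the study.

```python
# Standard definitions of the two agreement metrics, computed on binary 3D
# masks with NumPy (a generic formulation, not the authors' evaluation code).
import numpy as np


def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0


def cohen_kappa(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's Kappa: chance-corrected voxel-wise agreement between two masks."""
    a, b = a.astype(bool).ravel(), b.astype(bool).ravel()
    observed = (a == b).mean()                      # observed agreement p_o
    p_a, p_b = a.mean(), b.mean()
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)    # chance agreement p_e
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.random((32, 64, 64)) > 0.7   # toy predicted mask
    ref = rng.random((32, 64, 64)) > 0.7    # toy reference annotation
    print(f"Dice={dice_coefficient(pred, ref):.4f}, Kappa={cohen_kappa(pred, ref):.4f}")
```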

CONCLUSION

The proposed method can greatly enhance the efficiency of 3D gastric tumor segmentation without accuracy loss, facilitating the quantitative analysis of gastric cancer.

CLINICAL RELEVANCE/APPLICATION

Gastric tumors usually have irregular shapes, which makes them difficult to quantify accurately. We propose an efficient method for 3D tumor segmentation that supports the extraction of structural, textural, and positional information in 3D. Our method has the potential to provide better quantitative analysis of gastric tumors, thereby improving treatment evaluation and the management of gastric cancer.