Using Statistical Learning together with Machine Learning and AI to develop a framework for contouring medical images
Updated on October 14, 2022 by Surajit Ray
medical imaging PET CT AI Contouring
15 min read
The primary goal of the new statistical framework is to mimic human perception in tumour delineation and marry it with all the advantages of an analytic method in a modern-day computing environment. Our proposed framework explicitly uses statistical models that can accommodate the 2D and 3D spatial relationships in medical images, along with their natural ability to fuse images from different modalities (e.g. CT/MRI) and provide complementary information for image segmentation tasks. This project has six main aims:
Combine Statistical Learning with AI to provide a new framework for probabilistic contouring
The use of functional imaging such as PET in radiotherapy (RT) is expanding rapidly with new cancer treatment techniques. A fundamental step in RT planning is the accurate segmentation of tumours based on clinical diagnosis. Furthermore, recent tumour control techniques such as intensity-modulated radiation therapy (IMRT) dose painting require the accurate calculation of multiple nested contours of intensity values to optimise dose distribution across the tumour. Recently, convolutional neural networks (CNNs) have achieved tremendous success in image segmentation tasks, most of which present the output map at a pixel-wise level. However, their ability to recognise precise object boundaries is limited by the loss of information in the successive downsampling layers. In addition, the dose painting strategy requires image segmentation approaches that reproducibly and accurately identify high recurrence-risk contours. To address these issues, we propose a novel hybrid CNN that integrates a kernel smoothing-based probability contour approach (KsPC) to produce contour-based segmentation maps, which mimic expert behaviour and provide accurate probability contours designed to optimise dose painting/IMRT strategies. Instead of relying on user-supplied tuning parameters, our final model, named KsPC-Net, applies a CNN backbone to learn the parameters automatically and leverages the advantage of KsPC to simultaneously identify object boundaries and provide the corresponding probability contours. The proposed model demonstrated promising performance in comparison with state-of-the-art models on the MICCAI 2021 challenge dataset (HECKTOR).
With the increasing integration of functional imaging techniques like Positron Emission Tomography (PET) into radiotherapy (RT) practices, a paradigm shift in cancer treatment methodologies is underway. A fundamental step in RT planning is the accurate segmentation of tumours based on clinical diagnosis. Furthermore, novel tumour control methods, such as intensity-modulated radiation therapy (IMRT) dose painting, demand the precise delineation of multiple intensity value contours to ensure optimal tumour dose distribution. Recently, convolutional neural networks (CNNs) have made significant strides in 3D image segmentation tasks, most of which present the output map at a voxel-wise level. However, because of information loss in subsequent downsampling layers, they frequently fail to identify precise object boundaries. Moreover, in the context of dose painting strategies, there is an imperative need for reliable and precise image segmentation techniques to delineate high recurrence-risk contours. To address these challenges, we introduce a 3D coarse-to-fine framework, integrating a CNN with a kernel smoothing-based probability volume contour approach (KsPC). This integrated approach generates contour-based segmentation volumes, mimicking expert-level precision and providing accurate probability contours crucial for optimising dose painting/IMRT strategies. Our final model, named KsPC-Net, leverages a CNN backbone to automatically learn the parameters of the kernel smoothing process, thereby obviating the need for user-supplied tuning parameters. The 3D KsPC-Net exploits the strength of KsPC to simultaneously identify object boundaries and generate the corresponding probability volume contours, and can be trained within an end-to-end framework. The proposed model has demonstrated promising performance, surpassing state-of-the-art models when tested against the MICCAI 2021 challenge dataset (HECKTOR).
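The kernel-smoothing step at the heart of KsPC can be illustrated with a minimal sketch: smooth a toy 2D "PET slice" with a Gaussian kernel and take nested level sets of the resulting surface as candidate contours. Everything here (grid size, bandwidth, threshold levels) is an illustrative assumption, not the published KsPC-Net configuration, in which the bandwidth is learned by the CNN backbone rather than hand-picked.

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Discrete 1D Gaussian kernel, normalised to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_slice(img, sigma=2.0):
    """Separable Gaussian smoothing of a 2D intensity grid.

    Stands in for the kernel-smoothed surface that the KsPC approach
    fits over a PET slice; sigma plays the role of the bandwidth.
    """
    k = gaussian_kernel_1d(sigma, radius=int(3 * sigma))
    # smooth rows, then columns ('same' keeps the original grid size)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

# toy "PET slice": a bright central blob on a noisy background
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
img = (np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 60.0)
       + 0.1 * rng.standard_normal((64, 64)))

surface = smooth_slice(img, sigma=2.0)
# nested upper level sets of the smoothed surface give candidate contours
levels = [0.2, 0.5, 0.8]
masks = [surface >= t * surface.max() for t in levels]
```

Because the masks are level sets of a single smoothed surface, they are nested by construction, which is exactly the property the nested IMRT dose-painting contours rely on.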
The global pandemic of coronavirus disease 2019 (COVID-19) is continuing to have a significant effect on the well-being of the global population, thus increasing the demand for rapid testing, diagnosis, and treatment. As COVID-19 can cause severe pneumonia, early diagnosis is essential for correct treatment, as well as to reduce the stress on the healthcare system. Along with COVID-19, other etiologies of pneumonia and tuberculosis (TB) constitute additional challenges to the medical system. Pneumonia (viral as well as bacterial) kills about 2 million infants every year and is consistently estimated to be one of the most important factors in childhood mortality (according to the World Health Organization). Chest X-ray (CXR) and computed tomography (CT) scans are the primary imaging modalities for diagnosing respiratory diseases. Although CT scans are the gold standard, they are more expensive, time-consuming, and associated with a small but significant dose of radiation. Hence, CXR has become more widespread as a first-line investigation. In this regard, the objective of this work is to develop a new deep transfer learning pipeline, named DenResCov-19, to classify patients as having COVID-19, pneumonia, or TB, or as healthy, based on CXR images. The pipeline consists of the existing DenseNet-121 and ResNet-50 networks. Since DenseNet and ResNet have orthogonal performances in some instances, in the proposed model we created an extra layer of convolutional neural network (CNN) blocks to join the two models together and establish superior performance compared with the two individual networks. This strategy can be applied universally in cases where two competing networks are observed. We have tested the performance of our proposed network on two-class (pneumonia and healthy), three-class (COVID-19 positive, healthy, and pneumonia), as well as four-class (COVID-19 positive, healthy, TB, and pneumonia) classification problems.
We have validated that our proposed network successfully classifies these lung diseases on our four datasets, which is one of our novel findings. In particular, the AUC-ROC values are 99.60%, 96.51%, 93.70%, and 96.40%, and the F1 scores are 98.21%, 87.29%, 76.09%, and 83.17% on our Dataset X-Ray 1, 2, 3, and 4 (DXR1, DXR2, DXR3, DXR4), respectively. © 2021 The Authors
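The joining strategy described above, concatenating the outputs of two backbones and passing them through an extra trainable block, can be sketched in a few lines. The feature dimensions, the single random-weight dense layer standing in for the CNN fusion blocks, and the four-class head are illustrative assumptions, not the DenResCov-19 implementation, which trains the fusion blocks end-to-end with the backbones.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    """Numerically stable row-wise softmax."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical penultimate feature vectors from the two branches for a
# batch of 4 CXR images (dimensions chosen for illustration only).
densenet_feats = rng.standard_normal((4, 1024))  # DenseNet-121 branch
resnet_feats = rng.standard_normal((4, 2048))    # ResNet-50 branch

# The joining block: concatenate branch features, then apply a fusion
# layer. Here a single random-weight dense layer stands in for the
# trained CNN blocks; 4 output classes (COVID-19, healthy, TB, pneumonia).
fused = np.concatenate([densenet_feats, resnet_feats], axis=1)
W = rng.standard_normal((fused.shape[1], 4)) * 0.01
probs = softmax(fused @ W)
```

The design point is that the fusion layer sees both branches' evidence jointly, so it can learn to weight whichever backbone is more reliable for a given input, rather than averaging two fixed predictions.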
The advancing technology for automatic segmentation of medical images should be accompanied by techniques to inform the user of the local credibility of results. To the extent that this technology produces clinically acceptable segmentations for a significant fraction of cases, there is a risk that the clinician will assume every result is acceptable. In the less frequent case where segmentation fails, we are concerned that unless the user is alerted by the computer, she would still put the result to clinical use. By alerting the user to the location of a likely segmentation failure, we allow her to apply limited validation and editing resources where they are most needed. We propose an automated method to signal suspected non-credible regions of the segmentation, triggered by statistical outliers of the local image match function. We apply this test to m-rep segmentations of the bladder and prostate in CT images using a local image match computed by PCA on regional intensity quantile functions. We validate these results by correlating the non-credible regions with regions that have surface distance greater than 5.5 mm to a reference segmentation for the bladder. A 6 mm surface distance was used to validate the prostate results. Varying the outlier threshold level produced a receiver operating characteristic with area under the curve of 0.89 for the bladder and 0.92 for the prostate. Based on this preliminary result, our method has been able to predict local segmentation failures and shows potential for validation in an automatic segmentation pipeline.
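A minimal sketch of the credibility test described above, using synthetic stand-ins for the regional intensity quantile functions: fit PCA to quantile functions from well-matched regions, and flag any region whose reconstruction residual is a statistical outlier. The component count, the threshold quantile, and the intensity distributions are all illustrative assumptions, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
qs = np.linspace(0.05, 0.95, 19)

def quantile_function(intensities, qs):
    """Empirical quantile function of a region's intensities."""
    return np.quantile(intensities, qs)

# Training set: quantile functions from regions where the segmentation
# matched the image well (synthetic normal intensities as stand-ins).
train = np.stack([quantile_function(rng.normal(100, 10, 500), qs)
                  for _ in range(200)])

# PCA of the training quantile functions via SVD of the centred data
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:3]  # retained components (illustrative choice)

def match_score(qf):
    """Squared residual after projecting onto the PCA subspace; a large
    value flags a region whose local image match is an outlier."""
    centred = qf - mean
    recon = (centred @ basis.T) @ basis
    return float(np.sum((centred - recon) ** 2))

scores = np.array([match_score(t) for t in train])
threshold = np.quantile(scores, 0.95)  # outlier cut-off on training scores

# A region with a clearly different intensity distribution should trip it.
suspect = quantile_function(rng.normal(160, 30, 500), qs)
```

Sweeping `threshold` over a range of quantiles is what traces out the receiver operating characteristic reported in the abstract.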
Statistical imaging, together with other machine learning techniques, is the epitome of digitalising healthcare, culminating in innovative tools for the automatic analysis of three-dimensional radiological images such as PET (Positron Emission Tomography) images [1]. However, the three major challenges in radiology are: (1) increasing demand for medical imaging; (2) decreasing turnaround times caused by mass data; and (3) diagnostic accuracy that leads to a quantification of images. To address these challenges, along with ethical issues regarding the use of Artificial Intelligence in patient care, there is a need to develop a new framework of statistical analysis which can be readily used by clinicians and trained with relatively few samples. Most existing algorithms segment a 2D slice by assigning each pixel in the grid to the tumour or non-tumour class. Instead of a pixel-level analysis, we assume that the true intensity comes from a smooth underlying spatial process which can be modelled by a kernel estimate [2]. In this project, we have developed a kernel smoothing-based probability contour method for PET image segmentation, which fits a surface over the image and produces contour-based rather than pixel-wise results, thus mimicking human observers' behaviour. In addition, our methodology provides the tools for developing a probabilistic approach with uncertainty measurement alongside the segmentation. Our method is computationally efficient and can produce reproducible and robust results for tumour detection, delineation and radiotherapy planning, together with other complementary modalities, such as CT (Computed Tomography) images.
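The probability-contour idea, treating the kernel-smoothed surface as an unnormalised density and reporting the smallest region that holds a given fraction of the total activity, can be sketched as follows. The toy Gaussian surface and the 50%/90% mass levels are illustrative assumptions, not outputs of the actual method.

```python
import numpy as np

def probability_contour_mask(surface, mass=0.9):
    """Smallest upper level set of a nonnegative surface containing a
    given fraction of its total mass: treat the kernel-smoothed
    intensity surface as an unnormalised density and report the region
    holding, e.g., 90% of the activity.
    """
    flat = np.sort(surface.ravel())[::-1]      # intensities, descending
    cum = np.cumsum(flat) / flat.sum()         # accumulated mass fraction
    # intensity level at which the accumulated mass first reaches `mass`
    level = flat[np.searchsorted(cum, mass)]
    return surface >= level

# toy smoothed surface: a single Gaussian bump standing in for a tumour
yy, xx = np.mgrid[0:64, 0:64]
surface = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 50.0)

mask90 = probability_contour_mask(surface, 0.9)
mask50 = probability_contour_mask(surface, 0.5)
```

Because both masks are upper level sets of the same surface, the 50%-mass region sits inside the 90%-mass region, giving the uncertainty-aware nested contours that the segmentation reports instead of a single hard pixel labelling.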
The primary goal of this research is to build a statistical framework for automated PET image analysis that is closer to human perception. Although manual interpretation of a PET image is more accurate and reproducible than thresholding-based semi-automatic segmentation methods, human contouring has large interobserver and intraobserver variation and, moreover, is extremely time-consuming. Further, it is harder for humans to analyse more than two dimensions at a time, and harder still if multiple modalities are involved. Moreover, if the task is to analyse a series of images, it quickly becomes an onerous job for a single human. The new statistical framework is designed to mimic human perception in tumour delineation and marry it with all the advantages of an analytic method in a modern-day computing environment.