
 
ORIGINAL ARTICLE
Ahead of print publication  

Autosegmentation of lung computed tomography datasets using deep learning U-Net architecture


1 Department of Radiation Oncology, Princess Alexandra Hospital, Queensland; Science and Engineering Faculty, Queensland University of Technology, Brisbane, Australia
2 Department of Radiation Oncology, Princess Alexandra Hospital, Queensland, Australia

Date of Submission20-Jan-2021
Date of Decision09-Apr-2021
Date of Acceptance30-Apr-2021
Date of Web Publication25-Feb-2022

Correspondence Address:
Prabhakar Ramachandran,
Department of Radiation Oncology, Princess Alexandra Hospital, 199, Ipswich Road, Woolloongabba, Queensland
Australia

Source of Support: None, Conflict of Interest: None

DOI: 10.4103/jcrt.jcrt_119_21

 > Abstract 


Aim: Current radiotherapy treatment techniques require a large amount of imaging data for treatment planning, which demands significant clinician time to segment the target volume and organs at risk (OARs). In this study, we propose a U-Net-based architecture to segment OARs commonly encountered in lung cancer radiotherapy.
Materials and Methods: Four U-Net OAR models were generated and trained on 20 lung cancer patients' computed tomography (CT) datasets, with each model trained for 100 epochs. A model was tested for each OAR: the right lung, left lung, heart, and spinal cord. The Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used to assess the agreement between the predicted contours and the ground truth.
Results: The highest average DSC among the test patients was 0.96 ± 0.03 for the left lung, 0.94 ± 0.06 for the right lung, 0.88 ± 0.04 for the heart, and 0.76 ± 0.07 for the spinal cord. The corresponding HDs were 3.51 ± 0.85, 4.06 ± 1.12, 4.09 ± 0.85, and 2.76 ± 0.52 mm for the left lung, right lung, heart, and spinal cord, respectively.
Conclusion: The autosegmented regions predicted by the right and left lung models matched the manual contours well. However, in a few cases, the heart model struggled to outline the boundary precisely. The spinal cord model had the lowest DSC, which may be due to the small size of the structure. This is an ongoing study aimed at assisting radiation oncologists in segmenting OARs with minimal effort.

Keywords: Contours, deep learning, lung computed tomography datasets, U-net architecture



How to cite this URL:
Mehta A, Lehman M, Ramachandran P. Autosegmentation of lung computed tomography datasets using deep learning U-Net architecture. J Can Res Ther [Epub ahead of print] [cited 2022 Nov 29]. Available from: https://www.cancerjournal.net/preprintarticle.asp?id=338495




 > Introduction


Radiotherapy and treatment planning

Radiation therapy is widely used to effectively treat patients with benign and malignant tumors. High-energy ionizing radiation is the key component of the treatment which, upon interaction with the tumor, damages the DNA of the target cells, leading to cell death. Radiotherapy can also be used alongside chemotherapy and surgery for the treatment of cancer.[1] Successful treatment delivery requires a treatment planning system (TPS) and delivery systems such as linear accelerators that deliver a high dose of radiation to the tumor volume with minimal dose to the surrounding normal tissues.[2] Delineation of tumor volumes and normal tissues such as organs at risk (OARs) on a given image dataset (computed tomography [CT], magnetic resonance imaging, positron emission tomography, digital subtraction angiography, ultrasound, etc.) is commonly referred to as contouring. The regions of interest (ROIs) generated through the contouring process are detailed in the International Commission on Radiation Units and Measurements (ICRU) Report 50[3] and ICRU Report 62.[4] The ROIs include gross tumor volume (GTV), clinical target volume (CTV), internal target volume (ITV), and planning target volume (PTV).

Imaging technology has rapidly evolved in the last few decades with the introduction of modern volumetric imaging techniques. Current imaging technology collects a huge amount of data per patient, posing a challenge when it comes to data processing, analysis, and storage. It needs a significant amount of time, skill, and care for the clinician to analyze and generate contours, which could be impacted by human error. The contoured target and OAR volumes vary with the observers' ability to distinguish the structures. During contouring, the radiation oncologist goes slice by slice to mark the target volumes and OARs on the planning dataset.[5] The TPS provides a suite of tools to aid this process.

Advancement in dose delivery techniques and image guidance allows accurate delivery of dose distribution in relation to targets and OARs. Thus, contouring of targets and OARs has great importance in ensuring precise dose delivery to the target volume and limiting dose received by OARs. Therefore, any contouring method should be consistent, accurate, and precise.[6]

Most atlas-based automated contouring software provide similar results to the manual contours produced by radiation oncologists. However, it is noted that some structures require manual intervention, and others may have to be discarded. In addition to this, software performance is often dependent on the contrast present in the image.[7] Therefore, there is scope for improvement for automatic contouring methods that generate consistent and accurate contours.

Deep learning for segmentation

Deep learning is a subset of artificial intelligence and is based on multiple layers of interconnected array elements in an architecture similar to neurons in the brain. In this context, these neurons accept input data and extract features for learning, which are then passed progressively onto subsequent layers. During the training process, both input and ground truth data are supplied to the model. There are numerous layers between the input and output layers, as illustrated in [Figure 1]. These layers are collectively called hidden layers, and the extraction of relevant features occurs here.[8],[9],[10]
Figure 1: Deep learning model with layers



Convolutional neural networks (CNNs) are one of the most popular deep learning architectures, suitable for tasks such as classification, segmentation, and object detection. CNNs use locally connected convolutional layers, with individual array elements influenced only by nearby array elements in the previous layer, encoding positional information into the architecture itself. If an image or mask is required at the output layer, it is efficient to add a set of upsampling layers to the architecture. In such networks, the encoding path creates high-dimensional feature vectors that represent the data, whereas the decoding path produces output images using a trained upsampling technique. These networks are known as fully convolutional networks (FCNs). The loss function plays an important role here, quantifying the difference between the output and the ground truth data. To minimize the loss function, the error is back-propagated to earlier layers, modifying the weight at each node during each training iteration. The network usually needs many epochs to adjust its weights to achieve acceptable accuracy.[11]

U-Net architecture

The concept of the FCN U-Net[12] was proposed in 2015 as a potential tool for the segmentation of biomedical images. It includes a downsampling path consisting of repeated sets of two 3 × 3 convolutions, each followed by a rectified linear unit (ReLU) and a 2 × 2 max-pooling layer. The upsampling path consists of an upsampling of the feature set followed by a 2 × 2 up-convolution, a concatenation with the corresponding feature map from the downsampling path, and a set of two 3 × 3 convolutions, each followed by a ReLU. The original study demonstrated the segmentation of electron microscopy data containing 30 images of 512 × 512 matrix size with corresponding segmented ground-truth maps. The model was able to predict the region of interest with a pixel error of 0.061. The model was also tested on microscopic images of cells and achieved an intersection over union of 0.92.[12],[13] This suggests that the U-Net architecture can be adapted to segment different types of images.

U-Net for autosegmentation

U-Net has become a popular tool for image segmentation in radiation oncology. In the case of the thorax, it has been used to segment the lungs[14],[15],[16] and to delineate OARs other than the lungs.[17] Skourt et al.[14] used the traditional U-Net model consisting of convolutional layers with ReLU and max-pooling layers. Ferreira et al.[15] adopted a similar approach, with the exception of using a different activation function, a parametric ReLU, which mitigates the vanishing gradient issue. They also introduced dropout to increase accuracy. Pang et al.[16] used preprocessing techniques to provide distinct input for better training, followed by the use of a U-Net model in a similar fashion to the others. Yang et al.[17] employed two U-Net models: a 3D U-Net to segment the larger organs, namely the lungs and heart, whereas the other structures were trained using a slice-based (2D) U-Net. In addition, in the 2D U-Net model, the convolution layers were replaced with Google Inception layers to improve training. The loss function used was the Dice coefficient, with adaptive moment estimation (ADAM) as the optimizer, and early stopping was used during training to avoid overfitting.

Although Skourt et al.[14] managed to contour the lungs with acceptable results, the contoured regions also included a portion of nodules and parts of blood vessels. Ferreira et al.[15] focused on lung lobe segmentation instead of the entire lung. The methods adopted by Pang et al.[16] depend on the image preprocessing techniques used. Yang et al.[17] made minor changes to their model depending on the size of the OARs during training and added Google Inception layers instead of convolution layers to better learn the features in the input data.

Even though a number of publications have demonstrated various strategies to autosegment OARs in lung cancer, most of them rely on preprocessing or introducing layers other than convolution or combining U-net with other deep learning networks. In this study, we propose a simple approach without image preprocessing to create a model to segment OARs such as heart, right lung, left lung, and spinal cord.


 > Materials and Methods


Dataset

In this study, anonymized lung datasets from The Cancer Imaging Archive named “NSCLC-Radiomics” were used to train and test the models for the lung OARs.[18] The dataset consists of CT scans of 23 patients containing contours of thoracic OARs.[13] To demonstrate the models' capability to work with any CT data, different image datasets were chosen for training and testing. [Table 1] shows the training-to-validation dataset ratio for the OARs; each dataset includes CT images and a corresponding RT structure file defining the OAR volumes as ground truth.
Table 1: Training, validation, and test data ratio



Data reading

The model was programmed using Python in the Google Colaboratory (Colab) integrated development environment, with training and test datasets accessed through Google Drive. The contours of the lungs, spinal cord, and heart were extracted from the RT structure Digital Imaging and Communications in Medicine (DICOM) file by converting the data points (in Cartesian coordinates) into a binary pixel matrix using image properties such as pixel spacing and patient position. In addition, the converted binary pixel matrix is linked to the CT images by the unique identifiers (UIDs) given in both the CT images and the RT structure DICOM file.
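As a minimal sketch of this step (assuming the pydicom library; the function name and structure are illustrative and not the study's actual code), the contour points of a chosen ROI can be read from the RT structure file and keyed by the referenced CT SOP instance UIDs:

```python
import numpy as np
import pydicom

def load_contours_by_uid(rtstruct_path, roi_name):
    """Map each referenced CT SOPInstanceUID to the contour point sets of one ROI."""
    rt = pydicom.dcmread(rtstruct_path)
    # Find the ROI number whose name matches the requested organ at risk
    roi_number = next(r.ROINumber for r in rt.StructureSetROISequence
                      if r.ROIName == roi_name)
    contours = {}
    for roi in rt.ROIContourSequence:
        if roi.ReferencedROINumber != roi_number:
            continue
        for contour in roi.ContourSequence:
            uid = contour.ContourImageSequence[0].ReferencedSOPInstanceUID
            points = np.array(contour.ContourData).reshape(-1, 3)  # (x, y, z) triplets in mm
            contours.setdefault(uid, []).append(points)
    return contours
```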

In [Figure 2], the boxes with a black background show the properties used to convert Cartesian coordinates to pixel values (as given in the equation below), and the box with an orange background shows the property used to link the CT images to their OAR contours.
Figure 2: Digital Imaging and Communications in Medicine (DICOM) computed tomography image properties (left) and the properties of the RT structure file (right)







$$X = \frac{a - x_0}{\Delta_x}, \qquad Y = \frac{b - y_0}{\Delta_y}$$

Here, a and b are the x and y Cartesian coordinates of the contour data, X and Y are the corresponding pixel coordinates, (x₀, y₀) is the patient position of the image, and Δx and Δy are the pixel spacings along x and y.
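A hedged example of how this conversion and rasterization could be implemented for a single axial slice (assuming identity image orientation and scikit-image for polygon filling; the helper name is illustrative):

```python
import numpy as np
import pydicom
from skimage.draw import polygon

def contour_to_mask(points_mm, ct_path):
    """Rasterize one closed contour (an N x 3 array in mm) onto the pixel grid of a CT slice."""
    ct = pydicom.dcmread(ct_path)
    x0, y0, _ = (float(v) for v in ct.ImagePositionPatient)
    dy, dx = (float(v) for v in ct.PixelSpacing)    # PixelSpacing is [row, column] spacing
    cols = (points_mm[:, 0] - x0) / dx              # X pixel coordinates
    rows = (points_mm[:, 1] - y0) / dy              # Y pixel coordinates
    mask = np.zeros(ct.pixel_array.shape, dtype=np.uint8)
    rr, cc = polygon(rows, cols, shape=mask.shape)  # fill the closed contour
    mask[rr, cc] = 1
    return mask
```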

Colab's GPU facility allowed the models to be trained considerably faster than on local processing units.

Model structure

The model is based on a modified U-Net architecture. We included additional layers and filters to improve the prediction accuracy of our models. To avoid overfitting, a regularization method known as dropout was adopted to randomly exclude some units during training.

Contour data from the RT structure file of each patient were linked to each of the CT images to obtain the input and desired output for training. A Python dictionary was created that paired the inputs and outputs through their UIDs. A Python class holding the batch size, image size, and data path was designed to read data from this dictionary in batches of the size supplied by the user.
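A minimal sketch of such a class, assuming the Keras Sequence interface and a dictionary that maps each UID to the paths of a pre-extracted image/mask pair (all names are illustrative):

```python
import numpy as np
from tensorflow.keras.utils import Sequence

class OARDataGenerator(Sequence):
    """Yield (image, mask) batches from a {uid: (image_path, mask_path)} dictionary."""

    def __init__(self, pairs, batch_size=8, image_size=256):
        self.uids = sorted(pairs)
        self.pairs = pairs
        self.batch_size = batch_size
        self.image_size = image_size

    def __len__(self):
        return int(np.ceil(len(self.uids) / self.batch_size))

    def __getitem__(self, idx):
        batch = self.uids[idx * self.batch_size:(idx + 1) * self.batch_size]
        x = np.stack([np.load(self.pairs[u][0]) for u in batch])[..., np.newaxis]
        y = np.stack([np.load(self.pairs[u][1]) for u in batch])[..., np.newaxis]
        return x.astype("float32"), y.astype("float32")
```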

The model was initiated by defining Python functions to perform the downsampling, upsampling, and bottleneck tasks. The downsampling function (DSF) processes an input image of a given dimension by applying a number of filters, convolution kernels, a padding style, and strides of the convolution kernel. The Keras library was used to build the layers for these functions. The first step defined within the DSF was a convolution, which is a function of the filters, kernel size, padding, strides, and activation function, followed by a ReLU, which converts negative outputs to zero and leaves positive outputs unchanged. This output was then max-pooled to reduce the size of the image. This is repeated multiple times, as shown in [Figure 3], and in half of these layers the max-pooled values were randomly dropped out to avoid overfitting.
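A sketch of a downsampling block along these lines in Keras is shown below (the filter counts, kernel size, and dropout rate are illustrative and not necessarily the exact values used in this study):

```python
from tensorflow.keras import layers

def down_block(x, filters, kernel_size=(3, 3), padding="same", strides=1, dropout=None):
    """Convolution + ReLU followed by 2 x 2 max-pooling; returns the pre-pooled
    feature map (kept for the skip connection) and the pooled output."""
    c = layers.Conv2D(filters, kernel_size, padding=padding, strides=strides,
                      activation="relu")(x)
    c = layers.Conv2D(filters, kernel_size, padding=padding, strides=strides,
                      activation="relu")(c)
    p = layers.MaxPooling2D((2, 2))(c)
    if dropout:
        p = layers.Dropout(dropout)(p)  # dropout applied in some blocks to reduce overfitting
    return c, p
```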
Figure 3: U-Net architecture



The next function defined was for the bottleneck region, which gets the input from the last downsampled layer and performs convolution on it.

The upsampling function outputs only one variable; however, it goes through two steps before providing an output. It performs upsampling on the given image, concatenates the result with the same-sized feature map from the downsampling function, and then applies a convolution. The filter sizes used in the study are depicted in [Figure 3]. Although ReLU was used as the activation function in all three functions, the last upsampling layer uses a sigmoid function so that the result lies within the range of 0–1. After processing the input image through downsampling, bottleneck, and upsampling [Figure 3], the model was optimized using ADAM, with binary cross-entropy as the loss function. ADAM is a first-order gradient-based optimizer for stochastic objective functions. It has a small memory footprint, performs well on large datasets, and has been shown to converge faster than other popular optimizers such as SGD with Nesterov momentum and AdaDelta. Binary cross-entropy was used because the autosegmentation of an OAR is a binary classification task, and this entropy-based metric efficiently quantifies the difference between the predicted binary mask and the ground truth data.
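The corresponding upsampling block, bottleneck, and model compilation could be sketched as follows, reusing the down_block function sketched above (again a hedged example assuming the Keras functional API; the filter counts and network depth are illustrative rather than the exact configuration of [Figure 3]):

```python
from tensorflow.keras import Model, layers

def up_block(x, skip, filters):
    """Upsample, concatenate with the matching downsampling feature map, then convolve."""
    u = layers.UpSampling2D((2, 2))(x)
    u = layers.Concatenate()([u, skip])
    c = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(u)
    c = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(c)
    return c

def build_unet(image_size=256):
    inputs = layers.Input((image_size, image_size, 1))
    c1, p1 = down_block(inputs, 16)
    c2, p2 = down_block(p1, 32, dropout=0.25)
    b = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(p2)   # bottleneck
    u1 = up_block(b, c2, 32)
    u2 = up_block(u1, c1, 16)
    outputs = layers.Conv2D(1, (1, 1), activation="sigmoid")(u2)           # mask in [0, 1]
    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```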

Of the available training data, 90% were used for training and the remaining 10% for validation. In this study, 100 epochs were used for training, and each epoch was further divided into multiple batches based on the available memory.
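Assuming the generator class and build_unet sketches above, training with this split and epoch count could look like the following (the batch size is an assumption, as it depended on the available memory):

```python
train_gen = OARDataGenerator(train_pairs, batch_size=8)   # ~90% of the slice/mask pairs
val_gen = OARDataGenerator(val_pairs, batch_size=8)       # remaining ~10%

model = build_unet()
history = model.fit(train_gen, validation_data=val_gen, epochs=100)
```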

Evaluation

This study used two evaluation metrics, the Dice similarity coefficient (DSC) and the Hausdorff distance (HD), to validate and compare the models' outcomes.

After model training was complete, the independent test datasets were fed to the Keras predict function to obtain the output of the trained model. This output was then compared and evaluated against the ground truth contours of the test datasets using the DSC and HD, as described below.
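A minimal sketch of this prediction step (assuming a trained model and a stacked array of test slices; the 0.5 threshold for binarizing the sigmoid output is an assumption):

```python
import numpy as np

probabilities = model.predict(test_images)                 # sigmoid outputs in [0, 1]
predicted_masks = (probabilities > 0.5).astype(np.uint8)   # binarize before computing DSC and HD
```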

Dice similarity coefficient

The Dice similarity coefficient is a reproducibility validation metric that quantifies spatial overlap. It ranges between 0 and 1, where 0 represents no overlap and 1 represents complete overlap between the two entities.[19] For a predicted contour set P and a ground truth contour set G, it is given as follows:

$$\mathrm{DSC}(P, G) = \frac{2\,|P \cap G|}{|P| + |G|}$$
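A straightforward NumPy implementation of this metric for binary masks (a sketch; array names are illustrative):

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    """DSC = 2|P intersect G| / (|P| + |G|) for two binary masks."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    total = pred_mask.sum() + gt_mask.sum()
    return 2.0 * intersection / total if total else 1.0
```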
Hausdorff distance

The HD is defined as the maximum distance from a point (p) in the predicted contour set (say, P) to the nearest point (g) in the ground truth contour set (say, G), and vice versa. Mathematically, the directed distance from P to G can be formulated as follows:[20]

$$h(P, G) = \max_{p \in P}\, \min_{g \in G}\, d(p, g) \quad (1)$$

For G to P, the above equation can be written as

$$h(G, P) = \max_{g \in G}\, \min_{p \in P}\, d(g, p) \quad (2)$$

where d(p, g) and d(g, p) denote the Euclidean distance between the points p and g.

The maximum of the two distances obtained from Equations 1 and 2 was taken into account while calculating the HD.
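One possible implementation using SciPy's directed Hausdorff distance, assuming the contours are supplied as N × 2 arrays of point coordinates:

```python
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(pred_points, gt_points):
    """Symmetric HD: the maximum of the two directed distances (Equations 1 and 2)."""
    h_pg = directed_hausdorff(pred_points, gt_points)[0]   # h(P, G)
    h_gp = directed_hausdorff(gt_points, pred_points)[0]   # h(G, P)
    return max(h_pg, h_gp)
```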


 > Results


Testing on validation data

[Figure 4] illustrates various contours generated by the model in comparison to ground-truth contours. For the left lung, extra pixels can be seen in the anterior part of the lung in 1c and 2c as the input image consists of a visible portion of the lung, which might not have been considered during manual contouring. In addition to this, the predicted contour (3c) shows the top of the lung distinctly, whereas, in the ground truth, it is intact. For the remaining cases, the predicted contour has the same features as the ground truth.
Figure 4: (a) Input image, (b) ground truth, (c) predicted contour, and (d) ground truth overlaid onto the predicted contour for each organ at risk



In the case of the right lung data (2c), the ground truth contains only the posterior portion of the lung; however, the model is able to predict the missing portion as well. In 3c, the model does not consider some portion as part of the lung and excludes it, whereas 1c and 4c show good agreement between the prediction and the ground truth.

In the case of the spinal cord, the predicted contours are very similar to their counterpart ground truth. However, there are minor differences in shape owing to the small size of the spinal cord contour. In the case of the heart, the model provides a close match with the validation data. In 3c, the shape of the heart varies marginally. Overall, the predicted contours are similar to the ground truth.

Testing on a different dataset

The models' capabilities were also tested on 360 axial lung CT slices, as shown in [Figure 5], [Figure 6], [Figure 7], and [Figure 9] for the left lung, right lung, spinal cord, and heart, respectively. The predicted contours are illustrated in column c for each OAR, whereas column b shows the ground truth. The predicted contours are overlaid onto the actual contours in column d to demonstrate the agreement between the contours. The outcomes yielded by the model for each contour are divided into three categories for each patient: best, median, and worst. [Table 2] shows the DSC and HD for these categories.
Figure 5: Left lung autosegmentation (a) input image, (b) ground truth, (c) predicted contour, and (d) ground truth overlaid onto the predicted contour

Figure 6: Autosegmentation of right lung (a) input image, (b) ground truth, (c) predicted contour, and (d) ground truth overlaid onto the predicted contour

Figure 7: Autosegmentation of spinal cord (a) input image, (b) ground truth, (c) predicted contour, and (d) ground truth overlaid onto the predicted contour

Figure 9: Heart segmentation (a) input image, (b) ground truth, (c) predicted contour, and (d) ground truth overlaid onto the predicted contour

Table 2: Dice similarity coefficient and Hausdorff distance for three scenarios



Left lung model

For test patient 1, the best result of the left lung model predicts an output similar to the ground truth contour, with a DSC of 0.98 and a maximum HD of 1.45 mm. In the worst case, the model predicts a few extra pixels that lie in the right lung region and misses some pixels in the left lung; however, it matches ~75% of the ground truth with an HD of 5.29 mm. Overall, the average DSC of all left lung slices is 0.92 ± 0.06, and the average HD is 3.66 ± 0.82 mm. Test patient 2's best result predicted by the model is similar to the ground truth, with a DSC of 0.97 and an HD of 1.94 mm, whereas in the median result a few pixels are missing on the contour edge; however, the model successfully excludes the non-lung region. In test patient 3, the worst-case scenario, the model predicts the region correctly, but unexpected pixels can also be noticed, yielding a DSC of 0.71 and an HD of 4.25 mm. The interquartile range for the left lung for all test cases was found to be shorter than that of all other structures [Figure 5].

Right lung model

In the worst scenario of test case 1, the predicted contour completely excludes the liver, which was included in the ground truth data, and has extra pixels around the edge of the lung, with a DSC of 0.74 and an HD of 5.86 mm. The average DSC for this patient is 0.91 ± 0.07 with an average HD of 4.27 ± 0.95 mm. As such, our right lung model was able to delineate the lung better than the supplied data, in which a portion of the liver was included in the lung contour. In the case of patient 2, the worst result has some extra pixels outside the volume of interest; however, it appears to correctly predict an extended volume that was not contoured in the ground truth, which reduces the DSC to 0.56 with an HD of 5.77 mm. For patient 3, the overall performance of the model is an average DSC of 0.90 ± 0.06 and an average HD of 3.90 ± 1.1 mm. The right lung has a wider interquartile range than the left lung [Figure 6] and [Figure 8].
Figure 8: Dice Similarity coefficient and Hausdorff distance for all test patients



Spinal cord model

[Figure 7] shows the best result for test patient 1, with a DSC and HD of 0.93 and 1.7 mm, respectively. In the case of patient 2, the worst predicted contour does not cover the spinal cord region completely; however, the area contoured lies within the cord, which gives a DSC of 0.55 and an HD of 3.51 mm. Patient 3 follows a similar trend to patients 1 and 2, where the best-case scenario has a DSC of 0.93, followed by the median and worst results with DSCs of 0.74 and 0.52, respectively. Of all the test cases, test case 3 has the widest interquartile range [Figure 7] and [Figure 8].

Heart model

In test case 1, the best-case scenario produces a DSC of 0.92 and an HD of 2.32 mm, as the predicted contour differs only marginally from the ground truth. For patient 2, the median output closely resembles the ground truth except for some heart area excluded by the model, which produces a DSC of 0.92 and an HD of 4.31 mm. Patient 3 showed a similar trend to patients 1 and 2; the best result has a DSC of 0.93 and an HD of 2.58 mm. [Figure 8] demonstrates a wider interquartile range for patient 3 compared with test cases 1 and 2 [Figure 9].


 > Discussion


Manual contouring of OARs and target volume is a laborious process. Delineation accuracy is largely dependent on user expertise, and it is one of the sources of error in the treatment chain. Deep learning models such as U-Net could be used to obtain accurate OAR segmentation within a short time frame, which may aid the radiation oncologist and multidisciplinary team in planning and treating the patient efficiently. Autosegmentation can also reduce the time required to replan cases due to weight loss or target volume changes during radiotherapy treatment. It may also be a valuable tool in delivering adaptive radiotherapy, which imposes tight time constraints to plan and treat the patient in the treatment position. Our study has demonstrated that each model yields satisfactory results in terms of Dice coefficient and HD by testing on independent patient test datasets. The spinal cord has the lowest DSC, close to 0.75, which could be due to the smaller size of the manually contoured volume. This may be improved by increasing the number of training datasets. On the other hand, it is evident from the small HDs that the predicted contours are at the same location as the manually contoured volume. In the case of the heart, our model struggled to analyze the heart boundaries in a few cases where the pixels outside and inside the heart boundaries are similar. This could be due to the amount and quality of data used for training for the heart, which is approximately one-third of the data used for other OARs. Therefore, this issue could be addressed using a greater number of datasets.

In most lung CT datasets, the predicted masks are almost identical to their corresponding manual contours. However, in some images, random pixels are observed around the predicted masks, which could be eliminated using morphological image processing techniques such as erosion followed by dilation. This technique, known as opening, passes a structuring element composed of 0s and 1s over the predicted mask, producing a new binary image in which small details are eliminated. This is followed by dilation of the resultant image, which works opposite to erosion: it adds a layer of pixels around the contour to compensate for the reduction in size of the largest predicted contour. In other words, opening removes noise from the image.[21]
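A sketch of this clean-up step using SciPy's binary morphology (the structuring element size is illustrative):

```python
import numpy as np
from scipy import ndimage

def clean_mask(pred_mask, size=3):
    """Opening (erosion then dilation) removes isolated pixels around the predicted contour."""
    structure = np.ones((size, size), dtype=bool)
    opened = ndimage.binary_opening(pred_mask.astype(bool), structure=structure)
    return opened.astype(np.uint8)
```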

The results of the predicted contours are compared in [Table 3] and [Table 4] with studies that have adopted similar U-Net segmentation approaches for the same OARs. In a segmentation study on thoracic CT datasets, Skourt et al.[14],[20] showed that the segmentation of the combined lungs resulted in a DSC of 0.9512. A recent study published by Yang et al.[17] reviewed seven autosegmentation methods, comprising five deep learning and two multi-atlas segmentation approaches. Of these, method 7 is the most similar to our approach, as it operates on a slice-based U-Net technique, whereas method 2 preprocesses the input images by resizing and intensity normalization before using U-Net for training; the results for method 7 are therefore shown in [Table 3] and [Table 4] for DSC and HD comparison, respectively. The DSC values of our study for the heart and the right and left lungs are similar to those of method 7. However, for the spinal cord, the DSC was better with method 7 (0.83 vs. 0.76). One possible reason for this could be that the number of patient datasets used in their study was significantly greater than in ours. However, the HDs we calculated turned out to be better than those of method 7 for the spinal cord and heart. In similar studies by Ferreira et al.,[15] Huynh et al.,[22] and Pang et al.,[16] only the lungs were trained, and their DSCs were between 0.85 and 0.89 [Table 3]. A major advantage of using deep learning models for autosegmentation is the consistency in delineating the OARs/target volume compared with the inter-observer variability of contour volumes among different clinicians. There are situations where manual contouring methods tend to overestimate OARs, as illustrated by our right lung model predicting the lung better than the supplied data in the worst-case scenario.
Table 3: Comparison of our model with other similar works based on Dice similarity coefficient

Table 4: Comparison of our model with other similar works based on Hausdorff distance



Overall, our study with a limited number of training datasets gave results comparable to those of similar studies. However, we aim to improve our models' accuracy by adding more datasets in the near future.


 > Conclusion


The U-Net model proposed in this study can contour OARs for patients undergoing lung cancer radiotherapy. Deep learning-based prediction models have the potential to reduce the clinical time needed for contouring and can potentially remove a significant source of human error in the radiation oncology treatment workflow. This is an ongoing study aimed at extending the automated structure delineation models for other common treatment sites in radiation oncology.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
 > References

1. Forstner D, Yap M. Advances in radiation therapy. Med J Aust 2015;203:394-5.
2. Shi C. Book review: Strategies for radiation therapy treatment planning. J Appl Clin Med Phys 2019;20:166-7.
3. ICRU Report 50. Prescribing, Recording and Reporting Photon Beam Therapy. Bethesda, MD: International Commission on Radiation Units and Measurements; 1993.
4. ICRU Report 62. Prescribing, Recording and Reporting Photon Beam Therapy (Supplement to ICRU Report 50). Bethesda, MD: International Commission on Radiation Units and Measurements; 1999.
5. Terezakis SA, Heron DE, Lavigne RF, Diehn M, Loo BW Jr. What the diagnostic radiologist needs to know about radiation oncology. Radiology 2011;261:30-44.
6. Jameson MG, Holloway LC, Vial PJ, Vinod SK, Metcalfe PE. A review of methods of analysis in contouring studies for radiation oncology. J Med Imaging Radiat Oncol 2010;54:401-10.
7. Razzak M, Naz S, Zaib A. Deep learning for medical image processing: Overview, challenges and the future. In: Lecture Notes in Computational Vision and Biomechanics. Cham: Springer; 2017. p. 323-50.
8. Delpon G, Escande A, Ruef T, Darréon J, Fontaine J, Noblet C, et al. Comparison of automated atlas-based segmentation software for postoperative prostate cancer radiotherapy. Front Oncol 2016;6:178.
9. Furat O, Wang M, Neumann M, Petrich L, Weber M, Krill C, et al. Machine learning techniques for the segmentation of tomographic image data of functional materials. Front Mater 2019;6:145.
10. Hesamian MH, Jia W, He X, Kennedy P. Deep learning techniques for medical image segmentation: Achievements and challenges. J Digit Imaging 2019;32:582-96.
11. Albawi S, Mohammed TA, Al-Zawi S. Understanding of a convolutional neural network. In: 2017 International Conference on Engineering and Technology (ICET). Antalya: IEEE; 2017. p. 1-6.
12. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells W, Frangi A, editors. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science. Vol. 9351. Cham: Springer; 2015.
13. Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell 2017;39:640-51.
14. Skourt A, Hassani BE, Majda A. Lung CT image segmentation using deep neural networks. Procedia Comput Sci 2018;127:109-13.
15. Ferreira FT, Sousa P, Galdran A, Sousa MR, Campilho A. End-to-end supervised lung lobe segmentation. In: 2018 International Joint Conference on Neural Networks (IJCNN). Rio de Janeiro: IEEE; 2018. p. 1-8.
16. Pang T, Guo S, Zhang X, Zhao L. Automatic lung segmentation based on texture and deep features of HRCT images with interstitial lung disease. Biomed Res Int 2019;2019:2045432.
17. Yang J, Veeraraghavan H, Armato SG 3rd, Farahani K, Kirby JS, Kalpathy-Kramer J, et al. Autosegmentation for thoracic radiation treatment planning: A grand challenge at AAPM 2017. Med Phys 2018;45:4568-81.
18. Wee L, de Ruysscher D, Dekker A, Aerts H. NSCLC-Radiomics. The Cancer Imaging Archive (TCIA). Available from: https://www.cancerimagingarchive.net/. [Last accessed on 2020 Jun 10].
19. Zou KH, Warfield SK, Bharatha A, Tempany CM, Kaus MR, Haker SJ, et al. Statistical validation of image segmentation quality based on a spatial overlap index. Acad Radiol 2004;11:178-89.
20. Rucklidge W. Efficient Visual Recognition Using the Hausdorff Distance. New York: Springer; 1996. p. 27-42.
21. Efford N. Digital Image Processing: A Practical Introduction Using Java. Boston, MA: Addison Wesley; 2000.
22. Huynh TH, Anh NN. A deep learning method for lung segmentation on large size chest X-ray image. In: 2019 IEEE-RIVF International Conference on Computing and Communication Technologies (RIVF). Vietnam: IEEE; 2019. p. 1-5.

