Classification of Benign and Malignancy in Lung Cancer Using Capsule Networks with Dynamic Routing Algorithm on Computed Tomography Images
DOI: https://doi.org/10.37965/jait.2023.0218
Keywords: capsule network, lung cancer, computed tomography, deep learning, image classification
Abstract
Lung cancer is widely recognized as one of the deadliest types of cancer, affecting both women and men. Detecting lung cancer at an early stage is therefore crucial for creating an accurate treatment plan and forecasting the patient's response to the adopted treatment. For this reason, convolutional neural networks (CNNs) for lung cancer classification have recently attracted growing attention. CNNs have great potential, but they require large amounts of training data and struggle with input alterations. To address these limitations, capsule networks (CapsNets) have been proposed as a novel machine-learning architecture with the potential to transform the field of deep learning. Capsule networks, the focus of this work, are attractive because they can withstand rotation and affine transformation with relatively little training data. This research optimizes the performance of CapsNets by designing a new architecture tailored to the task of lung cancer classification. The findings demonstrate that the proposed capsule network method outperforms CNNs on this task. Three capsule networks are developed in this research for lung cancer classification: a CapsNet with a single convolution layer and 32 features (CN-1-32), a CapsNet with a single convolution layer and 64 features (CN-1-64), and a CapsNet with two convolution layers and 64 features (CN-2-64). These networks classify lung nodules as benign or malignant using CT images. The LIDC-IDRI database was used to assess the performance of the networks. Based on the testing results, the CN-2-64 network performed best of the three, with a specificity of 98.37%, a sensitivity of 97.47%, and an accuracy of 97.92%.
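For readers unfamiliar with routing-by-agreement, the following is a minimal PyTorch sketch of a capsule layer with dynamic routing, wrapped in a toy two-class (benign/malignant) network. It is not the authors' CN-1-32, CN-1-64, or CN-2-64 architecture; the 9x9 kernels, 8D primary capsules, 16D class capsules, three routing iterations, and 32x32 input size are illustrative assumptions borrowed from the original CapsNet formulation.

```python
# Minimal sketch of dynamic routing-by-agreement (Sabour et al., 2017).
# Hyperparameters and layer sizes are assumptions, not the paper's exact networks.
import torch
import torch.nn as nn
import torch.nn.functional as F


def squash(s, dim=-1, eps=1e-8):
    """Squash non-linearity: short vectors shrink toward 0, long vectors toward unit length."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)


class RoutingCapsules(nn.Module):
    """Class-capsule layer that combines primary capsules via dynamic routing."""

    def __init__(self, in_caps, in_dim, out_caps=2, out_dim=16, iters=3):
        super().__init__()
        self.iters = iters
        # One transformation matrix per (output capsule j, input capsule i) pair.
        self.W = nn.Parameter(0.01 * torch.randn(1, out_caps, in_caps, out_dim, in_dim))

    def forward(self, u):
        # u: (B, in_caps, in_dim) -> prediction vectors u_hat: (B, out_caps, in_caps, out_dim)
        u_hat = (self.W * u[:, None, :, None, :]).sum(dim=-1)
        b = torch.zeros(u_hat.shape[:3], device=u.device)      # routing logits, start at 0
        for _ in range(self.iters):
            c = F.softmax(b, dim=1)                             # coupling coefficients
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=2))    # output capsules (B, out_caps, out_dim)
            b = b + (u_hat * v.unsqueeze(2)).sum(dim=-1)        # agreement update
        return v


class MiniCapsNet(nn.Module):
    """Toy benign-vs-malignant nodule classifier: conv -> primary capsules -> class capsules."""

    def __init__(self, img_size=32):
        super().__init__()
        self.conv = nn.Conv2d(1, 64, kernel_size=9)                     # feature extraction
        self.primary = nn.Conv2d(64, 16 * 8, kernel_size=9, stride=2)   # 16 types of 8D capsules
        self.feat = ((img_size - 8) - 9) // 2 + 1                       # spatial size after both convs
        self.routing = RoutingCapsules(in_caps=16 * self.feat ** 2, in_dim=8,
                                       out_caps=2, out_dim=16)

    def forward(self, x):
        p = self.primary(F.relu(self.conv(x)))                          # (B, 128, feat, feat)
        B = p.size(0)
        p = p.view(B, 16, 8, self.feat, self.feat).permute(0, 1, 3, 4, 2)
        p = squash(p.reshape(B, -1, 8))                                 # primary capsule vectors
        v = self.routing(p)                                             # two 16D class capsules
        return v.norm(dim=-1)                                           # capsule lengths = class scores
```

In this sketch the length of each class capsule acts as the class score, so a training loop would typically use a margin loss on the two capsule lengths rather than softmax cross-entropy on logits.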
License
Copyright (c) 2023 Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.