Convolutional Neural Networks for Automated Diagnosis of Diabetic Retinopathy in Fundus Images
Keywords: automated diagnosis of diabetic retinopathy, DCNNs, diabetic retinopathy, self-diagnosis
Diabetic retinopathy (DR), a long-term complication of diabetes, is notoriously hard to detect in its early stages because it presents few visible symptoms. Standard diagnostic procedures for DR include optical coherence tomography and digital fundus imaging. If digital fundus images alone could provide a reliable diagnosis, eliminating costly optical coherence tomography would benefit both optometrists and their patients. We propose a novel approach to this problem using deep convolutional neural networks (DCNNs). Our approach deviates from standard DCNN methods by replacing the typical max-pooling layers with fractional max-pooling layers. To capture more subtle information for classification, two such DCNNs, each with a different number of layers, are trained. The features extracted by these DCNNs, together with features derived from image metadata, are then used to train a support vector machine classifier. Our experiments use data from Kaggle's open DR detection database: the model was trained and evaluated on 34,124 training images, 1,000 validation images, and 53,572 test images. The proposed DR classifier assigns each image to one of five classes, graded 0 to 4, corresponding to the stages of DR progression. Experimental results show a higher recognition rate (86.17%) than those reported in the existing literature, indicating that the proposed strategy may be effective. We have also developed a machine learning algorithm and accompanying software, named Deep Retina. Fundus images captured by an ordinary person with a portable ophthalmoscope can be analyzed instantly by our system, making it potentially useful for self-diagnosis, at-home care, and telemedicine.
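The abstract's key architectural change is swapping standard max-pooling for fractional max-pooling, which downsamples by a non-integer ratio using pseudorandom pooling regions. The paper's exact implementation is not given here, so the sketch below is a minimal, illustrative NumPy version for a single 2D feature map, using Graham's (2014) idea of disjoint pooling regions built from a random sequence of increments of 1 and 2; the function names and the square-output assumption are ours, not the authors'.

```python
import numpy as np

def fmp_increments(n_in, n_out, rng):
    """Random increments of 1 or 2 that sum to n_in across n_out regions.

    Requires n_out <= n_in <= 2 * n_out, i.e. a pooling ratio between 1 and 2.
    """
    n_twos = n_in - n_out
    incs = np.array([2] * n_twos + [1] * (n_out - n_twos))
    rng.shuffle(incs)  # randomize where the wider regions fall
    return incs

def fractional_max_pool_2d(x, n_out, rng):
    """Fractional max-pooling of a 2D array down to an n_out x n_out map."""
    h, w = x.shape
    # Region boundaries along each axis: cumulative sums of the increments.
    row_edges = np.concatenate([[0], np.cumsum(fmp_increments(h, n_out, rng))])
    col_edges = np.concatenate([[0], np.cumsum(fmp_increments(w, n_out, rng))])
    out = np.empty((n_out, n_out), dtype=x.dtype)
    for i in range(n_out):
        for j in range(n_out):
            region = x[row_edges[i]:row_edges[i + 1],
                       col_edges[j]:col_edges[j + 1]]
            out[i, j] = region.max()  # max over the pseudorandom region
    return out

# Example: pool a 6x6 map down to 4x4 (ratio 1.5, impossible with stride-2 pooling)
rng = np.random.default_rng(0)
pooled = fractional_max_pool_2d(np.arange(36.0).reshape(6, 6), 4, rng)
```

Because the region boundaries are re-randomized per call, the same input can yield slightly different pooled maps, which acts as a mild regularizer during training; deep-learning frameworks expose the same idea (e.g. PyTorch's `nn.FractionalMaxPool2d`) for batched multi-channel tensors.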
Copyright (c) 2023 Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.