Segmentation of Diabetic Retinopathy Lesions in Retinal Fundus Images Using Multi-View Convolutional Neural Networks


Hassan Khastavaneh 1,*, Hossein Ebrahimpour-Komleh 1

1 Department of Computer and Electrical Engineering, University of Kashan, Kashan, Iran

How to Cite: Khastavaneh H, Ebrahimpour-Komleh H. Segmentation of Diabetic Retinopathy Lesions in Retinal Fundus Images Using Multi-View Convolutional Neural Networks, Iran J Radiol. 2019;16(Special Issue):e99148. doi: 10.5812/iranjradiol.99148.


Iranian Journal of Radiology: 16 (Special Issue); e99148
Published Online: December 8, 2019
Article Type: Abstract
Received: October 26, 2019
Accepted: December 8, 2019


Background: Diabetic retinopathy is one of the leading causes of blindness worldwide and the most important complication of diabetes mellitus. At its different stages, the disease creates various lesions in the retina, which appear as hemorrhages, exudates, and microaneurysms. The count and type of these lesions determine the severity and progression of the disease, and their early detection can lead to better treatment and blindness prevention. Detecting the lesions and specifying their counts and types requires accurate segmentation. Since manual segmentation of retinal lesions is tedious and time-consuming, automated segmentation is preferred; in screening programs covering large populations, it is inevitable. Therefore, automatic segmentation of retinal lesions is the first stage of any typical computer-aided diagnosis system for early diagnosis of the disease. Automated segmentation is a challenging task due to the shape diversity and inhomogeneity of these lesions. Hence, more advanced segmentation techniques capable of modeling lesion complexities are required to tackle the difficulties of automated segmentation of diabetic retinopathy lesions in retinal fundus images.

Objectives: In this study, we proposed an automated pixel-based method for the segmentation of different types of lesions on retinal fundus images.

Methods: This method utilized a convolutional neural network with a particular architecture to describe and label the pixels of fundus images as either normal or lesion. The proposed method had four phases: pre-processing, view generation, segmentation, and post-processing. The pre-processing phase enhanced input images for better segmentation. In the view generation phase, multiple views describing a pixel from different perspectives were extracted for every pixel of the image. The segmentation phase, itself a convolutional neural network capable of handling multi-view data, received the views corresponding to each pixel and decided whether the pixel belonged to a normal or a lesion area. With its unique architecture, the segmentation network could handle the diversity and complexity of retinal lesions, leading to accurate segmentation. Finally, the post-processing phase refined the segmentation results by reducing false positives. In addition to segmentation, the proposed method detected lesion types during the segmentation process.
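The view generation phase can be illustrated with a minimal sketch. The abstract does not specify how views are formed, so the patch sizes, zero-padding strategy, and function name below are illustrative assumptions: each "view" is taken to be a square patch of a different size centered on the pixel being labeled.

```python
import numpy as np

def generate_views(image, row, col, sizes=(9, 17, 33)):
    """Extract square patches of several sizes centered on one pixel.

    Each patch is one "view" describing the pixel at a different
    spatial scale; the image is zero-padded so border pixels remain
    valid patch centers. (Patch sizes here are illustrative, not the
    paper's actual configuration.)
    """
    views = []
    for s in sizes:
        half = s // 2
        padded = np.pad(image, half, mode="constant")
        # After padding by `half`, original (row, col) shifts to
        # (row + half, col + half), so the slice below is centered
        # on the target pixel.
        patch = padded[row:row + s, col:col + s]
        views.append(patch)
    return views

# Toy example: three views for the center pixel of a 64 x 64 image.
img = np.random.rand(64, 64)
vs = generate_views(img, 32, 32)
print([v.shape for v in vs])  # [(9, 9), (17, 17), (33, 33)]
```

In a multi-view network of the kind described, each view would feed a separate convolutional branch, with the branches merged before the final normal-versus-lesion decision for that pixel.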

Results: The proposed method was implemented, and its performance was evaluated using standard measures: accuracy, sensitivity, specificity, Dice similarity coefficient, and Jaccard coefficient. The segmentation network was trained with 54 images and tested with 27 images. The experimental results were very promising and comparable to the state-of-the-art methods of fundus lesion segmentation.
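The reported measures are all standard pixel-wise statistics derived from the confusion matrix of a predicted binary lesion mask against a ground-truth mask; a minimal sketch of their computation (function name and toy masks are illustrative):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise evaluation of a binary segmentation mask.

    Counts true/false positives/negatives and derives the five
    measures named in the text.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)      # lesion pixels found
    tn = np.sum(~pred & ~truth)    # normal pixels kept normal
    fp = np.sum(pred & ~truth)     # normal pixels marked lesion
    fn = np.sum(~pred & truth)     # lesion pixels missed
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dice":        2 * tp / (2 * tp + fp + fn),
        "jaccard":     tp / (tp + fp + fn),
    }

# Toy 3x3 masks: 2 true positives, 1 false positive, 1 false negative.
pred  = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
m = segmentation_metrics(pred, truth)
print(m["dice"], m["jaccard"])  # 0.666... 0.5
```

Note that Dice and Jaccard ignore true negatives, which makes them better suited than plain accuracy for small lesions occupying a tiny fraction of the image.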

Conclusion: A method based on convolutional neural networks for the segmentation of retinal lesions on fundus images was proposed. Beyond its promising experimental results, the method offers the significant capability of jointly producing separate lesion masks for different lesion types.

Copyright © 2019, Author(s). This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License, which permits copying and redistributing the material in noncommercial usages only, provided the original work is properly cited.