Author: Moustari Mohamed Abderaouf
2025-06-04
2025-06-04
2025-05-06
https://repository.univ-msila.dz/handle/123456789/46366

The fundus images of patients with Diabetic Retinopathy (DR) often display numerous lesions scattered across the retina. Current methods typically use the entire image for network learning, which is limiting because DR abnormalities are usually localized, and training Convolutional Neural Networks (CNNs) on global images can be challenging due to excessive noise. It is therefore crucial to enhance the visibility of the important regions and focus the recognition system on them to improve accuracy. This thesis investigates two tasks. The first is a novel two-branch attention-guided convolutional neural network (AG-CNN) with initial image preprocessing for DR classification: the AG-CNN first establishes overall attention on the entire image with a global branch, then incorporates a local branch to compensate for any lost discriminative cues. The second is improving diabetic retinopathy classification by combining handcrafted and deep features: we extract LBP, HOG, and GLCM features to capture texture patterns and use DenseNet-121 for deep feature extraction. Fusing these features enables a more comprehensive representation of the retinal images, enhancing the model's ability to discriminate between different severity levels of diabetic retinopathy. We conduct extensive experiments on the APTOS 2019 DR dataset for both tasks.

Language: en
Keywords: Gradient Weighted Class Activation Mapping - Deep Learning - Diabetic Retinopathy Classification - Two-stage System - Feature Extraction - Handcrafted Features - Feature Fusion - Image Pre-processing - Region of Interest Extraction
Title: Deep learning-based medical data analysis for disease prediction and classification
Type: Thesis
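The feature-fusion idea in the abstract (handcrafted texture descriptors concatenated with deep CNN features) can be sketched as follows. This is a minimal illustration, not the thesis implementation: the LBP and GLCM extractors below are simplified stand-ins for the standard descriptors (HOG is omitted for brevity), and the DenseNet-121 embedding is replaced by a placeholder 1024-dimensional vector, since the trained network is not part of this record.

```python
import numpy as np

def lbp_histogram(gray: np.ndarray, bins: int = 256) -> np.ndarray:
    """Basic 8-neighbour Local Binary Pattern histogram (simplified)."""
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint16)
    center = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint16) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)  # normalised histogram

def glcm_contrast(gray: np.ndarray, levels: int = 8) -> np.ndarray:
    """Contrast of a horizontal-offset grey-level co-occurrence matrix."""
    q = (gray.astype(np.float64) / 256 * levels).astype(np.intp)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= max(glcm.sum(), 1)
    i, j = np.indices((levels, levels))
    return np.array([np.sum(glcm * (i - j) ** 2)])

def fuse_features(gray: np.ndarray, deep_feat: np.ndarray) -> np.ndarray:
    """Concatenate handcrafted and deep features into one vector."""
    return np.concatenate([lbp_histogram(gray), glcm_contrast(gray), deep_feat])

# Example: a random 64x64 grayscale patch standing in for a fundus image,
# and a placeholder embedding (DenseNet-121's penultimate layer is 1024-d).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
deep = rng.standard_normal(1024)
fused = fuse_features(img, deep)
print(fused.shape)  # 256 LBP bins + 1 GLCM stat + 1024 deep dims = (1281,)
```

The fused vector would then feed a downstream classifier over the DR severity levels; the abstract does not specify the classifier, so that stage is left out here.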