
Browsing by Author "Hachemi Manar Zahrat ELOla"

Now showing 1 - 1 of 1
Item (Open Access)
Sign Language Recognition System Case Study: Algerian Signs
(University of M’sila, 2025-06-30) Hachemi Manar Zahrat ELOla; Chennafi Karima; ENCA/BRIK Mourad
Considering that communication is essential for human connection, the deaf community faces unique obstacles. Sign language is the most effective means of overcoming these communication barriers, relying on rich hand movements; however, it is often not understood by those outside the deaf community, necessitating interpreters. This has motivated the development of techniques that automate the interpretation task. Despite progress in deep learning, research on recognizing and translating Algerian Arabic sign language remains limited, which prompted us to focus specifically on advancing this area. This thesis introduces improved methodologies for building a comprehensive framework that processes, translates, and generates Algerian Arabic sign language from input videos. We begin by using the MediaPipe library to identify human body parts. For sign language recognition, particularly in Arabic, we then employed three distinct models: a Convolutional Neural Network (CNN), a Long Short-Term Memory network (LSTM), and a hybrid CNN-LSTM approach. We adapted the ArabSign-A dataset to focus on individual words, achieving accuracies of 95.23% for the CNN model, 88.09% for the LSTM model, and 96.66% for the hybrid model. A comparative analysis demonstrated superior discrimination between static signs compared to prior research.
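The pipeline the abstract describes (MediaPipe landmark extraction feeding a CNN/LSTM sequence classifier) can be illustrated with a minimal sketch of the preprocessing stage. The function names, the 21-landmark count (MediaPipe Hands), and the fixed-length zero-padding scheme below are illustrative assumptions, not the thesis's actual code:

```python
import numpy as np

# Hedged sketch: packing per-frame MediaPipe hand landmarks into a
# fixed-length sequence tensor suitable as input to a CNN/LSTM classifier.
# The 21-landmark count matches MediaPipe Hands; padding length is arbitrary.

NUM_LANDMARKS = 21            # MediaPipe Hands yields 21 (x, y, z) points
FEATURES = NUM_LANDMARKS * 3  # 63 features per frame

def frame_to_vector(landmarks):
    """Flatten one frame's landmarks [(x, y, z), ...] into a 63-dim vector."""
    arr = np.asarray(landmarks, dtype=np.float32)
    assert arr.shape == (NUM_LANDMARKS, 3)
    return arr.reshape(-1)

def pad_sequence(frames, target_len=30):
    """Stack per-frame vectors into (target_len, 63), zero-padding short clips."""
    seq = np.stack([frame_to_vector(f) for f in frames])
    out = np.zeros((target_len, FEATURES), dtype=np.float32)
    out[: min(len(seq), target_len)] = seq[:target_len]
    return out

# Example: a 10-frame clip becomes a fixed (30, 63) input tensor, which a
# CNN (over the temporal axis), an LSTM, or a hybrid model could consume.
clip = [[(0.1, 0.2, 0.0)] * NUM_LANDMARKS for _ in range(10)]
x = pad_sequence(clip)
print(x.shape)  # (30, 63)
```

Representing each frame as a flat landmark vector, rather than raw pixels, is what lets comparatively small CNN, LSTM, and hybrid models reach the reported accuracies on individual-word recognition.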

All Rights Reserved - University of M'Sila - UMB Electronic Portal © 2024
