
Browsing by Author "Ilhem Aya Ould Chakmakdji"

Open Access
Entre subjectivité pédagogique et objectivité algorithmique : Vers une évaluation hybride des productions écrites à l’ère de l’IA (ChatGPT) — Cas des élèves de 1ère AM au collège "Belouaer Belgassem", d’Al Anasser-BBA
(University of Mohamed Boudiaf, M’sila, 2025-07-03) Ilhem Aya Ould Chakmakdji
This master's thesis explores the evaluation of written productions in French as a Foreign Language (FLE) in the era of Artificial Intelligence (AI). Faced with the challenges of traditional human evaluation (potential subjectivity, workload) and the promises of automated evaluation tools, particularly generative AI such as ChatGPT, a comparative study was conducted, confronting a teacher's evaluation (via transcription) with that of ChatGPT on a corpus of eight written productions by middle school students on the theme of sports. The analysis revealed notable convergences in identifying factual language errors (spelling, basic grammar) and in the students' overall understanding of the topic. However, significant divergences emerged concerning the severity of grading (the teacher often being more lenient), the finesse of stylistic and syntactic analysis (ChatGPT proving more sensitive to repetitions and awkward phrasing), and especially the nature and format of the feedback (annotation codes for the teacher versus written text, complete corrections, and remedial exercises for ChatGPT). The results confirm that AI can serve as a useful complement to human evaluation, particularly for formative feedback on surface linguistic aspects and for speed of return. It can improve certain aspects but cannot replace the teacher's expert judgment, which is essential for evaluating the deep qualitative dimensions of writing. These findings argue for a hybrid approach to evaluation, combining the strengths of both modalities. They also underscore the crucial importance of training teachers and learners in the critical and informed use of these tools, while reaffirming the centrality of human judgment in the final evaluation process.

All Rights Reserved - University of M'Sila - UMB Electronic Portal © 2024
