Fine-Tuning Strategies for Sentiment Analysis in the Algerian Dialect: A Comparative Study on DziriBERT
Abstract
The aim of this work is to explore sentiment analysis in the Algerian dialect through the adaptation of pre-trained language models. We focus on DziriBERT, a model specifically developed for the Algerian dialect and derived from BERT (Bidirectional Encoder Representations from Transformers), an architecture recognised for its performance across a variety of natural language processing tasks. Three fine-tuning approaches were studied: full fine-tuning, freeze tuning, and LoRA (Low-Rank Adaptation) tuning. Experiments were conducted on two separate corpora, ADArabic and Adouane, to evaluate the robustness and generalisation of the DziriBERT model. The results show that the LoRA method achieved the best performance: on the ADArabic corpus it reached 83.26% accuracy and an F1-score of 80.94%, and on the Adouane corpus it again led, with an accuracy of 85.28% and an F1-score of 82.52%. These results confirm the relevance of DziriBERT for sentiment analysis in the Algerian dialect and highlight the effectiveness of LoRA tuning as a lightweight and efficient alternative to full fine-tuning, since it significantly reduces the number of trainable parameters.
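For concreteness, the sketch below illustrates how LoRA tuning of a BERT-style encoder such as DziriBERT can be set up with the Hugging Face transformers and peft libraries. The checkpoint name, label count, and LoRA hyperparameters (rank, scaling, target modules) are illustrative assumptions, not the configuration reported in this study.

    # Minimal sketch of LoRA tuning on a BERT-style encoder using Hugging
    # Face `transformers` and `peft`. Checkpoint name, label count, and
    # hyperparameters are illustrative assumptions, not the paper's settings.
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_name = "alger-ia/dziribert"  # assumed public DziriBERT checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=2  # assumed binary sentiment labels
    )

    # Freeze tuning (for contrast) would simply disable encoder gradients:
    #   for param in model.bert.parameters():
    #       param.requires_grad = False

    # LoRA keeps the pre-trained weights frozen and trains only small
    # low-rank update matrices injected into the attention projections.
    lora_config = LoraConfig(
        r=8,                                # rank of the low-rank update
        lora_alpha=16,                      # scaling applied to the update
        target_modules=["query", "value"],  # BERT attention projections
        lora_dropout=0.1,
        task_type="SEQ_CLS",                # sequence classification task
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()      # reports the small trainable fraction

The resulting model can then be trained with the standard transformers Trainer; only the adapter weights need to be stored at save time, which is what makes the approach lightweight in terms of adjustable parameters.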