Evaluating Bias Mitigation Techniques in Large Language Models: A Study on the Accuracy-Fairness Trade-off in Decision-Making

  • Unique Paper ID: 182533
  • Pages: 2422–2429
Abstract

The rise of large language models (LLMs) has brought significant progress across many decision-making domains, from hiring recommendations to legal risk assessments. However, as these models are increasingly integrated into real-world systems, concerns around fairness and unintended bias have become more prominent. While model accuracy is often the primary benchmark during development, fairness remains an underrepresented metric, despite its crucial impact on equitable decision-making. This study investigates the trade-offs between optimizing for accuracy and ensuring fairness in LLMs used for decision-making. We conduct a systematic evaluation of fairness-aware interventions applied to pre-trained transformer models across real-world, demographically sensitive datasets. Through comparative analysis, the study offers insights into how different mitigation strategies impact both performance and fairness, informing more responsible and equitable deployment of LLMs in practice.
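
The abstract does not specify which fairness metrics the study uses, but evaluations of this kind typically report an accuracy score alongside a group-fairness gap for each mitigation strategy. Below is a minimal sketch of such a joint evaluation, assuming binary decisions and a single binary sensitive attribute; the demographic-parity gap is one common choice of fairness metric, and the data and stand-in "model" here are simulated for illustration, not the authors' actual setup.

```python
import numpy as np

# Simulated binary decisions (hypothetical data, for illustration only)
rng = np.random.default_rng(seed=0)
n = 1000
y_true = rng.integers(0, 2, size=n)   # ground-truth labels
group = rng.integers(0, 2, size=n)    # binary sensitive attribute
# Stand-in "model": agrees with the ground truth 80% of the time
y_pred = np.where(rng.random(n) < 0.8, y_true, 1 - y_true)

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return float((y_true == y_pred).mean())

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between the two groups."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return float(abs(rate_0 - rate_1))

# Reporting both numbers side by side makes the accuracy-fairness
# trade-off of a given mitigation strategy directly comparable.
print(f"accuracy               = {accuracy(y_true, y_pred):.3f}")
print(f"demographic parity gap = {demographic_parity_gap(y_pred, group):.3f}")
```

Running the same two-number report before and after applying a mitigation technique (for example, reweighing training data or post-processing decision thresholds) is the standard way to quantify how much accuracy a fairness intervention costs.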

Copyright & License

Copyright © 2025. The authors retain the copyright of this article. This article is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{182533,
        author = {Priyanka Harishchandra Raikwad and Shubhangi P. Tidake},
        title = {Evaluating Bias Mitigation Techniques in Large Language Models: A Study on the Accuracy-Fairness Trade-off in Decision-Making},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {2},
        pages = {2422--2429},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=182533},
        abstract = {The rise of large language models (LLMs) has brought significant progress across many decision-making domains, from hiring recommendations to legal risk assessments. However, as these models are increasingly integrated into real-world systems, concerns around fairness and unintended bias have become more prominent. While model accuracy is often the primary benchmark during development, fairness remains an underrepresented metric, despite its crucial impact on equitable decision-making.
This study investigates the trade-offs between optimizing for accuracy and ensuring fairness in LLMs used for decision-making. We conduct a systematic evaluation of fairness-aware interventions applied to pre-trained transformer models across real-world, demographically sensitive datasets. Through comparative analysis, the study offers insights into how different mitigation strategies impact both performance and fairness, informing more responsible and equitable deployment of LLMs in practice.},
        keywords = {Large Language Models, Fairness, Bias Mitigation, Accuracy-Fairness Trade-off, Decision-Making, Fairness-Aware Interventions, Model Optimization, Responsible AI, Fairness Metrics},
        month = {July},
}

Cite This Article

Raikwad, P. H., & Tidake, S. P. (2025). Evaluating Bias Mitigation Techniques in Large Language Models: A Study on the Accuracy-Fairness Trade-off in Decision-Making. International Journal of Innovative Research in Technology (IJIRT), 12(2), 2422–2429.
