A Comprehensive Review of Generative AI and Large Language Models: Techniques, Trends, and Challenges

  • Unique Paper ID: 201624
  • Volume: 12
  • Issue: 12
  • PageNo: 4252-4254
  • Abstract: Generative artificial intelligence has emerged as one of the most consequential developments in modern computing. Large language models, in particular, have made it possible for machines to produce fluent text, code, and other forms of content with a level of flexibility that was previously difficult to achieve. This review examines the technical foundations of these systems, with emphasis on transformer architectures, self-supervised pretraining, fine-tuning strategies, reinforcement learning from human feedback, parameter-efficient adaptation, and retrieval-augmented generation. It also considers recent directions in the field, including multimodal models, domain-specific systems, and the growing integration of generative AI into practical workflows across industry and research. Alongside these advances, the paper discusses persistent concerns related to factual reliability, bias, interpretability, privacy, computational demand, and governance. The goal is to provide a clear and balanced overview of where the field stands, what has enabled its progress, and what obstacles still need to be addressed before these systems can be used responsibly at scale.

Copyright & License

Copyright © 2026 Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{201624,
        author = {Mr. Deshmukh Harshad Mangesh and Mr. Tathe S.G.},
        title = {A Comprehensive Review of Generative AI and Large Language Models: Techniques, Trends, and Challenges},
        journal = {International Journal of Innovative Research in Technology},
        year = {2026},
        volume = {12},
        number = {12},
        pages = {4252-4254},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=201624},
        abstract = {Generative artificial intelligence has emerged as one of the most consequential developments in modern computing. Large language models, in particular, have made it possible for machines to produce fluent text, code, and other forms of content with a level of flexibility that was previously difficult to achieve. This review examines the technical foundations of these systems, with emphasis on transformer architectures, self-supervised pretraining, fine-tuning strategies, reinforcement learning from human feedback, parameter-efficient adaptation, and retrieval-augmented generation. It also considers recent directions in the field, including multimodal models, domain-specific systems, and the growing integration of generative AI into practical workflows across industry and research. Alongside these advances, the paper discusses persistent concerns related to factual reliability, bias, interpretability, privacy, computational demand, and governance. The goal is to provide a clear and balanced overview of where the field stands, what has enabled its progress, and what obstacles still need to be addressed before these systems can be used responsibly at scale.},
        keywords = {Generative AI, Large Language Models, Transformers, Self-Supervised Learning, RLHF, Multimodal Models, AI Ethics, Bias and Fairness, Natural Language Processing, Artificial Intelligence Trends.},
        month = {May},
        }

Cite This Article

Deshmukh, H. M., & Tathe, S. G. (2026). A Comprehensive Review of Generative AI and Large Language Models: Techniques, Trends, and Challenges. International Journal of Innovative Research in Technology (IJIRT), 12(12), 4252–4254.
