Does Prompt Design Affect LLM Outputs A Study Across Structured and Vague Inputs

  • Unique Paper ID: 183166
  • PageNo: 544-548
  • Abstract: Prompt engineering is essential for improving the performance of large language models (LLMs) like GPT-3.5 and GPT-4. This study investigates how various prompt structures—such as differences in wording, tone, specificity, and formatting—affect LLM performance in key natural language processing tasks, including sentiment analysis, summarization, and question answering. Multiple prompt styles were evaluated for each task, ranging from direct to descriptive and formal to informal. Both human evaluations and quantitative metrics, such as accuracy and clarity scores, were used for assessment. The findings indicate that even slight modifications in prompt wording can greatly impact the clarity, accuracy, and completeness of the model's outputs. These results highlight the significance of prompt design in practical LLM applications and suggest that how prompts are formulated is a vital element in achieving the best outcomes.
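The evaluation loop described in the abstract—posing the same task under several prompt styles and comparing per-style accuracy—can be sketched as follows. This is a minimal illustration, not the authors' code: `query_model` is a hypothetical stand-in for a real LLM API call, and the prompt templates and examples are invented for demonstration.

```python
def query_model(prompt: str) -> str:
    # Hypothetical stand-in: a real study would call an LLM API here.
    # The stub responds more reliably to the direct prompt, so the
    # style comparison below produces a visible accuracy difference.
    text = prompt.rsplit(":", 1)[-1].strip()
    if prompt.startswith("Classify"):
        return "positive" if ("love" in text or "great" in text) else "negative"
    return "positive"  # vaguer phrasing yields a less reliable answer

# Two prompt styles for the same sentiment task (direct vs. vague).
PROMPT_STYLES = {
    "direct": "Classify the sentiment as positive or negative: {x}",
    "vague": "What do you think about this? {x}",
}

# Tiny labeled set; the real study used human evaluation plus metrics.
EXAMPLES = [
    ("I love this phone, great battery.", "positive"),
    ("Terrible service, never again.", "negative"),
]

def accuracy_by_style(styles, data):
    """Run every prompt style over the data and return accuracy per style."""
    scores = {}
    for name, template in styles.items():
        correct = sum(
            query_model(template.format(x=x)) == label for x, label in data
        )
        scores[name] = correct / len(data)
    return scores

print(accuracy_by_style(PROMPT_STYLES, EXAMPLES))
```

With the stub above, the direct style scores higher than the vague one, mirroring the paper's finding that small changes in prompt wording can shift output accuracy.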

Copyright & License

Copyright © 2026. Authors retain the copyright of this article. It is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{183166,
        author = {Aakanksha Bomble and Prashant Kulkarni and Vishnu Potdar},
        title = {Does Prompt Design Affect LLM Outputs A Study Across Structured and Vague Inputs},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {3},
        pages = {544-548},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=183166},
        abstract = {Prompt engineering is essential for improving the performance of large language models (LLMs) like GPT-3.5 and GPT-4. This study investigates how various prompt structures—such as differences in wording, tone, specificity, and formatting—affect LLM performance in key natural language processing tasks, including sentiment analysis, summarization, and question answering. Multiple prompt styles were evaluated for each task, ranging from direct to descriptive and formal to informal. Both human evaluations and quantitative metrics, such as accuracy and clarity scores, were used for assessment. The findings indicate that even slight modifications in prompt wording can greatly impact the clarity, accuracy, and completeness of the model's outputs. These results highlight the significance of prompt design in practical LLM applications and suggest that how prompts are formulated is a vital element in achieving the best outcomes.},
        keywords = {},
        month = {August},
        }

Cite This Article

Bomble, A., Kulkarni, P., & Potdar, V. (2025). Does Prompt Design Affect LLM Outputs A Study Across Structured and Vague Inputs. International Journal of Innovative Research in Technology (IJIRT), 12(3), 544–548.
