Optimizing interactions: Strategies for prompt engineering in large language models

Authors

  • V PremaLatha, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, AP, India-522302
  • Dinesh Kumar Anguraj, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, AP, India-522302
  • Nikhat Parveen, Department of Information Science, University of Bisha, P.O. Box 551, Bisha, Saudi Arabia

DOI:

https://doi.org/10.71459/edutech202524

Keywords:

Prompt engineering, Large Language Models (LLMs), Artificial intelligence, Design patterns, Frameworks, Information retrieval, AI communication

Abstract

This paper examines the rapidly evolving field of prompt engineering, an essential skill for working with modern artificial intelligence systems, particularly Large Language Models (LLMs) such as ChatGPT. Prompt engineering, the careful formulation of precise and effective prompts, directs LLMs to follow explicit parameters, carry out complex procedures, and maintain the quality and consistency of their outputs. We present a collection of prompt engineering techniques organized as discrete patterns. These patterns are analogous to design patterns in software engineering: reusable, adaptable solutions to problems that commonly arise when interacting with LLMs. Our survey reviews several prompt engineering frameworks and shows how they address a range of issues in information retrieval tasks, and it examines pattern-oriented techniques that have been shown to elicit improved responses from AI models. The paper aims to provide a comprehensive catalogue of these prompt engineering patterns, offering practical insights and strategies that help users make full use of their interactions with LLMs, thereby contributing to the field of AI communication.
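The pattern-based approach described in the abstract can be sketched with a minimal illustrative example (hypothetical, not code from the paper): a "persona" prompt pattern captures a recurring prompt structure, such as role, task, and constraints, the way a software design pattern captures a recurring design.

```python
# Hypothetical sketch of a reusable prompt "pattern": a persona pattern
# rendered as a small template function. The role, task, and constraint
# names below are illustrative assumptions, not from the paper.

def persona_prompt(role: str, task: str, constraints: list[str]) -> str:
    """Render the persona pattern: act-as role + task + explicit constraints."""
    lines = [f"You are {role}.", f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = persona_prompt(
    role="a senior code reviewer",
    task="review the following function for correctness",
    constraints=["cite line numbers", "limit feedback to three items"],
)
print(prompt)
```

Because the pattern is a function rather than a one-off string, the same structure can be reused across tasks and models, which is the sense in which such patterns parallel software design patterns.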

References

[1] Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N.; Kaiser, Łukasz; Polosukhin, Illia (2017). "Attention Is All You Need." arXiv:1706.03762v7 [cs.CL]. Retrieved August 2, 2023, from arXiv.

[2] Diab, Mohamad; Herrera, Julian; Chernow, Bob (2022). "Stable Diffusion Prompt Book." Retrieved August 7, 2023.

[3] Ziegler, Albert; Berryman, John (2023). "A Developer’s Guide to Prompt Engineering and LLMs." GitHub Blog. Published July 17, 2023. Retrieved from GitHub Blog.

[4] Radford, Alec; Wu, Jeffrey; Child, Rewon; Luan, David; Amodei, Dario; Sutskever, Ilya (2019). "Language Models are Unsupervised Multitask Learners." OpenAI Blog. Retrieved from OpenAI.

[5] Gu, J.; et al. (2023). "A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models." arXiv preprint arXiv:2307.12980.

[6] "Prompt Engineering: A Detailed Guide for 2024" (2024). Published by DataCamp.

[7] "What is an AI Prompt Engineer and How Do You Become One?" (2024). Published by TechTarget.

[8] "Six Skills You Need to Become an AI Prompt Engineer" (2024). Published by ZDNet.

[9] "ChatGPT Prompt Engineering for Developers" (2024). Published by DeepLearning.AI.

[10] "What is Prompt Engineering? - Generative AI" (2024). Published by AWS.

Published

2025-04-15

How to Cite

PremaLatha, V., Dinesh Kumar, A., & Parveen, N. (2025). Optimizing interactions: Strategies for prompt engineering in large language models. Edu - Tech Enterprise, 3, 24. https://doi.org/10.71459/edutech202524