Towards an Effective Prompting With AI Systems: The Jujitsu Framework
Keywords: business, economics, AI, artificial intelligence, large language model, LLM, prompt engineering, prompting, framework, objective communication

Abstract
As Large Language Models (LLMs) become central to AI applications, prompt engineering has emerged as a critical skill for optimizing model output. However, existing prompting techniques lack a solid theoretical foundation and continue to exhibit persistent limitations, such as insufficient objectivity and inadequate awareness of training data. We present the Jujitsu Framework, a principled model for prompt engineering that aligns with both the core mechanisms of LLMs and the principles of objective communication. The framework addresses critical gaps in existing methods and provides a systematic, transferable approach to prompt design. By working with, rather than against, the LLM architecture, it enhances accuracy, interpretability, and versatility across domains, thereby advancing the evolving landscape of human-AI interaction.