Prompt Literacy

DOI:10.59668/371.14442
Keywords: Artificial Intelligence, Generative AI, prompt engineering, prompt literacy, human-AI interaction, natural language processing

Prompt literacy enables anyone to communicate with and direct generative AI systems without needing expertise in computer programming. Prompts are commands formulated in natural human language that unlock the capabilities of AI and guide its outputs. With prompt literacy, people can successfully interact with generative AI to achieve defined objectives, while exercising judgment and responsibility. Prompts serve as an accessible interface between users and automated systems, translating human intents into AI-compatible directives. Effectively crafted prompts are key to enabling generative AI to produce meaningful, targeted results. Just as traditional literacy involves mastering the written word, developing prompt literacy requires learning how to clearly formulate instructions for AI in its processing language. Command of this skill allows humans to reap benefits from AI, directing its open-ended potential to useful and ethical ends through targeted prompts.

In the digital era, the emergence of Generative AI (GenAI) systems has revolutionized the way we interact with technology (Chiu, 2023; Moorhouse et al., 2023). GenAI systems or tools are available to the general public as both free and paid services, and include names such as ChatGPT, Claude AI, Google Bard, Pi.ai, and Bing AI. At the heart of this interaction lies prompt literacy, a skill that empowers individuals to communicate with AI without the complexity of programming languages (Gattupalli et al., 2023; Jacobs & Fisher, 2023). It is a bridge crafted from everyday language, enabling users to guide AI through tasks with simple, direct commands. This skill has rapidly become a cornerstone of digital fluency, akin to learning to navigate a website or send an email in the early days of the internet.

The evolution of prompt literacy parallels the rise of user-friendly AI interfaces (Abedin et al., 2022). GenAI tools now respond to the layperson’s inquiries, from complex problem-solving to creating art in the form of images (such as DALL-E 3). At their core, prompts are conversations: a dialogue in which a human translates abstract thought into concrete AI action.

For the general public and STEM educators alike, mastering prompt literacy is not just about efficiency; it’s about shaping the future. As GenAI becomes more integrated into our daily lives and learning environments, the ability to harness its potential responsibly and effectively becomes crucial. Prompt literacy is not just about the commands we give; it is about understanding the language that breathes life into ideas, making technology an extension of human intent.

Crafting Effective Prompts

Crafting effective prompts is akin to providing a skilled artisan with the precise tools and clear instructions to create a masterpiece. The art of prompt crafting lies in the specificity and clarity that guide a GenAI tool to generate desired outcomes. As we delve into this craft, let’s explore proven strategies and highlight common pitfalls to avoid.

Strategies for creating effective prompts

Crafting effective prompts serves as the foundation for meaningful interaction with Generative AI. By precisely tailoring our language, we can direct AI towards producing specific, relevant, and accurate outputs, ensuring that the technology reliably amplifies human intent. As we stand on the brink of a new era of human-AI collaboration, the ability to communicate effectively with these advanced systems becomes not just advantageous, but imperative for unlocking their full potential.
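In practice, many of these strategies reduce to supplying the model with four elements: a role, a concrete task, supporting context, and an explicit output format. The sketch below is a minimal illustration in plain Python; the `build_prompt` helper and its example values are hypothetical, and no particular GenAI service is assumed.

```python
def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Assemble a structured prompt from the elements that give GenAI
    the specificity it needs: a role, a concrete task, supporting
    context, and an explicit output format."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Format your response as: {output_format}"
    )

# Hypothetical example: a teacher requesting a targeted explanation.
prompt = build_prompt(
    role="a fourth-grade math tutor",
    task="Explain equivalent fractions with one worked example",
    context="Students have already learned multiplication",
    output_format="three short paragraphs in plain language",
)
print(prompt)
```

The resulting string can be pasted into any GenAI chat interface; the point is that each line removes a guess the model would otherwise have to make.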

Common pitfalls to avoid in prompt construction

When venturing into the field of GenAI, the efficacy of communication is determined not only by what we ask but also by how we ask it. Crafting prompts is a delicate balance, and common missteps can lead to a cascade of confusion and inaccuracy, including fabricated or unfounded outputs known as “hallucinations” (Hanna & Levic, 2023; Yao et al., 2023). AI developers are working to minimize such inaccuracies in generated outputs, and future models will likely improve as a result. In the meantime, recognizing common pitfalls—such as vague wording, missing context, and overloaded requests—is crucial for anyone looking to harness the power of AI effectively.
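One frequent misstep is under-specification: the prompt leaves the model to guess the audience, constraints, and framing. The sketch below contrasts a vague prompt with a revised one, using a deliberately naive keyword check (the cue words and element names are illustrative assumptions, not an established method) to make the missing elements visible.

```python
# A vague prompt forces the model to guess audience, scope, and framing.
vague = "Tell me about fractions."

# A revised prompt states all three explicitly.
revised = (
    "Explain equivalent fractions to a 9-year-old, "
    "using a pizza-slice analogy, in under 100 words."
)

def missing_elements(prompt: str) -> list[str]:
    """Naive, illustrative check for three elements that
    under-specified prompts typically lack."""
    checks = {
        "audience": ["year-old", "student", "beginner", "expert"],
        "constraint": ["under", "words", "sentences", "paragraphs"],
        "framing": ["analogy", "example", "step-by-step"],
    }
    return [name for name, cues in checks.items()
            if not any(cue in prompt for cue in cues)]

print(missing_elements(vague))    # all three elements absent
print(missing_elements(revised))  # none missing
```

A real GenAI tool will, of course, still answer the vague prompt, but the output is far more likely to miss the user's actual intent.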

Prompt Engineering Frameworks

Navigating the intricacies of GenAI requires more than a rudimentary understanding of technology; it demands proficiency in prompt literacy, a discipline that shapes the very dialogue between humans and machines. As educators and learners grapple with the nuances of this interaction, structured models for prompt crafting offer a roadmap to clarity and efficacy. The CAST model (Jacobs & Fisher, 2023), the CLEAR model (Lo, 2023), and the TRUST model (Trust, 2023) not only optimize communication with GenAI systems but also imbue the process with ethical considerations, universal design for learning (UDL), and pedagogical integrity. These models serve as blueprints, guiding educators and users alike in formulating prompts that harness the full potential of GenAI systems responsibly.

CAST Model

The CAST model, conceived by education researchers Jacobs and Fisher (2023), stands for Criteria, Audience, Specifications, and Testing. It instructs users to delineate the constraints or rules for GenAI outputs (Criteria), identify the intended recipients of the information (Audience), incorporate detailed descriptors for precision (Specifications), and employ a cycle of user feedback and refinement (Testing). This model is akin to a compass in the hands of explorers, guiding both teachers and students through the GenAI landscape with prompts that are as educational as they are functional.
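The first three CAST elements can be thought of as the structured inputs to a prompt, with Testing as the human loop of reviewing the output and refining those inputs. The sketch below is a minimal illustration of that composition; the `cast_prompt` helper and the example values are hypothetical, not part of Jacobs and Fisher's (2023) published materials.

```python
def cast_prompt(criteria: list[str], audience: str, specifications: list[str]) -> str:
    """Compose a prompt from the first three CAST elements.
    The fourth element, Testing, is the human cycle of reviewing
    the AI's output and revising these inputs."""
    return (
        f"Criteria: {'; '.join(criteria)}\n"
        f"Audience: {audience}\n"
        f"Specifications: {'; '.join(specifications)}"
    )

# Hypothetical classroom example.
draft = cast_prompt(
    criteria=["cite only peer-reviewed sources", "avoid jargon"],
    audience="middle-school science students",
    specifications=["about 200 words", "include one real-world example"],
)
print(draft)
```

After submitting the draft and reading the response, the user would adjust the criteria or specifications and try again, which is the Testing step of the model.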

Figure 1

Evolution of a GenAI prompt using the CAST Model.

Source: Jacobs & Fisher, 2023.

CLEAR Model

The CLEAR framework streamlines prompt engineering into five fundamental components: Concise, Logical, Explicit, Adaptive, and Reflective (Lo, 2023). This model advocates for brevity and directness (Concise), a coherent structure of inquiry (Logical), unambiguous output expectations (Explicit), flexibility in approach (Adaptive), and a commitment to continuous improvement (Reflective). Emphasizing prompt precision and adaptability, the CLEAR model acts as a “scaffold” that elevates the quality of AI-generated content, particularly in academic libraries, ensuring relevance and applicability to the task at hand.
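Because CLEAR emphasizes iteration, it works naturally as a revision checklist applied to a draft prompt before submission. The sketch below renders the five components as review questions; the question wording is a paraphrase for illustration, not Lo's (2023) exact language, and the `clear_checklist` helper is hypothetical.

```python
# The five CLEAR components (Lo, 2023) as a revision checklist.
# Question wording is an illustrative paraphrase.
CLEAR = {
    "Concise": "Is every word necessary, with no filler?",
    "Logical": "Does the request unfold in a coherent order?",
    "Explicit": "Are the expected output, scope, and format stated?",
    "Adaptive": "Can the prompt be adjusted if the first response misses?",
    "Reflective": "What will you change in the next iteration?",
}

def clear_checklist(prompt: str) -> list[str]:
    """Pair a draft prompt with the CLEAR questions so the author
    can review it before submitting it to a GenAI tool."""
    header = f"Draft prompt: {prompt}"
    return [header] + [f"[{name}] {question}" for name, question in CLEAR.items()]

for line in clear_checklist(
    "Summarize this article for first-year students in five bullet points."
):
    print(line)
```

Walking a draft through the checklist, revising, and resubmitting enacts the Adaptive and Reflective components of the framework.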

TRUST Model

The TRUST model—focused on Transparency, Real World Applications, Universal Design for Learning, Social Knowledge Construction, and Trial and Error—serves as a pedagogical tool to deter student reliance on AI for academic dishonesty. Developed by Trust (2023), this model encourages educators to clarify assignment purposes (Transparency), connect learning to tangible outcomes (Real World Applications), cater to diverse learning strategies (Universal Design for Learning), foster collaborative understanding (Social Knowledge Construction), and embrace a growth mindset (Trial and Error). The TRUST model is not merely a prompt-crafting guide but a manifesto for designing educational experiences that are robust against the temptations of AI-assisted cheating, promoting integrity and deep learning.

Together, these models form a triad of strategies that empower users to wield GenAI with intentionality and insight, ensuring that this powerful technology serves as a catalyst for learning and innovation, rather than an oracle that obfuscates the learning journey.

References 

Abedin, B., Meske, C., Junglas, I., Rabhi, F., & Motahari-Nezhad, H. R. (2022). Designing and managing human-AI interactions. Information Systems Frontiers, 24(3), 691–697. https://doi.org/10.1007/s10796-022-10313-1

Chiu, T. K. F. (2023). The impact of Generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interactive Learning Environments, 1–17. https://doi.org/10.1080/10494820.2023.2253861

Deng, Y., Zhang, W., Chen, Z., & Gu, Q. (2023, November 7). Rephrase and respond: Let large language models ask better questions for themselves. arXiv. https://arxiv.org/abs/2311.04205

Gattupalli, S. S., Maloy, R. W., Edwards, S. A., & Rancourt, M. (2023). Designing for learning: Key decisions for an open online math tutor for elementary students. Digital Experiences in Mathematics Education. https://doi.org/10.1007/s40751-023-00128-3

Hanna, E., & Levic, A. (2023, January 1). Comparative Analysis of Language Models: Hallucinations in ChatGPT: Prompt Study. DIVA. https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1764165&dswid=6109

Jacobs, H. H., & Fisher, M. (2023). Prompt literacy: A key for AI-based learning. ASCD. https://www.ascd.org/el/articles/prompt-literacy-a-key-for-ai-based-learning

Kim, J., Park, S., Jeong, K., Lee, S., Han, S. H., Lee, J., & Kang, P. (2023, November 7). Which is better? Exploring prompting strategy for LLM-based metrics. arXiv. https://arxiv.org/abs/2311.03754

Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4), 102720. https://doi.org/10.1016/j.acalib.2023.102720

Moorhouse, B. L., Yeo, M. A., & Wan, Y. (2023). Generative AI tools and assessment: Guidelines of the world’s top-ranking universities. Computers and Education Open, 5, 100151. https://doi.org/10.1016/j.caeo.2023.100151

Ronanki, K., Cabrero-Daniel, B., Horkoff, J., & Berger, C. (2023, November 7). Requirements engineering using generative AI: Prompts and prompting patterns. arXiv. https://arxiv.org/abs/2311.03832

Tjuatja, L., Chen, V., Wu, S. T., Talwalkar, A., & Neubig, G. (2023, November 7). Do LLMs exhibit human-like response biases? A case study in survey design. arXiv. https://arxiv.org/abs/2311.04076

Trust, T. (2023, August 4). Essential considerations for addressing the possibility of ai-driven cheating, part 2. Faculty Focus | Higher Ed Teaching & Learning. https://www.facultyfocus.com/articles/teaching-with-technology-articles/essential-considerations-for-addressing-the-possibility-of-ai-driven-cheating-part-2/

Wang, X., Li, C., Wang, Z., Bai, F., Luo, H., Zhang, J., Jojic, N., Xing, E. P., & Hu, Z. (2023, October 25). PromptAgent: Strategic planning with language models enables expert-level prompt optimization. arXiv. https://arxiv.org/abs/2310.16427

Yao, J.-Y., Ning, K.-P., Liu, Z.-H., Ning, M.-N., & Yuan, L. (2023, October 2). LLM lies: Hallucinations are not bugs, but features as adversarial examples. arXiv. https://arxiv.org/abs/2310.01469

This content is provided to you freely by EdTech Books.

Access it online or download it at https://edtechbooks.org/encyclopedia/prompt_literacy.