AI-Driven Instructional Design: Ethical Challenges and Practical Solutions

Given the exponential growth of Artificial Intelligence (AI) technology in our personal and professional lives, its rapid integration into higher education has become an imminent reality rather than a futuristic ideal. AI is increasingly recognized as a transformative design tool with the potential to reshape the teaching and learning practices of instructional designers, scholars, and educators. Maintaining an equilibrium between harnessing the capabilities of AI and upholding ethical principles is therefore crucial for ensuring the responsible integration of AI within educational settings. This chapter offers practical approaches and ethical considerations for the strategic use of AI in designing courses and workshops, thereby contributing to the design and development of responsible and ethically sound educational environments.

Introduction

Given the exponential growth of Artificial Intelligence (AI) technology in our personal and professional lives, its rapid integration into higher education has become an imminent reality rather than a futuristic ideal (Hodges & Ocak, 2023). To put this growth in context, the popular social media platform Facebook took almost four years to reach one hundred million users, and the internet giant Google took about a year to reach the same milestone; ChatGPT, a generative AI (GenAI) tool, surpassed it in just two months. AI technologies can transform the current educational landscape from K-12 to higher education, encompassing personalized learning experiences and adaptive assessments for learners (Trafford, 2023) as well as workplace training and development contexts (Park, 2024). AI tools employ advanced machine learning techniques and sophisticated algorithms to analyze vast amounts of student learning data, providing educators with insights into individual learning patterns and enabling tailored instruction to meet specific needs (Trafford, 2023).

Artificial Intelligence (AI) is a subfield of computer science focused on developing intelligent systems capable of reasoning, learning, and operating independently. Generative Artificial Intelligence is a specialized area within AI that can generate diverse forms of data such as text, images, and videos by learning from pre-existing materials, a process known as training (Eke, 2023; Baidoo-Anu & Owusu Ansah, 2023).

Since the launch of ChatGPT in 2022 (OpenAI, 2022), educators have engaged in continuous discussions about the practical and ethical use of AI. Early debates centered on whether GenAI should be allowed or prohibited in academic settings. The conversation has since moved forward: because prohibition is difficult to enforce, the question is now how to use AI tools ethically, in ways that do not harm academic integrity. A surge of research, primarily conceptual pieces, short papers, and technical reports, appeared in early 2023, followed by more empirical research in late 2023. The increase in publications is reflected in special issues from journals in various fields, such as business (e.g., Business Horizons, International Journal of Human Resource Management), instructional technology (e.g., TechTrends, Educational Technology & Society), and engineering (e.g., Engineering). A Google Scholar search on January 29, 2024, using the search terms “ChatGPT” and “Generative Artificial Intelligence,” yielded 66,900 and 999,000 results, respectively. This substantial body of research reflects the widespread interest of scholars in diverse fields and underscores the ongoing evolution of AI technologies. The trend is expected to continue as AI evolves with more sophisticated tools across various modalities, including text-generating AI tools such as Claude from Anthropic, Gemini from Google, and Copilot from Microsoft; image-generating tools like Midjourney, Adobe Firefly, and DALL-E; and video-generating tools such as Runway, Synthesia, Descript, and Wondershare Filmora. Instructional designers have increasingly utilized these AI tools for course design and content generation tasks, such as drafting video scripts, creating multimedia content, generating learning outcomes, and organizing course content.

Previous research on AI in education mainly discussed integrating AI tools into course assignments (e.g., Haleem et al., 2022; Holden et al., 2021). These studies emphasize the importance of the ethical use of AI, especially how to design instruction and assignments that do not lead to academic misconduct and plagiarism (AlAfnan et al., 2023). For instance, AlAfnan et al. (2023) examined ways to design writing assignments so that students cannot use GenAI to cheat, such as oral exams, in-person exams, assignments that cannot be quickly answered with GenAI, and having the instructor run the essay prompt through ChatGPT and share the resulting example with students.

Scholars remind us that the time has come to revisit the ethical issues emerging from AI technology as it continues to permeate numerous facets of human life (e.g., Borenstein & Howard, 2021). Hagendorff (2020) notes that, in practical applications, AI ethics is often treated as a supplementary or ancillary aspect rather than an integral component of technical deliberations. This perception frames ethics as a non-binding, external framework, typically imposed by entities outside the core technical community, rather than as an intrinsic element of design and development. The goal of AI ethics, therefore, should not be to enforce compliance with normative principles; instead, it should be to foster educators' ability to make informed, empathetic, and self-responsible decisions in morally significant circumstances (Hagendorff, 2020).

Some attempts have been made to incorporate ethics through professional codes of ethics. Despite their importance, such codes are insufficient on their own because they have little tangible impact on decision-making (Borenstein & Howard, 2021). Additionally, although many studies mention the importance of the ethical use of AI and note the lack of ethical standards and policies, they often state only that “we need to use it ethically” without offering concrete recommendations. To address this gap in the literature, this chapter provides a comprehensive discussion of specific examples and practical strategies that instructional designers can employ during the instructional design process, along with guidance for administrators to support these endeavors effectively.

Given the importance of ethical considerations in instructional design for AI-based courses, we discuss four main topics: (1) addressing bias and promoting fairness, (2) ensuring transparency and explainability, (3) data privacy and data security concerns, and (4) other critical ethical issues. For each topic, we provide a scenario (or case) of ethical issues that may arise in instructional design practice, explain the ethical concerns, and offer practical strategies that may help mitigate them when integrating AI into instructional design. We aim to provide insights for educators, administrators, leaders, and instructional designers in higher education and beyond.

Understanding Ethical Considerations in AI

AI is rapidly becoming a superpower that enables a small team to affect many people's lives. Whether you build AI tools, use them, or simply care about AI's impact on society, it is essential to understand the ethical issues involved and the practical strategies that mitigate them, so that the work you do leaves society better off. Several significant issues historically associated with technology in education are also present in the realm of AI, amplifying the importance of ethical considerations (Hodges & Kirschner, 2024). It is therefore crucial to maintain a balance between utilizing AI's potential and adhering to ethical standards to guarantee the responsible use of AI in instructional design (Trafford, 2023).

Building on the foundational obligations and ethical considerations Mhlanga (2023) identified for using AI tools in education, the following sections delve deeper into these areas: addressing bias and ensuring fairness, enhancing transparency and explainability, safeguarding data privacy and security, and tackling other ethical issues, such as maintaining accuracy and upholding academic integrity. This discussion sets the stage for concluding thoughts on future directions in the ethical application of AI within instructional design practices.

Addressing Bias and Promoting Fairness

As a society, we must avoid discrimination against individuals based on gender, ethnicity, or any other personal characteristic, ensuring that all are treated with fairness and equity. When AI systems are trained on data that does not reflect these values, they can become biased or learn to discriminate against particular groups of people. Beyond gender stereotypes, AI systems can also inadvertently perpetuate biases related to disability, religion, sexual orientation, and other personal characteristics. For instance, if training data lacks diverse representation or contains prejudiced viewpoints, the AI's outputs may reflect these deficiencies, leading to biased educational content. Likewise, for AI systems like ChatGPT, input prompts heavily influence the output (Trafford, 2023): if the input prompts are biased or unjust, the AI will produce biased or unfair outputs, which is particularly concerning in educational settings.

Let us explore a scenario where an instructional designer uses a biased input prompt in ChatGPT for course material development, leading to biased and potentially harmful outputs in the instructional design context.

Scenario: Biased Prompt in Course Material Development

Input prompt: “Create a lesson plan emphasizing the importance of strong physical skills in successful engineering careers.”

This input prompt raises several ethical concerns that might lead to problematic output:

  • Reinforcement of gender stereotypes: This prompt subtly suggests that physical strength, often stereotypically associated with men, is a crucial determinant of success in engineering. It implicitly undervalues or overlooks other crucial skills like analytical thinking, creativity, and teamwork, which are gender-neutral and equally vital in engineering. AI-generated content based on this prompt would embed these biases in educational materials.

  • Biased educational content: The resulting lesson plan would convey a biased viewpoint, teaching students an unfounded and discriminatory perspective on gender roles in engineering.

  • Impacts on student perception and career choices: The prompt narrows the perspective of what it takes to succeed in engineering, potentially discouraging female students who may excel in intellectual or creative aspects of engineering but do not identify with the emphasized physical skills. At the same time, male students might develop an unjustified sense of superiority in these areas.

  • Responsible use: As educators and instructional designers, we are responsible for fostering inclusivity and equality; using such a prompt contradicts these fundamental principles.

  • Long-term negative consequences: This approach could contribute to gender inequality in engineering and educational settings, reinforcing harmful societal biases.

Practical strategies to mitigate bias and promote fairness while using AI tools:

Prompt engineering refers to carefully designing and formulating the prompts or inputs that guide the responses of a GenAI system like ChatGPT. This process involves crafting prompts that accurately and effectively communicate the user's intention to the AI, leading to relevant and valuable responses. Because prompts are a fundamental determinant of AI model outputs, it is essential to make them fair and free from bias (Trafford, 2023). Practitioners can foster ethical AI interactions by being acutely aware of possible biases and remaining consistently dedicated to fairness (Fedeli & Pennazio, 2021). Trafford (2023) offers practical strategies to reduce bias in input prompts (see Table 1).
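Before turning to Table 1, consider a minimal sketch of what bias-aware prompt engineering can look like in practice. The example below assumes access to OpenAI's Python SDK; the model name, system instruction, and prompt wording are our own illustrative choices, not prescriptions from Trafford (2023).

```python
# A minimal sketch of bias-aware prompt engineering using OpenAI's Python SDK.
# The model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A loaded prompt: it presupposes that physical strength drives success.
biased_prompt = (
    "Create a lesson plan emphasizing the importance of strong physical "
    "skills in successful engineering careers."
)

# A neutral, balanced rewrite: it names a range of competencies and asks
# for inclusive examples instead of privileging one stereotyped trait.
neutral_prompt = (
    "Create a lesson plan on the competencies that support successful "
    "engineering careers, including analytical thinking, creativity, "
    "communication, and teamwork. Use examples that represent engineers "
    "of diverse genders, cultures, and backgrounds."
)

def generate_lesson_plan(prompt: str) -> str:
    """Send a prompt to the model and return the generated lesson plan."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[
            {"role": "system",
             "content": "You are an instructional design assistant. "
                        "Use inclusive, bias-free language."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# Reviewing outputs from both prompts side by side makes the effect of
# prompt wording visible and auditable before content reaches a course.
print(generate_lesson_plan(neutral_prompt))
```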

Table 1.

Practical strategies to mitigate bias and promote fairness

  • Reflect diversity: Ensure prompts are inclusive and unbiased, embracing diversity and not favoring any specific group or viewpoint, to help instructional designers foster fairness and prevent unintentional bias.

  • Neutral and balanced phrasing: Employ neutral and balanced language in prompts, avoiding phrasing that could lead to biased or preconceived answers, so that AI models generate unbiased and equitable responses.

  • Sensitivity to cultural and social contexts: Craft prompts with cultural and social awareness to avoid marginalizing groups, be mindful of impacts on diverse cultures and identities, and continuously learn about biases to support fair prompt development.

  • Regular evaluation and iteration: Regularly assess and refine prompts for effectiveness and fairness, incorporating diverse user feedback to identify and correct unintended biases.

  • Collaborative and diverse prompt development: Include stakeholders from diverse backgrounds to effectively identify and mitigate potential biases, fostering more inclusive and fair AI prompt design.

  • Ethical guidelines and review processes: Implement ethical guidelines and structured review processes for prompt engineering, scrutinizing and correcting biases and ensuring ongoing fairness and ethical integrity in prompt design.

Promoting Transparency in AI Responses

Where applicable, having AI models explain the reasoning behind their responses can help users understand the influence of the prompt on the AI's output. This transparency can aid in identifying biases in AI reasoning.

Implementing Feedback Loops with End Users

Setting up systems that allow end-users to report biases or unjust responses in AI outputs is essential. Such direct feedback is crucial for the ongoing refinement of prompt design. In the context of higher education, these feedback loops could involve digital platforms where students and faculty can submit observations about AI behavior. This structured input can be analyzed systematically to enhance AI applications in educational settings.
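As one illustration of such a feedback loop, the sketch below records structured bias reports to a local file so that reviewers can triage them later. The record fields and file-based storage are assumptions for demonstration; a real deployment would more likely sit behind an LMS form or campus web portal.

```python
# A minimal sketch of an end-user feedback loop for reporting biased or
# unjust AI outputs. The record fields and JSONL storage are assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class BiasReport:
    reporter_role: str   # e.g., "student" or "faculty"
    prompt: str          # the input prompt that produced the output
    ai_output: str       # the AI response being flagged
    concern: str         # the reporter's description of the issue
    timestamp: str

def submit_report(report: BiasReport, path: str = "bias_reports.jsonl") -> None:
    """Append a structured report so reviewers can analyze it systematically."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(report)) + "\n")

submit_report(BiasReport(
    reporter_role="student",
    prompt="Create a lesson plan emphasizing strong physical skills...",
    ai_output="(excerpt of the generated lesson plan)",
    concern="Implies physical strength determines engineering success.",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```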

Incorporating Feedback from AI Ethics Boards

Establishing or consulting with an AI ethics board comprising experts in AI ethics, social sciences, and related fields can provide valuable insights. Regular consultations with such boards can guide the prompt development process toward greater fairness and ethical alignment.

To sum up, the influence of input prompts on generated AI outputs is substantial, particularly in educational settings where biased or unjust prompts can lead to skewed AI-generated data. Strategies for mitigating bias include ensuring diversity and fairness in prompts, using neutral and balanced language, being sensitive to cultural and social contexts, and incorporating regular evaluations and feedback. Involving diverse stakeholders in prompt development and establishing ethical guidelines are also crucial. Additional measures such as using AI fairness toolkits, consulting AI ethics boards, promoting AI transparency, and implementing end-user feedback loops further enhance the fairness and ethical integrity of AI interactions. These comprehensive strategies underscore the ongoing responsibility of practitioners to remain vigilant and adaptable in creating fair and equitable AI systems.

Ensuring Transparency and Explainability

Ensuring transparency while using AI tools in educational contexts is crucial for comprehending their functionality and potential, and it supports an informed understanding of how these tools process information and formulate responses, underpinning their ethical and responsible application (Mhlanga, 2023). However, numerous sophisticated AI systems function as 'black boxes': their high performance is accompanied by an inability to elucidate the rationale behind their decisions. Input prompts should therefore be designed to encourage AI models to provide transparent and explainable responses, so that stakeholders gain visibility into how AI models generate their outputs and understand the limitations of the technology. By promoting transparency, practitioners can foster trust and help users make informed decisions based on AI-generated information. As Green et al. (2022) highlight, a primary concern is that users, especially in educational contexts, may not fully grasp how AI models arrive at certain conclusions; this lack of understanding can lead to misinterpretation and misuse of AI-generated information.

Alongside transparency, explainability is vital in AI-powered instructional design. Instructors and instructional designers should have access to explanations that outline the reasoning behind AI-generated content, recommendations, and evaluations. Clear explanations foster deeper understanding, promote critical thinking, and help learners make sense of the outcomes produced by AI algorithms. When AI decisions are opaque, holding the system accountable for errors or biases becomes challenging, which matters greatly in educational settings where such decisions can significantly impact learning and assessment. Transparency and explainability are also vital to building trust: users who understand how AI models work are more likely to trust and use them effectively. Moreover, instructors and instructional designers must understand AI outputs to make informed decisions; without clear explanations, the potential educational benefits of AI could be undermined.
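One low-effort way to encourage explainable responses, sketched below as a plain Python prompt template, is to ask the model to append its rationale and limitations to every answer. The template wording is an illustrative assumption, not a published transparency standard.

```python
# A minimal sketch: a reusable prompt wrapper that asks a generative model
# to expose its rationale and uncertainty. The template wording is an
# illustrative assumption, not a published transparency standard.
EXPLAINABLE_TEMPLATE = """{task}

After your answer, add two labeled sections:
Rationale: the main reasons and evidence behind your answer.
Limitations: what you are uncertain about and what a human should verify.
"""

def make_explainable(task: str) -> str:
    """Wrap an instructional design task so outputs carry an explanation."""
    return EXPLAINABLE_TEMPLATE.format(task=task)

print(make_explainable(
    "Suggest a rubric for assessing a first-year engineering design project."
))
```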

Table 2.

Practical strategies to ensure transparency and explainability

  • Educational modules on AI functionality: Integrate educational modules or tutorials that explain how AI works, tailored to the specific context. These could include case studies, examples, or simulations that elucidate AI decision-making.

  • Audit trail: Create an audit trail that records both the input prompts and the outputs generated by AI systems, enhancing transparency and credibility. This process helps distinguish between ideas produced by AI tools and those developed independently by instructional designers (Halaweh, 2023); a sketch of such a log appears after this table.

  • Regular reporting on AI performance: Establish a system of regular reports that detail the AI's performance, including accuracy, fairness, and areas where it may struggle. This can help users understand and trust the AI's capabilities and limitations.

  • User feedback mechanisms: Incorporate mechanisms for users to provide feedback on AI outputs. This can help identify areas where the AI's explanations are insufficient and need improvement.

  • Explainability by design: Integrate explainability into the AI prompt development process rather than treating it as an afterthought, making the system's operations clear to end users.

  • Compliance with standards and guidelines: Adhere to existing standards and guidelines for AI transparency and explainability, which can serve as benchmarks for evaluating an AI system. Prominent examples include the European Union's Ethics Guidelines for Trustworthy AI, the IEEE Standards Association's Ethically Aligned Design, and IBM's AI Explainability 360.

  • Collaboration with AI ethics experts: Work with experts in AI ethics to review and advise on the AI models. These experts can help identify where transparency and explainability are lacking and suggest improvements.
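To make the audit-trail strategy from Table 2 concrete, the following sketch appends each prompt/output pair to a timestamped log in the spirit of Halaweh (2023); the JSONL format and field names are our own assumptions.

```python
# A minimal sketch of an audit trail for AI-assisted design work.
# The JSONL log and field names are assumptions for demonstration.
import json
from datetime import datetime, timezone

def log_interaction(prompt: str, output: str, tool: str,
                    path: str = "ai_audit_log.jsonl") -> None:
    """Append one prompt/output pair so AI-generated ideas stay traceable."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,      # which AI system produced the output
        "prompt": prompt,  # exactly what the designer asked
        "output": output,  # exactly what the tool returned
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_interaction(
    prompt="Draft three measurable learning outcomes for an intro statistics unit.",
    output="(model-generated outcomes would appear here)",
    tool="ChatGPT",
)
```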

By implementing these practical strategies, instructors and instructional designers can address the ethical concerns related to transparency and explainability in AI, thereby making these systems more accessible, understandable, and trustworthy for educational purposes. This approach enhances the learning experience and ensures that AI is used responsibly and ethically in educational settings.

Data Privacy and Data Security Concerns

Besides the challenge of explaining AI behaviors driven by autonomous learning algorithms, there are substantial concerns about losing control and trust, stemming mainly from difficulties in managing personal data (Moore et al., 2024). Respecting learners' right to privacy is a fundamental ethical responsibility of educational institutions, as it signifies respect for their autonomy and rights. Consider, for instance, the following example from Chinese primary schools.

Wang and colleagues (2019) reported that some Chinese primary schools have implemented AI-powered headbands to monitor students' concentration levels during class. These headbands change colors to indicate different levels of focus: red for high concentration, blue for distraction, and white for no activity detected. Additionally, surveillance cameras installed in classrooms track students' phone use and yawn frequency, aiming to analyze engagement and attentiveness. Although these measures have raised significant privacy concerns among the public, schools have reportedly encountered little resistance when acquiring parental consent. One parent mentioned the benefit of contributing to national research and development as a justification for their support (Green et al., 2022; Reiss, 2021).

Some questions to ponder:

  • Considering the reported ease of obtaining parental consent, how well do you think parents and students understand the implications of such surveillance?
  • What ethical guidelines should be established to govern the use of AI in monitoring student behavior?
  • Who should be responsible for setting these guidelines?
  • In terms of the potential long-term impacts on students being monitored with such technologies, how might this affect their behavior, stress levels, and overall educational experience?

The above case study serves as a foundation for examining broader security and privacy issues related to using AI tools in educational settings, focusing on the tensions between technological benefits and ethical considerations.

Practical strategies to protect sensitive and confidential data:

It is vital to address how instructors and instructional designers can effectively safeguard sensitive information while using AI tools to design their courses or workshops. Table 3 offers practical strategies to enhance data privacy and security using AI tools.

Table 3.

Practical strategies to protect sensitive and confidential data

  • Establish comprehensive data policies: Develop and enforce strict privacy policies defining how sensitive and confidential data will be used, stored, and shared. These policies should be transparent and accessible to all key stakeholders, including students.

  • Consent and transparency: Implement mechanisms to obtain informed consent from all stakeholders. Clearly explain what data will be used, how it will be used, and the benefits it brings to the educational process.

  • Data minimization: Use only the generic data necessary for educational purposes, and avoid including excessive data in prompts, which increases risk and liability.

  • Training and awareness programs: Educate instructors and instructional designers about data privacy and security practices; awareness can significantly reduce the risk of data breaches.

  • Anonymization techniques: When using data for research or analysis, apply robust anonymization techniques so that sensitive information cannot be identified (see the sketch after this table).

  • Regular security audits: Conduct regular security audits and assessments to identify any vulnerabilities in the system.
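As a hedged illustration of the anonymization row in Table 3, the sketch below replaces a student identifier with a salted one-way hash before a record is shared with an AI tool. Strictly speaking, this is pseudonymization rather than full anonymization, and the field names and salting scheme are assumptions; institutional policy and applicable law should govern any real implementation.

```python
# A minimal sketch of pseudonymizing learner records before AI analysis.
# Field names and the salting scheme are illustrative assumptions; real
# deployments should follow institutional policy and applicable law.
import hashlib
import os

SALT = os.environ.get("ANON_SALT", "change-me")  # keep the salt secret

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + identifier).encode("utf-8")).hexdigest()[:12]

def anonymize_record(record: dict) -> dict:
    """Strip or hash fields that could identify a student."""
    return {
        "student_id": pseudonymize(record["student_id"]),
        # The name is dropped entirely rather than hashed.
        "quiz_score": record["quiz_score"],
        "time_on_task_min": record["time_on_task_min"],
    }

raw = {"student_id": "s123456", "name": "Jane Doe",
       "quiz_score": 87, "time_on_task_min": 42}
print(anonymize_record(raw))  # safer to share with an AI tool for analysis
```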

Other Ethical Concerns

In addition to bias, transparency, and data privacy issues, other ethical issues require careful consideration when using AI in educational settings.

Accuracy:

The accuracy of AI-generated output is a significant issue that requires careful attention. While AI tools such as ChatGPT can generate educational content, they can hardly replicate human educators' creativity, nuance, and depth (Trafford, 2023). Imagine the ramifications of providing learners with misinformation such as the claim that the Earth is flat; such a fundamental error would not only skew their understanding of geography but also ripple through related disciplines like astronomy, creating a domino effect of misinformation (Mhlanga, 2023). In addition, AI tools sometimes generate incorrect, nonsensical, or entirely fabricated information, commonly called hallucination. This issue arises because these systems, although proficient in pattern recognition and language generation, have no direct access to real-world knowledge or truth; they rely instead on patterns learned from the data on which they were trained, and can therefore produce confident-sounding but false responses.

Academic Integrity:

Educational institutions across the globe are increasingly concerned about the impact of AI tools like ChatGPT on cheating and academic integrity violations. The prevalence of academic misconduct, including plagiarism, is already a concern in higher education (Cotton et al., 2023). A recent study highlights academic integrity as a critical theme in discussions of AI tools such as ChatGPT (Sullivan et al., 2023). The authors note that in late 2022 and early 2023, news articles about ChatGPT focused on its implications for academic dishonesty and its potential to democratize access to higher education, with more mixed sentiment than social media discussions or coverage of other AI tools. Academic integrity was the more frequent topic, suggesting greater public interest in controversy than in positive educational practices, and the articles highlighted educators' need to redesign assessments to prevent AI-enabled cheating. The coverage also touched on the limited but evolving adaptation of university policies to AI, emphasizing the need for more explicit guidelines on ethical AI use. ChatGPT's potential to enhance learning, improve employability skills, and support diverse student needs was noted despite its biases and inaccuracies. However, the discourse came predominantly from academic and institutional perspectives and lacked depth in student engagement and perspectives on AI utilization, underscoring the need for a more inclusive, student-led dialogue on navigating AI tools ethically and effectively in higher education.

Finally, Reiss (2021) highlights two further ethical issues of using AI tools in educational settings: (1) the need to balance guiding students toward autonomous decision-making with providing necessary guidance, and (2) the implications of AI for the well-being of educators, given increased surveillance and stress. Reiss (2021) argues that while AI might lead to more engaged students, reducing the burden of classroom management for teachers and allowing them to focus more on facilitating learning, it raises concerns about privacy, surveillance, and the added stress of being constantly monitored alongside their students. The sanctity of the classroom as a private teacher space diminishes as data collection becomes more pervasive. The future role of teaching assistants appears even more uncertain: despite evidence suggesting that teaching assistants can positively impact learning outcomes with appropriate support and training, the necessity of their role in an AI-dominated educational landscape is being questioned.

Practical strategies to address other ethical issues:

Moore et al. (2024) advise that, to tackle these integrity issues, educators should focus on two main strategies: first, eliminating factors that might encourage cheating, and second, guiding students empathetically to identify and steer clear of harmful or irresponsible uses of AI technology. Table 4 offers practical strategies that educators and instructional designers can use to address these ethical issues.

Table 4.

Practical strategies to address other ethical concerns

  • Implement data verification processes: Use a blend of AI output and human input to ensure data accuracy and relevance, and cross-check AI-generated output against trusted sources. A sketch of such a verification step appears after this table.

  • Develop guidelines on acceptable AI use: Treat AI-generated content as a secondary source, and develop clear guidelines on acceptable AI use in assignments and assessments.

  • Encourage critical thinking: Include training modules on effective AI use, and promote the ethical use of AI tools by reflecting critically on AI-generated content.

  • Create training and awareness programs: Offer workshops and other professional development opportunities on integrating AI tools into classroom settings, and provide resources and support systems that help educators reduce stress and adapt to AI-enabled classrooms.
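To illustrate the data verification row in Table 4, the sketch below routes AI-generated claims either to automatic acceptance or to a human review queue. The trusted-facts store and the exact matching rule are deliberately simplistic assumptions, standing in for whatever vetted sources an institution maintains.

```python
# A minimal sketch of a human-in-the-loop verification step for
# AI-generated course content. The "trusted facts" store and the exact
# matching rule are illustrative assumptions, not a production checker.
TRUSTED_FACTS = {
    "the earth is an oblate spheroid",
    "water boils at 100 degrees celsius at sea level",
}

def review_claims(claims: list[str]) -> list[dict]:
    """Route each AI-generated claim to auto-accept or human review."""
    results = []
    for claim in claims:
        verified = claim.strip().lower() in TRUSTED_FACTS
        results.append({
            "claim": claim,
            "status": "accepted" if verified else "needs human review",
        })
    return results

ai_claims = ["The Earth is an oblate spheroid", "The Earth is flat"]
for item in review_claims(ai_claims):
    print(f"{item['status']:>18}: {item['claim']}")
```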

Conclusion & Future Directions

With growing recognition of AI's power and dangers, the prevailing reaction from educators has been an emphasis on ethical principles (Munn, 2023). Munn (2023) argues that ethical principles are often meaningless, isolated, and toothless unless specific practical strategies, such as checking the accuracy of generated responses and auditing them, are implemented. Besides major academic journals developing guidelines for authors who use AI tools, many higher education institutions are developing their own guidelines and policies. Nevertheless, AI technology will likely continue to develop quickly, and new features will require continuous institutional communication. In this chapter, we discussed ethical issues in four areas: (1) addressing bias and promoting fairness, (2) ensuring transparency and explainability, (3) data privacy and data security concerns, and (4) other critical ethical issues. These are significant issues that cannot be resolved by instructional designers alone. Because instructional designers are not the sole decision-makers in an institution and are often heavily influenced by university policy and governance, various stakeholders should work together to create an environment that fosters ethical teaching and learning. Moving forward, institutions should develop general policies to guide organizational members' decision-making and handle issues as they arise. These policies should also be coupled with related public policy and laws on technology and privacy.

References

AlAfnan, M. A., Dishari, S., Jovic, M., & Lomidze, K. (2023). ChatGPT as an educational tool: Opportunities, challenges, and recommendations for communication, business writing, and composition courses. Journal of Artificial Intelligence and Technology, 3(2), 60–68. https://doi.org/10.37965/jait.2023.0184

Borenstein, J., & Howard, A. (2021). Emerging challenges in AI and the need for AI ethics education. AI and Ethics, 1, 61-65. https://doi.org/10.1007/s43681-020-00002-7

Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 1-12. https://doi.org/10.1080/14703297.2023.2190148

Fedeli, L., & Pennazio, V. (2021). Instructional design and 3D virtual worlds: A focus on social abilities and autism spectrum disorder. In Handbook of research on teaching with virtual environments and AI (pp. 444-460). IGI Global.

Green, E., Singh, D., & Chia, R. (2022). AI ethics and higher education: Good practice and guidance for educators, learners, and institutions. Globethics.net.

Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99-120. https://doi.org/10.1007/s11023-020-09517-8

Halaweh, M. (2023). ChatGPT in education: Strategies for responsible implementation. Contemporary Educational Technology, 15(2), ep421. https://doi.org/10.30935/cedtech/13036

Haleem, A., Javaid, M., & Singh, R. P. (2022). An era of ChatGPT as a significant futuristic support tool: A study on features, abilities, and challenges. BenchCouncil Transactions on Benchmarks, Standards and Evaluations, 2(4), 100089. https://doi.org/10.1016/j.tbench.2023.100089

Hodges, C. B., & Kirschner, P. A. (2024). Innovation of instructional design and assessment in the age of generative artificial intelligence. TechTrends, 68, 195–199. https://doi.org/10.1007/s11528-023-00926-x

Hodges, C., & Ocak, C. (2023, August 30). Integrating generative AI into higher education: Considerations. EDUCAUSE Review. https://er.educause.edu/articles/2023/8/integrating-generative-ai-into-higher-education-considerations

Holden, O. L., Norris, M. E., & Kuhlmeier, V. A. (2021). Academic integrity in online assessment: A research review. Frontiers in Education, 6. https://www.frontiersin.org/articles/10.3389/feduc.2021.639814

Mhlanga, D. (2023). Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning. SSRN Electronic Journal. https://doi.org/10.2139/SSRN.4354422

Meszaros, J. (2022). The next challenge for data protection law: AI revolution in automated scientific research. In M. C. Compagnucci, M. L. Wilson, M. Fenwick, N. Forgó, and T. Bärnighausen (Eds.), AI in eHealth: Human autonomy, data governance & privacy in healthcare. Cambridge University Press.

Moore, S., Hedayati-Mehdiabadi, A., & Law, V. (2024). The change we work: Professional agency and ethics for emerging AI technologies. TechTrends, 68, 27–36. https://doi.org/10.1007/s11528-023-00895-1

Munn, L. (2023). The uselessness of AI ethics. AI and Ethics, 3, 869–877. https://doi.org/10.1007/s43681-022-00209-w

OpenAI. (2022). ChatGPT release notes. https://help.openai.com/en/articles/6825453-chatgpt-release-notes

Park, J. J. (2024). Unlocking training transfer in the age of artificial intelligence. Business Horizons, 67(3). https://doi.org/10.1016/j.bushor.2024.02.002

Reiss, M.J. (2021). The use of AI in education: Practicalities and ethical considerations. London Review of Education, 19(1), 5, 1–14. https://doi.org/10.14324/LRE.19.1.05

Sullivan, M., Kelly, A., & McLaughlan, P. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning & Teaching, 6(1), 1–10. https://doi.org/10.37074/jalt.2023.6.1.17

Trafford, M. T. (2023). Ignite learning innovation: Unleashing the potential of ChatGPT, prompt engineering, and prompt chaining in course design. RU Institute Press.

Wang, Y., Hong, S., & Tai, C. (2019, October 24). China's efforts to lead the way in AI start in its classrooms. The Wall Street Journal. https://www.wsj.com/articles/chinas-efforts-to-lead-the-way-in-ai-start-in-its-classrooms-11571958181
