Integrating Care Ethics in GenAI-Enhanced Instructional Design: A Framework for Inclusive and Equitable Higher Education Practices

GenAI is a disruptive technology that is transforming higher education (HE). There are concerns that GenAI can undermine equitable and accessible learning and that it raises ethical issues around students’ privacy and bias. This conceptual paper proposes reflecting on Tronto’s ethics of care framework to guide instructional designers (IDers) when working with GenAI. The paper presents two case examples drawn from the real practices of two IDers who used GenAI to: 1) streamline the instructional design process, and 2) create course content. Educators and practitioners can use this framework to design inclusive and accessible learning.

Introduction

Instructional design (ID), a field dedicated to creating effective learning, has recently faced increasing pressure to adopt more inclusive and equitable approaches (Rao, 2021). This shift stems from a growing recognition of the diverse needs and backgrounds of learners in an increasingly globalized educational environment. Concurrently, the emergence of artificial intelligence (AI) and, specifically, Generative AI (GenAI) in higher education (HE) has captured the attention of educators, but it also raises ethical concerns about inclusivity and equity. Current GenAI models are trained on vast datasets that can unintentionally perpetuate biases embedded in the data. When these biases infiltrate learning, marginalized groups may be disadvantaged, further exacerbating existing educational inequities (Baker & Hawn, 2022).

The introduction of GenAI, with its capacity to automatically modify learning pathways, adds a degree of abstraction that raises important ethical questions. While GenAI offers transformative potential for improving the efficiency of learning design through its ability to automatically generate content and adapt learning materials based on individual learner data, it also creates ethical issues. This automation capability is one reason universities are increasingly experimenting with GenAI to explore its implications for learning at scale (Yusuf et al., 2024). A focus on using automated capabilities to scale learning may undermine the design and delivery of inclusive and equitable learning experiences (Yusuf et al., 2024). The automation of design processes risks depersonalizing learning and overlooking the nuanced cultural, emotional, and individual needs of learners (Hauske & Bendel, 2024).

While there is substantial discourse on the technical applications of GenAI in education, there is a notable lack of frameworks that critically examine how GenAI-enhanced ID can remain ethically grounded. We understand GenAI-enhanced ID from three perspectives: (1) IDers use GenAI to enhance the ID process (creating ID maps and documentation to streamline workflows). Ruiz-Rojas et al. (2023) demonstrated that integrating GenAI tools with an ID matrix, i.e., the 4PADAFE matrix, can significantly enhance the teaching-learning process in massive open online courses (MOOCs), particularly by allowing educators to design personalized and scalable learning experiences; (2) IDers use GenAI to develop instructional content (test items, video scripts, and assignments) to automate and personalize learning. GenAI can generate adaptive quiz questions tailored to individual learners’ performance or create interactive video scripts that adapt based on students’ responses; and (3) IDers collaborate with faculty to integrate GenAI into course assignments and activities, where students use GenAI as part of structured coursework to support learning. For example, in engineering and computer science courses, GenAI tools can support students by generating code snippets with explanations, while in writing-intensive courses, GenAI can help students revise drafts by providing personalized feedback on grammar, flow, and clarity.

Therefore, to address the above-mentioned challenges, this conceptual work attempts to show how such ethically grounded questions can be addressed through an ethical GenAI-enhanced ID framework. The framework can guide GenAI-enhanced ID in creating inclusive and equitable learning environments. Care ethics, a moral philosophy emphasizing relationships, empathy, and compassion, offers one way to provide this guidance (Costello et al., 2022). GenAI’s ability to autonomously generate and modify content can be regulated through a care ethics approach that emphasizes attentiveness, responsibility, competence, and responsiveness to the needs of others (Tronto, 1998). Thus, we propose integrating Tronto’s care ethics into GenAI-enhanced ID. Tronto’s care ethics can provide a meaningful framework for ensuring that GenAI-enhanced ID remains aligned with strategies to mitigate bias, enhance inclusion, and ensure equitable access to high-quality education for all students.

Literature Review

GenAI in ID

The integration of GenAI into teaching and learning processes has garnered increasing attention among IDers as well. Historically, when AI-based tutoring systems became widely available, the field of AI in Education (AIEd) focused on assessment/feedback/grading automation for faculty and personalized learning for students (Kumar et al., 2024).

GenAI's technological capabilities to automate assessment, feedback, and grading have the potential to revolutionize ID by supporting faculty in automating content creation and streamlining course development, significantly improving both efficiency and scalability (Dickey et al., 2023). Escalante et al. (2023) emphasized how GenAI's real-time feedback helps learners correct mistakes and refine learning strategies, which is particularly beneficial in large-scale learning environments where human feedback may not always be available.

However, over-automation and over-reliance on GenAI could limit creative and pedagogical flexibility (Choi et al., 2024) and risk reducing cultural sensitivity and emotional attunement in contexts where understanding students' backgrounds is critical for personalized learning (Walter, 2024). Such over-reliance may fail to meet the diverse needs of learners (Abdelghani et al., 2025). Further, Buddemeyer et al. (2021) found that GenAI materials may encode identity information inaccurately, leading to representational harms for underrepresented groups. Similarly, Hess et al. (2024) warned that GenAI may amplify gender stereotypes, while Zheng and Stewart (2024) pointed out that GenAI often reflects WEIRD cultural values (Western, Educated, Industrialized, Rich, and Democratic), alienating non-Western learners. These findings suggest that GenAI can introduce biases rather than alleviate them. Singh et al. (2024) and Buddemeyer et al. (2021) stressed that GenAI is only as unbiased as the data it is trained on, with algorithms reinforcing existing inequalities.

GenAI can dynamically analyze learner profiles and modify educational strategies in real time, creating individualized learning (Borah et al., 2024). Huang et al. (2025) demonstrated that GenAI is particularly effective in personalizing learning paths and creating tailored educational experiences: students using GenAI-personalized content performed better and retained more information than those in traditional classrooms with static curricula. Much of the existing research has focused on GenAI's automation and personalization capabilities, and limited studies have examined the broader impact of GenAI on equity and inclusion in learning. Ethical dimensions of GenAI (data privacy, student consent, and the potential for bias) remain underexplored. As GenAI continues to be embedded in educational settings, the need for clear ethical guidelines will become increasingly important. These concerns highlight the urgent need for a responsible GenAI-enhanced ID framework that maintains pedagogical transparency and promotes active learning strategies with a focus on the ethical implications of its use.

Tronto’s Ethics of Care

Fisher and Tronto (1990) define care as a species activity that includes everything we do to maintain, continue, and repair our world so that we can live in it as well as possible. Tronto (1993) sees caring as a cooperative activity among humans. Following Tronto (1993), Pantazidou and Nair (1999) proposed a service-oriented approach and demonstrated how Tronto’s care ethics principles could apply to engineering. Kardon (2005) applied Tronto’s work to case studies to assess the ethical adequacy of engineering roles and responsibilities, judging an engineering phase as ethically adequate when it included each element, i.e., attentiveness, responsibility, competence, and responsiveness. Campbell et al. (2012) also applied Tronto’s framework to analyze engineering students’ attitudes toward cross-disciplinary collaboration to address complex real-world problems. The authors concluded that most students appeared to lack interdisciplinary appreciation and a collaborative approach to addressing the ethics of care in engineering.

Like the work on the ethics of care in engineering, ID also emphasizes the importance of care and social justice in its practice. Govender and colleagues (2023) adopted Tronto’s concepts of paternalistic and parochial care to reflect on their own care practices and to highlight the importance of creating spaces where students can co-create care relationships. The authors argued for a caring learning design that considers learners' diverse backgrounds, experiences, and learning styles as well as the broader social and cultural contexts in which learning occurs (Govender et al., 2023). Following Fisher and Tronto’s (1990) conception of care as encompassing well-being activities and practices, Zamora et al. (2025) shared how care can be an explicit tool for social transformation when it is rooted in actual practices and mechanisms aimed at lasting social reorganization.

Care Ethics and GenAI-enhanced ID

We envision ID as a field that inherently embodies ethics of care principles in its emphasis on collaboration, empathy, and compassion. ID is conceptualized as a caring profession that fosters relationships and understanding among diverse stakeholders. These relationships require a deep understanding of stakeholders’ needs and the learning context, mirroring the contextual awareness central to care ethics. As ID involves multiple perspectives, building relationships becomes a critical aspect where care ethics can be applied.

We will describe how the four phases of Tronto’s ethics of care can be aligned with some of the GenAI ethical considerations across the first two of these perspectives: (1) IDers use GenAI to enhance the ID process, and (2) IDers use GenAI to develop instructional content.

Caring about requires attentiveness to the need to be cared for. This first phase of Tronto’s ethics of care aligns with one of the most pressing ethical considerations: a lack of diversity. “In the end, a relatively small group of predominantly white men determines how AI systems are designed, for what purposes they are optimized, what is attempted to realize technically, etc.” (Hagendorff, 2020, p. 105). As with any technology, there are also issues of equity and accessibility; in many institutions, some students have no reliable internet connectivity, which translates into no access to GenAI and other tools and creates a divide between students who have access and those who do not. These ethical considerations should be addressed by stakeholders working in collaboration (Al-Zahrani, 2024). It is important for faculty, students, and HE stakeholders to stay informed about emergent technologies and the issues those technologies bring to the HE landscape (Yang & Beil, 2024). IDers need to raise awareness of the implications of GenAI issues and concerns.

Taking care of is based on responsibility; these responsibilities are not necessarily formal obligations but are rooted in contextual and cultural practices. For example, ID as a field and, specifically, HE institutions can take more responsibility for providing policies and clear guidelines for the use of GenAI (Duah & McGivern, 2024; Eke, 2023; Shailendra et al., 2024). Currently, only a few HE institutions have instituted guidelines for the responsible use of GenAI in research, teaching, and learning (Al-Zahrani, 2024). These guidelines vary from blocking the use of GenAI to making changes to the HE setting (Ahmad et al., 2023; Duah & McGivern, 2024) and will require ongoing faculty training (Yang & Beil, 2024).

Care giving is about competence, which Tronto aligns with moral consequentialism. As Tronto stated, sometimes care will be inadequate because the resources available to provide for care are inadequate. This care element can be aligned with the challenge of over-reliance on GenAI (Ahmad et al., 2023; Bozkurt, 2024). There is a concern that students will not acquire the skills they are supposed to gain in their classes if they use GenAI tools instead of doing the work themselves. This might create other problems, such as individuals becoming dependent on GenAI for simple work or losing decision-making skills (Ahmad et al., 2023; Shailendra et al., 2024). GenAI also challenges academic integrity and introduces bias (Ahmad et al., 2023; Al-Zahrani, 2024; Haleem et al., 2022; Shailendra et al., 2024). Bozkurt (2024) proposes redefining the concept of academic integrity to empower students for integrity by engaging in collaborative learning, enhancing critical thinking skills, and developing AI literacy (Rasul et al., 2024). There is little transparency from their creators about how GenAI tools are trained. Since GenAI models are large language models (LLMs) trained on extensive collections of digital texts, they can adopt societal biases related to race, gender, culture, and other factors in their functioning and results (Jadhav et al., 2024). GenAI tools are also known for hallucination; they may provide inaccurate information that contributes to the spread of fake news (Hagendorff, 2020), which may occur because of the way those tools have been trained.

Care receiving requires responsiveness from the care receiver, who, in the context of HE, is the student. In the age of GenAI, one of the biggest challenges is protecting users’ privacy (Ahmad et al., 2023; Shailendra et al., 2024). Yang and Beil (2024) called for a “critical evaluation of existing data privacy policies and practices” in HE to continue supporting student learning (p. 63). Cyber-attacks may also intensify, since GenAI tools collect large amounts of user data in order to predict the information given to users (Ahmad et al., 2023; Shailendra et al., 2024; Yang & Beil, 2024).

Case Examples

The following two case examples come from the practical experiences of the authors of this paper. The case examples address our perspectives on GenAI-enhanced ID and Tronto’s ethics of care. The authors are practicing IDers who use GenAI in different contexts while working on course design or development with university faculty in both long-term and short-term collaborations. The first case example describes a practical ID situation in which Ji Hyun, as the IDer, helped faculty develop a new course. The second case example demonstrates a collaborative partnership with engineering faculty in which Larisa played the role of the IDer. These two case examples are not typical case studies per qualitative research design standards; they are practical examples of how GenAI, and specifically ChatGPT, was integrated by two IDers.

Case Example 1

This case example examines the redesign of a traditional graduate course on Quantitative Research Methodologies Using R to incorporate ChatGPT and R Studio's AI-powered add-ons, allowing students to use these tools for tasks like code suggestions, statistical explanations, data interpretation, and research support. A preliminary pilot study, conducted with five students who previously took the traditional version of the course, served as a formative step to explore the feasibility and gather initial feedback on integrating these GenAI-enhanced instructional practices before a larger-scale implementation. While the small sample size limits the generalizability of findings, the insights gained were crucial for refining the course design and anticipating potential challenges.

The course design addresses challenges in GenAI integration, including ethical use, academic integrity, and fostering critical thinking, while accommodating diverse student backgrounds in R programming and research methodologies. This redesign uses Tronto’s care ethics framework and User Experience (UX) methods to ensure a balance between GenAI integration and human oversight. This approach centers around creating inclusive and equitable learning experiences, where GenAI enhances student engagement without compromising ethical standards. Table 1 outlines how the phases of care ethics align with the UX methods and the ongoing involvement of human oversight to ensure both ethical and effective use of GenAI.

Table 1

Integration of Care Ethics, UX Methods, and Human Oversight

Phase: Care About
Description: Identifying and understanding students' needs and concerns about using GenAI in quantitative research
UX Methods: Empathy Mapping, Contextual Inquiry
Human Oversight: ChatGPT analyzes student engagement data and course feedback to generate initial empathy maps. The IDers conduct contextual inquiry sessions, observing students' workflows to refine their understanding of their challenges and concerns. Human-led workshops address concerns that ChatGPT may overlook, such as the ethical implications of using ChatGPT for research.

Phase: Taking Care Of
Description: Assuming responsibility for addressing identified needs in course design
UX Methods: Journey Mapping, Persona Development
Human Oversight: ChatGPT generates draft learner journeys and personas based on analysis of student behavior and performance data. The IDers and professor review these ChatGPT-generated drafts, adjusting them to ensure they align with ethical considerations and reflect diverse learner needs. Responsibility sessions involve human-led discussions to ensure all identified needs are thoughtfully addressed, particularly regarding ethical GenAI use.

Phase: Care Giving
Description: Meeting students' needs through the design of course materials and activities
UX Methods: Prototyping, Usability Testing
Human Oversight: ChatGPT supports rapid prototyping of course materials, such as R code examples and research scenarios. Usability testing is conducted with a focus on how students interact with ChatGPT-generated content. Human-led iterations refine the materials to foster critical reflection on GenAI’s role in quantitative research, ensuring that students remain engaged with both the methodology and ethical considerations.

Phase: Care Receiving
Description: Monitoring and responding to students' feedback on the GenAI-integrated course
UX Methods: Feedback Analysis, Longitudinal Studies
Human Oversight: ChatGPT analyzes feedback and engagement data throughout the course. The IDers lead longitudinal studies to track changes in students' attitudes toward GenAI use in research, incorporating insights into future iterations of the course. Human-led design sprints address personal and emotional growth based on student responses, ensuring that care is adapted as the course progresses.

Care About

During the ‘Care About’ phase, the ID team used empathy mapping and contextual inquiry to identify and understand students' needs when using GenAI in quantitative research. The process began with a ChatGPT-assisted analysis of previous student performance metrics and end-of-course surveys. To ensure privacy, all data was anonymized before processing. ChatGPT analyzed the data points, revealing patterns and common challenges encountered in the course's traditional version. The analysis revealed insights, such as significant variance in students' prior programming experience and a gap between theory and practice, that led to the decision to incorporate ChatGPT into the course for more real-time support and examples during the coding process. Traditional office hours and asynchronous feedback methods were not fully meeting this need for real-time, individualized support. The data showed a consistent pattern: students struggled with the complexity of R programming, felt overwhelmed by its syntax and structure (which often delayed their grasp of advanced statistical concepts), and faced challenges in translating theoretical statistical concepts into practical R code. The data also indicated that students often spent disproportionate amounts of time debugging code and troubleshooting errors, which detracted from their ability to focus on the underlying statistical principles and research methodologies.
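
As a minimal sketch of this kind of preprocessing (not the team's actual pipeline), the following R code shows one way student records could be anonymized before any ChatGPT-assisted analysis; the data frame and its columns (student_id, name, email, final_grade, survey_comment) are hypothetical.

```r
# Minimal sketch: anonymize student records before sharing them with an LLM.
# The data frame and column names below are hypothetical examples.
library(dplyr)
library(digest)

anonymize_records <- function(df) {
  df %>%
    # Replace the direct identifier with a one-way hash (pseudonym).
    mutate(student_id = vapply(student_id,
                               function(x) digest(x, algo = "sha256"),
                               character(1))) %>%
    # Drop columns that could re-identify a student.
    select(-email, -name)
}

# Example usage with clearly invented records.
records <- data.frame(
  student_id = c("A123", "B456"),
  name = c("Jane Doe", "John Roe"),
  email = c("jdoe@example.edu", "jroe@example.edu"),
  final_grade = c(88, 92),
  survey_comment = c("Struggled with R syntax early on.",
                     "Wanted more debugging support."),
  stringsAsFactors = FALSE
)

anonymized <- anonymize_records(records)
print(anonymized)  # Only the de-identified frame would feed the thematic analysis.
```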

Given these findings, the ID team recognized an opportunity to utilize ChatGPT's capabilities to tackle these difficulties. ChatGPT's ability to provide immediate, context-aware coding suggestions and explanations aligned perfectly with the need for real-time support in R programming. By allowing students to use ChatGPT, the course could offer a more adaptive and responsive learning environment. This approach aims to level the playing field for students with varying levels of programming experience, provide more immediate and personalized support, and ultimately enhance the overall learning experience in quantitative research methodologies.

To address the innovative component of GenAI integration, the IDers performed inquiry sessions with five students who had previously completed the regular course version to examine viewpoints on the possible use of ChatGPT and R Studio's AI-enhanced add-ons in the curriculum. Participants volunteered for the study and were selected purposefully to represent a range of prior R programming experience levels (based on self-reporting from the original course) and diverse academic backgrounds within the graduate program. Sessions were audio-recorded (with consent), transcribed, and subjected to thematic analysis to identify recurring patterns, benefits, concerns, and suggestions regarding potential GenAI integration.

The human-centered approach revealed nuanced worries and expectations surrounding AI integration. Students expressed a combination of enthusiasm regarding AI's capacity to enhance learning and offer immediate assistance, alongside concerns that excessive dependence on AI might hinder fundamental comprehension. Ethical concerns were also raised about the influence of AI on academic integrity and its perception in professional research environments. The team then synthesized insights from both analyses to create an empathy map capturing students' thoughts, feelings, anticipated pain points, potential gains, and ethical concerns regarding GenAI integration in quantitative research.

Figure 1

Empathy Map


The empathy map directly informed the development of course content and activities, tailored to address students' specific needs and concerns. One of the primary outcomes was identifying the need for clear, actionable guidelines on the ethical use of GenAI in research. This led to the creation of a flexible GenAI integration approach that accommodates varying levels of comfort with the technology.

Taking Care Of

To address the needs identified in the previous phase, the ID team refined the course design by using GenAI tools to create initial drafts of learner journeys and personas. These outputs offered a data-driven understanding of how students with diverse backgrounds and varying levels of R programming experience might navigate the course while considering their concerns about GenAI usage and ethics.

ChatGPT was prompted with instructions informed by the empathy map and interviews from the 'Care About' phase. These findings highlighted key challenges, such as the reliance on real-time support and ethical concerns surrounding GenAI in research. Then, the ID team conducted a critical review to ensure these outputs aligned with the course's ethical and pedagogical goals. The IDers and professor reviewed the outputs, identifying areas where the GenAI lacked nuance or failed to capture the full complexity of student needs, especially regarding ethical concerns. The team adjusted the learner journeys and personas, ensuring that they addressed both technical and ethical dimensions. For instance, the persona of Alex Chen (Figure 2), a first-year doctoral student interested in ethical AI use, was refined to more comprehensively address concerns about bias mitigation, academic integrity, and the responsible use of GenAI in research.

Figure 2

Learner Persona Example


The finalized learner journeys (Figure 3) emerged from this iterative process by reflecting a holistic view of how students might engage with the course and where GenAI could offer support while addressing ethical implications raised during the earlier phases. This balance ensured that the course was designed not only for technical proficiency but also for fostering ethical reflection and critical thinking in the context of GenAI-enhanced learning.

Figure 3

Learning Journey Map


Care Giving

The focus was on addressing the care needs identified earlier by developing course materials and activities that were pedagogically sound, accessible, and adaptable to the diverse needs of students. ChatGPT was used to rapidly create R code examples and research scenarios that corresponded with the course's learning objectives. The IDers critically assessed whether these activities truly encouraged students to understand underlying research methods and programming concepts, rather than simply automating tasks. Examples of ChatGPT-assisted tasks that helped students engage with quantitative research using R were provided. These prompts allowed students to interact with ChatGPT for various steps of their statistical analyses, from checking assumptions for a t-test to filtering data and running detailed tests with confidence intervals. The ChatGPT-generated prompts were designed to align with the learning objectives of the course, encouraging students to think critically about the steps involved in conducting statistical tests. Each task not only facilitated the execution of technical processes but also prompted students to reflect on the reasoning behind each statistical procedure.
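
To illustrate, the sketch below shows the kind of R workflow such a ChatGPT-assisted task might walk students through: checking a normality assumption, filtering the data, and running a t-test with a confidence interval. The dataset (the built-in mtcars) and the variables chosen are illustrative assumptions, not the course's actual materials.

```r
# Illustrative R task of the kind described above (not an actual course file):
# compare fuel efficiency (mpg) between automatic and manual cars in mtcars.
library(dplyr)

data(mtcars)

# Step 1: Keep the relevant variables and label the two groups of interest.
mpg_data <- mtcars %>%
  select(mpg, am) %>%                                    # am: 0 = automatic, 1 = manual
  mutate(transmission = ifelse(am == 1, "manual", "automatic"))

# Step 2: Check the normality assumption within each group (Shapiro-Wilk test).
by(mpg_data$mpg, mpg_data$transmission, shapiro.test)

# Step 3: Run Welch's two-sample t-test with a 95% confidence interval.
result <- t.test(mpg ~ transmission, data = mpg_data, conf.level = 0.95)

# Step 4: Report the estimates and interval so students interpret, not just run, the test.
result$estimate   # group means
result$conf.int   # 95% CI for the difference in means
result$p.value
```

In the assignments themselves, students would be prompted to ask ChatGPT why each step matters (e.g., why Welch's correction is used) rather than simply executing the code.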

Usability testing was conducted with the five pilot participants to gather insights into how well the materials supported learners with different abilities and access levels. The feedback highlighted the need for alternative formats to accommodate students with disabilities or limited technological resources. In response, the IDers reformatted specific ChatGPT-generated content, such as R code examples, research scenarios, and instructional guides, into accessible formats. For visually impaired learners, materials were provided as structured text files compatible with screen readers, including clear headings, properly formatted code with comments, and detailed alternative text for graphs and tables. For students with limited internet connectivity, downloadable offline versions of the content were made available, ensuring they could access the materials without needing constant internet access. This approach ensured that all students, regardless of their abilities or access to technology, could fully engage with the ChatGPT-enhanced course materials.

Beyond accessibility, the course design incorporated elements of personalization and flexibility to adapt to individual learner needs. During the pilot usability testing, the GenAI's ability to adjust the difficulty of tasks based on student performance proved valuable. Participants who demonstrated advanced coding skills were presented with more complex problems, while those who needed additional support were offered more basic examples and tailored feedback. This flexibility allowed students to progress at their own pace, providing a more personalized learning experience.
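
As an illustration of what such an accessible, screen-reader-friendly code file might look like (an assumption about form, not one of the course's actual files), the short R example below replaces a figure with a plain-language textual summary and uses descriptive comments throughout.

```r
# SECTION: Descriptive statistics for exam scores (screen-reader-friendly version)
# This script prints a textual summary instead of relying on a histogram image.
# 'exam_scores' is a hypothetical numeric vector of student scores.

exam_scores <- c(72, 85, 90, 66, 78, 88, 95, 70)

# Compute the summary statistics that a figure would otherwise display visually.
score_summary <- summary(exam_scores)
score_sd <- sd(exam_scores)

# Print a plain-language description (serves as alternative text for the figure).
cat("Summary of exam scores (n =", length(exam_scores), "):\n")
print(score_summary)
cat("Standard deviation:", round(score_sd, 2), "\n")
cat("Description: scores range from", min(exam_scores), "to", max(exam_scores),
    "with a median of", median(exam_scores), "points.\n")
```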

Care Receiving

The 'Care Receiving' phase focuses on continuously responding to how students will interact with the GenAI-enhanced course. Each week, students will provide feedback on their experiences through surveys and reflection exercises, which will allow for real-time adjustments to course materials and teaching strategies. This iterative approach ensures that the course evolves based on the unique needs of the learners, creating a dynamic learning environment that adapts to their preferences. Table 2 presents an example of the weekly reaction survey, designed to gather insights into how students are engaging with ChatGPT.

Table 2

Weekly Reaction Survey

Reflect on Your Learning Experience

  • How well did the AI tools (including ChatGPT) support your learning of R programming this week?

  • What challenges did you face when applying AI to your coding or research?

  • How has AI influenced your understanding of research and its ethical implications?

  • Do you feel your learning pace is right for you? How can the tools better support you?

  • How has AI helped or hindered collaboration with peers in class activities?

  • How has your view of the role of GenAI (including ChatGPT) in research evolved this week?

  • What ethical considerations have emerged for you as you use GenAI (including ChatGPT) in your research?

This continuous feedback loop not only helps improve how the GenAI is utilized but also empowers students to shape their own learning experiences by influencing how course content is delivered and refined over time. Table 3 outlines how ChatGPT-generated suggestions are complemented by the human instructor’s review, ensuring a balance between GenAI efficiency and thoughtful pedagogical adjustments that support students’ learning.

Table 3

Iterative Design Process Overview

Stage: Initial R Coding Exercises
Activity: Students complete exercises with AI-generated code samples
AI Involvement: ChatGPT provides code suggestions and real-time feedback
Human Oversight/Adjustments: Instructor reviews code, refines prompts, and adds supplementary materials

Stage: Case Studies on Ethical AI Use
Activity: Ethical dilemmas are explored through AI-generated cases
AI Involvement: ChatGPT generates initial case scenarios
Human Oversight/Adjustments: Instructor adds complexity, designs discussions around ethical implications

Stage: Peer Review Sessions
Activity: Students critically evaluate AI-generated outputs
AI Involvement: ChatGPT supports analysis and feedback
Human Oversight/Adjustments: The instructor moderates and provides additional guidance

Stage: Reflective Journals
Activity: Students document AI-related experiences and ethical insights
AI Involvement: AI provides prompts for reflection and summarization
Human Oversight/Adjustments: The instructor reviews journals to identify emerging patterns

The iterative design process also applies to case studies on ethical dilemmas related to GenAI, which encourages students to critically reflect on the role of GenAI in research. These case studies address ethical questions such as bias in GenAI outputs, the limits of GenAI in substituting human analysis, and the handling of sensitive data in GenAI models (Table 4).

Table 4

Ethical Dilemmas in AI-Generated Case Studies

Case Study: AI Bias in Research Output
ChatGPT: Generated initial case scenario
Human Role: Refined scenario, added complexity
Ethical Question: How do we ensure AI tools do not reinforce biases in research?

Case Study: AI as a Substitute for Human Analysis
ChatGPT: Generated comparative data
Human Role: Created discussions around AI limitations
Ethical Question: When does reliance on AI compromise human insight?

Case Study: Privacy Concerns in Data-Driven AI Models
ChatGPT: Generated examples involving data privacy
Human Role: Developed a case study focusing on compliance with data privacy regulations
Ethical Question: How should researchers handle sensitive data in AI models?

Students will maintain reflective journals documenting the ethical implications of using GenAI in research. These journals will offer insights into how GenAI has impacted their understanding of research methodologies and collaboration with others. This reflective practice makes it possible to capture more nuanced feedback about students' intellectual and emotional development, promoting personal growth alongside technical learning.

Integrating GenAI within a Care Ethics framework, this redesigned course aims to boost student confidence in R programming and the critical evaluation of AI-generated output, anticipating enhanced engagement and performance through thoughtful AI use. This approach is grounded in core ethical principles like transparency, continuous dialogue, critical human oversight, and a focus on the human elements of learning, ensuring AI serves as a supportive tool within a caregiving context. Recognizing the shift towards AI-driven coding paradigms (e.g., vibe coding), the course adapts by emphasizing effective AI prompting, code evaluation, debugging, and deeper statistical understanding over purely manual coding. Acknowledging the formative nature of the initial pilot, a comprehensive mixed-methods evaluation, including longitudinal tracking, is planned for a larger student cohort starting in Spring 2026 to rigorously assess the long-term effectiveness, challenges, and implications of this GenAI-enhanced, care-centered instructional model.

Case Example 2

This case example overviews the redesign of an existing online course on Database Management Systems, which was piloted in spring 2025. The redesign was a collaboration between an IDer and an engineering faculty member, who reworked the foundational weeks to prepare engineering students for Structured Query Language (SQL), the main outcome of the course, with the integration of ChatGPT. The first two redesigned foundational modules emphasized the importance of critical thinking among engineering students who used GenAI for coding. One of the challenges was that engineering students tended to use GenAI, i.e., ChatGPT or Microsoft Co-Pilot, during their coding process. Without a critical and ethical approach to using GenAI in the course, however, it was difficult for them to develop the level of coding competency needed to be competitive in the workforce market.

The course was redesigned based on the principles of Tronto’s four phases of care ethics, integrated with the Practical Inquiry Model (PIM) (Garrison et al., 2001) to ensure a student-centered collaborative learning environment. Table 5 shows the integration of care ethics phases in alignment with the PIM principles and the model implementation.

Table 5

Integration of Care Ethics, Practical Inquiry Model, and SQL-AI Implementation

Phase: Care About
Description: Attention to engineering students’ challenges in learning Structured Query Language (SQL)
Practical Inquiry Model (PIM): Triggering Events - understanding the existing problem, asking questions, and understanding the dilemma
SQL-AI Implementation: Read students’ feedback and evaluate the quality of the assignments from previous semesters; listen to the course instructor’s feedback about instructional experiences where challenges happened

Phase: Taking Care Of
Description: Taking responsibility for designing learning modules to address the challenges identified
Practical Inquiry Model (PIM): Exploration - explore resources to meet the needs, compare and contrast possible ideas/suggestions, brainstorming
SQL-AI Implementation: Collaborate with two junior IDers to test a new design; create the course map with identified objectives to align with the SQL skills and foundational knowledge

Phase: Care Giving
Description: Perform instructional design tasks that involve knowledge of learning sciences
Practical Inquiry Model (PIM): Integration - combine experiences to build a new idea, find a solution, build on/add to previous ideas
SQL-AI Implementation: Discuss the instructional sequence for two new modules based on the principles of cognitive learning; alpha test - prepare a mockup module with foundational knowledge for GenAI (ChatGPT) integration; pilot part of the mockup design to test students’ reactions

Phase: Care Receiving
Description: Observe engineering students’ reactions, behavior, and feedback when cooperative collaboration was integrated into the modules to prepare for the GenAI integration
Practical Inquiry Model (PIM): Resolution - defend the final solution
SQL-AI Implementation: Collect instructional reflection and students’ informal feedback after the mockup pilot; beta test - revise and update all the modules following the identified challenges during students’ work

Care About

During this phase, the IDer focused on understanding the needs by questioning the course instructor, who had been teaching the course for several years. The course instructor, an engineering faculty member, had observed changes in engineering students’ behavior during the COVID-19 pandemic, when the course moved from a traditional face-to-face mode to an asynchronous online mode. After multiple iterations of the online course version, the challenges persisted, whereas the traditional course version had not shown such changes in students’ behavior. With ChatGPT's arrival in November 2022, more challenges arose due to ethical considerations of human coding versus GenAI coding. While face-to-face students coded in class, online students continued their coding online. Even though they used ChatGPT without disclosing its use, it was obvious that students were using the generated code without critically evaluating its correctness. This also raised the question of whether the course could prepare high-quality specialists for the current market if students relied only on ChatGPT in their work.

Further, understanding the dilemma between the use of ChatGPT and human coding surfaced another challenge in the course: students could not achieve the highest level of critical thinking when they worked with Structured Query Language (SQL). SQL requires analytical thinking that students could lose during the foundational phase. The IDer and the instructor went through students’ feedback from all the semesters and combined the responses into one document to find patterns. One of the patterns we identified was that students struggled with different types of assignments, where the transition from the conceptual part to the applied part was difficult for them. For example, students said, “The assignments quickly and constantly go from conceptual to extremely advanced,” or “Some of the assignments were dull and tedious.”

Then, students’ assignments were collected and analyzed for the quality of coding. The quality had dropped tremendously, revealing a gap in the course, especially between the first two weeks and the transition to weeks 3 and 4. As the instructor explained, when students move to the more complex diagram creation in weeks 3 and 4, they feel lost. The instructor considered that students struggle with the concepts, syntax, commands, and principles. The instructor’s feedback helped identify the gap as each week of the course was reviewed. The gap was identified as missing scaffolding during the first two foundational weeks, where some students used ChatGPT. For example, we found that the sequence of the course assignments was the following: students completed a quiz, participated in discussions to understand the problem, and then completed an individual assignment in addition to submitting the practice problem assignment. The triggering events phase was finalized with the identification of the problem: how to take care of the dilemma and the gap to help engineering students overcome the existing challenges.

Taking Care Of

'Taking Care Of' included an exploration phase to identify resources to meet students’ needs, compare possible ideas, and brainstorm. During this phase, the IDer worked on the first two foundational modules, designed to minimize the identified challenges with coding skills. While it was clear that coding skill development required more support from the instructor, it was also important to build students' self-regulation, as the instructor could not provide 24/7 support. To shape the module design, the IDer collaborated with two junior IDers, graduate students from the Educational Technology program. During the exploration phase, the IDer and the instructor used ChatGPT to test the first two assignments in terms of the accuracy of the generated code. The code was invalid and contained several errors that only experts could identify. As this was an exploratory phase, both the IDer and the instructor searched for any possible solutions to help students develop coding skills in addition to traditional instructional support. Therefore, it was decided to create a ChatGPT guide to support students’ learning during the design and diagram-drawing parts of the course.

Table 6

Example of the ChatGPT Guide Developed by the Engineering Faculty

Phase

Action

1. Attempt the Problem First

  1. Understand the problem

  2. Plan Your Solution: outline the SQL clauses and operations, i.e., SELECT, JOIN, WHERE, and GROUP BY.

  3. Write the SQL Query

  4. Test and Debug: Oracle Live SQL or SQL*Plus (not in ChatGPT yet)

2. Consult ChatGPT for Guidance

  1. Explain Your Attempt

  2. Ask Specific Questions

  3. Request Conceptual Explanations: Can you explain how the window function works?

  4. Ask for an Example Query: Can you provide an example of a query that retrieves several columns from a table and uses an alias?

3. Practice and Explore Further

  1. Practice Regularly

  2. Explore Advanced Topics

  3. Seek Additional Resources

4. Avoid Cheating

Use ChatGPT as a tutor, not as a crutch.

5. Finding a ChatGPT SQL Expert

  1. Access Platform at https://chatgpt.com to interact with an SQL Expert.

  2. Registration and Login.

  3. Navigating to SQL Expert.

  4. Using SQL Expert.

6. Tips for Using SQL Expert Effectively

  1. Attempt the Problem First.

  2. Be Specific.

  3. Explore Features.

  4. Follow-Up.
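
For readers unfamiliar with the kind of query the guide refers to (e.g., “a query that retrieves several columns from a table and uses an alias”), the hypothetical sketch below shows one such query. It is wrapped in R’s sqldf package purely so the example is self-contained and runnable; in the course itself, students would run plain SQL in Oracle Live SQL.

```r
# Hypothetical example of the kind of aliased, multi-column query the guide targets.
# sqldf runs SQLite SQL over ordinary R data frames, so this example needs no database server.
library(sqldf)

students <- data.frame(
  student_id = c(101, 102, 103),
  first_name = c("Ana", "Ben", "Chen"),
  gpa        = c(3.6, 3.2, 3.9)
)

# Retrieve several columns and use aliases (AS) for both the table and the columns.
sqldf("SELECT s.student_id AS id,
              s.first_name AS name,
              s.gpa        AS grade_point_average
       FROM   students AS s
       WHERE  s.gpa >= 3.5")
```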

The chosen option was to prepare students during the first two weeks with foundational knowledge before they moved to week 3, where ChatGPT was integrated, and before considering more sophisticated GenAI software for SQL coding, such as SQL Expert or SQL Data Analyst. During the exploration phase, the course instructor also explored different collaborative options so that students could move to group coding more easily in week 3. For example, during the first week, we followed the sequence from “identify and annotate the Entity Relationship Diagram (ERD) elements” to “analyze errors in ERD.”

The role of the two junior IDers in this process was to complete the two assignments as students in order to understand each instructional step. Based on their feedback, both assignments were fully revised, and new instructional steps were created. The first two weeks were mapped based on the new assignment guidelines, following the main goal of creating a course map with identified objectives aligned with SQL skills and foundational knowledge. To do so, we created a table to explain how learning progress usually takes place, from understanding concepts to explaining them (Table 7). To fill the gap, the previous assignment sequence was changed, and a new collaboration was added where the transition from foundational concepts to application took place. Table 7 shows that the previous version included seven assignments without enough scaffolding to transition from conceptual understanding to applying the concepts. In the new version, the existing assignments were converted into four assignments: a new paper assignment addressing the ethical considerations of AI in education was added; the previous quizzes were merged into one learning quest; the peer-review discussions were removed; and a new collaborative activity was added. The collaboration took the form of cooperation in which students could see their peers’ contributions and, if needed, correct them or ask questions. Finally, a final work for assessment was added in place of the previous multiple individual assignments. This change scaffolded the move from conceptual learning to applied learning while preparing students for the use of GenAI in the course.

Table 7

The Course Map Example

Module: Module 1: Overview of Database Concepts

Sequence of Verbs by Levels: understand - recognize - identify - differentiate - define - describe - explain - articulate
(** The IDer did not focus on explaining why “understand” is not a good idea to use; instead, the IDer focused on the instructor’s ability to identify students’ needs.)

Concepts To Learn Identified by the Course Instructor:
Data, table, record, attribute, users, clients; Database Management System (DBMS), database catalog, query language; database design, Entity-Relationship Model, relational model
RDBMS, NoSQL, SQL, Big Data systems, AI in Databases; Big Data ecosystem
DDL, DML, DBAs, VDL, SD, database schema, data independence, metadata, transaction-processing applications, host language, database utilities

Assignments:

Previous Sequence:
Syllabus Assignment
Quiz
Discussion (Peer Review)
Quiz
Practice Problems
Individual Assignment “Relational Database Specifications”
Discussion (Peer Review)

Updated Sequence:
Ethical Consideration of AI in Education: Individual Assignment
Learning Quest
Collaborative Work: Identify and Annotate ERD Elements
Relational Database Specifications Individual Assignment

Care Giving

The integration phase included the application of learning sciences principles to complete the task. The IDer and the course instructor discussed what type of collaborative assignments could better serve the goal of helping students transition to further collaboration in week 3, where they were required to code and perform tasks together with ChatGPT. By applying the principles of learning sciences, we decided to use both individual and cooperative approaches to prepare students for group coding with ChatGPT in the form of a dialogue, especially because the students were new and did not know each other well. Students completed the first new assignment, entitled “Considering the Use of Generative AI in Education,” in which they were asked to write a short research paper exploring the ethical implications of using GenAI tools like ChatGPT in educational settings. Students focused on the benefits, the risks, and the development of guidelines for responsible use. The goal of this assignment was to hear from the students themselves instead of the course instructor telling them about the improper use of GenAI in the course. For example, one of the students created the following research questions for the ethical consideration of using ChatGPT in the course:

In what ways does ChatGPT influence fairness in educational practices?

What are the potential dangers of students relying too heavily on AI for learning?

How can educators uphold academic integrity while integrating generative AI into their teaching approaches?

Further, the same student proposed the following ethical framework while completing this assignment:

Promote Transparency: Ensure students and educators fully understand the capabilities and limitations of AI tools.

Address Plagiarism: Deploy effective detection systems and establish clear guidelines for the ethical use of AI.

Safeguard Privacy: Protect user data by anonymizing and securely managing it.

Support Active Learning: Utilize ChatGPT as a supplementary tool to enhance, rather than replace, critical thinking and problem-solving skills.

Supervise AI Usage: Educators should actively monitor AI use to minimize misuse and overdependence.

The second assignment was collaborative work in the form of cooperation that introduced students to the basics of entity-relationship diagrams (ERDs) and their elements. Students worked together to identify the main elements of an ERD using Lucid and reflected on their understanding. To ease students’ frustration with the first coding experience, the assignment minimized anxiety by asking students to identify and annotate at least one element from each of the six categories that were listed for them; previously, students were not provided with the list. Students identified their contributions and addressed each part of the diagram. Once students felt comfortable with the new tool, Lucid, and the new programming language, they were asked to provide feedback by summarizing their classmates’ solutions.

Figure 4

Student’s Example of the Annotated ERD



Once the learning science principles were identified for "Care Giving", the mock-up of the first two modules was prepared. The mock-up version was implemented in a pilot phase, called the alpha version, to test the robustness of the learning sequencing. At the end of the pilot, we collected students’ feedback and the instructor’s observations to build a beta version in the fall of 2025.

Figure 5

Mock-Up of Module 1 Example



Care Receiving

The 'Care Receiving' stage occurred when engineering students experienced the new learning sequence with the mock-up, developing coding skills from foundational knowledge so that they could transition more easily to module 3 with ChatGPT. We were able to observe students’ behavior by collecting their questions, reactions, and any other informal responses they sent to the course instructor. Based on the instructor’s informal observations, the students from spring 2025 were more active in completing the course assignments than the students from previous semesters. One assumption the IDer and the course instructor expressed was that the first two phases (triggering event and exploration) helped these students build the foundational knowledge needed for using ChatGPT during group coding. As students used only the ChatGPT guide during this pilot, we did not collect data from their group coding assignments; we tested the guide itself. Instead, we asked students to reflect on their experiences of using ChatGPT during group coding while following the ChatGPT guide newly developed by the course instructor. Below is one of the reflections, demonstrating students’ overall impression of integrating ChatGPT into the coding process:

For my ChatGPT experience, I specifically asked ChatGPT to "guide without providing the solution," incorporating questions of varying difficulty and complexity. As a first-time user learning to code in Oracle SQL, I found this approach incredibly helpful. ChatGPT offered step-by-step explanations, breaking down each question into smaller, more digestible components. It felt as though I were receiving a tailored response, like what I might expect from a professor or a teaching assistant in my course.

In addition, because each PIM phase based on Tronto’s ethics of care was carefully examined before implementing the new assignments, students did not ask any additional questions about how to complete the assignments and never submitted unfinished work. Finally, for the first time since the course instructor began teaching this course, the end-of-semester evaluation was unexpectedly the highest compared with previous semesters.

Framework for Inclusive and Equitable Higher Education Practices

The two case examples from the practical experiences of the IDers, who are the authors of this paper, show the importance of considering the ethics of care when planning to use GenAI in the ID process. We propose Tronto’s four-phase ethics of care framework, consisting of caring about, taking care of, care giving, and care receiving, for GenAI-enhanced ID practices. To consider ethical issues when GenAI is involved, it is necessary to understand the process from three ID perspectives: 1) GenAI ID process enhancement, 2) GenAI instructional content development, and 3) GenAI integration. The table below describes each phase of the process to guide and inform where and how GenAI needs to be used and why. The framework also shows which GenAI capabilities to consider in designing inclusive and equitable environments so as to avoid, as far as possible, ethical issues during the course design and development stages (Table 8). In addition, we propose a checklist for evaluating the implications of GenAI tools in ID. We believe the checklist can serve as a guide for evaluating the ethical dimensions of integrating specific GenAI tools into learning environments (Appendix A).

Table 8

Framework for Inclusive and Equitable Higher Education Practices

Tronto’s Ethics of Care Phase: Caring About - Attentiveness
ID Responsibilities: Identify and understand individual needs and concerns about using GenAI.
GenAI Ethical Considerations: Lack of diversity; issues with equity and accessibility.
ID GenAI Perspective: GenAI ID Design Process Enhancement, i.e., mapping, needs assessment, and planning.
GenAI Capabilities for Inclusive and Equitable Practices: Personalization, analysis of learner profiles, and modification of educational strategies.

Tronto’s Ethics of Care Phase: Taking Care Of - Responsibility
ID Responsibilities: Take and assume responsibility for addressing any identified individual needs in course design.
GenAI Ethical Considerations: Development of policies and guidelines for the use of GenAI.
ID GenAI Perspective: GenAI Instructional Content Development, i.e., journey mapping, persona development, exploring resources, brainstorming, and production.
GenAI Capabilities for Inclusive and Equitable Practices: Personalization of the content, automation, and real-time feedback in a large classroom.

Tronto’s Ethics of Care Phase: Care Giving - Competence
ID Responsibilities: Meet the needs by performing instructional design tasks, providing resources, designing course materials, and following up with activities based on the knowledge of learning sciences.
GenAI Ethical Considerations: Becoming dependent on GenAI, academic integrity, bias, and passive learning.
ID GenAI Perspective: GenAI Instructional Content Development, i.e., prototyping, usability testing, building a new idea by adding to previous ideas, and implementation.
GenAI Capabilities for Inclusive and Equitable Practices: Automation.

Tronto’s Ethics of Care Phase: Care Receiving - Responsiveness
ID Responsibilities: Monitor and respond to feedback, reactions, and behavior in the GenAI-integrated course.
GenAI Ethical Considerations: Privacy, student consent, and over-automation.
ID GenAI Perspective: GenAI Integration, i.e., feedback analysis, longitudinal studies, finding the final solution, and assessment.
GenAI Capabilities for Inclusive and Equitable Practices: Automation, real-time feedback.

Conclusion

This conceptual paper suggests the consideration of a framework for inclusive and equitable practices for GenAI-enhanced ID. The framework can guide the ID process following the four phases of Tronto’s ethics of care. The framework is based on three perspectives of GenAI: 1) IDers use GenAI to enhance the ID process; 2) IDers use GenAI to develop instructional content; and 3) IDers collaborate with faculty to integrate GenAI into course assignments and activities. The framework takes into account GenAI ethical issues (biases, lack of diversity, and privacy) to support the design of accessible and equitable learning based on accepted policies and guidelines.

Acknowledgement

The authors of this paper wish to thank Dr. Ioulia Rytikova, Professor and Associate Chair, MS AIT Online Program Director, Co-Director of the Personalized Learning in AIT (PLAIT) Laboratory, College of Engineering & Computing, George Mason University for providing the course resources during the development phase for the second case example.

References

Abdelghani, R., Murayama, K., Kidd, C., Sauzéon, H., & Oudeyer, P. Y. (2025). Investigating middle school students question-asking and answer-evaluation skills when using ChatGPT for science investigation. arXiv preprint arXiv:2505.01106. https://doi.org/10.48550/arXiv.2505.01106

Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M., Irshad, M., Arraño-Muñoz, M., & Ariza-Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness, and safety in education. Humanities and Social Sciences Communications, 10(1), 1-14. https://doi.org/10.1057/s41599-023-01787-8

Al-Zahrani, A. M. (2024). The impact of generative AI tools on researchers and research: Implications for academia in higher education. Innovations in Education and Teaching International, 61(5), 1029-1043. https://doi.org/10.1080/14703297.2023.2271445

Baker, R. S., & Hawn, A. (2022). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 32, 1052–1092. https://doi.org/10.1007/s40593-021-00285-9

Borah, A. R., Nischith, T. N., & Gupta, S. (2024, January). Improved learning based on GenAI. Proceedings of the 2024 2nd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT), India, 1527-1532. https://doi.org/10.1109/IDCIoT59759.2024.10467943

Bozkurt, A. (2024). GenAI et al. Cocreation, authorship, ownership, academic ethics, and integrity in a time of generative AI. Open Praxis, 16(1), 1-10. https://doi.org/10.55982/openpraxis.16.1.654

Buddemeyer, A., Walker, E., & Alikhani, M. (2021). Words of wisdom: Representational harms in learning from AI communication. LearnTec4EDI: Designing Learning Technologies for Equality, Diversity and Inclusion - Workshop, September 20-24, 2021, Bolzano, Italy and Online. https://doi.org/10.48550/arXiv.2111.08581

Campbell, R. C., Yasuhara, K., & Wilson, D. (2012). Care ethics in engineering education: Undergraduate student perceptions of responsibility. In Frontiers in Education Conference (FIE). Seattle, WA: IEEE. https://doi.org/10.1109/FIE.2012.6462370

Choi, W., Bak, H., An, J., Zhang, Y., & Stvilia, B. (2024). College students' credibility assessments of GenAI‐generated information for academic tasks: An interview study. Journal of the Association for Information Science and Technology, 76, 867–883. https://doi.org/10.1002/asi.24978

Costello, E., Welsh, S., Girme, P., Concannon, F., Farrelly, T., & Thompson, C. (2022). Who cares about learning design? Near future superheroes and villains of an educational ethics of care. Learning, Media and Technology, 1-16. https://doi.org/10.1080/17439884.2022.2074452

Dickey, E., Bejarano, A., & Garg, C. (2023). Innovating computer programming pedagogy: The AI-Lab framework for Generative AI adoption. arXiv preprint arXiv:2308.12258.

Duah, J. E., & McGivern, P. (2024). How generative artificial intelligence has blurred notions of authorial identity and academic norms in higher education, necessitating clear university usage policies. The International Journal of Information and Learning Technology, 41(2), 180–193. https://doi.org/10.1108/IJILT-11-2023-0213

Eke, D. O. (2023). ChatGPT and the rise of GenAI: Threat to academic integrity? Journal of Responsible Technology, 13, 100060. https://doi.org/10.1016/j.jrt.2023.100060

Escalante, J., Pack, A., & Barrett, A. (2023). AI-generated feedback on writing: Insights into efficacy and ENL student preference. International Journal of Educational Technology in Higher Education, 20(1), 57. https://doi.org/10.1186/s41239-023-00425-2

Fisher, B., & Tronto, J. (1990). Toward a feminist theory of caring. In E. Abel & M. Nelson (Eds.), Circles of Care (pp. 35-62). Albany: State University of New York.

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7-23. https://doi.org/10.1080/08923640109527071

Govender, S., Immenga, C., & Gachago, D. (2023). Designing systems with care: Responding to inequality in an online course in South Africa. EdTech Books. https://doi.org/10.59668/722.13023

Gray, C. M., Yilmaz, S., Daly, S., Seifert, C. M., & Gonzalez, R. (2015b). Idea generation through empathy: Reimagining the ‘cognitive walkthrough’. In Proceedings of the ASEE Annual Conference (pp. 26.871.1–26.871.29), Seattle, WA. https://dr.lib.iastate.edu/handle/20.500.12876/44670

Haleem, A., Javaid, M., Qadri, M. A., Singh, R. P., & Suman, R. (2022). Artificial intelligence (AI) applications for marketing: A literature-based study. International Journal of Intelligent Networks, 3, 119-132. https://doi.org/10.1016/j.ijin.2022.08.005

Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99-120. https://doi.org/10.1007/s11023-020-09517-8

Hauske, S., & Bendel, O. (2024, May). How can GenAI foster well-being in self-regulated learning? Proceedings of the AAAI Symposium Series, 3(1), 354-361. https://doi.org/10.1609/aaaiss.v3i1.31234

Hess, C., Kunz, S., & Heinisch, C. (2024). Teaching future project managers in using artificial intelligence-based tools in project management activities. Proceedings of ICERI2024, IATED, 1133-1140. https://doi.org/10.21125/iceri.2024.0361

Huang, Y., Li, S., & Liu, Z. (2025). Can GenAI complement supervisor support in shaping postgraduates’ research experiences? A mixed-methods approach. Studies in Higher Education, 1-19. https://doi.org/10.1080/03075079.2025.2495710

Jadhav, S., Vhatkar, S., & Aalam, Z. (2024). Bridging the gap: Exploring the revolutionary application of GenAI in language teaching and learning. Journal of Electrical Systems, 20(4s), 2185-2193. https://doi.org/10.52783/jes.2325

Kardon, J.B. (2005). Concept of “care” in engineering. Journal of Performance of Constructed Facilities, 19(3), 256–260. https://doi.org/10.1061/(ASCE)0887-3828(2005)19:3(256)

Kebaetse, M. B., & Sims, R. (2016). Using instructional consultation to support faculty in learner-centered teaching. The Journal of Faculty Development, 30(3), 31-40.

Kumar, S., Gunn, A., Rose, R., Pollard, R., Johnson, M., & Ritzhaupt, A. D. (2024). The role of instructional designers in the integration of generative artificial intelligence in online and blended learning in higher education. Online Learning, 28(3), 207-231. https://doi.org/10.24059/olj.v28i3.4501

Mueller, C. M., Richardson, J., Watson, S. L., & Watson, W. (2022). Instructional designers' perceptions & experiences of collaborative conflict with faculty. TechTrends, 66(4), 578-589. https://doi.org/10.1007/s11528-022-00694-0

Noddings, N. (1987). An ethic of caring. In J. Devitis (Ed.), Women, culture, and morality (pp. 333-372). New York: Peter Lang.

Olesova, L., & Campbell, S. (2019). The impact of the cooperative mentorship model on faculty preparedness to develop online courses. Online Learning Journal, 23(4), 192-213. https://doi.org/10.24059/olj.v23i4.2089

Pantazidou, M., & Nair, I. (1999). Ethics of care: Guiding principles for engineering teaching & practice. Journal of Engineering Education, 88(2), 205–212.

Rao, K. (2021). Inclusive instructional design: Applying UDL to online learning. The Journal of Applied Instructional Design, 10(1). https://doi.org/10.59668/223.3753

Rasul, T., Nair, S., Kalendra, D., Balaji, M. S., de Oliveira Santini, F., Ladeira, W. J., ... & Hossain, M. U. (2024). Enhancing academic integrity among students in GenAI Era: A holistic framework. The International Journal of Management Education, 22(3), 101041. https://doi.org/10.1016/j.ijme.2024.101041

Ruiz-Rojas, L. I., Acosta-Vargas, P., De-Moreta-Llovet, J., & Gonzalez-Rodriguez, M. (2023). Empowering education with generative artificial intelligence tools: An approach with an instructional design matrix. Sustainability, 15(15), 11524. https://doi.org/10.3390/su151511524

Shailendra, S., Kadel, R., & Sharma, A. (2024). Framework for adoption of generative artificial intelligence (GenAI) in education. IEEE Transactions on Education. https://doi.org/10.36227/techrxiv.172175540.04742569/v1

Singh, R. G., & Ngai, C. S. B. (2024). Top-ranked US and UK’s universities’ first responses to GenAI: Key themes, emotions, and pedagogical implications for teaching and learning. Discover Education, 3(1), 115. https://doi.org/10.1007/s44217-024-00211-w

Solomon, R. C., & Flores, F. (2003). Building trust: In business, politics, relationships, and life. Oxford University Press.

Tronto, J.C. (1993). Moral boundaries: A political argument for an ethic of care. New York: Routledge.

Tronto, J. C. (1998). An ethic of care. Generations: Journal of the American Society on Aging, 22(3), 15-20.

Walter, A. (2024). Utilizing language-generating Artificial Intelligence in educational planning: A case study. Journal of Interdisciplinary Teacher Leadership, 8(1), 29-59. https://kenanfellows.org/journals/wp-content/uploads/sites/377/2024/10/Utilizing-LGAI-in-Educational-Planning.pdf

Weger Jr, H., Castle Bell, G., Minei, E. M., & Robinson, M. C. (2014). The relative effectiveness of active listening in initial interactions. International Journal of Listening, 28(1), 13-31. https://doi.org/10.1080/10904018.2013.813234

Yang, E., & Beil, C. (2024). Ensuring data privacy in AI/ML implementation. New Directions for Higher Education, 2024(207), 63-78. https://doi.org/10.1002/he.20509

Yusuf, A., Pervin, N., & Román-González, M. (2024). Generative AI and the future of higher education: a threat to academic integrity or reformation? Evidence from multicultural perspectives. International Journal of Educational Technology in Higher Education, 21(1), 21.

Zamora, M., & Bali, M. (2025). From socially just care to socially just distributed ecosystems of care. Journal of Learning Development in Higher Education, (35).

Zheng, Y. D., & Stewart, N. (2024). Improving EFL students’ cultural awareness: Reframing moral dilemmatic stories with ChatGPT. Computers and Education: Artificial Intelligence, 6, 100223.

Appendix A

Checklist for Evaluating the Ethical Implications of GenAI Tools in Instructional Design

Purpose: This checklist, guided by the principles of Tronto’s Ethics of Care, helps instructional designers and faculty systematically evaluate the ethical dimensions of integrating a specific GenAI tool into the learning environment.

Instructions: Consider the specific GenAI tool and its intended use within your course context. For each item, rate the level of concern or applicability (Low/Med/High) and note mitigation strategies. Each item is aligned with one or more of Tronto's phases of care ethics to help frame the evaluation:

  • Caring About (Attentiveness): Recognizing the diverse needs, potential vulnerabilities, and concerns related to using GenAI.

  • Taking Care Of (Responsibility): Assuming responsibility for addressing the identified needs and potential issues within the specific educational context and institutional policies.

  • Care Giving (Competence): Ensuring the actual work of implementing and using GenAI is done competently, effectively meeting pedagogical goals while managing risks.

  • Care Receiving (Responsiveness): Monitoring and responding to how students (the care receivers) interact with and are affected by the GenAI integration, including their feedback and outcomes.

Each item below lists the aligned Tronto phase(s), the checklist question, a line to record the level of concern (Low/Med/High), and example mitigation strategies.

Equity & Accessibility

  • Caring About: Does the tool require specific hardware, software, or internet connectivity that may not be available to all students?
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., provide alternatives, ensure access via campus labs, check for mobile compatibility.

  • Caring About: Is there a cost associated with the tool for students? If so, is it prohibitive? Are free alternatives available?
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., seek institutional licenses, use free tiers, and provide equitable alternatives.

  • Caring About: Does the tool's interface and output meet accessibility standards (e.g., WCAG) for students with disabilities?
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., test with screen readers, check color contrast, and provide text alternatives for generated images/videos.

Bias & Representation

  • Caring About, Care Giving: Is the GenAI tool known to perpetuate stereotypes (e.g., gender, racial, cultural)? How was it trained (consider WEIRD bias)?
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., critically review outputs, supplement with diverse materials, teach students to identify bias, and use prompts designed to reduce stereotypical output.

  • Care Giving: Does the tool allow for diverse perspectives, or does it tend to homogenize information or cultural representation?
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., use AI as a starting point, not the sole source; encourage critical comparison with other sources.

Data Privacy & Consent

  • Care Receiving: What student data does the tool collect (e.g., prompts, submissions, personal identifiers)?
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., review the tool's privacy policy, minimize data input, and use anonymization where possible.

  • Care Receiving: Where is the data stored? Who has access (institution, third-party vendor)? Is it secure?
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., check vendor agreements and institutional data security policies.

  • Care Receiving: Is informed consent required from students for their data to be used by this tool? (See Consent Template)
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., implement a clear consent process if personal or identifiable data is used.

Academic Integrity

  • Taking Care Of, Care Giving: Could the tool be easily misused for plagiarism or cheating (e.g., submitting generated text as original work)?
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., design assessments that require higher-order thinking beyond simple generation, set clear usage policies, and teach proper citation.

  • Care Giving: Does the use of the tool undermine the development of essential skills or learning objectives for the course?
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., focus AI use on specific parts of a task (brainstorming, revision), ensuring students still perform core critical thinking and skill application.

Transparency & Explainability

  • Care Giving, Care Receiving: Is it clear to students when they are interacting with or receiving content generated by AI?
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., label AI-generated content and explain the tool's role in activities.

  • Care Giving: Are the tool's limitations, potential inaccuracies (hallucinations), and capabilities understood and communicated?
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., educate users about limitations and encourage verification of AI-generated information.

Over-reliance & Pedagogy

  • Care Giving: Does the integration encourage passive learning or over-dependence on the tool?
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., structure activities to require critical evaluation of AI output, and ensure human oversight and feedback loops.

  • Taking Care Of, Care Giving: Does the tool genuinely enhance the learning experience and align with pedagogical goals?
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., pilot the tool, gather student feedback, and align use cases directly to learning objectives.

Institutional Responsibility

  • Taking Care Of: Does the use of this tool align with institutional policies on AI, data privacy, and academic integrity? Are specific guidelines needed?
    Level of concern (Low/Med/High):
    Mitigation strategies: e.g., consult institutional policies and advocate for clear guidelines if lacking.
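As a complement to the checklist, an instructional design team might record ratings and mitigation notes in structured form so evaluations can be revisited each term and compared across tools. The following is a minimal, hypothetical Python sketch, not an institutional standard: the class names, the abbreviated item texts, and the rule that flags unmitigated High-concern items are all our own assumptions.

```python
from dataclasses import dataclass, field


# Illustrative only: a lightweight record for one checklist item.
@dataclass
class ChecklistItem:
    dimension: str            # e.g., "Equity & Accessibility"
    phase: str                # Tronto phase alignment, e.g., "Caring About"
    question: str             # abbreviated checklist question
    concern: str = "Unrated"  # "Low" | "Med" | "High"
    mitigation: str = ""      # planned mitigation strategy


@dataclass
class ToolEvaluation:
    tool_name: str
    items: list[ChecklistItem] = field(default_factory=list)

    def unresolved_high_concerns(self) -> list[ChecklistItem]:
        # Our own rule of thumb, not a policy requirement: flag High-concern
        # items that have no mitigation noted before adopting the tool.
        return [i for i in self.items if i.concern == "High" and not i.mitigation.strip()]


# Example usage with two abbreviated items from the checklist above.
evaluation = ToolEvaluation(
    tool_name="HypotheticalGenAITool",
    items=[
        ChecklistItem("Equity & Accessibility", "Caring About",
                      "Does the tool require hardware, software, or connectivity "
                      "not available to all students?",
                      concern="High"),
        ChecklistItem("Data Privacy & Consent", "Care Receiving",
                      "What student data does the tool collect?",
                      concern="Med", mitigation="Minimize data input; review privacy policy."),
    ],
)

for item in evaluation.unresolved_high_concerns():
    print(f"[{item.dimension}] needs a mitigation plan: {item.question}")
```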