A Co-Creative Approach for AI-Enhanced Instructional Design

Combining Generative Artificial Intelligence and Learning Analytics

In this paper, we present an innovative approach to instructional design that combines generative artificial intelligence (AI) with learning analytics to facilitate the co-creative development of learning materials and assessments. The approach was evaluated in an exploratory study with 54 preservice teachers across three university courses. The results demonstrated the effectiveness of the approach: the students rated the AI-generated materials as good to very good and highlighted material clarity and engagement opportunities as benefits. The approach shows particular promise for teacher education, as it can enable future educators to develop both practical experience and a critical understanding of AI-enhanced instructional design.

Introduction

The potential for artificial intelligence (AI) to transform the traditional paradigm of teaching and learning is a topic of considerable interest. In recent years, developments in the field of AI have been significant and continuous, with an emphasis on generative AI (GenAI; Mishra et al., 2024; Pratschke, 2024; Yusuf et al., 2024). In general, the benefits of AI in education are manifold and include personalized learning, greater insight into student understanding, positive impacts on learning outcomes, and reduced planning and administrative time for educators (Bond et al., 2023; Chan & Hu, 2023; Kasneci et al., 2023; Mah & Groß, 2024). For example, learning analytics provides deeper insights into student understanding, thereby allowing educators to more effectively identify and address learning gaps and positively impact student retention (Arnold & Pistilli, 2012; Mah, 2016; Márquez et al., 2023; Tsai et al., 2020). Key challenges related to AI in education include ethical considerations (e.g., concerns about honesty and plagiarism; Cotton et al., 2023; Holmes et al., 2022; Susnjak, 2022), curriculum development (Bellas et al., 2023; Chiu, 2021; Southworth et al., 2023), infrastructure, and digital and AI literacy, including empowering learners and educators to use AI-based tools (Almatrafi et al., 2024; Delcker et al., 2024; Fraillon et al., 2020; Hall, 2025; Ng et al., 2021).

However, despite the growing body of research and literature on AI in education (Chiu et al., 2023; Crompton & Burke, 2023), exploration of the integration of GenAI with other AI-based educational technologies, such as learning analytics, has been limited (Khosravi et al., 2023). Furthermore, relatively little research has been conducted on combining AI-based tools for instructional design purposes (S. Choi et al., 2023; Luo et al., 2024; Xu et al., 2025). Hence, to address this research gap, we examined a combination of AI-based tools from the perspective of instructional design. While theoretical frameworks for AI integration in education exist, a gap remains in practical approaches for implementing these technologies in combination in educational settings. We therefore evaluated an instructional design approach that integrates GenAI (large language models, LLM) and learning analytics in a higher education setting. This paper details the co-creative design approach, which systematically combines GenAI and learning analytics, and its evaluation in an exploratory study with preservice teachers and students of educational science. It concludes with implications for educational practice and recommendations for implementing AI-based tools in instructional design processes.

Theoretical Background

Artificial Intelligence in Education

The field of AI has a long history, with research dating back to the 1960s. However, the exponential growth in data availability, computing power, and consequently, AI capabilities over the past few years has led to a rapid increase in research and public discourse on the use of AI in education (Crompton & Burke, 2024; Fütterer et al., 2023). In one of the first systematic reviews of AI in education, Zawacki-Richter et al. (2019) identified four principal categories in a typology of AI research and applications in the field: profiling and prediction, intelligent tutoring systems, assessment and evaluation, and adaptive systems and personalization. This typology served as the basis for a more recent meta-systematic review (Bond et al., 2023). Although AI is advancing rapidly, systematic research and evidence-based approaches on AI in education remain scarce (Bauer et al., 2025). Learning analytics and GenAI represent two particularly prominent technologies in contemporary discussions of AI in education. While learning analytics has been developed specifically within educational contexts to work directly with educational data, GenAI presents numerous potential educational applications that require systematic analysis and evaluation. To enhance understanding of the effects of AI on learning, Bauer et al. (2025) introduced the ISAR model, which distinguishes inversion, substitution, augmentation, and redefinition effects of AI enhancement in the context of cognitive learning processes and outcomes. For instance, augmentation refers to the capacity of artificial intelligence to augment instruction by providing supplementary cognitive learning opportunities.

Learning Analytics

Learning analytics is a technology centered on adaptive systems and personalization; it is used to collect and analyze data from learners and their environments in order to optimize learning (Ifenthaler, 2015; Long & Siemens, 2011). It has been researched and implemented in universities internationally for more than a decade (Arnold & Pistilli, 2012; Sclater et al., 2016), and various learning analytics frameworks now exist (Drachsler & Greller, 2016; Khalil et al., 2022; Márquez et al., 2023). In Europe, however, learning analytics remains relatively underexplored and underapplied in higher education (Tsai et al., 2020; Wollny et al., 2023). The potential of learning analytics lies primarily in evaluating learning materials, improving instructional design and facilitation, and enhancing personalization, thereby increasing academic success. Learning analytics is primarily used in online learning environments but can also be integrated into blended learning (Mah et al., 2023) and classroom settings (Shibani et al., 2020). The main challenges are ensuring data protection and privacy (Drachsler & Greller, 2016; Jones, 2019) and competent utilization by users (Ifenthaler, 2017a).

Learning analytics design involves the use of available information from various educational sources, including learners’ characteristics, behaviors, and performance, as well as information on the learning design, to support pedagogical interventions and the redesign of learning environments (Ifenthaler, 2017b). While the integration of learning design and learning analytics has been discussed for over a decade (Lockyer & Dawson, 2011), large-scale implementation studies remain limited (Frick et al., 2022; Ifenthaler et al., 2018). Research reviews have indicated that the field is still evolving toward maturity (Drugova et al., 2024; Mangaroska & Giannakos, 2019). Effective implementation requires established frameworks (Law & Liang, 2020) and well-defined indicators (Ahmad et al., 2022) to enable efficient data analysis for instructional designers. While instructional designers recognize the potential for optimizing learning experiences, adoption remains constrained by infrastructural challenges and professional development needs (Muljana & Luo, 2020).

GenAI

Recently, GenAI has been the focus of much attention in higher education worldwide, with discussions about its impact on learning and teaching, including assessments, integrity, and AI literacy (Almatrafi et al., 2024; Celik, 2023; Mah et al., 2025; Ng et al., 2021). Studies have shown that students and faculty in higher education are already utilizing GenAI, particularly LLM (Chan & Hu, 2023; Mah & Groß, 2024; von Garrel & Mayer, 2023). Consequently, educational institutions around the globe have been formulating guidelines and policies concerning the utilization of GenAI (Jin et al., 2025). However, research indicates that GenAI is often employed without consideration of the generated content, which is inherently biased based on the training data (e.g., discrimination, lack of diversity; Bianchi et al., 2023; Schlude et al., 2024). Moreover, the inaccurate results that LLM can produce, referred to as “hallucinations”—or, as Hicks et al. (2024) have called them, “bullshit”—are often overlooked. Hence, it is crucial for students and educators to always check the accuracy of the provided (learning) content (Tlili et al., 2023). GenAI in education has potential benefits, such as enhancing creativity (e.g., providing original and diverse ideas), saving time and increasing efficiency (e.g., handling administrative tasks so that educators can focus on more meaningful undertakings, such as personal interactions and instructional activities), and supporting teaching (e.g., assisting educators with lesson planning and educational materials; Bond et al., 2023; Bozkurt, 2024; Crompton & Burke, 2024; Giannakos et al., 2024; Tlili et al., 2023; Zawacki-Richter et al., 2019).

The integration of GenAI into instructional design activities presents a number of opportunities and challenges (Hodges & Kirschner, 2024) that call established pedagogical practices into question. For example, ChatGPT has been shown to be capable of completing graduate-level instructional design assignments, with minor challenges in terms of contextualization (Parsons & Curry, 2024). Current applications include the generation of lesson plans (Moundridou et al., 2024) and teaching plans (Hu et al., 2024), high-quality questions that promote learning (Kim et al., 2025), and instructional materials (Mittal et al., 2024). G. Choi et al. (2024) conducted an analysis to identify the strengths, weaknesses, opportunities, and threats of using GenAI for instructional design by using the LLM ChatGPT to create a course map for an online course. They identified several strengths (e.g., generating suitable learning objectives, aligning activities with learning objectives, providing a variety of instructional activities and assignments), weaknesses (e.g., activities not feasible due to limited contextual understanding, unreliable output, vagueness of instructions), opportunities (e.g., reducing instructors’ workloads through the rapid prototyping of course maps, promoting student-centered learning), and threats (e.g., quality control). Given the disruptive potential of GenAI in education, instructional designers should adopt a conscientious, cautious, and ethically sensitive approach toward GenAI integration and support this with training and collaborative guidance (Kumar et al., 2024).

Instructional Design Approach for Combining GenAI and Learning Analytics

In light of the aforementioned developments, research has increasingly focused on leveraging learning analytics and GenAI to enhance instructional design processes, and researchers have identified specific integration points within the multitude of established instructional design models (Bond & Dirkin, 2020; Dousay & Stefaniak, 2024). The widely adopted analysis, design, development, implementation, and evaluation (ADDIE) framework (Branch, 2010) can serve as a reference, as it provides structured phases through which AI technologies can be systematically integrated. For example, learning analytics can enhance the analysis and evaluation phases through data-driven insights, while GenAI can support the design and development phases by facilitating content creation and adaptation. In summary, researchers have highlighted three critical aspects in the educational AI landscape. First, while learning analytics and GenAI each offer distinctive benefits for instructional design, their potential synergies remain largely unexplored. Second, despite growing technological capabilities, practical, implementable frameworks are needed, particularly as instructional designers face adoption challenges. Third, the effective integration of AI technologies requires systematic approaches that ensure quality control while leveraging the capabilities of these technologies.

Building on these insights, we present a design approach that systematically combines the strengths of both technologies (Mah, in press): GenAI’s capability for rapid content creation and the potential of learning analytics for data-driven optimization (Figure 1). The approach specifically addresses current challenges by:

  • providing a structured approach for implementing AI technologies in instructional design

  • incorporating quality control mechanisms through iterative feedback

  • supporting the development of AI literacy through practical application

  • enabling evidence-based refinement of AI-generated materials through the utilization of learning analytics

The approach enables the co-creative development of meaningful learning materials and assessments through the collaborative interaction of educators and learners with and through AI technologies, in which continuous feedback and data-driven insights guide the iterative improvement of AI-generated content. Accordingly, the approach can serve as a conceptual guide for effectively structuring and implementing teacher–AI collaboration (Kim, 2024).

Figure 1

LA–GenAI Design Approach

Note. Seven-step instructional design cycle using AI.

The approach was implemented in a blended learning scenario (Hrastinski, 2019) which combined online learning activities with face-to-face classroom sessions. It encompasses the following steps in the process of instructional design:

  1. The educator develops new learning materials and assessments with the help of GenAI (e.g., self-tests, quizzes, instructions, case scenarios). This requires critical reflection and the adaptation of the AI output to ensure alignment with learning outcomes and accuracy.

  2. The educator provides the adapted learning materials and assessments to their students on the learning management system (LMS)/digital learning platform used for the course.

  3. The students engage with the learning materials (e.g., by answering quizzes). Automated feedback is provided for exercises such as quizzes.

  4. Learning analytics is used to analyze how the students interact with the AI-generated materials on the LMS/digital learning platform. The type of learning analytics depends on several factors: the LMS used at the institution, the system-enabled analytics, the students’ consent, and the educator’s skill level.

  5. The educator receives data-based evaluations of and feedback on the provided materials via a learning analytics dashboard, which they can utilize to draw conclusions regarding aspects such as unclear instructions and overly complex tasks.

  6. During the classroom sessions, the educator and their students engage in discussions about the learning analytics feedback (e.g., the quality of the learning material). The process includes a critical analysis of the AI-generated materials and an exploration of their optimization potential, such as refining prompts. At this stage, AI is also integrated as a learning topic.

  7. Based on the feedback (learning analytics and discussion), the educator can refine the learning materials and assessments in a continuous, iterative process. Additionally, the learning materials can be made available on open educational resource platforms for community use and development. The refined learning materials and assessments could then be deployed and tested again with the assistance of learning analytics to identify areas requiring further improvement, such as the instruction component.

Through this systematic approach, the development of learning materials becomes a collaborative effort between educators, students, and the educational community, supported by AI technologies and learning analytics.
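To make the cycle concrete, the following minimal sketch expresses the seven steps as a simple iterative loop. It is a schematic illustration only: every function is a hypothetical stub standing in for the GenAI tool, the LMS, the learning analytics dashboard, and the classroom discussion, not an implementation used in the study.

```python
# Schematic sketch of the LA-GenAI design cycle (Figure 1). All helper
# functions are hypothetical stubs, not components used in the study.

def generate_with_genai(outcomes):              # Step 1: draft materials
    return f"draft materials aligned with {outcomes}"

def educator_review(materials):                 # Step 1: critical adaptation
    return materials + " (reviewed by educator)"

def publish_to_lms(materials):                  # Step 2: provide via LMS
    print("Published:", materials)

def collect_learning_analytics():               # Steps 3-4: engagement data
    return {"error_rate": 0.42, "avg_duration_min": 11}

def discuss_in_class(analytics):                # Steps 5-6: dashboard and discussion
    return ["clarify instructions"] if analytics["error_rate"] > 0.3 else []

def refine(materials, notes):                   # Step 7: iterative refinement
    return materials + f" (revised: {notes})" if notes else materials

def design_cycle(outcomes, iterations=2):
    materials = educator_review(generate_with_genai(outcomes))
    for _ in range(iterations):
        publish_to_lms(materials)
        notes = discuss_in_class(collect_learning_analytics())
        materials = refine(materials, notes)
    return materials  # could subsequently be shared as an open educational resource
```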

Research Questions

Against this background, we addressed the following two research questions (RQ) for the application and evaluation of the proposed design approach:

RQ1: How do students evaluate the AI-generated learning materials and assessments?

RQ2: How do students perceive the co-creative design approach combining generative AI and learning analytics?

Method

Application and Validation of the Design Approach

We validated the conceptual approach (Figure 1) through its implementation in three undergraduate courses, comprising one lecture and two seminars on education, digital teaching and learning, and AI in education, respectively. We followed a mixed methods approach in which we combined quantitative ratings (descriptive statistics) with qualitative feedback (Mayring, 2021) and classroom discussions. The approach was applied in three sequential phases. First, learning materials were created using ChatGPT-4 (OpenAI) by following the principled instructions for prompting that were indicated to be effective at the time (Bsharat et al., 2023). This included the development of multiple-choice questions and case scenarios focused on the topic of AI in education, which was part of the course content. To create the assessments, we provided the LLM with a paper on the topic of AI as a foundation. We then employed various prompts (Figures 2 and 3).

Figure 2

Prompt 1 for Assessment Creation


Figure 3

Prompt 2 for Assessment Creation


As prompted, the LLM generated (1) multiple-choice questions and (2) descriptions and instructions for a case study, both on the topic of AI in schools. The initial output was then refined through the use of prompts, such as “Make the answers more realistic and challenging” and “Adjust the answers according to Anderson and Krathwohl’s taxonomy. Also, focus on questions and answers that promote practical competence in AI literacy in various situations and critical thinking.” Figure 4 shows some examples of the LLM-generated and manually revised quiz items.

Figure 4

Examples of Generated Quiz Items

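For readers who want to reproduce this kind of multi-turn generation programmatically rather than in the chat interface, the following minimal sketch uses the OpenAI Python client. The model name, system role, and prompt wording are illustrative assumptions; the exact prompts used in the study are those shown in Figures 2 and 3.

```python
# Minimal sketch of multi-turn assessment generation with the OpenAI
# Python client. Model name and prompt texts are illustrative
# assumptions, not the study's exact prompts (see Figures 2 and 3).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def generate_assessment(source_text: str) -> str:
    """Generate and then refine multiple-choice questions grounded in a
    source paper on AI in education."""
    messages = [
        {"role": "system",
         "content": "You are an instructional designer creating "
                    "assessments for undergraduate teacher education."},
        {"role": "user",
         "content": "Based on the following paper, create six multiple-"
                    "choice questions on AI in schools, each with four "
                    "answer options and one correct answer:\n\n" + source_text},
    ]
    first = client.chat.completions.create(model="gpt-4", messages=messages)
    messages.append({"role": "assistant",
                     "content": first.choices[0].message.content})

    # Refinement turn, mirroring the follow-up prompts described above.
    messages.append({"role": "user",
                     "content": "Make the answers more realistic and "
                                "challenging. Adjust them according to "
                                "Anderson and Krathwohl's taxonomy, and "
                                "focus on practical AI literacy and "
                                "critical thinking."})
    refined = client.chat.completions.create(model="gpt-4", messages=messages)
    return refined.choices[0].message.content
```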

We reviewed and adapted the generated content to ensure alignment with the learning objectives and accuracy. In the second phase, the materials were integrated into the Moodle LMS, where the students engaged with them for a period of one week (as preparation for the upcoming seminar session). During this period, the learning analytics data were collected. These data included the date and duration of engagement with the assessments, the identification of correct and incorrect responses to the multiple-choice questions, and the responses to the open-ended case scenario question. The third phase focused on evaluation and refinement. The students rated the materials and provided written feedback. Furthermore, the approach and materials were discussed in classroom sessions, which allowed for a detailed exploration of possible improvements. The feedback and learning analytics data thus provided a foundation for the iterative refinement of the materials.

Sample, Data Collection, and Data Analysis

The study involved 54 bachelor’s students (first and second semester), primarily from educational science programs (e.g., preservice teachers), across three German university courses (Course 1: n = 12; Course 2: n = 34; Course 3: n = 8) from January to December 2024. The students engaged with the learning materials through the Moodle LMS. The educational materials on the subject of “AI in education” were presented as a self-test and an instructional design experiment. Participation was voluntary. The students were instructed to complete the self-test (i.e., the multiple-choice questions and the open-ended case scenario question). For the data analyses, the reports provided by Moodle (i.e., the quiz report, responses report, and statistics report) were utilized, as they contain the student data (e.g., the responses given by the students). The data analyses encompassed the enrolled users who had attempted the self-test, the distribution of scores, and the students’ responses (e.g., correct/incorrect responses to a question). In addition, the students were asked to provide feedback on the assessments: they rated the quality of the assignment questions and the overall quality of the case example on a 5-point Likert scale ranging from 1 (very good) to 5 (insufficient). The students also provided written feedback on the material quality and participated in structured classroom discussions about the approach and materials. The quantitative data were analyzed using descriptive statistics, while the qualitative feedback underwent qualitative content analysis (Mayring, 2021) to identify key themes and suggestions for improvement.
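As an illustration of the descriptive analysis, the following minimal sketch computes means and standard deviations per course from an exported ratings file. The file name and column labels are hypothetical placeholders; actual Moodle report exports are structured differently.

```python
# Minimal sketch of the descriptive analysis of the quality ratings,
# assuming the ratings were exported to CSV. The file name and the
# column labels ("course", "AQ1"-"AQ6") are hypothetical placeholders.
import pandas as pd

ratings = pd.read_csv("quality_ratings.csv")  # hypothetical export
items = [f"AQ{i}" for i in range(1, 7)]       # 1 = very good ... 5 = insufficient

# Mean and standard deviation per assessment question and course (cf. Table 1).
summary = ratings.groupby("course")[items].agg(["mean", "std"]).round(2)
print(summary)
```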

Results

RQ1: Student Evaluation of AI-Generated Learning Materials and Assessments

Across all three courses, the participants rated the quality of the developed assessments (across the complete set of questions) as good (M = 1.87, SD = 0.69). Likewise, the overall quality of the developed case scenario was rated as good (M = 2.25, SD = 1.03). Regarding the six generated assessment questions, the results were consistent across all three courses. The mean quality ratings ranged between good and very good, with no outliers or significant differences between the courses (Table 1).

Table 1

Quality Ratings for the AI-Generated Assessment Questions


AQ      Course 1 (n = 12)    Course 2 (n = 34)    Course 3 (n = 8)
        M       SD           M       SD           M       SD
AQ1     2.10    0.88         2.29    1.24         1.57    0.53
AQ2     2.00    0.85         1.77    1.10         1.43    0.53
AQ3     1.92    0.67         1.90    1.06         2.00    1.26
AQ4     1.82    1.25         1.79    1.10         1.17    0.41
AQ5     1.45    0.52         1.86    1.16         1.50    0.84
AQ6     1.55    0.93         1.97    1.27         1.33    0.52

Note. The 5-point Likert scale ranged from 1 (very good) to 5 (insufficient). AQ = assessment question.

RQ2: Student Perceptions of the Co-Creative Design Approach

The qualitative student feedback revealed generally positive tendencies. The students reported that the assessment items were easy to understand and could be solved using the knowledge acquired during the course. Regarding the case scenario, the students generally described the approach as interesting and comprehensible.

A content analysis of the students’ feedback provided deeper insights into the feasibility of the approach. For the AI-generated multiple-choice assessment questions, 31 valid responses were analyzed. Four overarching themes emerged, which highlighted both the strengths of the approach and areas that required further attention (Table 2).

Table 2

Feedback on AI-Generated Assessment Questions

Clarity and Comprehensibility, n = 15 (48.4%)
  “I had to read some questions multiple times to understand them.”
  “The questions were very clear and well formulated. It was fun to answer them.”

Difficulty and Complexity, n = 9 (29.0%)
  “I think the correct answer was very easy to guess from among the answer options.”
  “If you paid attention during the lecture, you could answer the questions well.”

Question and Answer Design, n = 5 (16.1%)
  “It was quite good, but I think other answer options or multiple correct answers could have fitted the questions more often.”
  “The questions were very concise, which I appreciated. They were also not ‘trick questions’ that could confuse learners. However, I found the answer options in question 6 not entirely well designed.”

Personal Reaction, n = 2 (6.5%)
  “Pretty good.”
  “Awesome.”

The qualitative feedback on the AI-generated assessment questions highlighted the students’ generally positive perceptions. Many students found the questions clear, engaging, and manageable when course knowledge was applied. The theme Clarity and Comprehensibility was the most frequently mentioned, with most students praising the comprehensibility of the questions, although some noted that certain items required multiple readings. In terms of Difficulty and Complexity, the students largely perceived the difficulty as appropriate; however, some felt that the correct answers were too obvious, which made the questions less challenging. Others noted that a few questions were lengthy or complex, suggesting the need for more concise phrasing. Feedback on the Question and Answer Design theme indicated some confusion about why only one answer was correct in cases where multiple options seemed plausible. The students also suggested more varied answer choices and highlighted the importance of ensuring answer formats align with expectations. Finally, the Personal Reaction category consisted of short, positive remarks which indicated the students’ overall satisfaction. In summary, while the assessment was well received for its clarity and relevance, the students’ feedback suggested the need for improvements in question phrasing, complexity, and answer design to enhance future assessments.

The qualitative analysis of the open feedback on the case scenario revealed significant insights into the students’ perceptions of the instructional design. Out of a total of 35 responses, 31 valid statements were analyzed and categorized into five thematic areas (Table 3).

Table 3

Students’ Feedback on the Case Scenario

Clarity and Comprehensibility, n = 11 (35.5%)
  “The case scenario was clear and shows exactly what needs to be done.”
  “The task was comprehensible but more time-consuming than the previous tasks.”

Difficulty and Complexity, n = 7 (22.6%)
  “Complex, somewhat lengthy, but very exciting.”
  “There was a lot to pay attention to.”

Case Scenario Design, n = 5 (16.1%)
  “The case scenario was fine. You just needed to review the content again to be able to answer the question precisely.”
  “The task was very well structured. It appeared clear and provided many different opportunities to be creative and bring in new ideas.”

Personal Reaction and Evaluation, n = 7 (22.6%)
  “Exciting idea, and it was fun to engage with it.”
  “The case scenario raised questions for me.”

Miscellaneous, n = 1 (3.2%)
  “I found it difficult to say how teachers could be involved in the project.”

The qualitative feedback on the case scenario revealed varied but generally positive perceptions. The most frequently mentioned theme was Clarity and Comprehensibility; the students highlighted that the case scenario was clear, practical, and easy to follow. However, some noted that certain parts required careful interpretation or were time-intensive. In the theme Difficulty and Complexity, some students felt that the task was manageable but required extensive attention and familiarity with certain concepts, which made it more demanding. A few students also expressed uncertainty about the expected scope of the task. Feedback regarding the Case Scenario Design theme emphasized the well-structured nature of the scenario, which allowed for creativity and the integration of personal ideas. Nevertheless, some students suggested that clearer instructions or more concrete examples could have made the task more accessible. The theme Personal Reaction and Evaluation captured the students’ subjective impressions, with many describing the task as interesting, engaging, and thought-provoking. Several comments reflected the students’ curiosity about how the case scenario could be applied in real-world educational contexts. Finally, the Miscellaneous category comprised a single reflection on teachers’ involvement in the project. In summary, the feedback demonstrated general approval of the case scenario while simultaneously highlighting opportunities to refine the task instructions and address complexity to support students more effectively.

In subsequent classroom discussions about the approach, the students identified several opportunities and challenges. They particularly valued the combination of data-driven and face-to-face feedback for improving the learning materials and noted efficiency gains once effective prompts had been established. However, they emphasized that AI literacy was crucial for utilizing the approach effectively. The students raised concerns that educators who lacked AI literacy may struggle to develop and evaluate AI-generated content, which could lead to implementation challenges or poor-quality materials. They also highlighted data privacy considerations regarding learning analytics. The students suggested implementing peer review processes for newly developed materials and proposed enhanced learning analytics dashboards to better understand their learning processes.

Discussion

The findings of this study demonstrate both the feasibility and potential impact of integrating GenAI with learning analytics in instructional design. The evaluation of our approach revealed several key insights about the co-creative development of learning materials in educational settings.

Effectiveness of the Approach

Our results indicated that the systematic combination of GenAI and learning analytics creates an effective environment for developing and refining educational content. The students’ positive evaluations of the AI-generated materials, with mean ratings ranging between good and very good, suggest the approach supports the creation of high-quality learning materials. The learning analytics data provided concrete insights into the effectiveness of the materials, while the classroom discussions enabled a deeper understanding of the students’ engagement with the content. For example, the high error rate in AI-generated exercises helped identify unclear instructions (Ifenthaler & Schumacher, 2016), which allowed for targeted improvements. The iterative nature of the framework, which combined online interactions with face-to-face discussions, proved particularly valuable for the continuous refinement of the learning materials.

The students’ feedback highlighted the specific strengths of the approach, particularly regarding the clarity of the material and engagement opportunities. The combination of data-driven insights and collaborative discussion sessions created multiple feedback loops that supported quality improvement. However, the students also identified important prerequisites for successful implementation, notably the need for AI literacy among educators and consideration of data privacy aspects in learning analytics.

Theoretical and Practical Implications

The approach developed in this study extends current approaches to AI integration in education by demonstrating how different AI technologies can be systematically combined to enhance instructional design processes. While previous research has explored either GenAI (G. Choi et al., 2024) or learning analytics (Mangaroska & Giannakos, 2019) in isolation, our approach shows how their combination can create synergistic benefits. The co-creative approach aligns with recent calls for the more collaborative and evidence-based integration of AI in education (Bozkurt, 2024) while addressing practical implementation challenges. Furthermore, our approach can be associated with the augmentation effect of AI on learning in the ISAR model (Bauer et al., 2025): the proposed instructional design approach has the potential to enhance cognitive support by providing new learning material (GenAI), additional feedback (learning analytics), and scaffolding that optimizes the cognitive processing of various learning activities in digital learning environments.

From a practical perspective, our approach offers several advantages. First, it enables the efficient creation of differentiated learning materials, thereby potentially contributing to more inclusive learning opportunities aligned with universal design principles (Yang et al., 2024). Second, the structured approach helps ensure quality control through multiple feedback mechanisms, which addresses common concerns about the reliability of AI-generated content. Third, the approach’s integration of learning analytics provides evidence-based insights for continuous improvement and thus moves beyond subjective assessments of material quality.

Teacher Education Context

Our findings suggest that the approach is particularly valuable in teacher education settings, as it offers multiple benefits for both instructors and preservice teachers. For instructors, it provides an efficient approach to creating and refining learning materials while modeling evidence-based technology integration. The approach enables the systematic analysis of material effectiveness through learning analytics and consequently supports data-driven instructional decisions.

For preservice teachers, the co-creative approach offers unique opportunities for engagement with AI technologies as both learners and future implementers. This dual role is crucial, as research indicates that exposure to tools like ChatGPT alone does not ensure future classroom adoption, with preservice teachers often expressing uncertainty about AI integration (Bae et al., 2024). The approach addresses these uncertainties by combining AI literacy development with authentic instructional design tasks while fostering the data literacy needed for effective teacher–AI collaboration (Kim, 2024). Additionally, research indicates that some faculty members employ GenAI to a greater extent for the conception and preparation of teaching than for exams and evaluations (Mah & Groß, 2024). This evidence-based approach may therefore also motivate instructors to utilize GenAI for creating learning materials and assessments.

Perhaps most significantly, preservice teachers trained through this approach can act as innovation multipliers in the educational system. Having experienced evidence-based AI integration firsthand, they are better positioned to implement similar approaches in their future teaching practice. This multiplication effect is particularly important given the rapid evolution of AI technologies in education and the need for informed, practical approaches to their implementation in K-12 settings.

Limitations

Several limitations should be considered when interpreting our results. The sample size was relatively limited, which reflects the exploratory nature of the study. Additionally, the students’ educational science backgrounds may have influenced their interest in and engagement with the AI-based instructional design tools. The approach integrated separate AI-based tools (LLM and learning analytics), so future integrated learning environments may require adaptations to the model.

Future Research

Further research should address several key areas. More detailed specifications are needed regarding learning outcomes (Anderson & Krathwohl, 2001) and constructive alignment (Biggs et al., 2022). The implementation of learning analytics dashboards for both educators and students requires further investigation (Khosravi et al., 2021), as does the potential benefit of using templates (Rüdian & Pinkwart, 2023). Beyond technical requirements, research should explore educators’ perceptions of AI in instructional design, particularly regarding teacher–AI collaboration (Alfredo et al., 2024). Furthermore, future research on assessment design should address the quality of distractors in AI-generated multiple-choice questions (Hwang et al., 2024), as poor distractors could undermine assessment rigor. During the exploratory phase, the approach may therefore be particularly suitable for self-assessments, which can yield profound insights into assessment design. In addition, our proposed approach may support a shift toward more formative, competency-based assessment approaches in the era of GenAI (Hodges & Kirschner, 2024; Mao et al., 2024).

References

Ahmat, N. H. C., Ridzuan, A. H. A., & Yunos, M. S. A. (2022). Perceptions and readiness of educators toward micro-credential certification programme. International Journal of Education and Pedagogy, 4(1), 38–50. https://myjms.mohe.gov.my/index.php/ijeap/article/view/17571/9290

Alfredo, R., Echeverria, V., Jin, Y., Yan, L., Swiecki, Z., Gašević, D., & Martinez-Maldonado, R. (2024). Human-centred learning analytics and AI in education: A systematic literature review. Computers and Education: Artificial Intelligence, 6. https://doi.org/10.1016/j.caeai.2024.100215

Almatrafi, O., Johri, A., & Lee, H. (2024). A systematic review of AI literacy conceptualization, constructs, and implementation and assessment efforts (2019–2023). Computers and Education Open, 6. https://doi.org/10.1016/J.CAEO.2024.100173

Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Longman.

Arnold, K. E., & Pistilli, M. D. (2012). Course signals at Purdue: using learning analytics to increase student success. In S. Dawson, C. Haythornthwaite, S. Buckingham Shum, D. Gasevic, & R. Ferguson (Eds.), Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK ’12) (pp. 267–270). Association for Computing Machinery. https://doi.org/10.1145/2330601.2330666

Bae, H., Hur, J., Park, J., Choi, G. W., & Moon, J. (2024). Pre-service teachers’ dual perspectives on generative AI: Benefits, challenges, and integrating into teaching and learning. Online Learning Journal, 28(3), 131–156. https://doi.org/10.24059/olj.v28i3.4543

Bauer, E., Greiff, S., Graesser, A. C., Scheiter, K., & Sailer, M. (2025). Looking beyond the hype: Understanding the effects of AI on learning. Educational Psychology Review, 37(2), 1–27. https://doi.org/10.1007/S10648-025-10020-8

Bellas, F., Guerreiro-Santalla, S., Naya, M., & Duro, R. J. (2023). AI Curriculum for European High Schools: An Embedded Intelligence Approach. International Journal of Artificial Intelligence in Education, 33(2), 399–426. https://doi.org/10.1007/s40593-022-00315-0

Bianchi, F., Kalluri, P., Durmus, E., Ladhak, F., Cheng, M., Nozza, D., Hashimoto, T., Jurafsky, D., Zou, J., & Caliskan, A. (2023). Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale. FAccT ’23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1493–1504. https://doi.org/10.1145/3593013.3594095

Biggs, J., Tang, C., & Kennedy, G. (2022). Teaching for quality learning at university (5th ed.). Open University Press.

Bond, J., & Dirkin, K. (2020). What models are instructional designers using today? Journal of Applied Instructional Design, 9(2), 125–139. https://doi.org/10.51869/92jbkd

Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., Pham, P., Chong, S. W., & Siemens, G. (2023). A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education, 21(1), 4. https://doi.org/10.1186/s41239-023-00436-z

Bozkurt, A. (2024). Why generative AI literacy, why now, and why it matters in the educational landscape? Kings, queens and genAI dragons. Open Praxis, 16(3), 283–290. https://doi.org/10.55982/OPENPRAXIS.16.3.739

Branch, R. M. (2010). Instructional design: The ADDIE approach. Springer. https://doi.org/10.1007/978-0-387-09506-6

Bsharat, S. M., Myrzakhan, A., & Shen, Z. (2023). Principled instructions are all you need for questioning LLaMA-1/2, GPT-3.5/4. ArXiv Preprint, 2312.16171. https://doi.org/10.48550/arXiv.2312.16171

Celik, I. (2023). Towards Intelligent-TPACK: An empirical study on teachers’ professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Computers in Human Behavior, 138. https://doi.org/10.1016/j.chb.2022.107468

Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1). https://doi.org/10.1186/s41239-023-00411-8

Chiu, T. K. F. (2021). A holistic approach to the design of artificial intelligence (AI) education for K-12 schools. TechTrends, 65(5), 796–807. https://doi.org/10.1007/s11528-021-00637-1

Chiu, T. K. F., Xia, Q., Zhou, X., Chai, C. S., & Cheng, M. (2023). Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education. Computers and Education: Artificial Intelligence, 4. https://doi.org/10.1016/j.caeai.2022.100118

Choi, G. W., Kim, S. H., Lee, D., & Moon, J. (2024). Utilizing generative AI for instructional design: Exploring strengths, weaknesses, opportunities, and threats. TechTrends, 68(4), 832–844. https://doi.org/10.1007/S11528-024-00967-W

Choi, S., Jang, Y., & Kim, H. (2023). Influence of Pedagogical Beliefs and Perceived Trust on Teachers’ Acceptance of Educational Artificial Intelligence Tools. International Journal of Human–Computer Interaction, 39(4), 910–922. https://doi.org/10.1080/10447318.2022.2049145

Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61, 1–12. https://doi.org/10.1080/14703297.2023.2190148

Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20, 22. https://doi.org/10.1186/s41239-023-00392-8

Crompton, H., & Burke, D. (2024). The educational affordances and challenges of ChatGPT: State of the field. TechTrends, 68(2), 380–392. https://doi.org/10.1007/s11528-024-00939-0

Delcker, J., Heil, J., & Ifenthaler, D. (2024). Evidence-based development of an instrument for the assessment of teachers’ self-perceptions of their artificial intelligence competence. Educational Technology Research and Development. https://doi.org/10.1007/S11423-024-10418-1

Dousay, T. A., & Stefaniak, J. E. (2024). Instructional Design Models. In Foundations of Learning and Instructional Design Technology: Historical Roots & Current Trends. EdTech Books. https://edtechbooks.org/foundations_of_learn/id_models

Drachsler, H., & Greller, W. (2016). Privacy and analytics: It’s a DELICATE issue. A checklist for trusted learning analytics. In D. Gasevic, G. Lynch, S. Dawson, H. Drachsler, & C. Penstein Rosé (Eds.), LAK ’16: Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (pp. 89–98). Association for Computing Machinery. https://doi.org/10.1145/2883851.2883893

Drugova, E., Zhuravleva, I., Zakharova, U., & Latipov, A. (2024). Learning analytics driven improvements in learning design in higher education: A systematic literature review. Journal of Computer Assisted Learning, 40(2), 510–524. https://doi.org/10.1111/jcal.12894

Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Duckworth, D. (2020). Preparing for Life in a Digital World. Springer International Publishing. https://doi.org/10.1007/978-3-030-38781-5

Frick, T. W., Myers, R. D., & Dagli, C. (2022). Analysis of patterns in time for evaluating effectiveness of first principles of instruction. Educational Technology Research and Development, 70(1), 1–29. https://doi.org/10.1007/s11423-021-10077-6

Fütterer, T., Fischer, C., Alekseeva, A., Chen, X., Tate, T., Warschauer, M., & Gerjets, P. (2023). ChatGPT in education: global reactions to AI innovations. Scientific Reports, 13(1), 15310. https://doi.org/10.1038/s41598-023-42227-6

Giannakos, M., Azevedo, R., Brusilovsky, P., Cukurova, M., Dimitriadis, Y., Hernandez-Leo, D., Järvelä, S., Mavrikis, M., & Rienties, B. (2024). The promise and challenges of generative AI in education. Behaviour & Information Technology. https://doi.org/10.1080/0144929X.2024.2394886

Hall, R. (2025). Empowering Educators: Mastering AI Prompt Writing. The Journal of Applied Instructional Design, 14(2). https://doi.org/10.59668/2222.21450

Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26. https://doi.org/10.1007/S10676-024-09775-5

Hodges, C. B., & Kirschner, P. A. (2024). Innovation of instructional design and assessment in the age of generative artificial intelligence. TechTrends, 68, 195–199. https://doi.org/10.1007/s11528-023-00926-x

Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(3), 504–526. https://doi.org/10.1007/S40593-021-00239-1

Hrastinski, S. (2019). What do we mean by blended learning? TechTrends, 63(5), 564–569. https://doi.org/10.1007/s11528-019-00375-5

Hu, B., Zheng, L., Zhu, J., Ding, L., Wang, Y., & Gu, X. (2024). Teaching plan generation and evaluation with GPT-4: Unleashing the potential of LLM in instructional design. IEEE Transactions on Learning Technologies, 17, 1445–1459. https://doi.org/10.1109/TLT.2024.3384765

Hwang, K., Wang, K., Alomair, M., Choa, F.-S., & Chen, L. K. (2024). Towards Automated Multiple Choice Question Generation and Evaluation: Aligning with Bloom’s Taxonomy (pp. 389–396). https://doi.org/10.1007/978-3-031-64299-9_35

Ifenthaler, D. (2015). Learning Analytics. In The SAGE encyclopedia of educational technology (pp. 448–451). SAGE Publications Inc. https://doi.org/10.4135/9781483346397.n187

Ifenthaler, D. (2017a). Are higher education institutions prepared for learning analytics? TechTrends, 61(4), 366–371. https://doi.org/10.1007/s11528-016-0154-0

Ifenthaler, D. (2017b). Learning analytics design. In L. Lin & Spector J. M. (Eds.), The sciences of learning and instructional design (pp. 202–211). Routledge. https://doi.org/10.4324/9781315684444-13

Ifenthaler, D., Gibson, D., & Dobozy, E. (2018). Informing learning design through analytics: Applying network graph analysis. Australasian Journal of Educational Technology, 34(2), 117–132. https://doi.org/10.14742/ajet.3767

Ifenthaler, D., & Schumacher, C. (2016). Learning Analytics im Hochschulkontext. Wirtschaftswissenschaftliches Studium : WiSt, 45(4), 176–181. https://doi.org/10.15358/0340-1650-2016-4-176

Jin, Y., Yan, L., Echeverria, V., Gašević, D., & Martinez-Maldonado, R. (2025). Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. Computers and Education: Artificial Intelligence, 8, 100348. https://doi.org/10.1016/J.CAEAI.2024.100348

Jones, K. M. L. (2019). Learning analytics and higher education: A proposed model for establishing informed consent mechanisms to promote student privacy and autonomy. International Journal of Educational Technology in Higher Education, 16(Art. 24), 1–22. https://doi.org/10.1186/s41239-019-0155-0

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 1–13. https://doi.org/10.1016/j.lindif.2023.102274

Khalil, M., Prinsloo, P., & Slade, S. (2022). A Comparison of Learning Analytics Frameworks: a Systematic Review. In A. F. Wise, R. Martinez-Maldonado, & I. Hilliger (Eds.), LAK22: 12th International Learning Analytics and Knowledge Conference (pp. 152–163). ACM. https://doi.org/10.1145/3506860.3506878

Khosravi, H., Shabaninejad, S., Bakharia, A., Sadiq, S., Indulska, M., & Gašević, D. (2021). Intelligent learning analytics dashboards: Automated drill-down recommendations to support teacher data exploration. Journal of Learning Analytics, 8(3), 133–154. https://doi.org/10.18608/jla.2021.7279

Khosravi, H., Viberg, O., Kovanovic, V., & Ferguson, R. (2023). Generative AI and learning analytics. Journal of Learning Analytics, 10(3), 1–6. https://doi.org/10.18608/jla.2023.8333

Kim, J. (2024). Leading teachers’ perspective on teacher-AI collaboration in education. Education and Information Technologies, 29, 8693–8724. https://doi.org/10.1007/S10639-023-12109-5

Kim, P., Wang, W., & Bonk, C. J. (2025). Generative AI as a coach to help students enhance proficiency in question formulation. Journal of Educational Computing Research. https://doi.org/10.1177/07356331251314222

Kumar, S., Gunn, A., Rose, R., Pollard, R., Johnson, M., & Ritzhaupt, A. D. (2024). The role of instructional designers in the integration of generative artificial intelligence in online and blended learning in higher education. Online Learning, 28(3), 207–231. https://doi.org/10.24059/OLJ.V28I3.4501

Law, N., & Liang, L. (2020). A multilevel framework and method for learning analytics integrated learning design. Journal of Learning Analytics, 7(3), 98–117. https://doi.org/10.18608/jla.2020.73.8

Lockyer, L., & Dawson, S. (2011). Learning designs and learning analytics. In G. Conole & D. Gasevic (Eds.), Proceedings of the 1st International Conference on Learning Analytics and Knowledge (pp. 153–156). ACM. https://doi.org/10.1145/2090116.2090140

Long, P., & Siemens, G. (2011). Penetrating the fog: Analytics in learning and education. EDUCAUSE Review, 46(5). https://er.educause.edu/articles/2011/9/penetrating-the-fog-analytics-in-learning-and-education

Luo, T., Muljana, P. S., Ren, X., & Young, D. (2024). Exploring instructional designers’ utilization and perspectives on generative AI tools: A mixed methods study. Educational Technology Research and Development. https://doi.org/10.1007/S11423-024-10437-Y

Mah, D.-K. (2016). Learning analytics and digital badges: Potential impact on student retention in higher education. Technology, Knowledge and Learning, 21(3), 285–305. https://doi.org/10.1007/s10758-016-9286-8

Mah, D.-K., & Groß, N. (2024). Artificial intelligence in higher education: exploring faculty use, self-efficacy, distinct profiles, and professional development needs. International Journal of Educational Technology in Higher Education, 21, 58. https://doi.org/10.1186/s41239-024-00490-1

Mah, D.-K., Hense, J., & Dufentester, C. (2023). Didaktische Impulse zum Lehren und Lernen mit und über Künstliche Intelligenz. In C. de Witt, C. Gloerfeld, & S. E. Wrede (Eds.), Künstliche Intelligenz in der Bildung (pp. 91–108). Springer Fachmedien. https://doi.org/10.1007/978-3-658-40079-8_5

Mah, D.-K., Knoth, N., & Egloffstein, M. (2025). Perspectives of academic staff on artificial intelligence in higher education: exploring areas of relevance. Frontiers in Education, 10. https://doi.org/10.3389/feduc.2025.1484904

Mah, D.-K. (in press). Generative KI wie ChatGPT und Learning Analytics im Zusammenspiel: Ein ko-kreatives Anwendungsszenario zur Entwicklung didaktischer Lernmaterialien. GMW-Tagungsband.

Mangaroska, K., & Giannakos, M. (2019). Learning Analytics for Learning Design: A Systematic Literature Review of Analytics-Driven Design to Enhance Learning. IEEE Transactions on Learning Technologies, 12(4), 516–534. https://doi.org/10.1109/TLT.2018.2868673

Mao, J., Chen, B., & Liu, J. C. (2024). Generative artificial intelligence in education and its implications for assessment. TechTrends, 68, 58–66. https://doi.org/10.1007/s11528-023-00911-4

Márquez, L., Henríquez, V., Chevreux, H., Scheihing, E., & Guerra, J. (2023). Adoption of learning analytics in higher education institutions: A systematic literature review. British Journal of Educational Technology, 55(2). https://doi.org/10.1111/bjet.13385

Mayring, P. (2021). Qualitative Content Analysis. A Step-by-Step Guide. SAGE Publications.

Mishra, P., Oster, N., & Henriksen, D. (2024). Generative AI, teacher knowledge and educational research: Bridging short- and long-term perspectives. TechTrends, 68, 205–210. https://doi.org/10.1007/S11528-024-00938-1

Mittal, U., Sai, S., Chamola, V., & Sangwan, D. (2024). A comprehensive review on generative AI for education. IEEE Access, 12, 142733–142759. https://doi.org/10.1109/ACCESS.2024.3468368

Moundridou, M., Matzakos, N., & Doukakis, S. (2024). Generative AI tools as educators’ assistants: Designing and implementing inquiry-based lesson plans. Computers and Education: Artificial Intelligence, 7, 100277. https://doi.org/10.1016/j.caeai.2024.100277

Muljana, P. S., & Luo, T. (2020). Utilizing learning analytics in course design: voices from instructional designers in higher education. Journal of Computing in Higher Education, 33, 206–234. https://doi.org/10.1007/S12528-020-09262-Y

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2. https://doi.org/10.1016/j.caeai.2021.100041

Parsons, B., & Curry, J. H. (2024). Can ChatGPT pass graduate-level instructional design assignments? Potential implications of artificial intelligence in education and a call to action. TechTrends, 68, 67–78. https://doi.org/10.1007/s11528-023-00912-3

Pratschke, B. M. (2024). Generative AI and education. Digital pedagogies, teaching innovation and learning design (1st ed.). Springer. https://doi.org/10.1007/978-3-031-67991-9

Rüdian, S., & Pinkwart, N. (2023). Auto-generated language learning online courses using generative AI models like ChatGPT. In R. Röpke & U. Schroeder (Eds.), 21. Fachtagung Bildungstechnologien (DELFI). (pp. 65–76). Gesellschaft für Informatik e.V. http://www.doi.org/10.18420/delfi2023-14

Schlude, A., Mendel, U., Stürz, R. A., & Fischer, M. (2024, March 15). Verbreitung und Akzeptanz generativer KI an Schulen und Hochschulen. Bidt DE. https://www.bidt.digital/publikation/verbreitung-und-akzeptanz-generativer-ki-an-schulen-und-hochschulen/

Sclater, N., Peasgood, A., & Mullan, J. (2016). Learning analytics in higher education: A review of UK and international practice. Jisc.

Shibani, A., Knight, S., & Buckingham Shum, S. (2020). Educator perspectives on learning analytics in classroom practice. The Internet and Higher Education, 46, 100730. https://doi.org/10.1016/j.iheduc.2020.100730

Southworth, J., Migliaccio, K., Glover, J., Glover, J., Reed, D., McCarty, C., Brendemuhl, J., & Thomas, A. (2023). Developing a model for AI Across the curriculum: Transforming the higher education landscape via innovation in AI literacy. Computers and Education: Artificial Intelligence, 4. https://doi.org/10.1016/j.caeai.2023.100127

Susnjak, T. (2022). ChatGPT: The End of Online Exam Integrity? ArXiv. https://doi.org/10.48550/arXiv.2212.09292

Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), 1–24. https://doi.org/10.1186/S40561-023-00237-X

Tsai, Y.-S., Rates, D., Moreno-Marcos, P. M., Muñoz-Merino, P. J., Jivet, I., Scheffel, M., Drachsler, H., Delgado Kloos, C., & Gašević, D. (2020). Learning analytics in European higher education—Trends and barriers. Computers & Education, 155. https://doi.org/10.1016/j.compedu.2020.103933

von Garrel, J., & Mayer, J. (2023). Artificial intelligence in studies—use of ChatGPT and AI-based tools among students in Germany. Humanities and Social Sciences Communications, 10(1), 1–9. https://doi.org/10.1057/s41599-023-02304-7

Wollny, S., Di Mitri, D., Jivet, I., Muñoz-Merino, P., Scheffel, M., Schneider, J., Tsai, Y., Whitelock-Wainwright, A., Gašević, D., & Drachsler, H. (2023). Students’ expectations of learning analytics across Europe. Journal of Computer Assisted Learning, 39(4), 1325–1338. https://doi.org/10.1111/jcal.12802

Xu, J., Hur, J., Kim, I., Kozan, K., & Baptiste, J. (2025). Integration of artificial intelligence into instructional design. The Journal of Applied Instructional Design, 14(2). https://doi.org/10.59668/2222.20824

Yang, M., Duha, M. S. U., Kirsch, B. A., Glaser, N., Crompton, H., & Luo, T. (2024). Universal design in online education: A systematic review. Distance Education, 45(1), 23–59. https://doi.org/10.1080/01587919.2024.2303494

Yusuf, A., Pervin, N., Román-González, M., & Noor, N. M. (2024). Generative AI in education and research: A systematic mapping review. Review of Education, 12(2). https://doi.org/10.1002/REV3.3489

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1–27. https://doi.org/10.1186/s41239-019-0171-0