How Learning Analytics Can Inform Learning and Instructional Design

Learning and instructional designers are increasingly required to make use of various forms of data and analytics in order to design, implement, and evaluate interventions. This is largely due to the increase in data available to learning designers and learning analysts following the uptake of digital technologies in learning and training. This chapter examines the historical development of what has become known as learning analytics, defining the field and considering various models of learning analytics, including the process model with its focus on student learning. The chapter then explores the ways that learning analytics might be effectively connected to the field of learning design, and presents and discusses a number of example applications, including learning analytics dashboards, tailored messaging and feedback systems, and writing analytics. Finally, the chapter concludes with a consideration of the challenges and future directions for learning analytics.


P. W. Anderson (1972) famously wrote that more is not just more—more is sometimes both quantitatively and qualitatively different. This is especially true in the case of data, and what is often known as “big data,” which is central to the study of learning analytics. This chapter discusses the relationship between big data and learning analytics and, more importantly, what that means for learning designers and other educators. It will examine what learning analytics is, some of the advantages it offers and the challenges it faces, and what learning designers need to know to make use of it in their design work.

Making Sense of Data

One consequence of the increased use of digital technologies in education is that users now generate significantly more data than in the past. Often, this data is a kind of “digital exhaust”—something that learners do not even realize they are creating. Nevertheless, that data is generated, and researchers, academics, and businesses gather it and attempt to make sense of it. However, understanding how to use the data to improve learning outcomes is not easily done; this challenge led to the development of the field of learning analytics (LA).

A Short History of Learning Analytics

Many of the tools, techniques, and data sources used in learning analytics are new, but the field itself has a long history. Since the early days of behaviorism, psychologists and educational researchers have discussed Computer-Aided Instruction (CAI), Computer-Assisted Teaching (CAT), and Intelligent Tutoring Systems (ITS)—all movements in which Artificial Intelligence (AI) has been central. The key point, however, is that technology is now beginning to make possible what was previously only an idea. Some of the key tools involved in this change are machine learning, learning analytics, and big data. Figure 1 showcases some important events from this history.

Figure 1. A brief history of learning analytics

Defining Learning Analytics

The following definition was proposed at the first International Learning Analytics and Knowledge Conference (Siemens & Long, 2011):

"Learning analytics is the measurement, collection, analysis, and reporting of data about learners and their contexts, for the purposes of understanding and optimising learning and the environments in which it occurs."

This definition draws on the broader field of data analytics, known in industry as Business Intelligence (BI). The purpose of data analytics is the development of what is called “actionable intelligence.” This means the data analysis is not undertaken solely for the sake of research, but rather to enable better decisions about what to do to reach specific outcomes. That idea is present in the definition of learning analytics above, with its emphasis on “optimising learning and the environments in which it occurs.” A more comprehensive definition is presented in the Consolidated Model of Learning Analytics (Gašević et al., 2016). This model (Figure 2) combines design, theory, and data science, and argues that learning analytics sits at the nexus of all three.

Figure 2. The consolidated model of learning analytics

How Does Learning Analytics Work?

Learning analytics is best understood as a cyclical, iterative process. This section will examine the learning analytics cycle in detail, including a discussion about the kinds of data, algorithms, and modelling employed in learning analytics.

Process Models of Learning Analytics: It is All About the Learning

In the formative years of learning analytics as a discipline, researchers proposed models and frameworks to facilitate a better understanding of the processes involved in data-informed educational decision-making. One example of an early documented process for LA is provided by Campbell and Oblinger (2007, in Clow, 2012). To gain actionable intelligence from institutional data regarding retention and success, they defined a five-step process: capture, report, predict, act, and refine. Because academic analytics involves decision-making based on institutional data (Campbell et al., 2007), Campbell and Oblinger’s model focused on statistics and modelling of big data.

As learning analytics began to draw greater attention, new frameworks and models were defined. Chatti et al. (2012) proposed a three-stage reference model focusing on how data are involved in the process: (a) data are collected and pre-processed; (b) the pre-processed data are transformed via analytics and actions such as prediction or data mining; and (c) the analysis process is improved through post-processing, for example, by gathering new data, refining existing data, or adapting the analysis. Their model addressed four key questions: what (data), who (target of analysis), why (purpose of analysis), and how (how analysis is performed). The three-stage model provided a useful framework for understanding the emerging trends in the early years of the discipline. Chatti et al. (2012) mapped the literature available at the time against this framework and identified that most of the work around learning analytics at that time was purposed for intelligent tutoring systems or for researchers and system designers, focusing on classification and prediction techniques. In contrast to Chatti et al.’s (2012) data-centric model, Clow (2012) aligned his model (see Figure 3) with that of Campbell and Oblinger (2007, in Clow, 2012). Through this alignment, Clow proposed a four-stage LA cycle emphasising learners and learning: (a) learners in any learning setting (b) generate data, either by virtue of their demographics or through their learning activities and academic performance; (c) these data are transformed into metrics (e.g., indicators of progress, comparisons against benchmarks or peers) via analytics; and (d) metrics inform interventions such as learning dashboards, recommendation systems, emails from educators, or phone calls to learners. When (or if) learners act on the intervention, they generate new data, thereby creating another iteration of learning analytics.

Figure 3. The Learning Analytics Cycle (based on Clow, 2012)
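To make the cycle concrete, it can be sketched as a small program. The sketch below is purely illustrative: the activity counts, the metric, the thresholds, and the intervention labels are all invented for this example and are not drawn from any real system.

```python
# Illustrative sketch of Clow's (2012) four-stage learning analytics cycle:
# learners -> data -> metrics -> interventions. All field names, thresholds,
# and messages are hypothetical; real systems use far richer institutional data.

from statistics import median

def compute_metrics(activity_log):
    """Stage (c): transform raw activity data into a comparative metric
    (here, each student's activity relative to the cohort median)."""
    cohort_median = median(activity_log.values())
    return {student: count / cohort_median if cohort_median else 0
            for student, count in activity_log.items()}

def choose_intervention(metric):
    """Stage (d): map a metric onto an intervention that closes the feedback loop."""
    if metric < 0.5:
        return "personal email from educator"
    if metric < 1.0:
        return "dashboard nudge"
    return "no action"

# Stages (a)-(b): learners generate data through their learning activities.
activity_log = {"student_a": 2, "student_b": 9, "student_c": 14}

metrics = compute_metrics(activity_log)
interventions = {s: choose_intervention(m) for s, m in metrics.items()}
print(interventions)
```

When (or if) a learner acts on the intervention, new activity lands in the log, and the loop runs again; that iteration is the essence of Clow's model.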

Other models have emerged in the literature to improve upon predecessors and focus more on making sense of learner data. For example, Verbert et al. (2013) proposed a learning analytics process model that focuses on processes around the sensemaking of learning analytics. However, in our view, Clow’s (2012) model is useful for educators and learning designers who have control over the teaching context, as the process cycle highlights the need for LA interventions “to close the feedback loop” (p. 134) for students through actionable insights. This theme of closing the feedback loop continues to be significant when considering the effectiveness of LA interventions today. Throughout the rest of this chapter, you will see concrete examples of how this framework can be implemented in teaching and learning design.

Think About It!

As an educator:

  • What information do you usually rely on to know whether your students are learning?
  • How do you use this information to help your students?
  • How might you use data from your students’ interactions with the course’s learning tasks and resources to inform your learning design?

Regardless of which LA process or model is subscribed to, a central concern of LA is data. So, what kinds of data can be collected about learners? The reality is that there is an almost limitless amount of data available. The challenge lies in putting that data into a usable format, then analysing it to find actionable insights. Mougiakou et al. (2023, p. 3) describe the range of data involved:

A wide range of data is generated by the learners and stored in online and blended teaching and learning environments. Data is collected from explicit learners’ activities, such as completing assignments and taking exams, and from tacit actions, including online social interactions, extracurricular activities, posts on discussion forums, and other activities that are not directly assessed as part of the learner’s educational progress.

Some examples are included below (Figure 4).

Figure 4. Sources of data

This learner-generated data is used to assess learning progress, predict learning performance, detect and identify potentially harmful behaviors, and act upon the findings. In addition to measuring student performance, it can also be used to help us understand the effectiveness of our learning design, as described in the next section.
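The first hurdle named above, putting raw “digital exhaust” into a usable format, often amounts to simple aggregation. The event records below are hypothetical examples; real LMS logs differ by platform, but the pattern of reducing a clickstream to per-student counts is common.

```python
# A hypothetical example of turning raw clickstream events into a usable
# format. The event fields are invented for illustration; real LMS logs
# vary by platform and are usually much richer.

from collections import Counter

raw_events = [
    {"student": "s1", "action": "view_resource"},
    {"student": "s1", "action": "forum_post"},
    {"student": "s2", "action": "view_resource"},
    {"student": "s1", "action": "view_resource"},
]

# Aggregate events into per-student activity counts, a format that
# downstream analytics (metrics, dashboards, alerts) can act on.
activity = Counter((e["student"], e["action"]) for e in raw_events)
print(activity[("s1", "view_resource")])
```

Counts like these are the raw material for the metrics and interventions discussed throughout this chapter.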

Connecting Learning Analytics with Learning Design

This section will explore various learning analytics approaches that can be employed to assist students in reaching intended learning outcomes. It takes the form of practical case studies and discussions of the benefits of different learning analytics approaches, and then explains the current areas of focus of learning analytics research. Learning analytics interventions can take many forms, including feedback or personalized study support. To do this, the interventions draw on information about the learner’s activity or performance that is available from university systems or that the learner provides through self-reporting mechanisms.

Learning analytics dashboards: Visualising learning

Dashboards are perhaps the most frequently mentioned intervention associated with learning analytics and are garnering strong interest from researchers and developers. Learning analytics dashboards (LADs) are visual displays of learners’ information and/or progress with digital learning technologies. A dashboard’s aim is to present the most important information to a range of educational stakeholders in a single display (Schwendimann et al., 2017). These systems are fully automated, in that data are transformed into meaningful metrics by preset algorithms and deployed at scale. LADs differ from institutional dashboards, which are designed from the perspective of academic analytics and draw on a wide range of data captured across university systems, providing information on key performance indicators such as student enrollment patterns, retention rates, and faculty and staff employment information.

Institutional dashboards as described above draw on big data and can present useful information for the university to make decisions at an administrative level. However, when it comes to closing the feedback loop for learning (cf. Figure 3), LADs designed for educators and students as key stakeholders of learning analytics are important. Student- and educator-facing LADs (henceforth, LADs) have been increasingly represented in the literature.

One of the earliest and most famous LADs in the field, also referred to as the “poster child of LA,” was Purdue University’s Course Signals (Arnold & Pistilli, 2012), an early-alert system that employed predictive algorithms on data from students’ academic background, current performance, and ongoing progress to identify students at risk of dropping out of or failing a subject. The results of the algorithms were presented visually, in the form of “traffic lights,” to signal the likelihood of success: green indicating a high likelihood that the learner will succeed in the course; orange indicating the presence of issues impeding success; and red flagging students at high risk of failing the course. The signals were presented to students as feedback on their progress, as well as to course instructors, who could then intervene in a timely way through emails or face-to-face consultation.
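The general idea of a traffic-light early-alert indicator can be sketched in a few lines. Note that the actual Course Signals algorithm was proprietary; the features, weights, and cut-offs below are invented purely to show how a weighted risk score might map onto the three signal colours.

```python
# A hypothetical traffic-light risk indicator in the spirit of Course Signals.
# The features, weights, and thresholds are invented for illustration; the
# real system used proprietary predictive algorithms over institutional data.

def risk_signal(prior_gpa, current_grade, lms_logins_per_week):
    # Normalise each indicator to the range 0..1, where 1 means higher risk,
    # then combine them with (arbitrary) weights.
    risk = (0.3 * (1 - prior_gpa / 4.0)
            + 0.5 * (1 - current_grade / 100.0)
            + 0.2 * (1 - min(lms_logins_per_week, 10) / 10.0))
    if risk < 0.25:
        return "green"   # high likelihood of success
    if risk < 0.5:
        return "orange"  # issues impeding success
    return "red"         # high risk of failing

print(risk_signal(prior_gpa=3.6, current_grade=82, lms_logins_per_week=6))
```

A real predictive system would learn such weights from historical data rather than hand-tune them, and would be validated carefully before being shown to students.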

Other notable LAD research has been documented by researchers around the globe. Notably, a group from Katholieke Universiteit (KU) Leuven in Belgium, led by Katrien Verbert (e.g., Verbert et al., 2013; Broos et al., 2018), have been actively involved in developing LADs. However, comparatively few of these developments have made their way into actual teaching and learning settings. In a recent example, Li et al. (2021) described how instructors used an instructor-facing LAD at one institution in the United States (see case-in-point below).


Figure 5. An example of an instructor-facing LAD (Source: Li et al., 2021, p. 346. Image used with permission)

LADs have tended to draw on LMS activity data, which is not surprising as these data are readily available and automatically cached in learning platforms. However, advances in information and communication technologies (ICTs) as well as in data mining have resulted in a greater range of data reporting and visualizations afforded by LADs. For example, some recently developed LADs use multimodal data to provide feedback to educators and learners about learners’ emotions during learning (e.g., Ez-Zaouia, Tabard, & Lavoué, 2020) and collaborative activity (e.g., Martinez-Maldonado et al., 2021).     

With learning increasingly shifting to online or blended modes, the LMS has become a feature of most learning environments. Major LMS vendors such as Canvas and Blackboard now include dashboard features for students and instructors (See Figure 6). Accordingly, Williamson and Kizilcec (2022) noted that “the dashboard feature in LMS likely exposed millions of students and instructors to LADs” (p. 260).


Figure 6. An example of an instructor-facing LAD generated by the Canvas Learning Management System used at the University of Technology Sydney (Source: Image used with permission)

LIDT in the World: How Do Instructors Use LADs in Real Teaching Contexts?

Recall the Think About It! exercise posed earlier. Consider how you might use data from your students’ interactions with learning tasks and resources in your course to inform your learning design. LADs offer such information at a glance. So, how do instructors use this information in their own contexts? Gleaning actionable intelligence from student activity and performance information in LADs involves a sense-making process. Li et al. (2021) studied how instructors from different disciplines at a large private university in the United States made sense of a teacher-facing LAD at the institution. The LAD (see Figure 5) had been part of the institution’s learning management system for a few years, and offered the following views for instructors:

  • Timestamped data showing the extent to which students had accessed course materials
  • Frequency with which students employed key words in discussion forum posts or assignments
  • Interactions with video resources
  • Assessment performance

From their analysis of interview data with instructors who used the LAD, the researchers identified three categories of questions to which instructors sought answers when interrogating the visualizations. Most typically, instructors approached the LAD with goal-oriented questions—seeking information regarding how students were interacting with course resources, and therefore, the extent to which course objectives were being met. Importantly, some instructors also applied their knowledge about their student cohort (e.g., if they were mature students) as an additional layer of insight to inform how they talked to students about course objectives. Secondly, instructors approached the LAD with problem-oriented questions—seeking explanations for notable issues such as students’ poor performance, for instance, by examining their learning behaviours. Thirdly, instructors approached the LAD with instructional modification questions—making informed decisions about teaching, for example, to maximize limited class time by focusing instruction if data showed poor grasp of certain topics.

Despite their pervasiveness, LADs have come under heavy criticism as automated feedback systems. A key criticism is that, because these visualization tools are designed to scale across students and contexts, they tend to be a one-size-fits-all solution (Teasley, 2017). As noted above, LADs tend to draw on data collected from students’ activity in the LMS. The way students engage with different activities—discussion forums, readings, practice exercises and other resources—is a function of learning design. Given the contextual nature of these interactions, information presented in LADs, especially those that run on predictive analytics, face the issue of trustworthiness.

A second criticism has to do with the communication of information: While the visualisations may be very advanced and aesthetically pleasing, most are still passive displays of feedback reporting (Jivet et al., 2021). Feedback is a dialogic process, requiring learners to make sense of the information presented, manage unproductive emotions, and then take action in order to improve learning or performance (Carless & Boud, 2018). As such, LADs need to be carefully designed to foster dialogic feedback processes.

Related to this, a third criticism is that many LADs are limited in their ability to offer actionable advice to learners, which is another important principle of effective feedback communication. Because of the highly visual nature of the reports, learners may need help interpreting the information. Inaccurate or unhelpful interpretation of the information could negatively impact a learner’s motivation to learn (e.g., Lim et al., 2019). In fact, research examining the impact of LADs indicates that the way information is presented on LADs can have detrimental effects on students who are at risk of failing (Aguilar, 2018; Lonn, Aguilar, & Teasley, 2015).

A final criticism of LADs has to do with the increasing concern regarding equity in learning analytics. With most LADs relying on single algorithms to report on students’ progress or to predict performance, there is a danger of “algorithmic bias,” disadvantaging underrepresented groups (Williamson & Kizilcec, 2023). As contemporary higher education is characterised by large and diverse cohorts, there is a danger that underrepresented groups may not be able to attain the same levels of progress or performance at the same speed as those who possess more experience with academic environments.

In summary, LADs are a ubiquitous intervention in digital learning environments, due in large part to the abundance of data readily cached in LMS platforms. However, while these fully automated feedback interventions can be aesthetically pleasing and easy to scale, most fall short of delivering on some of the principles of effective feedback and are limited in their ability to address issues of equity.

Think About It!

Have you come across LADs in your own teaching? To what extent do you make use of these visualisations to understand your students? Do you find these tools helpful or unhelpful?

Tailored Messaging and Feedback Systems: “Systems That Care”

As noted above, LADs are fully automated learning analytics feedback systems that deliver personalized feedback to learners with the aim of supporting monitoring and self-regulated learning. We also took note of some of the criticisms associated with such systems, which are designed to be administered at scale. We now turn to another class of feedback interventions that are not fully automated but are instead designed with a “human in the loop” in the learning analytics process; that is, humans intervene in the application of the system. The following interventions may be considered “systems that care,” in du Boulay’s (2010) terms, because they are directed at learners’ motivation, metacognition, and affect.

These human-involved systems are currently adopted “in the wild”: in higher education settings by educators around the world. The systems allow educators to control certain features, also known as parameters, in accordance with their context and learning design; depending on the tool, these may include which data sources are drawn upon, how students are segmented, and what the feedback messages contain.

This contextualization is important, as learning design significantly influences students’ engagement and success (Gašević et al., 2016). What follows are three examples of these semi-automated learning analytics feedback systems that have been featured in the literature: the Student Relationship Engagement System (SRES), OnTask, and ECoach. These systems are discussed here because each is known for its implementation in higher education courses.

The Student Relationship Engagement System (SRES) and OnTask

SRES (Liu et al., 2017) and OnTask (Pardo et al., 2018) are described together here, as they share a few similarities in the way they allow educators to make decisions regarding personalized, data-informed feedback and support, using “teacher intelligence and small (but meaningful) data” (Arthars et al., 2020, p. 229). A noteworthy point about both systems is that they were developed by educators who understood the challenges that fellow educators face in wanting to personalize support and feedback for large cohorts of students. These tools were designed to help educators provide personalized support to students in their courses in a timely manner.

Both tools involve educators as part of the learning analytics process (see Figure 7). In basic terms, educators can select from the universe of data that is automatically collected from the LMS if the system is integrated with institutional systems. In addition, educators can upload data from other sources. This might include attendance data from synchronous or face-to-face classes; performance on formative quizzes; or surveys that capture student academic variables such as program of study, current motivation, or learning challenges. Self-report surveys may be especially important for further personalization of support or communication. At the simplest level, at the start of the semester, instructors may ask students whether they have a preferred name that differs from their official record, so that the instructor can use it in all subsequent communications.


Figure 7. The learning analytics cycle with a human in the loop

Instructors who used the SRES to communicate with their students on a first-name basis reported in interviews that the emphasis on using a preferred name helped them engage personally with their students and show their care. As students progress through the semester, instructors can deploy timely surveys to tap into how students are feeling about their current learning, for example, by asking them about the “muddiest points” of the week. The instructors then possess the resources to personalize communication based on students’ responses: the instructor can segment students based on the data, and design supportive messages for personalized feedback, nudges, or advice to each student based on their progress or attitudes over the course of their study.
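The segment-then-message logic underlying tools like OnTask and the SRES can be sketched as educator-defined rules paired with message fragments. The student fields, rule conditions, and wording below are hypothetical examples, not the tools' actual data model or API.

```python
# A minimal sketch of rule-based personalized messaging in the style of
# OnTask or the SRES: the educator defines segment rules and message
# fragments, and the system assembles one message per student. All field
# names, rules, and wording are invented for illustration.

students = [
    {"name": "Aisha", "preferred_name": "Aisha", "quiz_score": 9, "attended": True},
    {"name": "Robert", "preferred_name": "Bob", "quiz_score": 3, "attended": False},
]

# Each rule pairs a condition (the segment) with an educator-written fragment.
rules = [
    (lambda s: s["quiz_score"] >= 8,
     "Great work on this week's quiz. Try the extension problems next."),
    (lambda s: s["quiz_score"] < 5,
     "The quiz suggests this week's topic needs review; see the worked examples."),
    (lambda s: not s["attended"],
     "We missed you in class this week. Do reach out if you need support."),
]

def compose_message(student):
    fragments = [text for condition, text in rules if condition(student)]
    return f"Hi {student['preferred_name']}, " + " ".join(fragments)

for s in students:
    print(compose_message(s))
```

Note how the preferred name and the segment fragments combine so that each student receives a message reflecting their own progress, while the educator writes each fragment only once.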

While these systems are still fairly new, empirical evidence that documents their impact has recently emerged. For example, results from studies with OnTask have been promising, showing positive impact on students’ motivation (Lim et al., 2021), self-regulated learning (Matcha et al., 2019), and academic performance (Pardo et al., 2019; Lim et al., 2021). In these studies, performance and learning management system activity data were collected from courses where instructors employed OnTask for personalized feedback; the data were compared between cohorts who experienced personalized feedback and those who had not.

Additionally, studies exploring data on course satisfaction indicate that personalizing feedback in this way enhanced the course experience for students (Pardo et al., 2019). Through interview studies, it was also found that students experienced greater feelings of support and belonging in their courses (e.g., Lim et al., 2021). Importantly, students noted that the fact that personalized feedback was communicated from the instructor played an important motivational role in their continued efforts to learn in the course.

Research on educators’ perspectives with the SRES also serves as a rich source of information for understanding how these key stakeholders engage with learning analytics feedback tools in actual teaching settings (as opposed to laboratory settings, which tend to be the case for new technologies), the challenges they face, and how to facilitate wider adoption of such tools (Arthars & Liu, 2020; Blumenstein et al., 2019). For example, Arthars and Liu (2020) described how instructors moved from collecting student attendance data on paper rolls to a digitized approach leveraging the affordances of the SRES. Later, as the tool became integrated with the institution’s LMS, further automation of the data collection process became possible. This automation allowed instructors to generate metrics for personalized feedback on engagement data such as quizzes, and to communicate this feedback through emails.

While sharing some similarities with OnTask, the SRES, being an earlier development, has advanced into a more mature, “multifunctional system” (Arthars et al., 2020). This newer system offers additional features for interacting with feedback, such as students’ self-reflection on instructor, dialogic, and peer feedback (see Figure 8 for an illustration). As students write their self-reflections, their inputs can feed the next cycle of personalized feedback. It is also possible for instructors to upload data in real time, including tutorial attendance data, students’ lab results, or rubric-based marks from oral examinations. This has allowed instructors to greatly reduce the time it takes to provide feedback on assessment. To date, the SRES has been widely adopted at the University of Sydney by instructors who want to engage personally with their students (see Arthars et al., 2020, for an in-depth perspective on how and why instructors adopted the SRES at the institution).


Figure 8. The SRES interface. (Top) Email editor with elements for personalization. (Bottom) Feedback message showing instructor and peer feedback, and dialogue for feedback response. (Source: Arthars et al., 2020. Images used with permission)

Both OnTask and SRES are open source tools, meaning that they are available for anyone to use; what may be needed is to ensure that the tool meets institutional requirements with regard to data privacy and security (if you are interested, read the paper by Buckingham Shum (2023) when seeking to embed learning analytics technologies within an institution).


ECoach

ECoach is a platform that leverages data to offer personalized learning support to students, offering them a personal “coach” through the student’s own personalized portal. The platform was designed as a motivational coaching system for students in large, introductory STEM courses to enhance motivation and ultimately foster academic success. Managed at the University of Michigan since 2014 (Huberth et al., 2015; Wright et al., 2014), the platform is relatively mature, having undergone several iterations of design based upon ongoing research.

The platform hosts a range of features to prompt students’ metacognition and reflection, and employs messages personalized to students’ activity data in the course LMS, as well as to information gathered from self-report surveys. The features were added gradually as part of the platform’s development and include tailored messages, exam playbooks, exam reflections, a grade calculator, and an interactive to-do list (see Matz et al., 2021 for details). Some of the surveys are preset and based on existing instruments on study habits, such as the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich & de Groot, 1990). The preset surveys ask students about their goals for the course or their study habits; other surveys can be designed by instructors and deployed to students through the system so that advice or feedback can be tailored to students based on their response data.


Figure 9. The ECoach platform from the students’ perspective. (Left) A to-do list tuned to the curriculum, prompting student reflection and action. (Right) A tailored post-exam reflection message integrating student self-report data about how they regarded their grade, and eliciting reflection on study habits. (For details, see Matz et al., 2021. Images used with permission.)

A notable difference between ECoach and the OnTask and SRES systems, as described above, is how instructors work with the system. Instructors work directly with the OnTask and SRES platforms to select data and create rules for—as well as write—personalized messages to motivate students and nudge productive study behaviours. In the case of ECoach, a team of behavioral scientists consults with instructors to customize the platform for their course—this becomes the “coach” for students.

LIDT in the World: Aligning Learning Analytics with Learning Design—A Case Study on Personalized Feedback

Educators can align learning analytics with learning design through personalized feedback. Here is one example: the setting was a course in the engineering and IT discipline that employed a flipped learning design, with close to 600 students enrolled. As is typical of flipped learning designs, students were required to complete preparatory tasks prior to the weekly lecture. The preparatory work consisted of watching a topic video and answering related questions, as well as completing a short series of practice exercises, and was designed to support students’ mastery of the weekly topics.

Because the educator wanted to increase students’ engagement with the preparatory work and provide personalized feedback on the mastery of the weekly topic, they utilized the OnTask system to generate formative feedback after the preparatory work was due. This meant that students received feedback on their progress with the tasks as well as on their performance. Evaluations of this personalized feedback design found that, overall, students acquired a regular pattern of study that involved preparation and review. They also performed better when the educator included actionable study advice along with performance feedback. Importantly, the way that the personalized, data-informed feedback was designed reinforced the rationale of the flipped learning design. This is a powerful example of the impact on students’ learning when learning analytics is aligned with learning design. For more details about this case study, refer to Pardo et al. (2019) and Lim et al. (2021).

Think About It!

How much do you know about your students? If you were to give them a survey to find out more about them, what would you ask? How would you use this information to offer personalized advice or feedback?

Writing Analytics: Supporting Students’ Academic Writing with Natural Language Processing

So far, we have discussed learning analytics interventions that draw on LMS activity data and, to some extent, student self-report data to personalize feedback and support. As mentioned earlier, quantitative data have tended to be the main type of data used in the field due to the affordances LMSs offer to automatically capture and harvest learners’ activity data. However, textual data is just as ubiquitous—readily available in forms such as written assessments, qualitative survey responses, and discussion forum comments. Furthermore, writing—in particular, academic writing—is a “key disciplinary skill” (Knight et al., 2020) that is a requisite for professions such as law and business, and certainly for most areas of research. However, teaching academic writing in higher education courses where enrollment runs into the hundreds is undoubtedly a challenge: students need personalized feedback and guidance on their writing, yet with educators juggling teaching loads and administrative tasks, helping every student individually is nearly impossible. With this in mind, we now turn to another category of learning analytics interventions: writing analytics feedback.

Automated text analysis tools have been around for more than a decade now. These tools can be classified into one of three categories: automated essay scoring (AES), automated writing evaluation (AWE), and intelligent tutoring systems (ITS) (Conijn et al., 2022). In this chapter, only a very brief comparison among the three classes of tools is provided; for more details, you are invited to read Conijn et al. (2022), who offer a good overview of these categories, with examples of the tools in each.

ITSs are the longest-established of the three. These are closed systems, meaning that they are fully automated with pre-set algorithms. They are standalone platforms, able to perform a range of functions with built-in affordances for interactivity. AES tools are intended to assist educators with summative assessment of students’ writing; in other words, they focus on the written piece as an end product and evaluate this product according to pre-defined rules. Finally, AWEs are the newest systems, designed to provide personalized support and formative feedback on students’ writing processes, rather than the final product, to help students improve their writing.

Automated writing evaluation tools have emerged due to advancements in automated text analysis—particularly natural language processing techniques. One example of an AWE that has been featured in the literature is a tool called AcaWriter (see Knight et al., 2020). This tool sits on an online platform and provides automated feedback to students by highlighting the presence of “rhetorical moves” (Swales, 2004) that are critical in academic writing. AcaWriter is tuned through machine learning to detect “phrases and sentences that indicate . . . the writer’s attitude or position in relation to . . . the text” (Knight et al., 2020, p.143). For example, rhetorical moves in analytical writing include: question move (i.e., raising a question or highlighting missing knowledge), background move (i.e., describing background knowledge or consensus about a topic), and emphasis move (i.e., emphasising significant, important ideas) (Swales, 2004). When students input their written drafts into the platform, they can obtain immediate feedback on their writing regarding the presence or absence of these moves; they can also receive actionable advice on how to improve the draft to make these moves more visible.
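To make the idea of rhetorical-move detection concrete, the toy sketch below flags sentences whose surface wording might signal a move. This is a deliberate simplification: AcaWriter relies on trained natural language processing parsers rather than keyword lists, and the cue phrases here are invented for illustration only.

```python
# Toy illustration of rhetorical-move detection: tag each sentence of a
# draft with the moves whose (invented) cue phrases it contains.
import re

MOVE_CUES = {
    "question":   ["remains unclear", "little is known", "raises the question"],
    "background": ["it is well established", "previous research",
                   "it is widely accepted"],
    "emphasis":   ["importantly", "a key point", "it is crucial"],
}

def tag_moves(text):
    """Return (sentence, [detected moves]) pairs for each sentence."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    results = []
    for sent in sentences:
        low = sent.lower()
        moves = [m for m, cues in MOVE_CUES.items()
                 if any(c in low for c in cues)]
        results.append((sent, moves))
    return results

draft = ("Previous research has examined feedback at scale. "
         "Importantly, little is known about how students act on it.")
for sent, moves in tag_moves(draft):
    print(moves or ["no move detected"], "-", sent)
```

A real AWE tool pairs such detection with actionable advice—for example, suggesting how to make an absent move more visible—rather than simply labelling sentences.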


Figure 10. The AcaWriter web interface. (Left) Highlighting of sentences in which it detects academic ‘rhetorical moves’ (see legend). (Right) Feedback information for the author. 

LIDT in the World: Enhancing Students' Critical Engagement with Writing Analytics Tools—Where Learning Analytics Informs Learning Design

So far, we have described how learning design informs learning analytics interventions in context, but the reverse can also be true: learning analytics can inform learning design. We see this in the case study of the AWE tool, AcaWriter. In this case, researchers worked closely with the educator in a business course to ensure the learning design of the course and the feedback intervention were closely aligned. In this co-design, both parties worked together to tune AcaWriter’s feedback to the assessment rubric, making it easier for students to make sense of, and act on, the tool’s feedback. Even so, it was observed that not all students engaged deeply with the feedback from the AWE tool; many made only surface changes in response to the feedback they received.

To encourage students’ critical engagement with the feedback from AcaWriter, the researchers worked again with the educator to create “self-evaluation exercises,” or SEE sheets, that students were encouraged to complete as they used AcaWriter (see Figure 11). The SEE sheets comprised prompts for students to reflect on the feedback they obtained from AcaWriter. The purpose of the SEE sheets was not to enforce compliance with the feedback, but to encourage students to consider whether or not they agreed with it. Students were also invited to annotate the feedback they downloaded from AcaWriter, as a way to develop their understanding of the feedback. The use of the SEE sheets was an adaptation to the learning design stimulated by the learning analytics intervention. Importantly, this was done in order to foster critical student engagement through personalized writing feedback. You can read the full case study in Shibani et al. (2022).


Figure 11. The AcaWriter self-evaluation exercise (SEE).  (Top) SEE prompts for reflecting on AcaWriter feedback. (Bottom) Sample student response to SEE prompts on AcaWriter feedback. (Source: Shibani et al., 2022. Images used with permission)


Challenges and Future Directions for Learning Analytics

Over a little more than a decade, learning analytics as a field has grown through rising interest, technology development, and the creation of subfields such as multimodal analytics (Blikstein & Worsley, 2016), assessment analytics, and collaborative analytics. These subfields explore wider sources of data to inform interventions in support of student learning beyond the LMS.

A key challenge for the field is that of adoption. While there are a plethora of tools and systems, few have actually been adopted “in the wild”—that is, by educators and institutions. In this chapter, we have presented three examples of systems that have seen adoption in actual teaching environments. Each of these systems harnesses activity, performance, or textual data to personalize feedback and support for students.

This lack of adoption stems from the challenges involved in deciding to procure and embed such tools within existing infrastructure (see Buckingham Shum, 2023, for an overview of the levels of conversations to be had in trying to embed learning analytics tools at a university). Furthermore, educators themselves face challenges when considering whether to adopt data-informed tools. These challenges include the time investment required to learn how to work with the technology (Arthars & Liu, 2020), as well as the kinds of literacy required for working effectively with data (Buckingham Shum et al., 2023). This presents specific challenges for learning designers who are trying to incorporate learning analytics in their practice.

The low adoption of learning analytics within education makes measuring the impact of these data-driven interventions difficult—according to Viberg and Grönlund (2021), the field is in “desperate” need of showing impact at scale. To show this impact, educators and learning designers may need to return to fundamental skills around teacher feedback literacy and develop additional automated feedback competencies in order to effectively design data-informed, personalized feedback and support. Stakeholders need to develop an understanding of the data that can inform them about students’ learning within a learning design, and to enhance their ability to work with data to personalize feedback in a meaningful way. Combining this data literacy with knowledge of effective feedback will foster greater adoption of data-informed feedback systems in the wild.


Aguilar, S. J. (2018). Examining the relationship between comparative and self-focused academic data visualizations in at-risk college students' academic motivation. Journal of Research on Technology in Education, 50(1), 84–103.

Anderson, P. W. (1972). More is different: broken symmetry and the nature of the hierarchical structure of science. Science, 177(4047), 393–396.

Arnold, K. E., & Pistilli, M. D. (2012). Course signals at Purdue: Using learning analytics to increase student success. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (pp. 267–270).

Arthars, N., & Liu, D. Y. -T. (2020). How and why faculty adopt learning analytics. In D. Ifenthaler & D. Gibson (Eds.), Adoption of Data Analytics in Higher Education Learning and Teaching (pp. 201–220). Springer International Publishing.

Blikstein, P., & Worsley, M. (2016). Multimodal learning analytics and education data mining: Using computational technologies to measure complex learning tasks. Journal of Learning Analytics, 3(2), 220–238.

Blumenstein, M., Liu, D. Y. -T., Richards, D., Leichtweis, S., & Stephens, J. (2019). Data-informed nudges for student engagement and success. In J. M. Lodge, J. C. Horvath, & L. Corrin (Eds.), Learning analytics in the classroom: Translating learning analytics for teachers (pp. 185–207). Routledge.

Broos, T., Verbert, K., Langie, G., Van Soom, C., & De Laet, T. (2018). Low-investment, realistic-return business cases for learning analytics dashboards: Leveraging usage data and microinteractions. Springer International Publishing.

Buckingham Shum, S., Lim, L. -A., Boud, D., Bearman, M., & Dawson, P. (2023). A comparative analysis of the skilled use of automated feedback tools through the lens of teacher feedback literacy. International Journal of Educational Technology in Higher Education, 20(1), 40.

Buckingham Shum, S. (2023). Embedding learning analytics in a university: Boardroom, staff room, server room, classroom. In O. Viberg & Å. Grönlund (Eds.), Practicable Learning Analytics (pp. 17–33). Springer International Publishing.

Campbell, J. P., DeBlois, P. B., & Oblinger, D. G. (2007). Academic analytics: A new tool for a new era. EDUCAUSE Review, 42(4), 40–57.

Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325.

Chatti, M. A., Dyckhoff, A. L., Schroeder, U., & Thüs, H. (2012). A reference model for learning analytics. International Journal of Technology Enhanced Learning, 4(5–6), 318–331.

Clow, D. (2012). The learning analytics cycle: Closing the loop effectively. In D. Gašević, S. Buckingham Shum, & R. Ferguson (Eds.), Proceedings of the 2nd International Learning Analytics & Knowledge Conference (pp. 134–138). ACM.

Conijn, R., Martinez-Maldonado, R., Knight, S., Buckingham Shum, S., Van Waes, L., & van Zaanen, M. (2022). How to provide automated feedback on the writing process? A participatory approach to design writing analytics tools. Computer Assisted Language Learning, 35(8), 1838–1868.

du Boulay, B., Avramides, K., Luckin, R., Martínez-Mirón, E., Méndez, G. R., & Carr, A. (2010). Towards systems that care: A conceptual framework based on motivation, metacognition and affect. International Journal of Artificial Intelligence in Education, 20, 197–229.

Ez-Zaouia, M., Tabard, A., & Lavoué, E. (2020). EMODASH: A dashboard supporting retrospective awareness of emotions in online learning. International Journal of Human-Computer Studies, 139, 102411.

Gašević, D., Kovanović, V., & Joksimović, S. (2017). Piecing the learning analytics puzzle: A consolidated model of a field of research and practice. Learning: Research and Practice, 3(1), 63–78. 

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2016). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. The Internet and Higher Education, 28(Supplement C), 68–84.

Huberth, M., Chen, P., Tritz, J., & McKay, T. A. (2015). Computer-tailored student support in introductory physics. PLOS ONE, 10(9), e0137001.

Jivet, I., Wong, J., Scheffel, M., Torre, M. V., Specht, M., & Drachsler, H. (2021). Quantum of choice: How learners’ feedback monitoring decisions, goals and self-regulated learning skills are related. In LAK21: 11th International Learning Analytics and Knowledge Conference (pp. 416–427). Association for Computing Machinery.

Knight, S., Shibani, A., Abel, S., Gibson, A., Ryan, P., Sutton, N., Wight, R., Lucas, C., Sándor, Á., Kitto, K., Liu, M., Mogarka, R. V., & Buckingham Shum, S. (2020). AcaWriter: A learning analytics tool for formative feedback on academic writing. Journal of Writing Research, 12(1), 141–186.

Li, Q., Jung, Y., & Wise, A. F. (2021). Beyond first encounters with analytics: Questions, techniques and challenges in instructors’ sensemaking. In LAK21: 11th International Learning Analytics and Knowledge Conference, Irvine, CA, USA.

Lim, L. -A., Dawson, S., Joksimović, S., & Gašević, D. (2019). Exploring students' sensemaking of learning analytics dashboards: Does frame of reference make a difference? In Proceedings of the 9th International Conference on Learning Analytics & Knowledge: Learning analytics to promote inclusion and success (pp. 250–259). ACM.

Lim, L., Dawson, S., Gašević, D., Joksimović, S., Pardo, A., Fudge, A., & Gentili, S. (2021). Students’ perceptions of, and emotional responses to, personalised LA-based feedback: An exploratory study of four courses. Assessment & Evaluation in Higher Education, 46(3), 339–359.

Lonn, S., Aguilar, S. J., & Teasley, S. D. (2015). Investigating student motivation in the context of a learning analytics intervention during a summer bridge program. Computers in Human Behavior, 47, 90–97.

Martinez-Maldonado, R., Gaševic, D., Echeverria, V., Fernandez Nieto, G., Swiecki, Z., & Buckingham Shum, S. (2021). What do you mean by collaboration analytics? A conceptual model. Journal of Learning Analytics, 8(1), 126–153.

Matcha, W., Gašević, D., & Pardo, A. (2019). A systematic review of empirical studies on learning analytics dashboards: A self-regulated learning perspective. IEEE Transactions on Learning Technologies, 13(2), 226–245.

Matz, R. L., Schulz, K. W., Hanley, E. N., Derry, H. A., Hayward, B. T., Koester, B. P., Hayward, C., & McKay, T. (2021). Analyzing the efficacy of Ecoach in supporting gateway course success through tailored support. In LAK21: 11th International Learning Analytics and Knowledge Conference (LAK21), April 12–16, 2021, Irvine, CA, USA. (pp. 216–225). ACM.

Mougiakou, S., Vinatsella, D., Sampson, D., Papamitsiou, Z., Giannakos, M. & Ifenthaler, D. (2023). Educational Data Analytics for Teachers and School Leaders. Advances in Analytics for Learning and Teaching.

Pardo, A., Bartimote-Aufflick, K., Buckingham Shum, S., Dawson, S., Gao, J., Gašević, D., Leichtweis, S., Liu, D. Y. T., Martinez-Maldonado, R., Mirriahi, N., Moskal, A., Schulte, J., Siemens, G., & Vigentini, L. (2018). OnTask: Delivering data-informed personalized learning support actions. Journal of Learning Analytics, 5(3), 235–249.

Pardo, A., Jovanovic, J., Dawson, S., Gašević, D., & Mirriahi, N. (2019). Using learning analytics to scale the provision of personalised feedback. British Journal of Educational Technology, 50(1), 128–138.

Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82(1), 33–40.

Schwendimann, B. A., Rodríguez-Triana, M. J., Prieto, L. P., Boroujeni, M. S., Holzer, A., Gillet, D., & Dillenbourg, P. (2017). Perceiving learning at a glance: A systematic literature review of learning dashboard research. IEEE Transactions on Learning Technologies, 10(1), 30–41.

Shibani, A., Knight, S., & Buckingham Shum, S. (2022). Questioning learning analytics? Cultivating critical engagement as student automated feedback literacy. In LAK22: 12th International Learning Analytics and Knowledge Conference (pp. 326–335). ACM.

Siemens, G., & Long, P. (2011). Penetrating the fog: Analytics in learning and education. EDUCAUSE Review, 46(5), 30–40.

Swales, J. (2004). Research genres: Explorations and applications. Ernst Klett Sprachen.

Teasley, S. D. (2017). Student facing dashboards: One size fits all? Technology, Knowledge and Learning, 22, 377–384.

Verbert, K., Duval, E., Klerkx, J., Govaerts, S., & Santos, J. L. (2013). Learning analytics dashboard applications. American Behavioral Scientist, 57(10), 1500–1509.

Viberg, O., & Grönlund, Å. (2021). Desperately seeking the impact of learning analytics in education at scale: Marrying data analysis with teaching and learning. In J. Liebowitz (Ed.), Online Learning Analytics. Auerbach Publications.

Williamson, K., & Kizilcec, R. (2022). A review of learning analytics dashboard research in higher education: Implications for justice, equity, diversity, and inclusion. In LAK22: 12th International Learning Analytics and Knowledge Conference, Online, USA.

Wright, M. C., McKay, T., Hershock, C., Miller, K., & Tritz, J. (2014). Better than expected: Using learning analytics to promote student success in gateway science. Change: The Magazine of Higher Learning, 46(1), 28–34.

Lisa-Angelique Lim

University of Technology Sydney, Australia

Dr Lisa-Angelique Lim is currently a Postdoctoral Research Fellow with the Connected Intelligence Centre at the University of Technology Sydney. Lisa’s research explores how learning analytics can be harnessed to support students’ learning and to enhance the learning experience, as well as how stakeholders engage with these sociotechnical systems. Lisa has published in several top journals and conferences in the fields of education and learning analytics. In addition to research, Lisa’s work involves partnering with educators to explore best practices in the implementation of learning analytics interventions within their learning design.
Keith Heggart

University of Technology Sydney, Australia

Dr Keith Heggart is an award-winning researcher with a focus on learning and instructional design, educational technology and civics and citizenship education. He is currently exploring the way that online learning platforms can assist in the formation of active citizenship amongst Australian youth. Keith is a former high school teacher, having worked as a school leader in Australia and overseas, in government and non-government sectors. In addition, he has worked as an Organiser for the Independent Education Union of Australia, and as an independent Learning Designer for a range of organisations.

This content is provided to you freely by EdTech Books.
