Data-Informed Design for Online Course Improvement

DOI:10.59668/164.4223
Instructional Design, Learning Analytics, Data Analytics, Retention, Process Improvement
In this chapter, we introduce the Analytical Design Model (ADM), a strategy developed at Penn State’s World Campus for improving course quality and student outcomes through an evidence-based approach to instructional design. ADM utilizes data from a variety of sources and offers an efficient, flexible approach to analysis that results in concrete improvement targets for courses. ADM is not an instructional design model in itself, but rather a method for making instructional design more strategic and aligned with academic program priorities.

Introduction

Improving student outcomes in higher education is a critical challenge for all colleges and universities. Earning a degree is expensive, and while the American public continues to value higher education, it has substantial concerns about cost (Fishman et al., 2018). In this environment, institutions are under increased pressure to demonstrate that the time students spend at college is of the highest quality and that it is effectively preparing them to engage in the world as productive citizens. This is particularly true for online learning (Allen & Seaman, 2015), as it is a comparatively new mode for higher education. Most online learning units of colleges and universities employ staff specifically to monitor student progress, develop predictive models, and craft a wide variety of retention strategies. These initiatives often focus on specific populations (e.g., first generation, military). They may borrow from ideas such as “improvement science” from the Carnegie Foundation (Bryk et al., 2015), which was created to provide a framework that educators and college administrators can use to systematically analyze the student learning process and implement highly targeted improvements.

This chapter will outline one such framework that is being developed at Penn State’s World Campus. With this framework, we seek to equip faculty and learning designers with a process and set of tools for utilizing learning analytics and other quality measures to pinpoint course design improvements. The model we introduce here, the Analytical Design Model (ADM), is specifically constructed to be flexible, allowing institutions at varying levels of maturity with data science and data infrastructure to see benefits and, over time, to iterate toward greater sophistication in these areas. In short, we hope this model is approachable regardless of an instructional designer’s (ID) prior experience or institutional challenges and serves as the basis for design units to build a culture of data-informed decision making to support student success.

Introduction to the Analytical Design Model

The primary purpose of the ADM is to provide IDs and faculty with a framework for utilizing empirical evidence to precisely target improvements to course content. It is not, in itself, an instructional design model, but rather an approach to analysis that complements traditional instructional design approaches. Thus, it can be used in conjunction with any instructional design model that incorporates an analysis or evaluation phase. The outcome of an ADM implementation is a set of improvement targets that can serve to focus one’s design efforts and increase the likelihood of a positive impact on student success. Such well-articulated targets also serve as discrete elements to measure in a post-implementation evaluation of the efficacy of changes. The targets are not a prescription for how the identified instructional problem should be solved. Rather, ADM offers processes and tools that serve as the basis for productive and creative collaboration between IDs, faculty, and other stakeholders as they seek to improve student success and retention.

Figure 1

Analytical Design Model

[Figure 1 image: Analytical Design Model]

The primary phases of ADM include (see Figure 1):

  1. Plan: Establish Guiding Questions based on program and institutional needs and initial intuitions about where improvements may be needed. These questions will focus the analysis, identify essential data sources, and help to avoid unproductive detours.
  2. Analyze: Gather data, analyze, and develop insights to address the guiding questions. Triangulate with multiple data sets to build confidence in any assertions. Produce specific Revision Prioritization Scores based on the analysis.
  3. Validate: Involve instructors and other practitioners in reviewing the interpretations and ensure they align with real-life experiences. Practice transparency to ensure that methodologies are sound. Prioritize Revision Targets based on stakeholder needs and level of impact.
  4. Design and Develop: Engage in instructional design consistent with the ID’s preferred processes informed by validated Revision Targets.
  5. Evaluate: Collect additional data related to revision targets to assess the efficacy of any changes. Update the data collection protocols, reprioritize targets, ask new guiding questions, and begin to prepare for the next ADM iteration.

Each phase of the ADM is highly deliberate and utilizes an evidence-based approach to produce a specific deliverable that acts as an input to the next phase. For example, guiding questions drive the analysis, and revision targets focus the design. The evidence collected for the ADM may include data about student use of a learning management system (LMS) or content management system (CMS), engagement between students, assessment outcomes, surveys, quality evaluations (e.g., Quality Matters), or alignment maps. Data may be collected through existing learning analytics infrastructures, through access to unprocessed system logs, or through manual processes (e.g., surveys, interviews, evaluations). The model is quite flexible about what data should be included: the only requirement is that data collection is purposeful and aligns with a well-defined set of guiding questions.

When multiple data sets are aligned to a common set of questions, it becomes feasible to triangulate any findings because there are multiple pieces of data to support each assertion. As Cohen and Manion (1989) explain, triangulation seeks to “map out, or explain more fully, the richness and complexity of human behavior by studying it from more than one standpoint and, in so doing, by making use of both quantitative and qualitative data” (p. 269). Triangulation is critical to ADM because we often work with smaller data sets that are limited in terms of the length of collection (one or two semesters) or number of students. Depending on an institution’s maturity with data and learning analytics, being able to incorporate diverse data sources, including low-tech options, while still achieving a high degree of validity is quite useful. By utilizing technical standards such as IMS Caliper (Caliper Analytics, 2020) that impose structure and a common vocabulary on education-related data, it becomes much easier to make linkages across disparate data sources. These data tools create a mechanism for collecting data outside the LMS (Pardo & Kloos, 2011) and greatly simplify triangulation.
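As a minimal, hedged sketch of what aligning multiple data sets to a common set of guiding questions might look like in practice, the Python fragment below joins hypothetical LMS click counts, assessment outcomes, and survey responses on shared identifiers so that each assertion can be checked against more than one source. The file names, column names, and pandas-based workflow are illustrative assumptions, not part of the ADM itself.

```python
import pandas as pd

# Hypothetical exports, one row per student per lesson (identifiers anonymized upstream).
clicks = pd.read_csv("lms_page_views.csv")      # columns: student_id, lesson, page_views
grades = pd.read_csv("assessment_scores.csv")   # columns: student_id, lesson, score_pct
survey = pd.read_csv("survey_responses.csv")    # columns: student_id, lesson, perceived_difficulty

# A shared vocabulary (student_id + lesson) is what makes triangulation feasible;
# standards such as IMS Caliper play this role for event data collected outside the LMS.
combined = (
    clicks.merge(grades, on=["student_id", "lesson"], how="outer")
          .merge(survey, on=["student_id", "lesson"], how="outer")
)

# One guiding question, three lines of evidence: do lessons with weak performance also
# show low content interaction and high perceived difficulty?
by_lesson = combined.groupby("lesson")[["page_views", "score_pct", "perceived_difficulty"]].mean()
print(by_lesson.sort_values("score_pct"))
```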

In summary, it may be useful to think about ADM in the following ways:

The Role of Analysis in Instructional Design

In the context of instructional design, analysis identifies “the probable cause for a performance gap” (Branch, 2009, p. 23). As analysis is a core concept of our model, we must consider why it is important to instructional design. We will do this by viewing analysis through two lenses: business processes and pedagogy. Both organizational needs and learner success must be balanced for any design model to be considered effective and sustainable.

Business Justification for Analysis

Most learning design organizations in higher education do not have the luxury of dedicating significant time and money to rigorous analysis efforts in pursuit of course improvements. One common strategy is to focus analysis efforts on specific programs that are underperforming in terms of enrollment, student outcomes, or other key success indicators. In these cases, their limited scope and strong strategic alignment justify the level of effort required for a proper analysis with lengthy data gathering and subsequent improvement. Conversely, the bulk of courses within a normal revision cycle may not justify such an investment. More traditional production tasks (e.g., content authoring or multimedia development) may then be prioritized, with analysis playing a smaller role. However, as we discuss later in this chapter, learning designers can reassert the role of analysis by demonstrating: (a) that there is clear alignment with program priorities and an articulation of expected benefits and (b) that potential increased efficiencies and efficacy as a result of this upfront work outweigh the investment in analysis. A well-defined model such as ADM can produce clear documentation that satisfies both of these conditions and can help gain buy-in with stakeholders.

Pedagogical Justification for Analysis

Analysis has always been central to an ID’s toolkit when making decisions that impact pedagogy. Many design models indicate that analysis is the first step an ID must complete before beginning a design. Indeed, the ‘A’ in ADDIE, the foundational framework that IDs learn early in their training, stands for analysis. It is essential that an ID spends time understanding the instructional problem that the ensuing design must address. In practice, this process is often not informed by empirical evidence. However, Muljana and Luo (2020) indicate that while understanding and adoption of learning analytics are currently low, IDs largely view its potential as positive. Integrating analysis and data-informed practices into instructional design may be more approachable than current adoption of such practices suggests.

Learning Analytics, Data, and Institutional Maturity

Learning analytics (LA) is concerned with “the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (Siemens et al., 2011, p. 4). LA is a powerful capability, and ADM is bolstered by having at least limited LA infrastructure in place. There are many approaches to building LA maturity at an institution. Developing an institution-wide strategy to implement LA (as described by the modified ROMA approach in Ferguson et al., 2014) may require a broad understanding of policy and stakeholder priorities and a deftness with change management. However, the iterative nature of ADM suggests a more incremental approach toward LA maturity.

Some of the barriers to change that learning design leaders may face in establishing a data-informed decision-making culture come from ethics and privacy concerns over the potential for improper use of student data. ADM helps mitigate this risk by prioritizing transparency and validation with faculty and IDs as an accountability measure on any and all data interpretations. Involving policymakers throughout this incremental maturation can also ensure that institutional policy adapts and grows in response to the needs of LA initiatives.

Risk can be further managed by having a well-defined project with clear objectives. This will assist in identifying what types of data may be required to support the analysis. Indeed, many institutions engage in data risk classification that assigns risk levels to specific data types. Determining the data needed to complete an ADM implementation at the outset can ensure that IDs do not later find themselves unable to obtain the access they need. Table 1 can help in understanding the types of data that inform an ADM process and the associated risks.

Table 1

Possible Data Sources and Associated Risk Categories

Data Source | Considerations | Risk Category*
---|---|---
Quality Evaluation | Utilizes rubrics provided by Quality Matters and others. Deals with course content, not student data. Labor intensive. | Low
Student Surveys | Uncovers student perceptions. Useful when combined with other data. May not get high response rates. Requires valid survey instruments. | High
Instructor observations | Qualitative. Collected through interviews with the instructor. Can be helpful during the validation phase. | Medium
LMS usage logs | Can be structured or unstructured (see below). May require significant data cleanup. Requires statistical techniques to properly interpret. | High
Student performance (grades) | Highly sensitive. Useful when correlated with other sources. | High
Student information system records | Highly sensitive. May be difficult to obtain. May not be necessary depending on guiding questions. | High

Note. * Each institution may classify risks differently. Consult your local data policies.
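One lightweight way to act on a table like this during planning is to keep a simple, machine-readable inventory of the data sources a project expects to use, so access requests and risk reviews can be raised before analysis begins. The sketch below is an illustrative assumption about how such an inventory might be recorded; the risk labels mirror Table 1 and should be replaced with your institution's own classifications.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    risk: str           # classification from your institutional data policy
    access_needed: str  # system or office that must grant access

# Planned sources for one ADM iteration (values are illustrative).
planned_sources = [
    DataSource("Quality Matters pre-review", risk="Low", access_needed="Design unit"),
    DataSource("Student survey", risk="High", access_needed="Survey office / IRB"),
    DataSource("LMS usage logs", risk="High", access_needed="LMS administrators"),
    DataSource("Assignment grades", risk="High", access_needed="Registrar / LMS"),
]

# Surface the high-risk requests early so they do not stall the analysis phase later.
for source in (s for s in planned_sources if s.risk == "High"):
    print(f"Request access: {source.name} (via {source.access_needed})")
```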

Analytical Design Model: In Detail

This section articulates each phase of the Analytical Design Model in detail and provides an example to show how each of these phases works in practice.

Plan

The ADM begins with a robust planning phase. During this time, course and program faculty, IDs, and other relevant stakeholders meet to initiate the project, develop guiding questions, and define goals and expectations. Higher education is a complex system (Chow, 2013) and online course offerings inherently face highly contextualized opportunities, needs, and constraints. This early attention to setting a shared vision establishes the foundation for a revision process tailored to best meet the unique needs of courses and programs.

It is important to note that the driving force for initiating this process is course, program, and institution specific. In some cases, identified issues with student performance, retention, or engagement determine the need for course improvement. In other instances, courses or programs choose to revise on an established schedule or in a pre-defined order based on course sequences in a program. Additionally, broad institutional goals, such as reducing materials costs or increasing retention, and external influences, such as changes in state or federal legislation, necessitate course revisions.

During the initial project meeting, important topics to discuss include:

Once the group has committed to using the process, establish a set of guiding questions for the project. These initial questions may be based on specific program needs, anecdotal observations or areas of interest, or specific challenges. For example, when evaluating a three-course sequence that was flagged for revision due to low student performance, guiding questions focused on understanding how well the content coverage was aligned across the course sequence, how well students demonstrated mastery of key concepts in each class, and how students interacted with the course materials and each other. By the end of the planning phase, the group should have clearly documented guiding questions and goals.

Analyze

The analysis phase requires determining which data sources address the guiding questions, analyzing those sources, and making observations based on the findings. When preparing for analyses, consider the available data sources and develop a plan to address the guiding questions and goals. As outlined in Table 1, a variety of different qualitative and quantitative data may inform an ID’s understanding of phenomena in the course. Triangulating data across multiple sources (and semesters when possible) develops a richer, more comprehensive, and more meaningful view of the current state of the course. Quantitative analysis may include behavioral analytics, such as click-stream data and access reports, and performance analytics, such as assessment scores by item and category, item analyses, and overall course grades. Qualitative data may be drawn from student surveys and feedback, instructor observations and reflections, and discussion forum and assignment content analysis. Other indicators of course quality may be derived from creating an alignment map of the course objectives, content, and assessments. An alignment map template has been provided. Our institution also utilizes the Quality Matters (QM) standards to evaluate online courses. A QM pre-review during the analysis phase provides another data source that allows us to identify areas of improvement to better meet the expectations outlined in the QM standards.

After outlining which data sources to use for each guiding question, determine how to analyze those sources. For example, when creating an alignment map of course objectives, content, and assessments, it may be beneficial to collaborate with the course instructor. Similarly, LMSs and third-party tools typically have built-in item analyses, grade reports, and other information that may be ready to use. For each of the guiding questions and data sources, decide what and how to measure. Then, conduct the analyses and capture observations in a reporting format that can be shared with the course development stakeholders. Based on the specific project, needs, and capabilities, the report may take the shape of a comprehensive document, an interactive dashboard (e.g., using tools such as Power BI or Tableau), or another format that works for the context. Quantifying data across a course and comparing performance, interaction, and alignment across lessons helps determine where to focus attention and resources. The ADM Example Implementation at the end of this chapter illustrates our approach to prioritization.
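As a small, hedged illustration of the kind of lesson-level summary such a report might contain, the fragment below aggregates a hypothetical item-level grade export into average scores per lesson and assignment category. The column names and pandas workflow are assumptions; most LMSs can produce a comparable export or an equivalent built-in report.

```python
import pandas as pd

# Hypothetical LMS grade export: one row per student per graded item.
# columns: student_id, lesson, category (quiz/discussion/problem_set), item, score, points_possible
grades = pd.read_csv("grade_export.csv")

# Convert raw scores to percentages.
grades["score_pct"] = 100 * grades["score"] / grades["points_possible"]

# Average performance by lesson, and by lesson and assignment category.
by_lesson = grades.groupby("lesson")["score_pct"].mean().round(2)
by_category = grades.groupby(["lesson", "category"])["score_pct"].mean().unstack().round(2)

print(by_lesson)
print(by_category)
```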

It is important to maintain a thorough description of the methodology, including data sources and how data were cleaned and analyzed. Include explanations for every interpretation made in service of answering the guiding questions. These decisions should be transparent and documented along the way, allowing for open dialogue and ensuring that stakeholders can confidently revisit the analysis and data sets to expand, refine, or revise as needed.

Each guiding question should be addressed with multiple pieces of data to make informed observations about the course. In parallel with crafting observations, explore existing academic literature related to the guiding questions to determine what kinds of interventions may be effective and draw comparisons to local circumstances and contexts. With these observations, revision prioritizations, and guidance from prior research, frame a set of conclusions for each guiding question. The conclusions may include recommendations or suggestions for future action steps, which can help inform the instructional design process moving forward. Because this process relies on human decision-making, it is important to revisit data sources, look for specific information to help support observations, and revise and refine the questions, goals, and targets as needed while creating the report.

Validate

During the validation phase, regroup with all stakeholders to review the methodology, discuss observations and revision prioritization, and refine reporting and suggestions as necessary. Ultimately, the validation process involves honest conversation that asks stakeholders, “Could this mean what we think it means?” and invites feedback in service of improving the findings prior to acting on them. For example, during an analysis of an upper-level management course, we noted that students frequently paused video presentations at similar intervals, which we hypothesized reflected their note-taking behavior. During the validation meeting, the faculty member confirmed that the videos in question were content-dense and challenging. These valuable insights were carried forward into the redesign to inform how we presented content to learners.

Design and Develop

The steps of the ADM up to this point prepare IDs and course authors to enter course revision armed with a set of contextualized, validated, and well-supported recommendations upon which to act. While the data-informed nature of ADM is more intensive than analyses commonly conducted in higher education contexts, the ADM outcomes can serve to fill the role of a thorough needs assessment as called for by many traditional instructional design models (Branch & Kopcha, 2014; Dick et al., 2015; Morrison et al., 2019). To best engage with the ADM outcomes, the course design team should progress through stages of ideation, prioritization, design, and development. Any instructional design process naturally involves relying on a degree of design conjecture as IDs interpret the analysis and integrate their expertise and observations to make decisions (Stefaniak et al., 2018). By using results from the ADM and ideating around identified needs and then prioritizing those ideas, course design teams can more intentionally focus their resources and efforts.

Evaluate

During implementation, ongoing evaluation should monitor the progress of revisions to best support student success. Establish expectations for continuous communication with the course team, determine the scope of ongoing iterations, and decide which data sources to use over time to monitor course progress. Supporting iterative improvement may mean revisiting part or all of the ADM to revise questions and goals, add new or expanded data, and create more timely reports, updates, or dashboards for ongoing use. Performing a data-informed revision for the primary project is valuable, but committing to and supporting ongoing analysis for continuous improvement will allow the team to increase revision efficiencies and focus efforts more proactively.

Analytical Design Model: Example Implementation

This example highlights three phases of an ADM implementation: planning, analysis, and design and development. In the real-life implementation on which this example is based, we also included validation and evaluation, but have left those out here for brevity. Our model is not prescriptive, and each approach is highly contextual; you should adapt the implementation to your own circumstances. The statistics and visualizations are specific to the data available to us but are general enough to align with what most institutions can accomplish with locally available data. Most importantly, this example should give you a clearer picture of the type of effort involved in considering ADM for your own organization. We have used fictitious organizations throughout the example.

Phase 1: Planning

This section outlines Forest State University (FSU) Online’s current progress toward producing the deliverables necessary to achieve the goals established at the outset of the project. A proposed timeline for the next steps is included below.

Goals

Deliverables

The FSU team will deliver:

Timeline

Project Team

The following individuals are identified as stakeholders on the project team:

Guiding Questions

Phase 2: Analysis

Methodology

This report presents the findings and recommendations from a multi-semester exploration of course alignment and student behaviors and performance in BUS 101, 102, and 103. This exploration is designed to inform revisions of BUS 101 and 102 to ensure students are better prepared for success in BUS 103.

Student performance and content interaction data were collected during the Spring 2019 and Summer 2019 semesters. During early Fall 2019, FSU Online met with faculty for each course to develop a comprehensive course alignment map and to collect LearnMore’s LearnLab performance data.

The following sections address the study’s four guiding questions, outline overall observations, and detail suggested next steps to inform the revision process.

Revision Prioritization Score

The Revision Prioritization Score (RPS) provides an objective, quantifiable metric to guide revision efforts based on alignment, performance, and interaction. These three factors can be weighted to reflect goals of the program and revision efforts. For the purposes of this review, categories were weighted evenly. The RPS and detailed descriptions of the measurements for alignment, performance, and interaction are provided below.

Revision Prioritization Score Formula

[Formula image: Revision Prioritization Score]
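The published chapter presents this formula as an image that is not reproduced here. A reconstruction consistent with the evenly weighted description above and with the composite values reported in Table 5 (an interpretation, not the authors' exact notation) is:

\[
\text{RPS} = 100 - \left( w_A \cdot \text{Alignment} + w_I \cdot \text{Interaction} + w_P \cdot \text{Performance} \right),
\qquad w_A = w_I = w_P = \tfrac{1}{3},
\]

where each component is expressed as a percentage and a higher RPS indicates a higher revision priority.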

Alignment

Alignment is defined as the extent to which the course and lesson learning objectives, lesson content, activities, and assessments work together toward the achievement of the stated objectives. Alignment was evaluated in direct consultation with faculty for each course. The alignment score is calculated as follows:

[Formula image: Alignment score]

Objective Score = 1 point if objective is covered in content + 1 point if objective is assessed
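The alignment formula itself appears as an image in the published chapter. Given the objective scoring rule above, one plausible reading (an assumption, not confirmed by the source) is that a lesson's alignment percentage is the share of available objective points earned:

\[
\text{Alignment} = \frac{\sum_{o \in \text{objectives}} \text{Objective Score}_o}{2 \times \text{number of objectives}} \times 100,
\]

so a lesson in which every objective is both covered in content and assessed scores 100%.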

Performance

Assignment grades were collected from LMS data. Scores were converted to percentages and averaged for each lesson to create a lesson Performance Score.

Interaction

Content for BUS 101 and 102 has been developed in the FSU Online CMS. This system tracks when students access course content, how they interact with content, and the extent to which they consume media.

Additionally, BUS 101 and 102 use a LearnMore textbook that is equipped with LearnLab, an interactive and adaptive reading tool that integrates guided, distributed practice problems and active learning strategies to improve student learning. LearnLab provides instructors with information about the length of time students spend in the system, as well as their degree of completion for the chapter.

Information from both the FSU and LearnLab systems was combined as follows to calculate the interaction score:

[Formula image: Interaction score]
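The combination formula is likewise an image in the published chapter. A plausible reconstruction, consistent with the separate FSU and LearnLab columns and the composite values in Table 5 (an interpretation, not the authors' exact notation), is a simple average of the two systems' percentages:

\[
\text{Interaction} = \frac{\text{FSU content interaction (\%)} + \text{LearnLab completion (\%)}}{2},
\]

with lessons that have no LearnLab component (e.g., Lesson 1 in Table 5) presumably using the FSU value alone.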

Observations

Though not formally accounted for in the RPS, observations play an important role in determining how to prioritize revisions moving forward. During the alignment mapping process, faculty identified which lessons in BUS 101 and 102 are prerequisites for BUS 103. These lessons should be given special consideration when deciding where to focus revision efforts to have maximum impact on students’ preparedness for BUS 103. Additionally, faculty identified lessons in which content is outdated or misaligned. These lessons should be revisited during revision regardless of RPS to ensure that content is current and accurate.

Detailed Course Analyses

Analysis Part 1: Alignment

Using the LearnMore textbook chapter objectives as a foundation, BUS 101 and 102 cover a total of 204 learning objectives. Of those, 47 objectives are not discussed in the FSU content but are assessed in the course, 12 are covered by the FSU content but are not assessed, and 8 are neither covered by the FSU content nor assessed. These objectives that lack content, assessments, or both should be reviewed to determine whether they are within the scope of the course or should be removed. If they are deemed within scope, the lesson materials should be reviewed to determine if/what types of content and/or assessments may be added or adjusted to improve alignment.

Table 2

Lessons 9 and 10 Objective Coverage

[Table 2 image: Lessons 9 and 10 objective coverage]

Note. Blue – Objective covered in FSU content but not assessed; Yellow – Objective not covered in FSU content but assessed; Gray – Objective not covered in FSU content and not assessed.

Table 2 shows a sample lesson objective alignment map, which indicates whether and how an objective is covered in content and assessed. This table is drawn from the data gathered in the alignment mapping spreadsheet, which is shown in Table 3.

Table 3

BUS 101 Lesson 9 Alignment Map

Course Objectives | LO | Lesson Objectives
---|---|---
1, 2 | 1 | Recognize the ethical quandaries in a business situation and recommend actions to address such issues.
2 | 2 | Identify legal issues germane to many business situations and assess the relationship between strategic decision making and regulatory requirements.
2 | 3 | Recognize the social impacts of business decisions and suggest appropriate sustainable practices.
2 | 4 | Identify key challenges of globalization in business operations.
2 | 5 | Exhibit knowledge of the major cultural, economic, social, and legal environment faced by multinational corporations.

Content Pages | LO | Practice/Activities | LO | Assessments | LO | Tools Used in this Lesson
---|---|---|---|---|---|---
Overview of Regulation | 2 | LearnLab | 3, 8 | Lesson 9 Discussion | 3, 8 | LearnLab
Business in Society | 3 | | | Lesson 9 Problems | 1, 2, 3 | Excel
Key Legislation in Business | 2 | | | Lesson 9 Quiz | 3, 7, 8, 9, 10 |
Operating at a Global Scale | 2, 4 | | | Lesson 9 Excel | 3, 7, 8, 10 |
Multinational Corporations | 5 | | | | |

Note. In addition to using these maps to better understand alignment issues, FSU uses the course alignment maps as an input for Quality Matters course pre-review. By including information in the alignment map about technology tools used in the lesson, as well as connections back to the course-level objectives, this sheet is able to meet several needs.

Analysis Part 2: Performance

Student performance on assessments in BUS 101, 102, and 103 was evaluated by lesson, topic, and assignment category to help identify trends and issues from a variety of angles. Figure 2 below shows the range of student average scores on assignments for BUS 101 Lesson 9.

Figure 2

Lesson 9 Assignment Performance

[Figure 2 image: Lesson 9 assignment performance]

Analysis Part 3: Content Interaction

To better understand how students interacted with course content, we evaluated how many times students viewed lesson pages throughout the semester (see Figure 3), how many students accessed the course content for each lesson (see Figure 4), and how students engaged with multimedia and third-party tools (see Table 4).
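As a minimal sketch of how such interaction metrics might be derived from raw page-view logs, the fragment below counts daily page views and the number of unique students who opened each lesson. The log format, column names, and the assumed enrollment figure are illustrative assumptions; many CMS and LMS platforms expose equivalent numbers through built-in reports.

```python
import pandas as pd

# Hypothetical CMS page-view log: one row per page view.
# columns: student_id, lesson, page, timestamp
views = pd.read_csv("page_view_log.csv", parse_dates=["timestamp"])

# Page views per day across the semester (the kind of series shown in Figure 3).
views_by_date = views.groupby(views["timestamp"].dt.date).size()

# How many students opened each lesson at least once (as in Figure 4).
students_per_lesson = views.groupby("lesson")["student_id"].nunique()

# Percentage of the section that accessed each lesson (enrollment is an assumed constant here).
enrolled = 50
access_rate = (100 * students_per_lesson / enrolled).round(1)

print(views_by_date.head())
print(access_rate)
```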

Figure 3

Course Content Page Views by Date, Spring 2019 BUS 101

[Figure 3 image: Course content page views by date]

Figure 4

Number of Enrolled Students Who Viewed Lesson Content, by Lesson, Summer 2019 BUS 102 001

[Figure 4 image: Number of enrolled students who viewed lesson content, by lesson]

Table 4

Content, Multimedia, and Third-Party Tool Interaction, BUS 102 Lesson 9

Accessed FSU course content: 52%
Played FSU lesson media: 50%
Used interactive FSU content: 50%
Completed LL LearnLab: 36%
Avg. time spent on LearnLab by due date: 52 min.

Analysis Summary and Findings

Upon completing the analysis of lesson alignment, content interaction, and performance, a revision prioritization score was calculated for each lesson to help focus attention on those parts of the course during design and development (see Table 5).

Table 5

BUS 102 Course Revision Prioritization Snapshot

Lesson | Alignment (%) | Content Interaction: FSU (%) | Content Interaction: LearnLab (%) | Performance (%) | Revision Prioritization Score
---|---|---|---|---|---
Lesson 1 | 78.6 | 95.3 | N/A | 86.13 | 13.23
Lesson 2* | 100 | 73.9 | 46 | 86.82 | 17.74
Lesson 3* | 92.9 | 81 | 43 | 88.95 | 18.72
Lesson 4 & 5* | 92.9 | 50.8 | 43 | 83.06 | 25.71
Lesson 6 | 86.4 | 81 | 36 | 83.62 | 24.43
Lesson 7 & 8 | 90 | 44.5 | 43 | 88.22 | 26.01
Lesson 9 & 10 | 83.3 | 52 | 36 | 81.66 | 30.35
Lesson 11 | 83.3 | 57.4 | 36 | 87.22 | 27.59
Lesson 12 & 13* | 94.4 | 46.3 | 36 | 89.17 | 25.09
Lesson 14* | 82.1 | 57 | 21 | 87.53 | 30.46
Lesson 15* | 88.9 | 45.4 | 25 | 82.49 | 31.17
Midterm | N/A | N/A | N/A | 74.5 | N/A
Final | N/A | N/A | N/A | 76.54 | N/A
Composite | 88.44 | 62.24 | 36.5 | 80.71** | 27.16

Note. Key for Revision Prioritization Score ranges: 0-20, 20-30, 30+. * Lesson topic is prerequisite to or continued in BUS 103. ** Weighted to reflect 45.9 exam weight.
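As a brief sketch of how this prioritization step might be automated, the fragment below computes a score for each lesson from its three component percentages and ranks lessons for revision. It assumes the evenly weighted combination described under the Revision Prioritization Score above (our interpretation of the formula, not a confirmed implementation), and the sample values are taken from Table 5.

```python
# Component scores per lesson from Table 5: (alignment, interaction, performance),
# where interaction averages the FSU and LearnLab percentages.
lessons = {
    "Lesson 2":      (100.0, (73.9 + 46) / 2, 86.82),
    "Lesson 3":      (92.9,  (81.0 + 43) / 2, 88.95),
    "Lesson 9 & 10": (83.3,  (52.0 + 36) / 2, 81.66),
}

def rps(alignment: float, interaction: float, performance: float) -> float:
    """Higher score = higher revision priority (assumed evenly weighted combination)."""
    return 100 - (alignment + interaction + performance) / 3

# Rank lessons from highest to lowest revision priority.
ranked = sorted(lessons.items(), key=lambda kv: rps(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: RPS = {rps(*scores):.2f}")
```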

Phase 3: Design and Development

Ideation and Prioritization

Based on the insights from Table 5 and the analysis phase, the course design team meets to brainstorm ideas for course improvement and then to prioritize those ideas into concrete revision targets.

BUS 102 Revision Targets: Lesson 9 & 10


Conclusion

Developing a comprehensive approach to data-informed course improvement takes time, and course, program, and institutional circumstances may pose a series of challenges when working towards implementing the ADM. Ideally, forming a team that includes individuals with broad skill sets in instructional design, learning analytics, educational research, and programming supports the use of a larger toolbox of strategies and approaches that a design team can leverage. However, this ideal state of support for the ADM also implies having well-established resources, institutional culture, and data infrastructure. For IDs in a small unit or who face limited opportunities to collaborate, the ADM is flexible enough to start with available resources. For example, IDs who do not have access to the data infrastructure or sharing necessary to get detailed or customized LMS data can use existing reports generated by their LMS and multimedia systems to better understand learner performance and frequencies of behaviors. Additionally, IDs tasked with faculty development can support skill building with course instructors and authors to facilitate more data-informed course design decisions from the faculty perspective. With these challenges in mind, IDs looking to integrate the ADM into their professional practice should plan to iterate towards more complex implementations.

References

Allen, I., & Seaman, J. (2015). Grade level: Tracking online education in the United States. Babson Survey Research Group. https://edtechbooks.org/-djFo

Branch, R. M. (2009). Instructional design: The ADDIE approach. Springer.

Branch, R. M., & Kopcha, T. J. (2014). Instructional design models. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational communications and technology (pp. 77-87). Springer.

Bryk, A. S., Gomez, L., Grunow, A., & LeMahieu, P. (2015). Learning to improve: How America’s schools can get better at getting better. Harvard Education Publishing.

Caliper Analytics (2020). IMS Global Learning Consortium. https://edtechbooks.org/-gAbN

Chow, A. S. (2013). One educational technology colleague's journey from dotcom leadership to university E-learning systems leadership: Merging design principles, systemic change and leadership thinking. TechTrends, 57(5), 64-73. https://edtechbooks.org/-CAKn

Cohen, L., & Manion, L. (1989). Research methods in education (3rd ed.). Routledge.

Dick, W., Carey, L., & Carey, J. O. (2015). The systematic design of instruction (8th ed.). Pearson.

Ferguson, R., Macfadyen, L., Clow, D., Tynan, B., Alexander, S., & Dawson, S. (2014). Setting learning analytics in context: Overcoming the barriers to large-scale adoption. Journal of Learning Analytics, 1(3), 120-144. https://edtechbooks.org/-qzkZ

Fishman, R., Ezeugo, E., & Nguyen, S. (2018). Varying degrees 2018: New America’s survey on higher education. New America. https://edtechbooks.org/-RMNz

Morrison, G. R., Ross, S. J., Morrison, J. R., & Kalman, H. K. (2019). Designing effective instruction. John Wiley & Sons.

Pardo, A., & Kloos, C. D. (2011, February). Stepping out of the box: Towards analytics outside the learning management system. In Proceedings of the 1st International Conference on Learning Analytics and Knowledge (pp. 163-167). https://edtechbooks.org/-PFj

Siemens, G., Gasevic, D., Haythornthwaite, C., Dawson, S., Shum, S., Ferguson, R., Duval, E., Verbert, K., & Baker, R. (2011). Open learning analytics: An integrated & modularized platform. Society for Learning Analytics Research. https://edtechbooks.org/-eyIqpdf

Stefaniak, J., Baaki, J., Hoard, B., & Stapleton, L. (2018). The influence of perceived constraints during needs assessment on design conjecture. Journal of Computing in Higher Education, 30(1), 55-71. https://edtechbooks.org/-GsXo
