Developing a Quality Assurance Approach for an Online Professional Military Education Institution

DOI: 10.59668/377.8134

Keywords: Instructional Design, Higher Education, Quality Assurance, Course Quality, Course Evaluation, Professional Military Education
With the increasing demand for online learning, higher education institutions are heightening their focus on assuring online course quality (Allen & Seaman, 2015). However, they lack consensus on what constitutes assuring quality in online courses (Vlachopoulos, 2016), which is challenging for institutions seeking to develop quality assurance approaches. This paper describes how a specific institution, the US Air Force’s eSchool of Graduate Professional Military Education (eSchool), developed and implemented an evaluative instrument to assure course design quality within its unique context. This example provides a valuable perspective for those developing quality assurance processes and resources for their online programs.

Introduction

Online learning has grown exponentially over the past several decades, with higher education institutions dedicating significant resources to expand and promote their online learning options (Allen & Seaman, 2015). The COVID-19 pandemic has further intensified the need for online learning options that provide similar quality experiences for remote and on-campus students (Means & Neisler, 2021). As the demand for online learning opportunities increases, higher education institutions are heightening their focus on assuring online course quality, which has been linked to student satisfaction, engagement, and achievement of outcomes (Murray et al., 2012). However, there is a lack of consensus on what constitutes “quality” and “assuring quality” in online courses (Vlachopoulos, 2016), which can be challenging for institutions seeking to develop a quality assurance approach for their online courses. This paper presents a detailed example of how a specific institution, the US Air Force’s eSchool of Graduate Professional Military Education (eSchool), developed and implemented an approach to assure course design quality within its distinct institutional context using a Course Design Quality Checklist (CDQC). This example provides a valuable perspective for institutions seeking to develop quality assurance processes and resources for their online programs, particularly those with unique characteristics and needs.

First, the article describes the eSchool’s institutional context to provide background for the problem being addressed, namely the need to improve course design quality and devise an approach for quality assurance. Next, the process for addressing this problem is described, including the methodology for developing the CDQC and an overview of its content. Finally, the article reviews some of the lessons from this experience that can help inform other institutions seeking to develop a quality assurance approach to improve the quality of their online courses.

Background and Introduction to the Problem

Established in 2016, the eSchool is a fully online, graduate-level institution that provides Officer Professional Military Education (OPME) courses for thousands of airmen across the globe. As of 2022, the eSchool is responsible for over 90 online courses that reside within four fully online programs, one of which is a fully accredited master’s program. While all the courses in these programs are asynchronous, the majority are self-paced and the rest are facilitated by adjunct instructors.

Prior to the eSchool’s establishment, distance learning (DL) OPME programs resided within their respective resident institutions (e.g., Air War College DL resided in the Air War College resident college). Upon opening, the eSchool inherited many of the existing online courses from the DL programs. These courses offered opportunities for improvement across multiple areas of course design, including instructional alignment, learner engagement, and the use of multimedia and technology.

Solution: Develop a Quality Assurance Approach

Designing, maintaining, and revising asynchronous online courses requires considerable instructional design expertise because all of the content must be fully developed, functional, and available online at all times (Grant, 2021). Thus, the eSchool has a dedicated instructional design and development department, whose members work collaboratively with faculty members to develop and optimize courses for an online delivery format. As an instructional designer in this department, I spearheaded this project with one of my colleagues.

My colleague and I had long been aware of the areas for improvement in many of the school’s courses. Furthermore, we knew that failing to apply quality standards to online course design can negatively affect student engagement, learning, and performance outcomes (Baldwin, Ching, & Hsu, 2018; Parscale et al., 2015). However, it was challenging to improve these areas without a set of quality design standards and a formal quality assurance process to implement them. Thus, we began devising an approach to help address these issues within the parameters of our institutional context.

Determining the Purpose

In accordance with the principles of backwards design, we decided to begin with the end in mind (Wiggins & McTighe, 2005). Therefore, before planning our specific approach, we needed to establish the institution’s overall goals and purpose for assuring quality in its courses. We would then determine what types of activities and products to develop to achieve the purpose.

To help determine the purpose, we identified gaps between the current and desired states of our online course quality (see Table 1). This exercise forced us to narrow down and clearly articulate what the main existing problems were and what specific outcomes we would like to achieve. First, we selected the three most prominent aspects of our current state that we wanted to address with a quality assurance approach. Then, we envisioned our desired states for each of these aspects. Finally, we examined the differences between the current and desired states to identify gaps and determine our purpose. Ultimately, this process indicated we needed a dual-purpose approach that could both evaluate existing courses and integrate design best practices into new course design efforts by: (a) providing standards and processes for evaluating and improving existing courses, and (b) providing standards and processes for designing and developing new courses.

In both cases, the goal was to improve course quality, which would help cultivate a culture of continuous improvement.

Table 1

Identifying Gaps between Current and Desired State: eSchool Example

Current State | Desired State
Opportunities for improvement across multiple areas of course design. | All courses meet course design quality standards.
Potential for student engagement not fully realized, due to greater percentage of passive vs. active learning elements. | Increased student engagement in courses that meet course design quality standards.
Need for products and processes to assure quality of course design. | Institution implements processes and products that cultivate culture of continuous improvement.

Operationalizing the Purpose

Next, we created an action-oriented plan for addressing the identified gaps and achieving the institution’s purpose. An effective way to do this was to create a logic model (see Table 2). Logic models help to visualize the processes and resources required to move from the current state to the desired state (Loberti & Dewsbury, 2018). We started by listing statements describing the desired state in the Long-Term Outcomes column. Next, we determined what activities would be conducted to achieve the desired state and what their outputs, or products, would be. This was an iterative process, in which we brainstormed numerous potential activities and outputs and then narrowed them to a final few. Once our activities and outputs were finalized, we listed the inputs, or resources, that would be needed to conduct the activities and produce the outputs. We completed the model by listing short and intermediate-term outcomes leading up to the long-term outcomes.

We used the completed logic model as a guide to develop more detailed plans for each activity, including instructions and schedules. This paper focuses on our efforts for the first activity: developing an evaluative instrument to inform new course design and to evaluate and improve existing course design quality, along with its associated outputs and outcomes.

Table 2

Example Logic Model

Inputs (What resources do you need to conduct the activities and achieve the desired outcomes?)
- Instructional design professionals (IDs)
- Faculty members
- Institutional leadership
- Existing courses
- Time (work hours)
- Collaborative communication tools (e.g., email, collaborative notebook, video conferencing)

Activities (What activities will you conduct to achieve the outcomes?)
- Develop evaluative instrument to inform design of new courses and measure and improve design quality of existing courses
- Develop and implement process for applying evaluative instrument to evaluate existing courses

Outputs (What products will the activities produce?)
- Evaluative instrument for informing design of new courses and measuring and improving design quality of existing courses
- Process for applying evaluative instrument to existing courses
- Results from applying evaluative instrument to existing courses

Short-Term Outcomes (What immediate changes will occur?)
- IDs start using evaluative instrument to inform design of new courses
- IDs use evaluative instrument to evaluate an increasing number of existing courses
- IDs work with faculty to revise existing courses based on evaluation results

Intermediate Outcomes (What mid-term changes will occur that lead up to long-term changes?)
- Institutional adoption of evaluative instrument to design new courses
- IDs continue evaluating existing courses and work with faculty to revise existing courses on a continual basis
- Positive student feedback regarding experiences in courses that meet course design quality standards

Long-Term Outcomes (What changes do you hope will occur and be sustained in the long-term?)
- All courses meet course design quality standards
- Increased student engagement in courses that meet course design quality standards
- Institution implements processes and products that cultivate culture of continuous improvement

As the first step towards achieving these outcomes, we decided to either adopt or create an instrument with criteria that would both inform the design of new courses and evaluate the quality of our existing courses. Initially, we considered adopting a prebuilt evaluative instrument. However, after examining numerous instruments from various academic institutions and professional organizations, we determined that no single prebuilt instrument met all of our specific needs. Instead, we needed a tool more tailored to our unique context, which ultimately led us to create our own CDQC.

Methodology

To develop our instrument, we used a methodology similar to McGahan et al. (2015), whose case study from the University of Nebraska at Kearney (UNK) “[provides] a roadmap for institutions that are developing an [evaluation] instrument of their own” (p. 126). UNK developed a custom online course quality checklist after determining that none of the prebuilt evaluation instruments they had reviewed met their specific needs. Instead, they used the prebuilt instruments, grounded in research-based course design principles, to inform the development of evaluative categories and criteria for the UNK checklist.

Prebuilt Instruments Overview

We also used prebuilt evaluative instruments to inform the development of our tailored checklist. We selected these instruments based on five criteria established by Baldwin, Ching, and Hsu (2018), who reviewed a series of national and statewide online course evaluation instruments. According to their criteria, a qualifying instrument needed to:

  1. Evaluate design of higher education online courses,
  2. Support student success,
  3. Have national or statewide influence,
  4. Be published after 2006, and
  5. Be currently in use.

Using these criteria, we identified six qualifying instruments. While each of these instruments made valuable contributions to our final checklist, none was sufficient by itself. Instead, we analyzed, synthesized, and tailored criteria from each of the six instruments to develop our own checklist.

Quality Matters Higher Ed. Course Design Rubric Standards, 6th ed

The QM Rubric is widely used across higher education institutions to help design and evaluate online and hybrid courses. It is comprehensive, with eight sections comprising 42 research-based standards, each with extensive annotations. The rubric emphasizes instructional alignment throughout all of its sections (Quality Matters, 2022). While the QM Rubric is renowned and emphasizes instructional alignment, it ultimately did not meet our needs. First, QM requires a paid subscription to access the annotated standards, which was beyond our budgetary constraints. Second, the rubric focuses heavily on instructor-facilitated interactions, whereas most of our courses are self-paced.

Open SUNY Center for Online Teaching Excellence Quality Review Scorecard

The Open SUNY Center for Online Teaching Excellence Quality Review Scorecard (OSCQR) rubric was developed to help faculty members design high-quality online courses. Its six sections, comprising 50 standards, emphasize course design and accessibility elements (Online Learning Consortium, 2022). Although the OSCQR focuses on course design and is openly licensed and customizable, it did not meet our needs. Primarily, the OSCQR places a higher emphasis on technical and logistical course design elements than we wanted. Additionally, it infuses accessibility criteria throughout the rubric, whereas we wanted to address accessibility in a single section.

Illinois Online Network’s Quality Online Course Initiative Rubric

The primary purpose of the Illinois Online Network’s Quality Online Course Initiative (QOCI) rubric is to “help colleges and universities to improve accountability of their online courses” (University of Illinois Springfield, 2022) by providing course design guidelines and evaluative criteria. The tool is comprehensive, with seven sections and 97 criteria that cover all aspects of course design, and it is available as both a full rubric and a condensed checklist (University of Illinois Springfield, 2022). However, it contains numerous criteria that are too specific and do not apply to our institution. For example, the QOCI includes a criterion for an assignment due date calendar. Because eSchool courses are developed from Canvas Blueprints, we do not set specific due dates in Canvas (e.g., the course instructions state that an assignment is due on Thursday of Week 3, but do not provide the calendar date). Thus, we would not incorporate a criterion for an assignment calendar.

California Virtual Campus-Online Education Initiative Online Course Design Rubric

The California Virtual Campus-Online Education Initiative (CVC-OEI) Online Course Design Rubric was developed to help instructional designers and faculty members design and evaluate online courses. In particular, it is intended for faculty members to use when peer-reviewing courses. Almost 70% of its 44 criteria, which are divided into four sections, focus on content presentation and accessibility (California Virtual Campus Online Network of Educators, 2022). The rubric provides clear, detailed descriptions for each criterion. However, the criteria emphasize the technical and logistical elements of course design more than instructional alignment, which was a greater focus for us. Additionally, the CVC-OEI rubric includes numerous criteria that apply to instructor-facilitated courses, whereas the majority of eSchool courses are self-paced.

Canvas Course Evaluation Checklist

The purpose of the Canvas Course Evaluation Checklist is to help educators with varying levels of course design experience develop courses in the Canvas LMS. The checklist, which has four sections and 39 criteria, is not intended to be comprehensive. Rather, it is a “starting point for institutions to make a copy and customize it to meet their individualized needs” (Instructure, 2022). For that reason, we did not consider adopting the Canvas checklist wholesale for our institution; instead, we incorporated some of its more useful criteria into our final checklist.

California State University Quality Learning and Teaching Evaluation Rubric, 2nd ed.

The Quality Learning and Teaching (QLT) rubric was developed to help faculty members design and deliver online courses. It has 10 sections with 57 criteria and places a heavy emphasis on instructor roles and responsibilities (California State University, 2022). While the QLT rubric is thorough and provides examples for each criterion, it did not meet our needs. In particular, many of the criteria evaluate instructor performance, for example, “The instructor provides information about being a successful learner/student” (California State University, 2022). However, because most of the eSchool’s courses are self-paced and do not have instructors, these criteria would not apply. Furthermore, even for the school’s instructor-facilitated courses, we wanted to use the checklist for evaluating course design, not instructor performance.

Qualitative Analysis

We then used a qualitative analysis approach to analyze and synthesize criteria from the six instruments (Thomas, 2006). First, we reviewed the evaluative categories and specific criteria for each instrument, noting similarities and differences. Next, we imported them into NVivo, a qualitative analysis software package, where we coded them to identify common concepts and themes. Rather than starting with a predetermined set of codes, we created codes inductively as we read through the criteria. We organized the resulting codes into seven categories: Layout and Organization, Instructional Content and Materials, Assessment, Accessibility and Usability, Communication and Interaction, Support, and Technology.
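For readers who prefer to manage this kind of coding outside of dedicated software, the sketch below illustrates the same bookkeeping in Python: pairing criteria with inductively assigned codes and rolling those codes up into broader categories. The criteria, codes, and mappings shown here are invented for illustration only; our actual analysis was conducted in NVivo.

```python
# Hypothetical sketch: tallying coded criteria into broader categories.
# The actual analysis was done in NVivo; the sample criteria and codes
# below are illustrative, not the eSchool's real data.
from collections import defaultdict

# Each tuple pairs a criterion (paraphrased) with an inductively assigned code.
coded_criteria = [
    ("Course navigation is logical and consistent", "navigation"),
    ("Learning objectives are measurable and aligned", "alignment"),
    ("Text alternatives are provided for images", "accessibility"),
    ("Grading policy is clearly stated", "grading"),
]

# Codes are then grouped into broader evaluative categories.
code_to_category = {
    "navigation": "Layout and Organization",
    "alignment": "Instructional Content and Materials",
    "accessibility": "Accessibility and Usability",
    "grading": "Assessment",
}

criteria_by_category = defaultdict(list)
for criterion, code in coded_criteria:
    criteria_by_category[code_to_category[code]].append(criterion)

for category, items in criteria_by_category.items():
    print(f"{category}: {len(items)} criterion/criteria")
```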

Determining Checklist Criteria

After completing the analysis, we engaged in several intensive rounds of selecting which codes and categories from our analysis to incorporate as criteria in our checklist. We also developed additional criteria for course design elements unique to our institution and, therefore, not addressed in any prebuilt instrument. To make these decisions, we considered the various facets of our unique institutional context, including characteristics related to the course delivery format and design elements.

We considered several characteristics related to our course delivery format (see Table 3). First, many of the criteria from the analysis addressed aspects of hybrid or flipped courses, as well as synchronous online activities. eSchool courses are all fully online and asynchronous; therefore, these criteria did not apply to our checklist. Second, many of the criteria focused on instructor roles and interactions. However, the majority of our courses are self-paced. Because the self-paced courses do not have instructor involvement, they support limited types of course interaction. Thus, our checklist criteria did not emphasize student-instructor interaction.

Table 3

Implications of Course Delivery Format Characteristics

Characteristic | Implication
Fully Online | Criteria needed to reflect a fully online institution, with no references to in-class activities or flipped-classroom/hybrid approaches.
Asynchronous | Criteria needed to focus on asynchronous interactions among students and between students and instructors, with no references to synchronous interactions and software requirements.
Mostly Self-Paced | Criteria needed to emphasize student-content and student-student interactions and include an optional section for student-instructor interactions.

We also considered characteristics related to our course design (see Table 4). All of our courses are designed, developed, and maintained by instructional designers and faculty members who are subject matter experts, not the individual course instructors. Thus, our courses have more standardized elements than institutions where individual course instructors design their own courses (Herron et al., 2012). Our checklist criteria needed to incorporate these standardized elements, such as the inclusion, layout, and organization of specific course pages. Our courses also have several elements that are not identical across courses but must follow the same parameters. For example, we have specific guidelines for writing course outcomes and course descriptions that needed to be included in the checklist criteria.

Table 4

Implications of Course Design Characteristics

Characteristic | Implication
Standardized Course Elements | Criteria needed to address course elements that were standardized across courses, such as the layout and organization of the course Home Page, Syllabus Page, and Lesson Pages.
Semi-Structured Course Elements | Criteria needed to address semi-structured course design elements, such as the Course and Lesson Outcomes, Course and Lesson Descriptions, and Narratives.

Determining Checklist Format

Most of the evaluative instruments that my colleague and I reviewed used scoring scales to evaluate criteria. However, we selected a simple checklist format for several reasons related to our unique institutional context. Foremost, while the eSchool has an academic mission, it is a military institution. Self-assessment checklists are an integral part of the Air Force Inspection System, which is familiar to many of our faculty members. Secondly, a checklist system providing only “Yes”, “No”, and “Not Applicable” (Y/N/NA) options for each criterion eliminates much of the subjectivity involved in ranking criteria on a numeric scale. A Y/N/NA selection immediately and clearly indicates whether a standard has been fully satisfied; partial satisfaction is considered a failure to meet that standard. The checklist allots substantial space for commentary in each section, allowing the reviewer to elaborate on why specific standards were not met and to provide recommendations for improvement. Overall, the format helps to objectively mediate discussions for planning the “way ahead” when discussing CDQC results with the faculty members who manage the courses, especially as they may have limited course design experience (Baldwin, Ching, & Friesen, 2018).
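As a minimal sketch of the Y/N/NA format described above, the following Python snippet shows one way a single checklist criterion and its commentary could be represented as structured data. The field names and example values are hypothetical; the CDQC itself is a document-based checklist rather than a database.

```python
# Hypothetical sketch of a single CDQC criterion record, assuming the
# Yes/No/Not Applicable format described above. Field names and values
# are illustrative only.
from dataclasses import dataclass
from typing import Literal

Rating = Literal["Y", "N", "NA"]

@dataclass
class CriterionResult:
    category: str     # e.g., "5. Assessment"
    criterion: str    # the standard being checked
    rating: Rating    # partial satisfaction is recorded as "N"
    comment: str = "" # why the standard was not met, plus recommendations

example = CriterionResult(
    category="5. Assessment",
    criterion="Assessments align with lesson objectives",
    rating="N",
    comment="Lesson 3 quiz does not map to a stated objective; recommend revising the items or the objective.",
)
```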

CDQC Overview

The CDQC has three sections: a course dashboard page, a course alignment page, and the checklist pages.

Dashboard Section

Because the checklist itself is lengthy, we needed an efficient way to convey the results to faculty members. To satisfy this need, we designed a highly visual Dashboard page that provides a snapshot of the checklist results, as well as additional contextual information, such as the amount of time students and instructors spend performing different types of activities in the course (see Figure 1). For example, we included a bar graph comparing the amount of time students spend in passive activities, such as reading or watching videos, versus active activities, such as discussions and projects. The graph clearly indicates imbalances between the amount of time students spend in passive versus active learning activities, which can help inform efforts to make the course more engaging for students. We also included an area for the reviewer to make recommendations on how to proceed. Overall, the Dashboard helps the reviewer to communicate the most essential information when discussing results with faculty members.

Figure 1

Dashboard Section

Hostetler-11-2-Fig1.png

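The passive versus active comparison shown on the Dashboard could be reproduced with a simple bar chart, as in the hedged sketch below. The activity labels, time estimates, and use of matplotlib are assumptions for illustration and do not reflect an actual eSchool course or the Dashboard's exact layout.

```python
# Hypothetical sketch of the passive vs. active time comparison, assuming
# per-activity student time estimates (in minutes) are available.
import matplotlib.pyplot as plt

# Estimated student time per activity type, summed across the course.
passive_minutes = {"Readings": 540, "Videos": 180}      # e.g., reading, watching
active_minutes = {"Discussions": 240, "Projects": 300}  # e.g., discussing, creating

totals = [sum(passive_minutes.values()), sum(active_minutes.values())]
labels = ["Passive", "Active"]

fig, ax = plt.subplots()
ax.bar(labels, totals)
ax.set_ylabel("Estimated student time (minutes)")
ax.set_title("Passive vs. active learning time")
plt.show()
```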

Course Alignment Section

Next, we included a section to record information relating to instructional alignment (see Figure 2). We created a table with a row for each lesson and columns for course learning outcomes, lesson objectives, and assessments. This provides an at-a-glance overview of the instructional alignment of all the course elements and helps to quickly identify misalignments. Additionally, by recording all of this information on a single page, we could reference it easily when completing the checklist.

Figure 2

Course Alignment Section 

Hostetler-11-2-Fig2.png
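To illustrate how the alignment table supports spotting misalignments, the sketch below represents each lesson's objectives and assessments as data and flags lessons whose objectives are unmapped to a course outcome or left unassessed. The lesson and outcome labels are hypothetical, not drawn from an actual eSchool course.

```python
# Hypothetical sketch of the alignment table as data. Labels are illustrative.
course_outcomes = {"CO1", "CO2", "CO3"}

lessons = [
    {"lesson": 1, "objectives": {"CO1"}, "assessments": {"CO1"}},
    {"lesson": 2, "objectives": {"CO2"}, "assessments": set()},   # no aligned assessment
    {"lesson": 3, "objectives": {"CO4"}, "assessments": {"CO4"}}, # CO4 is not a course outcome
]

for row in lessons:
    unmapped = row["objectives"] - course_outcomes      # objectives without a course outcome
    unassessed = row["objectives"] - row["assessments"] # objectives without an assessment
    if unmapped or unassessed:
        print(f"Lesson {row['lesson']}: check alignment "
              f"(unmapped: {unmapped or 'none'}, unassessed: {unassessed or 'none'})")
```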

Checklist Section

The checklist consists of seven main evaluative categories, each with multiple subcategories that contain specific criteria (see Table 5). In sum, there are 127 specific criteria. The checklist is formatted as a table, with a row for each criterion, a column for indicating whether the criterion was met (with Yes, No, and N/A options), and a column for reviewers to provide commentary explaining their ratings and suggesting recommendations (see Figure 3). After initial testing, we found reviewers sometimes needed more space for commentary. Thus, each category also has a larger space at the end of its criteria for additional comments and recommendations.

Table 5

CDQC Overview

1. Course Introduction and Information
   Sub-categories: 1.1 Home Page; 1.2 Course Syllabus Page; 1.3 Learner Support Resources; 1.4 Instructor Information*
   Description: Reviews the currency, accuracy, and availability of basic course information and resources.

2. Descriptions and Outcomes
   Sub-categories: 2.1 Course Description; 2.2 Lesson Descriptions; 2.3 Course Learning Outcomes; 2.4 Lesson Objectives
   Description: Reviews the clarity, accuracy, and curricular alignment of course and lesson descriptions and outcomes.

3. Instructional Content and Materials
   Sub-categories: 3.1 Sequence; 3.2 Organization; 3.3 Variety; 3.4 Alignment; 3.5 Accuracy and Currency; 3.6 Legal; 3.7 Accessibility
   Description: Reviews the delivery, organization, and curricular alignment of instructional content and materials.

4. Course Narratives
   Sub-categories: 4.1 Content; 4.2 Organization; 4.3 Writing Style and Mechanics
   Description: Reviews the clarity, coherence, organization, and mechanics of the course’s instructional narratives, which guide students through each lesson page and provide a conceptual framework for them to engage with assigned readings and media.

5. Assessment
   Sub-categories: 5.1 Alignment; 5.2 Methods; 5.3 Frequency; 5.4 Grades; 5.5 Rubrics; 5.6 Feedback; 5.7 Instructions
   Description: Reviews the methods, frequency, and curricular alignment of the course’s assessments, as well as the quality of its grading rubrics and feedback opportunities.

6. Community and Interaction
   Sub-categories: 6.1 Learner: Learner; 6.2 Learner: Instructor*; 6.3 Learner: Content; 6.4 Group Work; 6.5 Discussion Expectations
   Description: Reviews the frequency, type, and structure of learner interactions with each other, the content, and, for facilitated courses, the instructor.

7. Technology
   Sub-categories: 7.1 Variety; 7.2 Purpose and Alignment; 7.3 Quality; 7.4 Access
   Description: Reviews the types, quality, and curricular alignment of technological tools and multimedia used in the course.

*For facilitated courses only

Figure 3

Checklist Section

Hostetler-11-2-Fig3.png

Implementing the CDQC

Currently, we are piloting the CDQC in our design department. Our instructional designers are using it to evaluate a selection of courses from each program. We are using inter-rater reliability practices to standardize our evaluation of the criteria, fine-tune them, and provide faculty development as necessary. We are also determining the most effective way to summarize and communicate the results to faculty members in a meaningful and actionable way. Ultimately, using the CDQC affords the opportunity to ensure course evaluations and design recommendations are grounded in objective research-based standards, resulting in more engaging student learning experiences and cultivating an institutional culture of continuous improvement.
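As one simple, hedged example of the inter-rater reliability practices mentioned above, the snippet below computes percent agreement between two reviewers' Y/N/NA ratings on the same set of criteria. The ratings are invented for illustration, and our actual reliability procedures may use additional or different measures.

```python
# Hypothetical sketch of a simple inter-rater agreement check for the pilot,
# assuming two reviewers rate the same criteria with Y/N/NA values.
def percent_agreement(ratings_a, ratings_b):
    """Share of criteria on which both reviewers gave the same Y/N/NA rating."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("Reviewers must rate the same set of criteria")
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# Illustrative ratings for five criteria from two reviewers.
reviewer_1 = ["Y", "Y", "N", "NA", "Y"]
reviewer_2 = ["Y", "N", "N", "NA", "Y"]
print(f"Agreement: {percent_agreement(reviewer_1, reviewer_2):.0%}")  # prints: Agreement: 80%
```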

Lessons Learned and Recommendations

Based on our review of various course evaluation instruments, as well as our experience designing a course evaluation checklist, we learned several lessons that may help institutions in their efforts to assure online course design quality.

Begin with a Purpose

Instructional designers often advocate for beginning with the end in mind, and this project was no exception (Wiggins & McTighe, 2005). We began our project by identifying our overall purpose for assuring quality in our online courses. To do this, we completed a gap analysis between our current and desired institutional states (see Table 1). Achieving the desired state became our purpose. Completing this step gave us direction when considering how to structure our approach. Using a logic model, we identified the specific activities and products we could develop to achieve our desired outcomes. Establishing our purpose first helped us move forward because it focused our decisions on achieving a common goal.

Take Advantage of Open Resources

The rise of open educational resources has benefitted online students and education professionals alike (Mosharraf & Taghiyareh, 2016). For those looking to develop an evaluative instrument for quality assurance, adopting or referring to openly resourced examples provides several advantages. First, it saves time and resources, which is critical for institutions like the eSchool that have a limited number of instructional designers. Developing these instruments from scratch is time consuming and requires substantial resources, as evidenced in the production processes described by the Online Learning Consortium, which developed the OSCQR in conjunction with Open SUNY over a three-year period (Online Learning Consortium, 2022).

Next, using openly resourced examples can ensure that evaluative criteria are research-based and current. Institutions such as The University of Illinois Springfield, which developed the QOCI Rubric, describe their process for developing criteria that are periodically reviewed and updated to “reflect the research and best practices in online learning” (University of Illinois Springfield, 2022). Again, this is valuable for institutions that may not have the capacity to undertake extensive research processes.

The large number of examples from other institutions can be overwhelming. As when conducting literature reviews, we recommend that institutions apply selection criteria to instruments they are reviewing (Lubke et al., 2017). For example, when screening prebuilt examples for this project, we adopted criteria from Baldwin, Ching, & Hsu (2018) to narrow down our final selection. This made the process more manageable and tailored to our unique needs. 

Tailor to Unique Characteristics and Needs

No two institutions have the same characteristics and needs; rather, each has a unique context influencing its approach to assuring quality. Consequently, even with the vast number of available openly resourced evaluative instruments, we did not find any that fully met our needs. We considered adopting the prebuilt instrument that met the highest number of our needs, but ultimately decided against it. In order to meet our goals of informing new course design and evaluating existing courses, we needed to have criteria specifically tailored to our unique course delivery format and design elements. Otherwise, we would have needed to develop additional guidelines for designing and evaluating certain elements. For example, because many of our courses are asynchronous and self-paced, they have a unique lesson structure. Each lesson centers around an instructional narrative, which guides students through each lesson page and provides a conceptual framework for them to engage with assigned readings and media. Thus, the narratives need to meet specific, standardized criteria in order to maintain a consistent style. By creating our own checklist, we were able to incorporate a category dedicated to narratives.

Conclusion

As higher education institutions shift more of their focus and resources to online learning, they are also placing greater scrutiny on the quality of their online courses. However, they may struggle to find clear guidance for developing a quality assurance approach, as each institution has unique contextual factors influencing its needs and capacity to assure quality in its online courses. This paper provided an example of how the eSchool developed its approach to assure course design quality within its unique institutional context by creating a tailored CDQC. By replicating or modifying this approach, institutions can work towards achieving their desired state for online courses.

References

Allen, I.E., & Seaman, J. (2015). Grade level: Tracking online education in the United States. Babson, MA: Babson Survey Research Group.

Baldwin, S., Ching, Y. H., & Hsu, Y. C. (2018). Online course design in higher education: A review of national and statewide evaluation instruments. TechTrends, 62, 46-57. http://dx.doi.org/10.1007/s11528-017-0215-z

Baldwin, S.J., Ching, Y. H., & Friesen, N. (2018). Online course design and development among college and university instructors: An analysis using grounded theory. Online Learning, 22(2), 157-171. http://dx.doi.org/10.24059/olj.v22i2.1212

California State University. (2022). Quality learning and teaching. CSUN information technology. https://www.csun.edu/it/qlt

California Virtual Campus Online Network of Educators. (2022). CVC-OEI online course design rubric. California community colleges. https://onlinenetworkofeducators.org/course-design-academy/online-course-rubric/

Grant, M. M. (2021). Asynchronous online course designs: Articulating theory, best practices, and techniques for everyday doctoral education. Impacting Education: Journal on Transforming Professional Practice, 6(3), 35–46. https://doi.org/10.5195/ie.2021.191

Herron, R. I., Holsombach-Ebner, C., Shomate, A. K., & Szathmary, K. J. (2012). Large scale quality engineering in distance learning programs. Journal of Asynchronous Learning Networks, 16(5), 19-35. http://dx.doi.org/10.24059/olj.v16i5.289

Instructure. (2022). Course evaluation checklist. Instructure community. https://community.canvaslms.com/t5/Canvas-Instructional-Designer/Course-Evaluation-Checklist-v2-0/ba-p/280349

Loberti, A. M., & Dewsbury, B. M. (2018). Using a logic model to direct backward design of curriculum. Journal of Microbiology & Biology Education, 19(3), 1-3. https://doi.org/10.1128/jmbe.v19i3.1638

Lubke, J., Britt, G., Paulus, T., & Atkins, D. (2017). Hacking the literature review: Opportunities and innovations to improve the research process. Reference & User Services Quarterly, 56(4), 285–295. https://doi.org/10.5860/rusq.56.4.285

McGahan, S.J., Jackson, C. M., & Premer, K. (2015). Online course quality assurance: Development of a quality checklist. InSight, 10, 126-140. https://doi.org/10.46504/10201510mc

Means, B., & Neisler, J. (2021). Teaching and learning in the time of COVID: The student perspective. Online Learning, 25(1), 8-27. https://doi.org/10.24059/olj.v25i1.2496

Mosharraf, M., & Taghiyareh, F. (2016). The role of open educational resources in the eLearning movement. Knowledge Management & E-Learning, 8(1), 10–21. https://doi.org/10.34105/j.kmel.2016.08.002

Murray, M., Pérez, J., Geist, D., Hedrick, A., & Steinbach, T. (2012). Student interaction with online course content: Build it and they might come. Journal of Information Technology Education, 11, 125-140. https://doi.org/10.28945/1592

Online Learning Consortium. (2022). About OSCQR. The SUNY online course quality review rubric OSCQR. https://oscqr.suny.edu/about/about-oscqr/

Parscale, S. L., Dumont, J. F., & Plessner, V. R. (2015). The effect of quality management theory on assessing student learning outcomes. S.A.M. Advanced Management Journal, 80(4), 19-30. Retrieved from link.gale.com/apps/doc/A440715098/AONE?u=maxw30823&sid=bookmark-AONE&xid=598b9969

Thomas, D. R. (2006). A general inductive approach for analyzing qualitative evaluation data. American Journal of Evaluation, 27(2), 237-246. https://doi.org/10.1177/1098214005283748

University of Illinois Springfield. (2022). Quality online course initiative (QOCI) rubric. ION professional eLearning programs. https://www.uis.edu/ion/resources/quality-online-course-initiative-qoci-rubric

Vlachopoulos, D. (2016). Assuring quality in e-learning course design: The roadmap. International Review of Research in Open and Distributed Learning, 17(6), 183-205. https://doi.org/10.19173/irrodl.v17i6.2784

Wiggins, G. P., & McTighe, J. (2005). Understanding by design (2nd ed.). Pearson.

Author Acknowledgement

I have no conflicts of interest to disclose. An early version of these ideas was presented at the Association for the Advancement of Computing in Education Innovative Learning Summit 2021 Virtual Conference. Thank you to Jon French for comments on a draft of this article. Correspondence concerning this article should be addressed to Stephanie Hostetter, AU eSchool for Graduate Professional Military Education, 51 East Maxwell Blvd Bldg. 678, Maxwell AFB, AL, 36112. Email: stephanie.hostetter.1@au.af.edu.

Stephanie Teague Hostetter

Air University

Stephanie Teague Hostetter is a Design, Development, and Innovation Coordinator at the eSchool of Graduate Professional Military Education at Air University. Her research interests include instructional design and technology, quality assurance in online education, and online professional development.
