Metrics for Assessment of Instructional Design/Course Development Teams

DOI:10.59668/723.13047
Instructional Design Teams, Instructional Design Units, Assessment Metrics, Institutional Effectiveness
As higher education institutions expand online education in the wake of the COVID-19 pandemic, instructional design/course development (ID/CD) teams, units, centers or departments are becoming more commonplace. How will calls for higher education accountability, coupled with decreasing fiscal resources, affect these teams when “COVID panic” dies down? Principles of institutional effectiveness can be used in the assessment of ID/CD teams to justify the team’s existence, combat the lack of knowledge about instructional designers, and drive continuous improvement. An exploratory study of 76 institutions reveals how and why assessment is currently being done and which metrics should be used to assess ID/CD teams.

Introduction

Among the numerous ways in which the COVID-19 pandemic has affected higher education is the expansion of online education, accompanied by an increase in the demand for instructional designers/online course developers (Decherney & Levander, 2020; Garrett et al., 2020). A recent annual data report from NC-SARA covering more than 2,200 institutions indicated a 93% growth in distance learning enrollments from 2019 to 2020 (NC-SARA, 2021). It was estimated that at least one-half of all instructors who were forced to pivot to online education and emergency remote teaching during the pandemic had no prior experience in developing and teaching online courses (Garrett et al., 2020).

The effect of the pandemic on instructional designer demand and visibility was immediate (Petherbridge et al., 2022). Barely one month after the commencement of the COVID-19 crisis, Inside Higher Ed published an article titled “The Hottest Job in Higher Education: Instructional Designer” (Decherney & Levander, 2020). As colleges and universities moved from “quick fix” emergency remote teaching into strategically planned online education (Hodges et al., 2020), many were hiring multiple instructional designers and organizing them into a team, unit, center, or department. As Drysdale (2021) observed, “instructional design teams shifted from a preferred institutional resource to a necessary one” (p. 58). This concept of an instructional design/course development team of instructional designers is distinct from a design team consisting of a single faculty subject matter expert collaborating with a single instructional designer (e.g., Hart, 2020; Hixon, 2008).

Some recent authors have predicted that the market for instructional designers will continue to grow (e.g., Petherbridge et al., 2022). However, instructional designers report that a lack of knowledge about, and respect for, their skills and expertise continues to present a barrier to their success (Drysdale, 2018; Hart, 2020; Intentional Futures, 2016). Further, instructional designers who are located organizationally within individual academic departments, rather than in a centralized team or unit, experience lower job satisfaction and less collegial relationships with faculty (Drysdale, 2018).

In an age where fiscal resources for colleges and universities are continually decreasing, what will happen to instructional design/course development teams (ID/CD teams) when the “COVID scare” dies down and institutions seek to “get back to normal”? How will ID/CD teams be able to demonstrate their value, effectiveness, and dedication to continuous improvement? Answers may be found through principles of assessment and institutional effectiveness.

Institutional Effectiveness

Brint and Clotfelter (2016) identify effectiveness in higher education as “the extent to which and the quality with which an institution achieves [its] expectations” (p. 4). The current emphasis on institutional effectiveness is a result of circumstances that predate the COVID-19 crisis. As Brown (2017) has noted, “Since the late 20th century, colleges and universities have had to respond to persistent calls from multiple social sectors about the expansion of accountability in American higher education. The increased reporting measures are the result of multiple contextual factors that have influenced the system of higher education. In part, the substantial increases in the cost of obtaining a college education have catalyzed the American public to question the value of a postsecondary degree and to call for greater transparency regarding college outcomes” (p. 41). 

The public’s call for accountability, transparency, and return on investment has prompted the accrediting agencies tasked with quality assurance of higher education to shift their emphases from inputs, such as the quantity of library holdings, to outputs, such as student learning outcomes, student retention, and graduation rates. The Southern Association of Colleges and Schools Commission on Colleges (SACSCOC) was the first of the six regional institutional accrediting agencies to embrace the concept of institutional effectiveness; however, the other five soon followed suit (Ewell, 2011). SACSCOC Accreditation Standard 8.2 articulates institutional effectiveness this way: “The institution identifies expected outcomes, assesses the extent to which it achieves these outcomes, and provides evidence of seeking improvement based on analysis of the results” (Southern Association of Colleges and Schools, 2018, p. 73). All academic discipline units (e.g., colleges, schools, departments), administrative support units, and academic support units within an institution are required to demonstrate compliance with this standard.

Institutional effectiveness, with its iterative process of objectives formulation, assessment, implementation, and continuous improvement, is reminiscent of the systematic instructional design models familiar to instructional designers (Bond & Dirkin, 2020; Branch & Dousay, 2015; Wiley et al., 2020). As an academic support unit, an ID/CD team could utilize the institutional effectiveness process to educate leadership about what instructional designers do, establish the team’s role and value to the institution, and provide a mechanism for implementing continuous improvement of the team.

Assessing ID/CD Teams

Martin and Kumar (2018) state that “Quality assurance is a systematic approach to check whether online learning meets specific requirements based on a set of standards and frameworks” (p. 272). This is often much easier said than done, as institutional effectiveness is one of the most frequently cited areas of weakness identified during the accreditation process (Higher Learning Commission, 2022; Southern Association of Colleges and Schools, 2020). “The increasing focus of external entities on the effectiveness of higher education institutions makes it more important than ever to monitor how well the institutional effectiveness role is being carried out at institutions” (Clapp, 2020, p. 6). So, what is the best way to assess the effectiveness of ID/CD teams?

The good news is that there is a robust set of rubrics and standards for the evaluation of instructional design and instructional designers. The not-so-good news is that, while the Quality Matters Rubric (Quality Matters, 2020), the Online Learning Consortium Scorecards (Online Learning Consortium, 2022; Shelton, 2010), the AECT Instructional Design Standards for Distance Learning (Piña, 2017), California State University Chico’s Rubric for Online Instruction (California State University Chico, 2022), and Blackboard’s Exemplary Course Rubric (Blackboard, 2022) each provide useful metrics for assessing the quality of online courses, they lack metrics and guidance for assessing the teams that create those courses. Similarly, the International Board of Standards for Training, Performance and Instruction (ibstpi®) has identified 22 competencies that can be used for training and assessment of individual instructional designers (Kozalka et al., 2013), but these competencies offer little guidance for the assessment of ID/CD teams.

Which Metrics to Use?  

As Brint and Clotfelter (2016) have observed, “Usable metrics for assessing effectiveness remain aspiration more often than reality” (p. 4). This may explain why institutional effectiveness has been such a challenging accreditation standard for so many institutions. A recent search of EBSCO databases, Google Scholar, and several journals in the fields of instructional design, educational technology, and online education failed to find any publications addressing how to assess the effectiveness of instructional design and/or course development teams, units, centers, or departments.

Collaboration with ID/CD Teams

While individual faculty may develop online courses by themselves, ID/CD teams work through collaboration (Hixon, 2008). The collaborators may include administrators, academic department chairs, librarians, or other professionals, but, at a minimum, they involve an instructional designer collaborating with a subject matter expert (Bawa & Watson, 2017; George & Casey, 2020). While the nature of the collaboration depends upon the needs and culture of the institution (Piña, 2021), it is clear that the success of the development project depends upon the success of the collaboration (Reinig, 2003).

Suárez-Lantarón and her colleagues (2023) emphasized the necessity for academic service units to assess the satisfaction of their key constituents in order to determine whether their needs are being met. The key constituents for ID/CD teams include faculty/subject matter experts, administrators, and students (Bawa & Watson, 2017; Hixon, 2008). Reinig (2003) identified both the product and the process of collaboration as necessary and understudied elements of collaborative development.

Identifying Metrics 

An online search was conducted to identify higher education institutions that have made assessment reports and guides for their academic support units publicly available on their websites. Reports and guides from 15 institutions were obtained: 

Table 1

Institutions with Reports and Guides Listing Assessment Metrics

·       Arkansas Tech University

·       Caldwell Community College and Technical Institute

·       California University of Pennsylvania

·       Eastern Kentucky University

·       Florida State University

·       Jackson State University

·       LaGuardia Community College

·       Miami University of Ohio

·       New Mexico State University

·       Northern Illinois University

·       Savannah State University

·       Sullivan University

·       Texas A & M University

·       University of Louisville

·       University of North Carolina at Chapel Hill


Analysis of these assessment reports and guides identified two broad areas for assessment: 1) collaboration and constituent satisfaction and 2) activities undertaken and recognition received by the team. Table 2 provides possible metrics for assessing ID/CD teams in these two categories.  

Table 2

Possible metrics for assessing ID/CD teams

Collaboration/Constituent Satisfaction

·       Faculty/SME satisfaction with course development process

·       Faculty satisfaction with consultancy/support/training

·       Faculty satisfaction with courses

·       Student satisfaction with courses

·       Academic leadership satisfaction with courses

·       Advisory council satisfaction with courses

Activities

·       Courses developed/modified by the team

·       Courses evaluated by the team

·       Training events provided by the team

·       Consultancy sessions provided by the team

·       Faculty support sessions provided by the team

·       Awards received

·       Conference presentations

·       Publications


How Assessment of ID/CD Teams is Being Done

Assessment reports and guides for academic support units may provide hints and possible directions for instructional design professionals and academic leaders to pursue in assessing ID/CD teams. However, the literature is largely silent on how ID/CD teams are being assessed, or whether they are being assessed at all, which limits the ability to apply institutional effectiveness principles for the benefit of these teams. Therefore, an exploratory study was devised to determine whether ID/CD teams are being formally assessed, how often and for what purposes assessment occurs, which metrics are currently used to assess them, and which metrics would be most effective.

Methodology

Participants

Participants included instructional design/educational technology professionals at 76 higher learning institutions in the United States. Table 3 below identifies the characteristics of the participants’ institutions. Nearly two-thirds of participants came from public institutions. Institutions varied by enrollment, with almost half coming from institutions with enrollments of more than 20,000. The vast majority of participants’ institutions awarded graduate degrees.

Table 3

Institutional characteristics (n=76)

Characteristic                 Number     Percentage

Institutional Control
    Public                         50            66%
    Private                        26            34%

Enrollment
    Less than 3,000                 9            12%
    3,000-10,000                   18            24%
    10,000-20,000                  13            17%
    More than 20,000               36            47%

Level
    Undergraduate                   9            12%
    Graduate                       67            88%


Data Collection and Analysis 

The study and instrumentation were reviewed and approved by the Institutional Review Board of the sponsoring university. A custom survey instrument was developed and formative evaluations of the validity of the survey items were conducted with 10 members of a statewide distance learning directors’ group and with 12 participants at the 2022 Distance Learning Administration Conference. The validity of the survey was affirmed, with minor modifications to the wording of three survey items. The final survey items are listed in Table 4 below. The survey was distributed by the Association for Educational Communications and Technology (AECT) to its membership via an email link to the online survey. Data were analyzed using descriptive statistics.
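
To make the analysis step concrete, the following is a minimal sketch of how descriptive statistics such as the counts and percentages in Table 3 could be produced from a raw survey export. The file name and column names are hypothetical; the study's actual data file and variable names are not published.

import pandas as pd

# Hypothetical export of the survey responses (one row per participant).
responses = pd.read_csv("id_cd_survey.csv")

# Frequency and percentage tables for the institutional characteristics
# reported in Table 3 (control, enrollment band, highest degree level).
for column in ["control", "enrollment", "level"]:
    counts = responses[column].value_counts()
    percentages = (counts / len(responses) * 100).round(0)
    print(pd.DataFrame({"Number": counts, "Percentage": percentages}), "\n")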

Table 4

Survey items

Please tell us about your institution: Highest degree awarded (select one)

·       Undergraduate degree

·       Graduate degree

Please tell us about your institution: Control

·       Private

·       Public

Please tell us about your institution: Student enrollment

·       Less than 3,000

·       3,000-10,000

·       10,001-20,000

·       More than 20,000

How many instructional designers does your institution employ? (select one)

·       1

·       2-4

·       5-7

·       8-10

·       More than 10

Please describe how instructional designers are organized at your institution. (select one)

·       Centralized ID Unit (instructional designers reside in a single unit, team, center or department for the entire institution)

·       Decentralized (instructional designers are dispersed across multiple colleges, schools or academic departments)

·       Hybrid (some instructional designers in a central unit, while others are dispersed)

·       Other (please specify)

Describe whether/how often your ID unit (as a whole, not its individual employees) undergoes an assessment/evaluation process. (select one)

·       The ID Unit as a whole is not formally assessed/evaluated (skip the next two questions)

·       The ID Unit is assessed/evaluated at least once per year

·       The ID Unit is assessed/evaluated every 2-3 years

·       Other (please specify)

What is the purpose for the assessment? (select all that apply)

·       Provide data/evidence for accreditation or other outside compliance

·       Provide data/evidence for implementing ID unit improvements

·       Provide data to justify the ID unit’s staffing or existence

·       Other (please specify)

Which metrics are used to assess the ID Unit(s) at your institution? (select all that apply)

·       Academic department (dean/chair) satisfaction with courses

·       Faculty/subject matter expert satisfaction with course development process

·       Faculty/subject matter expert satisfaction with training/consultancy

·       ID unit scholarly activities (publications, presentations, grants, etc.)

·       Instructor satisfaction with course quality

·       Student satisfaction with course quality

·       Number of courses created

·       Other (please specify)

Which metrics would be the most effective to assess an ID Unit? (4 = critical, 3 = useful, 2 = minimal, 1 = not helpful)

·       Academic department (dean/chair) satisfaction with course quality

·       Faculty/subject matter expert satisfaction with course development process

·       Faculty/subject matter expert satisfaction with training/consultancy

·       ID unit scholarly activities (publications, presentations, grants, etc.)

·       Instructor satisfaction with course quality

·       Student satisfaction with course quality

·       Number of courses created

·       Other (please specify)

Please describe any additional metrics not mentioned above

Results

Organizational Structure

As Reid (2018) has observed, instructional designers in higher education institutions may be organized within a centralized instructional design/course development unit, or they may be decentralized (i.e., instructional designers employed by, and operating exclusively within, a specific academic college, school, or department). Institutions may also employ a combination of both models. Andrade (2016) acknowledged that decentralized organizations may appeal to those prioritizing departmental control of the online course development process. However, distance education experts have maintained that centralized and formalized online instructional design and course development results in online courses of better overall quality, consistency, and cost-effectiveness (e.g., Andrade, 2016; Cini & Pineas, 2018; Scheuermann, 2018). Drysdale (2018, 2021) found notable differences in the job experience and job satisfaction of centralized versus decentralized instructional designers, with the latter reporting a significantly less satisfying and effective work environment, non-collegial relationships with faculty, and “pressure to focus on technology support instead of pedagogy and course design” (2021, p. 72).

Figure 1 below shows that half of the respondents’ institutions organized their instructional designers within a centralized ID/CD team that serves the entire institution, with the other half evenly split between 1) decentralized and dispersed instructional designers and 2) a hybrid arrangement in which some instructional designers reside in a centralized team while others are dispersed in units throughout the institution. This distribution is similar to that found by Fong et al. (2017).

Figure 1

How Instructional Designers are Organized (n=76)

Frequency of Assessment

To determine the extent to which assessment of ID/CD teams was occurring at participants’ institutions, they were asked to specify how often ID/CD teams underwent a formal evaluation process. As indicated in Figure 2, the majority (57%) of institutions did not have a known formal assessment of their ID/CD teams. Of the remaining institutions, 28% assessed their ID/CD team on an annual basis, while 12% did so at intervals ranging from two to five years. Organizational structure did make a difference in whether assessment was taking place: 50% of institutions with centralized ID/CD teams conducted assessments of the teams, compared to 17% of those with decentralized instructional designers and 39% of those with a combination of centralized and decentralized designers.

Figure 2

Assessment of Instructional Design Units (n=76)
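
The comparison by organizational structure reported above is essentially a cross-tabulation of assessment status against how instructional designers are organized. A minimal sketch of that computation is shown below; the file and column names are hypothetical, not taken from the study's instruments.

import pandas as pd

responses = pd.read_csv("id_cd_survey.csv")  # hypothetical file name

# Flag institutions that conduct any formal assessment of the ID/CD team,
# then compute the share of assessed institutions within each structure
# (centralized, decentralized, hybrid).
responses["assessed"] = responses["assessment_frequency"] != "Not formally assessed"
share_assessed = pd.crosstab(responses["structure"], responses["assessed"], normalize="index")
print((share_assessed[True] * 100).round(0))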

Rationale for Assessment

For those institutions that conducted formal assessments of their ID/CD teams, participants were asked to identify one or more purposes underlying the assessments. Results are shown in Figure 3. The most frequently cited rationale for assessment (69% of respondents) was to use the assessment results as the basis for implementing improvements to the ID/CD team. Using assessment data to justify the continued need for the ID/CD team was indicated by 38% of respondents, while providing data for accreditation purposes was identified by 28%. Other identified purposes for assessment (9%) included annual reporting to internal departments within the institution. 

Figure 3

Rationale for Assessment of Instructional Design Units (n=32)

Metrics Used Currently

The primary purpose of this study was to identify metrics for assessing the effectiveness of ID/CD teams. Therefore, participants were asked to identify the metrics currently used by their institutions for this purpose. Results displayed in Figure 4 reveal that the most commonly used metric (68% of respondents) was the satisfaction of faculty/subject matter experts (SMEs) with the course development process, the item most directly related to the quality of the collaboration between the SME and the ID/CD team.

Next in frequency (41%) was instructor satisfaction with the course design quality. This would include instructors who teach the course but who may not have been directly involved in the initial course development. Faculty satisfaction with the ID/CD team’s training and consulting services, along with student satisfaction with the course design quality, were utilized by 38% of respondents’ institutions, while administrator (e.g., chair, dean) satisfaction with the course design quality fared slightly lower at 34%. The most quantitatively based measures, the number of courses created by the team and scholarly activity by the team, were used much less frequently (22% and 9%, respectively).

Figure 4

Metrics Used Currently to Assess ID Units (n=35)
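
Because the question about current metrics was a select-all-that-apply item, the percentages in Figure 4 reflect the share of respondents whose institutions use each metric. A minimal sketch of that tally, assuming one 0/1 indicator column per metric (the column names are illustrative only):

import pandas as pd

responses = pd.read_csv("id_cd_survey.csv")  # hypothetical file name

# Illustrative indicator columns: 1 = metric selected, 0 = not selected.
metric_columns = [
    "sme_satisfaction_process", "instructor_satisfaction_quality",
    "faculty_satisfaction_training", "student_satisfaction_quality",
    "admin_satisfaction_quality", "courses_created", "scholarly_activity",
]
usage_pct = (responses[metric_columns].mean() * 100).sort_values(ascending=False)
print(usage_pct.round(0))  # percent of respondents selecting each metric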

Effective Assessment Metrics

Apart from the metrics currently used at their institutions, participants were asked to identify the metrics they believed would be the most useful and effective for assessing ID/CD teams. The responses, shown in Figure 5 below, indicate agreement with the current practice shown in Figure 4 above, with one notable exception. Respondents rated student satisfaction with online course design quality as the most desirable metric with which to gauge ID/CD team effectiveness, with all other metrics following the same order as their current usage by institutions.

Figure 5

Most Effective Metrics for Assessing ID Units (n=76)
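
One plausible way to produce a ranking like that in Figure 5 is to average each metric's rating on the survey's four-point scale (4 = critical to 1 = not helpful) and sort the results; the article does not state the exact summarization used, and the column names below are illustrative.

import pandas as pd

responses = pd.read_csv("id_cd_survey.csv")  # hypothetical file name

# Illustrative rating columns on the survey's 4-point scale.
rating_columns = [
    "rate_student_satisfaction", "rate_sme_process", "rate_instructor_satisfaction",
    "rate_faculty_training", "rate_admin_satisfaction", "rate_courses_created",
    "rate_scholarly_activity",
]
ranking = responses[rating_columns].mean().sort_values(ascending=False)
print(ranking.round(2))  # higher mean rating = judged more effective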

Other Metrics

Participants identified additional metrics beyond those listed above, with the number of participants mentioning each shown in parentheses: subsequent student achievement of learning outcomes, retention, and graduation rates (5); cost effectiveness/return on investment (3); ability of courses to pass a Quality Matters or other external review (3); speed of course development (2); and competence of instructional designers with legal aspects of course development, such as accessibility, copyright, and privacy (1).

Discussion

The goal of this study was to address the lack of published metrics for assessing instructional design/course development teams/units/centers/departments (ID/CD teams). Institutional effectiveness, with its emphasis on outcomes, assessment, implementation, and continuous improvement, was selected as a framework due to its compatibility with systematic instructional design. 

Assessment of ID/CD Teams Not Common

The first significant finding was that less than half of the participants’ institutions engaged in a formal process of assessment for ID/CD teams. In an era of increasing calls for accountability, data-driven decision-making, and the threat of diminishing resources, this situation could leave ID/CD teams without the data that they need to gauge their effectiveness, identify areas for improvement, combat ignorance regarding what instructional designers do, and justify the continued existence of the ID/CD team. This situation is even more acute in institutions where instructional designers are decentralized. Those colleges and universities that do engage in formal ID/CD team assessment tend to follow annual institutional assessment cycles or longer cycles associated with accreditation timetables (Southern Association of Colleges and Schools, 2018). 

Collaboration

Results of this study indicate that collaboration and constituent satisfaction metrics are both the most commonly utilized at respondents’ institutions and the metrics respondents consider most desirable. Slaughter and Murtaugh (2018) recommended constituent surveys to identify strengths and weaknesses in the course development process. Under the institutional effectiveness paradigm, ID/CD teams are assessed as academic support units, and metrics involving the satisfaction of key constituents (students, faculty, subject matter experts, and administrators) were both the most utilized and the most recommended by participants.

Discipline and Orientation of Participants

The only notable difference between currently utilized metrics and those recommended by the study participants was the relative placement of student satisfaction with course quality in the ranking of metrics. As instructional design begins with concerns about what learners will need to know and be able to do at the conclusion of the instruction, it is not surprising that the instructional design and distance learning professionals who participated in this study would prioritize learner satisfaction above faculty satisfaction.

The discipline and orientation of this study’s participants may also have influenced the rationale given for assessing ID/CD teams. That instructional design and distance education professionals would consider using assessment data to drive an ID/CD team’s continuous improvement more important than meeting accreditation requirements is not surprising. It is possible, however, that many administrators would reverse that order of importance.

Finally, it should be noted that an ID/CD team’s role as a support center does not mean that instructional designers must take a subordinate role to faculty in the course development process. Instructional designers should be empowered to exercise leadership and project management, and to serve as collaborators and partners with faculty subject matter experts (Ashbaugh, 2013).

Institutional Effectiveness and Driving Improvements

A critical component of institutional effectiveness is that assessment data must drive improvement efforts (Brint & Clotfelter, 2016; Southern Association of Colleges and Schools, 2018). In order for this to occur, the data must be able to be influenced directly by actions taken by the party being assessed. In the case of ID/CD teams, metrics that involve student outcomes, such as final grades, retention, and graduation rates, are influenced by many extraneous factors that are outside the direct control and influence of instructional designers. Therefore, it may be unclear which changes an instructional designer could make to produce significant differences in these metrics.

The same situation may occur if an ID/CD team is assessed based on the number of courses that they develop and if this metric is controlled by the amount of demand from academic departments, schools, or colleges. Needs for course development can wax and wane, depending on whether new degree programs are being planned or whether temporary situations, such as COVID-19, cause a spike in online course developments. 

Implications for Applied Instructional Design Leadership and Management

The results of this study can be applied by instructional design leadership to identify data-driven metrics for assessing an ID/CD team’s effectiveness, driving its continuous improvement, and demonstrating its value to the institution.

The results of this study were used by the ID/CD team at the sponsoring institution to formulate outcomes and to determine how those outcomes would be assessed. Instruments were created for administration to students and instructors during the first term after a course had been newly developed or had undergone a major redevelopment. Table 5 lists the outcomes, the assessment instrument for each, and when each instrument is administered. Table 6 lists the items for the Student Survey for First-Term Courses, and Table 7 lists the items for the Instructor Survey for First-Term Courses. Table 8 lists the items for the Course Development Process Survey administered to faculty subject matter experts at the completion of the course development process.

Table 5

ID/CD Team Outcomes and Assessment

Outcome: Develop courses that meet university standards and meet student needs
Assessment of Outcome: Student Survey for First-Term Courses
Assessment Completed By: Students during the initial course offering

Outcome: Develop courses that meet university standards and meet student and instructor needs
Assessment of Outcome: Instructor Survey for First-Term Courses
Assessment Completed By: Instructors during the initial course offering

Outcome: Utilize an effective course development process
Assessment of Outcome: Course Development Process Survey
Assessment Completed By: Subject matter experts at the end of course development


Table 6

Student Survey Items

·       This course used enough resources like videos, websites or activities to enhance my learning experience.

·       The lesson’s instructional materials (readings, videos, links, activities, etc.) prepared me for my assignments.

·       The assignments (papers, projects, labs, etc.) were appropriate for the lesson topics.

·       The quizzes/tests/exams were appropriate for the lesson topics.

·       The online discussions helped me to understand the lesson topics.

·       Instructions provided for assignments were clear and easy to understand.

·       Links to outside materials worked as they should.

·       The course was free of typos and grammatical errors.



Table 7

Instructor Survey Items

·       The individual lesson objectives were adequately assessed.

·       Content in this course was relevant to the topic of the course.

·       This course used adequate resources like videos, websites, or games to enhance the educational experience.

·       The assignments stimulated critical thinking appropriate to the level of the course.

·       Instructions provided for assignments were clear.

·       Links to outside material/multimedia were functional.

·       The course was free of typos and grammatical errors.


Table 8

Course Development Process Survey Items (Subject Matter Experts)

·       I was satisfied with the level of collaboration, communication and support I received from my ID and the ID Team during the development process.

·       I found the weekly content templates and materials provided by the ID Team to be useful.

·       I found the SME training course to be useful.

·       I found the SME online resources provided at the SME website to be useful.
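
To close the institutional effectiveness loop, the survey results have to feed back into improvement decisions. The sketch below is one hedged illustration of how first-term survey responses might be summarized per course so that low-scoring items can be flagged for revision; it assumes a five-point agreement scale and hypothetical file and column names, neither of which is specified above.

import pandas as pd

survey = pd.read_csv("first_term_student_survey.csv")  # hypothetical file name

# Assume one row per student response, a "course_id" column, and one column
# per survey item (item_1, item_2, ...) scored on a 5-point agreement scale.
item_columns = [c for c in survey.columns if c.startswith("item_")]
course_means = survey.groupby("course_id")[item_columns].mean()

# Flag courses with any item mean below an improvement threshold (here 4.0).
needs_attention = course_means[course_means.lt(4.0).any(axis=1)]
print(needs_attention)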


Limitations and Future Research

Due to the lack of prior studies in this area, the research and scope of this exploratory study were limited to those teams or units dedicated to instructional design/online course development. As these teams are often housed within larger units, such as a center for teaching and learning, a center for professional development, or within an institution’s academic technology or information technology department, a future study may examine how these larger units are assessed and how instructional design/course development operates and is assessed within these units. 

Collaboration is a vital part of an ID/CD team’s work. This study focused primarily on the faculty/subject matter expert’s collaboration with the assigned instructional designer and the ID/CD team. Future studies can explore in greater detail the interactions between the ID/CD team and department chairs, deans, and other administrators.

While this study indicates that ID/CD team assessment occurs more frequently when instructional designers are centralized, current research on centralized versus decentralized instructional design and instructional designers is limited. More studies on how instructional designers are organized and the results of different organizational structures on instructional designers and the instructional design process are needed. 

The requirements of the sponsoring institution’s IRB regarding participant anonymity made it impossible to capture information that could reveal participants’ identities. To limit the possibility of an institution having more than one participant, the survey responses were analyzed for duplicate answers. While none were found, it cannot be said with 100% certainty that no institution had more than one respondent.

Finally, it is likely that the makeup of this study’s participants—instructional design and distance education professionals—influenced the rationale for assessment and the ranking of assessment metrics. Future studies could include comparisons with rationales and ranking by administrators, faculty, and students.

Conclusion

Although total higher education enrollments and higher education funding have been in decline for the past decade, online enrollments show no signs of abating. The number of fully online, hybrid, and HyFlex programs will continue to grow, necessitating the talents of instructional designers and instructional design/course development teams. At the same time, calls for higher education accountability, transparency, and return on investment, prevalent throughout the new millennium, will grow ever louder. These voices will fuel demand for ways to justify, assess, and improve operations at colleges and universities, including the teams, units, centers, and departments that provide institutional functions, resources, and services. Failure to do so may result in those functions, resources, and services being seen as optional, expendable, or--at worst--superfluous.

Instructional design/course development teams, being a lesser-known and often misunderstood part of an institution, are particularly vulnerable to changes in fiscal dynamics and leadership priorities. Assessment of instructional design/course development teams ties these teams to the larger institutional effectiveness and accreditation activities of a college or university, provides data that can be used to justify the ID/CD team’s existence, combats the lack of knowledge about instructional design and instructional designers, and promotes continuous improvement.

Acknowledgments

The authors wish to express gratitude to Jeffrey Corkran, Lorie Long, Barry Sanford, Krista Lyons, Cassandra Black, Diane Curtis, and Kathleen Decker for developing and refining the metrics and data collection, and to Sullivan University for providing a research grant to support this study.

References

Andrade, M. S. (2016). Effective organizational structures and processes: Addressing issues of change. New Directions for Higher Education, 173, 31–42.

Ashbaugh, M. L. (2013). Expert instructional designer voices: Leadership competencies critical to global practice and quality online learning designs. Quarterly Review of Distance Education, 14(2), 97-118.

Bawa, P., & Watson, S. (2017). The chameleon characteristics: A phenomenological study of instructional designer, faculty and administrator perceptions of collaborative instructional design environments. Qualitative Report, 22(9), 2334-2355.

Blackboard. (2022). Are your courses exemplary? https://www.blackboard.com/resources/are-your-courses-exemplary

Bond, J., & Dirkin, K. (2020). What models are instructional designers using today? The Journal of Applied Instructional Design, 9(2).

Branch, R. M., & Dousay, T. A. (2015). Survey of instructional design models. Association for Educational Communications and Technology.

Brint, S., & Clotfelter, C. T. (2016). U.S. higher education effectiveness. RSF: The Russell Sage Foundation Journal of the Social Sciences, 2(1), 2-37.

Brown, J. T. (2017). The seven silos of accountability in higher education: Systematizing Multiple Logics and Fields. Research & Practice in Assessment, 17(1), 41-58.

California State University Chico. (2022). Exemplary online instruction: The rubric. https://www.csuchico.edu/eoi/rubric.shtml

Cini, M. A., & Pineas, M. (2018). Scaling online learning: Critical decisions for e-learning leaders. In A. A. Piña, V. L. Lowell & B. R. Harris (Eds.), Leading and managing e-learning: What the e-learning leader needs to know (pp. 305-320). Springer.

Clapp, M. (2020). Assessing the efficacy of an institutional effectiveness unit. Assessment Update, 32(3), 6-13.

Decherney, P., & Levander, C. (2020). The hottest job in higher education: Instructional designer. Inside Higher Ed. https://www.insidehighered.com/digital-learning/blogs/education-time-corona/hottest-job-higher-education-instructional-designer

Drysdale, J. (2018). The organizational structures of instructional design teams in higher education: A multiple case study. Digital Commons @ ACU, Electronic Theses and Dissertations. Paper 115.

Drysdale, J. (2021). The story is in the structure: A multi-case study of instructional design teams. Online Learning, 25(3), 57-80.

Ewell, P. (2011). Accountability and institutional effectiveness in the community college. New Directions for Community Colleges, 2011(3), 23-36.

Fong, J., Uranis, J., Edward, M., Funk, C., Magruder, E., & Thurston, T. (2017). Instructional design and technology teams: Work experience and professional development. UPCEA. http://upcea.edu/IDResearch

Garrett, R., Legon, R., Fredericksen, E. E., & Simunich, B. (2020). CHLOE 5: The pivot to remote teaching in spring 2020 and its impact, the changing landscape of online education, 2020. http://qualitymatters.org/qa-resources/resource-center/articles-resources/CHLOE-project

George, K., & Casey, A. M. (2020). Collaboration between library, faculty, and instructional design to increase all open educational resources for curriculum development and delivery. Reference Librarian, 61(2), 97-112.

Hart, J. (2020). Importance of instructional designers in online higher education. The Journal of Applied Instructional Design, 9(2).

Higher Learning Commission. (2020). HLC membership by the numbers: Key findings of the application of the criteria for accreditation. https://download.hlcommission.org/initiatives/BytheNumbers_CriteriaforAccreditation.pdf

Hixon, E. (2008). Team-based online course development: A case study of collaboration models. Online Journal of Distance Learning Administration, 11(4).

Hodges, C. B., Moore, S., Lockee, B. B., Trust, T., & Bond, M. A. (2020). The difference between emergency remote teaching and online learning. EDUCAUSE Review. https://er.educause.edu/articles/2020/3/the-difference-between-emergency-remote-teaching-and-online-learning

Intentional Futures. (2016). Instructional design in higher education: A report on the role, workflow, and experience of instructional designers. https://intentionalfutures.com/work/instructional-design

Kozalka, T. A., Russ-Eft, D. F., & Reiser, R. A. (2013). Instructional designer competencies: The standards (4th ed.). Information Age Publishing.

Martin, F., & Kumar, S. (2018). Frameworks for assessing and evaluating e-learning courses and programs. In A. A. Piña, V. L. Lowell & B. R. Harris (Eds.), Leading and managing e-learning: What the e-learning leader needs to know (pp. 271-280). Springer.

Moore, S., Trust, T., Lockee, B., Bond, M. A., & Hodges, C. (2021). One year later... and counting: Reflections on emergency remote teaching and online learning. EDUCAUSE Review. https://er.educause.edu/articles/2021/11/one-year-later-and-counting-reflections-on-emergency-remote-teaching-and-online-learning

NC-SARA. (2021). NC-SARA annual data report: Technical report for fall 2020 exclusively distance education enrollment & 2020 out-of-state learning placements. https://nc-sara.org/sites/default/files/files/2021-10/NC-SARA_2020_Data_Report_PUBLISH_19Oct21.pdf

Newsome, M. L., Piña, A. A., Mollazehi, M., Ali-Ali, K., & Alshaboul, Y. (2022). The effect of learners' sex and stem/non-stem majors on remote learning: A national study of undergraduates in Qatar. Electronic Journal of e-Learning, 20(4), 360-373.

Online Learning Consortium. (2022). OLC quality scorecard suite. https://onlinelearningconsortium.org/consult/olc-quality-scorecard-suite

Petherbridge, D., Bartlett, M., White, J., & Chapman, D. (2022). The Disruption to the practice of instructional design during COVID-19. The Journal of Applied Instructional Design, 11(2).

Piña, A. A. (2017). Instructional design standards for distance learning. Association for Educational Communications and Technology.

Piña, A. A. (2021). Managing the course development process. In L. D. Cifuentes (Ed.), A Guide to Administering Distance Learning (pp. 141-173). Brill Publishing.

Quality Matters. (2020). Specific review standards from the QM higher education rubric, sixth edition. https://www.qualitymatters.org/qa-resources/rubric-standards/higher-ed-rubric

Reid, P. (2018). EdTechs and instructional designers: What’s the difference? Educause Review. https://er.educause.edu/articles/2018/12/edtechs-and-instructional-designerswhats-the-difference

Reinig, B. A. (2003). Toward an understanding of satisfaction with the products and outcomes of teamwork. Journal of Management Information Systems, 19(4), 65-83.

Scheuermann, B. (2018). Ensuring success in online programs by centralizing support. https://evolllution.com/managing-institution/operations_efficiency/ensuring-success-in-online-programs-by-centralizing-support

Shelton, K. (2010). A quality scorecard for the administration of online education programs: A Delphi study. Journal of Asynchronous Learning Networks, 14(4), 36-62.

Slaughter, D. S., & Murtaugh, M. C. (2018). Collaborative management of the e-learning design and development process. In A. A. Piña, V. L. Lowell & B. R. Harris (Eds.), Leading and managing e-learning: What the e-learning leader needs to know (pp. 253-270). Springer.

Southern Association of Colleges and Schools. (2018). Resource manual for the principles of accreditation: Foundations for quality enhancement. Southern Association of Colleges and Schools Commission on Colleges. https://sacscoc.org/pdf/2018%20POA%20Resource%20Manual.pdf

Southern Association of Colleges and Schools. (2020). Most frequently cited principles in decennial reaffirmation reviews: Class of 2020. Southern Association of Colleges and Schools Commission on Colleges. https://sacscoc.org/app/uploads/2022/03/Most-Frequently-Cited-Principles_2020_web.pdf

Suárez-Lantarón, B., Castillo-Reche, I. S., & López-Medialdea, A. (2023). Development and validation of a measuring instrument for the improvement of university guidance and tutoring. Social Sciences, 12(2), 56-71.

Wiley, D., Strader, R., & Bodily, R. (2020). Continuous improvement of instructional materials. In J. K. McDonald & R. E. West (Eds.), Design for Learning: Principles, Processes, and Praxis. EdTech Books. https://edtechbooks.org/id/continuous_improvement
