Historical Perspectives


Histories and Foundations of Assessment



Introduction


This chapter is meant to accomplish two objectives:

  1. Explain some foundational principles concerning what is and is NOT an assessment, answering the question, "What do we assess?"
  2. Provide brief historical context and a discussion of assessment to answer the question, "Why do we need assessments?"

Spoiler for objective one: Everything, or nearly everything, can be seen as an assessment of some kind.


Background of Assessment in Instructional Design

The field of instructional design began to emerge in the mid-1900s. The military was the first to design instruction systematically; they needed to quickly and efficiently train soldiers to perform specific tasks. An essential aspect of the military's training was the assessment of a soldier's aptitude and ability to correctly carry out what they had learned. Over the next few decades, an Instructional Systems Design (ISD) approach was adopted by most instructional designers. The main goal of ISD was to outline key steps that should be taken to ensure that quality instruction was created.

In the 1970s, the ADDIE model became one of the first formal ISD models for designing and developing instruction; it was reportedly created by the Center for Educational Technology at Florida State University for the United States Armed Forces. ADDIE stands for Analyze, Design, Develop, Implement, and Evaluate. The analysis phase of the ADDIE model required a gap or needs analysis to determine the goals and objectives of the instruction to be developed. The original purpose of the evaluation phase focused on assessing student learning to determine whether the learning objectives of the course had been met. The results of a summative assessment were used to certify that students had accomplished the intended learning objectives and served as the main criteria for judging the effectiveness of the instruction. However, the purpose of evaluation in the model was later expanded to include a more comprehensive view that encompassed formative evaluations of the instructional approach, design, usability, and maintenance of the instructional product.

The ADDIE model is likely the most prominent instructional design model, but many others have since been developed and promoted. Although the models differ, there are three broad activities an instructional designer must accomplish:

1) Establish the learning objectives for the instruction.

2) Decide how to assess the expected learning outcomes.

3) Design and develop instructional activities to facilitate the desired learning.

Wiggins and McTighe (2005) popularized this idea by coining the term Backward Design, or starting with the end in mind. Their book Understanding by Design includes the following steps: identify the desired results, determine acceptable evidence that the expected learning outcomes have been met, and then plan learning experiences and instruction to facilitate the expected learning. Establishing learning objectives and creating assessments before creating learning activities was not a new concept; Wiggins and McTighe effectively rebranded the ideas of Tyler, Gagné, Mager, and others, concepts that formed the foundation of most ISD models developed in the 1950s and 1960s. As a result of Wiggins and McTighe's work, present-day educators and instructional designers have been reintroduced to these critical concepts.


Research Opportunities

If you are interested in researching the topic of assessment, there are several promising and challenging areas you might consider. 

Online test security. With the increased acceptance of online and distance learning, cheating on exams has become a prominent concern. Research on this topic has identified various vulnerabilities and proposed measures to address them. Online proctoring tools can help mitigate the risk of cheating, and the use of biometrics to verify students' identity and authorship has also been studied (for example, Young et al., 2019). Security breaches are a particular issue for high-stakes testing and certification exams, where keeping test items secure is crucial. Proper training and communication with students can help promote ethical behavior during online assessments; however, ongoing research and development in this area will be important to ensure the integrity and validity of online assessments.

Learning Analytics. Recent calls for data-driven decision-making have prompted considerable interest in learning analytics. Research in this area is concerned with ways to personalize instruction, including the topics of stealth assessment and non-intrusive assessment data collection. A particularly important application of learning analytics is creating and using dashboards to communicate essential learning accomplishments and areas for improvement, such as identifying at-risk students and monitoring student progress with real-time achievement results and engagement updates. Additional research is also needed to address student privacy and confidentiality concerns regarding the information we collect about students.

Automated Tutoring Systems. Providing feedback is an important function of the assessment process. Results from assessments can provide the information students need to resolve misconceptions, increase their understanding, and improve their skills. Timely feedback is essential for effective learning, and automating the feedback process can improve the speed and consistency of assessment feedback. For example, generative AI-enabled tutors have become proficient at answering user questions, providing instruction, and assessing student learning (Davies & Murff, 2024). However, critics point out the need for human interaction and warn that inappropriate application of, and overreliance on, artificial intelligence to provide instruction and feedback can lead to trained incompetence rather than increased student ability. Research in this area will be important to ensure that automated assessment and feedback are accurate and administered appropriately.

