Evaluation in the Design Phase

While the following evaluation approaches are commonly used by instructional designers in the design phase, they can also be applied, in various ways, during other phases of the design and development process.

Theory-based evaluation

Theory-based evaluation can be used to a) describe the theory supporting a specific design (i.e., evaluating the appropriateness of the theory) and b) evaluate how well a specific theory was put into practice. The first approach is most suitable in the analysis and design phases, the second in the development and implementation phases as part of an effectiveness evaluation. However, a consumer review may attempt to do both, as you would need to understand the theory used in the design before assessing how effectively the theory was put into practice. For instructional designers, this could entail evaluating pedagogical theories and design principles.

Unfortunately, theory-based evaluation is underutilized, if it is utilized at all. Many designers fail to consider the theoretical underpinnings of their design choices; often they cannot articulate the design theory they employ and only know that it seems to work. Likewise, educators may not consider the pedagogical theories that support a specific instructional approach. Sometimes failing to conduct a theory-based evaluation has little impact; other times, it can cause an instructional product to fail completely.

The first step in any theory-based evaluation is identifying the theory being used. Chen (1990) suggests that an evaluator can determine the theory supporting a product's design in one of two ways. They can work with stakeholders (i.e., designers and implementers) to discover the reasons for designing the product and the assumptions stakeholders hold about it. Or, they can use their own knowledge of educational psychology and social science theory to describe the supporting theories and design principles used.

Donaldson (2007) describes a process that balances both tactics. The following is a modified version of the basic steps he recommends for accomplishing a theory-based evaluation:

Step 1: Engage relevant stakeholders. Ask them to articulate the product's purpose (short- and long-term objectives and goals) and why they think it will work. Ask them to describe how it is supposed to work.

Step 2: Develop a draft of the theory and present it to stakeholders for discussion, reaction, and input.

Step 3: Conduct a plausibility check. Consult existing research (or other experts) and use your own understanding of educational psychology and social science theory to determine whether the intended outcome might be achieved in this way.

Step 4: Communicate your findings to stakeholders and revise your description of the theory if necessary.

Step 5: Probe the theory for more specificity. Ask stakeholders what resources and conditions must exist for the product to work. Consider critical links between theory and practice. Make sure the theory's description includes an accurate account of how the product is intended to be used.

Step 6: Finalize the theory and report evaluation findings. Make a judgment regarding the merit of using this theory to support the product's design. Is it likely this product can produce the intended outcomes? Can it be implemented as intended?

If the evaluation is testing theory-to-practice, you might add:

Step 7: Conduct an objective-oriented evaluation. Test the degree to which the learning objectives were accomplished. Attempt to verify a relationship (experimental or correlational) between the product's use and the desired outcomes.

Step 8: Conduct a usability study. Observe the product being used to determine whether it can be used, and is being used, as intended. Determine whether the product's use is viable, sustainable, desirable, and efficient.
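
To make these steps concrete, here is a brief hypothetical sketch (the product, theory, and findings are illustrative, not drawn from an actual evaluation). Suppose stakeholders report that a flipped-classroom statistics module works "because students come to class prepared" (Step 1). You draft the implied theory (pre-class videos offload content delivery so class time can be spent on guided practice) and present it for stakeholder reaction (Step 2). A plausibility check against the research on active learning suggests the intended outcomes are achievable, but only if students actually watch the videos, so you revise the theory description accordingly (Steps 3 and 4). Probing for specificity surfaces required resources, such as video hosting and completion tracking, and a critical link between the videos and the in-class activities (Step 5). Finally, you judge the theory a sound basis for the design, provided those conditions are met (Step 6).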

A number of online resources summarize the educational theories instructional designers commonly draw on.

Logic Models

Logic models are often used during the analysis and design phases of an educational program or initiative. An evaluator might use a logic model to describe the evaluand (the program) and the theory supporting the initiative. While an objective-oriented evaluation can provide information about the effectiveness of a program, it does not explain why the program was effective. Logic models make an explicit, often visual, statement about how a program is supposed to work, why it is supposed to work, what is needed to make it work, and what change you might expect to see as a result.

A typical logic model might include the following components (a brief hypothetical example follows):

Inputs: A description of the resources needed to make the program or product work as intended.

Activities: A description of the program's key components. This might include an explanation of how the program is supposed to work and a description of what those implementing the program will be doing and why they are supposed to do it (the program theory or logic).

Outputs: A description of any tangible products or deliverables that result from the activities, including anything participants will be required to do or produce.

Outcomes: The intended benefits and expected results for participants, both short-term outcomes and long-term impact.
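
For example, a simple logic model for a hypothetical workplace safety course might look like this:

  • Inputs: a subject matter expert, a learning management system, and development time and budget.
  • Activities: employees work through three interactive hazard scenarios and attend a supervisor-led debrief.
  • Outputs: scenario completion records and a signed safety checklist for each employee.
  • Outcomes: fewer observed safety violations in the short term and a lower incident rate over the following year.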

Several public resources, including introductory videos and step-by-step guides, describe the process of developing a logic model.

Task Analysis

There are several ways to accomplish a task analysis; each approach has a slightly different focus based on the purpose of the analysis. A task analysis is often conducted as part of a training needs assessment, which aims to establish learning objectives for the training so that assessments, instructional materials, and learning activities can be designed and developed.

Common steps for implementing a task analysis:

  1. Identify the target skill.
  2. Identify prerequisite skills.
  3. Break down the skill into its component parts (or steps).
  4. Confirm that the task is thoroughly analyzed.
  5. Determine how the skill should be taught.
  6. Implement training and monitor effectiveness.
  7. Revise the task analysis and instruction as needed.
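
As a hypothetical illustration of these steps, consider training retail cashiers to process a return. The target skill is processing a return; the prerequisite skill is basic register operation; the component steps might include verifying the receipt, inspecting the item, selecting a reason code, and issuing the refund; an experienced cashier confirms the breakdown is complete; you decide to teach the skill through demonstration followed by supervised practice; and you then monitor error rates and revise the analysis and instruction as needed.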

Additional online resources cover the basics and the steps of a task analysis. A few different ways a task analysis might be conducted include:

Job Task Analysis (JTA)

Once you’ve determined that training is needed, a JTA aims to identify the knowledge and skills an individual requires to complete a specific job. Several discrete tasks may be required for one job. You use the results of this analysis to create learning objectives that will inform the design of any proposed training. For example, if the work requires specialized knowledge, the training might focus on providing information; if the work requires specific skills, the training should include practice activities (simulated or authentic) that address the highest-priority behaviors.

A JTA is appropriate when a job has clearly delineated work requirements and people are hired to do a specific task. A JTA would not work well if the job description were generic and employee responsibilities varied. For example, suppose the only requirements for a job were to show up on time and follow directions. In that case, a JTA won’t help define learning objectives, and a training course is likely not needed because on-the-job training would be more effective. If, however, the job requires people to perform specific tasks and have specialized knowledge, then a JTA can help define the learning objectives for a course. Remember, though, that not all training solutions require the development of a course—sometimes a training aid will suffice.

The goal of a JTA is to collect specific information about each task, such as how essential, complex, and frequently required it is.

Skills and knowledge objectives to target include those that are essential, complex, or required frequently; however, any one of these factors may be sufficient to label an objective as high priority. For example, a good candidate for a learning objective might be a simple task that must be done accurately every time, regardless of how frequently it is required or how difficult it is to complete (e.g., hand washing or data entry). If an action has severe consequences when done wrong, it is vital that training help individuals develop the required knowledge and skills.
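
As a hypothetical illustration, a JTA for a pharmacy technician role might rate each task on its difficulty, importance, and frequency:

  • Verifying prescription dosages: difficult, critical, and frequent, making it a high-priority objective that warrants practice activities.
  • Entering patient data: simple but critical and frequent; it remains high priority because errors carry severe consequences.
  • Restocking shelves: simple, low risk, and frequent; it is low priority, and a job aid or on-the-job instruction would likely suffice.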

You can obtain the information you need in a variety of ways, including the observation and interview techniques described under Critical Steps Analysis below.

Critical Steps Analysis (CSA)

The term task analysis is often used to refer to a detailed description of the steps required to complete a specific task. We call this a critical steps analysis (or CSA) to differentiate it from other types of task analysis. A CSA breaks down a skill into the discrete steps or behaviors needed to complete a task, which may or may not be a task associated with a specific job. The number of steps in a CSA will depend on the nature of the skill (i.e., its complexity) and on the individual learner’s ability to understand the process (i.e., their age or cognitive capacity). It may be sufficient to define a task in very general terms, or detailed steps may be required. For example, most people understand how to brush their teeth, but you may need to break this task down for the very young:

  1. Get a toothbrush.
  2. Place toothpaste on the brush (not too much).
  3. Make sure the toothbrush makes contact with your teeth.
  4. Clean all your teeth (brush for 2 to 3 minutes).
  5. Rinse your mouth and the toothbrush.

The instruction provided may be verbal (on-the-job training), or it may involve the use of a training aid.

[Image: a pictorial tooth-brushing training aid. Retrieved from https://www.alphadentalgroup.com.au/blog/oral-care/how-to-brush-teeth/]

There are several ways you might break down a specific skill or behavior into smaller steps, including:

Personal experience. Complete the task yourself and record each of the steps. Obtain feedback from an expert to make sure the task was completed correctly.

Observation. Watch several individuals and document the steps they take as they complete the task. Note steps people take that are not required, those that are essential, and those that are difficult to do properly. While it is best to conduct this analysis with those who perform the task exceptionally well, information obtained from observing poor performers can also be helpful; it might help you decide when practice activities are needed or more detailed instruction is required.

Cognitive interviews (knowledge audits). Ask an expert in the target skill or behavior to explain what they are doing as they complete the task. Make sure to ask why they are doing each of the steps. Ask those demonstrating the skill to rate the difficulty and importance of what they are doing in terms of the likelihood that they will get a satisfactory result.

Curriculum Content Standards Analysis

Course development in a general education setting often does not have a specific job or career in mind. The curriculum outlines a broad set of competencies (e.g., literacy, numeracy, and social skills) an institution has determined students will need to succeed in an advanced educational setting or as lifelong learners. This general education philosophy is designed to provide students with transferable skills that will prepare them to gain knowledge, acquire new skills, and broaden their perspectives. Its goal is to help individuals adapt to society’s ever-changing needs and become productive workers in the economy.

Just as the term curriculum means different things to different people, so does curriculum analysis. Whereas curriculum development involves building the curriculum to present a comprehensive and coherent educational plan, curriculum analysis involves unpacking the curriculum. As with a task analysis, the instructional designer must evaluate how the parts fit together. The analyst must determine: What need is the curriculum responding to? Who is the curriculum designed for? What prerequisite knowledge and skills does it assume? What content does it cover? What resources are needed to teach the curriculum?

For instructional designers, a curriculum content analysis is evaluative only in the sense that the designer (e.g., a teacher) must develop instructional products that align with a chosen or, more likely, mandated curriculum. Largely as a result of the standards-based reform movement in education, curriculum developers continually review and revise the curriculum. As a result, designers need not spend much time deciding on the learning objectives for a course, as these should be clearly outlined in the curriculum standards.

Learner Analysis (or Target Audience Analysis)

This is a special type of task analysis. It does not consider what needs to be taught (i.e., the learning objectives); instead, you conduct a learner analysis to understand the abilities and needs of those whose knowledge, skill, or ability you hope to improve. This will help you determine the level of specificity required when listing critical steps. More specifically, you might collect general group information about characteristics such as learners’ prior knowledge, age, language proficiency, motivation, and access to technology.

Overall, you use the information gathered during the learner analysis to make instructional design decisions, such as the level of scaffolding or guidance required, the technology support needed, or the most appropriate delivery methods. Another way to accomplish this task is to develop personas representing typical learners.
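
For instance (a hypothetical sketch), a learner analysis for a workplace data-entry course might reveal that most learners are new hires with little prior experience using the software, that they work staggered shifts, and that they access training from shared terminals. Those findings would argue for short, self-paced modules with step-by-step guidance rather than a scheduled, instructor-led course.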

Instructional Context Analysis

Like a learner analysis, an instructional context analysis does not consider what needs to be taught (i.e., the learning objectives). Instead, it considers the best way to provide the instruction and the constraints the context imposes on it. It is not as common as the other types of task analysis, but it is important nonetheless. Originally used to evaluate business environments (internal and external), it has been adapted for education. To conduct this analysis, ask yourself what context the learner will be in when they interact with the training and what modality (i.e., conditions) would be best. In some cases, the context may severely limit how instruction can be delivered and how successful it is likely to be. For example, teaching online (whether because of distance or disruptions such as COVID-19) may be a problem for some topics and learning objectives: when students do not have access to the expensive, specialized equipment they need, they cannot practice the skills targeted by the learning objectives, which will significantly diminish the effectiveness of the instruction. Attempting to design instruction for such less-than-ideal conditions will be challenging and is probably ill-advised.

By analyzing the instructional context before designing any instruction, you can tailor your instruction to any potential limitations. For example, online instruction may be designed to be accessed using a personal computer but may also need to work on smartphones. Other issues, such as internet access or audio/visual requirements, may also constrain the feasibility of the design. Identifying potential implementation problems can likewise draw attention to situations where alternative modes of instruction may be needed.
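
A simple (hypothetical) set of context questions might include: Where will learners access the training (an office, their home, a shop floor)? What devices and bandwidth will they have? How much uninterrupted time can they devote to it? What equipment, if any, must they practice on, and will it be available to them? The answers often decide between, say, a self-paced mobile module and an equipment-rich, in-person workshop.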

Chapter Summary

There are several evaluative activities that should be considered in the design phase. The consequences of overlooking these evaluations can be costly and will often undermine the success of a designer’s development and implementation efforts.

  • Theory-based evaluation - describes the theory supporting a specific design or how well a specific theory is put into practice.
  • Logic models - describe an evaluand (e.g., a program or instructional product): how it is supposed to work, the resources needed, the intended outcomes, and the theory supporting the initiative.
  • Task analysis - a task analysis is conducted to clarify training needs (i.e., learning objectives). There are several ways to accomplish a task analysis.
    • Job task analyses help us identify the high-priority objectives on which the ensuing training should focus.
    • Critical steps analyses break down a general description of a task into the precise steps needed to complete it.
    • Curriculum content standards are developed so designers can select appropriate learning objectives from an established set of general learning objectives. Like a task analysis, this analysis must determine the gap or need the instruction will address, the intended learners, and the prerequisite knowledge/skill requirements.
    • Learner analyses describe the intended learners’ attributes and characteristics, helping us make decisions regarding delivery methods and the level of guidance and background information to provide.
    • Instructional context analyses identify the best conditions for providing the training and the constraints the context imposes on the instruction.

Discussion Questions

  1. Consider an educational product you use. Describe the pedagogical and design theory that makes it effective.
  2. Think of a task you regularly undertake (one you are an expert at doing). List the critical steps a novice might need to understand to accomplish the task.
  3. Think of some training you have completed. Describe how a learner analysis and instructional context analysis might inform the design of the instruction.

References

Chen, H. T. (1990). Theory-driven evaluations. Sage.

Donaldson, S. I. (2007). Program theory-driven evaluation science: Strategies and applications. Routledge.

Mager, R. F., & Pipe, P. (1997). Analyzing performance problems: Or, you really oughta wanna (3rd ed.). Center for Effective Performance.

Stufflebeam, D. L., & Coryn, C. L. (2014). Evaluation theory, models, and applications (Vol. 50). John Wiley & Sons.
