How Do We Solve a Problem Like Media and Methods?

Keywords: media, instructional methods, instructional theory framework, instructional priorities, research to prove, research to improve, Culture of Instructional Design, confounded variables
Media and instructional methods have had a problematic history in the instructional design field. In the 1990s, the “media vs. methods” debate played out across influential issues of journals such as Educational Technology Research and Development, led by thinkers such as Richard Clark and Robert Kozma. This chapter discusses that historical debate and its insights, updates it for our time, and provides examples of how many designers still struggle by focusing on the media of a design rather than on its instructional methods.

Imagine that you are an instructional designer who needs to teach whitewater river rafters how to tie a knot, specifically a double fisherman’s knot (Wikipedia, 2022). Your instructional objective (Reigeluth & An, 2021) for the task is as follows:

Given two, 2-foot or greater lengths of rope, tie a double fisherman’s knot from memory. Knot must be tied in less than 20 seconds and must be appropriately dressed.
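
Notice that this objective contains the three parts of a well-formed objective that the chapter returns to later: a condition, a behavior, and criteria. The sketch below is purely illustrative (the dictionary keys and the idea of representing an objective in code are ours, not a notation from the cited sources); it simply makes those three parts explicit.

```python
# Hypothetical decomposition of the knot-tying objective into the three
# standard parts of a well-formed instructional objective.
objective = {
    "condition": "Given two lengths of rope, each 2 feet or longer",
    "behavior": "tie a double fisherman's knot from memory",
    "criteria": [
        "knot tied in less than 20 seconds",
        "knot is appropriately dressed",
    ],
}

for part, value in objective.items():
    print(f"{part}: {value}")
```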

For the content, you are primarily teaching a rote procedure, with a few concepts like “knot” and “dressed” (Bloom, 1956; Merrill, 1983). You engage a subject-matter expert to show you how to tie the knot and explain what “dressed” means. You take photographs of the procedure.


Figure 1. Photo sequence for tying a double fisherman's knot. (Wikipedia public domain photo credit: Markus Bärlocher)

You then add words beneath each image, like “Step 1, Step 2,” and so on, that explain what the learner should do with the rope for each of the steps.

The instructional design field calls the photo sequence and the text in the example above media (here, a visual medium and a verbal medium, respectively); photos and text are just two of the many types of media described by Heinich, Molenda, and Russell (1989). The photos and text communicate the message: in this case, the subject-matter content. Yet these photos and text alone do not enable a learner to master the specified instructional objective. Think about it: from memory (which means the learner cannot look at the photos or words while performing the task), do you think a learner could tie this knot to the standard specified in the instructional objective on the first try?

It is very likely that the learner could not. Why? Because a medium only carries the message to the learner. For the learner to learn the message (specified by the instructional objective), the message must contain instructional methods, specific features that facilitate actual learning (Reigeluth & Carr-Chellman, 2009; Reigeluth & Keller, 2009). Instructional methods have a hierarchical classification that includes instructional approaches, instructional components, and sequencing of both content and instructional components.

For example, for a physical task like knot tying, we recommend the approach of hands-on learning, which is characterized by “mastery of skills through activity and direct experience”, i.e., actually trying to tie the knot (Reigeluth & Keller, 2009). We further recommend that this instructional approach be customized by using the well-researched, primary instructional components of tell (generality), show (example), and do (practice with feedback) (Merrill, Reigeluth & Faust, 1979). As a rote procedure, the practice should involve repetitive memorization of how to tie the knot (called drill and practice) and automatization (learning to tie it quickly without really thinking about it). The sequence should involve simultaneous telling and showing, followed by doing with immediate feedback.
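
To keep the pieces of this recommendation straight, here is a minimal sketch of the design as a nested outline (the structure and labels are our own illustration, not a notation drawn from Merrill, Reigeluth, and Faust or from Reigeluth and Keller):

```python
# Hypothetical outline of the knot-tying design: one approach, three primary
# components, and a sequence for those components.
knot_tying_design = {
    "approach": "hands-on learning",
    "components": {
        "tell": "state the generality (the steps for tying the knot)",
        "show": "demonstrate the steps (the photo sequence)",
        "do": "drill and practice with immediate feedback, to automatization",
    },
    "sequence": [
        "tell and show simultaneously",
        "do with immediate feedback",
    ],
}

print(knot_tying_design["sequence"])
```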

To summarize the key terminology described in the example above: media (such as the photos and text) are the carriers that deliver the message to the learner, while instructional methods (such as tell, show, and do with feedback) are the features embedded in the message that actually facilitate learning, organized as approaches, components, and sequences.

Learning Check

Classify the following examples as either Media or Instructional Method:

YouTube Video

  1. Media
  2. Instructional Method

Demonstration

  1. Media
  2. Instructional Method

Closed-captioning

  1. Media
  2. Instructional Method

Analogy

  1. Media
  2. Instructional Method

Authentic task

  1. Media
  2. Instructional Method

Exhibit in a museum

  1. Media
  2. Instructional Method

Podcast

  1. Media
  2. Instructional Method

Teamwork

  1. Media
  2. Instructional Method

Flight simulator

  1. Media
  2. Instructional Method

Zoom Meeting

  1. Media
  2. Instructional Method

(Explanations for answers can be found at the end of the chapter.)

However, media and instructional methods have had a problematic history in the instructional design field. Simply stated, the problem is that designers and researchers often attribute the learning effectiveness (mastery of the instructional objective) of their design to media when it is really due to instructional methods. This problem was first noticed in the 1960s with media comparison studies (Clark, 1983), but received even more significant attention when computer-based instruction (CBI) gained popularity in the 1980s. Clark (1985) was an early researcher who challenged the media mindset that was evolving:

The result of the analysis strongly suggests that achievement gains found in these CBI studies are overestimated and are actually due to the uncontrolled but robust instructional methods embedded in CBI treatments. It is argued that these [instructional] methods may be delivered by other media with achievement gains comparable to those reported for computers (p. 1).

The purpose of this chapter is to bring you up to speed on this topic so you can advise current or future clients, managers, and other stakeholders about its implications for instructional design. If you plan to base your instructional designs on research, or to conduct your own research, this chapter will help you understand appropriate research for media and instructional methods. We start by introducing you to effectiveness, efficiency, and appeal, three key measures for judging the success of instruction (Reigeluth & Carr-Chellman, 2009). Next, we chronologically summarize various perspectives about media and instructional methods from 1980 to the present. We conclude with ideas for both research and practice, focusing on the value of research-to-improve versus research-to-prove and “Fifth Culture” design principles.

Effectiveness, Efficiency, and Appeal

Instructional designers, teachers, clients, parents, and a host of other instructional stakeholders want to know how well a learning experience works. To measure this, the instructional design field focuses on three measures embedded in the instructional theory framework: effectiveness, efficiency, and appeal (Honebein & Reigeluth, 2021; Reigeluth & Carr-Chellman, 2009). 

Effectiveness measures learner mastery of an instructional objective, which is also known as student achievement (Reigeluth, 1983). An example of an effectiveness measure is a test, which is typically an objective measure. Another example is a rubric, which is usually a subjective measure.

Efficiency measures the time, effort, and costs put in by teachers, learners, and other stakeholders to deliver and complete a learning experience (Reigeluth, 1983). An example is the number of hours a learner puts in to master an instructional objective. Another example is the cost of instructional materials that deliver the learning experience.

Appeal measures how much learners, teachers, and other stakeholders enjoy the learning experience, especially in terms of its media and instructional methods. An example is having students answer a survey question like, “I would recommend this course to others.” Another example is an evaluator interviewing teachers about what it was like to deliver the learning experience. These appeal measures are synonymous with Merrill’s (2009) terminology of “engaging” and with the first level of the Kirkpatrick-Katzell model, “reaction” (Katzell, 1952; Kirkpatrick, 1956, 1959).
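
To make the three measures concrete, imagine logging them for the knot-tying lesson. The sketch below is hypothetical: the field names and sample values are ours, chosen only to mirror the examples above (a test score for effectiveness, learner time and material cost for efficiency, and a survey rating for appeal).

```python
from dataclasses import dataclass

@dataclass
class OutcomeRecord:
    """One learner's results on the three measures discussed above (hypothetical fields)."""
    effectiveness_test_score: float  # e.g., percent correct on a knot-tying performance test
    efficiency_learner_hours: float  # hours the learner spent reaching mastery
    efficiency_material_cost: float  # cost of the instructional materials, in dollars
    appeal_survey_rating: int        # 1-5 agreement with "I would recommend this course to others."

# Illustrative values only; not data from any study cited in this chapter.
record = OutcomeRecord(
    effectiveness_test_score=90.0,
    efficiency_learner_hours=1.5,
    efficiency_material_cost=4.00,
    appeal_survey_rating=4,
)
print(record)
```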

Effectiveness, efficiency, and appeal form what is called an “Iron Triangle” (Honebein & Honebein, 2015; Honebein & Reigeluth, 2021). The iron triangle represents sacrifices a designer must make during the design process. The left side of Figure 2 is what the designer initially desires, where the double fisherman’s knot instruction balances all three outcomes. However, during development, the designer encounters various situational issues and constraints that impact the design. This forces the designer to trade off media, instructional methods, or both in the design. This effect is illustrated on the right side of Figure 2. For example, learner feedback favors keeping the hands-on learning instructional method because it boosts effectiveness. Learner feedback also advises the designer to sacrifice efficiency by using a more expensive and more time-consuming medium, video, rather than still photographs, in order to increase appeal (for example, see https://www.youtube.com/watch?v=9glfeKvEuyo).


Figure 2. The Instructional Design Iron Triangle depicts the three outcomes (or constraints) associated with instructional methods: effectiveness, efficiency, and appeal. The blue triangle shows equal priority for the three outcomes.  The orange triangle shows priority for effectiveness, which requires some sacrifice in efficiency and appeal.

Many designers believe that media affect only two of the three effectiveness-efficiency-appeal measures: efficiency and appeal.

Read and Reflect

As the two authors were finalizing this manuscript, Reigeluth threw out this idea:

“I think video can improve effectiveness by showing the actual motions (an affordance of the video medium) for teaching a task that entails movement. Do you think we should acknowledge that affordances of media can, for some kinds of content (perhaps using a motion medium for a motion task or a sound medium for a sound task), influence effectiveness?”

Honebein was skeptical, reiterating that it is the message and instructional method that drive effectiveness. He responded: “Video without a message is essentially a blank screen, thus how can it, alone, be effective?”

Reigeluth then replied that perhaps some messages can be communicated more effectively through one medium than another, just like different kinds of goods (like livestock versus gasoline) can be delivered more effectively by one kind of truck than another.

What do you think? Could a media “affordance”, like motion or sound, influence effectiveness?

Why are media likely limited to efficiency and appeal? Clark (1994) wrote, “When learning gains [effectiveness] are found, we attribute them to the delivery medium, not to the active ingredient in instruction [instructional methods]. When learning gains are absent, we assume we have chosen the wrong mix of media” (p. 27). What Clark is saying is that since media are not the active ingredient for effectiveness, it is impossible for a medium to claim it delivers learning effectiveness (mastery of the instructional objective). What a medium can claim is that it contributes to making learning more efficient and appealing.1

Application Exercise

What if we give a learner with no knot-tying experience a photograph of a completed double fisherman’s knot (Figure 3) and some rope? All the learner has is media: the photo and realia (the rope). If the learner is able to tie the knot, wouldn’t that prove that media have learning-effectiveness qualities? Why don’t you try it out with your husband, wife, boyfriend, girlfriend, or some random person off the street?


Figure 3. An appropriately-dressed double fisherman's knot. Could you tie this knot based solely on this photograph? Could you do it in 20 seconds or less on the first try? (Wikipedia public domain photo credit: Malta, https://commons.wikimedia.org/wiki/File:N%C5%93ud_de_p%C3%AAcheur_double_serr%C3%A9.jpg, no changes made).

Let’s explore this thought experiment: the photograph (media) communicates the message (the end-state of a properly dressed double fisherman’s knot). The message is embedded in an instructional method, likely self-paced hands-on learning and/or discovery-based learning (Reigeluth & Keller, 2009). These types of instructional methods require learners to apply their own experience and knowledge to figure out the solution. In other words, it is the learner who invents or appropriates these, and perhaps other, instructional methods to master the task based upon their experience, probably much as the person who originally invented the double fisherman’s knot did.

Instructional designers and researchers should always collect data for all three outcomes: effectiveness, efficiency, and appeal. Honebein and Reigeluth (2021) advised that:

Without data for effectiveness, efficiency, and appeal, it is difficult to know when an instructional medium or method is preferable compared to another, given that different priorities are valued by different stakeholders in different situations. This is a huge gap in our field’s research practice (p. 17).

To summarize, the three primary measures for evaluating learning experiences are effectiveness, efficiency, and appeal. Instructional methods can influence all three outcomes; media influence only two, efficiency and appeal. The Instructional Design Iron Triangle guides how designers make trade-offs and sacrifices in their designs, which you can then assess by always collecting data about effectiveness, efficiency, and appeal. So, as a designer, it is important that you understand the priorities for effectiveness, efficiency, and appeal in your particular project.

A Cold Bucket of Water: The Great Media vs. Instructional Method Debates

Generally, each new medium seems to attract its own set of advocates who make claims for improved learning and stimulate research questions which are similar to those asked about the previously popular medium. Most of the radio research approaches suggested in the 1950s (e.g., Hovland, Lumsdaine, & Sheffield, 1949) were very similar to those employed by the television movement of the 1960s (e.g., Schramm, 1977) and to the more recent reports of the computer-assisted instruction studies of the 1970s and 1980s (e.g., Dixon & Judd, 1977). Clark (1983), p. 447

In 1990, the first author of this chapter (Honebein) arrived at Indiana University as a first-year graduate student in the School of Education, intent upon righting the wrongs of his K-12 education via the computer revolution. He had a Macintosh IIcx, copies of Hypercard, Supercard, and Macromedia Director (the latest and greatest multimedia development tools of that time for creating computer-based learning experiences), and a vision to prove the efficacy of computer-based instruction over all other media. He was one of those advocates Clark described.

Read and Reflect

Given that you are likely new to the instructional design field, you are probably an advocate for some new kind of instructional media. Gibbons (2003) describes this type of behavior in terms of “centrisms.” New designers tend to start out media-centric, then evolve to embrace message, strategy, and model centrisms. Newcomers go through this process because media technologies attract people, like you, to the instructional design field. What media technology are you an advocate for? How do you think you will evolve across Gibbons’s “centrisms”?

The first author’s dreams of glory started to moderate in a class taught by the second author, who introduced the idea that situation drives methods. Another media-focused class clarified the difference between media and instructional methods, and how they interact. And a third hypermedia class re-imagined the relationship between media and instructional methods via a constructivist perspective. Somewhere in the massive amount of reading for each of these classes was the Clark (1983) paper, an excerpt of which we included at the start of this section. Clark’s paper was a paradigm shift. It shattered the first author’s dream. Attempting to prove the efficacy of computer-based instruction now seemed to be a complete waste of time.

After publication of the 1983 paper, Clark’s follow-up papers on this topic (Clark, 1984, 1985, 1986) appear to be rebuttals of, and elaborations for, other academics. Clark had touched a nerve in the instructional design community, especially with his (1983) analogy referring to media as “mere vehicles that deliver instruction” (p. 445), which led Ross (1994) to refer to media as “delivery trucks” (p. 1). For example, Petkovich and Tennyson (1984, 1985) challenged Clark’s ideas about the relationship between media and methods, specifically from an encoding perspective. Clark’s rebuttal suggested, among other ideas, that any media research should focus on “delivery issues (cost, efficiency, equity, and access)” (p. 240).

Hannafin (1985) (and, later, Driscoll & Dick, 1999), on the other hand, opined that the small size of the instructional design field, combined with tenure and promotion processes, created conditions for “quick and dirty publications” (p. 14) that are mostly experimental and focused only on learning outcomes (effectiveness). In this context, Hannafin cautioned academics to heed Clark’s (1983) ideas about the primary suspect: confounded, comparative media research. Ross and Morrison (1989) added that media “…does not directly affect learning, [yet media] serves as influential moderating variables” (pp. 29-30).

Kozma (1991), in his first rebuttal to Clark (1983), agreed with the idea that traditional experimental designs involving media that focused on cued recall measures were confounded and therefore not useful. However, Kozma pressed on to provide media research examples that were supposedly unconfounded in his view. These included four studies that focused on comprehension and learning with text and pictures (similar to our knot-tying example above).

LIDT in the World: The Stone & Glock Study, as cited in Kozma (1991)

The Stone & Glock (1981) research study is particularly instructive in understanding the relationship between media and instructional methods. It also shows how to identify potential experimental research methodology flaws that contribute to media/instructional method confounding and poor instructional designs.

The study was straightforward: subjects in three different groups were asked to assemble a hand-truck. All groups received hand-truck parts, and each group received a job aid (assembly instructions) in a different media form:

  • Group 1 received just text.
  • Group 2 received text and illustrations.
  • Group 3 received just illustrations.

The result was that text and illustration produced “significantly more accurate performance” (p. 1). Or did it? We argue that the causes for this performance were (a) the instructional methods, and (b) the amount of time spent using those instructional methods.

The Method section of the study neglected to mention differences in instructional methods across treatment groups: the researchers described the media (the text and illustrations) in detail but said nothing about the instructional methods. To discover what instructional methods were present, we carefully read the study’s method and procedure. This enabled us to reverse-engineer the likely instructional objectives (which the researchers didn’t specify, but should have):

  • From memory and given 10 types of hand-truck parts, name each part with 100% accuracy.
  • Given hand-truck parts and job aids, assemble the hand-truck with 100% accuracy.

Based on the instructional objectives and the research procedure, we deduced the likely instructional methods. First, we applied Merrill’s (1983) Component Display Theory, specifically its primary presentation forms, to analyze the situation.

  • Group 1 used generalities (G), an instructional method category that describes concepts, procedures, and principles in the form of text.
  • Group 3 used instances (E), an instructional method category that includes examples in the form of illustrations.
  • Group 2 included both generalities and instances (G+E).

These instructional method differences represent the confounding. The researchers should have identified them in the Method section.

Second, we used Reigeluth and Keller (2009) and Reigeluth (1999) to name the likely instructional methods: presentation, practice, hands-on learning, easy-to-difficult sequencing, and procedural sequencing. The researchers did not specify these instructional methods in the Method section, and it is unclear how subjects applied these methods.

Another criticism is that researchers used inadequate outcome measures. The researchers:

  • Collected effectiveness data (number of errors) for all three groups.
  • Collected efficiency data (time subjects looked at media) but only for Group 2 (text and illustration) and Group 3 (illustration only).
    • Did not collect total time to complete the task for any of the three groups. This is an important outcome measure for instructional method selection.
  • Did not collect appeal data.

The efficiency data suggests the presence of time-on-task confounding. Group 2 had significantly fewer errors than Groups 1 and 3. However, Group 2 spent 375.24 seconds looking at the job aid (309 seconds for text; 66.24 seconds for illustration), while Group 3 spent 160.33 seconds looking at just the illustration job aid. The difference was 214.91 seconds (researchers did not collect time data for Group 1). Could the performance difference be only that Group 2 spent a lot more time learning than the other groups?
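
To see how large that gap is in relative terms, a few lines of arithmetic make the time-on-task confound concrete (a minimal sketch using only the looking-time figures reported above; no other values from the study are assumed):

```python
# Looking-time figures reported by Stone and Glock (1981), in seconds.
group2_text = 309.00          # Group 2: time spent looking at the text job aid
group2_illustration = 66.24   # Group 2: time spent looking at the illustration job aid
group3_illustration = 160.33  # Group 3: time spent looking at the illustration-only job aid

group2_total = group2_text + group2_illustration  # 375.24 seconds
difference = group2_total - group3_illustration   # 214.91 seconds
ratio = group2_total / group3_illustration        # roughly 2.3x

print(f"Group 2 total looking time: {group2_total:.2f} s")
print(f"Group 3 looking time:       {group3_illustration:.2f} s")
print(f"Difference: {difference:.2f} s ({ratio:.1f}x more time on task for Group 2)")
```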

Perhaps Stone and Glock (1981) and the other studies Kozma chose were more confounded than originally thought. This example demonstrates just how tenuous experimental research can be in terms of media/instructional-method confounding, inadequate outcome measures, and time-on-task confounding.

A few years after Kozma (1991) published his first response to Clark’s (1983) paper, Ross (1994) organized a special issue in Educational Technology Research and Development (ETR&D) spearheaded by two papers, one from Kozma (1994) and one from Clark (1994). This became known as the “Clark/Kozma debate”. This debate included perspectives by reviewers of Kozma’s paper (Jonassen, Campbell, & Davidson, 1994; Morrison, 1994; Reiser, 1994; Shrock, 1994), as well as an overall synthesis by Tennyson (1994).

There was no definitive agreement on whether media could influence learning. Tennyson’s (1994) scorecard essentially showed a 3/3 tie. However, within Kozma’s and several other contributors’ papers, an interesting theme emerged: that of media having complex properties, such that a learning experience might be considered a complex system (Honebein & Reigeluth, 2020). As Tennyson wrote, “learning is a complex phenomenon, requiring the interaction of many variables including the learner and environmental factors” (p. 16). Shrock (1994) picked up on this, suggesting research methods that “would allow the investigation of complex, simultaneous variables” (p. 52). Reiser (1994) cited Ullmer (1994) to explain that “traditional experimental research methods and their attendant controlling mechanisms may fail to fully assess the complex effects that modern multimedia systems may have on learners” (p. 48). Jonassen et al. (1994) got very specific about the complex-systems association:

When we consider the role of media, we should realize that [media] vehicles are not "mere." They are complex entities with multiple sets of affordances that are predicated on the perceptions of users and the context in which they are used (p. 38).

Researcher and designer perspectives stemming from the 1994 Clark/Kozma debate have persisted in the years since. Lockee, Burton, and Cross (1999) reported the rise of distance-education media comparison studies, calling media comparison studies in this context “inappropriate” (p. 1). Kozma (2000), as a commentator for several articles in a special ETR&D issue, wrote that “as both Richey (1998) and Driscoll and Dick (1999) point out, the messy, uncontrolled context of real-world educational technology R&D demands alternative research methodologies. Traditional experimental designs often are not able to accommodate the complexity of these real-world situations” (p. 10).

Richey (2000) responded to Kozma’s comments in a “yes…, and…” way, suggesting that “other views of the field describe a more complex enterprise with a more complex knowledge base” (p. 16). Richey went on to differentiate the needs of K-12 environments from those of corporate environments, where many instructional designers work. Richey wrote, “[Corporate practitioners’] primary concern is not technology-based delivery, nor is it even learning. Instead, they are typically concerned with organizational problem solving” (p. 17). This idea finds some support in Nathan and Robinson’s (2001) observation that “media and method, while separable in theory, cannot be separated in practice” (p. 84). Surry and Ensminger (2001) suggested using other research methods for media research, such as intra-medium studies and aptitude-treatment interactions. Then Hastings and Tracey (2005) took one last look in the rear-view mirror as the Clark/Kozma debate faded further into the distance:

We believe that after 22 years it is time to reframe the original debate to ask, not if, but how media affects learning. We agree that media comparison studies are inherently flawed and support the argument that we must identify research designs that will provide answers to this question in significantly less time (p. 30).

Fast forward to 2019. Between 2005 and 2018, not much was written or said about the relationship between media and instructional methods. However, Sickel (2019), observing the significant rise of new media, suggested that technological pedagogical and content knowledge (TPACK) could be a “modern framework for teaching with technology” (p. 157). This reinforced the idea that “methods take advantage of media attributes” and “media enable certain methods” (p. 161).

Yet Kozma’s statement that “traditional experimental designs often are not able to accommodate the complexity of these real-world situations” (p. 10) had led the instructional design field toward newer and more accepted research methods that, instead of trying to prove, focus on trying to improve. These research-to-improve methods (Honebein & Reigeluth, 2020, 2021; Reigeluth & An, 2009) recognize that a learning experience is a complex system in which media, instructional methods, and other elements exist as systemic components. These components provide value when stakeholders (learners, designers, clients, etc.) say they do. Such research methods include action research (Efron & Ravid, 2020; Stringer, 2008; Stringer & Aragon, 2021), design experiments (Cobb et al., 2003), design-based research (Barab & Squire, 2004; Collins et al., 2004; Design-Based Research Collective, 2003; Wang & Hannafin, 2005), evaluation research (Phillips et al., 2012), and formative research (Reigeluth & An, 2009; Reigeluth & Frick, 1999).

Research-to-Prove or Research-to-Improve?

Around 2017, Honebein and Reigeluth (2020) noticed a concerning trend: a rise in experimental, media-comparison-type studies in various instructional design journals. They investigated the phenomenon further by reviewing comparative research papers published in ETR&D between 1980 and 2019. Thirty-nine papers from ETR&D met their criteria (Study 1). Another forty-one papers on flipped instruction from non-ETR&D journals also met the criteria (Study 2) (Al-Samarraie et al., 2019).

The results showed a significant rise of experimental, research-to-prove papers. These papers appeared to (a) confound media and instructional methods, (b) not include sufficient information about the instructional objective, (c) omit one or more of the effectiveness, efficiency, and appeal outcomes, and (d) not report whether or not the researchers conducted formative evaluation. The results also showed a significant rise in comparative studies between 2010 and 2019 (Figure 4):


Figure 4. Distribution of published articles comparing North America with all other regions. The number of articles published from non-North American regions increased substantially between 2010 and 2019.

In Study 1, the 2010 to 2019 rise in research-to-prove journal articles was attributed to non-North American sources (75%), primarily from China (41%). This increase of non-North American journal articles was likely caused by a change in ETR&D editorial policies around 2008 which, according to Spector (2017), “encouraged more international contributions from outside North America” (p. 1416). In Study 2, North American sources (63%) were the primary contributor of research-to-prove journal articles. Honebein and Reigeluth (2021) summarized the situation this way:

The studies introduce significant confounding variables involving the mixture of instructional methods and media. The instructional design field has vigorously debated these issues (see Clark, 1983, 1994; Kozma, 1994, 2000; Tennyson, 1994). By eliminating the comparison group (traditional learning experience) and focusing on research to improve, a researcher eliminates the problem of confounding variables (p. 18).

What Should Future Instructional Designers and Researchers Take Away from This Chapter?

The Clark/Kozma debates have a lot to unpack in terms of how these ideas affect and guide future practitioners and researchers. Here we provide some advice for how you should think about applying media and instructional methods in your coursework, professional work, or research.

  1. Be cautious of experimental, research-to-prove comparison studies that include media or combine media and instructional methods. In other words, buyer beware. This research is not useful because it might:
    • Suggest that learning effectiveness is improved by the media, when in fact it is improved by the instructional method.
    • Influence practitioners to choose media that likely won’t work for their situation.
    • Represent quick and dirty publications that are intentionally or unintentionally meant to pad a researcher’s portfolio for promotion, salary increases, and tenure. In other words, the paper benefits the author, not the reader.
  2. When designing a learning experience, follow the processes associated with the instructional theory framework (Honebein & Reigeluth, 2020, 2021) (Figure 5). This framework helps you systematically and logically synthesize possible media and instructional method options that fit your specific instructional-design situation.
    Figure 5. A revised version of the instructional theory framework.
  3. Think of a learning experience as a complex system (Honebein & Reigeluth, 2020, 2021). Formative evaluation, or research-to-improve, is your best option for determining whether or not your learning experience (which likely blends media and instructional methods in interesting ways) meets the needs of your stakeholders in your particular situation. For these types of evaluation/research, you must collect effectiveness, efficiency, and appeal data. With these data, you will be able to advise your team, manager, or clients about the benefits and pitfalls of a learning experience.

Kozma (2000), in what was likely his last paper about the media versus instructional method debate, planted a seed by calling for a “Fifth Culture” of instructional design practice (the Fourth Culture had been put forth by Leslie Briggs; Briggs, 1984). In Kozma’s Fifth Culture, designers and researchers embed themselves within their client’s world, as co-designers or “co-producers” (Honebein & Cammarano, 1995, p. 5), where designers learn to “understand their [client’s] needs, goals, problems, issues, and practices” (Kozma, 2000, p. 13). Doing this refocuses designers on creating “learning environments” (which the authors like to call “learning experiences”), where “learning outcomes are owned by the learners” (p. 13).

Summary of Kozma’s (2000) Fifth Culture Principles:

  1. Embed research designs in the "real world" and embed ourselves in the contexts of our client base. Deeply understand our clients’ needs, goals, problems, and issues, and embed these, in turn, into our theories, research, and practices.
  2. Shift the focus of our work from the design of instruction to the design of learning environments. Learning outcomes are owned by the learners. Learners set the objectives for learning, not the designers.
  3. Understand that the relationship between media, design, and learning should be the unique contribution of our field to knowledge in education. This understanding is the base of our practice, our theory, and our research.

Summary of Honebein and Reigeluth’s (2021) additions to Kozma’s (2000) Fifth Culture ideas:

  1. Accurately specify the desired learning outcomes based upon the conditions and values of the situation elicited from stakeholders, and supply requirements and instructional objectives that include conditions, behaviors, and criteria, along with assessments that align with the situation.
  2. Describe students’ real learning experiences in detail, including improvements suggested by data, made over time.
  3. Describe how learning experiences are systematically designed and formatively evaluated, using good design judgment, prior to conducting research or evaluation.
  4. Make sure that tests and data really measure effectiveness, efficiency, and appeal. (p. 17)

Footnotes

1 Perhaps there is an exception when a particular medium’s affordances match a particular feature of the content, as in the affordance of sound for teaching musical chords, pictures for teaching colors and hues, and motion for teaching dance moves.

Chapter 33 Learning Check Explanations

YouTube Video: Videos, moving pictures, etc. are media.

Demonstration: Demonstration is an instructional method that one can deliver through various media.

Closed-captioning: Closed-captioning is a medium that converts narration to text.

Analogy: Analogy is an instructional method that one can deliver through various media.

Authentic task: Authentic task is an instructional method that one can deliver through various media.

Exhibit in a museum: An exhibit is a medium that illustrates a phenomenon, like a skeleton of a dinosaur.

Podcast: Podcast is a medium that delivers narrated messages that may be enhanced by other media, such as music, sound effects, ambient sounds, and multiple narrators.

Teamwork: Teamwork is an instructional method that one can deliver through various media (e.g., in person or by phone).

Flight simulator: Flight simulator is a medium, as it includes realia (the controls and instruments for flying a plane).

Zoom Meeting: Zoom meeting is a medium, as it enables the presentation of voice, text, graphics, photos, and videos.

References

Al-Samarraie, H., Shamsuddin, A., & Alzahrani, A. I. (2019). A flipped classroom model in higher education: A review of the evidence across disciplines. Educational Technology Research and Development, 68, 1017–1051. https://doi-org.proxyiub.uits.iu.edu/10.1007/s11423-019-09718-8

Barab, S., & Squire, K. (2004). Design-based research: Putting a stake in the ground. The Journal of the Learning Sciences, 13(1), 1-14. https://doi.org/10.1207/s15327809jls1301_1

Bloom, B. S. (1956). Taxonomy of educational objectives, handbook I: the cognitive domain. David McKay Co Inc.

Briggs, L. J. (1984). Trying to straddle four research cultures. Educational Technology, 24(8), 33-34. https://www.jstor.org/stable/44424176

Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445-459. https://doi.org/10.3102/00346543053004445

Clark, R. E. (1985). Evidence for confounding in computer-based instruction studies: Analyzing the meta-analyses. Educational Communications and Technology Journal, 33(4), 249-262. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02769362

Clark, R. E. (1986). Absolutes and angst in educational technology research: A reply to Don Cunningham. Educational Communications and Technology Journal, 34(1), 8-10.

Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21-29. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02768357

Cobb, P., Confrey, J., diSessa, A., Lehrer, R., & Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32(1), 9-13. https://www.jstor.org/stable/3699928

Collins, A., Joseph, D., & Bielaczyc, K. (2004). Design research: Theoretical and methodological issues. The Journal of the Learning Sciences, 13(1), 15-42. https://doi.org/10.1207/s15327809jls1301_2

Design-Based Research Collective (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5-8. https://doi.org/10.3102/0013189X032001005

Driscoll, M. P., & Dick, W. (1999). New research paradigms in instructional technology: An inquiry. Educational Technology Research and Development, 47(2), 7-18. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02299462

Efron, S. E., & Ravid, R. (2020). Action research in education: A practical guide (2nd ed.). New York, NY: The Guilford Press.

Gibbons, A. S. (2003). What and how do designers design? TechTrends, 47(5), 22-27. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02763201

Hannafin, M. J. (1985). The status and future of research in instructional design and technology. Journal of Instructional Development, 8(3), 24–30. http://www.jstor.org/stable/30220787

Hastings, N.B., & Tracey, M.W. (2005). Does media affect learning: Where are we now? TechTrends, 49(2), 28-30. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02773968

Heinich, R., Molenda, M., and Russell, J.D. (1989). Instructional Media. Macmillan.

Honebein, P. C., & Cammarano, R. (1995). Creating Do-It-Yourself Customers. Thomson.

Honebein, P. C., & Honebein, C. H. (2015). Effectiveness, efficiency, and appeal: pick any two? The influence of learning domains and learning outcomes on designer judgments of useful instructional methods. Educational Technology Research and Development, 63(6), 937-955. https://doi.org/10.1007/s11423-015-9396-3

Honebein, P. C., & Reigeluth, C. M. (2021). To prove or improve, that is the question: The resurgence of comparative, confounded research between 2010 and 2019. Educational Technology Research and Development, 69, 465–496. https://doi-org.proxyiub.uits.iu.edu/10.1007/s11423-021-09988-1

Honebein, P. C., & Reigeluth, C. M. (2020). The instructional theory framework appears lost. Isn’t it time we find it again? Revista de Educación a Distancia, 64(20). https://revistas.um.es/red/article/view/405871/290451

Jonassen, D.H., Campbell, J.P. & Davidson, M.E. (1994). Learning with media: Restructuring the debate. Educational Technology Research and Development, 42(2), 31–39. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02299089

Katzell, R. A. (1952). Can we evaluate training? A summary of a one day conference for training managers. A publication of the Industrial Management Institute, University of Wisconsin, April, 1952.

Kirkpatrick, D. L. (1956). How to start an objective evaluation of your training program. Journal of the American Society of Training Directors, 10, 18-22.

Kirkpatrick, D. L. (1959). Techniques for evaluating training programs. Journal of the American Society of Training Directors, 13(11), 3-9.

Kozma, R.B. (1991). Learning with Media. Review of Educational Research, 61(2), 179–211. https://doi.org/10.2307/1170534

Kozma, R.B. (1994a). Will media influence learning? Reframing the debate. Educational Technology Research & Development, 42(2), 7–19. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02299087

Kozma, R.B. (1994b). A reply: Media and methods. Educational Technology Research & Development, 42(2), 11–14. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02298091

Kozma, R.B. (2000). Reflections on the state of educational technology research and development. Educational Technology Research & Development, 48(1), 5–15. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02313481

Lockee, B.B., Burton, J.K. & Cross, L.H. (1999). No comparison: Distance education finds a new use for ‘No significant difference’. Educational Technology Research & Development, 47(3), 33–42. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02299632

Merrill, M. D. (1983). Component display theory. In C.M. Reigeluth (Ed.), Instructional-design theories and models: An overview of their current status (pp. 279-333). Lawrence Erlbaum Associates.

Merrill, M. D. (2009). Finding e3 (effective, efficient, and engaging) instruction. Educational Technology, 49(3), 15–26. http://www.jstor.org/stable/44429676

Merrill, M.D., Reigeluth, C.M., & Faust, G.W. (1979).  The Instructional Quality Profile: A curriculum evaluation and design tool.  In H.F. O'Neil, Jr. (Ed.), Procedures for Instructional Systems Development. Academic Press.

Morrison, G.R. (1994). The media effects question: “Unresolvable” or asking the right question. Educational Technology Research & Development, 42(2), 41–44. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02299090

Nathan, M. J., & Robinson, C. (2001). Considerations of learning and learning research: Revisiting the “media effects” debate. Journal of Interactive Learning Research, 12(1), 69-88.

Petkovich, M.D., & Tennyson, R.D. (1984). Clark’s “learning from media”: A critique. Educational Communications and Technology Journal, 32(4), 233–241. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02768896

Petkovich, M.D., & Tennyson, R.D. (1985). A few more thoughts on Clark’s “Learning from Media”. Educational Communications and Technology Journal, 33(2), 146. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02769117

Phillips, R., Kennedy, G., & McNaught, C. (2012). The role of theory in learning technology and evaluation research. Australasian Journal of Educational Technology, 28(7), 1103-1118. https://doi.org/10.14742/ajet.791

Reigeluth, C.M. (1983). Instructional Design: What is it and Why is it? In C.M. Reigeluth (Ed.), Instructional-design theories and models: An overview of their current status (pp. 3-36). Hillsdale, NJ: Lawrence Erlbaum Associates.

Reigeluth, C.M. & An, Y. (2009). Theory building. In C. M. Reigeluth & A. Carr-Chellman (Eds.), Instructional-design theories and models: Building a common knowledge base (Vol. III) (pp. 365-386). Hillsdale, NJ: Lawrence Erlbaum Associates.

Reigeluth, C.M. & Carr-Chellman, A. (2009). Understanding instructional theory. In C. M. Reigeluth & A. Carr-Chellman (Eds.), Instructional-design theories and models: Building a common knowledge base (Vol. III) (pp. 3-26). Hillsdale, NJ: Lawrence Erlbaum Associates.

Reigeluth, C.M. & Keller, J. B. (2009). Understanding instruction. In C. M. Reigeluth & A. Carr-Chellman (Eds.), Instructional-design theories and models: Building a common knowledge base (Vol. III) (pp. 27-35). Hillsdale, NJ: Lawrence Erlbaum Associates.

Reigeluth, C. M., & Frick, T. W. (1999). Formative research: A methodology for creating and improving design theories. In C.M. Reigeluth (Ed.), Instructional-design theories and models: A new paradigm of instructional theory, volume II (pp. 633-651). Hillsdale, NJ: Lawrence Erlbaum Associates.

Reiser, R.A. (1994). Clark's invitation to the dance: An instructional designer's response. Educational Technology Research & Development, 42(2), 45–48. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02299091

Richey, R.C. (1998). The pursuit of useable knowledge in instructional technology. Educational Technology Research & Development, 46(4), 7–22. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02299670

Richey, R.C. (2000). Reflections on the state of educational technology research and development: A response to Kozma. Educational Technology Research & Development, 48(1), 16–18. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02313482

Ross, S.M. (1994). Delivery trucks or groceries? More food for thought on whether media (will, may, can't) influence learning. Educational Technology Research & Development, 42(2), 5–6. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02299086

Ross, S.M., & Morrison, G.R. (1989). In search of a happy medium in instructional technology research: Issues concerning external validity, media replications, and learner control. Educational Technology Research & Development, 37(1), 19–33. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02299043

Shrock, S.A. (1994). The media influence debate: Read the fine print, but don't lose sight of the big picture. Educational Technology Research & Development, 42(2), 49–53. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02299092

Sickel, J.L. (2019). The great media debate and TPACK: A multidisciplinary examination of the role of technology in teaching and learning. Journal of Research on Technology in Education, 51(2), 152-165. https://doi.org/10.1080/15391523.2018.1564895

Stone, D.E., & Glock, M.D. (1981). How do young adults read directions with and without pictures? Journal of Educational Psychology, 73(3), 419-426.

Stringer, E. T. (2008). Action research in education (2nd ed.). Pearson Prentice Hall.

Stringer, E. T., & Aragon, A. O. (2021). Action research (5th ed.). Sage Publications.

Surry, D.W, & Ensminger, D. (2001). What’s wrong with media comparison studies? Educational Technology, 41(4), 32-35. https://www.jstor.org/stable/44428679

Tennyson, R.D. (1994). The big wrench vs. integrated approaches: The great media debate. Educational Technology Research & Development, 42(2), 15–28. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02298092

Ullmer, E. (1994). Media and learning: Are there two kinds of truth? Educational Technology Research and Development, 42(1), 21-32. https://doi-org.proxyiub.uits.iu.edu/10.1007/BF02298168

Wang, F., & Hannafin, M. J. (2005). Design-based research and technology-enhanced learning environments. Educational Technology Research and Development, 53(4), 5–23. https://doi.org/10.1007/BF02504682

Peter C. Honebein

Customer Performance Group

Dr. Peter C. Honebein, co-founder and managing director of the Customer Performance Group, focuses his career on researching, designing, and developing innovative employee and customer performance improvement solutions in wildly different contexts for high priority, high visibility initiatives. Peter’s clients are C-level executives, seasoned managers, and visionary entrepreneurs who appreciate creative evidence-based methods, systematic process, collaborative engagement, influential thought leadership, and speed. As a thought leader in the field, Peter was editor-in-chief from 2016 to 2018 of ISPI’s monthly journal, Performance Improvement, and has served as an adjunct professor at Indiana University, Boise State University, and University of Nevada, Reno, teaching graduate classes in instructional theory, instructional strategy, human performance technology, evaluation, marketing, and customer experience. He publishes in both peer-reviewed and non-peer reviewed journals, receiving AECT/ETR&D's Outstanding Research Reviewer Award in 2016 and 2020 and the AECT Research and Theory Division Outstanding Theoretical Journal Article Award in 2021. Peter authored the books Strategies for Effective Customer Education and Creating Do-It-Yourself Customers, both published by the American Marketing Association. Peter received his Ph.D. in instructional systems technology from Indiana University and currently resides in Reno, NV.
Charles M. Reigeluth

Indiana University

Charles M. Reigeluth is a distinguished educational researcher and consultant who focuses on paradigm change in education. He has a B.A. in economics from Harvard University, and a Ph.D. in instructional psychology from Brigham Young University. He taught high-school science for three years, was a professor at Syracuse University for 10 years, was a professor at Indiana University for 25 years, and is currently a professor emeritus in the School of Education at Indiana University. While at Indiana University, he facilitated a paradigm change effort in a small school district in Indianapolis for 11 years. His latest books are Instructional-Design Theories and Models, Volume IV: The Learner-Centered Paradigm of Education (www.reigeluth.net/volume-iv), Vision and Action: Reinventing Schools through Personalized Competency-Based Education (www.reigeluth.net/vision-and-action), and Merging the Instructional Design Process with Learner-Centered Theory: The Holistic 4D Model (www.reigeluth.net/holistic-4d). They chronicle and offer guidance for a national transformation in K-12 education to the learner-centered, competency-based paradigm. He offers presentations and consulting on this topic.

This content is provided to you freely by EdTech Books.

Access it online or download it at https://edtechbooks.org/foundations_of_learn/media_methods.