
Modeling Expertise Through Decision-based Learning

Theory, Practice, and Technology Applications
Cárdenas, C., West, R. E., Swan, R. H., & Plummer, K. J.
In higher education, faculty are generally hired for their expertise in the field. They have received extensive training in the discipline but limited training in teaching. Thus, they struggle in two ways to teach and develop expertise in novices: first, they are often blind to how their own intuitive expertise functions, and second, they lack a pedagogical strategy for teaching their intuitive expert decision-making to students. In this paper, we synthesize the literature on these difficulties for experts. We then discuss how Decision-based Learning (DBL) uses cognitive task analysis to help experts make their knowledge explicit and how DBL may be an appropriate pedagogical solution for many university professors. Finally, we provide case studies of DBL in action and discuss how educational technology can support the theory and practice of Decision-based Learning.
Keywords: DBL, Decision-based Learning, Educational Technology, Learning

Students attend the university, in part, to gain knowledge and skills for their future careers. Hopefully, their university experience provides them with the necessary foundation to perform well in the workplace. University professors serve as a primary resource to assist in developing this expertise (Elvira, Imants, Dankbaar, & Segers, 2017). In higher education, however, faculty are generally hired for their research expertise in the field. According to Persellin and Goodrick (2010), it has generally been assumed that “any PhD can teach” (p. 1, citing McGee & Caplow, 1965). Contrary to that assumption, research has shown that experts often struggle to teach and impart their expertise to learners (Bransford, Brown, & Cocking, 2000; Catrambone, 2011; Endsley, 2018; Feldon, Timmerman, Stowe, & Showman, 2010; Gordon & Guo, 2015; Nathan & Petrosino, 2003). In other words, the expertise to act skillfully within a discipline does not entail the expertise to teach that discipline. Shulman (1987) referred to this as the difference between content knowledge and pedagogical knowledge.

Unfortunately, most faculty have received limited pedagogical training (Chang, Lin, & Song, 2011; Dath & Iobst, 2010; Gilmore, Maher, Feldon, & Timmerman, 2014). Persellin and Goodrick (2010) further observed, “Professors may have a fine grasp of content knowledge, but many may not have the skills and understandings required for effective teaching” (p. 1). In other words, the lack of pedagogical knowledge may be one contributing factor in professors’ difficulty conveying their expertise to learners (Goertz, 2013). 

Moreover, other researchers have identified a content knowledge issue as well. In this case, the problem is not a lack of content knowledge; rather, expert knowledge has become so automatic that experts are unaware of the complexity of their own thought processes (Bransford et al., 2000; Endsley, 2018; Nathan, Koedinger, & Alibali, 2001). Consequently, they do not recognize the need to articulate this knowledge to learners (Feldon et al., 2010; Koh, Koedinger, Rosé, & Feldon, 2015; Ostermann, Leuders, & Nückles, 2018; Tofel-Grehl & Feldon, 2013). This phenomenon has been called “the expert blind spot” (Catrambone, 2011; Foley & Donnellan, 2019; Gordon & Guo, 2015; Nathan et al., 2001; Nathan & Petrosino, 2003).

These findings suggest that instructors need (1) help understanding more explicitly their own expert knowledge (which can be accomplished through instructional design), and (2) a pedagogical framework for teaching that knowledge to learners.  

In this paper, we will describe the theoretical and practical need for an instructional design and pedagogical strategy that overcomes the expert blind spot. We will do this, in part, by highlighting a type of knowledge called “conditional knowledge.”

We will describe a method called Decision-based Learning (DBL) that addresses these issues from a design as well as a pedagogical perspective. We will discuss the role of technology supporting DBL and briefly summarize five case studies of DBL implementation. We conclude with implications and limitations of DBL and recommendations for research and practice. 

Factors Adversely Affecting Teaching Performance 

Excellent teaching is a multidimensional endeavor. There are many factors that contribute to the success or failure of teaching. In particular, the lack of pedagogical knowledge and the expert blind spot are two of these factors that have an impact on an expert’s ability to convey their expertise. 

Lack of Pedagogical Knowledge 

For faculty, teaching is often their secondary emphasis behind their research. And in some cases, it is a distant second (Dath & Iobst, 2010; Goertz, 2013; Petrosino & Shekhar, 2018; Shulman, 2015).  And yet, Petrosino and Shekhar (2018) indicated, “it is important for teachers to possess both content knowledge and the ability to effectively teach content to novice learners” (p. 98). However, content experts often lack knowledge of pedagogical strategies (pedagogical knowledge), and how to use them to teach their content (pedagogical content knowledge) (Gilmore et al., 2014; Shulman, 1987, 2015).  

For example, Meyer (2018) interviewed 16 secondary school teachers implementing a new teaching strategy that included making decisions under unfamiliar conditions and metacognitively evaluating their thought processes. Meyer found that “teachers believe they provide their students with opportunities to practice clear procedural, and observable, steps in making a decision, but are not particularly strong in supporting the students’ thinking about their internal intellectual processes” (p. 8). This finding underscores the need for teachers not only to become aware of the internal decision-making process they engage in, but also to learn the strategies required to make that internal process transparent and learnable for their students.

The obvious practical solution to this problem would be training in pedagogy. Indeed, in the last few decades, institutions of higher education have recognized this need and formed faculty support centers to provide pedagogical training.  

Faculty clearly need pedagogical training—yet despite these efforts on campuses, we still do not see dramatic differences in student abilities. One reason may be that significant portions of an expert’s content knowledge remain hidden, not only to students but to the experts themselves, making it difficult for them to teach what they cannot see. We will discuss this phenomenon in the section that follows.

Automaticity of Thought, or the Expert Blind Spot 

Experts have accumulated a vast amount of knowledge in their field (Gobet, 2005; Gobet & Charness, 2018). Further, this knowledge has become so practiced that decisions can often be made intuitively. In other words, they have developed automaticity (Bransford et al., 2000; Dreyfus & Dreyfus, 2005, 2008; Endsley, 2018; Gordon & Guo, 2015). Automaticity occurs when complex thoughts, actions, or behaviors are aggregated and integrated into a single unit and invoked with little or no conscious thought or effort (Endsley, 2018; Ericsson, 2018b; Gobet & Charness, 2018; Swan, 2008; Swan, Plummer, & West, 2020). For example, the tennis player does not envision a “tennis serve” as a series of discrete movements, but as a single fluid action. While this automaticity is both necessary and a hallmark of expertise, it has a downside: experts may not be able to explain how they activate and use their expert knowledge—a phenomenon known as the “expert blind spot” (Catrambone, 2011; Feldon et al., 2010; Nathan et al., 2001; Petrosino & Shekhar, 2018; Tofel-Grehl & Feldon, 2013).

Consider this real-life example: engineering faculty taught students the procedure of modeling a problem as a system of equations to be solved. Students were readily able to get this far. However, the faculty noted that many students then struggled with the decision of which equation to solve. They remarked, “This is where it is just intuitive; you either know or you don’t know.” However, with time and conscious effort, they realized that they immediately recognized which equation had only one variable to solve for. Until then, they were unaware of their own thought process and had never communicated this conditional cue to students. With this blind spot revealed, the solution was simple.

The above example does not appear isolated or unique. According to Yuan, Wang, Kushniruk, and Peng (2017): 
Solving a real-world problem often involves a sophisticated process of understanding the problem, linking abstract knowledge to problem information, and applying relevant methods and strategies to solve the problem. Learning in such contexts can generate a heavy cognitive load for learners that instructors or experts often underestimate, as for them, many of the requisite processes have become largely automatic or subconscious with experience. (p. 233)

Indeed, according to a meta-analysis by Tofel-Grehl and Feldon (2013), multiple studies show that, when teaching students, experts often omit anywhere from 40-70% of what they themselves actually do. Thus, college professors—even though they have the content knowledge—can struggle to represent that knowledge accurately and completely to students (Feldon, 2010; Foley & Donnellan, 2019; Hoffman, 1998, 2016; Huang, 2018; Singer & Smith, 2013; Swan et al., 2020; Tofel-Grehl & Feldon, 2013).

Swan et al. (2020) asserted that what is obscured in the expert blind spot is a type of knowledge called conditional knowledge. Conditional knowledge is usually grouped with the more familiar procedural and conceptual knowledge. We will briefly describe procedural, conditional, and conceptual knowledge, and discuss the role of conditional knowledge in expert performance and teaching. 

Types of knowledge.

Knowledge can be characterized in different ways. One way that is relevant to this discussion is that experts have acquired procedural, conditional, and conceptual knowledge (Anderson et al., 2001; Bransford et al., 2000; Plummer, Swan, & Lush, 2017; Sansom, Suh, & Plummer, 2019; Swan et al., 2020). 

Procedural knowledge.

Procedural knowledge is the knowledge of “how to do” something (Bransford et al., 2000; Sugiharto, Corebima, Susilo, & Ibrohim, 2018; Swan et al., 2020). According to Anderson et al. (2001), “Procedural knowledge often takes the form of a series or sequence of steps to be followed” (p. 52). It can also take the form of guidelines or heuristics (Dreyfus & Dreyfus, 2005, 2008; Swan et al., 2020). In short, procedural knowledge describes how to carry out the mental and physical tasks in the domain. 

Conditional knowledge.

Conditional knowledge is the knowledge of “when” or “under what conditions”; it consists of recognizing the conditions under which given concepts and procedures apply (Bransford et al., 2000; Oluwatayo, Ezema, & Opoko, 2017; Vasyukova, 2012). When learners use their conditional knowledge, they regulate and direct their conceptual and procedural knowledge. Amolloh, Lilian, and Wanjiru (2018) asserted that conditional knowledge aids critical thinking and problem solving at a higher level.

Conceptual knowledge.

Conceptual knowledge can be described as the knowledge of the concepts, theories, and models of a domain (Anderson et al., 2001; Bransford et al., 2000; Swan et al., 2020). Conceptual knowledge has also been referred to as declarative knowledge (Sugiharto et al., 2018; Tofel-Grehl & Feldon, 2013); although Anderson et al. (2001) argued that the term conceptual knowledge is preferred when referencing larger ideas, theories, models, etc., while declarative knowledge should refer to “discrete, isolated content elements (i.e., terms and facts)” (p. 41). Conceptual knowledge develops a foundation of theoretical understanding within a discipline that helps students understand a wide array of problems (Alamäki, 2018; Barrotta & Montuschi, 2018; Greca & Moreira, 2000). For example, an architect knows what to look for in a high-quality design because they know the guiding concepts, such as stability of materials, foundational strength, and weathering patterns.

The role of conditional knowledge in expertise.  

Bransford et al. (2000) noted that one thing that sets experts apart is that their knowledge is conditionalized (see also Renkl & Mandl, 1996; Swan et al., 2020; Whitehead, 1929). Not only do they have knowledge of concepts and procedures, but they also understand the conditions under which to apply that knowledge (Amolloh et al., 2018; Elvira et al., 2017; Lorch, Lorch, & Klusewitz, 1993). As Barrotta and Montuschi (2018) put it, experts have “knowledge of the specific relevant circumstances to which [theory] is applied” (p. 390). Experts are skilled at decision-making in novel situations because they recognize conditional patterns and can focus on productive strategies (Elvira et al., 2017; Ivarsson, 2017; Johnson, 2005; Oluwatayo et al., 2017). Where novices are swimming in a sea of surface features, experts cut through to the conditions that are salient and essential (Le Maistre, 1998; Swan et al., 2020; Van de Wiel, 2017).

The role of conditional knowledge in learning.  

As far back as 1929, Whitehead argued that knowledge is “inert” unless and until it is “conditionalized” (see also Bransford et al., 2000; Swan et al., 2020). Yet, most instruction in higher education focuses on conceptual knowledge with a degree of procedural knowledge (Bransford et al., 2000; Hovious, 2016; Swan et al., 2020). Walsh and Kotzee (2010) concluded that, “universities have taught much [conceptual] knowledge and some procedural knowledge, but students have had to develop the conditional knowledge which is necessary to achieve fully functioning knowledge on their own after graduation” (p. 40, citing Biggs (2003), emphasis added).  

Thus, what appears to get lost in the expert blind spot—and therefore left out of instruction—is conditional knowledge (Swan et al., 2020). Yet, when conditional knowledge is made explicit, student abilities improve. For example, Sansom et al. (2019) found that a practice module focused on conditional knowledge significantly improved students’ ability to reason through heat and enthalpy problems. Sugiharto et al. (2018) attributed the superior performance of preservice biology teachers to the presence of conditional knowledge. Van De Kamp, Admiraal, and Rijlaarsdam (2016) found that conditional knowledge is a key component in helping high school art students brainstorm more easily and come up with more original ideas. Indeed, there is a growing body of research highlighting the importance of conditional knowledge (Endsley, 2018; Fadde, 2009; Garikano, Garmendia, Manso, & Solaberrieta, 2019; Raymond, 2019; Schmidmaier et al., 2013).

In relation to education, Swan et al. (2020) proposed a model in which conditional knowledge is a necessary precondition for developing full conceptual understanding. They noted that conceptual understanding is necessary for the adaptive expertise we hope students will eventually achieve. However, when faculty inadvertently omit conditional knowledge, it is as if they are trying to skip past a necessary developmental step. That step must still occur before full conceptual understanding is achieved; as Walsh and Kotzee (2010) assert, students are left to take it on their own after graduation. It would seem logical, therefore, that helping faculty make their conditional knowledge explicit to themselves first, and then providing a pedagogy that makes such knowledge explicit to their students, would improve student learning and better prepare them for their careers. In fact, when expert performance is made more explicit, significant effect sizes are found in students’ improvement of procedural knowledge, declarative [conceptual?] knowledge, self-efficacy, and speed of performance (Tofel-Grehl & Feldon, 2013).

While there are many active learning, or experiential learning, models that expose students to activities where they might tacitly acquire some conditional knowledge, we have not found an instructional design or teaching method that focuses specifically on conditional knowledge. Decision-based Learning (DBL), on the other hand, is both an instructional design process and a pedagogical method that uses conditional knowledge as its organizing principle. Developed in 2013 by educators from Brigham Young University, DBL has been implemented in statistics, mathematics, writing, religious studies, psychology, and chemistry courses, and is currently being developed for accounting, biology, and physical science courses. In the sections that follow, we will discuss the design process for DBL and its pedagogical implementation. We will further summarize five examples of DBL implementation.

Decision-based Learning as an Instructional Design Method. 

DBL relies on the presence of a structured framework organized around the decision-making processes of an expert. These decisions largely depend on the conditions present in the situation. Thus, the conditional knowledge that guides expert decision-making is the backbone of DBL. This structure—called an Expert Decision Model (EDM)—must be designed. Instructional modules are then designed around these decision points. Then, a bank of problems, cases, or scenarios that represent the desired range of application is populated. 

Thus, to facilitate the design process, these steps are followed: 
1) Define a course/unit/model purpose 
2) Map an Expert Decision Model 
3) Create Just-Enough-Just-In-Time Instruction for each decision point 
4) Populate the problem bank with multiple problems/cases/scenarios  
5) Create interleaved assignments 
We will discuss each of these steps in turn. 
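To make these design artifacts concrete, the sketch below shows one plausible way an EDM and its companion problem bank could be represented in software. This is an illustrative data model only; the class and field names are our own and do not reflect any particular DBL implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class DecisionPoint:
    """One node in an Expert Decision Model: a question with options (step 2)."""
    question: str
    # Maps each option label to the next decision point; None marks an endpoint.
    options: Dict[str, Optional["DecisionPoint"]] = field(default_factory=dict)
    # Just-enough-just-in-time instruction attached to this decision (step 3).
    instruction: str = ""


@dataclass
class Problem:
    """A scenario in the problem bank, tagged with its correct path (step 4)."""
    scenario: str
    # The sequence of correct option labels from the root to an endpoint.
    correct_path: List[str] = field(default_factory=list)


@dataclass
class ExpertDecisionModel:
    purpose: str                 # step 1: the course/unit/model purpose
    root: DecisionPoint          # step 2: the mapped decision structure
    problem_bank: List[Problem] = field(default_factory=list)
```

Step 5, interleaving assignments, operates on the problem bank and is sketched later in this chapter.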

Defining a Purpose

Experience indicates that many instructors enter the classroom with little guiding purpose other than to “cover the material.” Defining a clear purpose in terms of student abilities is the first step in uncovering the expert blind spot. This step alone has value to the instructor in that they are better able to articulate what they are trying to accomplish. Where possible, it is best to generate a purpose for the entire course and further break out elements of the purpose as necessary for units in the course. On a theoretical note, this step is an essential component of backward design (see Wiggins & McTighe, 2005). 

To define a course purpose, we ask faculty experts to visualize, and brainstorm on paper, what successful students will be like at the conclusion of their courses. What would “expert” student performance look like? We then work with them to encapsulate this vision into a single sentence. Next, we direct the faculty to break down the course purpose into components that define the scope of what the course will contribute to accomplishing the purpose. These elements become the learning outcomes for the course. Below is an example from an introductory statistics course for graduate students: 

Course Purpose:

Students will be able to produce credible quantitative research of their own and evaluate the credibility of published quantitative research in their field.

It is not expected that a single course will be able to attend to all the facets of a worthwhile course purpose. In our example, the following learning outcomes operationalize the course purpose into the components of producing and evaluating credible research that this course will address.

Learning Outcomes: 

Mapping an Expert Decision Model.  

Most of the work to make conditional knowledge explicit and teachable occurs in the step of mapping an EDM. DBL uses a form of cognitive task analysis (CTA; see Feldon, 2010; Feldon et al., 2010; Tofel-Grehl & Feldon, 2013) to walk faculty through a think-aloud process. This process elicits the decisions they would make as an expert when approaching a representative scenario or problem. Where most efforts at CTA focus on procedural knowledge (see Koh et al., 2015; Tofel-Grehl & Feldon, 2013), DBL puts primary emphasis on experts’ decisions and the conditions that inform those decisions. Appropriately, then, an EDM represents an expert’s decision-making process as a coherent set of decision paths. Each decision path leads to a culminating decision that reflects accomplishment of one or more learning outcomes.

Consequently, the next step is to determine which outcome or outcomes rely on conditional knowledge, and therefore, are candidates for an EDM. We have found that DBL is best suited to addressing learning outcomes that fall into the categories of analyze or evaluate (using Bloom’s Revised Taxonomy). An analysis learning outcome directs students to classify, categorize, and/or break down a problem, product, process, or scenario into its constituent parts in order to accomplish a given goal (Anderson et al., 2001). In the above example, students are analyzing a research scenario to classify it according to statistical method.

Evaluate learning outcomes focus on determining the value of something based on internal or external criteria (Anderson et al., 2001). The criteria for evaluation are usually very dependent on conditional knowledge: the presence, absence, degree, or quality of a property, feature, or condition. In the above statistics example, the last learning outcome, about evaluating published research, uses the same conditional knowledge to determine whether the method used by researchers was appropriate for the research situation. This conditional knowledge will already have been learned; thus, we do not need to create a separate EDM for the evaluative learning outcome.

With an appropriate learning outcome in mind, the expert then poses a representative problem, case, or scenario. With guidance, the expert carefully examines their own thinking to identify the questions they themselves would ask when analyzing or evaluating the specific, concrete example they chose. Decision points are represented as questions with possible options. Figure 1 depicts an example of a first decision point of an EDM.

Figure 1. First decision point on a decision path. 
 


Note that the example creates two possible paths. The most efficient method of mapping an EDM is to follow one path to its conclusion rather than trying to address all options at one time. As other problem types are taken through the path, these other options will be fleshed out.  

Thus, the expert/instructor identifies the correct option and the conditions that make it correct for the given scenario. They then identify the next logical decision for the scenario. In this example, if the scenario is inferential, then the next question could be whether the scenario is asking about a comparison between groups, a relationship between variables, or goodness of fit (see Figure 2). This process of creating questions and options continues until they reach an endpoint in the decision model. The endpoint reflects the learning outcome that was chosen. In this example, the endpoint is selecting the appropriate statistical method. As more representative problems are created and taken through this process, the EDM takes shape.
 
Figure 2. Second decision point on a decision path. 
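Using the illustrative data model sketched earlier, the decision path shown in Figures 1 and 2 might be encoded as follows. The question wording paraphrases the statistics example in the text; the endpoint method and the instruction strings are hypothetical placeholders.

```python
# Hypothetical endpoint for the "inferential -> comparison" branch; a real
# model would continue with further decision points before naming a method.
select_method = DecisionPoint(
    question="Which statistical method fits these conditions?",
    options={"independent-samples t-test": None},
)

second_decision = DecisionPoint(
    question=("Is the scenario asking about a comparison between groups, "
              "a relationship between variables, or goodness of fit?"),
    options={
        "comparison between groups": select_method,
        "relationship between variables": None,  # fleshed out on later passes
        "goodness of fit": None,
    },
)

first_decision = DecisionPoint(
    question="Does the scenario call for descriptive or inferential statistics?",
    options={
        "inferential": second_decision,
        "descriptive": None,  # fleshed out as other problem types are mapped
    },
    instruction=("Use inferential statistics to infer something about a "
                 "population from a sample; use descriptive statistics to "
                 "describe data from an entire population."),  # placeholder
)
```

Consistent with the mapping process described above, only one path is followed to its conclusion here; the None branches are filled in as other problem types are taken through the model.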


As mentioned above, conditional knowledge is foundational. However, there appears to come a point where the foundation has been laid and additional conceptual and procedural knowledge builds on the conditional knowledge already acquired. In other words, an EDM is not necessary for all units of a course. Some learning outcomes depend heavily on conditional knowledge while others build upon the conditional knowledge just acquired. In the above statistics example, virtually all the conditional knowledge is taught as students learn to select the appropriate statistical method. The other three outcomes use and build upon that base. Thus, it is not necessary to develop an EDM for all learning outcomes. 

Creating Just-Enough-Just-In-Time Instruction.  

With an EDM in place, instruction is mapped onto each decision point. In this way, instruction will be provided just in time. It is also important to keep the instruction focused only on what students actually need, or just enough instruction to make the current decision. It seems to be an almost irresistible temptation of all teachers (including ourselves) to teach students everything about a topic—it is all important, isn’t it? Unfortunately, the result is cognitive overload (see Sweller, 2011). Providing just-enough-just-in-time instruction manages cognitive load and helps students see what matters most at this particular time. The concepts will reappear at other times, when different facets of these concepts will be pertinent and, consequently, taught at that time.

For the first decision in our statistics example, students need to know the difference between a sample and a population. They need to know that inferential statistics are used to infer something about a population from a sample, and descriptive statistics are used to describe data from an entire population. Then they are taught the conditional cues that help them identify whether the research scenario is dealing with a sample or a population. From this, they can make the first decision.

Populating the Problem Bank.

Gaining expertise takes not only practice, but deliberate practice (Ericsson, 2006, 2018a). One form of deliberate practice is to make sure that each decision path has a generous supply of problems or scenarios, providing students enough practice on every branch to achieve competence. The process of creating problems is not difficult but can be time-consuming. One way to make it more efficient is to reuse the same storyline but modify the conditions in the story to fit the various decision paths, as sketched below. This also has instructional benefit. When students see the same storyline with different conditions, they begin to think like experts and focus on the salient conditions rather than the surface features of the story.
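A minimal sketch of this storyline-reuse strategy, using the same illustrative data model as before: one narrative template is instantiated with different conditions so that each variant exercises a different decision path. The template text and paths are invented for illustration.

```python
BASE_STORY = ("A researcher collects exam-score data from {source} "
              "and wants to {goal}.")

# Each variant keeps the storyline but changes the salient conditions,
# steering the problem down a different decision path (hypothetical paths).
variants = [
    {"source": "a random sample of 40 students",
     "goal": "compare two teaching methods",
     "path": ["inferential", "comparison between groups"]},
    {"source": "every student enrolled in the program",
     "goal": "summarize typical performance",
     "path": ["descriptive"]},
]

problem_bank = [
    Problem(scenario=BASE_STORY.format(**v), correct_path=v["path"])
    for v in variants
]
```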

Creating Interleaved Assignments 

One of the challenges that many learners face when processing new material is a phenomenon called interference (Underwood, 1957; Rieber & Salzinger, 1998). Interference occurs when old information interferes with the acquisition of new information. The converse is also possible, where new information restructures student thinking in a way that interferes with correct conceptions of previously learned material. Interleaving is a strategy devised to diminish interference (Birnbaum, Kornell, Bjork, & Bjork, 2013). Essentially, interleaving is achieved when previously learned tasks are systematically combined with new tasks. For example, if students practice applying one set of concepts in week 1 and another set in week 2, then the week 2 assignment will include practice problems from both week 1 and week 2. This continues across a term, culminating in assignments that are interleaved with concepts from the entire course.

DBL assignments are interleaved because each new assignment is sprinkled with problems that are equivalent to problems from previous assignments. This keeps the array of problems across the entire decision model consistently in front of the students as they progress through the course, making it possible for meaningful pattern recognition to occur. 
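As a sketch of the interleaving rule just described: each week's assignment draws problems for the current topic and mixes in problems equivalent to those from all previous weeks. The half-new, half-review split below is an arbitrary assumption, not a prescription from the DBL literature.

```python
import random


def build_interleaved_assignment(bank_by_week, current_week, size=10, seed=0):
    """Mix current-week problems with review problems from all prior weeks."""
    rng = random.Random(seed)
    new_items = bank_by_week[current_week]
    old_items = [p for week in range(1, current_week) for p in bank_by_week[week]]
    n_new = min(len(new_items), max(1, size // 2))  # arbitrary split (assumption)
    n_old = min(len(old_items), size - n_new)
    assignment = rng.sample(new_items, n_new) + rng.sample(old_items, n_old)
    rng.shuffle(assignment)  # intermix review and new problems
    return assignment
```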
These constitute the basic design steps for creating a DBL unit or course. In the section that follows, we will discuss the implementation of DBL as a pedagogy.  

Decision-based Learning as a Pedagogical Strategy 

DBL Situates Learning in Realistic Scenarios 

Decision-based Learning begins within the context of real, or realistic, scenarios, although this is not a distinguishing feature of DBL. Other constructivist approaches also situate learning in real, or realistic, contexts. However, many of these methods still focus on conceptual knowledge with conditional knowledge left to be developed only tacitly (Prince, 2004; Prince & Felder, 2006, 2007; Walsh & Kotzee, 2010).  

Unlike most traditional pedagogies, DBL presents a realistic scenario to learners before they receive instruction. Typically, the students do not know how to solve it, so they receive just-enough-just-in-time instruction and learn how to make the correct decisions for the given scenario. In this way, learning takes place to fulfill an immediate need rather than through the passive transmission of information. To use our example above, instead of learning about statistical procedures through lecture and reading, students are presented with a research problem and asked initial questions about what kind of research question it poses and the kinds of data available. If the student does not know, for example, the difference between an inferential and a descriptive research question, then concise instruction is available for them to learn in that moment.

Students continue, taking the scenario stepwise through a decision path. Students learn within the context of solving a concrete, real-world problem, thus fostering conditional knowledge as they understand the conditions for when and why to make various decisions. In this way, they learn the relevant concepts, theories, models and/or procedures necessary to answer each question in order to select the appropriate option. In essence, DBL operationalizes this statement by Gobet (2005): “perceptual skills, anchored in concrete examples, play a central role in the development of expertise, and conceptual knowledge is later built on such perceptual skills” (p. 193). Consequently, DBL instruction focuses on building recognition of conditional patterns that then invoke relevant concepts and procedures.  

Key elements of DBL as a pedagogy include: 

Conditionalizing Knowledge with High Scaffolding 

It is generally accepted that learning is enhanced when learners are provided appropriate scaffolding (Collins, 2006). In general, it appears that scaffolding is a component the instructor is expected to add adaptively to a standardized lesson plan (Collins, 2006; Pentimonti et al., 2017). In contrast, the DBL design process of identifying decision points and creating just-enough-just-in-time instruction inherently creates highly scaffolded instruction. Initially, teaching is highly scaffolded as well: the instructor walks students through scenarios in a stepwise fashion, which gives them time to process each decision. In later student interactions, the role of the instructor shifts from providing instruction and scaffolding to monitoring student progress through the EDM in order to fade scaffolding as automaticity develops. This process works well in a hybrid situation where students prepare for class on their own by taking problems through the model until they feel confident in their abilities. The instructor can then use class time to gauge their progress in a moderately scaffolded environment.

Monitoring Progress with Moderate Scaffolding 

Ideally, students practice in a highly scaffolded environment before class. The instructor can use class time to pose scenarios and have the class work through them together. In this way, classmates and the instructor provide moderate scaffolding for each other. Working together reveals weaknesses in individual learners’ acquisition of the conditional schema and, at the same time, reinforces their understanding. In these interactions, the instructor can gauge how well students are progressing by looking for signs of automaticity. If students are quickly making connections, they are developing automaticity. If they are hesitating, or taking time to think things through, more practice is indicated.
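Software could support this judgment by flagging hesitation from response latencies at each decision point. The sketch below is a crude heuristic of our own devising; the threshold is an arbitrary placeholder, not an empirically derived value.

```python
def needs_more_practice(response_times_sec, threshold=8.0):
    """Flag a student for more practice when responses suggest hesitation.

    response_times_sec: seconds spent on each decision point in a problem.
    threshold: arbitrary cutoff separating quick, automatic responses from
    effortful ones (an assumption for illustration only).
    """
    slow = [t for t in response_times_sec if t > threshold]
    # Recommend more practice if a third or more of the decisions were slow.
    return len(slow) >= max(1, len(response_times_sec) // 3)
```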

Requiring Accountability with No Scaffolding 

The EDM could become a crutch unless students are held accountable for performing with no scaffolding. This generally takes the form of some type of assessment. The assessment should mirror the performance students have been practicing, but again, no EDM or other supports are allowed. This might mean presenting students with additional research contexts and questions (to use our example from statistics) and asking them to again choose the correct statistical procedures, but without the EDM to provide the scaffolding. Students who have put in the work most often “ace the test.” Students who think they can get by with little practice will most likely fail the assessment. There is little in between. When students realize this fact, the problem of motivation goes away. We suggest that assessments be given early and often.

In our experience, the question of when and how to provide scaffolding has been a little vague with other pedagogical strategies. DBL builds scaffolding and its removal into the process. While technology is not required to implement DBL, technology is a great facilitator, especially in terms of providing a highly scaffolded environment. Consequently, we will discuss the supportive role of technology below.  

Supporting DBL Theory and Practice with Technology 

The basic concepts of DBL do not require technology to be implemented. In the original implementation in 2013, instructors used PowerPoint and handouts. Other instructors created incomplete paper versions of their models and had the students work in groups to fill them out. These completed handouts were then used as a type of worked example or guide for their homework and in-class discussion. Another instructor used common mind-mapping software to create and visualize expert decision models for teaching qualitative research methods and program evaluation (see Figure 3; see also West & Leary, 2019). Some instructors indicate anecdotally that just the process of thinking through their conditional knowledge changes their instruction and improves student learning. However, without supporting technology, the barrier to full implementation of DBL is high. Recently, software was developed specifically to support DBL. The software facilitates creating EDMs (see Figure 3), putting problems in a problem bank, attaching problems to decision paths, and creating assignments for students. It integrates with learning management systems (LMS) that are compatible with Learning Tools Interoperability (LTI) standards.

Figure 3. A complete Expert Decision Model.

The software also presents problems, decision points, and just-enough-just-in-time instruction to students and tracks their progress. Students do not see the whole EDM at once, but rather are presented with one decision point at a time (see Figure 4). If they know the correct response, they can proceed quickly. If they need help, they can immediately access the just-enough-just-in-time instruction for the decision point they are on.

Figure 4. Student view showing a scenario with a decision point.
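The student-facing behavior just described (one decision point at a time, with just-enough-just-in-time instruction available on demand) can be sketched as a simple traversal of the EDM from the earlier data model. The console interaction below is purely illustrative and is not the interface of the actual DBL software.

```python
def work_problem(problem, node):
    """Walk a student through one scenario, one decision point at a time."""
    print(problem.scenario)
    for correct_option in problem.correct_path:
        print("\n" + node.question)
        options = list(node.options)
        for i, option in enumerate(options, start=1):
            print(f"  {i}. {option}")
        choice = input("Your choice (or 'help' for instruction): ").strip()
        if choice == "help":
            # Just-enough-just-in-time instruction, shown only on demand.
            print(node.instruction or "No instruction attached to this decision.")
            choice = input("Your choice: ").strip()
        if options[int(choice) - 1] != correct_option:
            print(f"Not quite. Given these conditions: {correct_option}.")
        node = node.options[correct_option]  # advance along the correct path
        if node is None:
            break
    print("End of decision path reached.")

# Example use, with the hypothetical objects from the earlier sketches:
# work_problem(problem_bank[0], first_decision)
```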
Anticipating the benefits of technology as a resource for teaching expertise, Bol and Garner (2011) wrote: 
For example, if learning materials were to be presented through the medium of an electronic learning platform, and if that platform could support flexible problem solving behaviors by presenting a sequence of steps through which students can navigate in order to consider more and less likely solutions to a problem, then students may be able to interact more effectively and strategically with the information even while they are relative novices in the subject area. (p. 116) 
We believe this to be an apt description of how software solutions could support the decision-based learning pedagogy. 

Initial Evidence of Effectiveness 

In this section we will review case studies of implementation of this pedagogy and emerging research supporting its effectiveness in a few selected disciplines. 

Statistics 

Plummer et al. (2017) performed a small pilot study on the effectiveness of this new DBL approach. One statistics course and two differential equations courses implemented DBL for a whole semester. The DBL pedagogy was employed during class using a beta version of the DBL software, which was relied upon heavily for the homework assignments. Each class session covered a part of the DBL expert decision model. The in-class pedagogy included just-enough-just-in-time instruction, along with teacher scaffolding of students through the decision model.

The results showed that the majority of the students benefited from the DBL experience. Their academic performance improved, and they reported being able to learn the material with more ease. Much was learned in these early iterations of DBL, including the importance of more concise learning modules and better fading of the decision-model scaffolding.

Religious Education 

DBL has also been implemented in a religious education class, where the students used the DBL software multiple times a week for homework to identify contextual themes from different scriptural passages (Plummer, Taeger, & Burton, 2020). Five students were interviewed, and overall they felt they benefited from DBL. Several mentioned that as they took passages of the text through the decision model, they were able to observe larger patterns associated with the context of each passage. Two students reported feeling that the complexity and authenticity of the text itself became even more apparent as they took more and more passages through the decision model.

Heat and Enthalpy in Chemistry 

In addition, Sansom et al. (2019) implemented DBL in a chemistry class to see if it improved test scores for a heat and enthalpy exam. The professor implemented the pedagogy along with the software for two days. The students then worked on the DBL software from home before the heat and enthalpy exam.

The same exam was distributed to two separate cohorts of students. One cohort received the professor’s original instruction, and the other cohort received the professor’s DBL instruction. The professor provided the experimental DBL instruction for two class sessions. Student activity was recorded in the DBL software to see how they performed on their homework. There were 129 students who participated in the study and were present for at least one of the DBL class sessions.  

The students who used the software for five problems (N=74) or 10 problems (N=55) had higher scores on the heat and enthalpy test than the students who did not receive the DBL instruction. The control group had a mean test score of 4.5, and the students who completed five and 10 DBL problems scored higher by mean differences of +0.626 (p=0.002) and +0.577 (p=0.018), respectively. Therefore, students who were taught with DBL appeared to make greater learning gains than students who were not.

First case study from Peru 

In August of 2018, we led a 5-day DBL workshop at a Peruvian public university, training 25 professors from various disciplines in DBL pedagogy. After implementing the methodology in their teaching, most of the professors reported that the pedagogy helped engage their students more and helped the students understand the material better, while students indicated they liked the innovative approach, which contrasted with these professors’ traditional ways of teaching. One of the difficulties the professors faced during the workshop, however, was a lack of time to develop their models and teaching materials.

Second case study from Peru 

In March of 2019, we returned to perform another workshop. We followed the same structure as the previous workshop, except that we began training on the software on the second day instead of the fourth. The professors had more time to develop their models directly in the software, and they seemed to accomplish more than the previous group.

In the videos that they recorded after the workshop, they implemented the pedagogy better than the first group. In their survey responses, some indicated that this is a “logical pedagogy” and that their students are more “active” and “owning the learning” because they make the decisions about how to solve the problems. However, they still expressed that they needed more time to finish their models and practice the pedagogy. In summary, more research needs to be done, but initial findings are promising.

Implications for Instructional Design and Teaching 

Traditionally, instruction has been about “covering the material,” and learning has been seen as remembering the material, or knowledge acquisition. Instructional design has often been about different ways to present the material and make it memorable. It has been assumed that remembered knowledge is structured knowledge. However, knowledge acquisition and schema-building are complementary yet separate learning activities. Rather than creating generic instruction with low scaffolding, designers and experts should create the highly scaffolded version that supports schema-building. It is much easier to remove scaffolding than to provide it on the fly. Further, designers need to work with subject matter experts to uncover hidden conditional knowledge and make it explicit; to situate learning in its immediate context; and to provide just enough instruction just in time. In short, we propose that conditional schema-building should be an explicit first-order learning activity.

Further, we believe conceptual understanding matures as students see the range and variety of interactions—the conditional interactions—between elements of the domain. There is no denying that conceptual understanding is important, but we argue that ignoring the explicit teaching of conditional knowledge is trying to leapfrog past an essential developmental step (see Swan et al., 2020). The process of decision-making in realistic scenarios brings conditional knowledge and its connecting role to the forefront. 

Implications for Research 

In addition to implications for practice, we believe the DBL method discussed in this paper lends itself to future research. First, the connections between domains of knowledge and their corresponding types of expertise could be further explored. This research should be both basic/theoretical and applied, as understanding is needed of how to teach and develop the various kinds of expertise. Also, while there is a rich history of research into schema development, most of that research focuses on conceptual rather than conditional knowledge, so the link between conditional knowledge and the development of expert schemas needs further research.

Second, we need more research in how learners of various skill levels and in various domains recognize the conditional cues in their discipline, and how they learn to apply conditional rules to their field. More research is needed particularly in the differences between vocational domains and more knowledge-oriented ones.  

Third, research is needed in the most effective methods for mapping and organizing conditional information for learners, and whether technology can effectively help bridge this conditional knowledge gap by providing this mapping organization, cues to prompt learners in applying various conditionalities, and effective just-in-time instruction.

Fourth, future research should be undertaken to better understand qualitatively the nature of knowledge developed (or not) through this DBL method. In addition, effectiveness across international and multicultural boundaries is particularly interesting to us. Larger-scaled research that can test the impact of DBL strategies on student learning outcomes is particularly needed, as well as how teaching with DBL affects teachers’ own learning of their domain, and deepening of their own expertise.  

Fifth, research is needed to understand whether teaching with DBL can help students better bridge the skills gap into the workforce and transfer their conditional knowledge from the academic setting into real-world scenarios. In addition, more development is needed of additional technological solutions that can assist in teaching this way. This is particularly critical because we argue that conditional knowledge is often not explicitly taught in the university, perhaps because of the lack of educational technologies to support this teaching.

Sixth, especially during this time of pandemic “crisis” teaching, when in-person instruction is often limited, and with it instructors’ ability to scaffold students in person, research could explore how decision-based learning can support online learning and scaffolding. Part of this research could examine how much “teaching presence” (Garrison, 2007) students perceive from their instructors when working through an educational decision model.

Finally, while we describe one technological solution in this paper, others could undoubtedly be developed. We welcome any potential collaboration with scholars interested in exploring these potential applications of DBL together. 

Conclusions 

Most faculty want to be successful teachers. But most faculty have not been trained as teachers and are left to replicate what they have seen done by other faculty who were also not trained to be teachers. In addition, most faculty are unaware of the conditional knowledge they have acquired tacitly. Therefore, they are unable to explain it to students. Students, in turn, want to see the relevance of the concepts they are learning. Yet it is precisely the invisible conditional knowledge that makes concepts functional and relevant. DBL is both a design process and a teaching method that makes conditional knowledge visible. DBL organizes instruction around decision points where the conditions invoke relevant concepts and procedures. When knowledge is made functional, students are more confident and better prepared to embark on their careers. DBL is still relatively new and needs further development and research. Yet, initial results indicate that DBL holds promise of helping experts convey their expertise to students. In this way students can develop their expertise and be better prepared to enter the crowded, complicated, and innovation-focused global economy.  

Funding 

The authors declare the following competing financial interest(s): Richard H. Swan and Kenneth J. Plummer are founding partners in Conate Incorporated, a company developing software to support decision-based learning pedagogy.  

References 

Alamäki, A. (2018). A conceptual model for knowledge dimensions and processes in design and technology projects. International Journal of Technology and Design Education, 28(3), 667-683.


Amolloh, O. P., Lilian, G. K., & Wanjiru, K. G. (2018). Experiential learning, conditional knowledge and professional development at University of Nairobi, Kenya—Focusing on preparedness for teaching practice. International Education Studies, 11(7), 125-135. doi:10.5539/ies.v11n7p125 


Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., . . . Wittrock, M. C. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives, abridged edition. White Plains, NY: Longman. 


Barrotta, P., & Montuschi, E. (2018). Expertise, Relevance and Types of Knowledge. Social Epistemology, 32(6), 387-396. doi:10.1080/02691728.2018.1546345 


Bol, L., & Garner, J. K. (2011). Challenges in supporting self-regulation in distance education environments. Journal of Computing in Higher Education, 23(2-3), 104-123.  


Bransford, J., Brown, A., & Cocking, R. (Eds.). (2000). How people learn: Brain, mind, experience, and school (2nd ed.). Washington, D.C.: National Academy Press.


Catrambone, R. (2011). Task analysis by problem solving (TAPS): Uncovering expert knowledge to develop high-quality instructional materials and training. Paper presented at the Learning and Technology Symposium, Columbus, GA.  


Chang, T. S., Lin, H. H., & Song, M. M. (2011). University faculty members’ perceptions of their teaching efficacy. Innovations in Education and Teaching International, 48(1), 49-60. doi:10.1080/14703297.2010.543770 


Collins, A. (2006). Cognitive Apprenticeship. In R. K. Sawyer (Ed.), Cambridge Handbook of the Learning Sciences (First ed., pp. 47-60). New York: Cambridge University Press. 


Dath, D., & Iobst, W. (2010). The importance of faculty development in the transition to competency-based medical education. Medical Teacher, 32(8), 683-686. doi:10.3109/0142159x.2010.500710 


Dreyfus, H. L., & Dreyfus, S. E. (2005). Peripheral vision: Expertise in real world contexts. Organization Studies, 26(5), 779-792. doi:10.1177/0170840605053102


Dreyfus, H. L., & Dreyfus, S. E. (2008). Beyond expertise: Some preliminary thoughts on mastery. In K. Nielsen (Ed.), A qualitative stance: Essays in honor of Steinar Kvale (pp. 113-124). Aarhus: Aarhus University Press.


Elvira, Q., Imants, J., Dankbaar, B., & Segers, M. (2017). Designing Education for Professional Expertise Development. Scandinavian Journal of Educational Research, 61(2), 187-204. doi:10.1080/00313831.2015.1119729 


Endsley, M. R. (2018). Expertise and Situation Awareness. In K. A. Ericsson, R. R. Hoffman, A. Kozbelt, & A. M. Williams (Eds.), The Cambridge handbook of expertise and expert performance (2nd ed., pp. 633-651). Cambridge, UK: Cambridge University Press. 


Ericsson, K. A. (2006). The influence of experience and deliberate practice on the development of superior expert performance. In K. A. Ericsson, N. Charness, P. J. Feltovich, & R. R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 685-705). Cambridge, UK: Cambridge University Press.


Ericsson, K. A. (2018a). The Differential Influence of Experience, Practice, and Deliberate Practice on the Development of Superior Individual Performance of Experts. In K. A. Ericsson, R. R. Hoffman, A. Kozbelt, & A. M. Williams (Eds.), The Cambridge handbook of expertise and expert performance (2nd ed., pp. 745-769).  


Ericsson, K. A. (2018b). Superior Working Memory in Experts. In K. A. Ericsson, R. R. Hoffman, A. Kozbelt, & A. M. Williams (Eds.), The Cambridge handbook of expertise and expert performance (2nd ed., pp. 745-769).  


Fadde, P. J. (2009). Instructional design for advanced learners: training recognition skills to hasten expertise. Educational Technology Research and Development, 57(3), 359-376. doi:10.1007/s11423-007-9046-5 


Feldon, D. F. (2010). Do psychology researchers tell it like it is? A microgenetic analysis of research strategies and self-report accuracy along a continuum of expertise. Instructional Science, 38(4), 395-415. doi:10.1007/s11251-008-9085-2 


Feldon, D. F., Timmerman, B. C., Stowe, K. A., & Showman, R. (2010). Translating expertise into effective instruction: The impacts of cognitive task analysis (CTA) on lab report quality and student retention in the biological sciences. Journal of Research in Science Teaching, 47(10), 1165-1185. doi:10.1002/tea.20382 


Foley, C. E., & Donnellan, N. M. (2019). Overcoming Expert Blind Spot when Teaching the Novice Surgeon. Journal of Minimally Invasive Gynecology, 26(7, Supplement), S20. doi:https://doi.org/10.1016/j.jmig.2019.09.509 


Garikano, X., Garmendia, M., Manso, A. P., & Solaberrieta, E. (2019). Strategic knowledge-based approach for CAD modelling learning. International journal of technology and design education, 29(4), 947-959. doi:10.1007/s10798-018-9472-1 


Garrison, D. R. (2007). Online community of inquiry review: Social, cognitive, and teaching presence issues. Journal of Asynchronous Learning Networks, 11, 61+. Retrieved from https://link.gale.com/apps/doc/A284325498/AONE?u=byuprovo&sid=AONE&xid=1c8141df


Gilmore, J., Maher, M. A., Feldon, D. F., & Timmerman, B. (2014). Exploration of factors related to the development of science, technology, engineering, and mathematics graduate teaching assistants' teaching orientations. Studies in Higher Education, 39(10), 1910-1928. doi:10.1080/03075079.2013.806459 


Gobet, F. (2005). Chunking models of expertise: Implications for education. Applied Cognitive Psychology, 19(2), 183-204. Retrieved from https://edtechbooks.org/-Wadqj


Gobet, F., & Charness, N. (2018). Expertise in Chess. In K. A. Ericsson, R. R. Hoffman, A. Kozbelt, & A. M. Williams (Eds.), The Cambridge handbook of expertise and expert performance (2nd ed., pp. 597-615).  


Goertz, P. W. (2013). Seeing past the expert blind spot: Developing a training module for in-service teachers (Master's thesis). University of Texas at Austin, Austin, TX. Retrieved from https://repositories.lib.utexas.edu/handle/2152/23989


Gordon, M., & Guo, P. J. (2015). Codepourri: Creating visual coding tutorials using a volunteer crowd of learners. Paper presented at the 2015 IEEE symposium on visual languages and human-centric computing (VL/HCC). 


Greca, I. M., & Moreira, M. A. (2000). Mental models, conceptual models, and modelling. International Journal of Science Education, 22(1), 1-11. doi:10.1080/095006900289976


Hoffman, R. R. (1998). How can expertise be defined? Implications of research from cognitive psychology. In R. Williams, W. Faulkner, & J. Fleck (Eds.), Exploring expertise (pp. 81-100). London: Palgrave Macmillan.


Hoffman, R. R. (2016). How can expertise be defined? Implications of research from cognitive psychology. In J. Fleck, W. Faulkner, & R. Williams (Eds.), Exploring Expertise: Issues and Perspectives (pp. 81-99). London: Macmillan Press. 


Hovious, A. (2016). Reality check revisited: Sage on the stage vs. guide on the side. Retrieved from https://edtechbooks.org/-VubK


Huang, E. (2018). Rearview mirrors for the “expert blind spot”. In A. Bakker (Ed.), Design Research in Education: A Practical Guide for Early Career Researchers (pp. 16). New York, NY: Routledge. 


Ivarsson, J. (2017). Visual expertise as embodied practice. Frontline Learning Research, 5(3), 123-138. doi:10.14786/flr.v5i3.253


Johnson, K. (2005). The ‘general’ study of expertise. In K. Johnson (Ed.), Expertise in second language learning and teaching (pp. 11-33). London: Palgrave Macmillan. 


Koh, D., Koedinger, K. R., Rosé, C. P., & Feldon, D. (2015). Expertise in Cognitive Task Analysis Interviews. Paper presented at the 37th Annual Meeting of the Cognitive Science Society, Pasadena, CA. 


Le Maistre, C. (1998). What is an expert instructional designer? Evidence of expert performance during formative evaluation. Educational Technology Research and Development, 46(3), 21-36.  


Lorch, R. F., Lorch, E. P., & Klusewitz, M. A. (1993). College students' conditional knowledge about reading. Journal of educational psychology, 85(2), 239.  


Meyer, H. (2018). Teachers’ Thoughts on Student Decision Making During Engineering Design Lessons. Education Sciences, 8(1), 9. doi:10.3390/educsci8010009 


Nathan, M. J., Koedinger, K. R., & Alibali, M. W. (2001). Expert blind spot: When content knowledge eclipses pedagogical content knowledge. Paper presented at the Third International Conference on Cognitive Science, Beijing, China. 


Nathan, M. J., & Petrosino, A. J. (2003). Expert blind spot among preservice teachers. American Educational Research Journal, 40(4), 905-928.  


Oluwatayo, A. A., Ezema, I., & Opoko, A. (2017). Development of Design Expertise by Architecture Students. Journal of Learning Design, 10(2), 35-56.

 
Ostermann, A., Leuders, T., & Nückles, M. (2018). Improving the judgment of task difficulties: Prospective teachers' diagnostic competence in the area of functions and graphs. Journal of Mathematics Teacher Education, 21(6), 579-605. doi:10.1007/s10857-017-9369-z


Pentimonti, J. M., Justice, L. M., Yeomans-Maldonado, G., McGinty, A. S., Slocum, L., & O’Connell, A. (2017). Teachers’ Use of High- and Low-Support Scaffolding Strategies to Differentiate Language Instruction in High-Risk/Economically Disadvantaged Settings. Journal of Early Intervention, 39(2), 125-146. doi:10.1177/1053815117700865 


Persellin, D. C., & Goodrick, T. (2010). Faculty development in higher education: Long-term impact of a summer teaching and learning workshop. Journal of the Scholarship of Teaching and Learning, 10(1), 1.  


Petrosino, A., & Shekhar, P. (2018). Expert blind spot among pre-service and in-service teachers: Beliefs about algebraic reasoning and potential impact on engineering education. The International journal of engineering education, 34(1), 97-105.

 
Plummer, K., Swan, R. H., & Lush, N. (2017). Introduction to Decision-Based Learning. Paper presented at the 11th International Technology, Education and Development Conference, Valencia, Spain. 


Plummer, K., Taeger, S., & Burton, M. (2020). Decision‐based learning in religious education. Teaching Theology & Religion, 23(2), 110-125. doi:10.1111/teth.12538 


Prince, M. (2004). Does Active Learning Work? A Review of the Research. Journal of Engineering Education, 93(3), 223-231.  


Prince, M., & Felder, R. (2006). Inductive teaching and learning methods: Definitions, comparisons, and research bases. Journal of Engineering Education, 95(2), 123-138.  


Prince, M., & Felder, R. (2007). The Many Faces of Inductive Teaching and Learning. Journal of College Science Teaching, 36(5), 14-20.

 
Raymond, K. M. (2019). First-year secondary mathematics teachers’ metacognitive knowledge of communication activities. Investigations in Mathematics Learning, 11(3), 167-179.  


Renkl, A., & Mandl, H. (1996). Inert knowledge: Analyses and remedies. Educational Psychologist, 31(2), 115-121.


Sansom, R. L., Suh, E., & Plummer, K. J. (2019). Decision-based learning: "If I just knew which equation to use, I know I could solve this problem!" Journal of Chemical Education, 96(3), 445-454. doi:10.1021/acs.jchemed.8b00754


Schmidmaier, R., Eiber, S., Ebersbach, R., Schiller, M., Hege, I., Holzer, M., & Fischer, M. R. (2013). Learning the facts in medical school is not enough: which factors predict successful application of procedural knowledge in a laboratory setting? BMC medical education, 13(1), 28.  


Shulman, L. S. (1987). Knowledge and Teaching: Foundations of the New Reform. Harvard Educational Review, 57(1), 1-23. doi:10.17763/haer.57.1.j463w79r56455411 


Shulman, L. S. (2015). PCK: Its genesis and exodus. In A. Berry, P. Friedrichsen, & J. Loughran (Eds.), Re-examining pedagogical content knowledge in science education (pp. 13-23). New York, NY: Routledge. 


Singer, S., & Smith, K. A. (2013). Discipline-Based Education Research: Understanding and Improving Learning in Undergraduate Science and Engineering. Journal of Engineering Education, 102(4), 468-471. doi:10.1002/jee.20030 

Sugiharto, B., Corebima, A. D., Susilo, H., & Ibrohim. (2018). A comparison of types of knowledge of cognition of preservice biology teachers. Paper presented at the Asia-Pacific Forum on Science Learning & Teaching. 


Swan, R. H. (2008). Deriving Operational Principles for the Design of Engaging Learning Experiences. (Doctoral Dissertation). Brigham Young University, Provo. Retrieved from https://scholarsarchive.byu.edu/etd/1829/  

Swan, R. H., Plummer, K. J., & West, R. E. (2020). Toward functional expertise through formal education: Identifying an opportunity for higher education. Educational Technology Research & Development. doi:10.1007/s11423-020-09778-1 


Sweller, J. (2011). Cognitive load theory. In J. P. Mestre & B. H. Ross (Eds.), Psychology of learning and motivation (Vol. 55, pp. 37-76). Amsterdam: Elsevier. 


Tofel-Grehl, C., & Feldon, D. F. (2013). Cognitive task analysis-based training. Journal of Cognitive Engineering and Decision Making, 7(3), 293-304. doi:10.1177/1555343412474821


Van De Kamp, M.-T., Admiraal, W., & Rijlaarsdam, G. (2016). Becoming original: Effects of strategy instruction. Instructional Science, 44(6), 543-566. doi:10.1007/s11251-016-9384-y


Van de Wiel, M. W. (2017). Examining expertise using interviews and verbal protocols. Frontline Learning Research, 5(3), 112-140.


Vasyukova, E. E. (2012). The Nature of Chess Expertise: Knowledge or Search? Psychology in Russia: State of Art, 5(1), 511. doi:10.11621/pir.2012.0032 


Walsh, A., & Kotzee, B. (2010). Reconciling ‘graduateness’ and work-based learning. Learning and Teaching in Higher Education, (4-1), 36-50.


West, R. E., & Leary, H. (2019). Scaffolding new qualitative researchers through decision-based learning [Conference Presentation]. Paper presented at the Association for Educational Communications Technology Annual Conference, Las Vegas, NV.  


Whitehead, A. N. (1929). The Aims of Education and Other Essays. New York, NY: The Free Press. 


Wiggins, G. P., & McTighe, J. (2005). Understanding by design (2nd ed.). Alexandria, VA: ASCD.


Yuan, B., Wang, M., Kushniruk, A. W., & Peng, J. (2017). Deep Learning towards Expertise Development in a Visualization-based Learning Environment. Journal of Educational Technology & Society, 20(4), 233-246. Retrieved from www.jstor.org/stable/26229220 
 

Suggested Citation

Cárdenas, C., West, R. E., Swan, R. H., & Plummer, K. J. (2022). Modeling Expertise Through Decision-based Learning: Theory, Practice, and Technology Applications. Light + Learning. https://edtechbooks.org/light_learning_2022/Modeling_Expertise_Through_Decision_based_Learning

Previous Version(s)

Cárdenas, C., West, R., Swan, R., & Plummer, K. (2020). Modelando las decisiones de un experto a través del aprendizaje basado en decisiones: aplicaciones de la teoría, a la práctica y a la tecnología. Revista de Educación a Distancia (RED), 20(64). https://doi.org/10.6018/red.449831
CC BY-NC-ND International 4.0

CC BY-NC-ND International 4.0: This work is released under a CC BY-NC-ND International 4.0 license, which means that you are free to do with it as you please as long as you (1) properly attribute it, (2) do not use it for commercial gain, and (3) do not create derivative works.
