Addressing the Challenges of Program and Course Design in Higher Education with Design Technologies

DOI: 10.51869/92ab
Keywords: Higher Education, Learning Technologies, Design Challenges
This article describes six major challenges facing faculty members and teams as they engage in the design of degree programs in higher education and how technology tools for program design can be employed to address those challenges. The tools support collaboration, leveraging best practice, designing for quality and distinctiveness, addressing standards overload, focusing on assessment, and making feedback a meaningful part of the design process. The article makes the case for each of the challenges and shows examples of how the tools help teams engage in collaborative program development in higher education.

Learning and teaching in higher education institutions have been subject to profound change in recent decades (Henard & Roseveare, 2012). Degree offerings have migrated from being the sum of sometimes disparate parts offered up by individual faculty members to more integrated and coherent programs of study. The design of these programs and their content is scrutinized from multiple perspectives, including, for professional degrees, against prescriptive external accreditation standards (Phillips KPA, 2017).

The effect of curriculum reform in higher education (HE) has been to transform the way programs are developed, adding significant new demands to the learning and teaching roles of individual faculty members and the teams on which they serve (e.g., Pegg, 2013). This article describes six common challenges faced by faculty members and teams as they address the demands of program design and development in HE, and the role technology can play in supporting faculty members as they negotiate these new expectations. For the purposes of this article, and in recognition of the variability in terminology used internationally, the term program refers to a degree or collection of units combined as a qualification of some kind; the term course refers to a single unit of learning within a degree or qualification.

Challenge 1: Making Program Design Collaborative

The focus on whole programs as a unit of analysis (over individual courses) requires collaboration among academics in the design, development, and accreditation of degrees (Norton et al., 2013; Jones et al., 2012). A shared responsibility to ensure that programs meet standards internal and external to the institution stands in contrast to the traditional more autonomous learning and teaching culture of many higher education institutions (Zundans-Fraser, 2014; Kezar & Lester, 2009).

Existing research on collaboration in HE suggests that while academics recognize the importance of collaboration in program design, their work environments frequently lack the organizational support necessary to collaborate efficiently and effectively (Zundans-Fraser, 2014). Further, while individual faculty members may express an interest in or commitment to collaboration, they often do not have experience with the skills and knowledge required for collaborative teamwork (Briggs, 2007; Kezar & Lester, 2009; Zundans-Fraser & Bain, 2015; Newell & Bain, 2018). As a consequence, collaboration about program design is commonly described as forced and unproductive by members of program design teams (Newell & Bain, 2018).

While the challenge of making institutions more collaborative is complex and multi-faceted, there is a fundamental acknowledgment in the literature that, to be effective, collaboration needs to assume a form that includes methods and processes to help teams conduct meetings, manage interactions, and capture the products of their efforts (Zundans-Fraser, 2014; Ciampaglia, 2010; Stephens & Myers, 2000; Salisbury et al., 1997).

Technology can make an important contribution in this space. Platforms where teams can come together to map standards, develop assessments, design course offerings and learning experiences can provide a focus for collaboration. Figure 1 describes six modules included in a software platform for program and course design.

Figure 1

Program and Course Design Modules


The intent of the software modules is twofold: first, to provide a collaborative work environment that gives form to the program design process, maximizing the effectiveness of meeting time by focusing it on a clearly defined set of scaffolded design steps, tasks, and activities; and second, to ensure that the product of the collaborative process is captured in a form that builds over time and can be configured to report out to different stakeholders and meet institutional program approval requirements.

Team members can use the tools synchronously or asynchronously to build their programs. Each module connects to subsequent modules so that developers can see previously completed work as they engage with new design tasks. The feedback module makes formal and informal feedback possible at each step in the design process. Each of the modules described in Figure 1 is examined in more detail in relation to the design challenges that follow.
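
To make the idea of connected modules concrete, the sketch below shows, in Python, one hypothetical way a chained set of design modules could expose earlier work to later steps and accept feedback at any point. The module names, fields, and methods are illustrative assumptions, not the actual Coursespace implementation.

```python
# A minimal sketch (not the product's code) of chained design modules whose
# outputs remain visible to later steps, with feedback attachable at any step.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModuleRecord:
    name: str                                                 # e.g., "Baseline"
    artifacts: Dict[str, str] = field(default_factory=dict)   # design products captured here
    feedback: List[str] = field(default_factory=list)         # comments attached at this step

class ProgramDesign:
    """Holds the ordered modules; each module can read everything before it."""
    MODULES = ["Baseline", "Standards Map", "Products",
               "Courses", "Assessment Tasks", "Feedback"]      # placeholder names

    def __init__(self) -> None:
        self.records = [ModuleRecord(name) for name in self.MODULES]

    def prior_work(self, module_name: str) -> Dict[str, Dict[str, str]]:
        """Return the artifacts from every module completed before `module_name`."""
        index = self.MODULES.index(module_name)
        return {r.name: r.artifacts for r in self.records[:index]}

# Usage: while building assessment tasks, a team member reviews earlier work.
design = ProgramDesign()
design.records[0].artifacts["vision"] = "Graduates who design inclusive schools"
design.records[1].artifacts["integrated_standard_1"] = "Plan for diverse learners"
print(design.prior_work("Assessment Tasks"))
```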

Challenge 2: Focusing on Quality and Distinctiveness

Efforts to accredit higher education institutions are driven by the desire to both assure and improve the quality of what those institutions do (Stensaker, 2008), although that effort can also create a burden that gets in the way of program quality and distinctiveness. One finding of the Phillips KPA Australian national report on professional accreditation was the perceived negative impact of accreditation overload on program quality, diversity, and faculty autonomy (Phillips KPA, 2017).

The focus on standards and accreditation has not necessarily helped institutions do a better job of determining quality or making their programs more distinctive. Massy et al. (2012), describing the results of a US national report on higher education productivity, note that the determination of quality in learning and teaching is an unresolved question in higher education and the “elephant in the room” with respect to making determinations of productivity.

Dvorak and Busteed (2015) note that “the lack of enduring and unique identities in higher education offers an opportunity for education leaders, as it indicates there are a host of undifferentiated brands ripe for disruption” (p. 2). Program teams and faculty members in general frequently express frustration derived from what they perceive to be a preoccupation with standards and mandatory requirements that takes time away from efforts to make their programs original and distinctive.

To reach beyond drivers for compliance and uniformity, program design teams need to consider those things they believe will endow their programs with a unique quality and distinctive identity. Figure 2 describes the inclusions in a tool that enables a program team to step through a series of baseline considerations to capture members’ vision for the program; build an understanding of the context in which it will operate (e.g., strengths, needs, drivers, risks); make shared commitments; and create a conceptual model for the design. The baseline module can aggregate input and feedback from all stakeholders, creating a transparent starting point for the design process.

Figure 2

Baseline Module


Teams often complete the baseline process in a one-day workshop. Importantly, the product of their work is captured in a usable form within the tools. For example, Figure 3 describes a matrix from the baseline module that summarizes the identification of strengths, needs, drivers, and risks facing a program. Selecting any entry on the matrix takes the user to a detailed account of that strength, need, driver, or risk based on the collaborative input of team members.

Figure 3

Baseline Matrix


Using the baseline module, the team can take a step back to reconcile the pragmatic considerations driving the program, such as market forces and policies, with a bigger-picture vision and conceptual model that reflect the team members’ expertise, priorities, and commitments as they engage with the design process. The result is a shared foundation for the design and development of the program: the team identifies what it wants to achieve by considering and acting upon those things it believes can improve quality and distinctiveness.
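
As an illustration only, the following Python sketch shows one way a baseline matrix like the one in Figure 3 could be stored so that the high-level matrix view and the drill-down detail behind each cell come from the same collaboratively built data. The categories, method names, and sample entries are assumptions for the example, not the tool's actual design.

```python
# A minimal sketch of a baseline matrix: each cell holds the detailed,
# collaboratively written contributions that sit behind its short label.
from collections import defaultdict

CATEGORIES = ("strengths", "needs", "drivers", "risks")

class BaselineMatrix:
    def __init__(self) -> None:
        # category -> short label -> list of detailed contributions from team members
        self._entries = {c: defaultdict(list) for c in CATEGORIES}

    def add(self, category: str, label: str, detail: str, author: str) -> None:
        """Capture one team member's contribution under a matrix cell."""
        self._entries[category][label].append(f"{author}: {detail}")

    def summary(self):
        """The high-level matrix view: category -> labels only."""
        return {c: list(cells) for c, cells in self._entries.items()}

    def detail(self, category: str, label: str):
        """Drill down into a cell to see the aggregated input behind it."""
        return self._entries[category][label]

# Usage: team members contribute during the baseline workshop.
matrix = BaselineMatrix()
matrix.add("drivers", "Accreditation renewal", "External review due next year.", "Chair")
matrix.add("risks", "Staff workload", "Design time competes with teaching load.", "Lecturer")
print(matrix.summary())
print(matrix.detail("drivers", "Accreditation renewal"))
```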

Challenge 3: Referencing Program Design to Educational Best Practice

A third important challenge in contemporary higher education curriculum design and development relates to the increasing role educational research and practice play in the design and implementation of HE courseware (Hattie, 2011). Empirical research from the field of education is increasingly finding its way into policy, regulation, and the normal work expectations for program and course design in institutions. Terms like constructive alignment (Biggs & Tang, 2011), criterion-based assessment (Sadler, 2005), and evidence-based pedagogy (Hattie, 2011) have become commonplace in the practice lexicon of HE, in requirements for program and course approval, and in the work of centers for learning and teaching development.

Knowledge of evidence-based pedagogy, assessment principles, and educational design frequently lies outside the primary expertise of many academics, even though designing and/or delivering courseware is part of their role in many institutions. Academics' responses to these best practice requirements are mixed with regard to the extent to which the requirements are taken up in their teaching (Scott & Scott, 2015).

Technology can help address the challenge of applying evidence-based practice to program design and development by building into software tools the key features of approaches that have been shown to improve student achievement. Embedding those features in the design of the relational tools used to create courseware reduces the load on faculty members as they take up the requirement to design programs and courses in ways that reflect educational research.

For example, while many academics may not know a great deal about constructive alignment, which refers to the alignment of intended learning outcomes, learning experiences, and assessment tasks (Biggs & Tang, 2011), it is possible to design software that highlights the relationships among design elements and makes them visible as teams design assessment tasks, build content, and develop learning experiences. Figure 4 describes a tool for developing and then aligning learner outcomes at the course level.

Figure 4

Constructive Alignment


The left-hand panel of the figure is a scrolling list of learner outcomes associated with specific courses. The central panel is the workspace for developing the content of those outcomes. The right-hand panel is a scrolling list of the components that make up the assessment task to which the outcomes will be linked. Users can review the assessment tasks and their components as they develop the content of the outcomes. This ensures that course outcomes are directly referenced to assessment tasks, which, as described previously, are linked to higher-level program expectations (e.g., standards). A newly developed outcome is linked by the user to specific components of the assessment task by selecting major (designating that the outcome is a major connection to a part of the assessment task) or minor (indicating that the outcome is partially connected to the assessment task). Users build coherence across the elements of a program and identify gaps and discontinuities as they go about the design and development process.

In this way, the tools help faculty members make decisions about constructive alignment as they engage in the normal work of building outcomes, assessment tasks, and so on. Users can become proficient in the practice of constructive alignment without extensive prerequisite knowledge of or learning about the construct. An important feature of educational research is thus embedded in the software to support transactions related to a best practice.
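
The hedged sketch below illustrates the kind of data structure that could underpin this alignment view: course learner outcomes hold major or minor links to assessment-task components, and a simple check surfaces components that no outcome addresses. The class and field names are hypothetical; the actual software may organize this differently.

```python
# A minimal sketch of constructive alignment data: outcomes link to assessment
# components as "major" or "minor" connections, and gaps can be detected.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AssessmentComponent:
    component_id: str
    description: str

@dataclass
class LearnerOutcome:
    outcome_id: str
    text: str
    # component_id -> "major" (fully addressed) or "minor" (partially addressed)
    links: Dict[str, str] = field(default_factory=dict)

    def link(self, component: AssessmentComponent, strength: str) -> None:
        assert strength in ("major", "minor")
        self.links[component.component_id] = strength

def unaligned_components(components: List[AssessmentComponent],
                         outcomes: List[LearnerOutcome]) -> List[str]:
    """Flag assessment components that no outcome addresses, i.e., a design gap."""
    covered = {cid for o in outcomes for cid in o.links}
    return [c.component_id for c in components if c.component_id not in covered]

# Usage: one outcome linked to one of two task components leaves a visible gap.
task_parts = [AssessmentComponent("AT1.1", "Analyse learner diversity data"),
              AssessmentComponent("AT1.2", "Justify the chosen teaching model")]
outcome = LearnerOutcome("LO1", "Interpret diversity data to inform planning")
outcome.link(task_parts[0], "major")
print(unaligned_components(task_parts, [outcome]))  # -> ['AT1.2']
```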

This approach can be extended to the design of templates for using different teaching approaches and to the development of assessment tasks. An additional example of designing for research-based practice, related to assessment, is described below in response to Challenge 5.

Challenge 4: Standards Overload

The proliferation of sector, professional, and internal standards represents an immense challenge for higher education institutions, which can be subject to over 100 different sets of professional standards (Dodd, 2017). The result is a substantial burden in terms of cost and workload that affects the academic culture of institutions; Phillips KPA (2017) describes the work associated with meeting standards as expensive, frequently excessive, unreasonable, and burdensome. The many agencies and standards internal and external to institutions exert considerable power given the potential consequences of failing to meet accreditation expectations (Phillips KPA, 2017) or to comply with in-house program approval requirements.

In practical terms, program design teams face the complex challenge of finding ways to effectively map and then meet multiple sector, professional, and internal institutional standards in a single degree program. These standards are frequently diverse in their purpose and degree of focus, are often semantically incongruent, and represent a complex matrix of stakeholder interests. Teams have to make meaning of those standards in the design of a degree program while facing their own constraints, including time (i.e., the length of a degree) and other institutional requirements that may adversely influence overall scope and sequence (e.g., delivery mode, admission requirements, credit packaging, prerequisite learning, allocation of adequate faculty workload to program design). Figure 5 describes a layout from a tool that enables team members to map multiple sets of standards, looking for connections across sector, professional, and institutional expectations in order to build a term of reference for the design of a program.

Figure 5

Standards Matrix


The mapper produces a matrix that retains the integrity of the original standards, showing matches across individual standards and merges, where the content of two or more standards is combined, to produce a set of integrated standards for designing the program. In the example, two sets of standards are integrated: an international set for preparing inclusive education teachers and a more general set of national teaching standards. The developers looked for similarities across the two sets where standards could be matched or merged, or a new standard added. Notably, the source standard is retained in the map so that all design and development can be linked back to the originating standards.
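
As a rough illustration of the mapping logic described above (not the mapper's own code), the sketch below shows integrated standards that record whether they arose from a match, a merge, or an addition, and that keep references to the source standards so later design work remains traceable. All names and sample standards are invented for the example.

```python
# A minimal sketch of integrated standards that retain their provenance.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SourceStandard:
    framework: str        # e.g., "National teaching standards"
    code: str
    text: str

@dataclass
class IntegratedStandard:
    text: str
    origin: str                                  # "match", "merge", or "added"
    sources: List[SourceStandard] = field(default_factory=list)

def merge(a: SourceStandard, b: SourceStandard, combined_text: str) -> IntegratedStandard:
    """Combine two semantically overlapping standards while keeping both sources."""
    return IntegratedStandard(combined_text, "merge", [a, b])

# Usage: a national standard and an inclusive education standard are merged.
national = SourceStandard("National teaching standards", "1.5",
                          "Differentiate teaching to meet specific learning needs")
inclusive = SourceStandard("International inclusive education standards", "IC2",
                           "Design instruction responsive to individual difference")
integrated = merge(national, inclusive,
                   "Design differentiated instruction responsive to individual difference")
print([f"{s.framework} {s.code}" for s in integrated.sources])  # traceable back to sources
```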

Challenge 5: Making Student Performance the Focus

An additional important feature of the software described here is the way standards are mapped to program-level assessment tasks called products. These products represent the knowledge and skills students should be able to demonstrate on graduation. An assumption underpinning the approach is that effective mapping requires program teams to think specifically about how students will demonstrate competence in program requirements upon graduation, rather than relying on the more traditional method of matching standards with intended learning outcomes at the course level. The former requires mapping standards to program-level authentic assessment tasks and then showing how those high-level tasks will be met in individual courses. This involves designing a program that assures students are competent in the key professional requirements articulated in professional standards, as opposed to finding syntactic and semantic congruity by matching the text of standards documents to learning intentions at the course syllabus level. The latter produces congruence in accreditation documents and submissions but often lacks substantive meaning and an adequate level of assurance that standards will be addressed and met by students. By contrast, mapping to program-level assessment expectations drives a level of granularity in thinking and design that is more likely to produce genuine alignment between standards and program-level graduation outcomes. Taking up assessment at the program level requires developers to make a clear account of what students demonstrate on graduation at the level of the program as well as the individual course. Figure 6 describes a tool for developing program-level assessment products.

Figure 6

Product Developer


The right-hand panel of the layout is a scrolling list view of the standards to be met. The center panel is a workspace for developing program-level assessment tasks (i.e., products) that are then matched to the standards. In the example, students are required (as a program-level element) to build a school design that is responsive to individual difference. The bulleted items describe the elements or inclusions for that product. Those bulleted elements are then built out as assessment tasks in individual courses. In this way, program-level authentic assessment expectations are instantiated at the course level in a cascaded mapping process that connects standards to products, which are then developed as assessment tasks at the course level.
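
The following sketch is a hypothetical rendering of that cascade: standards attach to program-level products, product elements are built out as course assessment tasks, and a simple query traces a course's contribution back to the standards. The structure and sample content are assumptions made for illustration.

```python
# A minimal sketch of the cascaded mapping: standards -> products -> course tasks.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CourseTask:
    course_code: str
    description: str

@dataclass
class ProductElement:
    description: str                              # a bulleted inclusion of the product
    course_tasks: List[CourseTask] = field(default_factory=list)

@dataclass
class Product:
    title: str                                    # program-level authentic assessment
    standards: List[str]                          # integrated standards the product addresses
    elements: List[ProductElement] = field(default_factory=list)

def trace(product: Product, course_code: str) -> List[str]:
    """Which standards does a given course contribute to through this product?"""
    contributes = any(t.course_code == course_code
                      for e in product.elements for t in e.course_tasks)
    return product.standards if contributes else []

# Usage: a course task is traced back to the standards its product addresses.
school_design = Product(
    "School design responsive to individual difference",
    standards=["Integrated standard 3", "Integrated standard 7"],
    elements=[ProductElement("Evidence-based pedagogical model",
                             [CourseTask("EDU501", "Critique two pedagogical models")])])
print(trace(school_design, "EDU501"))
```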

Over the last decade, higher education institutions have moved progressively from normative assessment approaches (judging students through inter-individual comparison) to criterion-based models, in which student performance is judged against predetermined performance criteria (O’Donovan et al., 2010). The uptake of criterion-based assessment brings its own challenges related to identifying and aligning criteria with standards and to developing valid evaluation criteria, often in the form of rubrics used to judge the extent to which students have met those criteria. One of the biggest challenges in rubric development is describing grading criteria in language that students understand while also making clear, evaluable distinctions among performance levels on the task. Figure 7 describes a tool for building a criterion-based assessment task that helps a team align learner outcomes and learning experiences with the criteria for determining successful performance.

Figure 7

Assessment Task Rubric


Users can retrieve the learner outcomes for the course and view them while developing the criteria for the rubric, ensuring that the different levels of performance are sufficiently differentiated and connected to the intended learning.
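
By way of illustration, the sketch below shows one possible way a rubric criterion could be tied to a learner outcome and described at each performance level, with a simple check for levels that still need to be written and differentiated. The grade labels and field names are assumptions, not the tool's own scheme.

```python
# A minimal sketch of a criterion-based rubric tied to learner outcomes.
from dataclasses import dataclass, field
from typing import Dict, List

LEVELS = ("Fail", "Pass", "Credit", "Distinction", "High Distinction")  # assumed scale

@dataclass
class Criterion:
    outcome_id: str                      # the learner outcome this criterion assesses
    name: str
    descriptors: Dict[str, str] = field(default_factory=dict)   # level -> student-facing wording

    def missing_levels(self) -> List[str]:
        """Performance levels the team has not yet described."""
        return [lvl for lvl in LEVELS if lvl not in self.descriptors]

# Usage: the gap check shows which performance descriptors remain to be written.
criterion = Criterion("LO1", "Interpretation of diversity data")
criterion.descriptors["Pass"] = "Identifies relevant patterns in the data."
criterion.descriptors["Distinction"] = "Interprets patterns and justifies planning decisions."
print(criterion.missing_levels())
```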

Challenge 6: Producing and Using Meaningful Feedback

Higher education institutions experience great difficulty generating the kind of feedback that is useful for quality improvement in learning and teaching. According to Massy et al. (2012), while current and prospective learning and engagement measures are useful in particular contexts, they cannot be brought together into comprehensive, robust indices for quality adjustment (p. 6). One reason for this difficulty is the inability to clearly explicate the work process of course design in terms of factors known to positively influence learning and student achievement (i.e., what we know about assessment, constructive alignment, teaching approaches, etc.). An outcome of this lack of professional control (Bowker & Star, 2000) is the tendency to defer to after-the-fact checklists and surveys that focus more on whether things in the development process happened than on the quality of the work and whether known achievement-related characteristics are present in the design elements (e.g., the quality of a rubric or of learner outcomes). When the work process is explicated to include functionality that reflects known achievement-related practice, feedback can focus on the presence or absence of those characteristics.

As noted previously, a feature of the technologies described here is the way best practice assumptions about the alignment of program elements, the mapping of standards, assessment, the description of learning experiences, and so on are embedded and integrated in the design of the tools. This helps make the key elements and features of the program design process visible and comparable (Bowker & Star, 2000). Explicating these features creates the opportunity for a more focused approach to feedback: it becomes possible to make those same key elements and features priorities in the way feedback is represented and shared. For example, Figure 8 describes the questions used to provide feedback about a criterion-based assessment task.

Figure 8

Feedback Questions


The questions described in the figure pertain to known features of effective criterion-based assessment and give users the opportunity to rate and comment on factors known to relate to the quality of assessment design by a program or course team. The feedback statements help shape the way developers both engage with and respond to the development of a criterion-based assessment task. Those charged with responsibility for evaluating programs can use the feedback statements to make workable distinctions (Drengenberg & Bain, 2016) in program quality, meaning they can employ feedback to make decisions about quality that are referenced to factors known to produce better learning outcomes.

Figure 9 describes how feedback from many stakeholders (96 in this case) can be summarized to show an overall level of satisfaction with the work. This layout also aggregates comments from the stakeholders. The colored bars show the proportion of responses in different categories that relate to the quality of the design, giving a high-level picture of many respondents’ perceptions of different features of the program.

Figure 9

Feedback Summary

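The sketch below illustrates the kind of aggregation Figure 9 depicts: ratings from many stakeholders are rolled up into the proportion of responses in each category for every feedback question, with comments collected alongside. The rating scale and question wording are invented for the example and are not taken from the software.

```python
# A minimal sketch of feedback aggregation: per-question rating proportions plus comments.
from collections import Counter, defaultdict
from typing import Dict, List, Tuple

RATINGS = ("Strongly disagree", "Disagree", "Agree", "Strongly agree")  # assumed scale

def summarize(responses: List[Tuple[str, str, str]]):
    """responses: (question, rating, comment) triples from individual stakeholders."""
    counts: Dict[str, Counter] = defaultdict(Counter)
    comments: Dict[str, List[str]] = defaultdict(list)
    for question, rating, comment in responses:
        counts[question][rating] += 1
        if comment:
            comments[question].append(comment)
    # Proportion of responses in each rating category, per question (the colored bars).
    proportions = {q: {r: c[r] / sum(c.values()) for r in RATINGS}
                   for q, c in counts.items()}
    return proportions, comments

# Usage: three stakeholder responses to one feedback question.
responses = [("Criteria are clearly differentiated", "Agree", ""),
             ("Criteria are clearly differentiated", "Strongly agree", "Levels read well."),
             ("Criteria are clearly differentiated", "Disagree", "Credit and Distinction overlap.")]
props, notes = summarize(responses)
print(props)
print(notes)
```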

Team members can also provide more formative, conversational feedback using the tools. Figure 10 describes commentary from a team member who has been asked to provide formative feedback about progress in the development of an assessment task. This feedback tool allows members of a team to share perspectives as they work, before their effort is subject to summative approval by the rest of the team and others.

Figure 10

Informal Feedback


The approach described here is known as emergent feedback (Bain, 2007; Bain & Weston, 2012; Bain & Zundans-Fraser, 2017): feedback on key evidence-related features emerges from the ongoing work process in a continuous cycle. Feedback is available for every module of the tools and, when configured this way, becomes an integral part of program development. The program team can share perspectives and identify strengths and needs throughout the design process instead of waiting for a formative or summative judgment at a waypoint or when the design is deemed complete.

This feedback approach has important implications for learning analytics. Because the feedback tools focus on factors known to influence student learning, the big data produced by the tools (i.e., the data aggregated across an institution’s programs) focuses the learning analytics process on achievement-related analytics. This stands in contrast to existing approaches to learning analytics, which focus on correlates of learning such as user presence, navigation patterns, and downloads, mainly used retrospectively to provide feedback about programs and courses and those responsible for them.

Conclusion

The tools described here, known collectively as the Coursespace© (Bain, 2012), have been in use for six years, successfully supporting the development of degree programs at the graduate and undergraduate levels in teacher education, agriculture, and engineering, among other fields. When used as a shared institution-wide platform, the tools can help an organization bring better program design to scale by creating a common term of reference for learning and teaching design across faculties and schools. Further, where program and course design is undertaken by specific entities within institutions, the tools are equally useful as a common platform for experienced developers who may bring additional design expertise to the task.

Work using the tools is producing an emergent body of literature describing a range of applications: collaborative course design (Thomson et al., 2017), online program development in speech therapy (McCormack et al., 2014), embedding Indigenous content (Zundans-Fraser et al., 2018), and integrating engineering standards (Morgan et al., 2017).

In concluding, it is important to avoid the trap of positioning technology as a silver bullet solution to the challenges of better program design and development in higher education. Technology can make an important contribution as part of broader strategic initiatives to improve the quality of learning and teaching in higher education institutions. However, as noted in the discussion of collaboration, making an institution more collaborative, more responsive to better practice, or better at assessment also involves broader planned change. This includes policy development and refinement, organizational design, the ways faculty are recognized and rewarded, as well as extensive professional capacity building.

Importantly, and as illustrated throughout, technology can make many of the strategic and tactical intentions practical and accessible by instantiating better practice and shaping the way normal work in program design is conducted. This involves maintaining an ongoing record of that work and generating feedback that makes the effort more transparent, efficient, effective, and accreditation ready. The tools briefly described here provide one example of the way technology can help address the challenges facing academics as they navigate changing expectations associated with learning and teaching in higher education.

References

Bain, A. (2007). The self-organizing school: Next generation comprehensive school reforms. Lanham, MD: Rowman & Littlefield.

Bain, A. (2012). Smart tools (Versions 1.0 and 2.0) [Computer software]. Charles Sturt University.

Bain, A., & Weston, M. (2012). The learning edge: What technology can do to educate all children. New York: Teachers’ College Press.

Bain, A., & Zundans-Fraser, L. (2017). The self-organizing university: Designing the higher education organization for quality learning and teaching. Singapore: Springer. http://dx.doi.org/10.1007/978-981-10-4917-0

Biggs, J., & Tang, C. (2011). Teaching for quality learning at university (4th ed). Maidenhead, UK: Open University Press.

Bowker, G., & Star, S. (2000). Sorting things out: Classification and its consequences. Cambridge, MA: The MIT Press.

Briggs, C. L. (2007). Curriculum collaborations: A key to continuous program renewal. The Journal of Higher Education, 78(6), 679-711. http://www.jstor.org/stable/4501239

Ciampaglia, B. I. (2010). Analysis of school-wide supports and barriers to CPS teams: Fidelity in applying the process. (Publication No. 34095570) [Doctoral dissertation, University of Massachusetts]. ProQuest Dissertation & Theses Global.

Dodd, T. (2017, November 8). Course accreditation costs a burden for higher education providers. The Australian. https://www.theaustralian.com.au/higher-education/course-accreditation-costs-a-burden-for-higher-education-providers/news-story/cc2af0fd6821f3ca58194cecb25f47b5

 Drengenberg, N., & Bain, A. (2016). If all you have is a hammer, everything begins to look like a nail – How wicked is the problem of measuring productivity in higher education? Higher Education Research & Development. https://doi.org/10.1080/07294360.2016.1208640

 Dvorak, N., & Busteed, B. (2015, August 11). It’s hard to differentiate one higher-ed brand from another. Gallup Business Journal. http://www.gallup.com/businessjournal/184538/hard-differentiate-one-higher-brand.aspx

Hattie, J. (2011). Which strategies best enhance teaching and learning in higher education? In D. Mashek & E. Y. Hammer (Eds.), Claremont applied social psychology series: Vol. 3. Empirical research in teaching and learning: Contributions from social psychology (pp. 130–142). Wiley-Blackwell. https://psycnet.apa.org/doi/10.1002/9781444395341.ch8

Henard, F., & Roseveare, D. (2012). Fostering quality teaching in higher education: Policies and practices. An IMHE guide for higher education institutions. OECD. http://www.oecd.org/education/imhe/QT%20policies%20and%20practices.pdf

Jones, S., Lefoe, G., Harvey, M., & Ryland, K. (2012). Distributed leadership: a collaborative framework for academics, executives and professionals in higher education. Journal of Higher Education Policy & Management, 34(1), 67–78. http://dx.doi.org/10.1080/1360080x.2012.642334

Kezar, A. J., & Lester, J. (2009). Organizing higher education for collaboration: A guide for campus leaders. San Francisco, CA: Jossey-Bass.

Massy, W., Sullivan, T., & Mackie, C. (2012). Data needed for improving productivity measurement in higher education. Research and Practice in Assessment, 7, 5–15.

McCormack, J., Easton, C., & Morkel-Kingsbury, C. (2014). Educating speech-language pathologists for the 21st century in the 21st century: Course design considerations for a distance education Master of Speech Pathology program. Folia Phoniatrica et Logopaedica, 66(4-5), 147-157. https://doi.org/10.1159/000367710

Morgan, J., Lindsay, E., & Roberts, P. (2017). Developing integrated standards for systematic civil engineering course design [Poster presentation]. ASEE Annual Conference and Exposition. https://www.asee.org/public/conferences/78/papers/18404/view

Newell, C., & Bain, A. (2018). Academics’ perceptions of collaboration in higher education course design. Higher Education Research & Development, 39(4). https://doi.org/10.1080/07294360.2019.1690431

Norton, A., Sonnemann, J. & Cherastidtham, I. (2013). Taking university teaching seriously. Grattan Institute. http://grattan.edu.au/wp-content/uploads/2013/07/191_Taking-Teaching-Seriously.pdf

O’Donovan, B., Price, M., & Rust, C. (2010). The student experience of criterion-referenced assessment through the introduction of a common criteria assessment grid. Innovations in Education and Teaching International, 38(1), 74–85. https://doi.org/10.1080/147032901300002873

Pegg, A. (2013). "We think that’s the future": Curriculum reform initiatives in higher education. Higher Education Academy. https://www.heacademy.ac.uk/system/files/curriculum_reform_final_19th_dec_1.pd

Phillips KPA. (2017). Mapping professional accreditation in Australian higher education. http://www.phillipskpa.com.au/news/8/mapping-professional-accreditation-in-australian-higher-education

Sadler, R. (2005). Interpretations of criteria-based assessment and grading in higher education. Assessment & Evaluation in Higher Education, 30(2), 175–194.  http://www.tandfonline.com/doi/pdf/10.1080/0260293042000264262

Salisbury, C. L., Evans, I. M., & Palombaro, M. M. (1997). Collaborative problem-solving to promote the inclusion of children with significant disabilities in primary grades. Exceptional Children, 63(2), 195–209.  https://doi.org/10.1177/001440299706300204

Scott, D., & Scott, S. (2015). Leadership for quality university teaching: How bottom-up academic insights can inform top-down leadership. Educational Management Administration & Leadership, 44(3), 511–531. https://doi.org/10.1177/1741143214549970

Stensaker, B. (2008). Outcomes of quality assurance: A discussion of knowledge, methodology and validity. Quality in Higher Education, 14(1).

Stephens, C. S., & Myers, E. (2000). Team process constraints: Testing the perceived impact on product quality and the effectiveness of team interactions [Paper presentation]. International Academy for Information Management 15th Annual Conference, Brisbane, Australia.

Thomson, E. A., Auhl, G., Hicks, K., McPherson, K., Robinson, C., & Wood, D. (2017). Course design as a collaborative enterprise: Incorporating interdisciplinarity into a backward mapping systems approach to course design in higher education. In R. G. Walker & S. B. Bedford (Eds.), Research and Development in Higher Education: Curriculum Transformation, 40 (pp. 356–367). Sydney, Australia, 27–30 June 2017.

Zundans-Fraser, L. A. (2014). Self-organisation in course design: A collaborative, theory-based approach to course development in inclusive education. (Publication No. 158670588) [Doctoral thesis, Charles Sturt University]. Semantic Scholar.

 Zundans-Fraser, L., & Bain, A. (2015). The role of collaboration in a comprehensive programme design process in inclusive education. International Journal of Inclusive Education, 20(2), 136–148. https://doi.org/10.1080/13603116.2015.1075610

Zundans-Fraser, L., Hill, B., & Bain, A. (2018). Strong foundations, stronger futures: Using theory-based design to embed Indigenous Australian content in a teacher education programme. In P. Whitinui, C. Rodriguez de France, & O. McIvor (Eds.), Promising practices in Indigenous teacher education. Springer.
