In Support of Ethical Instructional Design

Translation and use of the Ethical Choices with Educational Technology instructional design tool
DOI:10.59668/270.4208
Keywords: Instructional Design, Educational Technology, Evaluation, Decision-making, Framework, Ethical Thinking, Choices
As technology solutions continue to grow in complexity, so do the choices facing those who wish to use them both effectively and ethically. The purpose of this chapter is to present the Ethical Choices with Educational Technology framework, translated from K-12 setting use to instructional design practices in any setting (ECET ID). Two competing instructional design tool resources are compared and scored using the ECET ID framework to illustrate how it can help a designer choose a multimedia production tool that a.) meets the needs of their idea, b.) is feasible for their clients to use in the time available, and c.) is deemed to have the best ethical outcomes from design through use.

Introduction

The goal of this chapter is to explain how the Ethical Choices with Educational Technology (ECET) framework (Beck & Warren, 2020) can be employed to guide an ethical decision-making process for instructional designers tasked with developing new tools for educators. This chapter will review the need for ethical thinking with educational technologies in K-12 and higher education environments (Spector, 2016). Then, we will explain the development and current components of the validated ECET framework available to instructors to guide their ethical decision-making process regarding new learning technology adoption. Using components of the instructor-focused ECET framework, we will then offer a practical, step-by-step guide for instructional designers, followed by question prompts that support making ethical decisions about their overall design, individual technology decisions, and assessment methods. The primary goal of this piece is to adapt the thought process of the ECET framework into usable guidance for designers who want to perform their work ethically, with a core focus on ensuring the safety of the target users.

Ethics with Educational Technology

In this section, we examine big-picture questions about thinking ethically about the adoption (Palm & Hansson, 2006), learning use, and assessment aspects of educational technologies as a framing for the remaining background sections. Much of the literature covered here is historical and often tied to other frameworks that inspired the one presented here (Schenk & Williamson, 2005). This includes issues that commonly arise with the use or design of instructional systems, tools, digital curricula (Lucey & Grant, 2008), analytics (Pardo & Siemens, 2014), or other technology-intensive supports created for education (Warren & Lin, 2012; Lin, 2007).

In the past, ethics was not a focus of instructional technology design (Gray & Boling, 2016; Himelboim & Limor, 2008). The field of education has historically addressed ethics in terms of privacy and security, informed consent, data anonymity, authorship, and ownership (Chou & Chen, 2016; Papamitsiou et al., 2021; Klein & Jun, 2014). More recently, Steele et al. (2020) and Tzimas and Demetriadis (2021) expanded the discussion of ethical issues facing instructional designers, including physical, social, psychological, and moral concerns with immersive technologies and learning analytics. However, although these publications focus on relevant issues, they do not develop their findings into a form easily usable by instructional designers. On the other hand, earlier research by Warren and Lin (2014) offered specific questions that designers should consider as they design and develop educational technology interventions. Yet, their presentation lacks the benefit of integrating newer research on ethical issues facing teachers integrating technology into their classrooms (Warren & Beck, under review). Warren and Beck authored the Ethical Choices with Educational Technology framework (ECET) as a tool to help teachers make ethical decisions. This tool covered four sections: idea, feasibility, ethics, and evaluation. Unfortunately, no such tool currently exists for instructional designers. As a result, this chapter attempts to integrate the approaches of Warren and Beck (under review) and Warren and Lin (2014) to create Ethical Choices with Educational Technology for Instructional Designers (ECET ID).

ECET Framework

This section explains the reasoning behind creating the ECET ethics question frameworks, which were designed to help instructors think through their prospective technology ideas before implementing possibly problematic technologies in classrooms. These frameworks include examinations not only of the ideas but also of their feasibility given practical constraints in local settings, ethical questions about the tools instructors intend to employ, vendor business considerations tied to the creation and use of those tools, and instructors' own classroom or training implementation practices. By allowing instructors to consider these questions before or during a planning process, the goal is that users can avoid negative ethical and practical outcomes. The first framework was developed for K-12 teachers (e.g., elementary and secondary) because the need was deemed high: education at this level is compulsory and involves protected populations especially prone to harm because they lack the power to resist processes, tools, or approaches they disagree with.

Current ECET Framework Components

The existing K-12 ECET tool resulted from a multi-stage development, validation, and revision process conducted with instructors and experts. The development, review, and revision steps included:

  1. Brainstorm initial teacher framework components grounded in existing ideation, ethics, and praxis models. This step required a literature review to locate existing ethics frameworks for educational technology to identify commonalities and gaps in those models.
  2. Gap analysis with existing components to identify needed ethics and praxis components. In this step, we identified formal components for inclusion in the draft initial framework to overcome gaps and supplement commonly identified aspects from existing models.
  3. Create an initial framework for review by educator experts. The authors constructed the first version of the framework, ordering the nodes by their teaching practices while also seeking alignment with instructional design processes and models such as ADDIE, ASSURE, and anchored instruction, so the framework can be used cyclically to revise curricula over time.
  4. Review of the initial framework by instructors with a focus on improving content and linguistic structure contained in nodes to ensure readability and use value. At this stage, the authors asked five instructors working in elementary, middle, and high school settings to review the framework for feedback on its logical structure and order of questions and identify missing components.
  5. Revise the initial framework and initial teacher guide using instructor feedback. In this phase, the framework was revised to ensure the logic followed teachers’ common practices, and additional nodes were created while some were combined or eliminated.
  6. Faculty expert review of revised framework. With the revised framework completed, it was presented for feedback to a faculty member from another institution with expertise in instructional design and past K-12 teaching experience.
  7. Revise the framework (2nd). The framework was revised using the faculty expert’s feedback to add two question nodes they felt were missing and to combine another two. Some question language was simplified to more plainly communicate expectations. Further, the academic-aligned version of the framework used to explain the underlying educational reasoning was revised to align with the teacher version.
  8. Instructor review of revised framework and teacher guide creation (2nd). Another group of five instructors made up of different teachers from the first group then reviewed the twice-revised framework to provide additional feedback. Several minor edits were recommended to clarify question language, and two additional nodes were suggested for combination due to perceived redundancies, while one was suggested for addition.
  9. Revision of the framework (3rd). The instructors’ suggestions were reviewed and incorporated into the framework to improve its perceived clarity and usefulness.
  10. Applied evaluation of the framework by instructors using a think-aloud discussion protocol coupled with a sample educational technology. Seven instructors participated in the usability evaluation. Each instructor selected an educational technology product and employed the framework to evaluate it and produce a score. The teachers explained challenges with question nodes as they applied the framework, highlighting any structural problems with the node ordering or clarity of language.
  11. Revision of components to the current framework version. Using the teachers’ feedback from the application evaluation, the framework was revised again to produce the current, validated version.

By engaging in a rigorous, cyclical design, review, and revision process, we intended to provide a tool that teachers can use daily, written in language accessible to practitioners. Further, the ECET K-12 framework is intended to be flexible so it can grow as technology, ethics, or practical realities change over time; the framework as presently constituted is presented in the following figure.

Figure 1

ECET K-12

ECET K-12 diagram depicting four phases with varying components in each phase
ECET K-12 (Beck & Warren, 2020)

The framework was initially more complex, with additional nodes in each of the four swim lanes (i.e., idea, feasibility, ethics, and evaluation), intended to support teachers’ engagement in deep thinking about potential problems. The tool was originally written in academic language and jargon, reflecting the perceptions and views of the framework’s developers. These were revised based on instructors’ feedback to ensure practitioner usability.

Idea Nodes

The eight idea components (nodes) in the first swim lane were developed to help instructors evaluate whether the tool they want to use, and the purpose they have for it, are possible to implement. These questions aim to ensure that the reason for choosing an educational technology and its perceived learning affordances align with the teachers’ purposes for adoption. The questions associated with these nodes ask teachers to think through the tool itself, how it is intended to meet specific learning goals, the evidence of tool efficacy, and otherwise whether their tool-focused learning idea is sound. Once teachers have determined whether the idea should move forward with this specific tool, they are asked to consider its feasibility, given other constraints.

Feasibility Nodes

The next set of swim lane nodes was created to ask teachers whether their idea could be implemented and integrated into their school day. Considerations are made relative to their or their students’ available time, the need for and availability of training, human and technology support resources, tool access, and other relevant factors. Once the idea and feasibility are deemed acceptable, the next set of components asks teachers to consider the ethics of the technology product through different lenses; if the idea is neither sound nor feasible to implement, there is no reason to proceed to the next stage.

Ethics Nodes

The ethics swim lane components were designed to take instructors on a short path from the feasibility of their idea and whether they can use it to whether they should use it. Because ethics is a consideration of whether a process, tool, or activity may lead to harm, the question of “Should I” is central to the components in this section of the ECET tool. The first question asks teachers to consider whether the tool will be used for educational purposes (ethical) versus filling time in a school year without specific learning outcomes and lessons. This question is asked because some educators use educational technologies to avoid teaching rather than to meet learning outcomes (e.g., showing a movie). Once that component is reviewed, other elements, such as the available evidence of a tool’s effectiveness for the teaching purpose, accessibility, and questions about the technology’s vendor, help teachers think through whether it is ethical to use the tool. If a teacher finds the tool unethical at any stage in this swim lane, we recommend eliminating it to protect their students. Since the first three major components of the ECET framework walk teachers through 23 discrete questions, we find it helpful for them to conclude the process by revisiting their overall impression of the tool and their use plan.

Evaluation Nodes

When teachers complete their thought processes in the first three lanes, they are asked to conduct a final evaluation of the tool. This set of questions ensures that they feel comfortable using the tool based on their impression of the idea, the feasibility of successful implementation, and the ethics of a tool’s use before deciding to proceed. They are first asked to determine if they have the technology and human resources to implement the tool-supported lesson(s) successfully. Next, they consider whether they can ethically measure learning resulting from using the tool, followed by whether the instructor and students can successfully use it. Then, they are asked whether the tool can be ethically implemented based on a global consideration of items in the third lane. Last, they can consider their overall impression of a tool and their planning for use as a final opportunity to reconsider if they have any qualms about its use. Figure 2 presents the current version of the ECET framework tool for K-12 teachers with all associated questions.

Figure 2

 ECET framework for K-12 teachers with questions

ECET K-12 diagram depicting four phases with varying components in each phase and additional questions to guide decisions

While the tool continues to be studied to determine its efficacy in classrooms as an ethics consideration model for teachers, the team realized that many technology-based lessons teachers use today were created by outside companies and instructional designers, pointing to a gap in practice that was not being addressed.

Need for ECET Instructional Design Framework

Existing processes and models used to support instructional design (e.g., ADDIE, ASSURE, Backwards Design, etc.) commonly do not incorporate ethics questions into the thought processes of educational developers. Instead, they commonly focus on the structural creation of lessons, courses, and other programs of study that often incorporate technology without considering the ethics of using these tools. Further, current instructional design models tend not to incorporate the practical and ethical perspectives or needs of instructors or students because instructional design typically focuses on expeditious development, not on processes for determining whether a delivery mechanism is ethical. The ECET ID framework can be employed with any instructional design process or model during a step that asks designers to consider including a technology to support learning (e.g., analysis, design).

As such, a practical need exists for an ECET ID framework to guide professionals’ thinking in developing educational technologies. Instructional design textbooks and programs historically focus on the structural components of instructional design based on student and instructor needs from the perspective of learning related to specified outcomes (Piskurich, 2015; Warren et al., 2013; Warren & Lin, 2012; Reigeluth, 1999). The ECET ID framework was designed to support professional training in the field of instructional design. It is meant to support individual thinking or shared discussion regarding the ethics of choosing a particular technology under consideration for inclusion in a lesson, an instructional module, or a whole course. The purpose is broadly to give instructional developers questions that guide their thinking through the consequences of their decisions from the users’ perspective.

Translation Development Methods

In this section, we will explain the process used by the authors to translate the ECET teacher evaluation components into appropriate questions for instructional designers based on their differential tasks and specific needs.

Step 1

The beginning approach to development was to review each component of the K-12 framework and evaluate whether it applied to instructional design work. If a question component did not fit or was inappropriate based on the perspective presented (e.g., instructor instead of designer), it was targeted for elimination or adaptation. Further, if the instructional design thinking process suggested reordering, the component nodes were moved into a more logical order.

Step 2

Next, each component’s guiding questions were revised so that they could be read from the instructional designer’s perspective based on common audience analysis approaches (e.g., instructor and student). This was intended to aid a developer’s ability to answer the component questions from the perspective of the individual lenses they should consider from an ethical perspective.

Step 3

Third, the development team reviewed the components targeted for elimination or adaptation. Although none were removed after that discussion, the language was revised to better inform the designer. Two nodes were also added to focus on cost considerations.

ECET ID Structuring

This section describes the high-level structure of the applied ECET ID framework that resulted from this recursive design, review, and revision process. All components are intended for use during an analysis phase of any instructional design model that instructional designers or teams apply. The following figure presents the current ECET ID tool as it exists today.

Figure 3

ECET Instructional Design (ID) framework

ECET ID diagram depicting four phases with varying components in each phase and additional questions to guide decisions along with value scores for each component

With the ECET K-12 framework validated, we revised its nodes to take the viewpoint of an instructional designer. However, to improve the use value of the tool, we also sought to keep the perspective of teacher and student needs paramount, leading to a first swim lane that includes variations on the original seven question nodes.

ECET ID Idea Nodes

As presented in the following figure (reading left to right) and aligned with the ECET K-12 framework, the instructional designer first identifies and describes the learning tool they intend to use.

Figure 4

ECET ID Idea planning nodes to evaluate whether the tool meets desired outcomes

A closeup diagram of the seven components of the ECET ID Idea planning nodes

Once the tool is identified, the designer is asked to specify how they intend to use it for teaching and learning support. Next, they describe the educational technology’s expected benefits or learning affordances in detail, followed by the rationale for why its adoption is valuable for meeting the intended learning goals or objectives. Once these descriptions are provided, designers are asked to provide evidence that the tool should improve learning and to consider other necessary technologies. For example, an interactive smart board may require an LCD projector, USB or HDMI connectors, and additional software installed on a local computer. Finally, the instructional developer is asked to consider the tool’s ease of use in the context of their intended audience of students and teachers. In our experience in K-12 and higher education settings, even interesting tools with strong pedagogical affordances will remain unused if the difficulty of use exceeds the intended audience’s perceived value or abilities. Once the idea for using the tool is deemed sound enough to proceed, the implementation’s feasibility is evaluated.

ECET ID Feasibility Nodes

The feasibility review lane reads right to left and includes a possible 90 points. The first node in Figure 5 focuses on determining the tool’s availability within the institution or the need to purchase it.

Figure 5

ECET ID Feasibility swim lane containing question set to assess whether a tool can be used given available resources

A closeup diagram of the nine components of the ECET ID Feasibility planning nodes

Some technologies are already widely adopted in schools, so including a document camera that is already available in most university classrooms is a reasonable expectation. However, if the instructional design requires the purchase of ten sets of virtual reality headsets, checking whether this is a reasonable request is a good starting point if cost is a necessary condition for the educational plan to be accepted by stakeholders. This cost node was added to ECET ID because assessing and discussing whether the client can bear the cost of a tool adoption is required to build a successful learning design. Further, considering whether the cost of the tool is reasonable given the expected learning outcomes is an important next step because spending $50,000 on a set of tools that results in a 1% increase in learner engagement may not be considered reasonable and can stop an intended design from reaching implementation. Additionally, a lack of existing tools combined with a high purchase cost may mean most students will not have access to the technology, preventing the adoption of the learning plan.

The next two nodes (4-5) ask the designer to think about whether training is needed to use the tool, whether it is available, and whether it can be provided in the available time. Given the time constraints in many educational situations and the sometimes high need for training before implementation, these questions are important to consider before moving forward. Further, human resources support based on ease of use and demand for information technology (IT) support is covered in nodes 6-7 because low usability and a need for high support can lead to failed instructional implementation. The final two nodes ask the designer to consider the feasibility of implementing the tool and related activities within time and effort constraints. In K-12 settings, there is often limited time to use new tools because of test software use and other school day activities, resulting in insufficient time on task to achieve the learning gains promised by a tool. Further, if a tool requires substantial cognitive effort and trial and error to figure out, the instructor may stop using it and shift to more familiar, lower-effort approaches to teaching that students can complete with greater comfort.

At this point, the ECET ID user has answered 17 questions focused on the practical reality of employing the tool as part of their design; whether to proceed is a complex consideration. Once the designer has considered these questions and found the tool feasible, they should consider the ethics of using the intended technology.

ECET ID Ethics Nodes

Like the feasibility question set, the ethics lane contains nine components read left to right. The first two questions in Figure 6 help designers consider their and educators’ intentions for using the tool.

Figure 6

ECET ID Ethics question set with basic thought process regarding tool value/uses

A closeup diagram of the nine components of the ECET ID Ethics planning nodes

More broadly, a challenge with instructional design and curriculum development can be a mismatch between what the tool’s affordances can realistically provide and our intentions for them. When this mismatch exists, the tool’s use can be a “filler” that takes up instructional time but cannot realistically support learning outcomes. As such, the tool asks designers to consider whether their intentions match the likely outcomes; failing to do so can result in lost instructional time, leading to educational harm (Warren & Lin, 2012).

The third node was added as part of the ethical consideration involved in the potential waste production generated by technology adoption. As instructional designs continue to grow in scale and given the potential e-waste involved in adopting educational technology tools, longer-term impacts are an increasingly important consideration when planning instruction (Warren et al., 2022). Next, we ask designers to consider the accessibility of their educational technology for students with any impairment as defined by the Americans with Disabilities Act. As schools, universities, and workplaces are increasingly held accountable for supporting their students and employees from a disability perspective, it is increasingly important to consider whether the technologies we adopt are designed in a manner that makes them usable by students with visual, auditory, and other challenges (Martín-Gutiérrez et al., 2017) to make sure all students and workers have what they need to be successful.

Designers are next asked to consider other potential harms from the project without consideration of potential benefits. This is because any harm, especially for children, is assumed to outweigh potential benefits. For example, untested products like learning games that lack research evaluation on their efficacy could teach poor mental models to students. When this occurs, the instructor and students lose significant instructional time due to reteaching by other means to eliminate the negative models and learn the correct ones (Warren & Lin, 2011). As a result, there may be insufficient time for students to learn other needed knowledge and skills, leading to minor or major student harm. With the increasing number of untested educational technology products in an unregulated commercial market, the likelihood of harm is increasing, so this question should be examined more critically than in the past. With no requirement that companies provide evidence of significant improvements from their products, coupled with lax regulations on marketing language in this area, instructional designers should be wary of educational technology effectiveness claims before recommending their adoption as part of an instructional plan.

Related to ensuring an ethical attitude towards the tools that students use, the next node asks designers to consider the purpose of the instructional tool. Increasingly, some vendors’ tools are designed not to improve student learning or support for the learner experience but instead to surveil and control student behavior to punish perceived transgressions. Because of the need to deliver learning online or in alternative formats, there is increasing demand for products that address perceived acts of academic dishonesty by students during assessment; however, these products often have no other pedagogical value and can create significant emotional and psychological harms for students required to use them (Krutka et al., 2021; Andrejevic & Selwyn, 2020). As such, the next question asks designers to consider whether student performance can be measured ethically using the product. The increasing use of student metadata and inferential statistics to judge students, coupled with poorly designed online testing tools, increases the likelihood that negative conclusions about students and their performance will be made. The more transactional distance technology places between learners and the instructor judging their performance, the more likely those judgments are to be incorrect because context is removed. As such, instructional designers should carefully evaluate the ethics and efficacy of educational technology tools that are used for assessment or that have assessment components. Maintaining the psychological and physical safety of the educational recipients is a complex task that falls within the purview of all educators, including instructional designers. Just as with research compliance expectations provided by institutional review boards, instructional designers have an ethical responsibility not to harm those who engage with our educational products, whether these be courses, games, or other technology products we choose to incorporate to support learning and teaching (Warren & Lin, 2012).

In the next node, related to student assessment and data collection, the designer is asked to consider the vendor’s purpose for gathering that information. Historically, student performance data was used to determine whether a student learned and, where they did not, to help an instructor determine how to intervene, reteach, or differentiate instruction in later lessons (e.g., personalized learning). However, with the complexity and lack of transparency in vendors’ digital models today, it is increasingly difficult to determine how demographics, metadata about students, and performance data are being used and whether they are being sold for profit. As such, the designer should consider whether the data being collected is primarily for the benefit of learners and instructors or for the financial benefit of the vendor.

Finally, designers are asked whether the tool vendor can be considered trustworthy. This determination can be made by examining reviews posted by past clients and, increasingly, journalism pieces and legal suits involving the company related to the ethics of its products, marketing claims relative to actual performance when available, and the company’s decisions. With many companies today profiting from the sale of user data, it is important to consider whether that constitutes a privacy violation for your intended learners. If learners are offered only one option for technology interaction and that product may unethically collect data about them with no opportunity to opt out, the better decision is likely to choose another product with similar affordances but without the surveillance and sales aspects, or to offer multiple options. Choosing to work with educational technology companies that may be untrustworthy may itself be viewed as unethical behavior on the designer’s part. As such, this question becomes important if a designer wishes to be viewed as ethical.

ECET ID Evaluation Nodes

The evaluation nodes match those in the K-12 teacher version and provide a final review of the three main elements of idea, feasibility, and ethics before using a tool in an instructional design.

Figure 7 

ECET ID Evaluation components to summarize the designer’s views of tool inclusion value

A closeup diagram of the five components of the ECET ID Evaluation planning nodes

The first question (far right) asks designers to revisit whether an educational technology can be ethically and practically measured. Given the increasingly black-box nature of complex technologies on the market that fail to disclose their designs, it is difficult to determine whether today’s educational technologies were developed in a manner that instructors, students, and designers will agree is both effective and leaves students and instructors with the power to resist negative applications. It is difficult to understand how positive or negative outcomes result if we do not know what a tool is doing to us. While corporate trade secrets are valuable and competition can be fierce, failing to disclose how a product’s psychological structure is intended to support learning can make it difficult to trust, so designers should consider whether to recommend it.

Next, the designer is asked to think globally about whether the instructor will have the technology and other supports available to ensure all learners can benefit. This question may be situational and impact where an instructional design can be used. For example, in one setting where every student has access to a Chromebook, asking them to interact with a website for assessment daily is reasonable and efficient. In another setting where 35 students share computers with an uncertain internet connection, the answer to the questions in this node is different. This contextual difference continues in the third node, where the designer is asked to consider whether the learners and instructors have sufficient training to ensure successful implementation of the tool, process, or full learning design. In our experience, a lack of training can lead to a failed technology implementation as often as a poorly designed technology. With that in mind, the fourth node asks the designer to make one final judgment regarding whether they believe the educational technology can be used ethically. This opportunity to reconsider the ethics of the tool on a more global scope after answering more specific questions is intended to allow the designer to think through the relationships of all of these questions to get a broader impression of the tool’s ethics component before moving to the final node and making a high-level assertion that a.) their idea for the tool is a good one, b.) it is feasible to use the tool, and c.) the tool can be used ethically in the setting intended and with these users.

To avoid unusable work or product development, the ECET ID tool ideally should be employed either prior to any design work or after learning goals and activities are planned, to determine whether it is ethical to proceed. However, because the suggested approach to applying the ECET ID framework is not complex, taking roughly 20-30 minutes for each review, it can be revisited at any time throughout an instructional design, development, or implementation planning process. At a minimum, we recommend that a review with scoring be completed once the design plan is complete but before development begins. Since changes often occur during learning product development, it is helpful to revisit the framework once the product and associated tool are ready for use as a final check that the outcome remains one the designer and users feel meets their practical and ethical requirements.

ECET ID Application Example

To illustrate the use of ECET ID, we present simple evaluations as a use case from the perspective of instructional designers. As such, we next provide a model application of the framework in the context of developing an undergraduate or K-12 introductory multimedia development course. A major consideration will be to evaluate potential tool options to choose from and seek to make the best choice for our audience.

In this illustration of using ECET ID, we examine whether to adopt the GNU Image Manipulation Program (GIMP) 2.1 or the Adobe Creative Suite as an educational tool for an introductory multimedia development course. A consideration will be the identified learning objectives for this course. Since the course topic could be for either senior-level high school students or introductory students in undergraduate multimedia programs, the choice of instructional technology may differ based on various constraints and local needs, so the choice rationale in each context is explained. Other learning outcomes for the course are not linked to the educational product under review for adoption. Thus, the following table presents the multimedia-focused course goals and objectives.

Table 1

Multimedia-focused course goals and objectives

Component | Description
Course Goal 1.0 | Employ industry-standard graphics software to create computer graphics for mediated learning/training.
Objective 1.1 | The learner will develop an infographic to teach a simple concept for learner retention.
Objective 1.2 | The learner will develop a training handout to support a set of 1-2 learning objectives.
Course Goal 2 | Apply common graphic design principles using a common industry tool.
Objective 2.1 | The learner will employ color theory in a manner that makes their educational media appealing to users.
Objective 2.2 | The learner will employ consistent visual design principles in each educational media course outcome.

Scores should be generated for each component to give the designer a sense of how the technology performed on it. This approach can help designers compare scores between possible instructional technology products when making a tool selection that will most likely be effective for the target users.

In this section, we present the scores for each product as rated by the authors as an illustration of the process. While it is not required that instructional designers have more than one scorer, it is a stronger practice from a qualitative research paradigm to have multiple analysts review the products to increase the credibility and trustworthiness of the review outcomes (Ravitch & Mittenfelner Carl, 2016). Scores for each item were capped at 10 points, except the overall evaluation component, which is worth 20. As such, the total number of possible points is 300, making comparison across potential educational technology products easier for users. While all scoring includes an element of subjectivity, the scoring guidelines vary somewhat by item, as specified in the user guide, because some items are yes (10 points) or no (1 point) questions.

Between these endpoints, reviewers can assign scores across the full range, depending on the strength of evidence found during their reviews. Further, to simplify scoring, a table was created to align each component, question, and scoring outcome, with simplified language representing the questions in each node so these fit in the corresponding cells (see Figure 8).

Figure 8

Ethical Choices with Educational Technology Instructional Design framework blank scoring table

A blank ECET ID scoring table with four components to be scored on six to nine categories and a total possible score of 300.

The items in Figure 8 follow the path shown in the Figure 3 diagram, but to follow the logic of the table, each is listed from left to right. In the following section, we apply the framework first to the GNU Image Manipulation Program (GIMP) 2.1 digital tool and then to the Adobe Creative Suite because these two have media affordances that may be appropriate to the course’s learning goals and objectives.
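To make the scoring mechanics concrete, the following minimal sketch (in Python; our illustration, not part of the published framework, with shorthand lane and node labels standing in for the full questions in Figure 8) shows how a review team might record node scores while enforcing the caps described above.

    from dataclasses import dataclass, field

    # Maximum points per swim lane under the current weighting:
    # Idea 60, Feasibility 90, Ethics 90, Evaluation 60 (300 total).
    LANE_MAX = {"idea": 60, "feasibility": 90, "ethics": 90, "evaluation": 60}

    @dataclass
    class LaneScore:
        """Scores for one swim lane; node labels are reviewer shorthand."""
        name: str
        scores: dict = field(default_factory=dict)

        def add(self, node, points, cap=10):
            # Most nodes are scored 1-10 (yes/no nodes receive 10 or 1);
            # the final global evaluation node is instead capped at 20.
            if not 1 <= points <= cap:
                raise ValueError(f"{node}: {points} is outside 1-{cap}")
            self.scores[node] = points

        def subtotal(self):
            return sum(self.scores.values())

    # Example: record the ethics-lane accessibility score from a review.
    ethics = LaneScore("ethics")
    ethics.add("accessibility", 4)

A reviewer would fill each lane node by node and read back the subtotal once a lane is complete.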

Product Evaluation 1: GIMP 2.1

The GNU Image Manipulation Program (GIMP) is a photo and image editing tool that supports graphic design, with line, erase, and other common media development tools found in many of today’s online and software-based products. According to the product’s website (gimp.org), the photo editing tool has existed since 1998 (v. 1.0), though it was publicly released in 1996 (v. 0.54). Some of the other tools that are part of Adobe Creative Suite/Cloud, which we compare to GIMP, are older, originating in the mid-1980s (Illustrator) and late 1980s (Photoshop), or later with InDesign (1999) (Hoang, 2019). With a similar length of time in development, the GIMP tool is as stable as its main competitors from Adobe and other companies, making it a reasonable option from that perspective for consideration in this course. In the next section, we provide an example of how a designer may vet the idea of whether to use it for an introductory multimedia development course using ECET ID by reviewing the website and downloading the product for consideration. The scores in Figure 9, indicated in parentheses in the following discussion, reflect the reviewers’ assessment and include suggested cutoff scores; however, each design team can create their own.

Figure 9

Ethical Choices with Educational Technology Instructional Design framework scoring: GIMP 2.1 completed scoring table

A completed ECET ID scoring table for GIMP 2.1, scoring 260 out of 300 possible points.

Idea Planning

With GIMP as the tool under consideration, the first three question nodes are addressed concurrently because the technology will be taught as the means to create educational media (10) and is therefore expected to directly support the stated learning objectives (10) by allowing students to create educational media as course end products (10). While there is no direct evidence that GIMP will improve learning, it allows students to learn the tool and build successful media artifacts, which achieves the learning goals for the course (7). Besides a mid-range computer, we assume no other tools are needed (7), though a large monitor may make use easier. The wide adoption of the tool in industry and education settings, according to our examination of public reviews, is evidence that the tool is easy enough for students and instructors to use for the course (6). With the idea vetted with a passing score of 50 out of 60 points, we then examine the feasibility of using GIMP to meet our learning design intentions.

Feasibility Planning

Since the software is freely available for PC, GNU/Linux, or Mac operating systems (10), unless students are using Chromebooks, the tool will be fully available. Since it is free to download and use (10), there is no cost to students, faculty, or institution (10). A review of the GIMP website indicates 17 available training sessions of various levels, with more available on YouTube, so adequate training can supplement what is created by the instructor or designer (10). Two limitations are the limited evidence of ease of use (7) and the expected lack of IT support at the target institution (5), which falls outside the scope of provided services because no media editing tools have been adopted institution-wide (5). Since the goal of the instruction is to provide training on the tool, the available time (9) and expected effort (7) should be appropriate to meet the learning objectives, though this is not guaranteed for every student. With this review complete and a score of 78 of 90 points, whether it is ethical to adopt the tool into the design is considered next.

Ethics Planning

The GIMP tool is central to achieving the learning objectives for the course (10). Therefore, the tool-supported instruction will likely be practically effective (10) and available for future courses at no cost (10), making it a financially sustainable choice. According to GIMP’s provision of information using the Voluntary Product Accessibility Template (VPAT), the product meets some accessibility compliance requirements; however, for visually impaired students, it remains an open question whether it is usable relative to the available time and individual student needs, leading to a low score (4). The visual nature of the course and its tools will make choosing a fully compliant tool challenging. We do not believe the tool’s technological or psychological aspects are likely to harm student health (10), nor that surveillance or control aspects are embedded in GIMP (10), allowing student performance to be ethically measured through students’ designed formative and summative media products (10). We found no evidence that the GIMP tool stores learner data (10) nor any concerns with the vendor’s trustworthiness (10). With a score of 81 out of 90 possible points, GIMP appears to be usable ethically, leading to the final review and evaluation.

GIMP Product Evaluation

Based on our global review of GIMP as a possible tool for course adoption, its efficacy appears to be an aspect that can be practically and ethically measured through student and instructor feedback (10) rather than decontextualized metadata. For our purposes, the instructor will have the technology needed. However, the human resources required to implement GIMP with students in the course remain somewhat questionable, leading to a score of 7 for this component. The tool appears usable according to available reviews and public information about the tool’s use and potential training sources (e.g., videos, PDFs, etc.); however, there remains some uncertainty since our target instructor has only used Photoshop in the past (8). Since the tool is used for creation rather than assessment or other purposes, we believe it can be used ethically (10). The product was given a global score of 16 out of 20 because the various ECET components left the designers with the impression that the idea is reasonably sound, the product implementation is feasible, and it is an ethical choice. Still, because of unknowns with the instructor and IT support resources, there remain questions we cannot answer. Next, we use ECET ID to review the second product under consideration, the Adobe Creative Suite/Cloud product that tends to be the industry standard for media editing and development.
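Continuing the earlier sketch, the lane subtotals reported above for GIMP can be rolled up into the global score shown in Figure 9; this is a worked check of the arithmetic, not an additional framework step.

    # Lane subtotals for GIMP 2.1, as reported in the narrative above.
    gimp_totals = {"idea": 50, "feasibility": 78, "ethics": 81, "evaluation": 51}

    total = sum(gimp_totals.values())       # 260
    possible = sum(LANE_MAX.values())       # 300 (LANE_MAX from the sketch above)
    print(f"GIMP 2.1: {total}/{possible}")  # GIMP 2.1: 260/300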

Product Evaluation 2: Adobe Creative Suite or Cloud

The Adobe Creative Suite has multiple components that may be useful for the multimedia course, including Photoshop (photo manipulation), Illustrator (illustration), Premiere (video), and InDesign (publishing). This makes the paid tool more robust than GIMP, which only includes options for photo manipulation and some illustration support. However, the cost is also high at $240 a year, or $20 per month, for students and instructors, versus GIMP, which is free. In the following sections, we model the thought process of an ECET ID user examining the Adobe Creative Suite for later comparison with GIMP 2.1. The associated scores are shown in Figure 10.

Figure 10

Ethical Choices with Educational Technology Instructional Design framework scoring: Adobe Creative Suite completed scoring table

A completed ECET ID scoring table for Adobe Suite, scoring 251 out of 300 possible points.

Idea Planning

Adobe Creative Suite and Cloud would be taught to students as tools to develop their own media and multimedia projects in support of the primary learning goals for the course. The course goals include creating new educational media using such tools, and the tool was used for similar purposes in other departmental courses, though at the graduate level, providing some evidence of its efficacy for improving student learning. A challenge in the idea is that the tool can require a higher-end computer and is only usable on Mac and Windows operating systems, so it cannot be implemented with Chromebooks. As such, it may not work in the high school environment. While it is a complex tool, new students exposed to it in graduate courses have been able to use it successfully, and instructors are already familiar with it. The overall evaluation for the idea components was 52 out of 60 points. This indicates the tool should meet the course learning outcomes and basic audience usability; next, the feasibility of implementation with the Adobe Suite is examined.

Feasibility Planning

Starting with access, the university does not currently offer the tool to students, but educational pricing is available that would cost them $80 for the course duration. This is about the cost of a textbook, and since it is an industry-standard tool, the cost-to-benefit ratio appears reasonable. A department chair or principal scheduling instructors to teach the course must ensure that anyone teaching it has sufficient skills to deliver a course that uses the Adobe Suite successfully. Students will learn the tool as a course goal, so that component is achieved through related materials and learning activities. As a common media development tool used in the field and industry, it is easy to use for development in this course. One challenge may be that the institution offers uncertain information technology (IT) support, so any instructor must also serve as technology support. The tool should be able to be learned and used in the available class time, and the effort appears to be appropriate. With the soundness of the idea and tool use feasibility established, the ethics of implementing the technology is next examined.

Ethics Planning

Because the Adobe tools are central to what students are learning, they will be directly tied to learning outcomes and will support the identified instructional aims of the course. Since other courses in the department also use the tool, it should be sustainable, particularly if students want a yearlong subscription that covers more than one related media development class. A challenge with the tool is its visual nature, so there are questions about ADA accessibility that should be followed up on with the institutional teams responsible for this component. Since this is a tool for creating media, there do not appear to be inherent potential harms to student health, and none of the surveillance aspects present in some testing software appear here. Since student performance is measured based on the quality of the media products students create with the tool, the ethics of this aspect are acceptable. As far as we can determine, the tool collects no student use data. Further, the technology company has a history of being trustworthy relative to users. With the ethics evaluation indicating a passing score, we conduct a final global product review before comparing Adobe Creative Suite and GIMP for this educational media development course.

Adobe Product Evaluation

Since Adobe Creative Suite is used to create educational media, the instructor should be able to ethically and practically measure the tool’s value based on how well it supports student learning. Measures related to course outcomes can be developed around the products the tool helps students create. The instructor should have the necessary technology if the institution pays for Adobe Creative Suite or Cloud. A question remains whether they will have local human technology support if questions arise that they cannot answer. Instructors should be selected based on having prerequisite knowledge of Photoshop, Illustrator, and other Adobe Suite tools. Further, since part of the course goal is to teach these technologies, training is included in the course. There do not appear to be significant ethical challenges with the tool or the vendor, and overall, Adobe Creative Suite/Cloud could be practically and ethically used for the proposed media course. The completed scoring for Adobe Creative Suite as a potential tool for the media development course is shown in Figure 10.

Final Product Score Comparison and Decision Making

Once the evaluations are complete for each product, we can compare them based on the total score or on component criteria that are most important to the designer or end-users. Table 2 can be used to break out and compare scores and look at the components in aggregate.

Table 2
Ethical Choices with Educational Technology Instructional Design framework scoring: Product comparison for final selection
ECET ID Component | GIMP | Adobe Suite | Best Choice
Idea Score | 50/60 | 52/60 | Adobe
Feasibility Score | 78/90 | 66/90 | GIMP
Ethics Score | 81/90 | 82/90 | Adobe
Evaluation Score | 51/60 | 51/60 | N/A
Total Score | 260/300 | 251/300 | GIMP
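A small helper in the same illustrative sketch can reproduce the Table 2 comparison once each product’s lane subtotals are recorded; adobe_totals below uses the figures reported earlier.

    # Lane subtotals for Adobe Creative Suite, as reported above.
    adobe_totals = {"idea": 52, "feasibility": 66, "ethics": 82, "evaluation": 51}

    def compare(name_a, a, name_b, b):
        """Print a per-lane and overall comparison in the style of Table 2."""
        for lane, cap in LANE_MAX.items():
            best = name_a if a[lane] > b[lane] else name_b if b[lane] > a[lane] else "N/A"
            print(f"{lane:<12} {a[lane]}/{cap}  {b[lane]}/{cap}  best: {best}")
        ta, tb = sum(a.values()), sum(b.values())
        best = name_a if ta > tb else name_b if tb > ta else "N/A"
        print(f"{'total':<12} {ta}/300  {tb}/300  best: {best}")

    compare("GIMP", gimp_totals, "Adobe", adobe_totals)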

While the Adobe Suite was the best choice on the idea and ethics components, the overall scoring was higher for GIMP, largely because of Adobe’s higher cost and more limited availability, reflected in the feasibility section. Otherwise, the digital affordances of each tool were similar enough, with minor scoring differences, that either could be selected to meet the course needs. Note that in most regards, we say the tool should meet the needs of instructors and students because our evaluation views are naturally subjective, meaning there will be times we come to incorrect determinations of the right tool for a particular situation. However, by completing this thought process, an instructional designer can have clearer logic to support and explain the choice of tool to the client or a manager who would need to pay for or implement the technology adoption. This approach allows designers to forecast and explain their decision-making clearly, reducing the risk that a poor technology or design decision will be made while acknowledging this risk cannot be eliminated.

Limitations

There are several limitations to this work. First, a tool like those in the ECET portfolio should be validated to ensure it accurately measures what it aims to measure, regardless of the respondent. Valid instruments help collect higher-quality, more comparable data, reducing effort while increasing the credibility of the data collected. Although ECET K-12 was subject to an extensive qualitative validation process (Beck & Warren, 2020), it will continue to be quantitatively and qualitatively validated every few years to ensure it achieves its intentions. By using the ECET tool actively with instructional designers each year and incorporating feedback from surveys and interviews cyclically to improve the instrument, the acceptance of ECET should improve and be maintained. This adapted ECET ID instrument has not yet been validated by designers. As such, we will follow a validation process similar to the one conducted with ECET K-12 noted earlier, so the team expects significant changes to the terminology used in questions and to the nodes included or excluded in the framework to ensure the views and needs of designers are accommodated. This current state of the instrument also means that initial usage of ECET ID may provide less accurate results than the ECET K-12 tool for teachers.

Additionally, the current version of our framework places more weight on feasibility and ethics (90 points each) compared to 60 points each for idea and evaluation. These differences in weighting reflect our current understanding of the comparative relevance of these areas for instructional designers. With that said, we plan to conduct user testing of ECET ID with multiple instructional designers in multiple contexts (e.g., K-12, higher education, adult, etc.) and update these weightings based on the results.
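One way a team could experiment with alternative weightings before our formal testing is complete (our suggestion, not a validated part of ECET ID) is to normalize each lane subtotal by its maximum and apply custom weights before comparing products, continuing the earlier sketch:

    def weighted_score(totals, weights):
        """Normalize each lane to 0-1, then apply team-chosen weights summing to 1."""
        return sum(weights[lane] * totals[lane] / LANE_MAX[lane] for lane in LANE_MAX)

    # Example: a team that prizes ethics most might weight the lanes this way.
    weights = {"idea": 0.2, "feasibility": 0.2, "ethics": 0.4, "evaluation": 0.2}
    print(round(weighted_score(gimp_totals, weights), 3))   # 0.87
    print(round(weighted_score(adobe_totals, weights), 3))  # 0.854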

Implications

Student Loss of Learning Time Due to Ineffective Tool

As stated earlier, current instructional design models and textbooks (Piskurich, 2015; Warren & Lin, 2014; Reigeluth, 1999) do not use ethical questions to aid instructional designers in their design process (e.g., ADDIE, ASSURE, Backwards Design, etc.). Additionally, current instructional design models do not include practical and ethical perspectives on the needs of instructors or students. Using the ECET ID framework should help instructional designers focus on ethical concerns while designing high-quality instruction. Ethical questions are integrated into ECET ID along with concerns about ideation, feasibility, and evaluation, thus helping instructional designers address ethical concerns in their proper context. As reflection and discussion are the intended outcomes of ECET ID, it is expected that using this framework will improve the quality of instructional designs by clearly matching designers’ intentions with likely outcomes, thus reducing educational harms to end users (Warren & Lin, 2012).

Next Steps

Our next step with ECET ID is a rigorous validation process similar to that followed with ECET K-12: a multi-step, design-based research approach that improves the tool’s components based on feedback, with new participants then confirming that changes were effective and pointing out other needed revisions. Additionally, work is being done to put both tools into a branching, online format that will scaffold users in applying the tool and provide recommendations based on their answers. Finally, a version of ECET is being developed for software developers through a partnership with INESC TEC, a private non-profit research association dedicated to scientific research and technological development, technology transfer, advanced consulting and training, and pre-incubation of new technology-based companies. Once the tool is deemed effective and ethical to implement, it will be released broadly to determine whether it is useful to instructional designers at scale.

Conclusion

As technology choices for instructional developers, educators, and students continue to grow more complex in their designs, our choices regarding which to use rest on the quality of the idea, whether it will work with the time and other resources we have, and whether it is ethical to implement them at all. Moving forward with our use and study of the tool, we intend to release it broadly to gather user views about whether it supports their needs and to gather information about additional needs for framework improvement (e.g., carceral technologies/surveillance as assessment, environmental ethics) that should be addressed in future versions. We hope the tool slows design processes down slightly, encouraging designers to ask relevant questions about whether a technology should be used at all. This need to slow down and consider our decisions regarding whether to use any technology is increasingly important in a world focused on surface-level, rapidly produced outcomes. For instructional designs to meet the needs of an increasingly diverse world, mindful designs are important for ensuring the journey instructors and learners take is one they can feel good about and that meets their educational needs. Our primary goal in designing the Ethical Choices with Educational Technology Instructional Design framework was to support an instructional designer’s decision-making process to help improve the user’s final learning product and technology interactions. It is meant to go beyond simple questions of whether a tool meets minimal performance outcomes to whether it positively supports the learner and instructor experience. In the future, we will rigorously test the tool, making improvements along the way, always asking whether the framework improves the ethical choices made by designers and users in a manner that both improves learning and fosters an experience they look back upon favorably.


Scott Warren
Scott Warren is a Professor of Learning Technologies at the University of North Texas in the College of Information. His research examines the use of emerging online technologies, such as immersive digital learning environments and educational games and simulations, in myriad settings. Prior to working in higher education, he taught both social studies and English in public schools for nearly a decade. His early work included creating the Anytown world to support writing, reading, and problem solving. His current instructional design work is with The 2015 Project and Refuge alternate reality courses, and he designed the online literacy game Chalk House. He founded The Koan School in order to experiment with systemic change in K-12 schools using a unique technology- and communication-rich problem-based learning curriculum. Over the last few years, his research has shifted to complex higher education systems to improve their performance with engineering, business, and related research methods and organizational approaches.
Dennis Beck

University of Arkansas

Dennis Beck is an Associate Professor of Educational Technology at the University of Arkansas. His research focuses on and advocates for digital, educational equity for vulnerable populations, with an emphasis on culturally and linguistically diverse and special education students at the primary, secondary, and tertiary levels. In this stream, he has studied the influence of student and teacher avatar gender and race on expectations, perceptions and evaluations. He has also examined the use of immersive learning environments for providing life skills training for low functioning young adults on the autism spectrum. Additionally, in order to better understand the impacts of immersive environments in cyber schooling on vulnerable populations, he has studied an immersive art curation environment in partnership with a local museum. He has published in several venues, including Computers & Education, American Journal of Distance Education, Educational Administration Quarterly, and the Journal of Educational Research.
Kristen McGuffin
Kristen McGuffin's professional goal is to advance knowledge and research that nurtures young thinkers to explore their role in the natural world. Her current research explores the intersections of religion, politics, and power with an emphasis on anti-colonial theory and environmental justice.
