Instructional Designers as Ethical Mediators

DOI: 10.59668/270.13282

Keywords: Ethics, Values, Mediation, Ethical inscription
Instructional designers have been instrumental in shaping learning experiences for almost a century—contributing to perceptions of what instructional experiences should be considered valuable, worthwhile, and rigorous. However, instructional theories and models of instructional design practice have rarely considered the ethical role of the designer in creating equitable and inclusive futures. In this chapter, I use two vignettes of instructional design work framed by facial recognition technologies to locate ethical tensions between designers and learners, identifying opportunities to leverage the mediating role of the designer. I describe potential ways forward for researchers, educators, and students that reposition ethics at the core of the discipline.

Instructional designers have been at the forefront of the scale-up of learning experiences of all kinds over the last century—transitioning our societies from highly local instructional practices to ones that have a shared connection to instructional and learning theories that can be practiced at scale. However, in what has been framed as a rush to “scientize” the discipline of instructional design (Smith & Boling, 2009), some of the core components of what it means to re-shape the world through design have potentially been lost. Chief among these components is perhaps the responsibility of the designer themself in creating new futures, shaping worlds and lives, and sustaining or confronting structural norms that more often disempower or exclude rather than empower and emancipate. Indeed, the models that dominate the field of instructional design rarely include references to the moral and ethical components that are at the center of the experienced pedagogy, and this lack of focus has—for decades—kept scholars and practitioners in our field from questioning and negotiating ethical tensions in the design of learning experiences. 

In this chapter, I will confront this historic lack of attention to ethics in instructional design by focusing on the role of the designer themself in negotiating competing values and norms as a key part of engaging in design work. I will first provide some brief background to describe how the designer’s role intersects with a broader view of design as intentional change and worldbuilding. I will then use two vignettes that describe the intersection of instructional design and a specific category of technologies—facial recognition—to identify relevant values and ethical tensions that instructional designers must recognize and confront. I conclude with some ideas of how the field of instructional design might relocate ethics to its core, impacting the theory and practice of instructional designers in ways that not only acknowledge but also explicitly leverage the making of more ethical and inclusive futures.

Framing the Ethical Landscape of Design Practice

Design is an ethical act whereby we change “existing situations into preferred ones” (Simon, 1996). However, whose world is being shaped and what constitutes a preferred state is contested and value-laden (Willis, 2006). Even while education and instruction have been framed as a moral enterprise (e.g., Durkheim, 2012; Nucci, 2006)—with the learner’s uptake of norms and values as part of their cognitive schema taken as an inherent part of educational praxis—the field of instructional design and technology has unevenly addressed, or even acknowledged, the role of ethics in the design of instructional experiences (see Gray & Boling, 2016, and Yeaman et al., 1994, for syntheses of the nascent interest in ethics in ID across multiple decades).

In my research spanning more than a decade, I have sought to describe how values and matters of ethical concern are manifest in design activity—including work across the spectrum of instructional design, learning experience design, human-computer interaction design, and beyond (Boling et al., 2020; Chivukula et al., 2020; Gray & Boling, 2016; Gray & Chivukula, 2019; Gray et al., 2015). Through this work, my co-authors and I have revealed the subjective and contingent judgments that guide a designer’s practice (Gray et al., 2015), the mediating role of the designer in identifying and responding to ethical design complexity (Gray & Chivukula, 2019), and the practices of designers that often reinforce an imbalance of power between stakeholders and end users through practices such as deceptive design, “asshole design,” or the use of dark patterns (Gray, Chivukula, et al., 2020, 2021; Gray et al., 2018). These studies have revealed what has been known by designers since the dawn of the Industrial Revolution: designers are powerful agents that can use their skills to reshape the world, reinforcing structural inequities, pandering to humanity’s worst excesses, or contributing to emancipatory and socially just design practices (cf., Costanza-Chock, 2020; Papanek, 1971).

While much previous research has focused on situating ethical engagement in relation to common paradigms of ethics (e.g., consequentialist, virtue, deontological)—for instance, through codes of ethics (Adams et al., 2001; Buwert, 2018) or methodologies that are grounded in moral philosophy (e.g., Flanagan & Nissenbaum, 2014; Friedman & Hendry, 2019), in my previous work, I have sought to describe how designers might frame ethics as a core part of their everyday ways of being and acting in service to client needs in a socially responsible manner—which others have described as “everyday ethics” (cf., Halberstam, 1993), building on the pragmatist tradition of ethical engagement that prioritizes both ethical awareness and attention to intentionally reshaping society in accordance with one’s values (Dixon, 2020; Steen, 2015).

Doing “Ethics by Other Means”

As Verbeek (2006) has argued philosophically, and as empirical work by Shilton (2013, 2018) underscores, designers of all sorts engage in ethical reasoning and pragmatic action. However, designers do so in ways that are characteristically different from moral philosophers or those only seeking to theorize what should or ought to be. Instead, designers (re)shape the world through judgments that are always already value-laden—or what Verbeek (2006) describes as “doing ethics by other means”—whether designers are aware of this ethical armature embedded within their work or not.

What does this engagement look like when a designer is ethically aware? And what might a design situation or set of design outcomes look like when awareness and sensitivity to ethical impact are lacking? I will present two brief vignettes of recent contexts of instructional design work, focusing on the integration of specific emerging technologies to illustrate the inscribed values present in designed outcomes and identify opportunities for increasing a designer’s ethical awareness and ability to act. Both vignettes focus on one specific type of technology deployed in the service of learning experiences to allow comparison—namely, the use of facial recognition and computer vision as a surveillance technology. Comparable technology-driven application contexts for learning (e.g., social learning via web-based interactions, engagement through mixed reality, or learning analytics-focused approaches, just to name a few) could be evaluated similarly using this same approach. 

Surveilling Affect and Attention in the Residential Classroom

First, I will describe a vignette from prior to the COVID-19 pandemic that leveraged advances in computer vision. As the capacity to perform facial recognition in real time has grown, this technology has also been applied in educational contexts. More recently, the use of facial recognition and computer vision techniques has evolved beyond mere recognition (which has applications for educational settings in attendance tracking, for instance; Mothwa et al., 2018) to attempt to evaluate student attitudes, emotions, or attention (Barrett et al., 2019; Zaletelj & Košir, 2017).

Early techniques to detect learner characteristics took place in physical learning environments, using detection techniques that included video cameras and, in some cases, Kinect sensors. Facial recognition and evaluation models have been proposed that use these data sources to classify learner behaviors relating to engagement, attention, and emotion. For example, one such proposed system published before the pandemic, known as EmotionCues, used “[a]naly[sis of] students’ emotions from classroom videos [to] help both teachers and parents quickly know the engagement of students in class” (Zeng et al., 2021). These types of detection and analysis techniques have continued to be honed through the pandemic, leading to recent plans for integration into a commercial product called Class offered by Intel and Classroom Technologies1 that will “capture[] images of students’ faces with a computer camera and computer vision technology [on Zoom] and combine[] it with contextual information about what a student is working on at that moment to assess a student’s state of understanding” (Kaye, 2022).

What values were in tension when considering the design and deployment of this system? First, let us consider how these beliefs might emerge in relation to an instructional designer or instructor’s goals of evaluating or characterizing learner attention or understanding:

Second, we can consider beliefs from the perspective of a learner whose attention or emotions may be continuously tracked by such technologies, with or without their knowledge:

Some assumptions by the instructional designer or educator are rooted in learning theory, where the learner’s attention is a critical pre-condition for them to engage in a learning experience and/or construct their knowledge. For instance, the ability of an instructor to visually recognize when students are less attentive might be a trigger to use a different pedagogical strategy (perhaps planned by an instructional designer) that is more engaging. However, some other assumptions relate to what is technically possible or how technical possibility might relate to other aspects of the learning experience. For instance, one could easily move from the belief that tracking the attention of individual learners is important to the assumption that any technology that could scale this assessment from dozens of learners to hundreds might bring pedagogical value. In parallel, a belief that technologies can accurately detect human emotions, attention, or other proxies for “understanding” or “learning” might lead an instructional designer to specify these technologies without anticipating instances where these technologies fail or otherwise lead to inaccurate results. For instance, Barrett et al. (2019) have previously identified numerous assumptions built into flawed models of emotion, including a lack of consideration of context, individual personality, and cultural factors. The values of the designer and learner come into tension around the technological capacity of tracking and the pragmatics of using these technologies to inform the learning experience.
The learner may want to be able to express their choice not to be surveilled, even while they may have little or no agency to make this choice in their learning environment.2 From a more pragmatic perspective, the instructional designer or instructor may recognize that the attention scores produced by the machine learning model are flawed but reason that these scores are “better than nothing.” There are numerous values in tension in this example—some which relate to technological capacity or efficacy, others that relate to learner autonomy versus instructor support, and still others that relate to the surveillance “end state” of learning technologies, which some scholars have openly criticized (Andrejevic & Selwyn, 2020).

This vignette raises several questions about the roles and decision-making capacity of multiple stakeholders in relation to emerging technologies and the kinds of evidence that lead to certain decisions being made. Should a student have recourse if their “attention” is deemed lacking, but this lower attention score relates to a different cultural background, neuroatypical or disability status, or other failures of the tracking technology? How accurate should technologies be for them to be allowed in the classroom? How much control should learners be able to assert over which technologies are used, what data is allowed to be collected about them, and how this data is used? How transparent to the learner should the instructor’s or designer’s use of data extracted through surveillance technologies be when it informs grades or other decisions?

Surveillance in the Home

In the wake of the COVID-19 pandemic, educators sought to pivot their instructional practices to address the realities of “pandemic pedagogy.” Residential instruction, in particular, moved into “emergency remote teaching” mode (Hodges et al., 2020), and instructors quickly sought to identify assessment alternatives that would translate existing proctored testing methods into the student’s home. Of course, proctored tests at a distance were nothing new, but the scale and speed at which this shift took place seldom addressed issues of equity and student autonomy, or took into account how the values inscribed in translating the assessment context into the home would impact the learner’s experience. Articles in the popular press quickly decried this software as intrusive, and students’ experiences with the kinds of behaviors flagged by common face- and gaze-detection software such as Proctorio and Examity precipitated outrage.

What resulted could have been predicted based on prior literature on privacy and education (e.g., Arpaci et al., 2015). Students enrolled in higher learning institutions worldwide were required to download and install highly intrusive software on their personal devices. Software typically required access to a microphone and webcam. Many proctoring “solutions” also required the student to verify that the room was clear of other people and flagged instances where other voices were audible. One anecdote of this tracking at its worst was reported in The Washington Post:

“‘A STUDENT IN 6 MINUTES HAD 776 HEAD AND EYE MOVEMENTS,’ [the instructor] wrote [to a student], adding later, ‘I would hate to have to write you up.’”

[. . .]

One student replied in a group chat with their peers: “How the hell are we [supposed] to control our eyes” (Harwell, 2020).

This tracking occurred during a pandemic where families and friends were frequently locked down in close quarters. Many students did not have adequate access to physical privacy, others were ill while attending class remotely, and still others were experiencing high levels of anxiety as the world seemingly was burning down around them in the biggest health crisis in a century. Adding in additional realities of the pandemic, such as the need to quarantine or isolate, rapidly shifting public health protocols, and the uneven transition to remote learning pedagogies, the use of invasive proctoring software was a recipe for disaster.

While it is easy to view the particular socio-cultural and socio-technical tensions brought about by the pandemic as difficult yet peculiar—issues that could not have been foreseen or mitigated—the reality is somewhat different. Even before the pandemic, some learners did not have access to the types of technology and privacy that were assumed by the proctoring software (Gonzales et al., 2020). Characteristics of assessment that proctoring software providers unquestioningly supported, privileging particular forms of rigor and validity in tightly controlled assessment environments, resulted in inequitable impacts on students learning in the least hospitable environments. These socio-cultural impacts were felt most acutely by those who were intersectionally disadvantaged and disempowered: those living in shared living spaces with many family members and friends, those experiencing homelessness and living in their cars or other ad hoc environs, and those lacking up-to-date digital devices.

What values were in tension?3 Values that are foregrounded in the design of instruction (or, in this case, assessment) are rooted in the beliefs one has about their discipline, their pedagogy, and the nature of student experiences that are deemed most beneficial. First, let us consider how these beliefs might emerge concerning the idea of testing as an assessment method:4

These beliefs point towards values that focus primarily on the pragmatics of instruction, focusing on issues of scale, consistency, and/or tradition. Second, we can consider beliefs from the perspective of a pandemic learner:

These beliefs point towards values such as authenticity, autonomy, or transparency. The values inscribed into the initial design decisions surrounding test-based assessment were potentially problematic—focusing on instructor-centric concerns rather than the student experience or the permanence of learning outcomes—but the intersectional harms of these decisions were potentially minimized by the public nature of the residential classroom, where access to technology devices was more readily ensured. However, when these public assumptions, including the minimization of individual student privacy, shifted and became translated into the student’s home, bedroom, or other living environment, these inscribed values became plainly and transparently inequitable. Should a piece of software or a proctor have the right to know the student’s living situation? Should students, in many cases, not only submit themselves to mandated surveillance but also pay for the privilege of being surveilled? What types of privacy should the student have to give up to be able to participate in mandatory forms of assessment? What boundaries can or should exist in the liminal space between the instructor, instructional environment, and student?

Discussion

While I have provided two examples of explicit surveillance in this chapter to provide a point of focus, many other tactics commonly used by instructional designers to track and evaluate learner progress could also be viewed through a more critical lens. When does the use of learning analytics to track clickstream data at a profoundly detailed level in an LMS, app, or learning module shift from a primary purpose of providing value to the learner to the collection and modeling of data because the stakeholder can? How transparent are these data collection and use methods, and how much control does the learner have over how their data are collected and used? What forms of privacy should learners be guaranteed, and how would they know they had a choice in how their data were collected and used as part of their educational experience? How might deceptive techniques such as dark patterns be used to steer learner behavior and interactions with educational materials? And when might manipulative practices be used to overtly mandate surveillance in contexts where learners have no other recourse—consistent with prior definitions of “asshole designer properties” (Gray, Chivukula, et al., 2020)?

Learning technologies and other outcomes of instructional design practices represent one of the few contexts where users cannot meaningfully consent—where their engagement with instructors or mandated learning modules is already power-laden and where the learner’s voice can often be avoided or overtly ignored. What justice-oriented design practices (Costanza-Chock, 2020; Svihla et al., 2022) might be used to reassert learner autonomy, encouraging consideration of the potential harms and future abuses of educational technologies? How might the field of instructional design and technology consider—at its very core—issues of ethical impact? As Moore (2021) has recently written, the models that are commonly referred to as the theoretical foundation of our field do not adequately explain or support the everyday practices of instructional designers; further, these models rarely address matters of ethical concern, much less make these concerns central to the practice of design. In this sense, our field is far behind others. Papanek (1971) called for the centrality of ethics in industrial design in the 1970s, citing the damage being done in the name of disposable consumer culture. Garland (1964) decried the abuses of graphic designers when marketing to consumers in the 1960s—abuses that Milton Glaser later marked out through Dante-esque steps a designer could consider along a “Road to Hell.”5 Methodologies such as Value-Sensitive Design (Friedman & Hendry, 2019) and Values at Play (Flanagan & Nissenbaum, 2014) have also shaped fields adjacent to instructional design for decades. So, what do we need to do as instructional design scholars and practitioners to “catch up” and relocate ethics at the center of our practice?

I will describe two foundational elements of ethics-focused practice that instructional design educators, students, and practitioners should consider: 1) identifying values and matters of ethical concern as always already existing as a part of instructional design work and 2) harnessing and languaging ethics to center design conversations on ethical concerns with attention to opportunities for action.

Values are Mediated by the Designer

The issues foregrounded through an analysis of surveillance technologies in instructional design allow initial access to the values implicit in all learning environments. Critical pedagogy scholars have described some of these facets of the learning experience as the “hidden curriculum” (Gray, Parsons, et al., 2020; Snyder, 1970; Volpi, 2020)—describing things that are learned even if they are not explicitly taught. Thus, reconstructive techniques such as those used in the vignettes above can be used as one entry point toward understanding the broader structural and socio-cultural implications of instructional design decisions at the broadest scales. 

However, value inscription and ethical impact also shape the most mundane instructional design decisions. These tensions relate to what Vickers (1984) calls one’s appreciative system, which Schön (Boling et al., 2020) used to describe how designers frame the design situation, consider solutions, and then use the underlying appreciative assumptions of those solutions to iterate and move the design process forward. This reliance upon an appreciative system—that incorporates a set and hierarchy of values and a particular point of view—is an inescapable part of design work that can only be taken on by a designer who is acting based on their moral judgments and design character (Boling et al., 2020). To address this complex and ethically nuanced space, the designer must use their judgment to understand both the inherent complexity of the design context (what Stolterman, 2008 refers to as “design complexity”) and the ethical character of that space that makes some decisions more preferable to certain stakeholders under certain conditions (what Gray & Chivukula, 2019 refer to as “ethical design complexity”). Ethical design complexity foregrounds both the values that are “in play” as part of the design context and the role of the designer in manipulating these values as a core part of the design process—“complex and choreographed arrangements of ethical considerations that the designer continuously mediates through the lens of their organization, individual practices, and ethical frameworks” (Gray & Chivukula, 2019, p. 9). Instructional designers must be equipped to recognize this inherent ethical design complexity, and rather than scientize or abstract this ethical responsibility, embrace its contingency and subjectivity on behalf of the learners and society they wish to support.

Values (and Methods That Engage Values) Should Be a Key Element of Doing and Talking About Design Work

Even designers with the best and most altruistic intentions can design outcomes that are directly harmful to learners or produce societal impacts that reproduce inequities.6 As an entry point to considering these harms, designers should consider using value-sensitive methodologies such as those proposed by Friedman and Hendry (2019) or broader and more flexible use of design methods that engage designers in considering ethical impact across various dimensions. My colleagues and I have collected and organized a set of ethics-focused methods (https://everydayethics.uxp2.com/methods), and further details about how we created this collection are available in a companion research article (Chivukula et al., 2021). As part of our collection and analysis process, we have identified multiple “intentions” that could drive more ethically centered practice, including “I want to have additional information about my users”; “I want to identify appropriate values to drive my design work”; “I want to figure out how to break my design work”; “I want to evaluate my design outcomes”; “I want to apply specific values in my design work”; “I want to align my team in addressing difficult decisions”; and “I want to better understand my responsibility as a designer.” Many of these intentions could be used to scaffold conversations similar to those raised in the vignettes above that relate to the ethical character of key design decisions, expectations of social impact, or identification of direct or indirect harms to learners or other stakeholders.

We have considered the case of the careful designer who is concerned about societal impact and might find substantial value in enhancing their practices through ethically-centered design methods. But designers—knowingly or unwittingly—can also inscribe harmful practices into their designed outcomes that take advantage of knowledge of human behavior. These tactics are commonly known as dark patterns—design strategies that provide more value to the stakeholder or shareholder than the user (Gray, Chen et al., 2021; Gray et al., 2018; Gunawan et al., 2021; Mathur et al., 2021). More hostile and transparent forms of manipulation or coercion have also been captured under the label of asshole designer strategies (Gray, Chivukula et al., 2020), which explicitly diminish user autonomy. As framed previously in the two vignettes that described the use of facial recognition to augment learning experiences, some harms can be directly traced back to beliefs about instruction or assessment that may be inequitable or otherwise ethically problematic. However, other deceptive tactics might be less easily identified initially, steering or nudging the learner but perhaps not forcing, manipulating, or coercing them. These learner and designer agency imbalances are an ideal space for further investigation by instructional design scholars. When is it acceptable for an instructional designer to use sneaking, nudging, nagging, or other strategies to subtly encourage learners to do things they might not otherwise do?7 How is the designer’s knowledge of learning conditions, learner profiles, and human psychology used to create more transparent spaces where autonomy and emancipation emerge as primary inscribed values? What commitments do instructional designers have to negotiate the complex tensions among different stakeholders, and what values should be central to the praxis of instructional design?

Conclusion

In this chapter, I have described two vignettes that reveal ethical tensions in the design of instructional experiences, identifying opportunities for competing sets of values to be articulated and used to make appropriate and ethically-centered design decisions. Leveraging these vignettes, I posit that instructional design educators, students, and practitioners should attend to the value-laden nature of design work by increasing their awareness of how the actions of a designer always already mediate ethics as a central part of the design context. Since this is the case, designers should attend to values as a key means of doing and discussing their design work. 

References 

Zeng, H., Shu, X., Wang, Y., Wang, Y., Zhang, L., Pong, T.-C., & Qu, H. (2021). EmotionCues: Emotion-oriented visual summarization of classroom videos. IEEE Transactions on Visualization and Computer Graphics, 27(7), 3168–3181. https://doi.org/10.1109/TVCG.2019.2963659

Footnotes

1 https://www.class.com/

2 For instance, see equity issues that emerged in relation to “camera on” policies during the pandemic that left some learners with limited ability to express their privacy preferences (Castelli & Sarvary, 2021).

3 See Friedman and Kahn (2003) for a further discussion of human values, including how values become part of the fabric of designer interactions through embodied, exogenous, and interactional positions.

4 Many of these beliefs were discussed and espoused by educators throughout the pandemic on the Facebook group “Pandemic Pedagogy,” which at the time of writing has over 32,000 members.

5 Glaser’s original “road to hell” steps along with contemporary interpretations for digital product designers are available at https://dropbox.design/article/the-new-12-steps-on-the-road-to-product-design-hell.

6 As a classic example in the context of educational technologies, consider the problematic legacy of the “One Laptop Per Child” initiative (Ames, 2019; Warschauer & Ames, 2010).

7 See Gray, Chivukula, et al. (2021) for a description of deceptive roles that designers can take on when attempting to resolve tensions between user agency and design goals.

Colin M. Gray

Indiana University

Colin M. Gray is an Associate Professor in the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington, where they are Director of the Human-Computer Interaction design (HCI/d) program. They hold appointments as Guest Professor at Beijing Normal University and Visiting Researcher at Newcastle University. They have worked as an art director, contract designer, and trainer, and their involvement in design work informs their research on design activity and how design capability is learned. Colin’s research focuses on the ways in which the pedagogy and practice of designers informs the development of design ability, particularly in relation to ethics, design knowledge, and learning experience. They have consulted on multiple legal cases relating to dark patterns and data protection and work with regulatory bodies and non-profit organizations to increase awareness and action relating to deceptive and manipulative design practices. Colin’s research and engagement activities cross multiple disciplines, including human-computer interaction, instructional design and technology, law and policy, design theory and education, and engineering and technology education.

This content is provided to you freely by EdTech Books.

Access it online or download it at https://edtechbooks.org/applied_ethics_idt/ethical_mediators.