Instructional designers have been instrumental in shaping learning experiences for almost a century—contributing to perceptions of what instructional experiences should be considered valuable, worthwhile, and rigorous. However, instructional theories and models of instructional design practice have rarely considered the ethical role of the designer in creating equitable and inclusive futures. In this chapter, I use two vignettes of instructional design work framed by facial recognition technologies to locate ethical tensions between designers and learners, identifying opportunities to leverage the mediating role of the designer. I describe potential ways forward for researchers, educators, and students that reposition ethics at the core of the discipline.
Instructional designers have been at the forefront of the scale-up of learning experiences of all kinds over the last century—transitioning our societies from highly local instructional practices to ones that share a connection to instructional and learning theories and can be practiced at scale. However, in what has been framed as a rush to “scientize” the discipline of instructional design (Smith & Boling, 2009), some of the core components of what it means to re-shape the world through design have potentially been lost. Chief among these components is perhaps the responsibility of the designer themself in creating new futures, shaping worlds and lives, and sustaining or confronting structural norms that more often disempower or exclude than empower and emancipate. Indeed, the models that dominate the field of instructional design rarely include references to the moral and ethical components at the center of the experienced pedagogy, and this lack of focus has—for decades—kept scholars and practitioners in our field from questioning and negotiating ethical tensions in the design of learning experiences.
In this chapter, I will confront this historic lack of attention to ethics in instructional design by focusing on the role of the designer themself in negotiating competing values and norms as a key part of engaging in design work. I will first provide some brief background to describe how the designer's role intersects with a broader view of design as intentional change and worldbuilding. I will then use two vignettes that describe the intersection of instructional design and a specific category of technologies—facial recognition—to identify relevant values and ethical tensions that instructional designers must recognize and confront. I conclude with some ideas of how the field of instructional design might relocate ethics to its core, impacting the theory and practice of instructional designers in ways that not only acknowledge but also explicitly leverage the making of more ethical and inclusive futures.
Framing the Ethical Landscape of Design Practice
Design is an ethical act whereby we change “existing situations into preferred ones” (Simon, 1996). However, whose world is being shaped and what constitutes a preferred state are contested and value-laden questions (Willis, 2006). Even while education and instruction have been framed as a moral enterprise (e.g., Durkheim, 2012; Nucci, 2006), with the learner’s uptake of norms and values into their cognitive schema taken as an inherent part of educational praxis, the field of instructional design and technology has unevenly addressed—or even acknowledged—the role of ethics in the design of instructional experiences (see Gray & Boling, 2016, and Yeaman et al., 1994, for syntheses of the nascent interest in ethics in ID across multiple decades).
In my research spanning more than a decade, I have sought to describe how values and matters of ethical concern are manifest in design activity—including work across the spectrum of instructional design, learning experience design, human-computer interaction design, and beyond (Boling et al., 2020; Chivukula et al., 2020; Gray & Boling, 2016; Gray & Chivukula, 2019; Gray et al., 2015). Through this work, my co-authors and I have revealed the subjective and contingent judgments that guide a designer’s practice (Gray et al., 2015), the mediating role of the designer in identifying and responding to ethical design complexity (Gray & Chivukula, 2019), and the practices of designers that often reinforce an imbalance of power between stakeholders and end users through deceptive design, “asshole design,” or the use of dark patterns (Gray, Chivukula, et al., 2020, 2021; Gray et al., 2018). These studies have revealed what designers have known since the dawn of the Industrial Revolution: designers are powerful agents who can use their skills to reshape the world, reinforcing structural inequities, pandering to humanity’s worst excesses, or contributing to emancipatory and socially just design practices (cf. Costanza-Chock, 2020; Papanek, 1971).
Much previous research has focused on situating ethical engagement in relation to common paradigms of ethics (e.g., consequentialist, virtue, deontological)—for instance, through codes of ethics (Adams et al., 2001; Buwert, 2018) or methodologies grounded in moral philosophy (e.g., Flanagan & Nissenbaum, 2014; Friedman & Hendry, 2019). In my previous work, in contrast, I have sought to describe how designers might frame ethics as a core part of their everyday ways of being and acting in service to client needs in a socially responsible manner. Others have described this stance as “everyday ethics” (cf. Halberstam, 1993), building on the pragmatist tradition of ethical engagement that prioritizes both ethical awareness and attention to intentionally reshaping society in accordance with one’s values (Dixon, 2020; Steen, 2015).
Doing "Ethics by Other Means"
As Verbeek (2006) has argued philosophically, and as empirical work by Shilton (2013, 2018) underscores, designers of all sorts engage in ethical reasoning and pragmatic action. However, designers do so in ways that are characteristically different from moral philosophers or those only seeking to theorize what should or ought to be. Instead, designers (re)shape the world through judgments that are always already value-laden—or what Verbeek (2006) describes as “doing ethics by other means”—whether designers are aware of this ethical armature embedded within their work or not.
What does this engagement look like when a designer is ethically aware? And what might a design situation or set of design outcomes look like when awareness and sensitivity to ethical impact are lacking? I will present two brief vignettes of recent contexts of instructional design work, focusing on the integration of specific emerging technologies to illustrate the inscribed values present in designed outcomes and identify opportunities for increasing a designer’s ethical awareness and ability to act. Both vignettes focus on one specific type of technology deployed in the service of learning experiences to allow comparison—namely, the use of facial recognition and computer vision as a surveillance technology. Comparable technology-driven application contexts for learning (e.g., social learning via web-based interactions, engagement through mixed reality, or learning analytics-focused approaches, just to name a few) could be evaluated using this same approach.
Surveilling Affect and Attention in the Residential Classroom
First, I will describe a vignette from before the COVID-19 pandemic that leveraged advances in computer vision. As the capacity to perform facial recognition in real time has grown, this technology has also been applied in educational contexts. More recently, the use of facial recognition and computer vision techniques has evolved beyond mere recognition (which has applications in educational settings such as attendance tracking; Mothwa et al., 2018) to attempts to evaluate student attitudes, emotions, or attention (Barrett et al., 2019; Zaletelj & Košir, 2017).
Early techniques to detect learner characteristics were deployed in physical learning environments, using detection hardware that included video cameras and, in some cases, Kinect sensors. Facial recognition and evaluation models have been proposed that use these data sources to classify learner behaviors relating to engagement, attention, and emotion. For example, one such system published before the pandemic, known as EmotionCues, used “[a]naly[sis of] students’ emotions from classroom videos [to] help both teachers and parents quickly know the engagement of students in class” (Zeng et al., 2021). These detection and analysis techniques have continued to be honed through the pandemic, leading to recent plans for integration into a commercial product called Class, offered by Intel and Classroom Technologies[1], that will “capture[] images of students’ faces with a computer camera and computer vision technology [on Zoom] and combine[] it with contextual information about what a student is working on at that moment to assess a student’s state of understanding” (Kaye, 2022).
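Before turning to the values at stake, it is worth seeing how quickly such judgments become executable code. Below is a deliberately naive sketch, in Python, of an “attention” scorer of the general kind these systems embody. It is not EmotionCues, Class, or any actual product, whose models are proprietary and far more elaborate; the scoring rule here (a frontal face detected in a frame counts as “attentive”) is my own illustrative assumption. Even this minimal version inscribes contestable beliefs about what attention is and how it can be seen.

```python
# A deliberately naive "attention" scorer, for illustration only.
# Requires: pip install opencv-python
import cv2

# OpenCV's stock frontal-face detector only fires when a face is roughly
# camera-facing -- the first inscribed assumption: facing the camera is
# treated as equivalent to paying attention.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def attention_score(frames):
    """Return the fraction of frames in which a frontal face is detected.

    Further inscribed assumptions: detector failures (poor lighting,
    occlusion, head pose, documented demographic biases) silently count
    against the learner; attention is meaningfully reducible to a single
    number; and what counts as "attentive enough" is left to whoever
    reads the score.
    """
    attentive = 0
    for frame in frames:  # frames: a list of BGR images from a camera feed
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            attentive += 1
    return attentive / max(len(frames), 1)
```

Every design choice in this sketch, from the detector to the denominator, is a value judgment made on behalf of the learner.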
What values were in tension in the design and deployment of these systems? First, let us consider beliefs that might underpin an instructional designer’s or instructor’s goals of evaluating or characterizing learner attention or understanding:
- Visible learner attention is critical to the efficacy of learning experiences.
- Learner attention can be rigorously tracked and evaluated through facial recognition technologies.
- Tracking the attention of individual learners in large groups is important to provide customized learning or tutoring.
- Visible emotions can indicate a learner’s level of understanding (as a proxy for learning).
- Even if emotional or attention tracking is knowingly flawed, it is better than nothing.
Second, we can consider beliefs from the perspective of a learner whose attention or emotions may be continuously tracked by such technologies, with or without their knowledge:
- I want to be valued as an individual.
- I want to have control over how I am perceived by others.
- I want to be aware of what data is being collected about me and how these data might be used to inform my learning experience.
- I want to be able to say no to being surveilled as part of my learning experience.
- I am okay with being tracked by video, but I want to know how the instructor uses these data or if they relate to my participation grade.
Some assumptions by the instructional designer or educator are rooted in learning theory, where the learner’s attention is a critical precondition for engaging in a learning experience and/or constructing knowledge. For instance, an instructor’s ability to visually recognize when students are less attentive might be a trigger to use a different, more engaging pedagogical strategy (perhaps planned by an instructional designer). Other assumptions, however, relate to what is technically possible, or to how technical possibility might relate to other aspects of the learning experience. For instance, one could easily move from the belief that tracking the attention of individual learners is important to the assumption that any technology that could scale this assessment from dozens of learners to hundreds might bring pedagogical value. In parallel, a belief that technologies can accurately detect human emotions, attention, or other proxies for “understanding” or “learning” might lead an instructional designer to specify these technologies without anticipating instances where they fail or otherwise produce inaccurate results. Barrett et al. (2019), for instance, have identified numerous assumptions built into flawed models of emotion, including a lack of consideration of context, individual personality, and cultural factors.

The values of the designer and learner come into tension around the technological capacity of tracking and the pragmatics of using these technologies to inform the learning experience. The learner may want to express a choice not to be surveilled, even while they may have little or no agency to make this choice in their learning environment.[2] From a more pragmatic perspective, the instructional designer or instructor may recognize that the attention scores produced by the machine learning model are flawed but reason that these scores are “better than nothing.” There are thus numerous values in tension in this example—some relating to technological capacity or efficacy, others to learner autonomy versus instructor support, and still others to the surveillance “end state” of learning technologies, which some scholars have openly criticized (Andrejevic & Selwyn, 2020).
This vignette raises several questions about the roles and decision-making capacity of multiple stakeholders in relation to emerging technologies and the kinds of evidence that lead to certain decisions being made. Should a student have recourse if their “attention” is deemed lacking, but this lower attention score relates to a different cultural background, neuroatypical or disability status, or other failures of the tracking technology? How accurate should technologies have to be before they are allowed in the classroom? How much control should learners be able to assert over which technologies are used, what data is allowed to be collected about them, and how this data is used? And how transparent to the learner should the instructor’s or designer’s use of data extracted through surveillance technologies be when it informs grades or other decisions?
Surveillance in the Home
When the COVID-19 pandemic struck, educators sought to pivot their instructional practices to address the realities of “pandemic pedagogy.” Residential instruction, in particular, moved into “emergency remote teaching” mode (Hodges et al., 2020), and instructors quickly sought assessment alternatives that would translate existing proctored testing methods into the student’s home. Of course, proctored tests at a distance were nothing new—but the scale and speed of this shift left little room to address issues of equity and student autonomy, or to question the assumptions inscribed in proctored assessment once it was translated into a new context. Articles in the popular press quickly decried this software as intrusive, and students’ experiences of the kinds of behaviors flagged by common face- and gaze-detection software such as Proctorio and Examity precipitated outrage.
What resulted could have been predicted based on prior literature on privacy and education (e.g., Arpaci et al., 2015). Students enrolled in higher learning institutions worldwide were required to download and install highly intrusive software on their personal devices. Software typically required access to a microphone and webcam. Many proctoring “solutions” also required the student to verify that the room was clear of other people and flagged instances where other voices were audible. One anecdote of this tracking at its worst was reported in The Washington Post:
“‘A STUDENT IN 6 MINUTES HAD 776 HEAD AND EYE MOVEMENTS,’ [the instructor] wrote [to a student], adding later, ‘I would hate to have to write you up.’”
[…]
One student replied in a group chat with their peers: “How the hell are we [supposed] to control our eyes” (Harwell, 2020).
This tracking occurred during a pandemic in which families and friends were frequently locked down in close quarters. Many students did not have adequate access to physical privacy, others were ill while attending class remotely, and still others were experiencing high levels of anxiety as the world seemingly burned down around them in the biggest health crisis in a century. Added to other realities of the pandemic, such as the need to quarantine or isolate, rapidly shifting public health protocols, and an uneven transition to remote learning pedagogies, the use of invasive proctoring software was a recipe for disaster.
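The arbitrariness of flags like “776 head and eye movements” becomes easier to see when the underlying logic is sketched out. The following is a hypothetical, deliberately simplified reconstruction, not the code of Proctorio, Examity, or any real product; the gaze_points input, the distance threshold, and the review cutoff are all invented for illustration. What it shows is how a context-free count, combined with an arbitrary limit, can turn ordinary bodily movement into an accusation.

```python
# A hypothetical, deliberately simplified sketch of movement-count flagging.
# Real proctoring systems are proprietary and far more elaborate; every
# name, threshold, and cutoff below is invented for illustration.
from math import dist

def count_movements(gaze_points, threshold=0.05):
    """Count frame-to-frame gaze shifts larger than `threshold`.

    gaze_points: (x, y) gaze estimates, one per sampled frame, normalized
    to screen coordinates. Inscribed assumptions: every shift is equally
    suspicious; a single threshold fits every body, room, and camera; and
    the reasons for movement (a sibling walking past, a neurological
    condition, simply thinking) are invisible to the counter.
    """
    return sum(
        1 for a, b in zip(gaze_points, gaze_points[1:]) if dist(a, b) > threshold
    )

def flag_for_review(gaze_points, limit=100):
    # An arbitrary cutoff converts a continuous, context-free count into a
    # binary judgment; the value judgment hides inside `limit`.
    return count_movements(gaze_points) > limit
```

Nothing in this logic can distinguish a wandering mind from a cramped apartment, which is precisely the problem.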
While it is easy to view the particular socio-cultural and socio-technical tensions brought about by the pandemic as difficult yet singular—issues that could not have been foreseen or mitigated—the reality is somewhat different. Even before the pandemic, some learners did not have access to the types of technology and privacy that proctoring software assumed (Gonzales et al., 2020). Forms of rigor and validity that proctoring software providers supported unquestioningly, premised on specific controlled assessment environments, resulted in inequitable impacts on students learning in the least hospitable environments. These socio-cultural impacts were felt most acutely by those who were intersectionally disadvantaged and disempowered: those living in shared spaces with many family members and friends, those experiencing homelessness and living in their cars or other ad hoc environs, and those lacking up-to-date digital devices.
What values were in tension[3]? Values that are foregrounded in the design of instruction (or, in this case, assessment) are rooted in the beliefs one has about their discipline, their pedagogy, and the nature of student experiences that are deemed most beneficial. First, let us consider how these beliefs might emerge concerning the idea of testing as an assessment method[4]:
- Testing is the best mode of assessment for certain types of content knowledge.
- Testing is the most practical means of assessing student mastery of content knowledge in large classrooms.
- Testing is flawed, but there is no time to pivot to other assessment forms.
- Testing is the only common assessment method in my discipline.
- Students’ inability to “cheat” or use outside sources of knowledge is a key criterion for assessment rigor and validity.
- Evaluation of external signals (e.g., audio, video, gaze) can be used to detect common indicators of cheating.
These beliefs point towards values that center primarily on the pragmatics of instruction: issues of scale, consistency, and/or tradition. Second, we can consider beliefs from the perspective of a pandemic learner:
- I am just trying to survive.
- I want to feel valued as a person.
- I want to be judged by what I can do in an authentic setting.
These beliefs point towards values such as authenticity, autonomy, or transparency. The values inscribed into the initial design decisions surrounding test-based assessment were already potentially problematic—focusing on instructor-centric concerns rather than the student experience or the permanence of learning outcomes—but the intersectional harms of these decisions were potentially minimized by the public nature of the residential classroom, where access to technology devices was more readily ensured. However, when these public assumptions, including the minimization of individual student privacy, were translated into the student’s home, bedroom, or other living environment, the inscribed values became transparently inequitable. Should a piece of software or a proctor have the right to know the student’s living situation? Should students not only have to submit to mandated surveillance but also, in many cases, pay for the privilege of being surveilled? What types of privacy should the student have to give up to be able to participate in mandatory forms of assessment? What boundaries can or should exist in the liminal space between the instructor, instructional environment, and student?
Discussion
While I have provided two examples of explicit surveillance in this chapter to provide a point of focus, many other tactics commonly used by instructional designers to track and evaluate learner progress could also be viewed through a more critical lens. When does the use of learning analytics to track clickstream data at a profoundly detailed level in an LMS, app, or learning module shift from a primary purpose of providing value to the learner to the collection and modeling of data because the stakeholder can? How transparent are these data collection and use methods, and how much control does the learner have over how their data are collected and used? What forms of privacy should learners be guaranteed, and how would they know they had a choice in how their data were collected and used as part of their educational experience? How might deceptive techniques such as dark patterns be used to steer learner behavior and interactions with educational materials? And when might manipulative practices be used to overtly mandate surveillance in contexts where learners have no other recourse—consistent with prior definitions of “asshole designer properties” (Gray, Chivukula, et al., 2020)?
Learning technologies and other outcomes of instructional design practices represent one of the few contexts where users frequently cannot meaningfully consent—where their engagement with instructors or mandated learning modules is already power-laden and where the learner’s voice can often be sidestepped or overtly ignored. What justice-oriented design practices (Costanza-Chock, 2020; Svihla et al., 2022) might be used to reassert learner autonomy, encouraging consideration of the potential harms and future abuses of educational technologies? How might the field of instructional design and technology consider—at its very core—issues of ethical impact? As Moore (2021) has recently written, the models commonly referred to as the theoretical foundation of our field do not adequately explain or support the everyday practices of instructional designers; further, these models rarely address matters of ethical concern, much less make these concerns central to the practice of design. In this sense, our field is far behind others. Papanek (1971) called for the centrality of ethics in industrial design in the 1970s, citing the damage being done in the name of disposable consumer culture. Garland (1964) decried the abuses of graphic designers marketing to consumers in the 1960s, abuses that Milton Glaser later marked out through Dante-esque steps that a designer could consider along a “Road to Hell”[5]. Methodologies such as Value Sensitive Design (Friedman & Hendry, 2019) and Values at Play (Flanagan & Nissenbaum, 2014) have also shaped fields adjacent to instructional design for decades. So, what do we need to do as instructional design scholars and practitioners to “catch up” and re-locate ethics at the center of our practice?
I will describe two foundational elements of ethics-focused practice that instructional design educators, students, and practitioners should consider: 1) identifying values and matters of ethical concern as always already existing as a part of instructional design work and 2) harnessing and languaging ethics to center design conversations on ethical concerns with attention to opportunities for action.
Values are Mediated by the Designer
The issues foregrounded through an analysis of surveillance technologies in instructional design allow initial access to the values implicit in all learning environments. Critical pedagogy scholars have described some of these facets of the learning experience as the “hidden curriculum” (Gray, Parsons, et al., 2020; Snyder, 1970; Volpi, 2020)—describing things that are learned even if they are not explicitly taught. Thus, reconstructive techniques such as those used in the vignettes above can serve as one entry point toward understanding the structural and socio-cultural implications of instructional design decisions at the broadest scales.
However, value inscription and ethical impact also shape the most mundane instructional design decisions. These tensions relate to what Vickers (1984) calls one’s appreciative system, which Schön (1984) used to describe how designers frame the design situation, consider solutions, and then use the underlying appreciative assumptions of those solutions to iterate and move the design process forward. This reliance upon an appreciative system—one that incorporates a set and hierarchy of values and a particular point of view—is an inescapable part of design work that can only be taken on by a designer acting on their moral judgments and design character (Boling et al., 2020). To address this complex and ethically nuanced space, the designer must use their judgment to understand both the inherent complexity of the design context (what Stolterman, 2008 refers to as “design complexity”) and the ethical character of that space, which makes some decisions more preferable to certain stakeholders under certain conditions (what Gray & Chivukula, 2019 refer to as “ethical design complexity”). Ethical design complexity foregrounds both the values that are “in play” as part of the design context and the role of the designer in manipulating these values as a core part of the design process—“complex and choreographed arrangements of ethical considerations that the designer continuously mediates through the lens of their organization, individual practices, and ethical frameworks” (Gray & Chivukula, 2019, p. 9). Instructional designers must be equipped to recognize this inherent ethical design complexity and, rather than scientize or abstract this ethical responsibility, embrace its contingency and subjectivity on behalf of the learners and society they wish to support.
Values (and Methods That Engage Values) Should Be a Key Element of Doing and Talking About Design Work
Even designers with the best and most altruistic intentions can design outcomes that are directly harmful to learners or produce societal impacts that reproduce inequities.[6] As an entry point to considering these harms, designers should consider using value-sensitive methodologies such as those proposed by Friedman and Hendry (2019), or broader and more flexible design methods that engage designers in considering ethical impact across various dimensions. My colleagues and I have collected and organized a set of ethics-focused methods (https://everydayethics.uxp2.com/methods), and further details about how we created this collection are available in a companion research article (Chivukula et al., 2021). As part of our collection and analysis process, we identified multiple “intentions” that could drive more ethically centered practice, including: I want to have additional information about my users; I want to identify appropriate values to drive my design work; I want to figure out how to break my design work; I want to evaluate my design outcomes; I want to apply specific values in my design work; I want to align my team in addressing difficult decisions; and I want to better understand my responsibility as a designer. Many of these intentions could be used to scaffold conversations similar to those raised in the vignettes above, relating to the ethical character of key design decisions, expectations of social impact, or the identification of direct or indirect harms to learners or other stakeholders.
We have considered the case of the careful designer who is concerned about societal impact and might find substantial value in enhancing their practices through ethically-centered design methods. But designers—knowingly or unwittingly—can also inscribe harmful practices into their designed outcomes that take advantage of knowledge of human behavior. These tactics are commonly known as dark patterns—design strategies that provide more value to the stakeholder or shareholder than the user (Gray, Chen, et al., 2021; Gray et al., 2018; Gunawan et al., 2021; Mathur et al., 2021). More hostile and openly visible forms of manipulation or coercion have also been captured under the label of asshole designer strategies (Gray, Chivukula, et al., 2020), which explicitly diminish user autonomy. As framed in the two vignettes describing the use of facial recognition to augment learning experiences, some harms can be directly traced back to beliefs about instruction or assessment that may be inequitable or otherwise ethically problematic. However, other deceptive tactics might be less easily identified at first, steering or nudging the learner but perhaps not forcing, manipulating, or coercing them. These imbalances between learner and designer agency are an ideal space for further investigation by instructional design scholars. When is it acceptable for an instructional designer to use sneaking, nudging, nagging, or other strategies to subtly encourage learners to do things they might not otherwise do?[7] How is the designer’s knowledge of learning conditions, learner profiles, and human psychology used to create more transparent spaces where autonomy and emancipation emerge as primary inscribed values? What commitments do instructional designers have to negotiate the complex tensions among different stakeholders, and what values should be central to the praxis of instructional design?
Conclusion
In this chapter, I have described two vignettes that reveal ethical tensions in the design of instructional experiences, identifying opportunities for competing sets of values to be articulated and used to make appropriate and ethically-centered design decisions. Leveraging these vignettes, I posit that instructional design educators, students, and practitioners should attend to the value-laden nature of design work by increasing their awareness of how the actions of a designer always already mediate ethics as a central part of the design context. Since this is the case, designers should attend to values as a key means of doing and discussing their design work.
References
Adams, J. S., Tashchian, A., & Shore, T. H. (2001). Codes of ethics as signals for ethical behavior. Journal of Business Ethics, 29(3), 199–211. https://doi.org/10.1023/A:1026576421399
Ames, M. G. (2019). The charisma machine: The life, death, and legacy of One Laptop per Child. MIT Press.
Andrejevic, M., & Selwyn, N. (2020). Facial recognition technology in schools: critical questions and concerns. Learning, Media and Technology, 45(2), 115–128. https://doi.org/10.1080/17439884.2020.1686014
Arpaci, I., Kilicer, K., & Bardakci, S. (2015). Effects of security and privacy concerns on educational use of cloud services. Computers in Human Behavior, 45, 93–98. https://doi.org/10.1016/j.chb.2014.11.075
Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychological Science in the Public Interest, 20(1), 1–68. https://doi.org/10.1177/1529100619832930
Boling, E., Gray, C. M., & Smith, K. M. (2020, August). Educating for design character in higher education: Challenges in studio pedagogy. Proceedings of the Design Research Society. https://doi.org/10.21606/drs.2020.120
Buwert, P. (2018, June). Examining the professional codes of design organisations. Proceedings of the Design Research Society. https://doi.org/10.21606/dma.2017.493
Castelli, F. R., & Sarvary, M. A. (2021). Why students do not turn on their video cameras during online classes and an equitable and inclusive plan to encourage them to do so. Ecology and Evolution, 11(8), 3565–3576. https://doi.org/10.1002/ece3.7123
Chivukula, S. S., Li, Z., Pivonka, A. C., Chen, J., & Gray, C. M. (2021). Surveying the landscape of ethics-focused design methods. arXiv. http://arxiv.org/abs/2102.08909
Chivukula, S. S., Watkins, C. R., Manocha, R., Chen, J., & Gray, C. M. (2020). Dimensions of UX practice that shape ethical awareness. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3313831.3376459
Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. MIT Press.
Dixon, B. S. (2020). Dewey and design. Springer International Publishing. https://doi.org/10.1007/978-3-030-47471-3
Durkheim, É. (2012). Moral education. Courier Corporation.
Flanagan, M., & Nissenbaum, H. (2014). Values at play in digital games. MIT Press.
Friedman, B., & Hendry, D. G. (2019). Value sensitive design: Shaping technology with moral imagination. MIT Press.
Friedman, B., & Kahn, P. H., Jr. (2003). Human values, ethics, and design. In J. A. Jacko & A. Sears (Eds.), The human-computer interaction handbook (pp. 1223–1248). Lawrence Erlbaum Associates.
Garland, K. (1964). First things first manifesto. Design is history. http://www.designishistory.com/1960/first-things-first/
Gonzales, A. L., McCrory Calarco, J., & Lynch, T. (2020). Technology problems and student achievement gaps: A validation and extension of the technology maintenance construct. Communication Research, 47(5), 750–770. https://doi.org/10.1177/0093650218796366
Gray, C. M., & Boling, E. (2016). Inscribing ethics and values in designs for learning: a problematic. Educational Technology Research and Development, 64(5), 969–1001. https://doi.org/10.1007/s11423-016-9478-x
Gray, C. M., Chen, J., Chivukula, S. S., & Qu, L. (2021). End user accounts of dark patterns as felt manipulation. Proceedings of the ACM on Human-Computer Interaction, 5. https://doi.org/10.1145/3479516
Gray, C. M., & Chivukula, S. S. (2019). Ethical mediation in UX practice. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–11. https://doi.org/10.1145/3290605.3300408
Gray, C. M., Chivukula, S. S., & Lee, A. (2020). What kind of work do “asshole designers” create? Describing properties of ethical concern on Reddit. Proceedings of the 2020 ACM Designing Interactive Systems Conference, 61–73. https://doi.org/10.1145/3357236.3395486
Gray, C. M., Chivukula, S. S., Melkey, K., & Manocha, R. (2021). Understanding “dark” design roles in computing education. Proceedings of the 17th ACM Conference on International Computing Education Research. https://doi.org/10.1145/3446871.3469754
Gray, C. M., Dagli, C., Demiral-Uzan, M., Ergulec, F., Tan, V., Altuwaijri, A. A., Gyabak, K., Hilligoss, M., Kizilboga, R., Tomita, K., & Boling, E. (2015). Judgment and instructional design: how ID practitioners work in practice. Performance Improvement Quarterly, 28(3), 25–49. https://doi.org/10.1002/piq.21198
Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The dark (patterns) side of UX design. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 534:1–534:14. https://doi.org/10.1145/3173574.3174108
Gray, C. M., Parsons, P., & Toombs, A. L. (2020). Building a holistic design identity through integrated studio education. In B. Hokanson, G. Clinton, A. A. Tawfik, A. Grincewicz, & M. Schmidt (Eds.), Educational technology beyond content: A new focus for learning (pp. 43–55). Springer International Publishing. https://doi.org/10.1007/978-3-030-37254-5_4
Gunawan, J., Pradeep, A., Choffnes, D., Hartzog, W., & Wilson, C. (2021). A comparative study of dark patterns across web and mobile modalities. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1–29. https://doi.org/10.1145/3479521
Halberstam, J. (1993). Everyday ethics: Inspired solutions to real-life dilemmas. Penguin Books.
Harwell, D. (2020, November 12). Cheating-detection companies made millions during the pandemic. Now students are fighting back. The Washington Post. https://www.washingtonpost.com/technology/2020/11/12/test-monitoring-student-revolt/
Hodges, C., Moore, S., Lockee, B., Trust, T., & Bond, A. (2020). The difference between emergency remote teaching and online learning. EDUCAUSE Review.
Kaye, K. (2022, April 17). Intel thinks its AI knows what students think and feel in class. Protocol. https://www.protocol.com/enterprise/emotion-ai-school-intel-edutech
Mathur, A., Kshirsagar, M., & Mayer, J. (2021). What makes a dark pattern... dark? Design attributes, normative considerations, and measurement methods. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–18. https://doi.org/10.1145/3411764.3445610
Moore, S. (2021). The design models we have are not the design models we need. Journal of Applied Instructional Design, 10(4). https://edtechbooks.org/jaid_10_4/the_design_models_we
Mothwa, L., Tapamo, J.-R., & Mapati, T. (2018). Conceptual model of the Smart Attendance Monitoring System using computer vision. 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), 229–234. https://doi.org/10.1109/SITIS.2018.00042
Nucci, L. (2006). Education for moral development. In Handbook of moral development (pp. 675–700). Psychology Press. https://www.taylorfrancis.com/chapters/edit/10.4324/9781410615336-36/education-moral-development-larry-nucci
Papanek, V. J. (1971). Design for the real world; human ecology and social change. Bantam Books.
Schön, D. A. (1984). Problems, frames and perspectives on designing. Design Studies, 5(3), 132–136. https://doi.org/10.1016/0142-694X(84)90002-4
Shilton, K. (2013). Values levers: Building ethics into design. Science, Technology & Human Values, 38(3), 374–397. https://doi.org/10.1177/0162243912436985
Shilton, K. (2018). Engaging values despite neutrality: Challenges and approaches to values reflection during the design of internet infrastructure. Science, Technology & Human Values, 43(2). https://doi.org/10.1177/0162243917714869
Simon, H. A. (1996). The sciences of the artificial. MIT Press.
Smith, K. M., & Boling, E. (2009). What do we make of design? Design as a concept in educational technology. Educational Technology, 49(4), 3–17.
Snyder, B. R. (1970). The hidden curriculum. MIT Press. https://philpapers.org/rec/SNYTHC
Steen, M. (2015). Upon opening the black box and finding it full: Exploring the ethics in design practices. Science, Technology & Human Values, 40(3), 389–420. https://doi.org/10.1177/0162243914547645
Stolterman, E. (2008). The nature of design practice and implications for interaction design research. International Journal of Design, 2(1), 55–65.
Svihla, V., Chen, Y., & Kang, S. “pil.” (2022). A funds of knowledge approach to developing engineering students’ design problem framing skills. Journal of Engineering Education. https://doi.org/10.1002/jee.20445
Verbeek, P.-P. (2006). Materializing morality: Design ethics and technological mediation. Science, Technology & Human Values, 31(3), 361–380. https://doi.org/10.1177/0162243905285847
Vickers, S. G. (1984). Judgment. In The Vickers papers (pp. 230–245). Harper & Row.
Volpi, C. (2020). Unveil awareness: A design education tool for revealing the hidden curriculum [Master’s thesis, Delft University of Technology]. https://repository.tudelft.nl/islandora/object/uuid:2bfc57e9-9bfb-4440-9cfe-daaba5b219c9
Warschauer, M., & Ames, M. (2010). Can one laptop per child save the world’s poor? Journal of International Affairs, 64(1), 33–51. http://www.jstor.org/stable/24385184
Willis, A.-M. (2006). Ontological designing. Design Philosophy Papers, 4(2), 69–92. https://doi.org/10.2752/144871306X13966268131514
Yeaman, A. R. J., Koetting, J. R., & Nichols, R. G. (1994). Critical theory, cultural analysis, and the ethics of educational technology as social responsibility. Educational Technology, 34(2), 5–13. https://www.jstor.org/stable/44428138
Zaletelj, J., & Košir, A. (2017). Predicting students’ attention in the classroom from Kinect facial and body features. EURASIP Journal on Image and Video Processing, 2017(1), 1–12. https://doi.org/10.1186/s13640-017-0228-8
Zeng, H., Shu, X., Wang, Y., Wang, Y., Zhang, L., Pong, T.-C., & Qu, H. (2021). EmotionCues: Emotion-oriented visual summarization of classroom videos. IEEE Transactions on Visualization and Computer Graphics, 27(7), 3168–3181. https://doi.org/10.1109/TVCG.2019.2963659
Footnotes
[1] https://www.class.com/
[2] For instance, see equity issues that emerged in relation to “camera on” policies during the pandemic that left some learners with limited ability to express their privacy preferences (Castelli & Sarvary, 2021).
[3] See Friedman and Kahn (2003) for a further discussion of human values, including how values become part of the fabric of designer interactions through embodied, exogenous, and interactional positions.
[4] Many of these beliefs were discussed and espoused by educators throughout the pandemic on the Facebook group “Pandemic Pedagogy,” which at the time of writing has over 32,000 members.
[5] Glaser’s original “road to hell” steps along with contemporary interpretations for digital product designers are available at https://dropbox.design/article/the-new-12-steps-on-the-road-to-product-design-hell.
[6] As a classic example in the context of educational technologies, consider the problematic legacy of the “One Laptop Per Child” initiative (Ames, 2019; Warschauer & Ames, 2010).
[7] See Gray, Chivukula, et al. (2021) for a description of deceptive roles that designers can take on when attempting to resolve tensions between user agency and design goals.