Deep Assumptions and Data Ethics in Educational Technology

DOI: 10.59668/270.12692
Keywords: Ethics, Educational Technology, Assumptions, Data
Deeper assumptions frequently shape the ways educational technology stakeholders collect and use data. This influence of assumptions on data decisions makes it critical that educational technology stakeholders engage with deeper assumptions as part of ethical considerations; indeed, they are key to ensuring that stakeholders engage with structural issues in education and educational technology rather than use ethical compliance as a superficial nod to questions of justice, harm, and power. In this chapter, I illustrate the relationship between deep assumptions and data ethics by considering assumptions related to four broad questions about the purpose of education, the purpose of educational technology, the determination of quality in educational (technology) research, and who has what say in these domains. Debates about data ethics are often better understood as debates about these deeper assumptions, which must be surfaced to consider data ethics in our field thoroughly.

Introduction

In the opening chapter of his book Language and Power, Fairclough (1989) argued that because language had become increasingly important worldwide, people were not paying it enough attention. In particular, Fairclough sought to draw the reader’s attention to two relationships: first, the “extent to which their language… rest[s] on common-sense assumptions” (p. 4), and second, the ways in which those assumptions might reflect undesirable social organizations and power imbalances. As a French teacher turned researcher of data-rich technologies, it is perhaps fitting that my purpose in this chapter is to echo Fairclough’s arguments about language in the context of data. That is, because data have become increasingly important in educational settings, educational technology stakeholders are not paying them enough attention. Such attention, I suggest, will reveal that the ways that we—and others—approach data reflect deeper assumptions about our work that often go unquestioned or unchallenged—but that often create or maintain unjust power relations between involved parties.

To illustrate this point, consider an exchange I once had with the help desk for the Canvas Learning Management System (LMS). I was frustrated with Canvas’s use of cookies—small chunks of data stored on computers to track internet users—to welcome first-time users of the LMS. Because companies often use cookies for undesirable online surveillance, I use a web browser that blocks most of them. While it was possible to use Canvas without consenting to this particular cookie, it created considerable annoyance—every time I opened Canvas in a new tab, I received a pop-up welcome message, despite the many hours I had logged in the LMS. I explained the situation in pleading tones to the Canvas help team only to receive a discouraging reply: “I totally get that… However, with Canvas being an educational software, it does have to be tracked.” I was struck by how this employee described this use of data not as defensible in and of itself but rather as a natural conclusion of how things work: because Canvas is educational software, it must engage in tracking. For this employee, it followed that users like me should consent to that tracking or be prepared to deal with disruptive consequences. This “common-sense assumption” that educational technology must engage in tracking (and is therefore unconditionally justified in doing so) is frighteningly far-reaching—indeed, the employee seemed to overlook that this particular use of cookies was not actually tracking educational performance outcomes. Some data must indeed be collected and analyzed for teaching and learning to happen, but if educational software is therefore given a pass for all forms of tracking, where are the limits for intrusion into personal and private lives?
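
Since this anecdote hinges on how cookies work, a minimal sketch may help make the mechanism concrete. The toy server below (standard-library Python; the cookie name, port, and messages are hypothetical illustrations, not Canvas’s actual implementation) distinguishes returning visitors from new ones solely by whether the browser sends a cookie back:

```python
# Toy illustration of a cookie round-trip. The cookie name and messages
# are hypothetical; this is not Canvas's actual implementation.
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer

class WelcomeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse whatever cookies the browser chose to send back.
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        returning = "welcome_seen" in cookies

        self.send_response(200)
        if not returning:
            # The browser has no record of a prior visit, so the server
            # asks it to store one and shows the first-time welcome.
            self.send_header("Set-Cookie", "welcome_seen=1; Path=/")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        message = "Welcome back." if returning else "Welcome, new user!"
        self.wfile.write(message.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), WelcomeHandler).serve_forever()
```

A browser that blocks or discards the cookie never completes this round-trip, so the server treats every request as a first visit, which is exactly why the welcome pop-up kept recurring. Nothing in the mechanism is specific to education; the same round-trip underlies the advertising surveillance that motivates cookie blocking in the first place.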

Throughout this chapter, I elaborate on my argument that the ethical and just collection and use of data in education partly depends on acknowledging, evaluating, and even challenging often-unspoken assumptions about education, educational technology, and research in both of these umbrella disciplines. After establishing the purpose and scope of this chapter, I illustrate how deep assumptions about education inform data ethics by considering four questions:

1. What is the purpose of education?
2. What is the purpose of educational technology?
3. What determines quality in educational (technology) research?
4. Who has what say in these domains?

In addressing these questions, my purpose is not to answer them myself but to demonstrate how different answers might change the ethical calculus involved in data decisions in our discipline. Nonetheless, before concluding the chapter, I provide one example of how these questions might inform educational technology stakeholders’ ethical decision-making.

Purpose and Scope

In this section, I clarify my intended purpose for this chapter. One genre of papers on data ethics in education is the checklist for ethical data use (e.g., Drachsler & Greller, 2016). However, it is important to emphasize that data-rich technologies raise difficult issues that defy simple approaches (boyd & Crawford, 2012); although a checklist may be genuinely useful, it is important that stakeholders not get locked into a simplistic, rote view of ethical compliance. For example, while compliance with legal and institutional frameworks is important, it does not represent the sum of one’s ethical responsibilities (Beardsley et al., 2019; Drachsler & Greller, 2016; Mandinach & Gummer, 2021a). Indeed, stakeholders must recognize ways in which ethical and just action may even stand in tension with those frameworks (Corrin et al., 2019; Fiesler et al., 2020). In this vein, other authors (e.g., Corrin et al., 2019; Hakimi et al., 2021) stop short of specific steps, instead articulating or synthesizing guiding principles that have informed or can inform individual, context-dependent decisions about ethics.

However, some authors argue that ethics may be the wrong focus when making these considerations. For example, Green (2021) suggests that an ethics perspective is incapable of adequately addressing social justice issues related to data-rich technologies and methodologies—and that it even runs the risk of “deploying the language of ethics to resist more structural reforms” (p. 250). boyd and Crawford (2012) complicate things further, suggesting that the mere acceptance of the big data phenomenon entails accepting new ethical perspectives, demanding a thorough evaluation of not just ethics within data-rich approaches but also the ethics of data-rich approaches. In response to these challenges, D’Ignazio and Klein (2020) suggest that a commitment to data justice—beyond individual decisions and particular technologies to interrogate deeper, structural issues—is more appropriate than a commitment to data ethics. Educational technology needs such interrogation; for example, in a feminist autoethnographic treatment of her experience as an Afro-Latinx woman, Romero-Hall (2022) describes several ways in which the field of educational technology has privileged White, male perspectives at the expense of others (see also, e.g., Donaldson, 2016). Even a deep commitment to ethical decision-making will not necessarily compensate for these structural influences.

Thus, even when I refer to “ethics” in this chapter, I intend to go deeper (and invite my reader to go deeper) than checklists or principles. Nevertheless, although some of my own opinions will surely be clear from my writing, I also stop short of offering a specific theory or framework of justice that should guide our collection and use of data in educational technology. Green (2021) argues that in diverse disciplines—such as our own—perfect consensus is less important than a willingness to surface debates between perspectives that often remain implicit. Therefore, my purpose in this chapter is to illustrate how educational technology stakeholders’ unspoken assumptions (and the perspectives and structures they are informed by and inform) guide specific decisions about data. My hope is that readers will respond by not only identifying the ways that their assumptions inform their approach to educational data but also questioning those assumptions in a way that invites further ethical reflection.

Given this purpose, the scope of this chapter is necessarily broad in some ways and necessarily narrow in others. In the first respect, I understand the word data broadly—not just as the digital or so-called “big” data that have brought additional prestige to the term (see boyd & Crawford, 2012), but rather as “any type of information that is systematically collected, organized, and analyzed” (D’Ignazio & Klein, 2020, p. 14). Likewise, I follow Molenda (2008) in using the term educational technology to refer broadly to many disciplines that are interested in how learning and teaching intersect with technology (compare with Romero-Hall’s [2021] similar use of learning design and technology). Furthermore, I acknowledge that stakeholders in educational questions of ethics and justice relating to data and technology are not limited to even this broad collection of disciplines. Indeed, to truly consider the deeper, structural issues that I privilege in this chapter, it is necessary to consider questions more traditionally associated with other disciplines related to education.

In the second respect, the breadth and importance of this phenomenon make it impossible to address every possible assumption held by any possible stakeholder—or even to address a single assumption held by a single stakeholder at the level of detail it deserves. I have chosen four broad categories of assumptions that stood out to me as I wrote this chapter, but I am confident many other such categories merit our attention. I begin my description of each category with an example of unethical collection and use of data before describing the underlying assumptions that stakeholders used to justify it, as well as other assumptions that may similarly be used to justify the collection and use of data. After addressing each category, I provide an extended example of how identifying and questioning assumptions associated with these categories can inform additional ethical reflection related to educational data. Although an extended example still cannot address all of the ways my central thesis may apply to practice, it will serve as a model for applying these considerations.

What is the Purpose of Education?

Between August 2014 and June 2016, an officer of the Brigham Young University Police Department regularly accessed data from Utah law enforcement agencies to conduct surveillance on students of the private, religious university (Miller, 2019). This access—which nearly led to the Utah Department of Public Safety decertifying BYU Police (Miller, 2021)—was “part of a de facto system, with university employees in several school departments asking him for information and welcoming his reports” (Miller & Alberty, 2021, para. 9). In one case, the data retrieved by BYU Police were used by the associate dean of students (since promoted to dean of students) to ask a woman detailed questions about a sexual assault that she had reported to city—not university—police. The associate dean questioned whether the woman was responsible for the assault and eventually told her she “wasn’t welcome to sign up for classes again” (Miller & Alberty, 2021, para. 42).

Understanding assumptions about the purpose of education at BYU lends insight into how such an invasive and degrading use of data could be seen by educators as justified. Like other religious universities, BYU has been keenly aware of tensions between its academic goals and religious convictions throughout its history (Simpson, 2016). In the 1960s, the then-president of the university established a strict student code of conduct (known as the Honor Code) intended to ensure that students met not just academic standards but moral ones, suggesting that the latter were more important for this educational institution (Waterman & Kagel, 1998). Indeed, the previously described use of police data to press a BYU student about her sexual assault was part of Honor Code-related concerns (Miller & Alberty, 2021); if—as the associate dean appears to have believed—the sexual contact were, in fact, consensual, university rules would allow the student to be disciplined independent of academic performance. Thus, this aggressive surveillance of students was influenced by (though not necessarily an inevitable result of) a particular understanding of the purpose of education at this institution.

Less extreme examples also demonstrate the ways in which assumptions about the purpose of education drive the collection and use of data in educational contexts. Speaking broadly, our expectations about what schools, teachers, students, and others should accomplish necessarily inform what data we collect and how we use them. This is largely obvious and often justified; however, because these expectations do not provide any incentives to limit the scope or intensity of such collection and use, even good intentions can inspire ultimately unethical data collection and analysis. Thus, Crooks (2019) describes the definition and measurement of educational outcomes as a driving factor behind “the proliferation of surveillance” (p. 486) in schools. Even when assumptions about the purpose of education are sound, they must be weighed against other ethical considerations instead of used as the sole justification for decisions about data.

However, it is rare that our assumptions about the purpose of education do not merit further scrutiny. Consider, for example, something as seemingly benign as the content areas emphasized in formal curricula, which obviously affect how educational data are collected and how educational technologies are developed and employed. For example, world language education was once considered important enough that standardized testing in New York in the early 20th century included assessments of students’ understanding of French, Spanish, and German; in turn, the importance of these data collection mechanisms led to inquiries about whether machines could be developed to score the assessments automatically (Watters, 2021). Likewise, transactions on the modern educational marketplace platform TeachersPayTeachers are dominated by materials related to English Language Arts and Math, reflecting the importance of these subjects in the U.S. Common Core State Standards and related assessments (Shelton et al., 2021). In contrast, because the baccalauréat (a French assessment of secondary students) emphasizes philosophy, a tweet sharing philosophy notes to a hashtag related to the 2018 baccalauréat exams received over 23,000 retweets (Greenhalgh, Nnagboro, et al., 2021).

The need for further scrutiny in these examples does not arise because any of these content areas is unworthy of attention. Rather, even deeper assumptions about the purpose of education may be embedded in the emphasis on particular content areas. For example, while mathematics and literacy are undeniably important, Smith and Greenhalgh (2017) contrasted a Deweyan focus on the democratic aims of education with the Common Core State Standards’ implicit suggestion that “the primary purpose of education is utilitarian: Students should master the standards so that they are positioned to achieve greater economic success in knowledge-based work” (p. 115). The tension between these two visions of the purpose of education is not new. In the throes of post-Sputnik concerns about education, Dewey’s contemporaries criticized his influence on schools, which they suggested were ill-prepared to help the United States compete scientifically with the Soviet Union—another assumption about the purpose of education (Watters, 2021; see also Nichols, 2021).

This is particularly important because an emphasis on certain purposes of education necessarily de-emphasizes others, shaping data collection and use accordingly. Bradbury (2019) describes how an increased emphasis on mathematics and literacy in early childhood education in England has drastically increased data collection about young children, creating tensions between teachers’ obligation to collect data and their ability to build relationships or provide a more holistic education. In contrast, schools across the Channel in France do not collect data related to race and ethnicity because such data stand in tension with French ideals of a color-blind Republic (Raveaud, 2008; Simon, 2015). The deliberate decision not to collect these data implicitly de-emphasizes the importance of racial or ethnic educational equity; Cuban (2003) provides a brief U.S. example of this danger, and Watters (2021) makes a similar observation that narratives about educational progress (or the lack thereof) tend to sidestep questions of race. However, the decision to collect data about such disparities is not itself the solution to them: D’Ignazio and Klein (2020) suggest that despite the potential of data to address issues of social justice, even well-intentioned collection of such data can do harm in propping up deficit narratives that “reduce a group or culture to its ‘problems,’ rather than portraying it with the strengths, creativity, and agency that people from those cultures possess” (p. 58). Indeed, Au (2016) applies this criticism to standardized testing regimes in the United States, arguing that although they are cloaked in superficially anti-racist arguments, they actually exacerbate structural racism by assuming their own objectivity and thereby providing an empirical basis for the argument that “low test scores and the educational failure of working class, children of color is due to their own deficiencies” (p. 46).

One important concern about modern educational technology platforms is that their design and use of data may stand in tension with long-standing Western values of public education, “such as Bildung—the ideal to teach children to become not just skilled workers but knowledgeable citizens—and equality” (van Dijck et al., 2018, p. 117). As important as this concern is, it is based on the understanding that these values are indeed held within the broader educational system. The previous paragraphs have demonstrated that other assumptions about the purposes of education are alive and well in the U.S. and other contexts, and this only exacerbates van Dijck and colleagues’ concern that assumptions about data and technology determine educational values rather than the other way around. For example, Bradbury (2019) speculates that mathematics and literacy have received so much attention in England partly because it is relatively easy to collect data about them—that is, the perceived necessity of data is driving the purposes of education rather than the other way around. Likewise, Corrin and colleagues (2019) note that more adaptive learning platforms are developed for the STEM disciplines (partly because of their relatively well-structured nature) than for other content areas. Therefore, a decision to value adaptive and personalized learning may lead to preferring certain disciplines over others, for knee-jerk practical reasons rather than thoroughly considered philosophical ones.

What is the Purpose of Educational Technology?

In April 2019, an article in Kentucky’s Lexington Herald-Leader described how social media monitoring efforts by Fayette County Public Schools (FCPS) helped the district intervene with two students needing help (Spears, 2019). While the stories celebrated in the article are undeniably positive developments, they also raise questions about the scope and effectiveness of this surveillance. These two interventions were the result of a third-party company’s review of over 60,000 social media posts, which flagged 60 posts for further scrutiny by “a team that included mental health and law enforcement staff” (Spears, 2019, para. 1). In short, only one of every 30,000 posts subjected to surveillance warranted intervention. This situation raises questions about the costs to privacy imposed on other people—although the company exclusively surveilled public posts, many internet users “operate in public spaces but maintain strong perceptions or expectations of privacy” (Markham & Buchanan, 2012, p. 6; see also Fiesler & Proferes, 2018; Gilbert et al., 2021). Furthermore, the article mentions (almost as an aside) that one of the two people who received help was no longer an FCPS student but was attending college outside of Kentucky; how far did the scope of this surveillance reach in the name of helping local students?

A particular understanding of the purposes of educational technology drives this surveillance. As reported in the article, the monitoring efforts were part of a broader response to safety concerns over the previous academic year. However, this does not dispel concerns voiced by the Electronic Frontier Foundation (among others) that “a growing number of schools across the country [are] conducting mass privacy violations of kids in the name of ‘safety’” (Wang & Gebhart, 2020, para. 1). Indeed, the social media monitoring contracted by FCPS is not dissimilar to efforts by other companies to use social media data to surveil Black Lives Matter protests in cooperation with law enforcement agencies (e.g., Biddle, 2020).

Although this example compellingly demonstrates how assumptions about the purpose of educational technology shape data collection and use, it may also invite an objection that should be addressed before discussing this subject further. Watters (2018) noted that proponents of educational technology are unlikely to see safety-oriented technologies like metal detectors, school shooting simulators, and social media surveillance software as falling under this category. Instead, educational technology's purpose is understood to be to advance (or even revolutionize) teaching and learning. Technologies like social media surveillance software are dismissed as irrelevant to our field because they do not fit neatly into this narrative. Nevertheless, persistent concerns about whether educational technologies achieve this purpose call into question the appropriateness of this assumption; research over the years has repeatedly questioned whether advancements in technology have fundamentally changed the ways teaching and learning happen (e.g., Cuban, 2003; Crooks, 2019). Indeed, the same logic also applies to less-obvious educational technologies: The 2022 school shooting in Uvalde, Texas, has invited scrutiny about whether school safety technologies work in the first place, raising the possibility that they merely serve as expensive “security theater” (e.g., Faife, 2022; Gordon & Rose, 2022; Rose, 2022). If stakeholders’ assumptions about educational technology’s purpose—and success—are invalid, this obviously raises questions about whether the collection and analysis of data about students and other stakeholders are justified.

Yet, even when these assumptions are valid, they risk validating the use of educational technologies without considering data ethics and justice. To illustrate this point, consider Cuban’s (1986) examination of film, radio, and television as educational technologies. Each was held to have considerable promise and was introduced with fanfare, only to largely go unused (see Molenda, 2008, for a similar discussion). Cuban (2003) later suggested that personal computers followed a similar pattern. However, contemporary educational technologies differ from their predecessors in at least two respects. First, although data collection has long been a feature of educational technologies (see Watters, 2021), contemporary technologies allow for collecting more kinds of data at greater volumes (Corrin et al., 2019; Mandinach & Gummer, 2021b). Second, if previous generations of educational technology ended up going largely unused, contemporary educational technologies are pervasive. For example, the COVID-19 pandemic has required the use of educational technology at a scale never before seen; this not only increases the importance of validating claims and assumptions about technologies (Reeves & Lin, 2020) but also exposes students to more surveillance than ever before (Hankerson et al., 2021). 

The combination of these two differences suggests that data collection and use in educational contexts exist at a greater scale than ever before. Put simply, contemporary educational technologies may allow for the continuing collection of data about students rather than targeted and constrained efforts (Beardsley et al., 2019). Because students and other stakeholders’ “sharing of personal data carries with it risks” (Beardsley et al., 2019, p. 1019), this scale of sharing—which is more often compelled than volunteered—increases the scale of associated risks (e.g., the virtual impossibility of anonymizing data; Drachsler & Greller, 2016). This is especially so given that advancements in data collection often outpace the development of corresponding legal and ethical frameworks (Corrin et al., 2019). Furthermore, while it is true that some of these “increasing risks” are related to “inadvertent and innocent misuses of data” (Mandinach & Gummer, 2021b, p. viii), it would be unwise to ignore risks associated with bad (or at least self-interested) actors. Consider the example of a graduate program that encourages or requires its students to engage with each other and their instructors on Twitter. While this could serve genuinely important learning purposes (Greenhalgh et al., 2016), there is no denying that such a requirement supports social media platforms’ use of digital labor, “in which value is created from the unpaid action of online audiences” (Selwyn, 2019, p. 53; see also D’Ignazio & Klein, 2020; Drachsler & Greller, 2016; Krutka et al., 2019).

Thus, the risk in adopting contemporary technologies in the hope of improving education is no longer just that today’s optimism may one day look “just as silly to people 50 years from now” as past hyperbolic promises look to us today (Mishra et al., 2009, p. 49). Rather, even if today’s optimism is warranted, it may come at an ethical cost that is not. This realization must lead us to interrogate the purposes we assign to educational technology and weigh them against the costs imposed by contemporary technologies. Adopting technology may indeed lead to improvements in teaching and learning; however, we must also consider the possibilities that—and perils if—stakeholders merely “use the rhetoric of technological progress to establish legitimacy” (Cuban, 2003, p. 159). Student accountability is important, but we must also consider how learning management systems (LMSs) allow us to monitor students in invasive ways that would be unimaginable in a face-to-face context. Building on an example from Eaton (2021), it would be absurd and unacceptable for a university instructor to sit in their student’s dorm room, looking over their shoulder and timing how long they spend reading each page in their textbook. Yet, this is a commonly included and widely valued feature of LMSs. Student safety must be a priority, but are there initiatives for keeping guns out of classrooms other than social media surveillance (which is often led by stakeholders other than education professionals)? Whatever the technology and whatever its purpose, we must consider “ethical and privacy values on the same level as functional requirements” (Drachsler & Greller, 2016, p. 8).

What Determines Quality in Educational (Technology) Research?

During the Fall 2021 semester, IT and facilities units at George Washington (GW) University began researching how community members used campus buildings. While this could conceivably be measured in many ways, GW employees chose to use “locational data from… WiFi access points across GW campuses” (Wrighton, 2022, para. 2). Because many students’ devices were registered with the university to access WiFi, employees saw an opportunity for examining building use data through various demographic lenses; this locational data was therefore combined with “additional de-identified student data” (para. 2). In February 2022, the new president of the university (who had only assumed that role a month before) apologized to the university community for the fact that they had not been informed of the research ahead of time; in doing so, he acknowledged that the technical infrastructure used for this project could potentially have tracked members of the campus community on an individual basis (Beals, 2022).
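
The president’s acknowledgment points to a well-documented technical reality: “de-identified” records can often be re-identified by joining them with another dataset that shares quasi-identifiers. The toy sketch below (written in Python with entirely invented records; it does not reflect actual GW data or systems) shows the basic shape of such a linkage:

```python
# Toy linkage attack: joining "de-identified" building-use rows to a
# device-registration table on shared quasi-identifiers. All records
# are invented for illustration; none of this reflects actual GW data.

# De-identified rows: no names, just demographics plus an access point.
building_use = [
    {"year": "senior", "major": "physics", "ap": "library-3f", "hour": 23},
    {"year": "freshman", "major": "history", "ap": "dorm-b", "hour": 9},
]

# A second dataset: devices registered to named people for WiFi access.
registrations = [
    {"name": "Student A", "year": "senior", "major": "physics",
     "usual_ap": "library-3f"},
    {"name": "Student B", "year": "freshman", "major": "history",
     "usual_ap": "dorm-b"},
]

for row in building_use:
    # If the quasi-identifiers single out one registered person, the
    # "anonymous" row is, in practice, that person.
    matches = [r for r in registrations
               if (r["year"], r["major"], r["usual_ap"])
               == (row["year"], row["major"], row["ap"])]
    if len(matches) == 1:
        print(f'{matches[0]["name"]} was near {row["ap"]} at {row["hour"]}:00')
```

In a real campus dataset, a student with a rare combination of attributes (the only senior in a small major, say) can be singled out just this easily, which is why de-identification on its own is a weak privacy guarantee.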

This threat to individuals’ privacy results from overly narrow assumptions about what determines quality in research. Research in education contexts can be understood as “a form of humanistic inquiry grounded in argument from evidence” (Penuel & Frank, 2016, p. 16), so it is clear that some good data are necessary for quality research. Even the quality of a “conceptual” chapter like this one depends on its ability to build on and correspond with empirical observations. Nonetheless, a legitimate empirical commitment can sometimes be narrowed into a more problematic assumption that all data are necessarily good and that the use of empirical data is the sole determining measure of research quality. It is undeniable that the research project described above needed some data, and the data were indeed both readily available and well-suited to answer the question; yet, as Heath (2021) writes, the “mere availability of data does not confer ethical collection of data” (p. 334).

It is important to acknowledge that it is normal for researchers to consider new forms of data as part of their commitment to empiricism and quality research. Throughout the history of research, data have typically required considerable effort to collect (boyd & Crawford, 2012). Therefore, we should not be surprised that educational technology scholars have been eager to explore new data sources (Rosenberg et al., 2021). Indeed, technologies such as the internet (Kimmons & Veletsianos, 2018), social media platforms (Greenhalgh et al., 2021), and learning management systems “generate user data untiringly” (Romero-Hall et al., 2021, p. 216), drastically simplifying data collection and leading to the application of new methodologies designed to take advantage of large amounts of data (e.g., Baker & Siemens, 2014; Jin, 2021; Rosenberg et al., 2021).

These new methods and methodologies build admirably on our field’s commitment to empiricism—however, they risk adopting other assumptions that may misshape our understanding of quality research. For example, while it is true that a major obstacle to properly using these methods is an absence of corresponding technical training (Kimmons & Veletsianos, 2018), it is critical to note that lack of opportunity does not affect all populations equally. Indeed, D’Ignazio and Klein (2020) argue that expertise in data science (among other fields) is often formally defined in terms of credentials, affiliations, or technical training that men are more likely to have access to—despite the fact that self-taught women helped lay the foundation for the field to begin with. Thus, the use of these methodologies in education contexts will only be inclusive if training and membership in these communities are also inclusive (Rosenberg et al., 2021). Given the dominance of masculine perspectives in educational technology independent of these methods (e.g., Romero-Hall, 2022), these necessary course corrections may require considerable effort. However, failing to make these corrections risks the perpetuation of data-rich projects in education that are “characterized by masculinist, totalizing fantasies of world domination as enacted through data capture and analysis” (D’Ignazio & Klein, 2020, p. 151).

Other troubling assumptions about quality research stem from the association of the eugenics movement with quantitative research and its application in education. Several early pioneers of statistical analysis were eugenicists (Saltz & Stanton, 2018), and it is impossible to separate widely-accepted ideas such as correlation (Shaffer, 2017) and data cleaning (D’Ignazio & Klein, 2020) from their development in and for projects underpinned by racist and social Darwinist assumptions. This troubled history does not necessarily invalidate data-rich or any other quantitative research. However, it does underline the importance of critically reflecting on associated assumptions to determine where they might stand in tension with important ethical commitments. This is particularly true in the context of educational research, where eugenicist ideas played a role in the development of educational psychology constructs such as IQ and in assessment instruments such as standardized tests, which some hoped would compellingly shore up notions of White intellectual supremacy (Au, 2016; Kendi, 2017).

It must also not be assumed that data-rich research is necessarily quality research. While this might seem obvious, such an assumption is implicit in much of the discourse about these methods. Indeed, D’Ignazio and Klein (2020) argue that the 17th-century coining of the term data was itself a rhetorical flourish meant to convey trustworthiness: “Identifying information as data… converted otherwise debatable information into the solid basis for subsequent claims” (p. 10). This rhetorical force arguably extends to the term data science and its application in educational contexts. If the constituent parts of this term are taken literally, it is difficult (if not impossible) to identify a science that does not employ data (D’Ignazio & Klein, 2020; Shaffer, 2017)—why, then, do these methodologies deserve this label and its glowing reputation? One strong candidate for setting apart data science and associated methodologies is their ability to consider large data sets; however, Saltz and Stanton (2018) problematize the novelty of this distinction as well, returning us to the original concern. Furthermore, many also assume big data to be inherently high-quality—to the extent that boyd and Crawford (2012) argue that the phenomenon is defined in part by a mythology of “truth, objectivity, and accuracy” (p. 663). While it is true that large datasets can often be helpful, it is equally true that some “projects ignore context, fetishize size, and inflate their technical and scientific capabilities” (D’Ignazio & Klein, 2020, p. 151).

The key to deflating erroneous assumptions about big data—and holding appropriate assumptions about the importance of empiricism in check—is to emphasize that data are inherently non-objective. While some stakeholders may tacitly acknowledge this, there is reason to believe that the objectivity of data is the prevailing assumption in our discipline. Consider, for example, the authority that educational technology stakeholders lend to LMS data: Corrin and colleagues (2019) note that students may instinctively trust—rather than interrogate—learning analytics that have institutional approval, and the Electronic Frontier Foundation points to teachers and institutions using data for purposes that LMS developers have not intended or endorsed (Budington, 2021). Indeed, quantitative techniques and quantified data are particularly likely to be seen as (more) objective and, therefore, of higher quality, despite many debates among educational technology stakeholders on this subject over the years (e.g., Boekweg et al., 2021; Romero-Hall, 2021). In contrast, Shaffer (2017) suggests that quantitative modeling may require more scrutiny than qualitative research—not because it is inherently inferior but because it is more often the basis for decisions. Such scrutiny is based on the understanding that despite countless assertions to the contrary, “data cannot speak for themselves, so they must be made to speak” (Crooks, 2019, p. 485; see also boyd & Crawford, 2012). Campos and colleagues (2021) describe how teachers’ making sense of LMS data is influenced by individuals’ emotions, analyses, and intentions—not to mention collective, organizational, and institutional factors. On a similar note, Crooks (2019) describes how school administrators made a sudden shift in their interpretation of standardized testing data in response to labor disputes at the school: “the relevant data did not change, rather what these data were allowed to represent changed and did so rather abruptly” (p. 492).

Beyond a general and foundational non-objectivity, we must also consider the constraints and limitations of the technologies we use to provide these novel data. The design and governance of social media platforms influence which platforms researchers collect data from (Tufekci, 2014) and what phenomena they study on those platforms (boyd & Crawford, 2012). Some kinds of data are easier to collect through LMSs than others (Corrin et al., 2019), and Jin (2021) raises the possibility that available LMS data may not perfectly align with the theoretical constructs researchers are investigating. Furthermore, digital data “dynamically order and reorder the world” (Crooks, 2019, p. 495) rather than merely capture reality. Facebook (or Twitter) data neatly quantify likability for internal—and scholarly—consideration, but van Dijck (2013) problematizes the validity of those measures, drawing particular attention to how corporate values shape platforms’ understanding of these constructs. Learning management systems offer massive amounts of data about student activity, but by privileging quantitative and categorical data and presenting them in carefully arranged and neatly structured formats, they may “undermine and erase” the messy complexity that defines “humans and learning” (Eaton, 2021, para. 9). These limitations may not challenge our assumptions about the importance of data for quality research, but they should invite consideration about what data we assume to be of sufficient quality.

Who Has What Say in These Domains?

For over 20 years, the Pasco County Sheriff’s Office in Florida accessed data collected by county schools and combined the data with records from other public agencies to produce “a secret list of kids it thinks could ‘fall into a life of crime’” (Bedi & McGrory, 2020, para. 1). This collection and use of data rested on several dubious assumptions, including that one purpose of educational data is to enable predictive policing and that school grades are an objective measure of intelligence. However, I include the story in this section to draw attention to two other controversial aspects which led to the dismantling of the program six months later (Associated Press, 2021): first, the sharing of educational data with a law enforcement agency, a move which experts described as “highly unusual” (Bedi & McGrory, 2020, para. 12); second, the fact that students and other stakeholders had no say in—because they were not informed of—the development and use of this list.

This kind of data misuse is based on assumptions about who has what say in education, educational technology, and research in these contexts. Throughout this chapter, I have referred generally to “educational technology stakeholders” without specifically considering who these stakeholders are or should be. The purpose of this section is to underline the importance of these questions, although (as with previous sections) I stop short of trying to answer them. While these questions could be considered in many ways, I focus particularly on how they relate to our use of data. Comparing digital data to oil has become somewhat of a cliché in the popular discourse because both have had a revolutionary impact on the world. However, D’Ignazio and Klein (2020) note that this metaphor draws (perhaps unintended) attention to how the changes brought about by data are not always for the better. Not only are the “power and profit” associated with data distributed unevenly (i.e., with data barons succeeding oil barons), but the metaphor also “helps highlight the exploitative dimensions of extracting data from their source—people—as well as their ecological cost” (p. 45). Different stakeholders may have conflicting, equally legitimate perspectives (and corresponding ethical interpretations; Corrin et al., 2019), so care must be taken to ensure the just treatment of all stakeholders.

Indeed, Slade and Prinsloo (2013) argue that the ethical application of learning analytics depends on benefiting all parties. Learning analytics is usually (perhaps even always) deployed under the assumption that all parties will benefit, but this assumption merits careful scrutiny. For example, one application of learning analytics allows instructors and institutions to intervene when a model predicts that a student may be about to drop a course or leave a university (Corrin et al., 2019). While intended to benefit students, U.S. institutions of higher education may also have self-serving reasons for wanting to prevent attrition, including retaining tuition dollars and improving performance metrics—are there cases where institutions’ priorities stand in tension with students’? And if so, whose priorities do learning analytics serve? Corrin and colleagues (2019) also draw attention to ways other stakeholders might benefit from student data that raise ethical tensions: Professors may use the data to advance their research careers, and educational technology companies may use it to improve their products. This latter point is particularly important given that products “offered by commercial vendors obviously come at a cost” even though their effectiveness has not yet been proven (p. 16). Going further, Eaton (2021) asks why learning management systems collect fine-grained student data (such as time spent taking a quiz) for instructors and institutions but not fine-grained instructor and institution data (such as time spent grading a quiz) for students; there are no technical obstacles to sharing the data both ways, revealing the role of underlying assumptions about the relative importance of various parties. Indeed, Doyle (2021)—writing from a student perspective—notes that she and her peers do not always have a choice to resist data collection they object to on privacy grounds.
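
To see how quickly institutional priorities get baked into such systems, consider a minimal sketch of the kind of attrition model described above (hypothetical features and fabricated training rows; this describes no particular vendor’s product):

```python
# Hypothetical sketch of an attrition-prediction model of the kind
# discussed above. Features and data are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [logins per week, share of assignments submitted,
#            days since last login]
X_train = [
    [5, 0.9, 1],
    [1, 0.3, 14],
    [4, 0.8, 2],
    [0, 0.1, 30],
]
y_train = [0, 1, 0, 1]  # 1 = the student eventually withdrew

model = LogisticRegression().fit(X_train, y_train)

# Score a current student and flag them if risk exceeds a threshold.
# The threshold is a policy choice, not a statistical fact: set it low
# and the institution contacts many students (serving retention metrics
# and tuition revenue); set it high and it contacts only the few most
# at risk. The model itself is silent on whose priorities it serves.
risk = model.predict_proba([[2, 0.5, 7]])[0][1]
if risk > 0.5:
    print(f"Flag for outreach (predicted withdrawal risk: {risk:.0%})")
```

Every element of this sketch, from the choice of features to the cutoff for outreach, encodes someone’s answer to the question of whose interests the system serves; none of those answers is dictated by the data themselves.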

On a similar note, it is important to understand the way that the act of data collection shifts agency from some stakeholders to others. High-level stakeholders have always used data to shape educational policy (Nichols, 2021), and digital data and associated tools are playing a growing role in shaping how teaching and learning happen (Williamson, 2016a, 2016b). To a certain extent, this is necessary and good, but data collection can also be motivated by an implicit distrust of teachers and a corresponding shift of agency and power to other stakeholders. This was true of the push for curriculum standards and corresponding testing in the U.S. in the early 20th century (Watters, 2021) and has continued through the push for so-called accountability in U.S. federal policy, which Nichols (2021) describes as “a specific mandate for how achievement data should be used [that] has had deleterious effects on teacher practices and student outcomes” (p. 82). “Educators are literally drowning in data” (Mandinach & Gummer, 2021b, p. viii), and in some cases, their professionalism is defined in terms of their ability to produce data so that others may evaluate outcomes (Bradbury, 2019; see also Eaton, 2021) rather than their ability to evaluate outcomes on their own.

We must also consider who has what say in educational (technology) research. These considerations can become highly complex in internet and social media research (e.g., Greenhalgh, Koehler, et al., 2021; Kimmons & Veletsianos, 2018). For example, the public nature of these data means that research of this type in educational technology and other disciplines is often not subject to ethical review, creating obvious opportunities for misuse. However, there is little consensus among professionals engaged in ethical review about what that process should look like for this kind of research (Vitak et al., 2017), and failure to understand “the distinctive characteristics of internet research” (franzke et al., 2019, p. 13) may lead to overly conservative approaches to ethical review. Likewise, informed consent is typically not required when research data is public. Because participants have expressed general discomfort with the possibility of researchers’ collecting and analyzing their social media data (Fiesler & Proferes, 2018; Gilbert et al., 2021), educational technology researchers should consider whether and how it would be appropriate to obtain participants’ consent (Proferes and Walker [2020] discuss these considerations at length).

Nevertheless, there may be cases where it would be more appropriate not to obtain consent. For example, teachers sympathetic to the far right (see Greenhalgh et al., 2021) may be unlikely to permit researchers (who are often perceived as left-wing) to study their public social media posts. The role of private social media companies must also be considered here; these companies are under no obligation to share their data with researchers (boyd & Crawford, 2012) and may use Terms of Service agreements and other policies to restrict researchers from collecting data from their platforms. Although researchers should not violate these policies willy-nilly, there may be cases where ethical research requires their violation (Fiesler et al., 2020); for example, if an influential online educational marketplace forbade automated data collection, its very influence might nonetheless justify such a collection in the name of scholarly scrutiny (e.g., Aguilar et al., 2022; Shelton et al., 2021).

Of course, ensuring that the appropriate stakeholders have a say in how data are used in educational contexts depends on their awareness of how data—and associated technologies—are being used. Traditional, perfunctory approaches to obtaining consent for data collection are often insufficient, especially when people are not fully aware of the risks associated with that consent (Beardsley et al., 2019, p. 1031). Modern data platforms are often highly complex, making it difficult for users to understand what that collection and use look like (Drachsler & Greller, 2016; Proferes, 2017). Thus, Corrin and colleagues (2019) emphasize that stakeholders cannot truly consent to the collection and use of data unless those leading the collection are “open and transparent” about how they do so (p. 10).

Questioning Assumptions and Ethical Reflection: An Extended Example

In this final section, I provide an extended example of how identifying and questioning assumptions associated with the categories above can inform additional ethical reflection when making decisions related to educational data. This example is necessarily narrow in scope; as I have previously argued, it is impossible to address every possible implication of all possible assumptions held by any possible stakeholder—or even to address in appropriate detail a single implication of a single assumption held by a single stakeholder. Furthermore, I have deliberately decided to focus this entire section on a single hypothetical decision by a single hypothetical stakeholder; while this allows me to demonstrate how a single decision may be influenced differently by different assumptions, it also further limits the scope of this example.

More specifically, I consider a hypothetical scenario in which an American high school French teacher is considering adopting the ClassDojo app in their classroom. This app has many features, but this teacher is specifically considering its use for behavior management. They are relatively new at their job and are facing obstacles related to disruptive classroom behavior, so they are interested in the app’s ability to measure classroom behavior by awarding points to and deducting points from students. This teacher knows that ClassDojo has long been controversial—especially regarding data privacy (e.g., Singer, 2014; Williamson, 2017)—and understands that this decision has ethical dimensions. However, while they take for granted that there are ethical costs to collecting data on their students through ClassDojo, they are open to the possibility that the value of the data the app collects could outweigh the costs of privacy violations. In the following sections, I revisit each category of assumptions described above to demonstrate how interrogating these assumptions might affect this teacher’s ethical reflection.

What is the Purpose of Education?

In reflecting on whether or not to use ClassDojo, this French teacher asks how important behavior management is among all their professional responsibilities. They are genuinely frustrated by the disruptive behavior in their classroom, and ClassDojo offers a potential solution to this problem. However, this teacher believes that an important purpose of their job is to prepare their high school students to become adult citizens in a democratic society, and they desperately hope that adults’ behavior is based on prosocial commitment rather than a gamified point count. The ethical cost of ClassDojo data collection seems higher when the app’s design stands in tension with this professional commitment. In contrast, however, this teacher is also committed to establishing an immersion classroom where they and their students only speak French for long periods. They know from their experience as a French student that their students will struggle with this, and they have fond memories of classes they took where students tried to go as long as possible without getting “strikes” for speaking English. ClassDojo might support this particular purpose of the French classroom enough to outweigh ethical concerns.

What is the Purpose of Educational Technology?

In continuing their consideration, this teacher also asks what the role of educational technology in their classroom is. Like many teachers-in-training, they learned that educational technology is only worth adopting when it distinctly enhances teaching or improves learning. Thinking back to the “strike system” in some of the immersive French classes they took, they feel confident that the system helped them break the habit of resorting to English instead of pushing the limits of their French. The French teacher considers that ClassDojo might be useful for the same purpose. After all, if the purpose of educational technology is to improve learning, and if ClassDojo could improve learning, that might be enough to dismiss ethical concerns about the app’s data collection. However, it also occurs to them that the teachers and professors who issued strikes never used an app, instead keeping tallies in a notebook or on a whiteboard. This changes the calculation: If a notebook or whiteboard improves learning in the same way (the assumed measure of success of any educational technology) but without the cost to student privacy, they concede that it must be the better option from an ethical point of view.

What Determines Quality in Educational (Technology) Research?

This teacher then continues their reflection by asking whether the data provided by ClassDojo is the kind of data they seek. Although their assumptions about the purpose of their teaching stand in tension with using ClassDojo as a behavior management tool, this—understandably!—has not entirely dispelled their frustration about their students’ disruptive behavior. Quantifying students’ behavior and communicating those quantifications to parents is an attractive possibility. However, to do so involves figuring out which behaviors merit the awarding of a point and which merit the deducting of a point—and this proves harder than expected for the French teacher. They are unsure they can determine which behaviors are equal in point value and not confident that they would be perfectly consistent across students (including across races, genders, and other demographic categories) in awarding and deducting those points. A point value seems like a simple, objective measure of behavior, but some students bother this teacher more than others. When pressed, they can’t defend their initial assumption that ClassDojo points would be a quality, consistent measure of behavior.
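
One way to see why this assumption collapses is simply to write the point scheme down. The sketch below (a hypothetical scheme invented for this scenario, not ClassDojo’s actual data model) makes plain that every weight is a judgment call rather than a measurement:

```python
# Hypothetical point scheme for the scenario above; this is not
# ClassDojo's actual data model. Every weight is a judgment call.
BEHAVIOR_POINTS = {
    "spoke French during immersion": +1,
    "helped a classmate": +1,
    "spoke English during immersion": -1,  # a "strike"
    "disrupted class": -2,  # why exactly twice as bad as a "strike"?
}

def daily_score(observed_behaviors):
    """Reduce a student's observed behaviors to a single number."""
    return sum(BEHAVIOR_POINTS.get(b, 0) for b in observed_behaviors)

# Two students behaving identically earn the same score only if the
# teacher notices and records their behavior identically; the log of
# observations, not the behavior itself, is what gets quantified.
print(daily_score(["spoke French during immersion", "disrupted class"]))  # -1
```

What looks like an objective number is thus at least two layers of subjectivity deep: the weights themselves, and the uneven attention that produces the event log they are applied to.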

Who Has What Say in These Domains?

The French teacher is making this decision about ClassDojo independently, but they must still navigate assumptions about who gets what say in this decision. For example, this teacher’s concerns about the app (on data privacy grounds) imply a resistance to the ClassDojo company’s assumption that it has a right to collect—and presumably, analyze—data about students in exchange for providing services to classrooms. The teacher may also have to consider whether their advanced students would prefer using an app rather than a whiteboard or a notebook to manage the “strike system” for not speaking English in class. They may be more likely to adopt the app if they assume that students have a right to determine the educational technologies they use—and are mature enough to consider some of the ethical risks involved. Conversely, if they assumed they had the sole right or responsibility to determine which technologies are used in their classroom, their students’ feelings about ClassDojo would become less relevant to this ethical decision. On a related note, it is possible that their school—or a local, regional, or national educational or legislative body—would make the decision about ClassDojo for them, either mandating or forbidding its use based on the assumption that such bodies better understand the benefits and risks involved.

Conclusion

Throughout this chapter, I have demonstrated how our assumptions about education, educational technology, research, and stakeholders in these pursuits shape our collection and use of data. It follows, therefore, that questions of ethics and justice as they apply to data are not limited to the data themselves. Rather, data misuse can be motivated by deeper assumptions, and debates about data ethics are often better understood as debates about deeper issues. Further complicating this issue, few of the assumptions that I have considered in this chapter are inherently wrong: From my perspective, at least, developing mastery of mathematics and literacy is an important part of education, technology does sometimes improve the processes of learning and teaching, collecting data is a necessary part of research, and policymakers can use data to improve educational systems. However, the collection and use of data are often justified on the basis of these assumptions alone, without critically examining them or holding them in tension with other guiding beliefs. For example, we would benefit from asking what other content areas are important, what other technologies are used in educational settings, what kinds of data are valued in education research, and what limits should be placed on policymakers’ influence in the classroom. Likewise, even after critical examination, all of these assumptions must stand alongside—rather than override—assumptions about stakeholders’ dignity, agency, and privacy.

Many of these considerations are typically seen as outside the realm of educational technology, but they are no less important for that. I do not wish to dismiss the expertise built up within the more traditional boundaries of our discipline, nor would I dare suggest that we do not need to consult stakeholders in other disciplines who are more used to thinking these questions through. Nonetheless, just as the ethical and just use of data in educational technology contexts is not merely about data, it must not be informed only by established and uncontested ideas within our field. Indeed, over 35 years ago, Mason (1986) described four fundamental “ethical issues of the information age” that overlap considerably with many of the considerations I have described here. These issues have only become more pressing in the decades since, and if we have not fully grappled with them, it is perhaps because we have been overly narrow in our concerns. To ensure the ethical use of data in educational technology, we must be willing to explore widely and dig deep.

References

Aguilar, S. J., Silver, D., & Polikoff, M. S. (2022). Analyzing 500,000 TeachersPayTeachers.com lesson descriptions shows focus on K-5 and lack of Common Core alignment. Computers and Education Open, 3, 100081. https://doi.org/10.1016/j.caeo.2022.100081

Associated Press. (2021, May 4). Sheriff, school board revise plan to access student data. AP News. https://apnews.com/article/school-boards-education-073cd70e4d0e7988207a618c12ae0851

Au, W. (2016). Meritocracy 2.0: High-stakes, standardized testing as a racial project of neoliberal multiculturalism. Educational Policy, 30(1), 39–62. https://doi.org/10.1177/0895904815614916

Baker, R., & Siemens, G. (2014). Educational data mining and learning analytics. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (2nd ed., pp. 253–272). Cambridge University Press.

Beals, M. (2022, February 14). George Washington University apologizes for tracking locations of students, faculty. The Hill. https://thehill.com/homenews/state-watch/594142-george-washington-university-apologizes-for-tracking-locations-of

Beardsley, M., Santos, P., Hernández-Leo, D., & Michos, K. (2019). Ethics in educational technology research: Informing participants on data sharing risks. British Journal of Educational Technology, 50(3), 1019–1034. https://doi.org/10.1111/bjet.12781

Bedi, N., & McGrory, K. (2020, November 19). Pasco’s sheriff uses grades and abuse histories to label schoolchildren potential criminals: The kids and their parents don’t know. Tampa Bay Times. https://projects.tampabay.com/projects/2020/investigations/police-pasco-sheriff-targeted/school-data/

Biddle, S. (2020, July 9). Police surveilled George Floyd protests with help from Twitter-affiliated startup Dataminr. The Intercept. https://theintercept.com/2020/07/09/twitter-dataminr-police-spy-surveillance-black-lives-matter-protests/

Boekweg, A., Call, H., Craw, D., Jennings, F., Irvine, J., & Kimmons, R. (2021). Educational technology: A history of research trends from 1970 to 2020. In R. Kimmons & J. Irvine (Eds.), 50 years of education research trends. https://edtechbooks.org/50_years

boyd, d., & Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878

Bradbury, A. (2019). Datafied at four: The role of data in the ‘schoolification’ of early childhood education in England. Learning, Media and Technology, 44(1). https://doi.org/10.1080/17439884.2018.1511577

Budington, B. (2021, August 9). The company behind online learning platform Canvas should commit to transparency, due process for students. Electronic Frontier Foundation Deeplinks Blog. https://www.eff.org/deeplinks/2021/08/company-behind-online-learning-platform-canvas-should-commit-transparency-due

Campos, F. C., Ahn, J., DiGiacomo, D. K., Nguyen, H., & Hays, M. (2021). Making sense of sensemaking: Understanding how K-12 teachers and coaches react to visual analytics. Journal of Learning Analytics, 8(3), 60–80. https://doi.org/10.18608/jla.2021.7113

Corrin, L., Kennedy, G., French, S., Shum, S. B., Kitto, K., Pardo, A., West, D., Mirriahi, N., & Colvin, C. (2019). The ethics of learning analytics in Australian higher education: A discussion paper. https://melbourne-cshe.unimelb.edu.au/research/research-projects/edutech/the-ethical-use-of-learning-analytics

Crooks, R. (2019). Cat-and-mouse games: Dataveillance and performativity in urban schools. Surveillance and Society, 17(3/4), 484–498. https://doi.org/10.24908/ss.v17i3/4.7098

Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. Teachers College Press.

Cuban, L. (2003). Oversold & underused: Computers in the classroom. Harvard University Press.

D’Ignazio, C., & Klein, L. F. (2020). Data feminism. The MIT Press.

Donaldson, J. (2016). Women’s voices in the field of educational technology. Springer. https://doi.org/10.1007/978-3-319-33452-3

Doyle, S. (2021). Why don’t you trust us? The Journal of Interactive Technology & Pedagogy, 20. https://jitp.commons.gc.cuny.edu/why-dont-you-trust-us/

Drachsler, H. (2016). Privacy and analytics—it’s a DELICATE issue: A checklist for trusted learning analytics. In D. Gašević & G. Lynch (Chairs), LAK ‘16: Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (pp. 89–98). https://doi.org/10.1145/2883851.2883893

Eaton, L. (2021). The new LMS rule: Transparency working both ways. The Journal of Interactive Technology & Pedagogy, 20. https://jitp.commons.gc.cuny.edu/the-new-lms-rule-transparency-working-both-ways/

Faife, C. (2022, May 31). After Uvalde, social media monitoring apps struggle to justify surveillance. The Verge. https://www.theverge.com/2022/5/31/23148541/digital-surveillance-school-shootings-social-sentinel-uvalde

Fairclough, N. (1989). Language and power. Addison Wesley Longman Limited.

Fiesler, C., Beard, N., & Keegan, B. C. (2020). No robots, spiders, or scrapers: Legal and ethical regulation of data collection methods in social media terms of service. In M. De Choudhury, R. Chunara, A. Culotta, & B. Foucault (Eds.), Proceedings of the Fourteenth International AAAI Conference on Web and Social Media (pp. 187–196). Association for the Advancement of Artificial Intelligence.

Fiesler, C., & Proferes, N. (2018). “Participant” perceptions of Twitter research ethics. Social Media + Society, 4(1). https://doi.org/10.1177/2056305118763366

Franzke, A. S., Bechmann, A., Zimmer, M., Ess, C., & the Association of Internet Researchers. (2020). Internet research: Ethical guidelines 3.0. https://aoir.org/reports/ethics3.pdf

Gilbert, S., Vitak, J., & Shilton, K. (2021). Measuring Americans’ comfort with research uses of their social media data. Social Media + Society, 7(3). https://doi.org/10.1177/20563051211033824

Gordon, A., & Rose, J. (2022, August 25). “The least safe day”: Rollout of gun-detecting AI scanners in schools has been a ‘cluster,’ emails show. Motherboard. https://www.vice.com/en/article/5d3dw5/the-least-safe-day-rollout-of-gun-detecting-ai-scanners-in-schools-has-been-a-cluster-emails-show

Green, B. (2021). Data science as political action: Grounding data science in a politics of justice. Journal of Social Computing, 2(3), 249–265. https://doi.org/10.23919/JSC.2021.0029

Greenhalgh, S. P., Rosenberg, J. M., & Wolf, L. G. (2016). For all intents and purposes: Twitter as a foundational technology for teachers. E-Learning and Digital Media, 13(1-2), 81–98. https://doi.org/10.1177/2042753016672131

Greenhalgh, S. P., Koehler, M. J., Rosenberg, J. M., & Staudt Willet, K. B. (2021). Considerations for using social media data in learning design and technology research. In E. J. Romero-Hall (Ed.), Research methods in learning design and technology (pp. 64–77). Routledge.

Greenhalgh, S. P., Nnagboro, C., Kaufmann, R., & Gretter, S. (2021). Academic, social, and cultural learning in the French #bac2018 hashtag. Educational Technology Research and Development, 69(3), 1835–1851. https://doi.org/10.1007/s11423-021-10015-6

Greenhalgh, S. P., Krutka, D., & Oltmann, S. M. (2021). Gab, Parler, and (mis)educational technologies: Reconsidering informal learning on social media platforms. Journal of Applied Instructional Design, 10(3). https://doi.org/10.51869/103/sgdkso

Hakimi, L., Eynon, R., & Murphy, V. A. (2021). The ethics of using digital trace data in education: A thematic review of the research landscape. Review of Educational Research, 91(5), 671–717. https://doi.org/10.3102/00346543211020116

Hankerson, D. L., Venzke, C., Laird, E., Grant-Chapman, H., & Thakur, D. (2021). Online and observed: Student privacy implications of school-issued devices and student activity monitoring software. Center for Democracy & Technology. https://cdt.org/insights/report-online-and-observed-student-privacy-implications-of-school-issued-devices-and-student-activity-monitoring-software/

Heath, M. K. (2021). Buried treasure or ill-gotten spoils: The ethics of data mining and learning analytics in online instruction. Educational Technology Research and Development, 69, 331–334. https://doi.org/10.1007/s11423-020-09841-x

Jin, T. (2021). Learning analytics: The emerging research method for enhancing teaching and learning. In E. Romero-Hall (Ed.), Research methods in learning design and technology (pp. 192–205). Routledge.

Kendi, I. X. (2017). Stamped from the beginning: The definitive history of racist ideas in America. Bold Type Books.

Kimmons, R., & Veletsianos, G. (2018). Public internet data mining methods in instructional design, educational technology, and online learning research. TechTrends, 62, 492–500. https://doi.org/10.1007/s11528-018-0307-4

Krutka, D. G., Manca, S., Galvin, S. M., Greenhow, C., Koehler, M. J., & Askari, E. (2019). Teaching “against” social media: Confronting problems of profit in the curriculum. Teachers College Record, 121(14), 23046. https://doi.org/10.1177/016146811912101

Mandinach, E. B., & Gummer, E. S. (2021a). Data ethics: An introduction. In E. B. Mandinach & E. S. Gummer (Eds.), The ethical use of data in education: Promoting responsible policies and practices (pp. 1–32). WestEd.

Mandinach, E. B., & Gummer, E. S. (2021b). The ethical use of data in education: Promoting responsible policies and practices. WestEd.

Mason, R. O. (1986). Four ethical issues of the information age. MIS Quarterly, 10(1), 5–12.

Miller, J. (2019, February 28). A lieutenant with BYU police shared private reports with university officials investigating students when he shouldn’t have. And new info shows he did it for two years. The Salt Lake Tribune. https://www.sltrib.com/news/2019/02/28/lieutenant-with-byu/

Miller, J. (2021, January 5). BYU will keep its police department, after a judge dismisses Utah’s decertification efforts. The Salt Lake Tribune. https://www.sltrib.com/news/2021/01/05/byu-will-keep-its-police/

Miller, J., & Alberty, E. (2021, December 16). Newly released records show it was ‘standard practice’ for BYU police to help with Honor Code surveillance. The Salt Lake Tribune. https://www.sltrib.com/news/2021/12/16/newly-released-records/

Mishra, P., Koehler, M. J., & Kereluik, K. (2009). The song remains the same: Looking back to the future of educational technology. TechTrends, 53(5), 48–53. https://doi.org/10.1007/s11528-009-0325-3

Molenda, M. (2008). Historical foundations. In J. M. Spector, M. D. Merrill, J. van Merriënboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 3–20). Lawrence Erlbaum Associates.

Nichols, S. L. (2021). Educational policy contexts and the (un)ethical use of data. In E. B. Mandinach & E. S. Gummer (Eds.), The ethical use of data in education: Promoting responsible policies and practices (pp. 81–97). WestEd.

Penuel, W. R., & Frank, K. A. (2016). Modes of inquiry in educational psychology and learning sciences research. In L. Corno & E. M. Anderman (Eds.), Handbook of educational psychology (3rd ed., pp. 16–28). Routledge.

Proferes, N. (2017). Information flow solipsism in an exploratory study of beliefs about Twitter. Social Media + Society, 3(1). https://doi.org/10.1177/2056305117698493

Proferes, N., & Walker, S. (2020). Researcher views and practices around informing, getting consent, and sharing research outputs with social media users when using their public data. In Proceedings of the 53rd Annual Hawai’i International Conference on System Sciences. IEEE Computer Society.

Raveaud, M. (2008). Culture-blind? Parental discourse on religion, ethnicity and secularism in the French educational context. European Educational Research Journal, 7(1), 74–88. https://doi.org/10.2304/eerj.2008.7.1.74

Reeves, T. C., & Lin, L. (2020). The research we have is not the research we need. Educational Technology Research and Development, 68, 1991–2001. https://doi.org/10.1007/s11423-020-09811-3

Romero-Hall, E. (2021). Research methods in learning design and technology: A historical perspective of the last 40 years. In E. Romero-Hall (Ed.), Research methods in learning design and technology (pp. 1–10). Routledge.

Romero-Hall, E. (2022). Navigating the instructional design field as an Afro-Latinx woman: A feminist autoethnography. TechTrends, 66, 39–46. https://doi.org/10.1007/s11528-021-00681-x

Romero-Hall, E., Correia, A. P., Branch, R. M., Cevik, Y. D., Dickson-Dean, C., Chen, B., Liu, J. C., Tang, H., Vasconcelos, L., Pallit, N., & Thankachan, B. (2021). Futurama: Learning design and technology research methods. In E. Romero-Hall (Ed.), Research methods in learning design and technology (pp. 206–226). Routledge.

Rose, J. (2022, June 6). Axon halts plans to sell flying taser drones to schools. Motherboard. https://www.vice.com/en/article/88q4gk/axon-halts-plans-to-sell-flying-taser-drones-to-schools

Rosenberg, J. M., Lawson, M., Anderson, D. J., Jones, R. J., & Rutherford, T. (2021). Making data science count in and for education. In E. Romero-Hall (Ed.), Research methods in learning design and technology (pp. 94–110). Routledge.

Saltz, J. S., & Stanton, J. M. (2018). An introduction to data science. SAGE Publications, Inc.

Selwyn, N. (2019). What is digital sociology? Polity Press.

Shaffer, D. W. (2017). Quantitative ethnography. Cathcart Press.

Shelton, C. C., Koehler, M. J., Greenhalgh, S. P., & Carpenter, J. P. (2021). Lifting the veil on TeachersPayTeachers.com: An investigation of educational marketplace offerings and downloads. Learning, Media and Technology. https://doi.org/10.1080/17439884.2021.1961148

Simon, P. (2015). The choice of ignorance: The debate on ethnic and racial statistics in France. In P. Simon, V. Piché, & A. A. Gagnon (Eds.), Social statistics and ethnic diversity: Cross-national perspectives in classifications and identity politics (pp. 65–88). Springer.

Simpson, T. W. (2016). American universities and the birth of modern Mormonism, 1867–1940. The University of North Carolina Press.

Singer, N. (2014, November 16). Privacy concerns for ClassDojo and other tracking apps for schoolchildren. The New York Times. https://www.nytimes.com/2014/11/17/technology/privacy-concerns-for-classdojo-and-other-tracking-apps-for-schoolchildren.html

Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1509–1528. https://doi.org/10.1177/0002764213479366

Smith, J. P., & Greenhalgh, S. P. (2017). The role of (real) thinking in education: Why Dewey still raises the bar on educators. In L. J. Waks & A. R. English (Eds.), John Dewey’s Democracy and Education: A centennial handbook (pp. 99–107). Cambridge University Press.

Spears, V. H. (2019, April 25). Lexington schools are monitoring students on social media. How that prevented a suicide. Lexington Herald-Leader. https://www.kentucky.com/article229626509.html

Tufekci, Z. (2014). Big questions for social media big data: Representativeness, validity, and other methodological pitfalls. Proceedings of the International AAAI Conference on Web and Social Media, 8(1), 505–514. https://doi.org/10.1609/icwsm.v8i1.14517

van Dijck, J. (2013). The culture of connectivity: A critical history of social media. Oxford University Press.

van Dijck, J., Poell, T., & de Waal, M. (2018). The platform society: Public values in a connected world. Oxford University Press.

Vitak, J., Proferes, N., Shilton, K., & Ashktorab, Z. (2017). Ethics regulation in social computing research: Examining the role of Institutional Review Boards. Journal of Empirical Research on Human Research Ethics, 12(5), 372–382. https://doi.org/10.1177/1556264617725200

Wang, M., & Gebhart, G. (2020, February 27). Schools are pushing the boundaries of surveillance technologies. EFF Deeplinks. https://www.eff.org/deeplinks/2020/02/schools-are-pushing-boundaries-surveillance-technologies

Waterman, B., & Kagel, B. (1998). The Lord’s university: Freedom and authority at BYU. Signature Books.

Watters, A. (2018, February 8). School shooting simulation software (and the problem with how people define ‘ed-tech’). Hack Education. https://hackeducation.com/2018/02/08/what-is-ed-tech

Watters, A. (2021). Teaching machines: The history of personalized learning. MIT Press.

Williamson, B. (2016a). Digital education governance: An introduction. European Educational Research Journal, 15(1), 3–13. https://doi.org/10.1177/1474904115616630

Williamson, B. (2016b). Digital education governance: Data visualization, predictive analytics, and ‘real-time’ policy instruments. Journal of Education Policy, 31(2), 123–141. https://doi.org/10.1080/02680939.2015.1035758

Williamson, B. (2017). Learning in the ‘platform society’: Disassembling an educational data assemblage. Research in Education, 98(1), 59–82. https://doi.org/10.1177/0034523717723389

Wrighton, M. S. (2022, February 11). Message regarding data analytics pilot project. Office of the President, George Washington University. https://president.gwu.edu/message-regarding-data-analytics-pilot-project



Spencer P. Greenhalgh

University of Kentucky

Spencer P. Greenhalgh is a transdisciplinary digital methods researcher studying meaning-making practices on online platforms. He holds a BA in French Teaching from Brigham Young University and a PhD in Educational Psychology and Educational Technology from Michigan State University. He is currently an associate professor of information communication technology in the School of Information Science at the University of Kentucky, where he teaches courses on games and literacies, information technology, web content management, and data science. You can learn more about Spencer and his work at https://spencergreenhalgh.com/work/
