Ethics of Using AI in Higher Education and Its Impact on Academic Integrity

Background: Jessica, a diligent third-year student majoring in Computer Science at University X, had consistently maintained a strong academic record. As a member of the university’s honor society, she was well-regarded by her peers and professors for her dedication and integrity. However, during her spring semester, Jessica faced a series of personal challenges that significantly impacted her ability to keep up with coursework.

Struggling to balance her studies with her personal life, Jessica found herself overwhelmed, particularly in her Advanced Algorithms course. The pressure to maintain her GPA and uphold her reputation as a top student led her to explore alternative methods of completing her assignments. It was then that she discovered an AI-based tool that promised to generate high-quality code for complex algorithmic problems.

The Incident: The final project for Jessica’s Advanced Algorithms course was a significant portion of her grade. The assignment required students to develop an original algorithm to solve a given problem, write a detailed report explaining the logic and efficiency of the algorithm, and present their findings in class.

Pressed for time and under immense stress, Jessica decided to use the AI tool to generate the code for her project. She rationalized her decision by telling herself that she would carefully review and understand the AI-generated code before submitting it. However, as the deadline approached, she realized she didn’t fully comprehend the intricacies of the code. Despite her reservations, she submitted the project with minor edits, hoping it would go unnoticed.

When Jessica presented her work to the class, her professor, Dr. Thompson, noticed inconsistencies between her explanation and the code. The algorithm, though functional, was overly complex for a student at her level, and certain elements lacked the logical flow typical of her previous work. Suspicious, Dr. Thompson decided to run the code through the university’s plagiarism detection software, which had recently been updated to include AI-generated content detection. The software flagged significant portions of Jessica’s code as potentially AI-generated.

The Consequences: Confronted with the evidence, Jessica confessed to using the AI tool. The university’s academic integrity policy was clear: submitting work that is not one’s own, whether plagiarized or generated by an AI, constitutes a serious violation. Jessica was referred to the academic integrity board, where she faced severe consequences. Her final project was given a failing grade, which significantly lowered her overall course grade, and she was placed on academic probation. The incident also led to her suspension from the honor society.

Beyond the immediate academic penalties, Jessica’s reputation suffered. Her peers, who once admired her, began to distance themselves, and she experienced feelings of guilt and shame. The misuse of AI not only jeopardized her academic standing but also had a profound impact on her mental health and future career prospects.


The above case study was generated by ChatGPT 3.

Implications of Using AI in Higher Education

Artificial Intelligence (AI) has the capacity to create and build upon personalized learning experiences, design efficient frameworks, and strengthen the effectiveness of technology's affordances. AI's ability to process large volumes of data yields insights into trends in students' learning behaviors and can suggest personalizations that help faculty design customized learning experiences relevant to their students' needs. To that end, tools such as chatbots and adaptive learning systems strengthen student engagement by continually adjusting the pace and level of challenge of the content based on ongoing evaluation of students' engagement. In addition, the ability to provide just-in-time feedback, identify areas that are challenging and require attention, and answer standard queries on class policies helps enhance students' learning and engagement with the content. For faculty, AI can assist with automated grading, managing student gradebooks, handling communications, and scheduling meetings with students. This frees faculty to spend more time ensuring that the learning experience is personalized to students' academic levels.


AI is considered disruptive by society because the speed at which the technology is developing and broadening its scope outpaces the rate at which those involved in its development can produce guidelines for its usage. This poses a challenge in higher education, especially for faculty, students, and universities, who do not have adequate time to work out how AI can be integrated into teaching and learning ethically. The focus on ensuring ethical integration is warranted by AI's pervasive adoption. While AI has the potential to enrich learning experiences, it is ethically imperative that such experiences be inclusive and unbiased.


The focus of this chapter is to explore applications of AI through the lenses of students and higher education institutions (comprising faculty and school administrators). The first section navigates the ethical challenges posed by AI-powered tools in higher education institutions, focusing on biases and surveillance in AI algorithms, the impact on decision making in college admissions and grading, and data privacy. Complementing these institutional concerns, the second half of the chapter sheds light on the impact of AI-powered tools on students' academic integrity.

Ethical Challenges of Using AI in Higher Education


eLearning, or electronic learning, is defined as any learning that takes place on a digital platform. While eLearning is a relatively modern concept, distance education in the 1950s originated from the use of slide projectors and television to aid teaching. In 1954, B.F. Skinner invented the 'teaching machine,' which made it possible for students to receive programmed instruction. This was followed by the creation of the University of Illinois's PLATO (Programmed Logic for Automated Teaching Operations), the world's first computer-based training program. While the intent of early online learning systems was to distribute information to students, by the 1970s online learning had become more interactive.


Successively, with the advent of personal computing and the internet in the 1980s and 1990s, the affordances of eLearning tools expanded. Virtual learning environments empowered students to access information remotely, thereby extending learning beyond the walls of the classroom. Universities now offer online courses and degree programs to accommodate the needs of busy adult students. eLearning affords discourse (through videoconferencing, webinars, chats, and podcasts) between teachers and their students, who may be registered for classes from across the globe. However, beyond the numerous benefits offered by eLearning, there are risks related to students' privacy, security, and ethics that need to be considered (Etambakonga, 2021). Specifically, research indicates that equity, quality of academic programs, academic integrity, and surveillance are primary concerns (Reamer, 2013).


Building upon existing eLearning, AI technologies in education have significantly impacted traditional teaching and learning practices. For instance, chatbots offer students individualized teaching support (Chocarro et al., 2021; Nye, 2015; Smutny & Schreiberova, 2020; Yang & Evans, 2019) and feedback (Dawson et al., 2018). AI can be used for automated grading and formative assessments (Dumelle, 2020; Hsu et al., 2021). In addition, AI can generate virtual reality environments that afford opportunities for students to practice and refine skills such as language learning (Hannan & Liu, 2021; Luan et al., 2020; McKenzie, 2018) or surgical procedures (Fazlollahi et al., 2022).


In short, AI primarily has two kinds of capabilities: generative (e.g., creating personalized learning paths, intelligent tutoring systems, and content generation tools) and predictive (e.g., providing real-time learning analytics and feedback, real-time intervention and support, and data-assisted curriculum design), to list a few functionalities. AI is able to engage in these processes based on the data it has been trained upon.


The upcoming sections explore ethical concerns that arise when AI tools are trained on data sets that contain existing biases.


Ethical Concerns About Biases and Surveillance in AI Algorithms


The basic premise of AI algorithms is that the accuracy of the content they produce is largely dependent on the data they are trained on. If the data is biased or incomplete, it is likely that these biases will be reproduced and perpetuated. This in turn could result in learning experiences that are neither equitable nor inclusive, and the harmful outcomes of these inconsistencies are not evenly shared across the student population. As an illustration, a study by Yoder-Himes et al. (2022) reviewed the outputs of a widely adopted automated proctoring software used to estimate the likelihood that students would need additional review and guidance by instructors, and examined those outputs by students' race, skin tone, and gender. Findings indicated that students with darker skin tones and Black students were more likely to be singled out as requiring instructor review, owing to the possibility of cheating, than fellow students with lighter skin tones. Moreover, the findings suggested an implicit bias toward female students with darker skin tones, who were more likely to be identified as needing review than male students with darker skin tones and female students with lighter skin tones. The study is important because it highlights ethical concerns about AI technologies, such as online proctoring, that make decisions affecting education, equity, and social justice on the basis of students' race and gender.
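
The kind of disparity the study reports can be made concrete with a simple audit of flag rates per group. The sketch below works against an entirely hypothetical set of proctoring outcomes (the group labels and counts are illustrative, not data from Yoder-Himes et al., 2022) and shows how one might compute per-group flag rates and a disparity ratio:

```python
from collections import defaultdict

# Hypothetical proctoring outcomes: (demographic group, was the student flagged?)
# These records are illustrative only; they are not data from the cited study.
records = [
    ("darker skin tone, female", True), ("darker skin tone, female", True),
    ("darker skin tone, female", False),
    ("darker skin tone, male", True), ("darker skin tone, male", False),
    ("lighter skin tone, female", False), ("lighter skin tone, female", False),
    ("lighter skin tone, female", True),
    ("lighter skin tone, male", False), ("lighter skin tone, male", True),
    ("lighter skin tone, male", False), ("lighter skin tone, male", False),
]

# Tally total students and flagged students per group.
totals, flagged = defaultdict(int), defaultdict(int)
for group, was_flagged in records:
    totals[group] += 1
    if was_flagged:
        flagged[group] += 1

# Per-group flag rate: the share of that group singled out for instructor review.
rates = {group: flagged[group] / totals[group] for group in totals}
for group, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{group:27s} flag rate = {rate:.2f}")

# Disparity ratio: highest group rate divided by lowest; 1.0 would indicate parity.
disparity = max(rates.values()) / min(rates.values())
print(f"disparity ratio = {disparity:.2f}")
```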


AI-powered tools for student device tracking, predictive policing, and facial recognition are not unfamiliar concepts in schools. A recent survey of middle and high school parents and teachers by the Center for Democracy & Technology (2023) reports that over 88% of school districts use student device monitoring, 33% use facial recognition, and 38% share student data with law enforcement. The stated intent of AI surveillance, as promoted by the software companies that develop it, is to help schools support their students' mental health. This implies that the algorithms are designed to collect data on students' online activities. As an illustration, for close to a decade, 37 universities have relied on AI-powered tools such as Social Sentinel to support students' mental health by collecting information on students' activities from their social media posts (Sen & Bennett, 2022). The intent of this surveillance is noble, i.e., to identify students in crisis (who may be at risk of self-harm or violence) and notify administrators. However, some universities have used the service to track students who may be involved in protests by scouring their social media accounts. For example, during demonstrations at a Confederate statue at UNC-Chapel Hill, Social Sentinel searched for specific keywords to find students' social media posts related to the protest, thereby identifying students who either participated in or supported the incident. Similarly, Social Sentinel surfaced the social media posts of a cheerleader at North Carolina A&T who alleged that the school had mishandled her rape complaint.


AI surveillance can help enhance security by preventing criminal activities and alerting site managers to potential threats. However, it is critical to ensure that students' right to privacy is not violated in the process. A key ethical concern in using AI for surveillance is the potential for discrimination and bias. For many low-income and minority students, school-sponsored computers are the only means by which they can engage and participate in online activities. Knowing that AI-powered algorithms track and flag their online activities instills a fear of being criminalized (Sampath & Syed, 2023) and chills students' freedom of expression. Six in ten students are hesitant to express their opinions online because they do not know the extent to which sensitive information stored on their computers may be viewed by others outside the school district (Madrigal, 2021). A primary cause for concern is that their personal information and their history of web searches on controversial and political topics (such as gun control, abortion, or homosexuality) and on mental health could be shared publicly without their consent. Cyber threats and cyber attacks, a reality in today's world, have gravely damaging consequences for individuals. Such attacks take the form of computer viruses, data breaches, and denial-of-service threats. Hackers can gain unauthorized access to a database, corrupt data, and steal personal information.


The term 'surveillance capitalism,' coined by Zuboff (2020), succinctly describes how AI-powered tools capture massive amounts of user data by giving machine learning algorithms access to student experiences. Students themselves are unable to protect their right to privacy if they are expected to use built-in tools as part of course requirements. For instance, when students sign up for Piazza (an external tool integrated within learning management systems to facilitate collaboration between students and instructors through a question-and-answer discussion board), their data is shared with third-party vendors looking for candidates to fill job postings. Thus, the onus of ensuring students' privacy falls on the higher education institution. Student leaders from Encode Justice, a global youth-centered coalition that promotes human-centered AI, posit that youth have become desensitized to mass surveillance (Sampath & Syed, 2023). They consider AI-powered surveillance a threat to their individual autonomy, as they feel compelled to limit their choices and conform to expected societal behaviors and norms, which may impact their creativity and growth.

To reduce the risk of privacy violations and surveillance, Swartz & McElroy (2023) suggest that students, faculty, and staff be key stakeholders when deciding which AI-led tools will be integrated into the learning experience. This gives them autonomy and knowledge of the extent to which their data will be used by external vendors. Faculty can also be encouraged to revise their syllabi to commit to transparency about the kinds of student data collected by the AI-powered tools they mandate in class.

Ethics of Using AI to Manage Students’ Data 


Universities collect personal and academic information about their students. These data points include demographic information (gender, ethnicity, socioeconomic background) and academic information (grades by semester across courses and schools, and learning analytics on activities within course shells, such as time spent on specific pages and regularity of assignment submissions). There is also potential to capture more granular information about students' activities on campus (such as the frequency of badge swipes to access libraries, academic buildings, student centers, dining halls, etc.).


Using AI to manage this data carries the potential for data breaches and misuse. Universities need to consider multiple approaches to ensuring the privacy of students' data. First, they must adhere to data protection laws, such as the Family Educational Rights and Privacy Act (FERPA), that ensure confidentiality, secure storage, and use of data only for educational purposes. It is critical that AI-powered tools be configured to strictly follow established protocols. Second, there needs to be transparency in keeping students aware of the ways AI gathers and uses their data and of the guidelines that exist to safeguard their information.


Experts such as Balaban (2024) recommend that all sensitive data be encrypted using robust algorithms so that it is unreadable if it reaches the hands of unauthorized users. This includes frequent data backups and testing of restoration procedures. He strongly advises strengthening authentication strategies to prevent unauthorized access to AI tools that store student data. In addition, he cautions that AI-powered tools can themselves be targeted by malware; such attacks can be mitigated by keeping all operating systems updated with the latest security patches. Given that AI-powered tools are trained on datasets, it is also critical to watch for abnormalities and inconsistencies and to ensure that diversity is represented, so as to reduce bias in outcomes.
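
As a concrete illustration of the encryption-at-rest recommendation, the sketch below uses the Python cryptography library's Fernet recipe (symmetric, authenticated encryption) to protect a student record before storage. The record fields and file name are hypothetical, and a real deployment would also need secure key management, which is outside this sketch:

```python
from cryptography.fernet import Fernet

# Generate a symmetric key. In practice this key would live in a secrets
# manager or hardware security module, never alongside the encrypted data.
key = Fernet.generate_key()
cipher = Fernet(key)

# A hypothetical student record; field names are illustrative only.
record = b'{"student_id": "A1234567", "grade": "B+", "advising_notes": "..."}'

# Encrypt before writing to disk or a database: the ciphertext is unreadable
# and tamper-evident without the key.
token = cipher.encrypt(record)
with open("student_record.enc", "wb") as f:
    f.write(token)

# Authorized services holding the key can decrypt; anyone else sees only bytes.
with open("student_record.enc", "rb") as f:
    restored = cipher.decrypt(f.read())
assert restored == record
```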

Ethics of Using AI for Automating College Admissions 

The presence of chatbots on university websites to answer applicants' frequently asked questions has become common practice (Anonymous, 2024). In addition, chatbots review students' profiles to provide personalized guidance and reminders about application deadlines (Evaristo, 2023). The intent behind such tools is to free admissions staff's time to focus on other aspects of applicants' submissions.


However, the influence of AI in the admissions process extends beyond answering routine questions about the university. Schools such as North Carolina State University use Sia, an AI tool, to process college transcripts by gathering information on students' coursework and transfer credits (Evaristo, 2023). A study by Intelligent.com (2023) reports that, of the 346 participating institutions, 87% use AI to influence their final admissions decisions, with 43% using it sometimes and 44% always. Institutions have used AI to review letters of recommendation and transcripts and to communicate with applicants. The report shares the views of Diane Gayeski, professor of Communications at Ithaca College: "AI can look at the number of extracurriculars. It can figure out whether you're a captain of your team or the president of the honor society. The technology can take the rubrics given to an admissions reader and give them to AI." In addition, Dr. Gayeski champions AI-powered review software because it ignores students' demographic data, such as age, socioeconomic background, zip code, or even name, thereby eliminating the possibility of any bias. However, 65% of the admissions professionals surveyed by Intelligent.com expressed ethical concerns over the use of AI. They were perturbed that the admissions process would lack human consideration of specific or special circumstances that may affect students' applications.


In addition, several higher education institutions are concerned that AI may reinforce existing biases during the application process rather than mitigate them. The University of Texas at Austin reported using an AI tool it had created for the selection of its PhD candidates. Using AI as part of the admissions process led the institution to conclude that the selected applicant pool mirrored the student demographic that had historically been admitted. In other words, the AI tool was successful in reducing individual human bias but unsuccessful in escaping the biases already present in its training data.


Ethics of Using AI for Grading


AI-powered assignment grading has helped automate a range of student evaluations: multiple-choice questions, short answers, essays, and problem-solving written responses. In the case of multiple-choice questions, AI-led evaluation systems compare student responses against the correct answers specified in the grading key. Natural language processing technology can automate the grading of written work by detecting errors and identifying argument structures it has been trained to recognize (Fu et al., 2018). Machine learning algorithms are designed to analyze student data and develop grading models that, through continued analysis, become more accurate over time.
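
The multiple-choice case is the simplest to automate. The sketch below, using a hypothetical answer key and hypothetical student responses, shows the basic comparison step such systems perform before any machine learning is involved:

```python
# Hypothetical answer key and student responses; question IDs and choices are illustrative.
answer_key = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"}
student_responses = {"Q1": "B", "Q2": "A", "Q3": "A", "Q4": "C"}

def grade_multiple_choice(key: dict[str, str], responses: dict[str, str]) -> tuple[int, float]:
    """Count correct answers and return (raw score, percentage)."""
    correct = sum(1 for question, answer in key.items() if responses.get(question) == answer)
    return correct, 100.0 * correct / len(key)

score, percent = grade_multiple_choice(answer_key, student_responses)
print(f"Score: {score}/{len(answer_key)} ({percent:.0f}%)")  # Score: 3/4 (75%)
```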


The process of repeated evaluation trains AI systems to assess new student responses against existing rubrics. This is advantageous for students: they can review their evaluation outcomes faster than it would have taken their instructors to grade their work, and the grading would be free from individual instructors' biases and opinions.

 

However, using AI to grade assignments raises questions about whether the instrument itself is free from the biases of the humans who created it (Yang, 2022). Silverstrone & Rubman (2024) from the MIT Sloan School of Management illustrate this point by noting that AI tools trained on business plans from male-led startups in specific industries are unintentionally unfavorable to business plans directed at gaps in markets catering to women, non-binary people, or other underrepresented genders.


Arguments against the use of AI grading tools caution faculty to consider the subscription cost of such software; possible breaches of privacy involving instructors' and students' demographic information, including violations of privacy laws such as FERPA; the legal risk of going against the university's grading policy; and the ethics of sharing student submissions without their consent, thereby betraying students' expectation of receiving feedback from their instructor (Kumar, 2023).


Kelly (2024) reports, from her discussions with faculty such as Leslie Layne of the University of Lynchburg, Virginia, that using AI as a grading tool entails several ethical violations. The first is uploading students' work to an LLM, thereby breaching their intellectual property; the concern is that AI tools can potentially use student submissions as data to train their algorithms. Dorothy Leidner, a professor of Business Ethics at the University of Virginia, cautions that this could be damaging for master's and doctoral students who aspire to publish their dissertations and contribute to their area of research. Second, doing so without students' consent or awareness is ethically incorrect: students need transparency about which AI tools are being used to evaluate their submissions and a shared understanding of what content will be uploaded. Third, and possibly the most important ethical issue, is the intent behind using AI as a grading tool: whether for declarative knowledge (which has a single correct or incorrect response) or as a substitute when personalized feedback is required to guide students' understanding, creativity, and progress over time. For parents and students, it raises the concern of investing time and large sums of tuition money in feedback loops that are AI-generated and AI-graded.

Academic Integrity in Higher Education Under the Lens of AI

The International Center for Academic Integrity (2021) defines academic integrity as a commitment to uphold honesty, trust, fairness, respect, responsibility, and courage. The intent of upholding these critical facets is to create a learning environment where credibility, ethical decision-making capacities, and values are cornerstones of a culture of integrity at the individual, classroom, and university level.

Guerrero-Dib, Portales & Heredia-Escorza (2020) emphasize that academic integrity extends beyond avoiding cheating, plagiarizing, or copying; it is a commitment to the learning process through ethical use of available resources and genuine effort. However, the onus of maintaining academic integrity does not fall on students alone. It is imperative that higher education institutions enforce high-quality pedagogical practices, curriculum development, and research, along with clear guidelines for what counts as a violation of academic integrity.

From a learning perspective, violations of academic integrity shortchange students' opportunities to gain mastery over the content. This may happen when assistive technologies assign grades without providing qualitative feedback on students' work, or when students' submissions are not their own and they miss the opportunity for the instructor to offer guidance based on their level of comprehension and competence.

An immediate challenge to academic integrity has resulted from the impact of AI on the metamorphosis of traditional classrooms. Assessment is an area under particular scrutiny, because AI-powered detection software for identifying student submissions produced by generative AI (texts, images, videos, etc.) has not been consistently accurate.

Easy access to free versions of generative AI tools such as ChatGPT has made it possible for available technologies to be put to the wrong use, such as plagiarism and the perpetuation of biases and inequity. Beyond the possibilities these tools offer for the learning process, a larger area of concern is the lack of a shared understanding of the ways such technologies can be abused. Universities are in the process of developing and sharing comprehensive policies around the use of AI-led tools by faculty, staff, and students.

The focus of the following sections is to explore challenges to students' academic integrity arising from the presence of AI-powered tools in their academic environments.

Impact on Academic Integrity in the Form of Plagiarism Due to AI Tools

Plagiarism is defined as failing to give credit to a source, or copying someone else's work and presenting it as one's own. From the perspective of building upon existing knowledge, plagiarized work adds no value when the credibility and accuracy of the source have not been verified.

Those relying on large language models (LLMs) to guide content generation face a similar dilemma. LLMs draw on the numerous online sources they have been trained upon to construct a response to the user's prompt, yet the authenticity of those sources, or of the content within them, has not been vetted. The resulting phenomenon, called hallucination, produces incorrect content or falsely invents data sources that do not exist. In the early 2000s, the term 'hallucination' was used in the field of computer vision to signify the addition of a specific detail to an image; over the following decade it was transformed into a shared shorthand for incorrect or misleading output by AI systems (Maleki, Padmanabhan & Dutta, 2024).

From a plagiarism perspective, this process of hallucination is akin to individuals failing to acknowledge either the reliability or the authenticity of their sources. As an illustration, Lane (2024) describes how Perplexity, an existing LLM-based service, released a particular "story" the day after the original article was published by Forbes magazine, using similar wording, illustrations, and phrases. Forbes was not acknowledged as a source and was not clearly attributed visually (aside from a small F icon resembling the Forbes logo). Perplexity then released the story to its subscribers through multiple platforms (mobile, web, and video) and proceeded to outrank Forbes in Google searches on the article's central theme. As a consumer, one is led (mistakenly) to believe that Perplexity is the credible source of the news story.

The most widely used application of generative AI in higher education is to produce outputs (texts, images, videos, etc.) based on prompts provided to LLMs. This is a double-edged sword: while the onus of ethical use of the tool lies with the user, the user is not responsible when the tool's outputs reflect biases in the dataset it was trained upon. The issue is exacerbated because there does not appear to be a shared understanding among users of what counts as ethical use of AI tools.

To illustrate, technologies that serve as adaptive tools and help predict text have existed since the 1980s. A device called the Predictive Adaptive Lexicon (PAL) was designed as a communication aid and keyboard emulator (Swiffin, Arnott, Pickering & Newell, 1987). The basic premise of the tool was to reduce the number of key pushes or character selections needed while composing a text. PAL completed words based on the user's vocabulary, thereby reducing the number of character inputs necessary to enter any text, which in turn saved the user time and effort.
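
The core idea behind such predictive tools can be sketched in a few lines. The example below is a simplified illustration, not PAL's actual algorithm: it completes a typed prefix with the most frequently used word from a small hypothetical user vocabulary and reports the keystrokes saved.

```python
# A hypothetical user vocabulary with usage counts; PAL adapted a comparable
# lexicon to the individual user (this is a simplification, not its method).
vocabulary = {"integrity": 12, "integrate": 7, "intelligence": 20, "internet": 9}

def complete(prefix: str) -> str | None:
    """Return the most frequently used vocabulary word starting with the prefix."""
    candidates = [word for word in vocabulary if word.startswith(prefix)]
    return max(candidates, key=vocabulary.get) if candidates else None

typed = "inte"
suggestion = complete(typed)
if suggestion:
    saved = len(suggestion) - len(typed)  # characters the user no longer has to type
    print(f"'{typed}' -> '{suggestion}' (saves {saved} keystrokes)")
# 'inte' -> 'intelligence' (saves 8 keystrokes)
```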

Eventually, such technologies gained a wider audience once they were integrated with text messaging. Whereas the technology was once used as a tool to aid learning, it subsequently began to be used to draft responses without the user having to engage in thinking and crafting those responses. This indicates that the purpose and intent with which a tool is used determine the outcomes of using it.

To be clear, the mere use of AI tools does not necessarily imply dishonesty. For instance, students may be assigned to use tools such as Grammarly or ChatGPT to produce working drafts of assignments to build upon and/or critique for accuracy. In such cases, the tool serves as a learning aid, affording opportunities to use prior knowledge to make sense of new content generated by the tool. However, students need to be clear on the purpose of using AI-powered tools: whether to review and autocorrect original work (for example, possible errors in language and spelling) or to generate content (audio, video, text, or multimedia) that they would then present as their own original work.

Impact on Academic Integrity Due to Efficacy of AI Detection Tools

A survey of over 2,000 students at two- and four-year public and private institutions in March 2023 indicated that an alarmingly high percentage of students are willing to use generative AI to assist with schoolwork, even when university-mandated policies prohibit them from doing so (Shaw et al., 2023). Faculty cite preventing students from cheating with AI tools as a primary instructional challenge and threat to academic integrity. To counter this problem, higher education institutions are turning to available plagiarism-checking technologies.

At the beginning of 2023, OpenAI released a classifier tool trained to identify AI-generated text. The company cautioned that the tool should be used in conjunction with other detection strategies and not serve as a standalone solution for determining whether a piece of text was AI-generated or written by a human. The company proceeded to fine-tune the classifier on data sets comparing human-written and AI-written texts on the same topic. However, in July 2023 the classifier was withdrawn from public use due to its low accuracy: it correctly identified only 26% of AI-generated text and incorrectly labeled human-written text as AI-generated 9% of the time (false positives). In its resources for educators, OpenAI continues to invite feedback from educators using ChatGPT in the classroom to strengthen understanding of the tool's capabilities and limitations.

In comparison, for more than a quarter of a century, the company TurnItIn has carved out a niche as the industry leader in supporting online higher education by detecting similarities (not plagiarism) between submitted student work and content in its database, across the Internet, and in academic and other student papers. The purpose of its 'Similarity Report' is to generate a percentage indicating the extent to which submitted student work matches existing content. In April 2023, TurnItIn expanded its core offering by releasing a tool intended to detect text generated using AI. The company claims that the new tool detects text generated by ChatGPT with 97% accuracy. The algorithm is based on a statistical measure that observes patterns of variety in the text; a higher degree of idiosyncrasy indicates a higher likelihood that the text is human-generated.
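
The "variety" signal mentioned above can be illustrated with a deliberately crude heuristic. The sketch below is not TurnItIn's algorithm, which is proprietary; it merely shows one statistical measure of variation, the spread of sentence lengths, that is often cited as a rough proxy for how idiosyncratic human writing tends to be compared with uniformly paced machine text:

```python
import re
from statistics import mean, pstdev

def sentence_length_variation(text: str) -> float:
    """Coefficient of variation of sentence lengths (in words).

    A crude, illustrative 'variety' score: higher values mean sentence lengths
    vary more, which some detectors treat as a weak signal of human authorship.
    This is not any commercial detector's actual method.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2 or mean(lengths) == 0:
        return 0.0
    return pstdev(lengths) / mean(lengths)

uniform = "The model writes a sentence. The model writes a sentence. The model writes a sentence."
varied = "I stayed up all night. Why? Because the final project on graph algorithms was due, and I had badly underestimated it."

print(f"uniform text score: {sentence_length_variation(uniform):.2f}")  # 0.00
print(f"varied text score:  {sentence_length_variation(varied):.2f}")   # noticeably higher
```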

However, AIHumanizer.ai claims that it can bypass TurnItIn's AI detection tool, along with a host of other players in the AI detection market (such as GPTZero, Originality.ai, and ZeroGPT, to name a few). It promises to rewrite AI-generated texts by "humanizing" the content so that it appears unique and authentic. The company claims that its rewritten content has a low risk of being flagged as plagiarized by detection tools such as TurnItIn, Grammarly, and Scribbr.

Beyond AIHumanizer, the efficacy of TurnItIn's AI detection tool has not received a favorable response from its users in higher education. As a result, several prestigious universities, such as Vanderbilt, have questioned the trustworthiness of the tool and discontinued its use (Coley, 2023).

Students' academic integrity is constantly challenged by easy access to generative AI. Technological advancements in tools such as AIHumanizer.ai and GPTMinus1, which mimic students' writing styles, position students to make a conscious choice: put in the hard labor themselves, or adopt practices that accomplish assignment requirements quickly but may short-circuit their learning. In addition, an abundance of YouTube videos and online tutorials focused on techniques for defeating AI detectors ensures that knowledge of resources for plagiarizing spreads rapidly to a global audience (Nelson, 2023).

Having AI companies invest resources in developing software that distinguishes content created by their own LLMs from human writing appears to be a logical strategy for addressing the problem. However, this is not likely to happen, as it would be counterintuitive and would challenge their corporate agenda of training natural language processors to mimic and simulate writing that is as close as possible to human responses (Alimardani & Jane, 2023).

Impact on Academic Integrity Due to Human Biases Generated by AI Tools

Inadvertently misclassifying student-written work as AI-generated is a cause for concern for students, especially marginalized and under-represented groups, people of color, and non-native English speakers. While analyzing content that was mistakenly flagged as AI-generated by AI detectors, researchers observed that a large proportion of such writing was by non-native English speakers (Liang, Yuksekgonul, Mao, Wu & Zou, 2023). In the study by Liang et al. (2023), the researchers used AI detection tools to evaluate essays from a Chinese dataset written as practice for the Test of English as a Foreign Language (TOEFL) exam by Chinese students. As a point of comparison, an equal number of essays written by US eighth-grade students, drawn from the Hewlett Foundation ASAP dataset, were evaluated by the same tools. The researchers noted that the detectors incorrectly flagged a large share (61.3%) of the TOEFL essays as AI-generated, while accurately identifying all of the US student essays as human-written. The cause for concern stems from the fact that non-native English writers, when communicating in English, demonstrate lower grammatical variability and a narrower choice of vocabulary. This lowers the perplexity of their text (i.e., how hard it is for a generative AI model to predict the next word in a sentence; more predictable text has lower perplexity) in comparison to native English speakers. This calls for caution against using low perplexity as a criterion for labeling text as AI-generated, since doing so could unintentionally create biases against non-native English speakers. Overall, it draws attention to potential inequality in embracing diversity within the academic community.
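
Perplexity itself has a precise definition: it is the exponential of the average negative log-probability a language model assigns to each token of a text. The sketch below computes it from a list of hypothetical per-token probabilities (illustrative numbers, not output from any particular detector) to show why more predictable writing, i.e., higher per-token probabilities, yields a lower score:

```python
import math

def perplexity(token_probs: list[float]) -> float:
    """Perplexity = exp(-(1/N) * sum(log p_i)) over a model's per-token probabilities."""
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

# Hypothetical probabilities a language model might assign to each successive token.
predictable_text = [0.6, 0.5, 0.7, 0.55, 0.65]   # formulaic, easy-to-guess wording
surprising_text  = [0.2, 0.05, 0.3, 0.1, 0.15]   # idiosyncratic, harder-to-guess wording

print(f"predictable text perplexity: {perplexity(predictable_text):.1f}")  # lower (~1.7)
print(f"surprising text perplexity:  {perplexity(surprising_text):.1f}")   # higher (~7.4)
```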

Another barrier this bias illustrates is that AI outputs are not comprehensive enough to support students' diverse cultural experiences and the native languages spoken across the globe. This is a challenge when the intent is for students to find their own voice. Laura Dumin, professor of English and director of the technical writing program at the University of Central Oklahoma, fears that students who speak dialects of English may feel inclined to sacrifice diversity in their writing in order to mimic the uniform text generated by AI (D'Agostino, 2023).

The issue of marginalization of non-native English speakers by AI detectors extends beyond texts produced by generative AI. Stereotypes propagated by AI-generated images and videos expose and promote biases, sometimes unconscious, in people's minds. For instance, AI-generated images of women of color are often not true representations of their physical attributes, whereas AI-generated depictions of white women are closer in likeness. To counter such harmful outputs, some image generator tools do not allow users to enter specific keyword prompts (those with the propensity to be racial in nature) to guide image generation. However, this runs the risk of downplaying the relative importance, experiences, and perspectives of minorities in favor of the dominance of the majority (Anonymous, 2023). This is potentially harmful in higher education, as it calls into question the representation of diverse students and faculty within the university.

In the context of a communications classroom, researchers Hu & Kurylo (2024) analyzed outputs from Dall-E, Midjourney, and Pika Art to examine the conventionalized ways these platforms depict Asians. In the study, they highlighted the similarities between the processes adopted by AI and by human information processing (learning, perceiving, reproducing) in promoting clichéd perceptions. Their analysis of available AI-generated images and videos shed light on the risk that AI reproduces and spreads harmful stereotypes about historically marginalized groups, and the subsequent biases against them. Communications is a very human-centered field that stands to benefit heavily from the vast collection of AI-generated imagery. As a result, it is critical that communication students and faculty be made aware of strategies for identifying biases in AI-generated images, to prevent the propagation of stereotypical notions about groups of people who are not part of the mainstream culture.

Accusations of cheating or of using unfair means to complete course requirements, biases in media, and the propagation of stereotypes all have the potential to severely impact students' academic and professional lives.

Conclusion

From the case study shared in the introduction, it is evident that Jessica succumbed to the pressure of compromising her integrity to maintain her GPA. It reflects the rampant focus within academia on outcomes rather than on the process of learning. AI provides an abundance of tools geared toward strengthening one's understanding of content; however, misuse of those tools can severely impact academic integrity. The incident underscores the possible challenges to ethics and academic integrity posed by the easy accessibility of AI-powered tools.


Higher education institutions are poised to play a pivotal role in the advancement and integration of AI within their ecosystems. A key future role would be to join forces with the companies that create AI-powered tools to support design that aligns with ethical standards (Diaz, 2024). Such collaboration would help develop software compliant with universities' policies on the effective use of student data, thereby reducing the possibility of generating biased or inaccurate content.


AI brings both opportunities and challenges in the context of academic integrity. Importantly, it raises ethical questions, especially when it comes to privacy and surveillance. AI tools can support, but not replace, the work of teachers and administrators in promoting academic integrity. Ensuring honesty in academic work and minimizing cheating also rely heavily on creating a culture of integrity and setting clear expectations for ethical behavior. Higher education institutions need to direct attention toward advancing AI literacy among all their stakeholders (students, faculty, administration) regarding the potential risks and ethical use of AI-powered tools. This implies a focus on human-centric AI integration: institutions must foster an environment where stakeholders feel empowered to use AI tools to advance their knowledge, engage in critical thinking, and collaborate with one another.


References


Alimardani, A. & Jane, E.A. (2023, February). We pitted ChatGPT against tools for detecting AI-written text, and the results are troubling. The Conversation. https://theconversation.com/we-pitted-chatgpt-against-tools-for-detecting-ai-written-text-and-the-results-are-troubling-199774 


Anderson, J. & Applebom, P. (2011, December). Exam Cheating on Long Island Hardly a Secret. The New York Times https://www.nytimes.com/2011/12/02/education/on-long-island-sat-cheating-was-hardly-a-secret.html 


Anonymous. (2020, July).  Is Student Cheating On the Rise? How You Can Discourage It In Your Classroom? The Wiley Network. https://www.wiley.com/en-us/network/education/instructors/teaching-strategies/is-student-cheating-on-the-rise-how-you-can-discourage-it-in-your-classroom 


Anonymous. (2023, September). 8 in 10 colleges will use AI in admissions by 2024. Intelligent.com. https://www.intelligent.com/8-in-10-colleges-will-use-ai-in-admissions-by-2024/ 


Anonymous. (2023). Cultural Hegemony: How Generative AI Systems Reinforce Existing Power Structures. Sustain (3). https://sustain.algorithmwatch.org/en/cultural-hegemony-how-generative-ai-systems-reinforce-existing-power-structures/ 


Anonymous. (2024). The Role of AI in Transforming Higher Education. Hyland https://www.hyland.com/en/resources/articles/ai-higher-education 


Balaban, D. (2024, March). Privacy & Security Issues Of Using AI for Academic Purposes. Forbes https://www.forbes.com/sites/davidbalaban/2024/03/29/privacy-and-security-issues-of-using-ai-for-academic-purposes/


Chocarro, R., Cortiñas, M., & Marcos-Matás, G. (2021). Teachers’ attitudes towards chatbots in education: a technology acceptance model approach considering the effect of social language, bot proactiveness, and users’ characteristics. Educational Studies, 49(2), 295–313. https://doi.org/10.1080/03055698.2020.1850426 


Coley, M.  (2023, August). Guidance On AI Detection and Why We’re Disabling TurnitIn’s AI Detector  [Editorial]. Brightspace. https://www.vanderbilt.edu/brightspace/2023/08/16/guidance-on-ai-detection-and-why-were-disabling-turnitins-ai-detector/ 


Dawson, P., Henderson, M., Ryan, T., Mahoney, P., Boud, D., Phillips, M., & Molloy, E.(2018). Technology and feedback design. In M. J. Spector, B. B. Lockee, & M. D. Childress (Eds.), Learning, Design, and Technology: An International Compendium of Theory, Research, Practice, and Policy. Cham Switzerland: Springer. https://doi.org/10.1007/978-3-319-17727-4_124-


Diaz, V. (2024, February). Exploring the Opportunities and Challenges with Generative AI. Educause Review. https://er.educause.edu/articles/2024/2/exploring-the-opportunities-and-challenges-with-generative-ai 


Dumelle, K. (2020, September). Grading exams: How Gradescope revealed deeper insights into our teaching. Faculty Focus. www.facultyfocus.com/articles/educational-assessment/grading-exams-how-gradescope-revealed-deeper-insights-into-our-teaching/


Evaristo, E. (2023, December). Balancing the potentials and pitfalls of AI in college admissions. USC Rossier School of Education. https://rossier.usc.edu/news-insights/news/balancing-potentials-and-pitfalls-ai-college-admissions 


Etambakonga, C.L. (2021). The Rise of Virtual Reality in Online Courses: Ethical Issues and Policy Recommendations. Factoring Ethics in Technology, Policy Making and Regulation [Working Title].  


Fazlollahi, A.M., Bakhaidar, M., Alsayegh, A., Yilmaz, R., Winkler-Schwartz, A., Mirchi, N., Langleben, I., Ledwos, N., Sabbagh, A.J., Bajunaid, K., Harley, J.M., & Del Maestro, R.F. (2022). Effect of artificial intelligence tutoring vs expert instruction on learning simulated surgical skills among medical students: A randomized clinical trial. JAMA Network Open. https://doi.org/10.1001/jamanetworkopen.2021.49008



Fu, R., Wang, D., Wang, S., Hu, G., & Liu, T. (2018). Elegant sentence recognition for automated essay scoring. Journal of Chinese Information Processing, 32(6), 10.


Guerrero-Dib, J.G., Portales, L. & Heredia-Escorza, Y.  (2020) Impact of academic integrity on workplace ethical behavior. International Journal for Educational Integrity, 16 (2) https://doi.org/10.1007/s40979-020-0051-3


Hannan, E., & Liu, S.  (2021) AI: New source of competitiveness in higher education. Competitiveness Review, 33 (265-279) 10.1108/CR-03-2021-0045


Hsu, S., Li, T.W., Zhang, Z., Fowler, M., Zilles, C., & Karahalios, K. (2021). Attitudes surrounding an imperfect AI autograder. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21) (pp. 1-15). Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3411764.3445424


Hu, Y. & Kurylo, A. D. (2024). Screaming Out Loud in the Communication Classroom: Asian Stereotypes and the Fallibility of Image Generating Artificial Intelligence (AI). In S. Elmoudden & J. Wrench (Eds.), The Role of Generative AI in the Communication Classroom (pp. 262-283). IGI Global. https://doi.org/10.4018/979-8-3693-0831-8.ch012


International Center for Academic Integrity [ICAI]. (2021). The Fundamental Values of Academic Integrity (3rd ed.). www.academicintegrity.org/the-fundamental-values-of-academic-integrity


Kelly, S.M. (2024, April). Teachers are using AI to grade essays. But some experts are raising ethical concerns. [Post]. CNN. https://amp.cnn.com/cnn/2024/04/06/tech/teachers-grading-ai 


Kumar, R. (2023). Faculty members' use of artificial intelligence to grade student papers: A case of implications. International Journal for Educational Integrity, 19, 9. https://doi.org/10.1007/s40979-023-00130-7


Laird, E. & Dwyer, M. (2023, September). Report- Off task: Threats to Student Privacy and Equity in the Age of AI. Center for Democracy & Technology https://cdt.org/insights/report-off-task-edtech-threats-to-student-privacy-and-equity-in-the-age-of-ai/  


Lane, R. (2024, June). Why Perplexity’s Cynical Theft Represents Everything That Could Go Wrong With AI. Forbes. https://www.forbes.com/sites/randalllane/2024/06/11/why-perplexitys-cynical-theft-represents-everything-that-could-go-wrong-with-ai/


Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7).


Luan, H., Géczy, P., Lai, H., Gobert, J.D., Yang, S.J., Ogata, H., Baltes, J., Guerra, R.D., Li, P., & Tsai, C. (2020). Challenges and Future Directions of Big Data and Artificial Intelligence in Education. Frontiers in Psychology, 11.


Reamer, F. G. (2013). Distance and online social work education: Novel ethical challenges. Journal of Teaching in Social Work, 33(4-5), 369-384.


Madrigal,  (2023, September). Report- Off task: Threats to Student Privacy and Equity in the Age of AI. Center for Democracy & Technology https://cdt.org/insights/report-off-task-edtech-threats-to-student-privacy-and-equity-in-the-age-of-ai/  


McKenzie, L. (2018, September). Pushing the boundaries of learning with AI. Inside Higher Ed. www.insidehighered.com/digital-learning/article/2018/09/26/academics-push-expand-use-ai-higher-ed-teaching-and-learning


Maleki, N., Padmanabhan, B., & Dutta, K. (2024). AI Hallucinations: A Misnomer Worth Clarifying. IEEE Conference on Artificial Intelligence (CAI), Singapore, pp. 133-138. https://doi.org/10.48550/arxiv.2401.06796


Nelson, J. (2023, April). The Reliability of AI Detection Tools and Their Impact on Academic Integrity. [Post]. LinkedIn. https://www.linkedin.com/pulse/reliability-ai-detection-tools-impact-academic-integrity-james-nelson/ 

Nye, B.D. (2015). Intelligent Tutoring Systems by and for the Developing World: A Review of Trends and Approaches for Educational Technology in a Global Context. International Journal of Artificial Intelligence in Education 25, 177–203. https://doi.org/10.1007/s40593-014-0028-6 


Sampath, S. & Syed, M. (2023, September). As Students We Face Invasive AI Powered School Surveillance. Now We're Calling Lawmakers To Regulate It. ACLU-NJ. https://www.aclu-nj.org/en/news/students-we-face-invasive-ai-powered-school-surveillance-now-were-calling-lawmakers-regulate-it 


Sen, S. & Bennett, D.K. (2022, September). Tracked: How Colleges Use AI to Monitor Student Protests. The Dallas Morning News. https://interactives.dallasnews.com./2022/social-sentinel/?fbclid=IwAR1HE3WCzkEpzpPo4BeT-ZmpJDy7m3yqHb_JBPg7xMTi2Bg5MFzlusz1TIw 


Silverstrone, S. & Rubman, J. (2024, May). AI-Assisted Grading: A Magic Wand or a Pandora's Box? MIT Sloan Teaching & Learning Technologies. https://mitsloanedtech.mit.edu/2024/05/09/ai-assisted-grading-a-magic-wand-or-a-pandoras-box/ 

Smutny, P. & Schreiberova, P. (2020) Chatbots for learning: A review of educational chatbots for the facebook messenger. Computers & Education, 151 10.1016/j.compedu.2020.103862


Shaw, C., Bhardwaj, R., Condon, K., NeJame, L., Martin, S., Rich, J., Janson, N., Bryant, G., and Fox, K. (2023, September). Listening to Learners 2023: Increasing Belonging In and Out of the Classroom. Tyton Partners. https://tytonpartners.com/listening-to-learners-2023-increasing-belonging-in-and-out-of-the-classroom/


Swartz, M., & McElroy, K. (2023). The "Academicon": AI and Surveillance in Higher Education. Surveillance & Society, 21(3), 276-281. https://login.proxy.libraries.rutgers.edu/login?qurl=https%3A%2F%2Fwww.proquest.com%2Fscholarly-journals%2Facademicon-ai-surveillance-higher-education%2Fdocview%2F2878446291%2Fse-2%3Faccountid%3D13626


Swiffin, A., Arnott, J., Pickering, J. A., & Newell, A. (1987). Adaptive and predictive techniques in a communication prosthesis. Augmentative and Alternative Communication, 3(4), 181–191. https://doi.org/10.1080/07434618712331274499 


Yang, S. & Evans, C. (2019). Opportunities and challenges in using AI chatbots in higher education. Proceedings of the 2019 3rd international conference on education and E-learning (ICEEL 2019), Association for Computing Machinery, New York, NY, USA (2019), pp. 79-83. 10.1145/3371647.3371659


The Crimson Editorial Board. (2022, October). Social Sentinel and the Creeping Surveillance University [Editorial]. The Harvard Crimson. https://www.thecrimson.com/article/2022/10/18/editorial-surveillance-social-sentinel/ 


Yang, X. (2022). Trace on calculation method of evaluation reform of education. Journal of East China Normal University (Educational Sciences), 40(1), 19.


Yoder-Himes, D. R., Asif, A., Kinney, K., Brandt, T. J., Cecil, R. E., Himes, P. R.,Cashon, C., Hopp, R. & Ross, E. (2022, September). Racial, skin tone, and sex disparities in automated proctoring software. In Frontiers in Education (Vol. 7, p. 881449). Frontiers.


Zuboff, S. (2020). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Public Affairs.