Will Robots Replace Us?
Understanding the Instructor Perspective on Generative Artificial Intelligence
By Idaho OPAL
Learning Outcomes
After reading this chapter, students will be able to:
Understand the educator perspective on generative AI use in college courses.
Appreciate how educators and instructional designers are using generative AI.
Learn how to create acknowledgement statements and track version histories for AI use in academic assignments.
Develop strategies for communicating with instructors about AI use.
Introduction
Roberto has always been a straight-A student. He is planning to apply to medical school, so his learning is personally important to him. However, writing has never been his favorite subject. Roberto has heard of generative AI tools like ChatGPT, but he does not feel that using them is ethical, and after a few half-hearted attempts at prompting, he is not even sure that they are useful. Besides, he has developed a unique essay-drafting process over the years that really works for him. To overcome writer’s block, he often starts his papers by dictating his thoughts about a topic into a talk-to-text program. He then adds sources, edits and organizes, and finally runs the paper through Grammarly to check his grammar and syntax before submitting it.
For an ethics course, Roberto is assigned to write a paper applying two ethical theories to a problem and explaining his personal stance on the issue. He chooses to apply utilitarianism and deontology to the problem of whether we should eat meat. Roberto follows his normal writing process. He reads the assigned materials, chooses quotes to support his points, and decides on a stance. Then he dictates his ideas into the Notes app on his phone to create a very rough draft of the assignment. After adding the quotes, editing and revising his text, organizing his information, formatting his paper in APA style, and running it through Grammarly for a final check, Roberto feels confident that he has met the assignment requirements. He submits the paper.
The next morning, Roberto receives an email from his instructor accusing him of plagiarism. The email states that the instructor has given him a 0 for unauthorized use of generative AI. Roberto is confused. He has not plagiarized his paper. The ideas and labor are his own. He has used this same process in high school and college papers for years, and he uses Grammarly because previous instructors have recommended it. He checks the instructor’s syllabus for information about academic integrity and the use of generative AI, and he does not find any information there. How should Roberto respond to this accusation? Why does his instructor think he has used ChatGPT when he hasn’t?
If you’re a college student, you are probably at least somewhat familiar with generative AI tools. OpenAI’s ChatGPT, first introduced in November 2022, and rivals such as Google Gemini, Microsoft Copilot, and Anthropic’s Claude quickly changed the landscape for educators and students alike. Students must navigate a wide range of policies and stances from their instructors, from those who embrace AI tools to those who prohibit any use of AI in the college classroom. Institutions often lack clear academic integrity policies that address the use of AI, and instructors do not always have syllabus policies explaining the acceptable uses of AI in their classes.
As educators, we believe that most students don’t want to cheat. But many are confused about these tools and how they can be both useful and harmful in education. In this chapter, our goal as educators and instructional designers is to help you understand how your teachers are also grappling with this new technology and to give you practical strategies for addressing challenges and disagreements around the use of generative artificial intelligence when you encounter them.
Why Your Teachers Are Worried: AI as a Disruptive Technology
With all the benefits provided by artificial intelligence to students, educators, administrators, and workers, you might ask “Why are my instructors worried about students using generative AI tools for educational purposes? After all, they could (and should) be using it in their own work!”
The answer lies in a concept called “best practices.” You may have heard the phrase in theoretical articles or introductory materials for your field. It refers to the commonly accepted procedures, theories, and paradigms of a discipline, the ideals against which projects, actions, and workstyles are measured (Alyahyan & Düştegör, 2020). What many people forget, however, is that best practices are ideals, and human practitioners often fall short of them. Additionally, best practices are usually written for present circumstances, or they codify the way things have been done for a long time. In other words, they reflect the average situation, abilities, and priorities of people in a particular field. Any change, such as a revolutionary technology like generative AI, renders some of these best practices and ideal circumstances inadequate or even impossible.
However, some best practices and foundational theories can be adapted or applied to change fairly easily. These models and theories describe the necessary elements of a learning situation, not the exact manner in which those elements must be present.
Not all of your educators are worried about generative AI, but some are very concerned because they view AI as undermining best practices that educators and students have followed for years. In other words, they see generative AI as a “disruptive” technology: one that significantly alters a long-held perspective or way of working. The disruption can turn out larger or smaller than expected, and if it is sufficiently large, the new technology can produce a complete shift in the economy of a particular field. Arguments like “AI will replace teachers because it is more expensive to hire a human teacher than to subscribe to an AI tool,” or “Students will only use AI to cheat on assignments, so we must completely change all of our assignments or our courses will be worthless,” reflect this fear of a seismic shift.
Educators hold four main perspectives regarding generative AI. These perspectives resemble initial reactions to almost every new technology, and virtually every field harbors analogous hopes and fears. They can be summarized as follows:
Fear that student use of generative AI tools such as ChatGPT primarily creates new forms of preexisting unethical practices (for example, plagiarism).
Fear that generative AI tools undermine systems and norms of online learning.
Confidence that students want to use generative AI tools in effective and constructive ways.
Confidence that educator use of generative AI tools results in innovative products and efficient workflows to enhance instructional design, implementation, and assessment.
In this chapter, our aim is to help students rather than to reassure faculty, so we will concentrate our comments on the student perspectives reflected in this list. The chapter on the faculty perspective covers the second and fourth ideas.
Instructor Fears about Unethical Student Use of Generative AI Tools
Let’s make one thing clear: There are numerous examples of students who have used generative AI ethically in classroom settings and for completing assessments. Many faculty at our institutions have worked with students to create an acceptable use policy for generative AI in their courses. Students and instructors have held themselves accountable to these agreements, learning together how to incorporate generative AI tools to augment human intelligence.
For example, where time and resource constraints might previously have limited students to a simple web page to demonstrate a skill, they can now create a multi-page website using templates and other AI-generated resources. Generating page templates, images, landing-page copy, and other digital artifacts requires a knowledge of prompt engineering: the practice of crafting the instructions we give large language models like ChatGPT to produce content. Prompt engineering requires critical thinking, problem formulation, and a knowledge of which generative AI tools are most efficient and appropriate for the content being created, as the brief illustration below suggests.
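To make this concrete, here is our own hypothetical illustration (the prompts are invented for this chapter, not drawn from a specific course). A minimal-effort prompt such as “Make me a website about nutrition” will usually yield generic, unusable output. A refined prompt such as “Act as a web designer. Draft a landing page outline for a nutrition-coaching site aimed at college athletes. Include a hero section, three service descriptions, and a call to action written in a friendly, professional tone” gives the model the audience, structure, and tone it needs, and it reflects real problem formulation on the student’s part.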
However, in part because not all faculty understand how generative AI tools work, many educators are issuing chilling warnings to students prohibiting the use of generative AI tools of any kind. While these warnings limit students’ access to potential study and creative aids, instructors do have the right to set such policies for their classrooms.
Unfortunately, some educators like Roberto’s teacher take their zeal for “traditional” education a step too far and paradoxically turn to AI tools such as GPTZero (or even ChatGPT!) to “detect” whether or not a student has used generative AI for an assignment. Most detectors are designed to discern the origin of text, though a growing number of AI-image detectors exist as well.
If using AI-based detectors to “detect” AI-generated work seems hypocritical, you’re right. Educators who demand that their students refrain from using generative artificial intelligence should also refrain from using it to assess their students’ work. Furthermore, much of the fear regarding generative AI use by students is based on the assumption that students will not use it ethically or effectively. If educators believe in the reports created by these “detectors,” they are not using these “tools” ethically or effectively. They are putting faith in the “determinations” of artificial intelligence.
Several news articles have highlighted stories of students who were falsely accused of using generative AI in their assignments, with the charges later shown to be spurious. In the early days of generative AI, educators sometimes accused entire classes of cheating based solely on the claims of AI tools, as in the infamous case of the Texas A&M professor who asked ChatGPT whether his students had cheated. False accusations of intellectual dishonesty involving AI are disproportionately leveled against English language learners, students whose prose is highly technical, and students whose writing voice either falls outside the norm or adheres too closely to it. More than once, each of us has run our own writing through these detectors and received a 100% “AI-generated” score. In other words, we are sympathetic to Roberto and students like him.
In our view, the solution is not to rely on faulty detectors to catch generative AI use. Instead, both students and faculty who use generative AI should be transparent about that use up front, preferably through citations and acknowledgements. We also recommend that you track the version history of every document you write for a class, especially if your instructor asks you to.
What’s In Your Syllabus?
To better appreciate how your instructors’ attitudes may affect your education, let’s take a closer look at how the faculty concerns we previously identified may have shown up in your course syllabus. As we mentioned earlier, there’s a wide range of approaches to how students may use generative AI in college classrooms. Here are three examples of syllabus policies from Professor Lance Eaton’s crowdsourced syllabus policy document that represent the most common approaches: All, some, or none.
Any use of generative AI tools is allowed:
- In this case from the Wharton School of the University of Pennsylvania, Professor Ethan Mollick allows all uses of generative AI and even requires it for some assignments:
I expect you to use AI (ChatGPT and image generation tools, at a minimum), in this class. In fact, some assignments will require it. Learning to use AI is an emerging skill, and I provide tutorials in Canvas about how to use them. I am happy to meet and help with these tools during office hours or after class.
If you provide minimum effort prompts, you will get low quality results. You will need to refine your prompts in order to get good outcomes. This will take work.
Don’t trust anything it says. If it gives you a number or fact, assume it is wrong unless you either know the answer or can check in with another source. You will be responsible for any errors or omissions provided by the tool. It works best for topics you understand.
AI is a tool, but one that you need to acknowledge using. Please include a paragraph at the end of any assignment that uses AI explaining what you used the AI for and what prompts you used to get the results. Failure to do so is in violation of the academic honesty policies.
Be thoughtful about when this tool is useful. Don’t use it if it isn’t appropriate for the case or circumstance. –Ethan Mollick, the Wharton School, University of Pennsylvania, business courses
When you read through this syllabus policy, does it make you feel excited or concerned? We have found that some students are actually hesitant to use generative AI, either because they aren’t sure how to use it or because they are afraid they will be accused of cheating. Remember that most college classes that allow any use of AI also ask students to cite and document their use. Ethan Mollick has been at the forefront of considering how students can use these tools to improve their learning and assignments. If your syllabus has a policy like this one, you should definitely follow up with your professor any time you have questions or concerns about how to use AI.
Some use of generative AI is allowed under specific circumstances:
- In this example from Professor Liza Long at the College of Western Idaho, you can see how some AI use is allowed and even encouraged, but other uses are not allowed:
I encourage students to use generative AI tools for the following types of tasks:
Outlining content or generating ideas.
Providing background knowledge (with the understanding that ChatGPT and other generative AI programs are sometimes wrong—Wikipedia is a better resource for background information right now)
Checking essay drafts for organization, grammar, and syntax.
We will use generative AI occasionally for class activities.
If you choose to use generative AI tools for your essays, you MUST do the following:
Cite the AI tool (see this resource for more information on how to do this).
Write a brief acknowledgment statement at the end of your work explaining how and why you used an AI tool. Include the prompts you used and links (when available).
I reserve the right based on my assessment of your assignment to require you to revise and resubmit all or parts of the assignment if I conclude that you have not used AI tools appropriately.
If I suspect that you have used generative AI tools, and you have not included the required citation and acknowledgement statement, then you will need to meet with me either in person or through Zoom to talk about the assignment. This conversation will include knowledge checks for course content. –Liza Long, College of Western Idaho, English 102
This “middle of the road” approach is common in courses that emphasize writing. It allows students to use generative AI in certain circumstances but requires that the final product be the student’s own work. Like the first example, it requires students to cite and acknowledge AI use. If you have questions about what is and isn’t allowed, work with your instructor.
Suggestions for Acknowledging Use of AI
With both the first and the second examples, it’s a good idea to get in the habit of citing and acknowledging your use of generative AI tools. Before you use any tool, ask yourself this question: “Why and how am I using generative AI?” Reflecting on how and why you are using generative AI can help you to ensure that you are not cheating yourself of important learning opportunities when using these tools.
Monash University provides helpful recommendations for how to acknowledge when and how you’ve used generated material as part of an assignment or project. If you decide to use generative artificial intelligence such as ChatGPT for an assignment, it’s a best practice to include a statement that does the following:
Provides a written acknowledgment of the use of generative artificial intelligence.
Specifies which technology was used.
Includes explicit descriptions of how the information was generated.
Identifies the prompts used.
Explains how the output was used in your work.
The format Monash University provides is also helpful. Students may include this information either in a cover letter or in an appendix to the submitted work.
I acknowledge the use of [insert AI system(s) and link] to [specific use of generative artificial intelligence]. The prompts used include [list of prompts]. The output from these prompts was used to [explain use].
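A completed statement might look like the following. This is our own hypothetical example; replace the tool, link, prompts, and explanation with your actual use:
I acknowledge the use of ChatGPT (https://chat.openai.com/) to brainstorm and narrow my research question. The prompts used include “Suggest five arguable research questions about the ethics of eating meat.” The output from these prompts was used to select and refine the question my essay answers.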
Academic style guides such as APA already include guidelines for formatting appendices after essays and reports. Review the Purdue OWL’s entry on Footnotes and Appendices for help.
For more information about how to cite generative AI tools, we recommend going to the style guide’s official website (e.g., APA, MLA, Chicago, etc.). Since this field is rapidly evolving, checking the website will provide you with the most current guidelines.
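For instance, at the time of writing, the APA Style blog recommends treating the tool’s developer as the author, as in the sample reference below (confirm the current format and version details on the APA website before you submit, since guidance changes):
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat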
No Use of Generative AI Is Allowed
- A final common approach is to prohibit the use of generative AI tools entirely. Here is an example from Professor Tara Perrin, an Instructional Design teacher at Middle Tennessee State University:
Use of an AI Generator such as ChatGPT, MidJourney, DALL-E, etc. is explicitly prohibited unless otherwise noted by the instructor. The information derived from these tools is based on previously published materials. Therefore, using these tools without proper citation constitutes plagiarism. Additionally, be aware that the information derived from these tools is often inaccurate or incomplete. It’s imperative that all work submitted should be your own. Any assignment that is found to have been plagiarized or to have used unauthorized AI tools may receive a zero and/or be reported for academic misconduct. –Tara Perrin, Middle Tennessee State University, Instructional Design
If Roberto’s instructor had included a syllabus policy, this final example probably comes closest to that instructor’s attitude toward generative artificial intelligence. Students may wonder exactly how instructors will enforce a policy like this one, and they should. As we noted previously, generative AI detectors are inaccurate and notoriously, unfairly flag English language learners. But this syllabus policy raises an important ethical point for instructors who oppose any use of AI tools in the classroom: these tools were built on human labor without proper attribution, and the major AI companies are currently defending against copyright-infringement lawsuits.
There are plenty of ethical concerns associated with generative artificial intelligence, and we have found that students who are educated about these concerns sometimes prefer not to use or interact with generative AI tools. Maybe you are one of those students. We will explore this and other challenging scenarios later in the chapter, but first, let’s consider the opposite approach. Why are some instructors like Ethan Mollick above embracing generative artificial intelligence?
The “Postplagiarism” World
While some faculty are still responding (or not responding) to generative AI tools by hoping they'll go away, others, ourselves included, believe that with the advent of generative artificial intelligence, we are now living in what some may call a "postplagiarism" world. We want to share this perspective with you so you'll understand how some instructors are increasingly integrating AI into their classrooms.
An increasing number of professionals support the idea that using copyrighted materials to train generative AI tools falls under the Fair Use doctrine (see, for example, S. E. Eaton, 2023). Postplagiarism is a movement promoting the idea that, in our society, using copyrighted works to create new things is not unethical. This concept offers a new interpretation of the saying that “imitation is the sincerest form of flattery.”
Postplagiarism, combined with the idea of "non-consumptive use" promoted by those who argue the Fair Use doctrine defense, suggests that even if a user reproduces the ideas of another work more or less verbatim, the dissemination of an idea takes precedence over the original authorship of specific words or characters. In this view, what truly matters is productivity and the spread of ideas.
You may have encountered open education resources (OER) in your courses. Many practitioners in the Open Education movement align with postplagiarism ideology. "Open EdTech" or "Neo-EdTech" combine open pedagogy (a teaching approach that incorporates OER) with the principles of experiential learning.
The accessibility of AI tools and their products, even with basic technological knowledge, opens up new possibilities for students. In this paradigm, students can potentially create their own AI tools—essentially developing their own educational technology. Ideally, students will learn course material and then input this knowledge (along with relevant sources) into their tools. These student-created tools and products can then be shared with peers, embodying the spirit of open access and open pedagogy.
This emerging landscape presents exciting opportunities for collaborative learning and knowledge creation. As AI becomes more integrated into educational settings, students may find themselves not just consuming information, but actively participating in the creation and dissemination of educational content. While this brave new world of education offers promising possibilities, it also raises important questions about the nature of learning, authorship, and academic integrity that we must continue to explore and address.
As you navigate your academic journey in this evolving context, it's crucial to engage critically with these new tools and ideas, always maintaining open communication with your instructors about your methods and thought processes. The goal is not to replace traditional learning with AI, but to harness these new technologies to enhance and deepen your educational experience.
Pedagogical Theory and Best Practices in Instructional Design
Let’s return to the concept of best practices we discussed at the beginning of this chapter. Two main educational and instructional design theories can readily be applied to education with generative AI tools: Gagné’s Nine Events of Instruction and Bloom’s Taxonomy.
Robert Gagné, an educational technologist who created training videos for both military groups and formal education institutions, was one of the foremost twentieth-century researchers to investigate what people need from their teachers and environments in order to learn. Through a rigorous process and years of experience, he developed a list of nine events that all educators need to take their students through to provide a complete learning experience:
Gaining attention
Informing about the course objectives
Stimulating recall of prior learning
Presenting stimuli for future learning
Providing learning guidance
Eliciting appropriate performance
Providing feedback
Assessing performance
Enhancing retention and transfer (Gagné, 1985, pp. 243-256)
All of these events and actions can be performed just as well with generative artificial intelligence as they can without these tools. While AI is “disruptive” in that it provides new ways for learning, it is not “disruptive” in that it completely negates all of our previous knowledge about education and how people learn. In fact, generative AI tools may enable more effective learning at the highest level of Bloom’s Taxonomy by fostering open pedagogy and the creation of new materials, which demonstrates the most in-depth knowledge of a topic or skill.
How Instructors Are Using AI
Now that we’ve covered some theoretical and pedagogical approaches to generative AI, let’s look at how some instructors are actually using generative artificial intelligence in the classroom and beyond. According to a 2024 poll, 72% of faculty are using generative AI tools in their classrooms (Ruediger et al., 2024). The examples below are certainly not all-inclusive, but they represent some common ways instructors are experimenting with generative AI tools.
Designing Assessments
As we saw from the syllabus policies reviewed above, some instructors are embracing AI to design assignments. For example, in her first-year writing courses, Liza Long incorporates generative AI tools to provide formative feedback on students’ brief writing assignments. Students interact with AI tools weekly to refine and narrow their research questions, improve their essay organization, brainstorm creative hooks and titles for their papers, or clearly define their target audience. For a literature class she teaches, she co-wrote the textbook Critical Worlds using ChatGPT 3.5 so that she could evaluate how well the tool worked for literary analysis. In this class, students now use a generative AI tool to “write” their rough drafts, then critique the AI output to improve those drafts and ensure that they are factually correct.
Scaffolding Assignments with AI Support
Joel Gladd, another English instructor, uses generative AI tools to provide scaffolding and support for his students. For example, he has created custom GPTs to help students interact with and better understand difficult reading assignments.
Assessing Assignments
An obvious use case for professors is to have generative AI tools help with grading tasks. But is this ethical? Long does not use generative AI tools for summative (final) assessments, and she has ethical concerns about feeding student work into generative AI models that train on the data we provide. For Long, the concept of informed consent is critical. The formative assessment tool she uses does not provide any student data to training models, and when she uses student work to demonstrate these tools, she obtains the student’s consent first.
What do you think? If you use generative AI to assist with assignments, is it ethically permissible for instructors to use these tools to assist with grading? We’ll discuss this question at greater length later in the chapter.
“Boring” Writing: Business Correspondence, Recommendation Letters, Emails
One of the least ethically murky areas of generative AI use for most college instructors is “template” writing such as recommendation letters, email drafts, and business correspondence. This kind of writing does not require much original thought or input, and drafts can easily be customized to a specific audience and purpose, saving time.
Research and Scholarship
Just as students have faced pushback for using generative AI tools in their writing, instructors are also experiencing challenges with generative AI in research and scholarship. A 2023 paper found that “an AI language model can create a highly convincing fraudulent article that resembled a genuine scientific paper in terms of word usage, sentence structure, and overall composition” (Májovský et al., 2023). The pressures on scholars to publish likely contribute to the misuse of generative AI tools, just as pressures on students to succeed may incentivize unethical AI use.
However, there are some useful applications of generative AI in research for both students and professors. AI can assist with data analysis, for example, or check a paper to ensure that it is coherent and organized. And tools like Perplexity.ai can help scholars to locate applicable research more quickly than a Google Scholar or library database search can.
How to Have Hard Conversations with Your Teacher
But what happens when you and your teacher don’t see eye to eye about the use of generative artificial intelligence? Let’s return to Roberto's situation and provide some suggestions for how he can advocate for himself. This section will provide you with specific guidance for navigating situations involving faculty and AI in the classroom. We'll explore three common scenarios: first, when a student is required to use AI in a course but feels uncomfortable doing so; second, when a student wants to use AI as part of their workflow, but the course bans it; and third, when a student is accused of unauthorized AI use. Knowing a little bit about how your instructors are thinking about AI use in their class, as well as some key institutional protocols around academic integrity violations, will help you make more informed choices.
When You Don’t Want to Use Generative AI
Some students are understandably uncomfortable with using AI. What should you do if a teacher requires it? First, know that AI is a developing technology, and the ways it can be implemented (or avoided) in a classroom vary widely. Keep the instructor’s intent in mind. AI skills are increasingly in demand in the workplace, and as higher education faces growing pressure to justify how a course fosters “durable skills” that transfer to careers, AI will become one of those bridging technologies that is difficult for faculty to carve out of their syllabi. Alternatively, your instructor may include AI assignments not to promote uncritical use, but rather to encourage savvy awareness of the technology’s limits and capabilities. If you want to resist or critically engage with AI, for ethical or other reasons, your stance may be perfectly compatible with using it in a controlled environment.
If you want to remain in a section even if it requires using AI, establish a line of communication early on to see if you can complete alternate assignments, such as arguments that engage critically with the exercise and provide explanations for how the technology may be limited or unethical. Faculty who allow opt-out sometimes provide sample chatbot conversations. Ask the faculty member if they would be able to provide these for you to engage with and reflect on, if you do not want to use the technology yourself.
You should also look at which platform(s) the instructor expects students to use in the course. Does the institution provide safe and secure access to something like Microsoft Copilot or ChatGPT Enterprise? Is it working with a company that uses the APIs of Anthropic, OpenAI, or another provider within a contained environment that doesn’t share your data? If not, the instructor may be requiring you to sign up for a service that violates basic expectations around privacy. If the course content involves highly personal work, press them on this issue. And you can, of course, transfer to another section early in the semester if you feel it’s not in your best interest to remain.
When You Want to Use AI, But Your Teacher Doesn’t Allow It
Other students have partially or fully integrated AI into their workflows, and this will create some friction with courses that “ban” AI. As faculty, we have been in many departmental meetings in which we discuss how frustrated instructors are that their students seem to be using ChatGPT to complete their discussion forums. Discussion forums, in particular, are infamous assignments that students like to outsource to AI—they seem low stakes, and students who do report using AI to complete them state that they did so because of stress and lack of time (so, time management is a major issue). If you scour subreddits such as r/chatgpt and r/college, you'll find plenty of instances where students admit this. But, at the same time, many others in those same forums report being accused unfairly, and as faculty, we have all seen this happen.
You can probably appreciate that faculty feel frustrated and insulted when they suspect that students are attempting to pass the course without engaging with content that the faculty have dedicated their lives to learning and teaching. Faculty begin to suspect that every high-performing submission is AI-generated.
If you want to use AI in a course, but the syllabus has a ban, seek clarity about what that “ban” means. For example, some instructors may specify in a writing course syllabus that students should complete their rough drafts unassisted. Faculty do this because they are tracking research suggesting that students tend to perform worse on a skill over the long term if they first attempt it with AI assistance and later lose access to that technology; in other words, students who first practice a skill unassisted often become more proficient. With proper scaffolding, however, students can learn something unassisted and then practice incorporating AI into the workflow. In some courses, this means students start with an unassisted rough draft, receive human feedback, then ask for AI feedback, and finally use AI to help address the feedback they received. What matters is that students are forced to make choices to solve a particular problem, and instructors can assess those rhetorical moves. Restricting AI use at some stages and allowing it at others is increasingly common.
Another way to qualify what AI assistance looks like is whether it's upstream or downstream of someone's workflow when completing a task (such as designing an app or writing an essay). Upstream of a rough draft often involves research, note-taking, and brainstorming. Then within each of those stages, a course may have even smaller tasks. At any point, AI can assist. Consider the research stage: there are a slew of research tools now, such as Elicit and Perplexity AI, that leverage LLMs to do "semantic" rather than "keyword" searches. This is an emerging form of research that allows researchers to access archives differently than in the past. Even if your instructor expects you to practice keyword searches, you may want to cross-check with an AI-infused platform to see what you might have missed (and vice versa). Even if a writing course “bans” AI, this upstream usage is likely not within the scope of the ban, but technically it’s “using generative AI” to help complete a task.
Downstream of a workflow is where instructors tend to focus—what you actually submit to your LMS (Canvas, Moodle, Blackboard, etc.). It's here that you should pay careful attention to syllabus language around generative AI and what is allowed in submissions. As we explained above, there is a range of AI tolerances in higher education, from highly tolerant to outright bans. A ban often means that when you submit an artifact (an essay, infographic, digital portfolio, etc.), it must be entirely your own, without the assistance of AI (not “generated”). Plugging an outline into ChatGPT and asking for an essay would be an example of a generated essay, even if the initial seed was your own work. Asking ChatGPT to check your spelling and grammar may or may not be considered generated text, depending on the syllabus language.
The categories of “assisted” or “unassisted” submissions are becoming complicated. What's odd about blanket bans is that they’re impossible to enforce consistently. A student can ask ChatGPT to brainstorm topics, outline their essay, create a very rough draft, and then completely rewrite it in their own words and infused with their own ideas and research, and the submitted text would not technically be "generated" in the way the syllabus language intended, even if the final product represents a mesh of human and machine labor. Ethan Mollick calls this mix of human and machine labor a “centaur,” a workflow routine that increasingly explains how many students, faculty, and workers use these technologies. Technically the draft adheres to the expectation of "non-generated text." However, you should still have a good faith conversation with the instructor about your workflow to establish trust and clarity.
When You Are Unfairly Accused of Unauthorized AI Use
What happens if you're unfairly accused of using generative AI, like Roberto? Unfortunately, as of this writing in 2024, such accusations are extremely common. It helps to know that many faculty are still trying to figure out this technology themselves. They're learners, just like you, and they're applying an older framework (plagiarism) to a new technology (generative AI). Most higher education institutions did not update their academic integrity policies to address artificial intelligence until 2023. Keep that context in mind until most faculty have fully wrapped their heads around how to teach and assess in ways that fit how students now engage with a course.
So how can you deal with an accusation like this? We have seen that when a student is accused and receives a zero for an assignment (whether it’s a low stakes discussion board or a higher stakes exam or paper), it's extremely important to continue the conversation and ask to meet with the faculty member to demonstrate your proficiency. Start there. Rather than lashing out in anger (even though your anger is understandable), show them you're eager to demonstrate that you're engaging with the course content. Set up a Zoom meeting or, better, visit them in person, as soon after the accusation as you can.
Second, know your institution's protocols around academic integrity violations. This is extremely important. If a student receives a "0" for an assignment, and the instructor believes it's AI-generated text, the instructor needs to follow institutional protocol by notifying academic integrity officers, usually by submitting an academic integrity violation report. Students can challenge this, and you should, if it comes to that—but first, start with a sincere and eager communication with the instructor. When reporting a student, faculty must be able to demonstrate "with reasonable certainty" that the student has committed a violation. It doesn't have to be 100% certainty, but rather something they could argue successfully in an academic integrity hearing.
When you meet with the instructor, ask how they determined your submission was AI-generated. As mentioned above, AI-checkers are highly flawed. AI cannot be used to detect AI with certainty. If communication breaks down, and you challenge the grade, make sure you are aware of institutional appeal deadlines (usually available in your college catalog). Do not hesitate to appeal the grade if your instructor is unwilling to work with you after that initial meeting.
Finally, this entire scenario demonstrates that it's often helpful to leave a digital trail of your work. As we mentioned previously, tracking your version history is one way to do this. Google Docs and Microsoft Word keep timestamped version histories that show the progress of your work, as described below. If you're particularly concerned, you can download Chrome extensions, like Cursive, that record your labor in a more granular way. It's good practice to write first in Word or Google Docs and then copy your work into the LMS. That way, you can prove your labor.
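If you haven't used these histories before, the menu paths are similar across platforms (exact locations may shift as the software updates): in Google Docs, choose File > Version history > See version history; in Microsoft Word, files saved to OneDrive or SharePoint keep versions under File > Info > Version History. Checking occasionally confirms that your drafting process is actually being recorded.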
If you've been accused of using AI and you actually did use it without authorization, keep in mind the same steps provided above (reach out to demonstrate your engagement, know the academic integrity reporting process, etc.), but the best thing to do is simply to ask for an opportunity to redo the assignment or complete an oral assessment. Know that most faculty truly do want to work with you, and if they see a good faith effort to re-engage, they will usually accept a redo or alternate assessment.
Conclusion
As we've explored throughout this chapter, the integration of generative artificial intelligence in higher education classrooms presents both exciting opportunities and complex challenges for students and instructors alike. As Roberto's dilemma demonstrates, navigating the use of AI in academic settings requires thoughtful consideration and open communication.
Let's summarize the key themes we've covered:
Instructor Perspectives: We've seen that faculty views on AI range from enthusiastic adoption to cautious skepticism to outright denial. Understanding these perspectives can help you navigate your courses more effectively.
Ethical Considerations: The emergence of AI in education has raised important questions about academic integrity, plagiarism, and the nature of original work. As we move into a "postplagiarism" world, it's crucial to engage critically with these ethical dimensions.
Practical Applications: We've explored how both students and instructors are using AI tools for various tasks, from research and writing assistance to assessment design.
Policy Variations: As demonstrated by the syllabus examples, policies on AI use can vary widely between courses and institutions. Being aware of these differences is essential for academic success.
Communication Strategies: We've discussed how to approach difficult conversations with instructors about AI use, whether you're seeking to use AI in a course that prohibits it or defending yourself against unfair accusations.
As you move forward in your academic journey in this AI-enhanced landscape, keep these key takeaways in mind:
Stay Informed about AI and Focus on Your Own Learning: Keep up with the latest developments in AI and how they're being applied in your field of study, while ensuring that your use of AI enhances rather than replaces your own critical thinking and skill development. Never rely blindly on output from generative artificial intelligence tools.
Be Transparent: When using AI tools, always be upfront about it. Use proper citation and acknowledgment practices. Also, keep records of your work process, including prompts used and how you've incorporated AI-generated content.
Communicate Openly: Familiarize yourself with your institution's and individual instructors' policies on AI use. If you're unsure about AI use in a course, don't hesitate to have a respectful conversation with your instructor.
Remember, the goal of your education is not just to complete assignments, but to develop critical thinking skills, subject expertise, and the ability to navigate complex ethical landscapes. AI tools, when used thoughtfully and ethically, can enhance this process rather than shortcut or subvert it.
As we continue to explore the implications of AI in education, maintaining open dialogue between students and instructors will be crucial. By approaching these tools with a combination of curiosity, critical thinking, and ethical consideration, you can harness the benefits of AI while preserving the integrity and value of your education.
The future of education is being shaped by these technologies, and you have the opportunity to be at the forefront of defining how they're used. Embrace this responsibility with thoughtfulness and integrity, and you'll be well-prepared for the AI-augmented world that awaits at the successful conclusion of your academic studies.
References
Abbas, M., Jam, F. A., & Khan, T. I. (2024). Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. International Journal of Educational Technology in Higher Education, 21(1), 10. https://link.springer.com/article/10.1186/s41239-024-00444-7
Alyahyan, E., & Düştegör, D. (2020). Predicting academic success in higher education: literature review and best practices. International Journal of Educational Technology in Higher Education, 17(1), 3. https://link.springer.com/article/10.1186/s41239-020-0177-7
Alier, M., García-Peñalvo, F., & Camba, J. D. (2024). Generative Artificial Intelligence in Education: From Deceptive to Disruptive. https://reunir.unir.net/handle/123456789/16211
Bartholomew, J. (2023). Q&A: Uncovering the labor exploitation that powers AI. Columbia Journalism Review. https://www.cjr.org/tow_center/qa-uncovering-the-labor-exploitation-that-powers-ai.php
Eaton, L. (2023). Syllabi Policies for Generative AI. https://docs.google.com/spreadsheets/d/1lM6g4yveQMyWeUbEwBM6FZVxEWCLfvWDh1aWUErWWbQ/edit?usp=sharing
Eaton, S. E. (2023). Postplagiarism: transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 23. https://link.springer.com/article/10.1007/s40979-023-00144-1
Gagné, R. M. (1985). The conditions of learning and theory of instruction (4th ed.). Holt, Rinehart and Winston.
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7). https://doi.org/10.1016/j.patter.2023.100779
Long, L. (2023). Critical Worlds: A Targeted Approach to Literary Analysis. College of Western Idaho. https://cwi.pressbooks.pub/lit-crit/
Májovský, M., Černý, M., Kasal, M., Komarc, M., & Netuka, D. (2023). Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora's box has been opened. Journal of Medical Internet Research, 25, e46924. https://doi.org/10.2196/46924
McDonald, N., Johri, A., Ali, A., & Hingle, A. (2024). Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines. arXiv preprint arXiv:2402.01659. https://arxiv.org/abs/2402.01659
Mollick, E. (2024). Co-Intelligence: Living and Working with Artificial Intelligence. Portfolio.
Ruediger, D., Blankstein, M., & Love, S. (2024, June 20). How are faculty using generative AI in the classroom? Findings from a national survey. Ithaka S+R. https://sr.ithaka.org/blog/how-are-faculty-using-generative-ai-in-the-classroom/
The authors acknowledge the use of Claude.ai to review the paper for grammar, consistent tone, and organization. The writing, research, and examples are our own.