What is generative AI?
Generative AI (gen AI) is a kind of artificial intelligence "that can create original content such as text, images, video, audio or software code in response to a user's prompt or request" (Stryker & Scapicchio). While gen AI is not new, the way the average user could access these kinds of tools changed dramatically with the broad release of ChatGPT in 2022.
ChatGPT is a generative artificial intelligence chatbot that uses a large language model to predict likely word sequences and compose natural-language responses. Introduction to Generative AI with GPT on LinkedIn Learning is a great 65-minute course that is free to UIS students, staff, and faculty. This video excerpt provides a good 4-minute overview of GPT and generative AI:
In many conversations online and with colleagues, we've resorted to using "AI" as a shorthand to mean, largely, generative AI. However, this kind of shorthand muddles the dialogue surrounding this topic; there are many aspects and forms of artificial intelligence, and generative AI is a small subset of that larger topic. Specificity of language is important and can help clear up misconceptions about artificial intelligence broadly and generative AI more specifically.
Considerations before implementing generative AI in online classrooms

Whether or not to integrate generative AI into the online classroom (or any classroom) is a complex and nuanced decision. We've found that, generally, instructors must consider the pedagogical, ethical, and social-emotional implications of adding generative AI to the classroom. This is not to say there is a right or wrong answer, but rather that instructors should consider many angles and should be trusted to make a decision that will work for them and their students.
Pedagogical considerations
- Early research suggests that some generative AI tools, especially personalized AI tutors, can lead to improved outcomes for students (Baillifard et al., 2024).
- Researchers at the University of Pennsylvania’s Wharton School of Business have proposed adding a third system to the familiar two-system model of decision-making. They refer to this as “artificial cognition,” or the way we might make decisions based on the information we receive from prompting generative AI programs. One of the major concerns about this reshaped way of thinking is how much cognitive surrender (the thinking we “give up”) we are willing to accept (Shaw & Nave, 2026).
- Researchers in Germany found that students who used LLMs reported lower cognitive load (Stadler, Banner, & Sailer, 2024). On the other hand, the same students did not engage as deeply with the material as students who did not use LLMs. Data from the RAND American Youth Panel (2026) indicates that students are aware that using AI tools for schoolwork may harm their critical thinking skills, but this awareness does not keep them from turning to these tools.
- A study of high school math education found that students who practiced with AI assistance underperformed once that access was removed, and the authors suggest building safeguards into chatbot interactions (Bastani et al., 2025).
- Researchers at Martin Luther University Halle-Wittenberg conducted an experiment on reliance on AI. Though their research focused on “financially risky decisions,” they found that participants “follow[ed] AI advice that conflict[ed] with available contextual information and [was] against their own interests” (Klingbeil, Grützner, & Schreck, 2023). Dr. Elias Aboujaoude, professor of psychiatry at Stanford University School of Medicine, said in an interview that “By sounding authoritative, data-driven and evidence-based, they [generative AI tools] come across as knowing what they’re talking about. By being sycophantic and always aiming to please, they come across as having our best interest at heart and like they would never fool us” (Katz, 2026). Taken together and extrapolated to the classroom, these perspectives have serious implications for students. If students don’t recognize the limitations of generative AI in their learning experiences, they are likely to act against their own interests and commit potentially false or misleading information to memory.
Ethical considerations
- The electricity and water demands of training and using generative AI are staggering and cause harm to the environment (Zewe, 2025). In our own community, residents have largely opposed the development of a local data center, in part because of the effects it would have on the local environment (Raju, 2026).
- In K-12 classrooms, “[s]tudents lack awareness and understanding of ethical issues associated with AI,” largely because they have not been educated on the topic (Gouseti et al., 2025). If students do not have a place to discuss generative AI with trusted figures, they are more likely to turn to generative AI tools blindly. If students arrive at UIS without this education, it will need to be provided here.
- Some companies have started tracking their employees’ AI use and including this information in performance reviews (Bindley & Blunt, 2026), and some employers will not hire candidates who don’t demonstrate AI fluency. Depending on their career paths, some students will need to engage with AI tools to be competitive in the workplace. At the same time, there is evidence that people in creative fields (artists, writers, etc.) are being “squeezed out” of their fields as more and more companies and consumers rely on AI-generated content (Waikar, 2025).
- There are major concerns about how generative AI models are trained. We have long been aware that generative AI platforms are biased, produce dis- and misinformation, and are trained on copyrighted texts and images without the permission of the writers and artists behind the work. Tools like X’s Grok are notorious for producing harmful language and images, particularly targeted at people of color (POC) and women. Not all tools are created equal; students, faculty, and staff need to evaluate tools carefully and promote only those that align with their values and goals as much as possible.
Social-emotional considerations
- One of the major reasons students choose online courses is the flexibility; most online students have obligations outside of the classroom that prevent them from being available during the day, so they work on their courses outside of normal working hours (Emerson, 2025). Generative AI tools, especially well-trained AI tutors or chatbots available 24/7, may help students get the information they need on a schedule that works for them.
- Generative AI tools can simplify processes like tutoring, reviewing writing, and more by eliminating the need for coordination between students, which is especially valuable in an asynchronous online course. Generative AI tools have been developed in all of these areas. However, it is worth noting that the Office of the Surgeon General identified an “epidemic of loneliness and isolation” in 2023, and online students can be particularly vulnerable to these feelings. It is worth asking what kinds of social and emotional learning opportunities are lost as we outsource tasks to generative AI programs.
- In 2025, Sam Altman wrote on X that “If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” While this can aid in student learning by sharing information in an easy-to-understand way, it can also be damaging to particularly vulnerable populations. Because ChatGPT and similar tools respond this way, many people become “addicted” to talking with these tools and distance themselves from people around them. This is particularly concerning for those who need mental health support; over a million users each week discuss mental health issues with ChatGPT (2025), likely including those who have not disclosed mental health struggles elsewhere.

Enact your plan for generative AI use in the classroom
The most important thing to know is that you are the expert on your own classroom. When it comes to generative AI use, each instructor should feel empowered to make decisions about what is best for their students and for the learning environment. There are plenty of arguments for and against generative AI use, but only you know what will work for your courses.
Regardless of what you choose to do, consider the following as you implement your plan:
- Write your policy in your syllabus. If you use the DesignPLUS syllabus template, there is a designated place in the policy accordions for it. Some instructors choose to include their policy as an extension of the academic integrity policy. As long as it is present and easy to find, there is no wrong answer.
- Reiterate your expectations for generative AI use on every assignment in the course. This is especially important if the expectations differ from assignment to assignment, but even when they don't, it is valuable to remind students whether generative AI use is appropriate and, if it is, what the boundaries are. Should it only be used for brainstorming? Revising? If it is used, how should it be cited? What are the consequences if unauthorized generative AI use is suspected? Answering these questions explicitly for your students each time is key to ensuring student success. If you have trouble answering these questions, you might try some of the strategies in this faculty reinvention checklist (Haymes, 2026) to preemptively see what AI is capable of and how you might address that in assignment design or in communications with students.
- If you plan to have your students use generative AI in the classroom, it is worth considering how you will support students who have personal, negative feelings toward generative AI. Like anyone else, our students’ opinions on generative AI exist on a spectrum; some students will be excited to use these tools, while others will be hesitant or resistant due to the same pedagogical, ethical, and social-emotional implications that instructors should also consider. You might think about whether generative AI use is required or just an option for an assignment. If it’s required, you could explicitly connect the “why” of its use to the course or module objectives.
- If you use generative AI yourself for parts of your courses, it is important to be transparent about your own generative AI use and to model the citation behavior you would expect of your students. Students often don’t know how generative AI is being used in different fields and across contexts, and they will look to their instructors to provide that guidance. It can be helpful for them to see how their instructors use these tools (if they do) and what their instructors do to verify and modify the AI-generated content. PETRA AI can provide some common language for different ways you may use an AI tool.
- If you are planning to use generative AI in your classrooms, we recommend you explore the resources from UIUC’s Center for Innovation in Teaching & Learning (CITL). These resources are in line with the UI System’s guidance on generative AI use, and great care has been taken to provide a variety of options for instructors. You might also consider using the syllabus and assignment icons to make clear which uses of generative AI are acceptable.
- If you do not want your students to use generative AI and are concerned about the ways in which they may turn to these tools, you might explore some of the following tools and resources, which can add a layer of AI-resilience to your courses. It’s important to note that these tools are not “magic wands” capable of solving all problems; rather, these are options that instructors can implement when appropriate and use as data points in conversations with students:
- Google Assignments (for student assignments; can help instructors see what students have worked on and when)
- Copyleaks (detects plagiarism and AI-generated content)
- Respondus LockDown Browser (for use on Canvas Quizzes; can prevent students from accessing other sites on the device they use to take the quiz)
Our process for rolling out AI tools
COLRS staff is committed to supporting our instructors, no matter where they stand on this topic. We understand that AI is being integrated into more and more tools, and we want to be transparent about our process for potential rollouts of these features:
- All potential AI features are vetted by members of the COLRS staff before rollout.
- AI features that COLRS staff feel solve problems raised by instructors are potential candidates for rollout.
- Whenever possible, COLRS staff will not have AI features turned on by default. We want our instructors to be able to opt in to using these tools instead of having to opt out of using them.
- Some tools we have access to have AI features that are turned on by default and cannot be turned off. We will always make instructors aware of these features and will provide guidance and/or resources for these parts of the tools so that instructors can decide whether they want to use them. These features are instructor-facing; COLRS does not currently have any tools where student-facing AI features are on by default, and we do not intend to make any such tools available.
- COLRS is always open to feedback and suggestions. If there are tools or features you’d like to try, we’d love to hear about them! If you have concerns about or objections to particular tools or features, we welcome your feedback!
At the heart of this, we recognize that generative AI use is nuanced; we want our campus community to be able to engage in dialogue surrounding this topic, no matter where they stand. We will only be able to successfully navigate this landscape if we see it from all sides and recognize all viewpoints as worthy of consideration.
Additional resources
- For national findings on AI in higher education from more than 200 universities, please see the AAUP topical report.
- Summary of ethical concerns