Artificial intelligence has been part of our everyday lives for some time. Think of Amazon's Alexa and Apple's Siri voice assistants, which help us perform everyday tasks.

Generative artificial intelligence has been researched for decades but, until recently, was not widely implemented or discussed outside research circles. You may have experienced generative AI through predictive text (Smart Compose) in Gmail.

The release of ChatGPT on November 30, 2022, though, was the first time many of us learned about generative artificial intelligence. There are now many generative AI models and tools based on them.

What is ChatGPT?

ChatGPT is a generative artificial intelligence chatbot that composes natural language by predicting likely sequences of words. Introduction to Generative AI with GPT on LinkedIn Learning is a great 65-minute course that is free to UIS students and staff. This video excerpt provides a good four-minute overview of GPT and generative AI.

Generative artificial intelligence tools like ChatGPT should be viewed as tools that faculty, staff, and students can employ. These technologies are here to stay, and they bring affordances, challenges, and risks. As a university, we believe that we need to learn to use AI technologies and to teach our students to use and understand them in productive and balanced ways.

One risk, for instance, that ChatGPT and other large language models (LLMs) will always carry is the potential to generate unsubstantiated or untrue content, even though this risk may diminish as the tools develop. A challenge for instructors is the concern about AI-enabled cheating and the workload that can come with it. One affordance could be the use of AI to take care of administrative tasks.

"The best way to think about this is you are chatting with an omniscient, eager-to-please intern who sometimes lies to you."

Ethan Mollick, a professor at the University of Pennsylvania's Wharton School of Business, in an NPR interview on December 19, 2022

The Biden administration issued an executive order aimed at setting comprehensive guidelines on artificial intelligence technology, which will allow multiple agencies to start regulating emerging technology and protect individuals’ privacy in the absence of any legislation governing AI. Explore the resources below to learn more and consider the challenges and opportunities presented by generative AI.

UIS workshop recordings. View the Explore ChatGPT session that COLRS hosted with the UIS Learning Hub and Center for Faculty Excellence in January 2023. Assignment ideas and an overview of generative AI are covered in "Generative AI in Our Classrooms" (presented by Layne Morsch and Emily Boles on December 1, 2023, for the Central and Southern Illinois Faculty Development Network).

U of I System guidance. The University of Illinois System convened a focus group from across the three universities to create guidance documents on Generative AI, including specific information for instructors and students.

Mind the AI Gap – Equity, Access & the Risks of Standing Still. In this Contact North webinar, Dr. Philippa Hardman explores the technical, ethical, and cultural fears and risks associated with the rise of AI in education, along with the complex issues generative AI raises for higher education.

Syllabus statements are a great way to frame the conversation about generative AI with your students. COLRS has created a shared document of generative AI syllabus statements from UIS instructors. Please feel free to send your statement to COLRS staff to have it added to the document.

A set of UIS syllabus and assignment icons represents common generative AI course policies and acceptable use. These icons have been adapted from the Oregon State Syllabus and Assignment AI Icon Project and can be downloaded from Box.


Cheating with AI: Classroom Strategies and Detection Tools

There is an arms race taking place between AI and AI detection tools, and it is a race that educators cannot win: generative AI detectors will always be catching up to the generative tools. There is a significant negative impact on students when AI detectors produce false positives. Research has also shown that AI detectors are more likely to label text written by non-native English speakers as being written by generative AI (Myers, 2023). Turnitin has said that it has a 1% false-positive rate for AI detection. If that is true, then out of 4,000 papers submitted by UIS students each term, 40 students could be falsely accused of cheating with generative AI tools.
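
As a rough illustration of that math (the figures are the ones quoted above, not measured values), a few lines of Python show how even a small false-positive rate translates into dozens of wrongly flagged papers each term:

    # Back-of-the-envelope estimate using the numbers quoted in this section.
    papers_per_term = 4000        # approximate UIS submissions per term
    false_positive_rate = 0.01    # Turnitin's advertised 1% false-positive rate

    falsely_flagged = papers_per_term * false_positive_rate
    print(falsely_flagged)        # 40.0 papers potentially flagged in error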

Becoming the cheating police does not teach our students to value their voice, agency, and creativity. Rather, we need to encourage our students to view conversational AI as a tool. Conversations and classroom activities that demonstrate intentional uses for AI are effective strategies with many students. 

The UIS Academic Integrity Council sent a guidance memo on generative AI and student work to UIS faculty on April 3, 2023.

In classroom discussion and reflection on AI, we need to position humans, instructors and students alike, as experts on and critics of the tool and its output. We recommend reinforcing that position often.

Students who are intrinsically motivated to learn are not likely to cheat. To encourage intrinsic motivation, try to provide room and support for autonomy, mastery, and purpose in your assessments.

  • Allow students to self-direct a portion of the assignment. Can you allow them to pick a topic or direction? Can they customize it to address a project at work or to further research in a favorite topic?
  • Scaffold your large assignments to give students confidence that completing the assessment is possible. Students who feel hopeless or aren't sure what to do next are more likely to cheat.
  • Tell your students your "why." What is the purpose of the work? What will it enable them to do in your field, later in the class, or at a job? Let them know why they should care about this work.

Review your assessments and activities. Are they asking students to recite facts? If so, those are easy targets for AI chatbot use. Try to put a twist on those assessments: add an element of comparison, ask students to reflect on their own experiences, or add a local angle. These types of work are more difficult for current AI chatbots to do well.

Another effective action is to create explicit assignment instructions that address what you view as appropriate uses of AI in each of your assignments and what you view as cheating. This keeps your knowledge of AI tools centered and provides students guardrails for their behavior. Show students that using AI tools can save time and reduce busywork without relinquishing their agency and voice and missing opportunities for growth and learning.

Even if you do all of this, some students will still cheat. It will happen. What are instructors to do?

  • First, get to know the AI writing style. We suggest using ChatGPT and other conversational AI tools to learn to recognize the consistent output (writing style) of these chatbots. Your observations will be your strongest tool.
  • Second, check the suspected writing with several tools. When you suspect AI writing is being used, we recommend feeding a large amount of the suspected text into several detectors and comparing the results. Longer pieces of text (250+ words) give a detector more evidence to consider and more patterns to find in the writing. Know that the detectors are fairly easy to trick: with some light editing and error introduction, you can easily change the results from "likely an AI chatbot" to "likely human." Also, be aware that creative outputs from ChatGPT (writing in an accent or style, writing a poem or lyrics) often aren't detectable by these tools.
  • Third, have a conversation with your student(s). Ask them if they used an AI chatbot in their work. Some student work can sound as stilted as AI chatbot output, so they may not have cheated. Because a chatbot is a tool and not a source, using one isn't plagiarism, and it can be a gray area for students if you haven't specifically addressed it in class or in your syllabus. Consider giving them a chance to redo the work after the conversation, and tell them the consequences for using an AI chatbot moving forward (zeroes, academic integrity violations, etc.).

The AI Detectors

Search Engine Land released a comparison of AI detectors. AI detection tools "grade content based on how predictable the phrase choices are within a piece of content." In other words, does the text align with the pattern an AI model would likely follow in creating the content? The AI detectors generally look at two qualities:

  • Burstiness: the variation in sentence length and structure. Human writing tends to mix long and short sentences, while AI text often keeps a more uniform length and tempo.
  • Perplexity: how predictable the word choices are within a sentence or collection of sentences. AI-generated text tends to pick the most likely next word, so its perplexity is lower than that of human writing.
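
To make the "burstiness" idea concrete, here is a minimal, purely illustrative Python sketch that measures variation in sentence length. Commercial detectors use model-based scoring rather than a calculation this simple, so treat it only as an intuition builder:

    import re
    import statistics

    def sentence_length_variation(text):
        """Toy 'burstiness' proxy: standard deviation of sentence lengths (in words)."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

    # Very uniform sentences (low variation) score close to 0;
    # a mix of long and short sentences scores higher.
    sample = ("The committee met on Tuesday. It reviewed every proposal in detail, "
              "weighing costs against the likely benefit to students. Then it adjourned.")
    print(round(sentence_length_variation(sample), 2))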

CopyLeaks is a mainstream plagiarism detection tool that developed a detector to recognize human writing (as opposed to AI writing). They advertise a 0.2% false-positive rate and offer a free Chrome extension.

GPTZero was developed by Princeton undergraduate Edward Tian. You may upload entire files to this tool in addition to copying and pasting text. It uses "perplexity" and "burstiness" scores to rate writing as human- or AI-authored.

Turnitin released a beta tool for AI detection in March 2023. It was available at no cost while in beta. In January 2024, it became a separate paid service and is no longer available in our Turnitin subscription.

AI Tools to Check Out

For a full searchable list of AI Tools, check out the AI Scout Directory.

Elicit - Advanced Semantic Search finds relevant papers using semantic similarity, even if they don’t match specific keywords, expanding the scope of your research.

SkipIt - Summarize and chat with YouTube videos via question and response.

Quillbot - Drop in articles; it can summarize them and help create annotated bibliographies.

Semantic Scholar - Semantic Scholar is a free, AI-powered research tool for scientific literature, providing access to over 211 million papers from all fields of science.

Scholarcy - Scholarcy is an AI-powered article summarizer that reads research articles, reports, and book chapters in seconds. It identifies key information such as study participants, data analyses, main findings, and limitations, and claims to reduce the time spent appraising a study by over 70%.

OpenAI - The earliest mover, with the viral web front end and the name ChatGPT. The free version uses GPT-3.5 as the model, which is suitable for most tasks, though the latest model, GPT-4, brings significant improvements, particularly in reasoning and avoiding hallucinations (think of the AI as an intern). An iPhone app was released in mid-May with a phased launch across the world, and an Android app is planned. If you sign up for the paid plan ($20 a month), you get early access to features such as the ability for the bot to access the web and use plugins that extend its functionality. The Code Interpreter feature is currently in alpha and promises to be very good at data analysis.

Google Bard - A free interface available to everyone with a Google account. It does reasonably well on most tasks and has access to the web by default. It currently does not match the latest models from OpenAI, but it may improve quickly given Google's focus on AI. A feature similar to plugins, labeled "tools," was announced in May 2023, and some AI capabilities are already integrated into Google tools.

Microsoft Bing Chat - A free interface available to all, powered by OpenAI models. It currently requires the Microsoft Edge browser to access the chat feature. It has default access to the web and is powered by the latest models from OpenAI, so for normal usage it can be considered a free alternative to OpenAI's paid plan. Microsoft has also enabled plugins for several services and will continue to add more; it has committed to following the same plugin standard as OpenAI, so plugins developed once will work across both interfaces. Given that Microsoft offers OpenAI models for free, supports plugins, and includes references in its responses, it seems like the best free option.

ChatPDF - One specific use case that may be relevant to most users is the ability to 'chat' with a PDF document, whether a research article, a web page saved as a PDF, or a PDF of a book. It is a simple and elegant solution, though there are limits on how much you can use it for free. This feature will likely be subsumed into one of the other platforms soon, but for now, it is an excellent tool.

Perplexity - If you are looking for a response that aggregates search results with a generative model and presents them with references, this is a good option. Microsoft Bing Chat also includes references in its responses, so this is not unique to Perplexity, though the experience is better with this tool.

Adobe Firefly – Generative AI can be used for images, audio, and video. Adobe Firefly (beta) is currently available via university license and lets you quickly create graphics. Two powerful features include: “text to image” to generate images from a detailed text description and “generative fill” which makes it easy to remove objects or paint in new ones by supplying text descriptions. Other features include text effects, generative recolor, 3D to image, and extending images. 

AI Definitions

Ray Schroeder, COLRS Founding Director, shared some definitions he collected for a January 2023 EDUCAUSE Quick Chat. AIPRM has additional definitions for generative AI terms.

  • Artificial Intelligence - the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. from Lexico.com
  • Machine Learning - Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves. from expert.ai
  • Deep Learning - In practical terms, deep learning is just a subset of machine learning. In fact, deep learning technically is machine learning and functions in a similar way (hence why the terms are sometimes loosely interchanged). However, its capabilities are different. Basic machine learning models do become progressively better at whatever their function is, but they still need some guidance: if an AI algorithm returns an inaccurate prediction, an engineer has to step in and make adjustments. With a deep learning model, an algorithm can determine on its own whether a prediction is accurate through its own neural network. from Zendesk
  • Supervised and Unsupervised Learning - In a supervised learning model, the algorithm learns on a labeled dataset, providing an answer key that the algorithm can use to evaluate its accuracy on training data. An unsupervised model, in contrast, provides unlabeled data that the algorithm tries to make sense of by extracting features and patterns on its own. from nvidia

  • Neural Network in AI - A neural network is a method in artificial intelligence that teaches computers to process data in a way that is inspired by the human brain. It is a type of machine learning process, called deep learning, that uses interconnected nodes or neurons in a layered structure that resembles the human brain. It creates an adaptive system that computers use to learn from their mistakes and improve continuously. Thus, artificial neural networks attempt to solve complicated problems, like summarizing documents or recognizing faces, with greater accuracy. from Amazon

  • GPT-3 - In May 2020, OpenAI published a groundbreaking paper titled Language Models Are Few-Shot Learners. They presented GPT-3, a language model that at the time was the largest neural network ever created, with 175 billion parameters, an order of magnitude larger than the largest previous language models. GPT-3 was trained with almost all available data from the Internet and showed amazing performance on various NLP (natural language processing) tasks, including translation, question-answering, and cloze tasks, even surpassing state-of-the-art models. from towardsdatascience.com

  • Generative AI - refers to unsupervised and semi-supervised machine learning algorithms that enable computers to use existing text, audio and video files, images, and even code to create new possible content. Generative AI allows computers to abstract the underlying patterns related to the input data so that the model can generate or output new content. from indiaai.gov

  • ChatGPT - is a large language model chatbot developed by OpenAI based on GPT-3.5. It has a remarkable ability to interact in conversational dialogue form and provide responses that can appear surprisingly human. Large language models perform the task of predicting the next word in a series of words. Reinforcement Learning with Human Feedback (RLHF) is an additional layer of training that uses human feedback to help ChatGPT learn the ability to follow directions and generate responses that are satisfactory to humans. from Search Engine Journal
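
To make the "predicting the next word" idea in the GPT-3 and ChatGPT definitions concrete, here is a minimal sketch using the small, openly available GPT-2 model from the Hugging Face transformers library. This is an assumption chosen purely for illustration; ChatGPT itself is a far larger, RLHF-tuned model that is only accessible through OpenAI's services:

    # Minimal next-token prediction sketch with GPT-2 (illustrative only).
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The best way to think about a chatbot is as an eager-to-please"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits                       # a score for every vocabulary token at each position
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)   # probability distribution over the next token
    top = torch.topk(next_token_probs, k=5)

    # Print the five most likely next tokens; a chatbot builds its reply by
    # repeatedly choosing from distributions like this one, one token at a time.
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")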

Select Articles and Resources