What is ChatGPT?

An artificial intelligence chatbot that uses predictive language modeling to compose natural-language text. 

"The best way to think about this is you are chatting with an omniscient, eager-to-please intern who sometimes lies to you."

Ethan Mollick, a professor at the University of Pennsylvania's Wharton School, in an NPR interview on December 19, 2022

We asked ChatGPT, "What is ChatGPT?" and received this response:

ChatGPT is a pre-trained language model developed by OpenAI. It uses a deep learning technique called the Transformer to generate human-like text. ChatGPT can be fine-tuned for a variety of natural language processing tasks such as language translation, text summarization, text completion, and conversation modeling. The model was trained on a large dataset of text and can generate a wide range of responses to different inputs, making it suitable for tasks such as chatbot development and other applications that require natural language understanding and generation capabilities.

ChatGPT has also written a paperback book to explain itself.

What Can ChatGPT Do?

We've seen many other chatbots in the past, though. How is ChatGPT different from the others? Why is everyone talking about it? It is much better at writing, and it can refine its responses based on information already provided in the same chat session.

View the Explore ChatGPT Session that COLRS hosted with the UIS Learning Hub and Center for Faculty Excellence.

ChatGPT Resources

Syllabus and Policy Statements on Artificial Intelligence (AI)

At a recent Center for Faculty Excellence discussion, Professor Graham Peck shared his approach to academic integrity and AI.

Recent developments in AI, such as chatGPT, make it important to underscore that the core principle of academic integrity is doing your own work. We can only learn by applying our own minds to a problem. Learning can be done individually or cooperatively with others, but we must be participants in the learning. Any other approach precludes us from learning. This is the fundamental reason that academic integrity policies prohibit practices such as cheating on exams, plagiarizing, or paying others to take tests or write papers. In all such cases, students are not doing their own work and cannot be learning or demonstrating their learning. Correspondingly, universities do not wish to give credit for students who submit others’ work as their own. The submissions lack integrity because the students did not do their own work.

Although AI is relatively new, its use for illicit purposes presents identical dangers to academic integrity. For instance, chatGPT offers students a new alternative to writing their own essay. However, the outcome is the same: any person who uses it for that purpose sacrifices the opportunity to learn how to write for themselves. It is morally wrong because it violates university policies designed to promote learning and measure it fairly, but it is also a self-defeating strategy of the highest magnitude. The best reason for all of us to do our own work is to give ourselves the chance to learn and to grow. That is true at the University, and it is true in life.

Syllabus statement for Spring 2023, Graham Peck, Professor of History, UIS

Lance Eaton, from College Unbound, is collecting classroom policies on AI.

The Sentient Syllabus Project has some language to consider. 

Check out Ryan Watkins's Update Your Course Syllabus for ChatGPT on Medium. 

Cheating with AI: Classroom Strategies and Detection Tools

There is an arms race taking place between AI and AI detection tools. It is a race that isn't winnable for instructors. Becoming the cheating police does not teach our students to value their voice, agency, and creativity. Rather, we need to encourage our students to view conversational AI as a tool. Conversations and classroom activities that demonstrate intentional uses for AI are effective strategies with many students. 

In classroom discussion and reflection on AI, we need to position humans -- instructors and students! -- as experts and critics of the tool and its output. We recommend reinforcing that position often.

Students who are intrinsically motivated to learn are not likely to cheat. To encourage intrinsic motivation, try to provide room and support for autonomy, mastery, and purpose in your assessments.

  • Allow students to self-direct a portion of the assignment. Can you allow them to pick a topic or direction? Can they customize it to address a project at work or to further research in a favorite topic?
  • Scaffold your large assignments to provide students the confidence that completing the assessment is possible. Hopeless students who aren't sure what to do next are more likely to cheat. 
  • Tell your students your "why." What is the purpose of the work? What will it enable them to do in your field, later in the class, or at a job? Let them know why they should care about this work. 

Review your assessments and activities. Are they asking students to recite facts? If so, those are easy targets for AI chatbot use. Try to put a twist on those assessments: add an element of comparison, ask students to reflect on their own experiences, or add a local twist. These types of work are beyond the capabilities of current AI chatbots.

Another effective action is to create explicit assignment instructions that address what you view as appropriate uses of AI in each of your assignments and what you view as cheating. This keeps your knowledge of AI tools centered and provides students guardrails for their behavior. Show students that using AI tools can save time and reduce busywork without relinquishing their agency and voice and missing opportunities for growth and learning.

Even if you do all of this, some students will still cheat. It will happen. What are instructors to do?

  • First, get to know the AI writing style. We suggest using ChatGPT and other conversational AI tools to learn to recognize the chatbot's consistent output (writing style). Your observations will be your strongest tool.
  • Second, check the suspected writing with several tools. When you suspect AI writing is being used, we recommend feeding a large amount of the suspected text into several detectors and comparing the results. Longer pieces of text (250+ words) give the detector more evidence to consider and more patterns in the writing to weigh. Know that the detectors are fairly easy to trick: with some light editing and error introduction, you can easily change the result from "likely an AI chatbot" to "likely human." Also, beware that creative outputs from ChatGPT (writing in an accent or style, poems, lyrics) aren't reliably detectable by these tools. 
  • Third, have a conversation with your student(s). Ask them if they used an AI chatbot in their work. Some students' writing can sound as stilted as an AI chatbot's; they may not have cheated. Since a chatbot is a tool and not a source, using one isn't plagiarism, and it can be a gray area for students if you haven't specifically addressed it in class or in your syllabus. Consider giving them a chance to redo the work after the conversation, and tell them the consequences for using an AI chatbot moving forward (zeroes, academic integrity violations, etc.). 

The AI Detectors

Best at recognizing superficial edits to ChatGPT text

OpenAI Text Classifier - This detector, created by the same company that built ChatGPT, requires at least 1,000 characters and uses the same GPT version as ChatGPT.

Better tools (in Emily’s preferred use order)

CopyLeaks is a mainstream plagiarism detection tool with results that are easy to understand. Be sure to hover over the text in the results area for details. 

DetectGPT was created by Stanford PhD student Eric Mitchell and uses GPT-2. The site clearly explains how your text is analyzed and provides z-scores indicating how the detector responds to perturbations of the text you provide.

GPTZero was developed by Princeton undergrad Edward Tian and also uses GPT-2. You may upload entire files to this tool in addition to copying and pasting text. It uses "perplexity" and "burstiness" scores to rate writing as human- or AI-authored. 

  • Perplexity score – If GPTZero is highly perplexed by the writing sample (high score), then that writing sample is more complex and more likely to be authored by a human.
  • Burstiness score – This score assesses the diversity of the sentence structure (creativity!) in the writing sample. The higher the burstiness score, the more likely the writing is human-generated; machine writing tends to stay uniform over time.
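To make these two signals concrete, here is a minimal, illustrative sketch — not GPTZero's actual implementation. It approximates burstiness as the variation in sentence length and stands in for perplexity with a toy unigram score (a real detector would score the text with a language model):

```python
import math
import re
from collections import Counter

def sentence_lengths(text):
    # Split on sentence-ending punctuation and count words per sentence.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    # Variation in sentence length: higher = more "bursty" = more human-like.
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    var = sum((n - mean) ** 2 for n in lengths) / (len(lengths) - 1)
    return math.sqrt(var)

def unigram_perplexity(text):
    # Toy stand-in for perplexity, scoring words by their own frequencies;
    # real detectors use a trained language model instead.
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    log_prob = sum(math.log(counts[w] / total) for w in words)
    return math.exp(-log_prob / total)

uniform = "The cat sat here. The dog sat here. The bird sat here."
varied = "Rain hammered the tin roof all night. Silence. By dawn, the creek had doubled."
print(burstiness(uniform) < burstiness(varied))  # True: varied sentences score higher
```

The uniform, repetitive sample scores a burstiness of zero, while the sample with wildly different sentence lengths scores high — the same intuition GPTZero applies at scale.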
Less Helpful (easier to trick)

Future tools?

Turnitin (the current UIS plagiarism tool) is developing a detector that looks promising due to its detailed output and text parsing. 

AI Tools to Check Out

DALL·E 2 is an AI system that can create realistic images and art from a description in natural language.

Stable Diffusion 2 - another text to image AI tool

Lumen5 - AI video creation software for marketing

Soundraw - music generator

Looka - logo creator 

Podcastle - audio recording and editing platform that uses AI to create clear and professional sounding audio

Deep Nostalgia - animate faces in photos

Murf - text to speech vocal recording AI software

Legal Robot - legalese to English translator 

Cleanup.Pictures - retouch photos and remove unwanted objects


Select Recent News Articles

Definitions & Overviews

ChatGPT for Beginners


Ray Schroder, COLRS Founding Director, shared some definitions he collected for a January 2023 EDUCAUSE Quick Chat. 

  • Artificial Intelligence - the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. from Lexico.com

  • Machine Learning - Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it... learn for themselves. from expert.ai

  • Deep Learning - In practical terms, deep learning is just a subset of machine learning. In fact, deep learning technically is machine learning and functions in a similar way (hence why the terms are sometimes loosely interchanged). However, its capabilities are different. Basic machine learning models do become progressively better at whatever their function is, but they still need some guidance. If an AI algorithm returns an inaccurate prediction, then an engineer has to step in and make adjustments. With a deep learning model, an algorithm can determine on its own if a prediction is accurate or not through its own neural network. from Zendesk

  • Supervised and Unsupervised Learning - In a supervised learning model, the algorithm learns on a labeled dataset, providing an answer key that the algorithm can use to evaluate its accuracy on training data. An unsupervised model, in contrast, provides unlabeled data that the algorithm tries to make sense of by extracting features and patterns on its own. from nvidia

  • Neural Network in AI - A neural network is a method in artificial intelligence that teaches computers to process data in a way that is inspired by the human brain. It is a type of machine learning process, called deep learning, that uses interconnected nodes or neurons in a layered structure that resembles the human brain. It creates an adaptive system that computers use to learn from their mistakes and improve continuously. Thus, artificial neural networks attempt to solve complicated problems, like summarizing documents or recognizing faces, with greater accuracy. from Amazon

  • GPT-3 - In May 2020, OpenAI published a groundbreaking paper titled Language Models Are Few-Shot Learners. They presented GPT-3, a language model that holds the record for being the largest neural network ever created, with 175 billion parameters. It’s an order of magnitude larger than the largest previous language models. GPT-3 was trained with almost all available data from the Internet, and showed amazing performance in various NLP (natural language processing) tasks, including translation, question-answering, and cloze tasks, even surpassing state-of-the-art models. from towardsdatascience.com

  • Generative AI - refers to unsupervised and semi-supervised machine learning algorithms that enable computers to use existing text, audio and video files, images, and even code to create new possible content. Generative AI allows computers to abstract the underlying patterns related to the input data so that the model can generate or output new content. from indiaai.gov

  • ChatGPT - is a large language model chatbot developed by OpenAI based on GPT-3.5. It has a remarkable ability to interact in conversational dialogue form and provide responses that can appear surprisingly human. Large language models perform the task of predicting the next word in a series of words. Reinforcement Learning with Human Feedback (RLHF) is an additional layer of training that uses human feedback to help ChatGPT learn the ability to follow directions and generate responses that are satisfactory to humans. from Search Engine Journal
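The "predicting the next word in a series of words" idea in the ChatGPT definition above can be illustrated with a toy bigram model: count which word follows which in some text, then predict the most frequent successor. (ChatGPT's actual model is a transformer with billions of parameters; this sketch only shows the prediction principle.)

```python
from collections import Counter, defaultdict

# A tiny training "corpus"; a real model trains on much of the Internet.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which: a bigram "language model".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent successor of `word` in the training text.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often in the corpus
```

Chaining such predictions word by word generates text; what RLHF adds on top, per the definition above, is training the model to prefer continuations that humans rate as helpful.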