What is ChatGPT?
A generative artificial intelligence chatbot that uses a large language model to predict likely next words and compose natural language.
We asked ChatGPT, "What is ChatGPT?" and received this response:
"ChatGPT is a pre-trained language model developed by OpenAI. It uses a deep learning technique called the Transformer to generate human-like text. ChatGPT can be fine-tuned for a variety of natural language processing tasks such as language translation, text summarization, text completion, and conversation modeling. The model was trained on a large dataset of text and can generate a wide range of responses to different inputs, making it suitable for tasks such as chatbot development and other applications that require natural language understanding and generation capabilities."
"The best way to think about this is you are chatting with an omniscient, eager-to-please intern who sometimes lies to you."
Ethan Mollick, a professor at the University of Pennsylvania's Wharton School of Business, in an NPR interview on December 19, 2022
Generative artificial intelligence tools like ChatGPT should be viewed as tools that faculty, staff, and students can employ. They are here and will not be going away. There are affordances, challenges, and risks related to using these technologies. As a university, we believe that we need to learn to use AI technologies and to teach our students to use and understand them in productive and balanced ways.
One persistent risk, for instance, is that ChatGPT and other large language models (LLMs) can generate unsubstantiated or untrue content, even though this risk may diminish as the tools develop.
We've seen many other chatbots in the past, though. How are ChatGPT, Google Bard, Bing Chat, and others different from earlier AI tools? Why is everyone suddenly talking about this?
View the Explore ChatGPT Session that COLRS hosted with the UIS Learning Hub and Center for Faculty Excellence and explore the resources below to learn more.
The University of Illinois System convened a focus group from across the three universities to create guidance documents on Generative AI, including specific information for instructors and students.
Resources
Syllabus and Policy Statements on Artificial Intelligence (AI)
Syllabus Resources
- Lance Eaton, from College Unbound, is collecting classroom policies on AI.
- The Sentient Syllabus Project has some language to consider.
- Check out Ryan Watkins's Update Your Course Syllabus for ChatGPT on Medium.
Graham Peck Syllabus Statement - Spring 2023
At a recent Center for Faculty Excellence discussion, Graham Peck, UIS Professor of History, shared his approach to academic integrity and AI. This is his syllabus statement from Spring 2023:
Recent developments in AI, such as chatGPT, make it important to underscore that the core principle of academic integrity is doing your own work. We can only learn by applying our own minds to a problem. Learning can be done individually or cooperatively with others, but we must be participants in the learning. Any other approach precludes us from learning. This is the fundamental reason that academic integrity policies prohibit practices such as cheating on exams, plagiarizing, or paying others to take tests or write papers. In all such cases, students are not doing their own work and cannot be learning or demonstrating their learning. Correspondingly, universities do not wish to give credit to students who submit others’ work as their own. The submissions lack integrity because the students did not do their own work.
Although AI is relatively new, its use for illicit purposes presents identical dangers to academic integrity. For instance, chatGPT offers students a new alternative to writing their own essay. However, the outcome is the same: any person who uses it for that purpose sacrifices the opportunity to learn how to write for themselves. It is morally wrong because it violates university policies designed to promote learning and measure it fairly, but it is also a self-defeating strategy of the highest magnitude. The best reason for all of us to do our own work is to give ourselves the chance to learn and to grow. That is true at the University, and it is true in life.
Anne-Marie Hanson Syllabus Statement - Summer 2023
Anne-Marie Hanson, UIS Associate Professor of Environmental Studies, adapted AI tool syllabus language from multiple sources for her Summer 2023 undergraduate course.
AI Tools Policy
As most of you are aware, more AI tools (such as ChatGPT) have recently become available to the general public. I am aware that students have access to these tools when creating and preparing assignments, discussions, or other deliverables. I do not forbid the use of these tools, as they can be a valuable source of information. I do ask that students follow these guidelines on how to responsibly use these tools:
- You are not allowed to use an AI tool to create your complete deliverables (images, video, text, or any other kind). This makes it impossible for me to assess students on their ability to generate, develop, and effectively communicate original ideas.
- If you want to use AI-generated artifacts in your deliverables, you are required to either:
  - Cite the AI tool as a source (use quotation marks, include the date and the prompt you used, and add it to the references). Or,
  - Create an acknowledgement statement that describes how AI tools were used for this deliverable.
- The option you choose is dependent on how you use the AI tools and specifically on the difference between input and output (i.e. using AI to create vs to correct).
- If you do not follow the points above, I will consider you to be in violation of the Academic Integrity Policy and at risk of committing plagiarism.
Further aspects to consider when using AI tools:
- If you want to use AI tools as a source of inspiration, make sure that you double-check the information, declare the use of AI, and also make use of other (non-AI) sources. The answers given by AI tools are not always correct.
- Currently, most openly available AI tools were trained on data up to 2021 and are therefore not able to answer questions about recent events or research.
- Be aware that intellectual property (IP) issues may emerge when you upload data, written work, prototypes, or other ideas into these tools, as everything will be stored in their cloud and may become publicly available. Be especially careful when working with research data.
Cheating with AI: Classroom Strategies and Detection Tools
There is an arms race taking place between AI and AI detection tools. It is a race that isn't winnable for instructors. Becoming the cheating police does not teach our students to value their voice, agency, and creativity. Rather, we need to encourage our students to view conversational AI as a tool. Conversations and classroom activities that demonstrate intentional uses for AI are effective strategies with many students.
In classroom discussion and reflection on AI, we need to position humans -- instructors and students! -- as experts and critics of the tool and its output. We recommend reinforcing that position often.
Students who are intrinsically motivated to learn are not likely to cheat. To encourage intrinsic motivation, try to provide room and support for autonomy, mastery, and purpose in your assessments.
- Allow students to self-direct a portion of the assignment. Can you allow them to pick a topic or direction? Can they customize it to address a project at work or to further research in a favorite topic?
- Scaffold your large assignments to provide students the confidence that completing the assessment is possible. Hopeless students who aren't sure what to do next are more likely to cheat.
- Tell your students your "why." What is the purpose of the work? What will it enable them to do in your field, later in the class, or at a job? Let them know why they should care about this work.
Review your assessments and activities. Are they asking students to recite facts? If so, those are easy targets for AI chatbot use. Try to put a twist on those assessments: add an element of comparison, ask students to reflect on their own experiences, or add a local angle. These types of work are beyond the capabilities of current AI chatbots.
Another effective action is to create explicit assignment instructions that address what you view as appropriate uses of AI in each of your assignments and what you view as cheating. This keeps your knowledge of AI tools centered and provides students guardrails for their behavior. Show students that using AI tools can save time and reduce busywork without relinquishing their agency and voice or missing opportunities for growth and learning.
Even if you do all of this, some students will still cheat. It will happen. What are instructors to do?
- First, get to know AI writing style. We suggest using ChatGPT and other conversational AI tools to learn to recognize the consistent output (writing style) of the chatbot. Your observations will be your strongest tool.
- Second, check the suspected writing with several tools. When you suspect AI writing, we recommend feeding a large amount of the suspected text into several detectors and comparing the results. Longer pieces of text (250+ words) give a detector more evidence to consider and more patterns to find in the writing. Know that the detectors are fairly easy to trick: with some light editing and error introduction, you can easily change the result from "likely an AI chatbot" to "likely human." Also be aware that creative outputs from ChatGPT (writing in an accent or style, writing a poem or lyrics) aren't reliably detectable by these tools.
- Third, have a conversation with your student(s). Ask them if they used an AI chatbot in their work. Some students' writing can sound as stilted as an AI chatbot's; they may not have cheated. Because a chatbot is a tool rather than a source, using one isn't plagiarism in the traditional sense, and it can be a gray area for students if you haven't specifically addressed it in class or in your syllabus. Consider giving them a chance to redo the work after the conversation, and tell them the consequences for using an AI chatbot moving forward (zeros, academic integrity violations, etc.).
The AI Detectors
Search Engine Land released a comparison of AI detectors. AI detection tools "grade content based on how predictable the phrase choices are within a piece of content." In other words, does the text align with the pattern an AI model would likely follow in creating the content? The detectors generally look at two qualities (a rough sketch of how perplexity can be estimated follows this list):
- Burstiness: the variation in sentence length and tempo. Human writing tends to mix long and short sentences, while AI text often has a more uniform, predictable rhythm.
- Perplexity: how predictable the word choices are to a language model. Lower perplexity (less surprising wording) is treated as a signal that the text may be machine-generated.
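For those curious what a perplexity score actually measures, below is a minimal sketch, assuming Python with the Hugging Face transformers and torch packages installed. It scores a passage with the small, publicly available GPT-2 model; it is not the code behind any commercial detector, and a low score is only one signal, not proof, of AI authorship.

```python
# A minimal sketch of perplexity scoring with GPT-2
# (assumes the transformers and torch packages are installed).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Ask the model to predict each token in the text from the tokens before it.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # out.loss is the average negative log-likelihood per token;
    # exponentiating it gives the perplexity of the passage.
    return torch.exp(out.loss).item()

# Lower scores mean the wording was easier for the model to predict.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```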
CopyLeaks is a mainstream plagiarism detection tool with results that are easy to understand. Be sure to hover over the text in the results area for details.
GPTZero was developed by Princeton undergraduate Edward Tian and uses a GPT-2-based model. You may upload entire files to this tool in addition to copying and pasting text. It rates writing as human- or AI-authored using "perplexity" and "burstiness" scores.
Turnitin, the current UIS plagiarism tool, has released a beta tool for AI detection. It is not yet reliable. If you use the plagiarism detector, you will see an AI score in the report. We do not recommend relying on this tool.
AI Tools to Check Out
OpenAI - The earliest mover, with the viral web front-end and the name ChatGPT. The free version uses GPT-3.5 as the model, which is suitable for most tasks, though the latest model, GPT-4, brings significant improvements, particularly in reasoning and avoiding hallucinations (think of AI as an intern). An iPhone app was released in mid-May 2023 with a phased launch across the world, and an Android app is planned. If you sign up for the paid plan ($20 a month), you get early access to features such as the ability for the bot to access the web and use plugins that extend its functionality. The Code Interpreter feature is currently in alpha and promises to be strong at data analysis.
Google Bard - A free interface available to everyone with a Google account. It does reasonably well on most tasks and has access to the web by default. It currently does not match the latest models from OpenAI, but it may improve quickly given Google's focus on AI. A feature similar to plugins, labeled "tools," was announced in May 2023. Some AI capabilities are already integrated into Google tools.
Microsoft Bing Chat - A free interface available to all, powered by OpenAI models. It currently requires the Microsoft Edge browser to access the chat feature. It has default access to the web and runs the latest models from OpenAI, so for normal usage it can be considered a free alternative to OpenAI's paid plan. Microsoft has also enabled plugins for several services and will continue to add more, and it has committed to following the same plugin standard as OpenAI, so plugins developed once will work across both interfaces. Given that Microsoft currently offers OpenAI models for free, supports plugins, and includes references in its responses, it seems the best free option.
ChatPDF - One specific use case that might be very relevant for most users is to be able to ‘chat’ with a PDF document. This might be a research article, a printout of a web page (saved as .pdf), or a PDF of a book. It is a very simple and elegant solution. There are limitations on how much you can use it for free. This feature will be subsumed in one of the other platforms soon, but for now, it is an excellent tool.
Perplexity - If you are looking for a response that combines search results with a generative model and presents them with references, this is a good option. Microsoft Bing Chat also includes references in its responses, so this is not unique to Perplexity, though the experience is better with this tool.
Adobe Firefly - Generative AI can be used for images, audio, and video. Adobe Firefly (beta) is currently available via university license and lets you quickly create graphics. Two powerful features are "text to image," which generates images from a detailed text description, and "generative fill," which makes it easy to remove objects or paint in new ones by supplying text descriptions. Other features include text effects, generative recolor, 3D to image, and extending images.
For a full searchable list of AI Tools, check out the AI Scout Directory.
Select Recent News Articles
Crash: How Computers Are Setting Us Up For Disaster - The Guardian
"FutureTools Collects & Organizes All The Best AI Tools So YOU Too Can Become Superhuman!" - AI Tool List
Disinformation Researchers Raise Alarms About A.I. Chatbots - NY Times
Microsoft’s Bing is an emotionally manipulative liar, and people love it - The Verge
Bing’s A.I. Chat Reveals Its Feelings: ‘I Want to Be Alive. 😈’ - NY Times
A Conversation With Bing’s Chatbot Left Me Deeply Unsettled - NY Times
Bing Is Not Sentient, Does Not Have Feelings, Is Not Alive, and Does Not Want to Be Alive - Vice
Microsoft Considers More Limits for Its New A.I. Chatbot - NY Times
Science Fiction Magazines Battle a Flood of Chatbot-Generated Stories - NY Times
The Imminent Danger of A.I. Is One We’re Not Talking About - NY Times
Why Chatbots Sometimes Act Weird and Spout Nonsense - NY Times
Definitions & Overviews
ChatGPT for Beginners
Definitions
Ray Schroeder, COLRS Founding Director, shared some definitions he collected for a January 2023 EDUCAUSE Quick Chat.
Artificial Intelligence - the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. from Lexico.com
Machine Learning - Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves. from expert.ai
Deep Learning - In practical terms, deep learning is just a subset of machine learning. In fact, deep learning technically is machine learning and functions in a similar way (hence why the terms are sometimes loosely interchanged). However, its capabilities are different. Basic machine learning models do become progressively better at whatever their function is, but they still need some guidance. If an AI algorithm returns an inaccurate prediction, then an engineer has to step in and make adjustments. With a deep learning model, an algorithm can determine on its own whether a prediction is accurate through its own neural network. from Zendesk
Supervised and Unsupervised Learning - In a supervised learning model, the algorithm learns on a labeled dataset, providing an answer key that the algorithm can use to evaluate its accuracy on training data. An unsupervised model, in contrast, provides unlabeled data that the algorithm tries to make sense of by extracting features and patterns on its own. from nvidia
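As a small illustration of the labeled-versus-unlabeled distinction above, the hedged sketch below uses Python and the scikit-learn library (an assumption; it is not part of any UIS system). The supervised model is trained against an answer key, while the clustering model is given only the raw data.

```python
# A minimal sketch of supervised vs. unsupervised learning
# (assumes the scikit-learn package is installed).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)  # X: flower measurements, y: the "answer key" labels

# Supervised: the algorithm learns from the labeled dataset (X paired with y).
classifier = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised prediction for the first flower:", classifier.predict(X[:1]))

# Unsupervised: the algorithm sees only X and extracts structure on its own.
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print("Unsupervised cluster assignments:", clusters[:5])
```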
Neural Network in AI - A neural network is a method in artificial intelligence that teaches computers to process data in a way that is inspired by the human brain. It is a type of machine learning process, called deep learning, that uses interconnected nodes or neurons in a layered structure that resembles the human brain. It creates an adaptive system that computers use to learn from their mistakes and improve continuously. Thus, artificial neural networks attempt to solve complicated problems, like summarizing documents or recognizing faces, with greater accuracy. from Amazon
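To make the idea of "interconnected nodes in a layered structure" concrete, here is a toy forward pass through a two-layer network, assuming only Python and the numpy package. Production models such as ChatGPT arrange billions of such weights across many layers; this sketch shows only the basic mechanism, not any real system.

```python
# A toy two-layer neural network forward pass (assumes numpy is installed).
import numpy as np

rng = np.random.default_rng(seed=0)

x = rng.normal(size=4)            # one input example with 4 features
W1 = rng.normal(size=(4, 8))      # weights connecting the input layer to 8 hidden nodes
W2 = rng.normal(size=(8, 2))      # weights connecting the hidden layer to 2 output nodes

hidden = np.maximum(0.0, x @ W1)  # each hidden node sums its inputs, then applies ReLU
output = hidden @ W2              # output nodes sum the hidden activations
print("Raw output scores:", output)

# Training would compare these scores to known answers and nudge W1 and W2
# to shrink the error; that feedback loop is the "learning" in deep learning.
```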
GPT-3 - In May 2020, OpenAI published a groundbreaking paper titled Language Models Are Few-Shot Learners. They presented GPT-3, a language model that at the time was the largest neural network ever created, with 175 billion parameters. It’s an order of magnitude larger than the largest previous language models. GPT-3 was trained with almost all available data from the Internet, and showed amazing performance in various NLP (natural language processing) tasks, including translation, question-answering, and cloze tasks, even surpassing state-of-the-art models. from towardsdatascience.com
Generative AI - refers to unsupervised and semi-supervised machine learning algorithms that enable computers to use existing text, audio and video files, images, and even code to create new possible content. Generative AI allows computers to abstract the underlying patterns related to the input data so that the model can generate or output new content. from indiaai.gov
ChatGPT - ChatGPT is a large language model chatbot developed by OpenAI based on GPT-3.5. It has a remarkable ability to interact in conversational dialogue form and provide responses that can appear surprisingly human. Large language models perform the task of predicting the next word in a series of words. Reinforcement Learning with Human Feedback (RLHF) is an additional layer of training that uses human feedback to help ChatGPT learn the ability to follow directions and generate responses that are satisfactory to humans. from Search Engine Journal
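To illustrate "predicting the next word in a series of words," the sketch below asks the small, publicly available GPT-2 model for its most likely next tokens after a prompt, assuming Python with the Hugging Face transformers and torch packages. ChatGPT itself cannot be downloaded, so GPT-2 stands in purely for illustration, and the RLHF training layer described above is not shown.

```python
# A minimal sketch of next-token prediction with GPT-2
# (assumes the transformers and torch packages are installed).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Academic integrity means doing your own"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits   # a score for every vocabulary token at every position

next_token_scores = logits[0, -1]     # scores for the token that would come next
top5 = torch.topk(next_token_scores, k=5)
print("Top guesses for the next word:")
for token_id in top5.indices:
    print(repr(tokenizer.decode(int(token_id))))
```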