GPT Chatbots and Academic Integrity: Why Generative AI Should be Allowed in the Classroom

The International Baccalaureate, a prestigious college prep organization, recently allowed students to use ChatGPT on the IB exam as long as they do not claim it as their own work. This surprising development is the latest in a string of headlines related to generative AI. Many school systems have already banned ChatGPT out of fear of academic dishonesty, and Turnitin has announced AI-detection software set to roll out in April. Generative AI has improved rapidly, but students will not completely forgo traditional learning in favor of bot-generated responses. Educators should integrate AI into the classroom so students learn how to use the developing technology effectively and ethically.

Limitations with AI

An educator’s biggest fear is that students will use ChatGPT to do their work for them. While this is a valid fear, it is often exaggerated. Olya Kudina’s students were tasked with comparing their own assignments with an AI-generated one. They were initially blown away by “how quickly the chatbot rendered information into fluid prose,” but the impression faded once they reread the bot’s work. The students realized that the bot was using incorrect information and could not provide sources for its claims. Kudina’s students concluded that “copying from ChatGPT wouldn’t actually net them a good grade.”

On the other hand, Pieter Snepvangers received a passing grade on a 2,000-word essay written by AI. While AI-generated work is passable at first glance, it falls short of high-quality academic work: Snepvangers’ essay was highly generalized and didn’t include any citations, and the lecturer grading it said it had “fishy language.”

Many teachers are horrified at the thought that students will be able to type an essay prompt into an AI generator and submit a paper in about 20 minutes; however, it is not always that simple. ChatGPT can only produce up to 365 words at a time, meaning that a student must ask the AI multiple questions to meet a higher word count. Snepvangers asked ChatGPT ten different questions related to the essay prompt, then selected the best paragraphs and “copied them in an order that ‘resembled the structure of an essay.’” Even when trying to use ChatGPT, students must use fundamental writing skills to create a coherent paper.

An AI is unlikely to do well on any assignment that requires anything more than a surface-level understanding of the material.  AI can’t properly source an article, and it doesn’t tell its users where its information came from. Any analysis produced by AI will be far too shallow to receive a good grade because the AI is unable to understand the deeper themes of texts or form original perspectives.

AI and Academic Integrity

Many institutions are grappling with the ethics of allowing AI in the classroom. Some simply require students to state when they are using AI-generated text, while others claim that any use of AI is plagiarism. Kalley Huang equates the current status of ChatGPT to Wikipedia in the early 2000s, which educators also saw as the end of traditional education. Villanova University’s chair of the Academic Integrity Program, Alice Dailey, believes that schools should allow AI but should develop a blanket policy that covers a wide range of circumstances. The advancement of AI technology is going to force educators to revolutionize the way they evaluate student progress.

Many schools are moving away from take-home essays and toward “in-class assignments, handwritten papers, group work and oral exams” to combat AI plagiarism. Stephen Marche claims that the essay is the “way we teach children how to research, think, and write.” Though these skills are important, there are better ways to teach them in an AI world. Antony Aumann requires students to write their first draft in the classroom and explain each revision. This method not only teaches students how to write an essay but also forces them to think critically about why they are writing it the way they are.

AI in the Classroom

Instead of banning ChatGPT because of its risk to academic integrity, educators should use it as a tool in the classroom. The technology is not going away and students will be unprepared for the future if they are not taught how to use AI efficiently and ethically.

Students can only learn the limitations of AI by using it. Ethan Mollick is not only allowing AI in his classroom, he is mandating it. Mollick’s AI policy requires students to assume the AI is wrong, adding that they will be responsible for any errors or omissions it produces. Mollick also requires his students to include a paragraph at the end of their work disclosing their use of AI, as well as the prompts they used.

Contrary to popular belief, turning in a research paper is not the height of academic evaluation. According to Kathy Hirsh-Pasek, it is more important that students develop writing skills, such as how to craft a thesis and support it, than that they simply turn in a paper. Students can learn these necessary skills by interacting with AI.

Donnie Piercey’s fifth-grade class plays a game of “find-the-bot,” which asks students to pick the AI-written summary out of a lineup of their peers’ work. The students said the exercise helped them identify proper capitalization and punctuation, as well as how to correctly summarize information. It also led to a discussion of writing voice and why the AI text sounds “stilted.”

AI text generators are tools that should be embraced in the classroom. Like other technologies that were new for their time, AI is not going away. The best thing educators can do for their students is teach them efficient and ethical ways to use AI text generators while still honoring academic integrity and building essential writing skills.

AI Writing, Self-Publishing, and the Culture of Instant Gratification

The digital age has ushered in a culture of instant gratification, where people expect to get what they want when they want it. This is especially true with the advent of AI writing and self-publication tools that make it easier than ever for anyone to become an author or content creator almost overnight. But while these new technologies have made creating and sharing content faster, there are some potential drawbacks as well.

The traditional path to publication is a lengthy one. From submission to publication can take an average of nine to eighteen months, or even upwards of two years, and that timeline excludes the writing and editing process, which, depending on the book, can add months or years more. In the age of instant everything, that is simply unacceptable.

Enter Self-Publication

Self-publication has gone through many evolutions in the digital age, with each iteration becoming more accessible to the public. Desktop publishing was introduced in the 1970s with the adoption of word processing software; though this form of self-publication was accessible to the masses, it was still costly. “Print on demand” then revolutionized the self-publishing world: publishers were no longer responsible for mass printing costs, inventory, and distribution, which further opened self-publication to the public. The blog era allowed authors to reach the masses and publish their works as PDFs, with even Stephen King joining in.

Amazon Kindle Direct Publishing was introduced in 2007 to “democratize” the publishing industry. Amazon made it easier than ever to self-publish a book, offered authors 70% royalties, and has since grown to offer more incentives and opportunities. In 2011, authors who gave Kindle exclusive digital rights were offered KDP Select, whose members received a higher royalty percentage and promotional tools. Amazon expanded KDP again in 2016 to include print publishing and has added hardcover and lower-cost color printing options in the years since. Adapting to the bite-sized market, Amazon introduced Kindle Vella in 2021, allowing authors to publish serial-style stories.

Self-publication and all of its advancements have reduced publishing time from a year and a half to five minutes. It makes sense that the writing process is next on the proverbial chopping block.

Instant Gratification and Independent Authors

In the age of instant gratification, authors are racing against the clock to produce content before readers move on to another writer. Jennifer Lepp, a self-published “cozy paranormal mystery” writer, gets about four months to produce a new work. That deadline is doable, barring creative setbacks, but when those setbacks do happen, they can be catastrophic for reader engagement. Enter AI, and in Lepp’s case Sudowrite, an AI writing tool geared specifically toward creative writers. Before we ask whether we should use AI, we should understand what it is.

AI Writing

AI has gained increasing notoriety in the past few years by tackling everything from editing and proofreading to content creation in a matter of minutes. Most AI geared toward writing is built on GPT-3, a model specializing in text completion that can “understand and generate natural language.” Proofreading, editing, and even writing can be handed off to most AI software with relative ease; in fact, the introductory paragraph of this article was written by Jasper, an AI program commonly used for text generation. AI is incredibly useful for writing shorter bits of text, and it saves writers a great deal of time, which matters in today’s fast-paced world that demands new content at all times.
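
To make “text completion” a little more concrete, here is a minimal sketch that calls a GPT-3-family completion model through OpenAI’s Python library, using the older pre-1.0 interface. The prompt, model name, and settings are illustrative assumptions rather than a recommendation, since OpenAI’s API surface and model lineup change over time.

```python
# A minimal sketch of text completion with the OpenAI Python library
# (legacy pre-1.0 interface). Model name and parameters are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.Completion.create(
    model="text-davinci-003",   # a GPT-3-family completion model (assumed)
    prompt="Continue this scene: the detective opened the cellar door and",
    max_tokens=120,             # cap the length of the completion
    temperature=0.7,            # higher values give more varied prose
)

print(response["choices"][0]["text"].strip())
```

The model does nothing more than predict likely next words for the prompt, which is why the output reads fluently but carries no guarantee of accuracy.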

The Ethical Question of AI Writing

Just because something is useful does not mean it should be used. The ethical dilemma of AI writing has hounded its users since the technology’s inception. In an interview with The Verge, Jennifer Lepp expanded on the dilemmas the writing community faces when using AI tools; questions of authenticity and intellectual ownership are at the forefront of these debates.

Many authors fear that their work will no longer be original if they allow an AI to write for them. The Authors Guild argues that human art and literature are advanced by individual experiences, and that AI works will stagnate without human input. AI learns from other people’s work on the internet and compiles that knowledge to generate new text. It could be argued that such writing is plagiarized because it is informed by other authors’ works without giving them credit; however, every piece of media informs and is informed by other pieces of media. True originality is not possible, especially in a society that is so digitally connected.

Another concern with AI writing is ownership of the piece. Should the AI program be listed as the author? According to US copyright law, no, and others agree. The Alliance of Independent Authors added a new clause to its code of standards regarding AI. The code calls for the author to edit the generated text and ensure that it is not “discriminatory, libellous, an infringement of copyright or otherwise illegal or illicit.” The responsibility for legal compliance falls on the author, not the AI.

Some writers fear that the AI will take over their writing. In a Plagiarism Today article, Jonathan Bailey goes as far as to say that writers are completely powerless when using an AI. Jennifer Lepp certainly experienced this power imbalance: she would give Sudowrite an outline, press expand, and keep feeding the algorithm until it spat out a finished product, a process that left her disconnected from the stories she was creating. Now, Lepp offloads only certain details to the AI, like the description of a hospital lobby. With her current system, she still sees an uptick in productivity while remaining much more connected to her work.

The integration of AI is unavoidable if self-published authors are going to keep up with the demand of readers steeped in a culture of instant gratification. Though there should be self-imposed limits to the use of AI, authors should not avoid using it entirely. It is the responsibility of the author to inject the humanity into the writing.

I, Robot Author

Earlier this year, science-centered publisher Springer Nature produced the online textbook Lithium-Ion Batteries: A Machine-Generated Summary of Current Research. This e-book has no earth-shattering findings on the batteries, but it made headlines all the same: “This is the first time AI has authored an entire research book, complete with a table of contents, introductions, and linked references.” 

AI Now 

The first fully AI-authored e-book is here, and an AI-authored travel novel was also released this year, though only in print. In The Verge, James Vincent wrote, “For decades, machines have struggled with the subtleties of human language, and even the recent boom in deep learning powered by big data and improved processors has failed to crack this cognitive challenge,” but that no longer holds entirely true. Multiple businesses have released writing AI in the past year, all capable of producing intelligible sentences.

Google, Springer Nature, and OpenAI produce the most crucial writing AI. Google’s BERT is a language model trained to predict missing words from their surrounding context, which is how it captures the way language organically flows.
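
As a rough illustration of that context-driven prediction, the sketch below runs a public BERT checkpoint (bert-base-uncased here) through the Hugging Face transformers fill-mask pipeline; the sentence is a made-up example, and this is a simplification of how BERT is actually deployed.

```python
# A minimal sketch of BERT-style masked-word prediction using the
# Hugging Face `transformers` fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT guesses the hidden word from the words around it.
predictions = fill_mask("The author submitted her [MASK] to the publisher.")
for p in predictions:
    print(f'{p["token_str"]:>12}  score={p["score"]:.3f}')
```

Each guess comes with a probability score, which is the sense in which the model “understands” how words fit together.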

For writers, though, Springer Nature’s BetaWriter is more directly relevant: it wrote the publisher’s first machine-generated e-book, a textbook of more than 250 pages that the publishing industry has hailed as a turning point in the advancement of AI writing.

OpenAI’s GPT-2 also holds serious status for authors. GPT-2 excels at language modeling: the program can create anything from a realistic news headline to an entire story-length tale from one line of input.
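
To make “one line of input” concrete, here is a minimal sketch that prompts the publicly released GPT-2 model through the Hugging Face transformers pipeline; the prompt and generation settings are invented for illustration, not recommendations.

```python
# A minimal sketch of prompting GPT-2 via the Hugging Face
# `transformers` text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# One line of input; GPT-2 continues it.
prompt = "The lighthouse keeper had not seen a ship in forty years, until"
outputs = generator(prompt, max_length=80, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

Because the model simply continues the statistical pattern of the prompt, short runs like this stay coherent while longer ones tend to drift, which previews the continuity problems discussed below.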

Positive Aspects of Writing with AI 

Writing with AI can certainly benefit authors. The bots excel at matching the texts in their samples, which makes them well suited to writing passages in foreign languages and producing multiple versions of an e-book. Macho from PublishDrive touches on this subject, saying, “This innovation shows a more accessible future translation market by listening to or reading a book out loud and getting them translated realtime.”

While the AI bots may not be able to write precisely what the author imagines, they can compile large libraries of material easily. This research capability helps authors streamline the writing process; as Kaveh Waddell points out in an Axios article, the bots ideally function to “dig researchers out from under information overload.” That benefits both academic writers compiling educational or experimental data and pleasure writers cataloging settings, mythical characters, and historical events.

Bots also function within an established framework, making them ideal for online authors. Not only can AI compile the information necessary to make writing easier, but authors can also lean on the bots for the technical side of formatting e-book files with little error or effort. Because the bots can handle most of the formatting people can, authors and publishers should take advantage of what the bots do reliably to maximize the payoff.

Downsides to Writing with AI 

Writing with AI can also come with some real drawbacks, especially if humans don’t run interference. AI learns from what it reads by searching for patterns, but that’s all it does. Macho explains, “The key lies in EQ or EI – whatever you call it – using emotional intelligence to engage your audience.” AI can only copy the writing moves people make, because people are where the emotional intelligence comes from.

AI also struggles to grasp the deeper meaning and context that often fills writing. Even deep learning, the more thorough form of pattern analysis, can only measure so much. The resulting text, though accurate, is filled with continuity errors and cold opens. These issues regularly leave readers confused or lost, which makes AI an unreliable tool for writers on its own.

Many experts consider the AI’s self-learning from input to be the most dangerous drawback for writers. CNN and The Verge both criticized the newly available, high-quality AI writers for their potentially dangerous results. Vincent’s article in The Verge says the following: 

“In the wrong hands, GPT-2 could be an automated trolling machine, spitting out endless bile and hatred.” OpenAI’s helpful research tool could be used to publish hateful propaganda with minimal effort. These downsides and ambiguities raise many questions.

Questions About Credit

Whenever new technology develops, it takes time for rules and general knowledge to catch up. With AI still so new, authors and publishers intending to use it don’t have much guidance on doing so ethically. Coldewey of TechCrunch raises several questions about credit when writing with AI:

Who is the originator of machine-generated content? Can developers of the algorithms be seen as authors? Or is it the person who starts with the initial input (such as “Lithium-Ion Batteries” as a term) and tunes the various parameters? Is there a designated originator at all? Who decides what a machine is supposed to generate in the first place? Who is accountable for machine-generated content from an ethical point of view? 

Springer Nature credited the program itself in the textbook it produced, but that does not resolve the rest of Coldewey’s questions. In fact, those questions can’t be answered until the industry knows more about the technology. In the meantime, each user must rely on their instincts for best practices.

Best Practices for Writers and Publishers

Some experts in AI gave their advice to authors and publishers about the truly effective ways to incorporate AI into their trades. Macho wrote, “There are two big areas of publishing where AI can (and will) make an impact: content analysis, recommendation and creation; and audience analysis.” 

The best way to use AI without cutting out the human touch is to use the bots for everything but the writing itself. Publishers should use the bots for marketing: find out what types of people are viewing the content and what they prefer, then use the bots to implement a targeted marketing plan.

Authors should use AI to prepare their library for writing. The bots can compile all kinds of data, which allows the author to focus on producing the text. The bots could even, in theory, produce sample dialogue to help the author craft conversations that sound varied and natural, especially if dialogue challenges the author.

Publishers and authors can both use AI to make widespread changes, such as name or location changes. They can also use AI to reformat the text and files for publication or to determine the best place to insert features like images and other interactive aspects. With these options, authors and publishers should feel motivated to incorporate the bots more effectively. 

While writing AI advances further in ability each day, the writing it produces still has a very narrow audience, as Springer Nature’s e-book shows. People simply write with more skill and nuance. AI can be incorporated further into the writing and publishing world, but only at the writer’s and publisher’s discretion.