How ChatGPT Could Revolutionize Academia

The AI chatbot could enhance learning, but also creates some challenges


In November, OpenAI launched ChatGPT, one of the most significant real-world applications of artificial intelligence to date. The tool allows its users to quickly generate sophisticated textual content that is uniquely constructed.

Such content can therefore likely evade detection by traditional plagiarism tools, which raises a concern for universities: how to assess students’ learning and skill development.

Many types of assessments used to evaluate students require them to demonstrate they have understood new materials by investigating the content and collating their learning in the form of a written essay or report. The role of an academic assessor has been to evaluate individual students’ submissions to gauge the breadth and depth of their understanding of the topic.

If students use ChatGPT to write an essay or report, the problem is that the output provides little, if any, indication of the quality of their learning. The AI tool offers well-written, researched content without requiring a student to search for detailed sources.

Unfortunately, this type of problem is not new. For many years, students have been able to copy text from essay banks. Anti-plagiarism tools such as iThenticate and Turnitin have deterred the use of such repositories. Although those tools have been successful, they rely on sophisticated pattern matching, which leaves them ill-equipped to detect the language constructs produced by advanced AI.

Another way students get around writing original content is through essay mills, which provide writing services for a fee. It could be argued that the open nature of ChatGPT has leveled the playing field between those who can and can’t afford to pay for the services.

A different assessment for STEM students

In the fields of science, technology, engineering, and mathematics, a far broader assessment strategy than essays is used. To meet the learning outcomes, STEM students must demonstrate skills such as programming. Because ChatGPT can solve many mathematical problems and generate and debug code, however, the computing field cannot simply ignore the AI evolution. And ChatGPT’s capabilities are certain to improve over time.

The immediate reaction from academia is likely to be to adopt traditional assessment strategies that comprise predominantly closed-book, exam-style assessments.


Before adopting that obvious quick fix, though, it is important to reflect on the reasons why a broad assessment strategy was adopted in the first place.

Engineering and computing students must tackle large, complex problems and adopt collaborative strategies. The skill sets are not easily or accurately tested individually in an examination hall in a three-hour period. To some extent, essays—and, perhaps even more controversially, doctoral theses—are already not well aligned to the needs of many employers.

Issues universities need to consider

ChatGPT and similar technologies will continue to shape the future of what we call the World of Work (WoW). As employers increasingly adopt advanced AI, the academic world will need to amend its teaching and assessment practices. ChatGPT and other AI tools are already being adopted in industry as a way to automate mundane tasks. The big question is: Should educators ban such developments or embrace them?

Here are some issues that universities might want to consider.

  • Awareness is the best line of defense. Educate students and staff about the strengths and weaknesses of AI-generated content. For example, when does reliance on localized or peer-reviewed content matter, and when is a quick-and-dirty content review sufficient?
  • Develop assessment and other educational practices for the WoW that deliberately embrace or reject the use of AI-generated content. Authentic assessment tasks tied to a local context or problem, for example, cannot be answered well by generic generated text and encourage students to explore and stay curious. Project-based tasks are good examples: they let students investigate ChatGPT and similar tools while ultimately demonstrating learning and skill on their own. Assessment criteria also will need to recognize sophisticated use of AI while giving greater weight to elements that demonstrate higher-order skills such as evaluation and synthesis.
  • Reassure staff that new tools are emerging to detect the use of AI. Princeton student Edward Tian has already created one such tool, GPTZero, which measures statistical properties of a text, such as how predictable it is to a language model, to flag writing likely generated by ChatGPT.
  • Adapt to embrace opportunities brought on by innovation. Consider using ChatGPT and similar tools to advance pedagogy and curricula. Everyone finds unnecessary repetition and menial tasks tiresome. Use the tools to stretch and enable more innovation by students.
  • Reinforce principles of professional standards and ethics. Advance a culture of academic integrity, acknowledging that new tools will continue to emerge. If an ethical culture is ingrained in the group ethos, students and scholars will use new AI tools appropriately.
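The detection heuristic behind tools such as GPTZero rests on a statistical idea: machine-generated text tends to be unusually predictable (low perplexity) under a language model. The sketch below is a toy illustration only, using a character-bigram model rather than the large language models real detectors rely on; the reference text and all function names here are hypothetical.

```python
# Toy illustration of perplexity-based AI-text detection.
# Real detectors use large language models; this sketch uses a simple
# character-bigram model purely to show the mechanics of the heuristic.
import math
from collections import Counter

def train_bigram(text):
    """Count character bigrams and unigrams in a reference corpus."""
    bigrams = Counter(zip(text, text[1:]))
    unigrams = Counter(text)
    return bigrams, unigrams

def perplexity(sample, bigrams, unigrams, vocab_size=128):
    """Average per-character perplexity under the bigram model,
    with add-one smoothing so unseen pairs don't get zero probability."""
    log_prob = 0.0
    for a, b in zip(sample, sample[1:]):
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size)
        log_prob += math.log(p)
    n = max(len(sample) - 1, 1)
    return math.exp(-log_prob / n)

# Hypothetical reference corpus standing in for the model's training data.
reference = "the quick brown fox jumps over the lazy dog " * 50
bg, ug = train_bigram(reference)

predictable = "the quick brown fox jumps over the lazy dog"
surprising = "zq xv jk wq pz vx qj kz"

# Text that closely follows the model's statistics scores a lower
# perplexity; the detection heuristic flags unusually low scores.
print(perplexity(predictable, bg, ug) < perplexity(surprising, bg, ug))
```

The same comparison is what a real detector does at scale: score a submission under a strong language model and flag text whose predictability is implausibly high for a human writer.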

ChatGPT and similar tools should be seen as accelerating necessary change. We recommend an approach where teaching and learning adapt to recognize the opportunities posed by new technologies and continue to foster a culture of exploration and curiosity. Ultimately, our priority is to provide graduates ready to face the ever-changing WoW.

The views expressed here are the authors’ own and do not represent positions of IEEE Spectrum, The Institute, or IEEE. The authors write in their personal capacity.

This article appears in the June 2023 print issue as “How ChatGPT Could Transform Academia.”

The Conversation (7)
James Isaak 27 May, 2023

I submitted a Bing Chat paper, "with citations," to Alpha Zero, including the phrase "Source: Conversation with Bing, 3/20/2023" ... it reported: "Your text is likely to be written entirely by a human" --- so this particular tool has some flaws at this time.

My suggestion to instructors: have students create a report using a chatbot, then:

  • use strikethrough to indicate text they find to be incorrect, or not a useful contribution
  • highlight their own added text
  • add a paragraph at the end describing their experience


David Haney 16 Mar, 2023

I'm preparing for an adjunct course I'm teaching next month on the Fundamentals of Management, and I am finding the hubbub around ChatGPT and other AI bots very compelling regarding Management Decision-Making. In a simple exercise I conducted around Q&A for Business Cost Cutting during a down economy, I found that most of the answers I was receiving were nothing more than regurgitation of online / copyrighted book content, which I expected. But it does raise the question of whether using the AI academically to 'look up' material as a research assistant is a valid use case.

Donald Olshove 03 Mar, 2023

Emotional subtleties like sarcasm are often lost in the words. This is an example of the kind of information that chatbots can't handle.