ChatGPT: USC Experts Break Down What You Need to Know

February 28, 2023

USC computer science experts share their perspectives on the meteoric rise of the generative AI application ChatGPT

Since its launch in November 2022, ChatGPT has gained massive popularity and widespread usage, with millions of users around the world turning to the generative AI technology for conversations ranging from the practical to the creative.

But while it holds promise for applications like drafting cover letters, debugging code, and even penning screenplays and song lyrics, the application’s popularity also opens up ethical quandaries. How accurate are the responses? How was it trained? How could it change the way we live, for better or for worse? And would you trust it to act as your therapist?

To get a sense of the potential promises and perils of ChatGPT, we turned to a group of USC computer science researchers and natural language processing experts.

How do you feel about ChatGPT’s performance overall?

“I’m impressed by its capability to generate quick, coherent, and relevant responses, as well as sustain a conversation across turns. Some particularly impressive areas are its ability to generate code, debug code, and summarize web content, and to do so in a multi-turn conversation where it can remember the previous information exchange. I’m also reassured by its capability to enforce safeguards against some potentially toxic topics, although folks have figured out workarounds since its introduction.”—Swabha Swayamdipta, Gabilan Assistant Professor and Assistant Professor of Computer Science
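That multi-turn “memory” is worth unpacking: in chat-style APIs built around models like ChatGPT, the model recalls earlier exchanges because the client resends the accumulated message history with every request. Below is a minimal sketch using OpenAI’s Python client; the model name and prompts are illustrative, not a description of how ChatGPT itself is served.

```python
# Minimal sketch of multi-turn chat: the model "remembers" earlier turns
# only because the full message history is resent with each request.
# Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,  # entire conversation so far, not just this turn
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Write a Python function that reverses a string."))
print(ask("Now add a docstring to it."))  # depends on the earlier turn
```

The second request only makes sense because the first exchange rides along in `history`; drop that list and the “memory” disappears.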

“I’m definitely excited about its performance. I’m sure many NLP researchers didn’t expect this level of performance could be achieved so soon. The high-level idea behind it isn’t complicated; the devil is in the implementation details and the computing. That’s why it is less of a scientific advancement to researchers and more a huge win for the idea that ‘scaling can give us much more.’”—Xiang Ren, Andrew and Erna Viterbi Early Career Chair and Assistant Professor of Computer Science

“In general, ChatGPT and the large-scale pre-trained language models that have come out over the last few years have been surprisingly good at unconstrained language generation. ‘Good’ meaning ‘generated content that is relevant to the prompt/question and is syntactically correct and locally coherent.’ All that said, it’s hard to say what’s surprising and what’s not with a fully closed-source model.”—Jesse Thomason, Assistant Professor of Computer Science

How does ChatGPT differ from other language generation models?

“ChatGPT and its variants are trained specifically to do well at following instructions. ChatGPT is also designed to incorporate human feedback through multiple turns of conversation. Typical language models are simply trained on text, i.e., given a piece of text, they predict which words should follow it. ChatGPT builds on the extremely large-scale language modeling that went into the creation of its predecessor, GPT-3, which was trained on nearly 45TB of data. Of course, OpenAI hasn’t released the exact details about how ChatGPT was trained, so many details remain unknown.”—Swabha Swayamdipta
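For readers unfamiliar with that training objective, here is a minimal sketch of plain next-word prediction, using the openly released GPT-2 model through Hugging Face’s transformers library as a stand-in (ChatGPT’s own weights are not public, so the model choice is purely illustrative):

```python
# Minimal sketch of the "predict the next word" objective: given a piece
# of text, a causal language model scores every candidate next token.
# GPT-2 is used as an open stand-in for illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The distribution over the *next* token comes from the last position.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)]):>10s}  p={float(prob):.3f}")
```

An instruction-tuned system like ChatGPT starts from this same next-token objective and is then, per the description above, further tuned with human feedback toward responses that follow instructions.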

What are some of its limitations? Can we trust this type of system?

“From my perspective, the most concerning issue is what we call ‘hallucination’ during ChatGPT’s conversations with humans. A response can seem quite credible to a lay reader in its tone and phrasing but be totally off in terms of factuality. This could be harmful in educational settings and could mislead decision-makers into building on the wrong evidence.”—Xiang Ren

“While it was not trained to solve math problems, some of the basic mistakes it makes are rather disappointing. At a higher level, the most fundamental limitation of ChatGPT is its unreliability. For some questions it provides relevant, concise, and appropriate answers, while for others it is plain wrong. And it cannot predict when it is wrong. This will be a fundamental roadblock to its deployment.”—Swabha Swayamdipta

“I think ChatGPT and related models will make it much easier for state actors and malware or fraud operations to flood user-contributed content sites and email with large volumes of coherent, harder-to-detect-as-generated spam.”—Jesse Thomason

Do you have concerns regarding its potential to generate fake content and how this might impact society? (Could it, for instance, fuel a scientific integrity crisis?) 

“This is definitely a risk that any language model carries: the tendency to ‘hallucinate’ new information that might seem real but is not. However, I believe we will soon get better at telling ChatGPT generations apart from human-written language, or better yet, we will build technology that can. And sure, it can fool peer reviewers, but it cannot appear at conferences or run experiments or do fieldwork. There might be a few cases of it successfully misleading humans before we learn to spot these fakes, though.”—Swabha Swayamdipta

Are there any areas where you see an opportunity for ChatGPT to help people do their jobs?

“I think it might bring about a different era of writing, where instead of writing from scratch, writers will learn to use ChatGPT as an assistant that provides them with ideas. The same could be said of programmers. I think many are excited about ChatGPT’s potential to act as an AI therapist, which is something I’m not fully comfortable with. For one, it simply cannot replace human therapists, particularly for at-risk patients, such as those prone to harmful behavior. I think there are a lot of risks associated with this capability, and safeguards must be put in place before this type of functionality is made widely available.”—Swabha Swayamdipta
