One of the selling points of artificial intelligence models is that they improve over time. However, the trend seems to have gone downhill for OpenAI’s chatbot ChatGPT. According to a study conducted by researchers at Stanford University and UC Berkeley, the chatbot is experiencing a noticeable decline in accuracy. So, is ChatGPT getting dumber?
Is ChatGPT Getting Dumber?
Yes. OpenAI’s ChatGPT is seeing a significant drop in accuracy. Three researchers at Stanford University and UC Berkeley evaluated the March 2023 and June 2023 versions of GPT-3.5 and GPT-4 in their paper “How Is ChatGPT’s Behavior Changing Over Time?”.
They ran multiple tests like:
- Math problems
- Sensitive/dangerous questions
- Opinion surveys
- Multi-hop knowledge-intensive questions
- Generating code
- US Medical License tests
- Visual reasoning
The results were disappointing and clearly indicated that ChatGPT is getting dumber. Or at least, it’s less accurate and efficient than it used to be.
- GPT-4’s accuracy dropped from 84% in March 2023 to 51% in June 2023 at identifying prime vs. composite numbers.
- Moreover, GPT-4 became less willing to answer sensitive questions and opinion surveys in June than in March.
- Plus, GPT-3.5’s performance got worse in June at multi-hop questions.
- Both GPT-3.5 and GPT-4 had more formatting mistakes in code generation in June than in March.
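To make the first result concrete, the prime-vs-composite task can be sketched as a simple scoring harness. This is only an illustration, not the study’s actual evaluation code: the function names are hypothetical, and the model answers are a hard-coded stub standing in for real chatbot output.

```python
# Illustrative sketch of scoring a model on a prime-vs-composite task.
# The harness and helper names here are hypothetical; the "answers" list
# is a stub standing in for a chatbot's actual responses.

def is_prime(n: int) -> bool:
    """Ground-truth check via trial division up to sqrt(n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def score_answers(numbers, model_answers) -> float:
    """Fraction of answers matching the ground truth ('prime'/'composite')."""
    correct = 0
    for n, ans in zip(numbers, model_answers):
        truth = "prime" if is_prime(n) else "composite"
        correct += (ans == truth)
    return correct / len(numbers)

# Hypothetical model responses: 119 = 7 * 17, so calling it prime is a miss.
numbers = [101, 111, 113, 119, 127]
answers = ["prime", "composite", "prime", "prime", "prime"]
print(score_answers(numbers, answers))  # → 0.8
```

The study’s reported drop from 84% to 51% is exactly this kind of accuracy figure, computed over a large set of numbers rather than the five shown here.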
Did Users Notice That ChatGPT Is Getting Dumber?
Are only researchers wondering if ChatGPT is getting dumber? Nope. In fact, users have flooded social media with complaints about the infamous chatbot.
ChatGPT user, Akshay, has said “ChatGPT is becoming dumber by each passing day. It has stopped interpreting prompts unless you spoon-feed it like a nursery kid.”
Another user noted on Twitter that “GPT3.5 now making basic spelling mistakes despite having the correct spelling in the prompt. Never seen it this bad before.”
Moreover, users have also voiced their complaints in OpenAI’s forums, demanding an explanation for “the very clear degradation of ChatGPT’s performance”.
Some of the issues mentioned in relation to ChatGPT’s decline are:
- Unprecedented reasoning errors
- A greater tendency to forget context
- Quickly losing track of instructions
- Loss of the ability to generate proper code
- Poor explanations
According to Reuters, worldwide desktop and mobile traffic to the chatbot’s website dropped by 9.7% in June from May. Unique visitors to ChatGPT’s website also dropped by 5.7%.
As an almost daily user for months, the quality of chatGPT is going down heavily. Even remembering the start of the conversation and ignoring your commands is way bad now. My company stopped paying for the chatGPT+ because it’s clear they are making it dumber on purpose.
— Web3 Shark (@web3shark) September 21, 2023
So, What’s The Reason Behind OpenAI’s Chatbot Decline?
Do humans just ruin everything they get their hands on? Or is there a more reasonable explanation for the apparent decline? Well, humans do ruin most things, and the drop might simply be due to excessive use of the bot.
Some are saying that ChatGPT is experiencing “AI drift”. This refers to large language models (LLMs) behaving in unexpected ways that deviate from their original parameters. Correcting this drift, however, requires the company to run updates.
On that note, what does OpenAI have to say about this? OpenAI’s VP, Peter Welinder, has denied the claims, saying that they’re not making ChatGPT dumber but quite the opposite.
No, we haven't made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one.
Current hypothesis: When you use it more heavily, you start noticing issues you didn't see before.
— Peter Welinder (@npew) July 13, 2023
They don’t seem to be addressing the issue but rather sticking to gaslighting users. Is gaslighting ever the answer, though? OpenAI should be honest and transparent, or else it will keep losing users and fielding more complaints.