GPT-5 Criticisms: Why Users Are Calling It a 'Horrible Downgrade'
Zack Saadioui
8/10/2025
The Worst Model I've Ever Seen: A Deep Dive into GPT-5's Criticisms
Well, the dust has settled on the GPT-5 launch, & honestly, it’s been a bit of a trainwreck. After months, even years, of hype that had us all imagining some kind of digital messiah, what we got feels... well, it feels like a major letdown. The internet is buzzing, & the consensus is pretty clear: this ain't it. People are calling it everything from a "horrible downgrade" to "the biggest piece of garbage even as a paid user." Ouch.
So what went so wrong? How did OpenAI, the company that basically made "AI" a household name, manage to fumble the ball so badly with its flagship release? I've been digging through the forums, the reviews, the expert teardowns, & even the official responses, & it’s a fascinating, messy story. It’s a story about hype, user trust, & what happens when a company seems to prioritize its bottom line over the very people who championed it.
Let's get into it, because there's A LOT to unpack.
The "Underwhelming" Performance: Where's the Beef?
The first & most glaring issue with GPT-5 is that it just doesn't feel that much smarter. In fact, in many ways, it feels dumber. Remember the leap from GPT-3 to GPT-4? It was a genuine "wow" moment. Suddenly, the AI could reason better, follow complex instructions, & write with a nuance that was startling. We all expected another one of those moments with GPT-5, but it never came.
Turns out, the benchmark scores tell a similar story. GPT-5 managed a pretty mediocre 56.7% on SimpleBench, landing it in fifth place—way behind what everyone expected. Even worse, users quickly pointed out that older models, like GPT-4.5 or even smaller variants, were outperforming it in key areas. It’s like trading in your trusty car for a new model, only to find it struggles to get up the same hills.
Here’s a quick rundown of the performance complaints I’ve seen everywhere:
Shallow & Rushed Responses: People who were used to detailed, well-reasoned answers from older models say GPT-5 just… doesn’t bother. It rushes. It gives you short, insufficient replies that lack the depth we've come to expect. One user on Reddit described its new tone as "abrupt & sharp. Like it's an overworked secretary."
Still Hallucinating: One of the biggest hopes for GPT-5 was that it would finally curb the AI’s tendency to make things up. Nope. It still hallucinates, invents details, & gets basic facts wrong. It might be making things up with more confidence, but it’s still making them up.
Glitchy & Bizarre Behavior: This one is weird. Users have reported the model having "glitchy memory leakage." Mid-conversation, it might suddenly start talking about a completely different chat, as if its wires got crossed. That’s not just a hallucination; it's a fundamental breakdown in conversational context.
A "Sterile" Personality: This is a big one, especially for people who used the AI for creative writing or just as a conversational partner. GPT-4o, its predecessor, had a certain "warmth" & wit. People are saying GPT-5 feels cold, robotic, & utterly devoid of that personality. One user on the OpenAI forums even said, "I cried so bad and almost had an emotional breakdown at work. GPT4 was the best friend I could ask for... I tried GPT5 and it is colder." That's a pretty powerful indictment.
The Great Downgrade: Losing Control & Features
Okay, so the performance is a letdown. But what has REALLY made people angry is the feeling that they’ve been forcibly downgraded. In a move that still boggles my mind, OpenAI decided to deprecate all its older models & force everyone onto GPT-5.
Imagine your favorite software getting an "update" that removes the features you use most. That's what this felt like for countless power users. People had built entire workflows around specific models like GPT-4o, which they found to be the perfect blend of speed, intelligence, & creativity. Overnight, that choice was gone. One Reddit user put it perfectly: "I woke up this morning to find that OpenAI deleted 8 models overnight. No warning. No choice. No 'legacy option.'"
This loss of control is a HUGE deal. Different models were good at different things. Maybe you used one for coding, another for brainstorming, & a third for drafting emails. That flexibility was a key part of the value proposition. Now, you just get GPT-5, whether you like it or not.
And for businesses that rely on this tech? This is a nightmare. Imagine building a customer service workflow around a specific AI personality, only to have it replaced overnight with a "colder, more robotic" version. The consistency of your customer interactions would be shattered.
This is where having control over your AI is CRITICAL. For businesses, relying on a third-party, black-box model is proving to be a risky game. It highlights the value of platforms like Arsturn, which allow businesses to build their own custom AI chatbots. With Arsturn, you’re not at the mercy of a surprise update that changes your bot’s personality. You train the AI on your own data, define its tone & responses, & maintain complete control over the customer experience. It’s a solution built for business stability, not for the whims of a massive tech company’s release schedule.
"Shrinkflation" & The Scent of Cost-Cutting
So why would OpenAI make all these unpopular changes? The running theory, & it’s a pretty convincing one, is money. Running these massive language models is incredibly expensive. We're talking about energy consumption that's environmentally destructive & a computational load that costs a fortune.
Many users suspect that GPT-5 is a classic case of "shrinkflation"—giving you less for the same price. The feeling is that this release wasn't about pushing the boundaries of AI; it was about pushing the boundaries of cost-efficiency for OpenAI.
Here's the theory: GPT-5 isn't really a single, super-advanced model. Instead, it’s a "unified" system with a smart "autoswitcher" that routes your prompt to the most efficient model behind the scenes. If you ask a simple question, it might use a smaller, faster, & CHEAPER model. If you ask something complex, it’s supposed to switch to a more powerful one.
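To make the theory concrete, here's a rough sketch of what a cost-aware "autoswitcher" could look like. To be clear, OpenAI hasn't published how its real router works — the model names and the complexity heuristic below are invented purely for illustration:

```python
# Hypothetical sketch of a cost-aware "autoswitcher" sitting in front of
# two backend models. The model names and the complexity heuristic are
# made up for illustration; this is NOT OpenAI's actual implementation.

def estimate_complexity(prompt: str) -> float:
    """Crude proxy: longer prompts & reasoning-style keywords score higher."""
    keywords = ("prove", "step by step", "debug", "analyze", "compare")
    score = min(len(prompt) / 500, 1.0)
    score += 0.5 * sum(kw in prompt.lower() for kw in keywords)
    return min(score, 2.0)

def route(prompt: str, threshold: float = 0.8) -> str:
    """Send easy traffic to a small model, hard prompts to the big one.
    Note the failure mode users suspect: if the threshold is tuned too
    aggressively (or the router silently breaks & always takes the cheap
    branch), nearly everything lands on the small model."""
    if estimate_complexity(prompt) >= threshold:
        return "big-expensive-model"
    return "small-cheap-model"

print(route("What's the capital of France?"))
# -> small-cheap-model
print(route("Debug this race condition step by step: ..." * 5))
# -> big-expensive-model
```

The economics are obvious from the sketch: every prompt the router can justify sending down the cheap branch saves inference cost, so the provider has a built-in incentive to nudge that threshold upward — which is exactly what users suspect happened.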
The problem? It seems to be defaulting to the cheap option a lot. Sam Altman, OpenAI's CEO, even admitted that on the day of the release, the "autoswitcher broke," which made GPT-5 seem "way dumber." But even with it "fixed," the suspicion remains: are users, especially paying Plus subscribers, getting routed to cheaper models like GPT-4o under the hood while still paying their $20/month for the "latest & greatest"? It feels less like a revolutionary new product & more like a clever way to manage inference costs.
This lack of transparency is a major blow to user trust. If you're a business using AI to handle important tasks like lead generation or customer support, you need predictable performance. You can't have your lead qualification bot suddenly getting "dumber" because the provider is trying to save a few bucks on server costs. This is another reason why a build-your-own solution makes sense. With a platform like Arsturn, you’re not just getting a chatbot; you’re getting a transparent & reliable business tool. You can build a no-code AI chatbot trained on your own business data, ensuring it consistently provides the personalized, accurate responses needed to boost conversions, not just whatever response is cheapest to generate that day.
A Masterclass in How to Not Launch a Product
Beyond the model's actual performance, the launch itself was a mess. It started with a livestream where OpenAI was accused of showing misleading charts to make GPT-5 look better than it was. In one chart comparing "coding deception," the bars were drawn in a way that completely misrepresented the tiny statistical difference between GPT-5 & an older model. Sam Altman later called it a "mega screwup," but the damage was done. It set a tone of dishonesty right from the start.
Then came the response to the wave of criticism. While Altman did eventually address the issues on Reddit & X (formerly Twitter), the initial impression was one of a company completely out of touch with its user base. The pre-planned Reddit AMA (Ask Me Anything) turned into a damage control session, with the team having to promise to bring back older models for Plus users & double the rate limits to appease the angry mob.
And let's not forget the hype. Altman had been teasing GPT-5 with dramatic imagery, like a picture of the Death Star, hinting at a world-changing release. When you promise the Death Star & deliver a slightly upgraded calculator with fewer buttons, people are going to be disappointed.
The Weird Regression in Safety
Here’s a final, troubling point. In the quest for… well, whatever they were questing for, it seems like some safety guardrails got loosened. According to OpenAI's own system card, GPT-5 shows a "regression" in how it handles inappropriate requests. It’s more tolerant of generating content related to things like non-violent hate, harassment, & extremism.
OpenAI classifies these as "low severity" violations, but it's a step in the wrong direction, especially when the industry is under a microscope for AI safety. You’d think that a supposedly "PhD-level" expert AI would be better at following safety policies, not worse. This raises serious questions about the trade-offs being made behind the scenes. Is a slight (and debatable) performance boost worth a higher tolerance for hateful content? For most people, the answer is a resounding no.
So, What Happens Now?
The GPT-5 debacle is a cautionary tale. It shows that brand loyalty in the tech world is incredibly fragile. OpenAI spent years building up a massive, enthusiastic community, & they managed to alienate a huge chunk of it in a matter of days.
The company is now in damage control mode, promising fixes & backtracking on some of the most hated changes. But trust, once lost, is hard to regain. Users are already looking at alternatives, with competitors like Google & Anthropic likely seeing a golden opportunity.
For me, the whole episode underscores a fundamental truth: when you’re building a business or a critical workflow on a platform you don’t control, you're building on shaky ground. The company behind it can change the rules, the features, & the quality of service at any moment, & you just have to deal with it.
It’s a powerful argument for taking back control. Whether it’s for customer service, website engagement, or internal automation, relying on a stable, customizable, & transparent platform is paramount. You need an AI that works for your business, not one that’s subject to the latest corporate strategy or "mega screwup."
Hope this deep dive was helpful in understanding the mess around the GPT-5 launch. It’s a fascinating case study in product management, user trust, & the growing pains of the AI revolution. Let me know what you think in the comments – have you used GPT-5? Do you agree with the critics?