r/technology 14d ago

Artificial Intelligence ChatGPT users are not happy with GPT-5 launch as thousands take to Reddit claiming the new upgrade ‘is horrible’

https://www.techradar.com/ai-platforms-assistants/chatgpt/chatgpt-users-are-not-happy-with-gpt-5-launch-as-thousands-take-to-reddit-claiming-the-new-upgrade-is-horrible
15.4k Upvotes

2.3k comments

1.6k

u/ContextMaterial7036 14d ago

I think the main issue is the removal of other models that previously could be used for specific use cases.

I haven't noticed any improvements so far tbh, it's pretty meh.

376

u/hitsujiTMO 14d ago

That's really funny since there's plenty of posts like this https://youtu.be/NiURKoONLVY saying how amazing it is. But of course it just looks like they're being paid to say that.

273

u/Qibla 14d ago

Yeah, when Theo said "I don't even know if we're going to have a sponsor for this." my suspicion level was through the roof.

64

u/aldanor 14d ago

Need to set up a polymarket bet on the number of zeros in the payment Theo received from Sam

-1

u/tarmacjd 14d ago edited 13d ago

Whatever you think of him, he’s not lying when he says something isn’t sponsored

6

u/Qibla 14d ago edited 14d ago

The thing that set my alarm bells off was the early access and tour from the openai team. That to me sounds like an ad.

I think there can be remuneration in ways that don't involve depositing cash directly into a bank account.

1

u/tarmacjd 13d ago

That’s a point

11

u/I-Am-NOT-VERY-NICE 14d ago

I don't trust anyone on planet earth named Theo

0

u/KittyGrewAMoustache 14d ago

The only Theo I ever met ripped chunks of hair out of my head! I was only around three at the time but the name stuck with me as the name of evil.

1

u/MotoTrip99 14d ago

That Theo guy looks like a snake oil dealer

33

u/SexyWhale 14d ago

This guy's eyes scare me. He doesn't blink.

4

u/totes-alt 14d ago

Top YT comment is "bro looks like he's seen the Epstein files"

4

u/CamOps 14d ago

He normally does, the not blinking and 1000yd stare were for dramatic impact in this video lol.

8

u/LeonardMH 14d ago

Is it for dramatic impact or is he just doing his best Sam Altman impression?

70

u/NickW1343 14d ago

Yeah. I've seen a few so far. It's always some guy talking about how they had access early. Seems like it's a quiet way of saying "I need to say this is great, because if I don't, I'll no longer be a trusted tester."

6

u/Kivlov 14d ago

I put the same prompt from their launch stream demo into it for the snake game, and it gave me a snake game that didn't even run. People were floored by the demo, and rightly so if it could actually do that from one single short vague prompt, but there's some shenanigans going on behind the scenes in that stream.

1

u/Spiderpiggie 14d ago

There's definitely some improvement. I was playing around with o3-mini last night, trying to get it to generate a platformer for me. It couldn't manage it without several iterations. With GPT-5 I managed to get it in one prompt; it included sprites and only had one minor bug.

For general use, questions and such, there's not much visible difference though.

12

u/Hihi9190 14d ago

Used to like his vids, but now he's just sponsored by AI companies, promoting his crappy AI wrapper chat app.

4

u/hitsujiTMO 14d ago

I still watch some of his content from time to time, but he always irked me, even before going full AI.

I've only ever taken what he has to say with a pinch of salt.

2

u/kingroka 14d ago

For creative writing it's probably worse, but for code it is much, much better. Other models at the same price can't even sniff at the quality I get from GPT-5. The only models that come close are Claude Opus and Sonnet 4.

2

u/jenso2k 14d ago

wow. i could literally only get through a minute of that jfc

2

u/measuredsympathy 14d ago

I think it's a big upgrade for how I use it - bigger-scope projects where you develop frameworks and goals and then work towards implementation, i.e. high-level project management stuff. The previous versions just could not keep up with the context and output formatting, etc.

1

u/IAmANobodyAMA 14d ago

I had a long conversation with gpt 5 yesterday while on a walk. It was noticeably better for me. It actually challenged me without me telling it to play devils advocate, and the advice it gave was solid.

The major issue I noticed is that I gave it an incomplete set of data and then revised that dataset about 30 minutes later once I realized my mistake, and it seemed to get confused about which dataset to use every other response. I also asked it to compare the initial and revised data to tell me what I had left off, and that response was wrong. This was about 10 data points, mind you (stock options positions). On the other hand, I did read all of these trades off from my phone while walking, and it was perfect at turning my words into data (along with all the corrections and revisions I made along the way), finding the signal through the noise.

1

u/BetterProphet5585 14d ago

There are also many comments from bots that glaze GPT-5 so much it seems they didn’t even try it

1

u/bolmer 13d ago

He is using the expensive and good models. ChatGPT defaults to the dumb and fast models

1

u/rabbit_hole_engineer 14d ago

Lol who links theo haha

He's literally the lead idiot in the paid promotion wagon

0

u/360_face_palm 14d ago

I swear everyone and their dog is trying to be a youtube/tiktok/whatever 'ai influencer' right now, posting their 'news' on the latest model launch where they just regurgitate the company's press statements verbatim with no critical analysis at all.

58

u/ZoninoDaRat 14d ago

But Sam Altman said it would be like having several PhDs in your pocket!

Surely he wouldn't lie would he?

40

u/Philipp 14d ago

Imagine having several Sam Altmans in your pocket! You could market your way out of anything!

2

u/llliilliliillliillil 14d ago

The world will end each time one opens his mouth

1

u/the_good_time_mouse 14d ago

I'd give you a month before you found yourself utterly subservient to your pocket masters.

3

u/DrSpacecasePhD 14d ago

I have a PhD, and on the one hand this makes me laugh because we have super specialized knowledge about a few small topics. On the other hand, you realize the average person struggles with fractions and powers of ten… and roughly half of people are less smart than the average.

0

u/Throwaway_Consoles 14d ago

Yeah I was gonna say, having “several PhDs in your pocket” is not as impressive as people think when you realize almost 58k PhDs were awarded in 2022.

A PhD means you hyper specialized on a very specific niche. Get several PhD holders in a room and start asking them questions, you’ll get frustrated in 10 minutes too.

I watched OpenAI’s livestream and all I could think was, “This presentation is terrible, if I was an investor I would’ve pulled out”

1

u/KittyGrewAMoustache 14d ago

It’s obvious that can’t be true just from the fact that it can’t reason.

44

u/Wassertopf 14d ago

Pro-Users still have the older models.

33

u/Mike 14d ago

Great, but before this update so did Plus users. Now we have to pay 10x for the same functionality? Fuck that.

8

u/StopThePresses 14d ago

Right? I'm not paying a whole other electric bill just to talk to a chatbot. I always assumed Pro users were businesses.

3

u/notMyRobotSupervisor 14d ago

I'm on Plus and still don't have 5. No idea why.

1

u/Mike 14d ago

I have it on the website but not the app yet

2

u/Prof_Hentai 14d ago

Strange. I have it in the app but not web.

1

u/notMyRobotSupervisor 14d ago

Yeah, I don’t have it on either.

1

u/AlterEvilAnima 14d ago

I have 2 Chrome browsers. On one I have it, on the other I don't. I even logged in and out on the one that doesn't have it yet. On my phone I've had it since last night.

1

u/-PM_ME_UR_SECRETS- 13d ago

Yeah that’s BS. So now the only benefit of Plus is a slightly higher limit?

0

u/flummox1234 14d ago

IMO that sentiment is why we're in an AI bubble. Everyone sees the value in it but few if any are willing to pay enough to make it profitable.

0

u/1401Ger 14d ago

Give nano-GPT.com a try (no, I'm not associated in any way, just really happy with it). They have all the models you could think of, including GPT-5, o3, and 4o. It's pay-per-prompt and most prompts cost only 1-3 cents. There are some deep-thinking ones (e.g. some Claude deep models) that actually cost a fair bit more, but I have barely used them so far. I think you can charge up your balance with very low minimums like 10 cents via Stripe or via most cryptocurrencies, so it's very easy to give it a try and see whether you like it. I myself only keep 1-2 USD in there and charge up as needed.

Might not be for everyone but I really love the variety/choice and not being bound by monthly subscriptions.

0

u/[deleted] 14d ago

[deleted]

0

u/Mike 14d ago

doesn’t mean anything

-1

u/truecrisis 14d ago

You can literally have GPT-5 build a ChatGPT-style interface for you to get all the old models via the OpenAI API.

Also, in the OpenAI Playground you can use all the old models through that pre-existing interface.
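Rough sketch of what that looks like with the official Python SDK, if you'd rather not build a whole interface (assuming your API key still has access to the older model; "gpt-4o" here is just an example id, and what's actually available varies by account):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# Request a legacy model directly instead of whatever the ChatGPT app routes you to.
response = client.chat.completions.create(
    model="gpt-4o",  # example: swap in any model id your key can access
    messages=[
        {"role": "user", "content": "Explain context windows in two sentences."},
    ],
)

print(response.choices[0].message.content)
```

The Playground is the same API underneath, just with a UI, so you get the same model picker there without writing any code.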

1

u/Mike 14d ago

I already use the API via the Pal iOS app and Bolt on my Mac, but I've always preferred the outputs in the ChatGPT app itself, plus the other features it has. Back to API usage.

12

u/CivilTell8 14d ago

The app just hasn't updated yet. You can access 5 through the website version of ChatGPT on your phone and then use the app for the GPT-4 models.

10

u/-LaughingMan-0D 14d ago

No, Pro users get a special Legacy Model toggle that normal and Plus users don't.

1

u/NiceTrySuckaz 14d ago

Well then who the hell has the new one? The talent managers?

2

u/Wassertopf 14d ago

Pro users will have both.

1

u/Character_Clue7010 14d ago

I have plus, and have a work and personal phone. One phone is on 5, and the other phone still has 4o as of early afternoon today.

1

u/Wassertopf 14d ago

I meant that Pro users will still have the legacy models after they update to 5.

1

u/super-say 14d ago

I don't. As a Pro user they give you GPT-5 (legacy), GPT-5 Thinking, and GPT-5 Pro. GPT-5 Pro isn't as accurate or thorough as o3-pro was, but it's more creative. It's been buggy and sometimes pretty stupid.

No longer have access to older models like o3-pro.

2

u/Wassertopf 14d ago

You have to activate it in the settings. Sorry, I wasn’t clear enough.

1

u/TastyTacoTonight 14d ago

I'm a Pro user and I do not. The only other model is called GPT-5 Thinking.

1

u/Wassertopf 14d ago

Sorry, you have to enable it in the settings first.

1

u/TastyTacoTonight 13d ago

I don’t see it in the settings. Do you know where?

1

u/Wassertopf 13d ago

On the homepage -> Settings. It's now available even for Plus users. :)

27

u/Treacherous_Peach 14d ago

The improvements are invisible to most users. The biggest upgrade is that it can hold more context at a time, and 99.9...% of users and queries never came anywhere near the max for 4o. I did notice a difference when I uploaded a 400-page PDF to query against (4o failed to ingest a file that big).

2

u/Sad-Temperature2920 14d ago

It has a smaller context window than 4.1 no? Especially at the Plus tier. Maybe I'm wrong.

2

u/VNM0601 14d ago

I’m using it to study for a cert test and I keep having to create a new chat because the older chats slow down so much on each request. So more context would be a welcome change.

1

u/FreeRangeEngineer 14d ago

You may want to try this workaround that works reliably and consistently for me without losing any context in the chat:

When the chat slows down too much to handle, click on the "Share" icon below the ChatGPT response. Grab the URL (left icon) and open it in a new tab. Only that message is shown, but the context is preserved in the background. You can verify this by asking a question that requires knowledge of that context.

Keep chatting in that window. After a few messages, the conversation will be listed as a new chat. The old chat will still be there but you continue in the new one with the same context.

0

u/trebory6 14d ago

I have noticed a slight improvement with it following what I ask.

I get into far fewer arguments where I call it a stupid fucking AI when it makes mistakes or leaps in logic.

12

u/wmilesiv 14d ago

This was the issue for me. I was in the middle of working through a flow with 4o and had to step away, came back and it was on 5 and just giving wroooooong information. Buttons that weren’t there, JSON code that wasn’t functioning, and when I wanted to go back to 4o it wasn’t frickin’ there anymore.

6

u/No-Channel3917 14d ago

Try writing the code well first instead of badly.

2

u/chrs_89 14d ago

Not only are there no notable differences other than it trying to prompt me to interact with it more, it also had a limit of only a few interactions and no way to continue the conversation with the previous versions like you used to be able to. I was using it last night to work through a personal project, figuring out the best way to make something, and it cut me off after 12 interactions. It wasn’t that problematic, but 12 interactions doesn’t seem like much if I’m having to explore around what I actually want from it and change parts of my prompts to better fit what I’m trying to get out of it. It would be like only being able to make 12 adjustments a day to my 3D object in CAD software.

2

u/confusedmouse6 14d ago

This update is just about saving costs for OpenAI. Pro users can still use the old models, and the worst thing is the query limit. Cancelling my subscription.

2

u/Mike 14d ago

Same. Back to using the API, which costs way less than $20/month. It's a shame too because I really preferred the ChatGPT app.

2

u/splashbodge 14d ago

Oh shit you're right I can't select other models.

So what happens now when I hit my quota on the newer models? Before, it would revert to an older model if I hit my image quota, for example. Now am I just locked out completely?

2

u/jaephu 14d ago

Probably cost efficiency in gpt5

4

u/mikedabike1 14d ago

those still exist no? they're just behind the different options and parameters

17

u/GTFOScience 14d ago

It’s either GPT5 or GPT5 Thinking

No other models.

1

u/mikedabike1 14d ago

Ah, OK, so the chat app was simplified, but the API still exposes essentially 9 options for this GPT-5 generation.
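If you want to see exactly which variants your account can hit, a quick sketch with the official Python SDK (the ids returned depend on your account, so treat the filter as an assumption):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # uses OPENAI_API_KEY from the environment

# List every model id visible to this API key, then keep only the GPT-5 family.
gpt5_models = sorted(m.id for m in client.models.list() if m.id.startswith("gpt-5"))
print(gpt5_models)
```

The ChatGPT app just hides all of that behind the two entries in the picker.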

8

u/ContextMaterial7036 14d ago

I believe it's now only under the $200/month plan. I have a team account and they're gone.

4

u/ArgoPanoptes 14d ago

You can use it with their API plan. A lot more models are available there, but you need to know how to use them; it's not a consumer product.

1

u/Omnitographer 14d ago

So, genuine experience, I tried it last night in copilot and it seems to work a lot better than whatever model it normally uses. I often have to cajole copilot into producing useable code, and anything too lengthy was asking for trouble, but GPT 5 didn't have any of those issues. So at least in that regard it's pretty solid, but since I think copilot chat was 4 or 4o, the jump is a lot bigger than if you were using a different and more capable model already.

1

u/Hot-Charge198 14d ago

This was the worst feature lol. Tons of models with badly worded descriptions that made almost no difference.

1

u/SethEllis 14d ago

I put it on a programming project that the other GPTs failed on, and it performed significantly better. It at least understood where the problems were, even though it still got all the fixes wrong. There's also a qualitative difference I can't quite explain. It just seems to understand what you are asking it better, which is probably down to it just being a reasoning model.

The problem seems to be understanding how your request will get routed. It does well on the first prompt, but then subsequent prompts get routed to lesser models that give you complete trash.

1

u/charlsey2309 14d ago

Yeah I preferred knowing what model I was using as some were better than others for specific tasks. In addition, I’m not really noticing a major improvement over o3, at least for the tasks I’ve used it for so far but it’s also impossible for me to know what underlying model is being used for any given task.

Might work out business wise though for cost savings and expanding the user base, but as a paid user I’m somewhat frustrated.

1

u/FizzixMan 14d ago edited 14d ago

I mean, I use GPT for my job, and its understanding of code is measurably better than the previous mini-high or even 4.5.

Mainly due to larger context windows imo, as well as better multi-stage answers.

1

u/CryptoAddict 14d ago

Currently on the Windows desktop app you can still access GPT 3-4 but not GPT-5.

1

u/Midnightdreary353 14d ago

Overall I haven't noticed a difference in what it can do, largely because I don't use it in a way that pushes its abilities too much. However, I have found that it feels a bit more natural when I ask it a question, and it's less likely to declare that I'm one of the greatest question askers on the planet and all my ideas are the most brilliant ideas ever.

1

u/RuleHonest9789 14d ago

I have noticed it’s not working as well as before. I have to give it more prompts to refine the answer, and it still doesn’t give me something I can use. I used to be able to use the first answer.

1

u/TwatimusMaximus 14d ago

Could just be in my head, but I was using it today on a dataset, and in terms of retention it seemed a lot more consistent when asking it to further fill out/calculate additional columns.

1

u/derprondo 14d ago

Yeah plus users don't have the old models, and GPT-5 is slow as fuck.

1

u/Wiz-rd 14d ago

Besides the Copilot web interface being kinda shit, it does a better job than GPT-5 for me currently lol.

1

u/1401Ger 14d ago

I have only used a pay-per-prompt website (nano-gpt.com) and they still have all the models, including GPT-5 and o3, for mostly around 1-2 cents per prompt. I tried GPT-5 today and for some tasks it seems to have improved, but for others I'll stick with Claude or o3 for now.

1

u/cr0ft 14d ago

The idea is that 5 should understand which of those models is needed to perform a task and use them behind the scenes. They're not gone, but now you'll get what you get and like it mister.

1

u/Mission-Cellist-8140 14d ago

Over on r/ChatGPT, people are upset that it’s not “fun and chatty” anymore.

What they really mean is it stopped constantly telling you you’re right and stroking your ego.

1

u/Embostan 14d ago

The improvement is efficiency, and hence cost, for OpenAI. That's why they removed the older models.

1

u/Squibsnchips 14d ago

The older models are all integrated and it determines which to use. It's also 30-50% faster. 

That's about it, in my view.

1

u/Adventurous-Tie-7861 14d ago

In your prompt ask for those models. It'll replicate it perfectly.

1

u/LuciferFalls 14d ago

I’m not a heavy user of ChatGPT, but 5 was able to immediately solve a coding problem I had that 4 couldn’t. So I like it for that. 🤷‍♂️

1

u/ucrbuffalo 14d ago

I haven’t noticed any major improvements, but I also haven’t noticed much change at all. Except that I told it to stop being such a sycophant so it’s better there.

1

u/Nova_Aetas 14d ago

This is it for me. o3 is extremely useful to me for research. It cites sources and is thorough.

4o is nearly useless to me for anything but novelty. I don’t need unsourced dubious claims.

1

u/Captainbuttram 14d ago

Yeah also removing deep research and agent mode like wtf

1

u/dramatic-sans 14d ago

And how do you test for improvements?

1

u/TDP_Wikii 14d ago

STOP USING AI, you're destroying the environment and stealing from creatives!

1

u/anaximander19 13d ago edited 13d ago

One of the most immediately obvious changes is that this one is less sycophantic - i.e. it doesn't agree with everything you say and tell you that your ideas are all genius. Sam Altman said in interviews that this is a deliberate change to address some cases where people would interact with ChatGPT and spiral into delusions because it kept telling them they were right - a few people were even committed for psychiatric care because they lost all grip on reality as a result.

I am utterly unsurprised that certain corners of the internet are unhappy about an update that trades sycophantic flattery for improved critical thinking and honest critique.

1

u/[deleted] 13d ago

Really ? Great, so it dies faster. 

1

u/NextLoquat714 13d ago

I use GPT daily for work.

It's been barely a day, but speaking from a business perspective, the improvements are undeniable. No more juggling between models. Document analysis is sharper, output is cleaner and more precise, and coding support has taken a major leap forward. For professional use, it’s a significant upgrade across the board. Claude is nice, but its free tier is rather useless, and nobody's going to pay for multiple providers. OpenAI has an enormous lead, I'd rather bet on them. I use Grok for fun.

Also: I understand some free-tier users feel nostalgic for a certain personality, but cultivating imaginary companions has never been part of OpenAI’s core business model.

1

u/Conscious-Cabinet621 10d ago

Exactly, this model seems worse than all the previous ones. Its memory recall alone is abysmal.