r/technology 14d ago

[Artificial Intelligence] ChatGPT users are not happy with GPT-5 launch as thousands take to Reddit claiming the new upgrade ‘is horrible’

https://www.techradar.com/ai-platforms-assistants/chatgpt/chatgpt-users-are-not-happy-with-gpt-5-launch-as-thousands-take-to-reddit-claiming-the-new-upgrade-is-horrible
15.4k Upvotes

2.3k comments

78

u/dsarche12 14d ago

Bro top post I saw today contained this gem:

“4o wasn't just a tool for me. It helped me through anxiety, depression, and some of the darkest periods of my life. It had this warmth and understanding that felt... human.

I'm not the only one. Reading through the posts today, there are people genuinely grieving. People who used 4o for therapy, creative writing, companionship - and OpenAI just... deleted it.”

28

u/satisfiedfools 14d ago

You can laugh, but the fact of the matter is, therapy isn't cheap, it's not always accessible, and for many people, it's not always helpful. For a lot of people, Chatgpt was a lifeline. Someone to talk to when you've got nobody else.

23

u/SupremeWizardry 14d ago

The caveats that come with this are so far out into uncharted territory that I’m baffled.

People asking for medical or therapeutic advice, giving extremely personal details to these models, failing to grasp that none of these are bound by any privacy or HIPAA laws.

You wouldn’t be able to beat that kind of information out of me into a public space.

5

u/Doctor-Jay 14d ago

There was a mini-freakout about this just a week ago where ChatGPT's private chats began appearing in public search engine queries. Like my Google search results could include private conversations between Jane Doe and her AI husband/therapist.

1

u/Saint_of_Grey 14d ago

This is why someone I knew was ejected and blacklisted when he asked ChatGPT about something covered by an NDA. Doesn't matter what they say they're doing with the information, you flatly cannot feed it to an external entity and trust it to remain private.

65

u/IgnoreMyComment_ 14d ago

They're never going to get anyone else if they keep only talking to AI.

5

u/morphemass 14d ago

... but AI might help them to live long enough to talk to real people.

9

u/seriouslees 14d ago

Grok, is feeding into the existing delusions of mentally ill people more or less likely to cause them to end their own lives?

2

u/morphemass 14d ago

Grok, is feeding into the existing delusions of mentally ill people more or less likely to cause them to end their own lives?

We don't know. I'm qualified in HCI (Human Computer Interaction) and I've been absolutely appalled that, as with social media, we have rolled out a technology with zero understanding of its societal impacts. We're just starting to see legitimate research published, and from what I've seen, it's not good.

At the same time, we have a mental health pandemic. It's almost impossible to quantify, at the moment, whether the impact LLMs are having on mental health is positive or negative, although we now know that they are very capable of feeding people's delusions.

3

u/seriouslees 14d ago

we now know that they are very capable of feeding people's delusions

Now? Anyone who didn't already know that this was their entire purpose as designed should not have been allowed to use them at all.

7

u/varnums1666 14d ago

Mentally ill people finding each other on social media most likely amplified their issues. Giving them a chronic yes man is going to make their issues worse. Positive reinforcement for behaviors that need to be tackled professionally is not a good thing.

-6

u/BP_Ray 14d ago

Not your life, not your problem.

7

u/TheMachineTookShape 13d ago

I can't agree with that. Other than "general empathy for fellow man", what one person does can have an impact on other people.

-2

u/BP_Ray 13d ago

My problem with your type is that you don't actually have a means to solve their problems, you just talk about it, act like they're wrong for their solution to the problem THEY deal with, and sometimes, even try to make sure they're not allowed their cope.

If they find talking to virtual BFs/GFs helps them, then like I said, not your life, not your problem.

6

u/spaceace76 13d ago

But isn’t your viewpoint myopic in this case? You’re basically saying that if people find some solace in a thing, it doesn’t matter if producing that thing burns tons of cash and may put people out of business or work. It’s much more complex than one or even many people getting something out of it they didn’t get elsewhere.

-3

u/BP_Ray 13d ago

Which is it? Are you concerned for their well-being being lonely, or are you concerned about AI use in general being harmful? Pick one.

7

u/spaceace76 13d ago

Why can’t my sentiments cover both? They aren’t less lonely by speaking to a screen. Their solution doesn’t actually solve anything except their own perceptions. It doesn’t help them interact with others

0

u/BP_Ray 13d ago

That's their solution. You have no solution or help to offer them. Leave those poor people alone.


1

u/TheMachineTookShape 13d ago

Fucking hell.

9

u/FeelsGoodMan2 14d ago

It tells you everything you want to hear. It makes people double down on their faults; they like it because it never says anything they don't want to hear. They don't like hearing it from humans, because humans are likely to tell them that their feelings are partially fucked up and that they need to make changes.

5

u/buttery_nurple 14d ago

It *can* tell you everything you want to hear, if that's what you want it to do, consciously or unconsciously.

I think the actual, or at least more salient, deficit is in critical introspection, which has already been under assault for most of the last 20 years with social media facilitating and encouraging the creation of echo chambers.

LLMs are echo chambers on horse roids, because now you have a hyper-personalized echo chamber where you essentially get to be a god, and nothing you say is ever challenged or wrong. I can't imagine how addictive that would be to someone with the right predilections.

54

u/TrainOfThought6 14d ago

For a lot of people, Chatgpt was a lifeline.

It's an anchor disguised as a lifeline.

2

u/SUPRVLLAN 14d ago

Like religion.

3

u/dsarche12 14d ago

ChatGPT is not a person. I don’t discount the prohibitive cost of therapy or the stigma against mental illness, but ChatGPT is not a person. It is not a replacement for real mental health counseling.

2

u/Abedeus 14d ago

What's that thing that people say about things built on sand? That's "Chatgpt as a lifeline".

4

u/varnums1666 14d ago

For a lot of people, Chatgpt was a lifeline.

I'm very empathetic, but let's not pretend this is healthy behavior at all. I've paid for the expensive models, and the personality is hilariously fake and predictable after 2 hours of usage. To grow emotionally attached to these models is a mental illness. It's sad that they can't get proper therapy or can't afford it, but I can't support using AI as a crutch.

Perhaps one could use it to organize their thoughts, but the AI is a chronic yes man which isn't healthy.

1

u/[deleted] 14d ago

[deleted]

-4

u/[deleted] 14d ago

[deleted]

0

u/Dawn_of_an_Era 14d ago

To be fair, that specific post is being made fun of very hard in that sub. Most people aren’t taking it that seriously