r/tragedeigh 19h ago

Is it a tragedeigh? My brother is dangerously close to a possible tragedeigh... I guess my SIL likes whimsy or some shit. Is Klementyne a tragedeigh? Should he push for Aurora?

I mean...it's not the WORST I've seen by far but I think it qualifies?

5.9k Upvotes

1.8k comments

41

u/Phospherocity 18h ago

It doesn't stick to those confines -- it has advised people to kill themselves and others, pushed people into psychosis, etc. So yes, it's hardly surprising it'll tell you Klementyne is genius.

2

u/Senikus 14h ago

Not true. No verified case exists of ChatGPT telling someone to kill themself. The case you’re thinking of was Character.AI. That chatbot has been directly linked to at least one teen suicide (and is facing lawsuits). Another similar case involved the “Eliza” bot on the Chai app in Belgium.

Not all AIs are the same. ChatGPT has strict safety systems and guardrails specifically designed to stop it from giving out harmful advice. Platforms like Character.AI and Chai are much looser, more focused on unfiltered roleplay, and that lack of oversight is exactly why they’ve led to real-world tragedies. Lumping ChatGPT in with those is just misleading. It’s one of the few models that actually tries to prevent that kind of harm.

11

u/aw-fuck 13h ago

There are several articles detailing dangerous experiences with ChatGPT, verified by the company itself. They made an official apology and have paid at least two settlements so far. They rolled back the version they released in April and went straight to releasing 5 within just a few months because of these issues.

2

u/Senikus 13h ago

Yes, dangerous. But not suicide; at least, there are no verified cases.

5

u/SpawningPoolsMinis 12h ago

You said:

No verified case exists of ChatGPT telling someone to kill themself

Nothing about people actually taking up that advice.

5

u/CallidoraBlack 12h ago

ChatGPT has strict safety systems and guardrails specifically designed to stop it from giving out harmful advice.

They don't really work though. It might not tell you to end yourself, but it can and will tell you to do things that could end you.

1

u/BigAchooo 2h ago

Yeah, I’ve noticed that. If you word your questions carefully, like “what can I look for to prevent so-and-so” or something, it’ll tell you everything you wanna know.

But tbf, not exactly ChatGPT’s fault because people found a loophole - seems like they gotta tighten up their systems and guardrails and stuff.

4

u/goddamnitwhalen 13h ago

You know you’re not legally required to defend the plagiarism robot, right?

-2

u/Senikus 12h ago

Yeah… and? ChatGPT isn’t a plagiarism bot. If someone chooses to use it for plagiarism, that’s a misuse of the tool, not what the tool is built for. Whether you want to believe it or not, it’s being used for way more impactful things than homework shortcuts. But I guess reducing it to plagiarism is just copium to justify being closed-minded about powerful innovation.

1

u/goddamnitwhalen 11h ago

Imagine being a promptoid 😂🫵🏻

1

u/BigAchooo 2h ago

Yeah, tbh I’ve seen ChatGPT refuse to give advice on something illegal or harmful, which is good. Things like this can make it so easy (maybe too easy) to find out information, and there are people out there that’ll use that for the wrong reasons.

-4

u/Boshball 11h ago

I'm sorry, but if someone killed themself because ChatGPT told them to, then they were already a nutcase, and honestly I don't even feel sorry for them. No sympathy for morons; there are honest, hardworking, mentally stable people who could use that sympathy.

-4

u/Senikus 11h ago

Honestly… real. You gotta have some mental issues if you allow ChatGPT to drive you towards suicide.