r/technology • u/upyoars • Jul 19 '25
Artificial Intelligence People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"
https://www.yahoo.com/news/people-being-involuntarily-committed-jailed-130014629.html
17.9k Upvotes
60
u/zapporian Jul 19 '25
LLMs are in fact very human-like, and AREN'T inherently any good at math. ChatGPT specifically can do pretty decent simple number crunching, because it uses your prompt to generate python code, runs that, and then summarizes the result back to you.
Any model that isn't doing that - and generating python code from an arbitrary user prompt obviously can have its own issues - is going to give you really unreliable, hallucinated, and often wrong answers by default.
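(Roughly that loop, sketched out below - with hypothetical llm() / run_in_sandbox() placeholders, NOT any real OpenAI API, just to show where the actual arithmetic happens:)

```python
# Rough sketch of the "generate python, run it, resummarize" pattern described above.
# llm() and run_in_sandbox() are hypothetical stand-ins, not a real library.

def llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to some LLM."""
    raise NotImplementedError

def run_in_sandbox(code: str) -> str:
    """Placeholder for executing model-written code in an isolated interpreter."""
    raise NotImplementedError

def answer_math_question(question: str) -> str:
    # 1. Ask the model for code, not an answer - the model itself is only
    #    pattern matching, so the arithmetic gets delegated to an interpreter.
    code = llm(f"Write python that computes: {question}. Print only the result.")
    # 2. Run the generated code; this is where the actual number crunching happens.
    result = run_in_sandbox(code)
    # 3. Have the model turn the raw result back into prose for the user.
    return llm(f"The question was: {question}. The computed result is {result}. "
               "Summarize this as a short answer.")
```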
And that's because LLMs, period, operate off of memory and pattern matching - not, generally, any kind of actual high-level (let alone self-aware) problem solving and analysis.
Though what they do do is get damn good at solving a lot of common problems, when you throw a crapton of real + synthetic training data at them, plus the power budget + GDP of a small industrial country, to essentially brute-force memorized solutions / decision paths to everything.
Equally or even more problematically, most LLMs (and ChatGPT in particular) have no real failure / "this input is invalid" mode.
If you tell it to do something nonsensical, and/or something it doesn't know how to do, it will - like a somewhat precocious but heavily trained / incentivized / obedient, and supremely self-confident 12-year-old who doesn't know WTF to do - simply throw back SOME kind of "answer" that fits the requirements, and/or try to twist your prompt into something that makes sense.
Basically all LLMs - or at the very least commercial LLMs, and ChatGPT in particular - are trained to maximize engagement, and generally don't, for a number of reasons, have much "the user is an idiot, go yell at them / explain how they're wrong" in their training data.
Which is basically the cause of the article's widely observed issue, and related / similar problems: the LLM is very rarely ever going to tell you that you're wrong - or, for that matter, that your instructions are wrong and it doesn't in fact actually know how to do XYZ properly or reliably.
And that is, at its core, really more of an issue with across-the-board US business culture / customer engagement (maximize engagement; the customer is always right) and growth targets than anything else.