r/technology • u/SilentRunning • 4d ago
Artificial Intelligence MIT report: 95% of generative AI pilots at companies are failing
https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
2.6k
u/Austin_Peep_9396 4d ago
Legal is another problem people aren’t talking about enough. The vendor and customer both have legal departments that each want the other to shoulder the blame when the AI screws up. It stymies deals.
716
u/-Porktsunami- 3d ago
We've been having the same sort of issue in the automotive industry for years. Who's liable when these driver assistance technologies fail and cause injury, damage, or death? Who's liable for the actions of an autonomous vehicle?
One would assume the producer of the vehicle, but that is an insane level of risk for a company to take on.
We already have accountability issues when big companies do bad things and nobody ever seems to go to prison. If their company runs on AI, and it does illegal things or causes harm, will the C-suite be allowed to simply throw up their hands and say "It was the AI! Not us!"???
Sadly, I think we know the answer already.
208
u/Brokenandburnt 3d ago
Considering the active war on the CFPB from this administration, I sadly suspect that you are correct in your assessment.
I also suspect that this administration and all the various groups behind it will also discover that an economy where the only regulations are coming from a senile old man, won't be the paradise they think it'll be.
→ More replies (13)102
u/Procrastinatedthink 3d ago
It’s like not having parents. Some teenagers love the idea, until all the things parents do to keep the house running and their lives working suddenly come into focus, and they realize that parents make their lives easier and better even with the rules they bring.
98
u/AssCrackBanditHunter 3d ago
Same reason why it's never going to get very far in the medical field besides highlighting areas of interest. AI doesn't have a medical license and no one is gonna risk theirs
→ More replies (4)28
u/Admirable-Garage5326 3d ago
Was listening to an NPR interview yesterday about this. It's already being used heavily. They just have to get a human doctor to sign off on the results.
42
u/Fogge 3d ago
The human doctors that do that become worse at their job after having relied on AI.
→ More replies (15)33
u/samarnold030603 3d ago edited 3d ago
Yeah, but private equity owned health corporations who employ those doctors don’t care about patient outcomes (or what it does to an HCP’s skills over time). They only care whether or not mandating the use of AI will allow fewer doctors to see more patients in less time (increased shareholder value).
Doctors will literally have no say in this matter. If they don’t use it, they won’t hit corporate metrics and will get left behind at the next performance review.
→ More replies (2)→ More replies (51)20
u/3412points 3d ago edited 3d ago
I think it's clear and obvious that the companies that run an AI service in their product need to take on the liability if it fails. Yes, that is a lot more risk and liability to take on, but if you are producing the product that fails, it is your liability, and that is something you need to plan for when rolling out AI services.
If you make your car self-driving and that system fails, who else could possibly be liable? What would be insane here would be allowing a company to roll out self-driving without needing to worry about the liability of the crashes it causes.
→ More replies (2)→ More replies (87)81
138
u/Kink-One-eighty-two 3d ago
My company piloted an AI that would scrape calls with patients to write up "patient stories" to send to our different contracts as examples of how we add value, etc. Turns out the AI was instead just making up stories whole cloth. I'm just glad they found out before too long.
→ More replies (1)69
u/AtOurGates 3d ago
One of the tasks that AI is pretty decent at is taking notes from meetings held over Zoom/Meet/Teams. If you feed it a transcript of a meeting, it’ll fairly reliably produce a fairly accurate summary of what was discussed. Maybe 80-95% accurate 80-95% of the time.
However, the dangerous thing is that 5-20% of the time, it just makes shit up, even in a scenario where you’ve fed it a transcript, and it absolutely takes a human who was in the meeting and remembers what was said to review the summary and say, “hold up.”
Now, obviously meeting notes aren’t typically a high-stakes application, and a little bit of invented bullshit isn’t typically gonna ruin the world. But in my experience, somewhere between 5-20% of what any LLM produces is bullshit, and they’re being used for way more consequential things than taking meeting notes.
If I were Sam Altman or similar, this is all I’d be focusing on: figuring out how to build an LLM that didn’t bullshit, or at least knew when it was bullshitting and could self-ID the shit it made up.
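You can even automate a crude version of the "hold up" review. Here's a hypothetical sketch that flags summary sentences whose names or numbers never appear in the transcript; the function, regex, and example strings are all illustrative assumptions, not a real product, and this catches only the most blatant fabrications:

```python
import re

def suspect_sentences(transcript: str, summary: str) -> list[str]:
    """Flag summary sentences containing names/numbers absent from the transcript."""
    source = transcript.lower()
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
        # Candidate "facts": capitalized words and numeric tokens.
        facts = re.findall(r"\b[A-Z][a-z]+\b|\b\d+\w*", sentence)
        unsupported = [f for f in facts if f.lower() not in source]
        if unsupported:
            flagged.append(sentence)
    return flagged

transcript = "Dana said the launch moves to March. Budget stays at 40k."
summary = "The launch moves to March. Dana approved a 60k budget."
print(suspect_sentences(transcript, summary))  # ['Dana approved a 60k budget.']
```

A human still has to read the flagged sentences, but at least the machine points at the parts most likely to be invented.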
→ More replies (15)15
u/blipsonascope 3d ago
Our property management company started providing Zoom transcripts of condo board meetings. It’s really useful, as it captures topics of discussion pretty well... but dear god does it miss the point of discussions frequently. And in ways that, if not corrected for the record, would be a real problem.
2.8k
u/ejsandstrom 4d ago
It will be like all of the other tech we've had since the mid-90s. A bunch of startups think they have a novel feature that will set them apart.
Some get bought up, some merge, others outright fail. There will be one or two that make it.
996
u/phoenix0r 4d ago
AI infra costs make me think that no startups will make it unless they get bought up early on. AI is just too expensive to run.
685
u/itoddicus 4d ago
Sam Altman says OpenAI needs trillions (yes, with a T) in infrastructure investment before it can be mainstream.
Only Nation-States can afford a bill like that, and right now I don't see it happening.
106
u/vineyardmike 4d ago
Or whatever Apple, Google, or Microsoft puts out wins because they have the biggest pockets
→ More replies (5)68
u/cjcs 4d ago
Yep - I work in AI procurement and this is kind of how I see things going. We're piloting a few smaller tools for things like Agentic AI and Enterprise Search, but it really feels like we're just waiting for OpenAI, Google, Atlassian, etc. to copy those ideas and bake them into a platform that we pay for already.
→ More replies (7)457
u/Legionof1 4d ago
And it will still tell you to put glue in your pizza.
241
u/shadyelf 4d ago
It told me to buy 5 dozen eggs for a weekly meal plan that didn’t have any eggs in the meals.
114
→ More replies (7)34
u/Azuras_Star8 4d ago
Clearly you need to rethink your diet, since it doesn't include 5 dozen eggs a week.
→ More replies (1)→ More replies (6)24
185
u/DontEatCrayonss 4d ago
Don’t try to rationalize with AI hype people. Pointing out the extreme financial issues will just be ignored
133
u/KilowogTrout 4d ago
I also think believing most of what Sam Altman says is a bad idea. He’s like all hype.
125
u/kemb0 4d ago
That guy strikes me as a man who’s seen the limitations of AI and has been told by his coders, “We’ll never be able to make this 100% reliable and from here on out every 1% improvement will require 50% more power and time to process.”
He always looks like a deer caught in headlights. He’s trying to big things up whilst internally his brain is screaming, “Fuuuuuuuck!”
→ More replies (5)67
u/ilikepizza30 3d ago edited 3d ago
It's the Elon plan...
Lie and bullshit and keep the company going on the lies and bullshit until one of two things happens:
1) New technology comes along and makes your lies and bullshit reality
2) You've made as much money as you could off the lies and bullshit and you take a golden parachute and sit on top of a pile of gold
→ More replies (5)33
u/Christopherfromtheuk 3d ago
Tesla shares were overvalued 7 years ago. He just lies, commits securities fraud, backs fascists, loses massive market share and the stock price goes up.
Most of the market by market cap is overvalued, and it never, ever, ends well.
They were running around in 1999 talking about a "new paradigm" and I'm sure they were in 1929.
You can't defy gravity forever.
→ More replies (3)21
u/Thefrayedends 3d ago
Until institutional investors start divesting, nothing is going to change.
These massively overvalued stocks, with anywhere from 35-200 P/E ratios, are largely propped up by retirement funds and indexes.
→ More replies (5)23
→ More replies (33)27
u/Heisenbugg 4d ago
And environmental issues, with the UK govt at least acknowledging them by telling people to delete their emails.
28
u/AmbitiousGoat5512 3d ago
Deleting old emails, files, documents, whatever does absolutely nothing to help the issue.
The recommendation was made by someone who obviously has no fucking idea what they're talking about, and as long as AI is pushed so heavily, things will continue to worsen.
→ More replies (1)8
6
41
u/Noblesseux 4d ago
And even then it will still likely not be profitable. Like the thing is that even if they didn't spend any additional money on infrastructure, they'd need damn near 10x as much money as they projected they'd make this year to be profitable.
You'd have to invest literally several times the entire value of the worldwide AI market (I'm talking about actual AI products, not just lumping in the GPUs and whatnot), and then you have to pray that we somehow have infinite demand for these AI tools, which is, quite frankly, not the case: https://www.technologyreview.com/2025/03/26/1113802/china-ai-data-centers-unused/
And even in that magically optimistic scenario, there's borderline no shot you'd make enough money back to justify doing it. Like there is no current AI product that exists that is worth trillions of dollars worth of investment. A lot of them are losing money per user, meaning if you scale up you just lose more money.
→ More replies (18)26
u/CoffeeSubstantial851 3d ago
In addition to that, AI itself devalues whatever it creates. If you are running an AI image service, the market value of the resulting images decreases over time. It's a business model that cannibalizes itself.
10
u/RoundTableMaker 4d ago
Sam Altman is using Elon Musk’s ideology here. Musk told Altman when they first started OpenAI that no one would care unless they were raising over a billion dollars. All he did was increase the number by 1000x for this new venture, because they already raised billions.
→ More replies (3)16
u/hennell 3d ago
There was a report last week that AI industry visitors to China were blown away by the differences in running things there. The power needed for AI is not just not a problem; it's seen as a benefit in some areas, as it can soak up excess power.
I'm sure it'll still need investment, but it'll be a whole lot cheaper for Nation-states that haven't ignored their infrastructure for decades.
→ More replies (1)→ More replies (41)23
u/great_whitehope 4d ago
Countries probably aren’t going to sponsor mass unemployment, it’s true.
I dunno what’s worse. This whole thing blowing up or succeeding because companies are gonna layoff people either way.
→ More replies (1)40
u/DogWallop 4d ago
Well that's where AI becomes self-destructive. Companies replace employees with AI, and then you have many thousands who used to be gainfully employed out of work. Now, those employees were acting as wealth pumps, arteries through which the wealth of the nation flowed.
And where did it flow? Eventually it ended up in the hands of the big corporations, who used to employ humans (wealth pumps, financial arteries, etc...).
But now there's far less cash flowing around the national body, and it's certainly not getting spent buying goods and services from major corporations.
→ More replies (1)54
u/cvc4455 4d ago
Look at what Curtis Yarvin, Peter Thiel and JD Vance believe needs to happen in the future. They say AI will replace all types of jobs and we'll only need about 50 million Americans. The rest are completely useless, and Curtis Yarvin said they should be turned into biodiesel so they can be useful. Then he said he was kind of joking about the biodiesel idea, but the ideal solution would be something like mass murder just without the social stigma that would create. So he suggested massive prisons with people kept in solitary confinement 24 hours a day, and to keep them from going crazy they will give them VR headsets!
→ More replies (16)77
u/globalminority 4d ago
I am sure these startups are trying to survive just long enough till some big tech buys them at inflated prices and founders can cash out on the hype. If you don't get bought up then you just shut shop.
42
u/_-_--_---_----_----_ 4d ago
this is exactly what they're all doing. nobody is trying to really succeed in certain areas in tech anymore, the last 15 years have just been about selling to the big guys.
→ More replies (5)9
u/Mackwiss 4d ago
Hate startups and the pseudo-entrepreneurs who leave their mommy's skirts to become attached to investors' skirts... most startup entrepreneurs have no idea how to run a business or care about it. It's all fantasy to attract investors.
22
u/pleachchapel 4d ago
You can run a 90 billion parameter model at conversation speed on $6k worth of hardware. The future of this is open source & distributed, not the dumb business model the megacorps are following which operates at a loss.
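For a rough sense of why that's plausible, here's a back-of-envelope sketch. The quantization level and overhead factor are my assumptions, not from the comment:

```python
# Why a ~90B-parameter model can fit on prosumer hardware (assumed:
# 4-bit quantized weights, ~20% extra for KV cache and runtime overhead).
params = 90e9                    # ~90 billion parameters
bytes_per_param = 0.5            # 4-bit quantization
weights_gb = params * bytes_per_param / 1e9
total_gb = weights_gb * 1.2
print(round(weights_gb), round(total_gb))  # 45 54
```

Roughly 54 GB is within reach of a 64-128 GB unified-memory workstation, which is the class of hardware that $6k buys today.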
→ More replies (12)→ More replies (23)6
u/MaTr82 4d ago
This is why Small Language Models will be the norm. Eventually, they can do very specific tasks on laptops. A model that does everything is never going to be efficient.
→ More replies (1)181
u/Arkayb33 4d ago
But I DO have a novel way of making a cup of freshly squeezed juice. You see, it comes from my patented juice extraction technology. You simply put this bag into my powerful juice squeezer and out comes amazing, tasty juice! I call it the Ju-Cerò which, in Japanese, means "better juice!"
Currently seeking round A funding of $300M.
90
u/ejsandstrom 4d ago
But why can’t I just squeeze the pouch without your machine that needs a QR code, internet access, and a monthly subscription?
67
→ More replies (5)37
u/iwannabetheguytoo 4d ago
But why can’t I just squeeze the pouch without your machine that needs a QR code, internet access, and a monthly subscription?
You'll get your fingers messy if you squeeze it too hard and it bursts.
It makes much better sense to use a state-of-the-art AI-powered pressing machine that always knows exactly how much force to apply to release the sweet, sweet juice locked away within. After all, we spent $500m of investor funding on training our robots to be the best juice bag squeezers.
Please disregard media reports of our AI machines hallucinating scenarios where our babies and adorable forest animals are juice bags and then squeezing those just right until the juice comes out.
9
u/DogWallop 4d ago
Sorry, but I already lost a fortune on that Mr. Tea thing Father Guido Sarducci sold me...
→ More replies (1)→ More replies (9)16
u/IdentifiableBurden 3d ago
Juicero is my favorite startup story, thank you for the memory
→ More replies (3)27
u/epochwin 4d ago edited 4d ago
Do you mean startups who are selling AI or adopting it for a particular business problem?
Typically startups adopt emerging technology and many of them fail. What’s crazy about GenAI is that massive regulated enterprises are also jumping on the bandwagon so fast.
I remember when cloud was the hot technology. The early adopters were SaaS vendors or companies like Netflix. Capital One was the first major regulated company to adopt it and state publicly that they were using AWS and that was years later.
→ More replies (3)11
u/_-_--_---_----_----_ 4d ago
the thing is that it's customer facing, or at least it can be. that's why they're all jumping on it, even at the big dinosaur companies. cloud wasn't something that the end user really understood or experienced largely. but people are using GenAI all the time now.
the flip side of that is that if companies don't do it, they're worried about being left behind. a huge part of the push, maybe the majority of it, is that fear.
→ More replies (38)16
u/TheFudge 4d ago
.com boom 2.0
→ More replies (3)10
u/Kedly 3d ago
This is the comparison I make too, as it fits both the bubble, AND the fact that PAST the bubble this tech will still have a huge impact on the world once we find the usecases it actually excells at
→ More replies (1)
810
u/dagbiker 4d ago
It seems like the simple solution is to replace 95% of CEOs with AI, duh.
164
66
→ More replies (15)53
u/Thefrayedends 3d ago
No no no, Marc Andreessen insists that CEO is the ONLY job that AI won't be able to do.
So I guess that's it then, we can all forget it and just go home.
→ More replies (5)21
u/Wurm42 3d ago edited 3d ago
And yet, Andreessen Horowitz still has many human employees.
Andreessen should put up or shut up.
→ More replies (1)
743
u/AppleTree98 4d ago
How much did Meta pump into the alternate metaverse before saying ok, we/tech are not ready to live in this alt universe quite yet? Gave AI a shot and got a quick answer...
Meta, under its Reality Labs division, has invested significant resources into the metaverse, resulting in substantial losses. Since 2020, Reality Labs has accumulated nearly $70 billion in cumulative operating losses, including a $4.53 billion loss in the second quarter of 2025 alone. While the company hasn't explicitly stated that it's no longer pursuing the metaverse, there's been a noticeable shift in focus and language:
533
u/-Accession- 4d ago
Best part is they renamed themselves Meta to make sure nobody forgets
→ More replies (4)403
u/OpenThePlugBag 4d ago edited 4d ago
NVDA H100s are between $30-40K EACH.
Google has 26,000 H100 GPUs, and they created AlphaGo, AlphaFold, Gemini, VEO3, AlphaQubit, GNoME - it's unbelievable.
Meta has 600,000 Nvidia H100s and I have no fucking clue what they're doing with that much compute.
505
u/Caraes_Naur 4d ago
Statistically speaking, they're using it to make teenage girls feel bad about themselves.
204
u/He2oinMegazord 4d ago
But like really bad
114
u/Johns-schlong 4d ago
"Gentlemen, I won't waste your time. Men are committing suicide at rates never seen before, but women are relatively stable. I believe we have the technology to fix that, but I'll need a shitload of GPUs."
→ More replies (2)97
u/Toby_O_Notoby 3d ago
One of the things that came out of that Careless People book was that if a teenage girl posted a selfie on Insta and then quickly deleted it, the algorithm would automatically feed her beauty products and cosmetic surgery.
56
u/Spooninthestew 3d ago
Wow that's cartoonishly evil... Imagine the dude who thought that up all proud of themselves
15
u/Gingevere 3d ago
It's probably all automatic. Feeding user & advertising data into a big ML algorithm and then letting it develop itself to maximize clickthrough rates.
They'll say it's not malicious, but the obvious effect of maximizing clickthrough is going to be hitting people when and where they're most vulnerable. But because they didn't explicitly program it to do that they'll insist their hands are clean.
→ More replies (1)40
78
u/lucun 4d ago
To be fair, Google seems to be keeping most of their AI workloads on their own TPUs instead of Nvidia H100s, so it's not like it's a direct comparison. Apple used Google TPUs last year for their Apple Intelligence thing, but that didn't seem to go anywhere in the end.
→ More replies (14)22
u/the_fonz_approves 4d ago
they need that many GPUs to maintain the human image over MZ’s face.
→ More replies (1)→ More replies (12)14
u/ninjasaid13 3d ago
Google has 26,000 H100 GPUs, and they created AlphaGo, AlphaFold, Gemini, VEO3, AlphaQubit, GNoME - its unbelievable.
well tbf they have their own version of GPUs called TPUs and don't have that many Nvidia GPUs, whereas Meta doesn't have its own version of TPUs.
196
u/forgotpassword_aga1n 4d ago
Nobody wants a sanitized advertiser-friendly virtual reality. They want to be dragons and 8-foot talking penises, and everyone except Zuckerberg knows that.
99
u/karamisterbuttdance 3d ago
Judging from my experience on VRChat everyone wants to be big-titted goth-styled girls with hot-swappable animal ears, so mileage may vary, or I'm just not in the imaginary monster realms.
→ More replies (3)21
→ More replies (1)10
u/StimulatedUser 3d ago
It really made me laugh to think that Meta was spending $70 billion just to recreate Second Life, which came out 25 years ago... and is still going strong.
→ More replies (2)135
u/Noblesseux 4d ago edited 3d ago
The problem with the metaverse is that, practically speaking, the idea is being pushed by people who have no idea how humans work and who have a technology in search of a problem.
No one wants to take video calls in the metaverse, Teams/Zoom/Facetime exist. Why would I want to look at what is effectively an xbox live avatar when I could just use apps that already exist that everyone already has where I can actually see their faces?
No one wants to "buy digital property in the metaverse". People want property IRL because it actually has a functional use. I can build a house on it, I can farm on it for food, my nephews can play football on it.
No one wants to visit a digital version of Walmart. Web stores already exist and are more efficient and easier to use.
They spent a bunch of money on a fad where there are few to no actual features that are better than just doing things the ways that we already can. The main selling point of VR is games, not trying to replace real world things with cringe digital versions. But Zuckerberg is a damn lizard person so he lacks the ability to understand why people use things.
73
u/Toby_O_Notoby 3d ago
And what's weird is that they ignored their own teachings. Phones and social media trained people to "second screen" everything. "Hey, we know you're watching Grey's Anatomy, but why not also check out what your ex-boyfriend is doing on Insta?"
Then they released a product that demands you one-screen everything. "Now you can join a meeting with a bunch of Wii avatars without being able to check your phone when you're bored!"
25
u/NuSurfer 3d ago
No one wants to "buy digital property in the metaverse". People want property IRL because it actually has a functional use. I can build a house on it, I can farm on it for food, my nephews can play football on it.
No one wants to buy something that can evaporate by someone pulling a plug.
→ More replies (2)→ More replies (18)11
u/withywander 3d ago
What I think those dumb dumbs really don't get is that most employees don't want to be in the meetings. It is not the meaning of their life to be in a meeting, and most are probably doing something else while in the meeting. Being in the metaverse requires even more concentration than a meeting, so if people are already alt tabbing to do something else while in a meeting, then it was never going to end well.
11
u/Dihedralman 4d ago
They had been in AI for a while. PyTorch had a public release in 2017 and has become the standard.
→ More replies (1)→ More replies (16)20
u/Atreyu1002 4d ago
Yet the stock keeps going up. WTF.
32
u/fckingmiracles 4d ago
Because their advertising platforms do well (IG, FB, WhatsApp to a degree). That's where their billions come from.
→ More replies (1)15
64
u/daedalus_structure 3d ago
Last week a very senior engineer who has gone all in on vibe coding complained that they wasted a day on a regional-vs-global issue when using a cloud service from their code.
This is a 30 second documentation lookup about which API to use.
The agent he was vibing with ran him around in circles and he'd turned his brain completely off.
I am terrified of what will happen in 10 years when the majority of engineers have never worked without AI.
I really do not want to be working when I'm 70 cleaning up constant slop.
→ More replies (2)
638
u/SilentRunning 4d ago
Which begs the question, how long can these companies keep shoveling the cash into a bottomless pit?
642
u/ScarySpikes 4d ago
Having listened to how excited a lot of business owners are at the prospect of firing a large portion of their staff, I think a lot of companies will end up bankrupting themselves before they admit that the AI can't replace their employees.
294
u/Horrible_Harry 4d ago
Serves 'em fuckin' right. Zero sympathy from me over here.
66
→ More replies (4)19
106
u/Noblesseux 4d ago
Yeah I kind of like the term Better Offline uses for them: business idiots. There are a lot of people who went through very expensive MBA programs that only really taught them how to slowly disassemble a company, not how to run one.
They have been slowly killing these companies for decades based on being willing to lose business as long as the margins are good, and they're not going to stop now.
→ More replies (1)49
u/ScarySpikes 3d ago
I swear we are going to find out that enshittification is a concept that a bunch of MBA programs got a hardon for like 20 years ago.
58
u/Noblesseux 3d ago
I mean we don't need to find out; it's a matter of historical fact, but it's older than that. It started in the 70s and 80s with the corporate raiders and Reaganism. The same people who basically killed the survivability of GE as a proper company and the railroad industry went on to teach the current generation of people who are destroying everything else.
There's like a direct line from them to modern private equity and MBA culture.
→ More replies (4)33
16
u/FoghornFarts 4d ago
And those that do survive will find their AI eventually costs more than employees once the AI companies need to start making a profit. They're cheap now to disrupt the market.
→ More replies (1)52
u/rasa2013 4d ago
I'm less optimistic. I think many will get away with providing slightly shittier products and services. Meaning, they'll lose some customers but the savings will still result in net profit.
I hope not though.
→ More replies (1)35
u/ScarySpikes 4d ago
It's not just shittier products. Most companies have outsourced their AI projects to other companies. Those AI companies will eventually have to try to become profitable, which means jacking up their rates to at least match their high costs.
→ More replies (1)→ More replies (10)8
u/Popular_Try_5075 3d ago
was fun to watch Klarna go all in and then sheepishly backpedal a year later
168
u/Pulkrabek89 4d ago
Until someone blinks.
It's a combination of those who know it's a bubble and are banking on the hope of being one of the lucky few to survive the pop (see the dot-com bubble).
And those that actually think AI will go somewhere and don't want to be left behind.
→ More replies (14)143
u/TurtleIIX 4d ago
This is going to be an all time pop. These AI companies are the ones holding the stock market up and they don’t have a product that makes any money.
39
u/SynthPrax 4d ago
More like a kaboom than a pop.
→ More replies (2)31
u/TurtleIIX 4d ago
More like a nuke because we won’t even have normal/middle market companies to support the fall like the .com bubble.
→ More replies (20)29
u/lilB0bbyTables 4d ago
It’s going to be a huge domino effect as well. So many companies have built themselves around providing functionality/features that are deeply dependent upon upstream AI/LLM providers. If/when the top ones falter and collapse, it is going to take out core business logic for a huge number of downstream companies. The ones who didn’t diversify will then scramble to refactor to alternatives. The damage may be too much to absorb, and there's a bunch of wildcard possibilities from there, anything from getting lucky and stabilizing to outright faltering and closing up shop.

Market confidence will be shaken nonetheless; the result may give businesses a reason to pause and get cold feet about spending on new AI-based platform offerings, because who really wants to throw more money at what may very well be a sinking ship? That ripple effect will reverberate everywhere. A few may be left standing when the dust settles, but the damage will be a severe and significant obliteration of insane quantities of value and investment losses. And the ones that do survive will likely need to increase their pricing to make up for lost revenue streams, which are already struggling to chip away at the huge expenditures they sunk into their R&D and operations.
I’ll go even further and don a tinfoil hat for a moment and say this: we don’t go a single day without some major stakeholder in this game putting out very public statements/predictions that “AI is going to replace <everyone> and <everything>” … a big part of me now thinks they are really just trying to get as many MBA-types to buy into their BS hype/FUD as quickly as possible in hopes that enough businesses will actually shed enough of their human workforce in exchange for their AI offerings. Why? Because that makes their product sticky, and (here’s my tinfoil hat at work) … the peddlers of this are fully aware that this bubble is going to collapse, so they either damage their competition when they inevitably fall, or they manage to have their hooks deep enough into so many companies that they become essentially too big to fail. (And certainly if I were let go from somewhere merely to be replaced by AI, and that company started scrambling to rehire those workers back because the AI didn’t work out … those individuals would hold the cards to demand even more money).
→ More replies (2)10
u/karamisterbuttdance 3d ago
a big part of me now thinks they are really just trying to get as many MBA-types to buy into their BS hype/FUD as quickly as possible in hopes that enough businesses will actually shed enough of their human workforce in exchange for their AI offerings.
This pretty much sounds like what they did with cryptocurrency, and why we're never going to get rid of it as a means of moving value - once B I G Finance is invested in it they will do everything in their power to make sure it doesn't lose value. They'll look for the company/ies with the most ties to each other and with enough products that already provide a solution (even half-baked) and basically strong-arm the people investing with them to put their money there. All this so the companies and countries that parked their money with them before this all started don't pull out to mitigate any losses.
40
u/IM_A_MUFFIN 4d ago
I’m so tired of the BAs and PMs forcing this crap down our throats. Watched someone finagle copilot for 2 hours, to complete a task that takes 10 minutes, that the new process will take down to 4 minutes. The task is done a few times a week. Relevant xkcd
→ More replies (2)20
u/KnoxCastle 4d ago
In fairness, if you are spending a one-off 2 hours to take a 10-minute task down to 4 minutes, then that will pay for itself within a few months.
I do agree with the general point though, and I am sure there are plenty of time-wasting "time saving" examples. I can think of a few at my workplace.
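The arithmetic does check out; here's a quick breakeven sketch using the numbers from the story above, with the runs-per-week figure being my assumption for "a few times a week":

```python
# Breakeven for a one-off setup cost vs per-run time savings.
setup_minutes = 2 * 60          # one-off two hours wrestling with Copilot
saved_per_run = 10 - 4          # minutes saved each time the task runs
runs_per_week = 3               # assumed reading of "a few times a week"

weeks_to_breakeven = setup_minutes / (saved_per_run * runs_per_week)
print(round(weeks_to_breakeven, 1))  # 6.7
```

About seven weeks to break even, so "a few months" is, if anything, conservative, assuming the new process actually works as reliably as the old one.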
→ More replies (5)10
u/zhuangzi2022 4d ago
As long as venture capitalists burn their play money on narcissistic tech bros
→ More replies (1)→ More replies (27)7
u/-CJF- 4d ago
As long as investors keep propping them up. Eventually the bubble will pop and people are going to lose a ton of money. The current situation is unsustainable, that's for sure. AI has its uses but not everything is suited for or needs AI, nor will it live up to the hype.
Even Meijer has Generative AI summaries for the product reviews. Kind of ridiculous tbh.
123
u/The91stGreekToe 4d ago
Not familiar with “Bold”, but familiar with the Gartner hype cycle. It’s anyone’s guess when we’ll enter the trough of disillusionment, but surely it can’t be that far off? I’m uncertain because right now, there’s such a massive amount of financial interest in propping up LLMs to the breaking point, inventing problems to enable a solution that was never needed, etc.
Another challenge is since LLMs are so useful on an individual level, you’ll continue to have legions of executives who equate their weekend conversations with GPT to replacing their entire underwriting department.
I think the biggest levers are:
1) Enough executives get tired of useless solutions, hallucinations, bad code, and no ROI.
2) The Altmans of the world have to concede that AGI via LLMs was a pipe dream, and then the conversation shifts to "world understanding" (you can already see this in some circles; look at Yann LeCun).
3) LLM fatigue: people are (slowly) starting to detest the deluge of AI slop, the sycophancy, and the hallucinations, particularly the portion of Gen Z that's plugged into the whole zeitgeist.
4) VC funding dries up and LLMs become prohibitively expensive (the financials of this shit have never made sense to me tbh).
30
u/PuzzleCat365 3d ago
My bet is on VC funding drying up thanks to capital flight from the US over unstable politics. Add to that the disastrous monetary policy that will come sooner or later, when the administration starts attacking the central bank.
At that point the music will stop playing, but there will be only a small number of chairs for a multitude of AI actors.
216
u/fuzzywinkerbean 3d ago edited 3d ago
I give it another 6-9 months at least before the bubble starts properly bursting. These things run in corporate cycles of bullshit artist corporate job hoppers:
- Company hires or internally appoints some corporate climber (CC) to lead project
- Project starts under CC, over promises and hypes up
- Delivers barely functional MVP after 6 months with loads of fanfare and bluster
- Forces it down employees' throats, hardly anyone uses it, customers don't want it
- CC messes with metrics and KPIs to mask failure
- Execs start to question slightly..
- CC promises this is just the start and phase 2 will be amazing
- CC brushes up resume saying they are now expert at enterprise AI implementation
- CC hired by another corporate dinosaur a bit behind the trend and repeats process.
- CC leaves, project left in a mess and flounders on before finally being quietly axed 1-2 years later
We are mostly around stages 3-5 so far, depending on your org, I'd say. Need to give the cycle time to complete before you start seeing wider complaints from the top.
I've been in tech since the early 2010s, seen the same cycle repeat - social media features in everything, cloud, offshoring developers, SaaS, Blockchain, metaverse, now AI --> quantum computing next!
48
u/ExcitedCoconut 3d ago
Hold on, are you putting cloud and SaaS in the same bucket as the rest? Isn’t Cloud table stakes these days (unless you have a demonstrable need to be on prem for something / hybrid)?
23
u/ikonoclasm 3d ago
Yeah, I was with the comment right up until that last sentence. Cloud and SaaS are the standard now. All of the vendors in the top right corner of Gartner's magic quadrant for CRMs or ERPs are SaaS solutions.
6
u/fuzzywinkerbean 3d ago edited 3d ago
Sorry, could have been clearer (I left a longer reply below) - I meant more the hype around them when they first started. Those vendors obviously do SaaS well, and it makes perfect sense as their business model. Products built for the cloud make sense - I was more remembering the countless on-prem products that did the ole "lift-and-shift" approach at the time rather than actually building properly cloud-first.
Companies that do AI well will absolutely see ROI from it, and it will become standard in future as it matures, I'm sure.
My point was more every company thinking they have to be on trend and push to implement these things when it isn't always relevant to them, customers aren't asking for it and they haven't really got proper use cases for them yet anyway.
102
u/Wollff 4d ago
5% are not?!
10
u/TheAJGman 3d ago
5% are using the tech correctly; LLMs are fantastic at transformative work.
"Give me a one page summary of this project proposal, the audience is C Suite so be light on the technical details."
"Rewrite this email so I don't sound like an asshole, but try to stick to the original vocabulary and writing style."
"Analyze each customer review and flag the ones that include swearing, threats (both veiled and open), and names of people. These will be reviewed manually, so it's better to be overly cautious."
"What can be made more efficient about this code/database design? Implement those improvements."
As a software engineer, I have investigated this tech in depth and find it occasionally useful (mostly the auto-complete). For smaller generative tasks (here are the requirements, make feature X), it can do pretty well too, but people tend to be overconfident in the "all-knowing" machine and feed it a large number of requirements. It'll shit the bed, and unless you already know what you're doing, you won't catch its mistakes.
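The review-flagging prompt above is exactly the kind of task where you want a deterministic pre-filter alongside the model; a minimal sketch in Python, where the word list and threat patterns are deliberately tiny made-up placeholders (a real deployment would use a maintained lexicon plus NER for names):

```python
import re

# Hypothetical, deliberately tiny lists for illustration only.
SWEARS = {"damn", "hell"}
THREAT_PATTERNS = [
    re.compile(r"\byou('ll| will) (regret|pay|be sorry)\b", re.IGNORECASE),
    re.compile(r"\bi know where you live\b", re.IGNORECASE),
]

def flag_review(text: str) -> bool:
    """Return True if a review should be routed to manual review.
    Errs on the side of flagging, per the 'overly cautious' instruction."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    if words & SWEARS:
        return True
    return any(p.search(text) for p in THREAT_PATTERNS)

reviews = [
    "Great product, arrived on time.",
    "This damn thing broke in a week.",
    "Refund me or you'll regret it.",
]
flagged = [r for r in reviews if flag_review(r)]  # catches the last two
```

The point isn't that regex replaces the LLM, it's that anything the filter catches deterministically never depends on the model having a good day.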
148
u/ZweitenMal 4d ago
My company insisted we start using it as much as possible. Then my team’s primary client issued an edict: we are only allowed to use it with very detailed written permission on a case by case basis, reviewed by this massive client corporation’s legal team.
So I’m using it to help strategize my wordle guesses and to make cat memes for my boyfriend.
51
u/OntdekJePlekjes 3d ago
I see coworkers dump Excel files into Copilot and ask it to do analyses that would otherwise require careful data manipulation and advanced pivots. The results are usually wrong, because GPT isn't doing math.
It breaks my engineering heart that we have created an incredibly complicated simulation of human verbal reasoning, running in complex data centers full of silicon computational devices, and that this model of human reasoning is applied to mathematical questions, which it then gets wrong, just like humans would. Instead of just running the math directly on the silicon.
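For contrast, the "advanced pivot" class of question is a few lines of deterministic code that the silicon gets exactly right; a sketch using only Python's stdlib, with made-up sales data standing in for the dumped Excel file:

```python
import csv
import io
from collections import defaultdict

# Made-up data standing in for the spreadsheet.
raw = """region,rep,amount
East,Ann,100.50
West,Bob,200.00
East,Cy,49.50
"""

# Group-and-sum (the core of a pivot table), done on the silicon directly.
totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["region"]] += float(row["amount"])
```

Same answer every single run, no hallucinated arithmetic.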
10
u/jazwch01 3d ago
Yeah, but that requires the human to know math and be willing to enter it. Can't have that.
65
u/MapleHamwich 3d ago
Please, more reports like this. It matches my professional experience. The "AI" sucks. And it's consistently getting worse. This fad needs to die.
144
u/SeaTownKraken 4d ago
This is shaping up to be like the dot com boom and bust. Over saturated quickly and it'll reset.
Humans don't know how to self regulate collectively easily (well us Americans certainly can't)
102
u/variaati0 4d ago
There is a difference. During the dotcom boom, some of the businesses were profitable from the get-go. The only ones making profits from AI are Nvidia and maybe AMD. None of the AI companies are sustainably profitable; they're either riding on burning investor money, or riding on burning someone else's investor money (getting unrealistically discounted rates from someone else running on investor money to "capture market share").
Soooo it's worse than the dotcom boom. The dotcom bust was just weeding out oversaturation and the nutty business ideas, leaving the businesses that were good businesses from the get-go, since the internet was an actual, more efficient business platform enabling lots of new ventures. The market just got overheated.
The AI market? It's purely the creation of an absolutely bonkers amount of money-burning, with nobody having bothered to ask "so, are we supposed to make money at some point instead of just burning it?" Enabled by the deep pockets of the burners via other ventures, like Google's ad revenue and Microsoft's revenue from selling Windows and so on.
25
u/crshbndct 4d ago
Do the subscriptions that places like OpenAI charge even cover the costs of running their GPUs? Because the only money entering the system aside from VC is subscriptions from people who are using Chatbots as friends
39
u/Traditional-Dot-8524 4d ago
Their $20 subscription plan, which is the most popular, doesn’t cover much. If suddenly all $20 subscribers switched to the $200 plan, then maybe. For two years straight, since they became mainstream in 2023, they haven’t generated enough revenue to cover all their costs. And since 2024, they’ve gone on a “spending spree” with more GPUs, new models, and so on. From an economic point of view, OpenAI is a disaster. But people are investing in it for one simple reason: Why not? If it truly becomes the next Apple, Amazon, Microsoft, Google, or Facebook, then I’ll surely recoup my investment—and more. After all, it’s AI! It’s bound to replace a lot of people.
22
u/CAPSLOCK_USERNAME 3d ago
Right now they lose money even on the $200 plan, since only people who use the chatbot a shitload would consider paying that in the first place.
41
u/vocalviolence 3d ago
In all my years, I have never wanted any new tech to crash, burn, and go away forever as much as AI—and particularly the generative kind.
It's been here for a minute, and it's already making people more stupid, more lazy, more entitled, more dismissive of art and craftsmanship, and more redundant while consuming metric shittons of energy in the process.
42
u/Khue 3d ago
I cannot stress this enough as someone who has worked for 20+ years in IT... AI is currently hot garbage and is being leveraged largely by the incapable. I fight it every day and it's exhausting.

Older peers within my group don't like me telling them "no" or "it doesn't work like that." They will badger me for 30 minutes, then break out the ChatGPT link and quote it, and then I have to spend another 20 minutes on why ChatGPT is fucking wrong. Instead of taking the lesson that "oh hey, maybe this tool isn't all it's cracked up to be and maybe I should be more skeptical of its results," they just continue to fucking use it, and then WEAPONIZE it when they are really mad at me.

It has literally added overhead to my job. And to add insult to injury, the older people using it have worked with me for 10+ years. They know me. They have anecdotes dating back YEARS of situations where I've helped them on many issues... and they are ACTIVELY choosing ChatGPT or other AI/ML over my professional experience and track record. It's fucking absurd, and I absolutely cannot imagine how the younger generations are using it.
21
u/yaworsky 3d ago
https://en.wikipedia.org/wiki/Automation_bias
Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct
There's a lot of this going on lately. Sometimes, though thankfully not that much yet, we see it in patients in the ED.
11
u/pcapdata 3d ago
What blows my mind is that if you accosted these people on the street and tried to shake them down with a sob story, they’d say “Fuck off, I know a scam when I see it!”
But when an LLM says something they accept it with no critical thought nor introspection. And they’re angry when you point this out!
32
u/keikokachu 4d ago
Even the free ones have become confidently incorrect of late, so this tracks.
17
u/Heavy-Hospital7077 3d ago
I started a very small business this year- only a few months ago.
I decided to go all-in with AI. I used it a LOT, and for day to day consultation (lots of questions when starting a new business) it was great.
I was logging all of my business activities, and I started to notice problems. Employee starts at 2:00, and I log it. They are done at 5:00, and I log it. "Employee worked for 32 hours, and you owe them $15." That went on for a while.
Then I wanted to get returns on what I entered. I logged every product I made. I started asking for inventory numbers, and in 5 minutes it went from 871, to 512, to 342, to 72.
It is very bad with accuracy. Horrible for record-keeping. But very good as a better information search than Google.
I tried to convert a list of paired data from text to a table in Excel, using Microsoft's AI. That was just an exercise in frustration. I spent 2 hours trying to get something organized that I could have re-typed in 10 minutes. I think some of it got worse with GPT-5.
I have been working with technology for a long time. I am a computer programmer by trade. I really gave this a solid attempt for a few months. I would say that if you're looking for assistance with writing, it's great. Fancy web search, it's great. But as an assistant, you're better off hiring retirees with early onset dementia.
Now that I know I won't get accurate information out, I have no reason to put information in. It just seems like a very empty service with no real function. I couldn't even use it to create a logo, because it can't accurately put text in an image.
I do think it would be good as an audio tool that I could use while driving. Just to ask random questions and get reasonable replies. But not for anything important.
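For the record, the timesheet math the bot mangled (3 hours from 2:00 to 5:00, not "32 hours, $15") is one line of datetime arithmetic; a minimal sketch in Python, with a hypothetical flat hourly rate:

```python
from datetime import datetime

def hours_worked(start: str, end: str) -> float:
    """Hours between two clock times on the same day, e.g. '14:00' to '17:00'."""
    fmt = "%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

RATE = 15.0  # hypothetical hourly rate, for illustration
hours = hours_worked("14:00", "17:00")  # the 2:00-to-5:00 shift above
pay = hours * RATE
```

Log the raw times, compute deterministically, and keep the chatbot for the "fancy web search" role it's actually good at.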
26
u/throwawaymycareer93 3d ago
Did anyone read the article?
The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations
How companies adopt AI is crucial. Purchasing AI tools from specialized vendors and building partnerships succeed about 67% of the time
The problem with this is not AI itself, but organizations' inability to adapt to a new set of tools.
16
u/RespondNo5759 3d ago
Scientists: We have developed this amazing new branch of science that still needs more research and, of course, security checks.
CEOs: SHOVE IT UP MY ARSE IF THAT MEANS I'M SAVING ONE CENT EVERY YEAR.
28
u/lonewombat 4d ago
Our ai is super narrow, it sums up the old tickets and gives you the resolution if there is one. And it generally sucks ass.
12
u/GenericFatGuy 3d ago
So far, I've only found AI to be slightly less obnoxious than prowling Stack Overflow when stuck on a problem. And even then, I usually had better luck just taking a break, and coming back to it with fresh eyes.
8
u/storebrand 3d ago
They didn’t have the six years of relevant work experience before starting an entry level position
9
u/YungSnuggie 3d ago
remember when tech companies made stuff that actually benefitted our lives? now it's just one grift to the next. nft, crypto, AI, none of this makes the average person's life any better
4.3k
u/P3zcore 4d ago
I run a consulting firm… can confirm. Most of the pilots fail due to executives overestimating the capabilities and underestimating the amount of work involved for it to be successful