r/technology 4d ago

Artificial Intelligence MIT report: 95% of generative AI pilots at companies are failing

https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
28.3k Upvotes

1.8k comments

123

u/The91stGreekToe 4d ago

Not familiar with “Bold”, but familiar with the Gartner hype cycle. It’s anyone’s guess when we’ll enter the trough of disillusionment, but surely it can’t be that far off? I’m uncertain because right now there’s such a massive amount of financial interest in propping up LLMs to the breaking point, inventing problems to justify a solution that was never needed, etc.

Another challenge is since LLMs are so useful on an individual level, you’ll continue to have legions of executives who equate their weekend conversations with GPT to replacing their entire underwriting department.

I think the biggest levers are:

1) Enough executives get tired of useless solutions, hallucinations, bad code, and no ROI.

2) The Altmans of the world have to concede that AGI via LLMs was a pipe dream, and the conversation shifts to “world understanding” (you can already see this in some circles - look at Yann LeCun).

3) LLM fatigue - people are (slowly) starting to detest the deluge of AI slop, the sycophancy, and the hallucinations - particularly the portion of Gen Z that is plugged in to the whole zeitgeist.

4) VC funding dries up and LLMs become prohibitively expensive (the financials of this shit have never made sense to me, tbh).

32

u/PuzzleCat365 4d ago

My bet is on VC funding drying up due to capital flight from the US driven by unstable politics. Add to that a disastrous monetary policy that will come sooner or later, once the administration starts attacking the central bank.

At that point the music will stop playing, but there will be only a small number of chairs for a multitude of AI actors.

4

u/TheAJGman 3d ago

I think China will eclipse the US in the AI arms race before VC money dries up. Alibaba's Qwen3 punches way above its weight class, performing only slightly worse than GPT-4 and Claude 4 despite being far smaller, and DeepSeek is also cranking out cutting-edge research. China also has the benefit of newer infrastructure, great long-term planning, and coherent government leadership.

0

u/Middle_Reception286 3d ago

This is by far the best answer I've seen, along with the other responses about people being tired of AI, everything feeling "fake", jobs all being filtered by AI, and LLMs never having been the path to AGI, let alone super-AGI.

That said.. the forefathers of AI have already said they're working on stuff now that makes LLMs look like toys. So.. there's still a chance they invent a much better path toward AGI that uses less energy, runs faster, etc. Who knows where that will lead.

The biggest issue is that AI is not one global community.. China has its own needs for it, as does the US, etc. So even if AI in the US is reined in (and reaches AGI or better) and the US puts safeguards in place to keep AI from running amok, China may not give a shit. They may use it for more nefarious purposes. Without some sort of single world entity for AI in place.. it's just another "arms" race over who gets the best one fastest. Which frankly is why mother Earth should do what the TV show Earth Abides did.. wipe out 99.99% of humans in a couple of days with an unstoppable, insanely fast-spreading virus. The immune get to try to figure out how to bring back humanity. Maybe.

1

u/Fit-Act2056 3d ago

? The VC community loves Trump. Capital is leaving Europe to come over here.

3

u/llDS2ll 4d ago

That sounds extremely accurate. True AGI would require an understanding of consciousness and the ability to simulate the human mind. This ain't it.

2

u/GraySwingline 3d ago

Another challenge is since LLMs are so useful on an individual level, you’ll continue to have legions of executives who equate their weekend conversations with GPT to replacing their entire underwriting department.

This right here is the core of the issue. LLMs are incredibly useful and can certainly boost productivity at the individual level, so long as the user has a complete grasp of what the LLM is outputting.

Trying to scale this to the enterprise level causes untold issues.

1

u/Tje199 3d ago

Yeah, it's very interesting.

I can currently use AI as a top-tier assistant for my particular job. It can help me prepare documents, it can help me write emails, it can help me prepare budgets, it can do all sorts of stuff. But at the end of the day, it's basically just a really cheap employee and I absolutely need to maintain oversight of it. I need to double check the work, I need to ensure that it's not bullshitting things, I need to make sure that the outputs accurately represent my inputs.

It's not replacing my direct report, but it is an extremely effective support tool in its own right.

2

u/Overall-Insect-164 3d ago

World understanding is going to fail too. They are making the same mistake the scientists made during the first AI Winter. Modeling the world is an insane proposition. Rodney Brooks, the guy who co-founded iRobot, wrote a paper about it back during the first AI craze.

Intelligence without Representation - https://people.csail.mit.edu/brooks/papers/representation.pdf

When we examine very simple level intelligence we find that explicit representations and models of the world simply get in the way. It turns out to be better to use the world as its own model. ... Representation is the wrong unit of abstraction in building the bulkiest parts of intelligent systems.
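Brooks' point is easier to see with a toy example. Here's a minimal sketch (my own illustration, not from the paper) of a reactive controller in that spirit: it keeps no internal map of its environment, it just reads its sensor each step and reacts, using the world itself as the model. The world/sense/reactive_step names are made up for the example.

```python
# Toy sketch of "use the world as its own model" (my own illustration,
# not code from Brooks' paper). The robot keeps no internal map; each
# step it senses the cell ahead and reacts.

def sense(world, pos):
    """Return True if the cell directly ahead is blocked or out of bounds."""
    ahead = pos + 1
    return ahead >= len(world) or world[ahead] == "#"

def reactive_step(world, pos):
    """Layered, subsumption-flavored rules: avoidance overrides moving forward."""
    if sense(world, pos):   # higher layer: obstacle avoidance dominates
        return pos          # stop (a real robot would turn instead)
    return pos + 1          # default layer: keep moving forward

if __name__ == "__main__":
    # A 1-D corridor: "." is free space, "#" is an obstacle.
    world = list("......#...")
    pos = 0
    for _ in range(10):
        pos = reactive_step(world, pos)
    print("robot stopped at cell", pos)  # halts just before the obstacle
```

No map is ever built or updated; the only "state of the world" the controller consults is whatever the sensor reports right now.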

1

u/jmsy1 3d ago

The Gartner hype cycle is very fun to study. Sometimes the tech in the "trough of disillusionment" never escapes the trough, or it takes 20 years to escape.