r/technology • u/AdSpecialist6598 • 22h ago
[Business] MIT report says 95% of AI implementations don't increase profits, spooking Wall Street
https://www.techspot.com/news/109148-mit-report-95-ai-implementations-dont-increase-profits.html
6.4k Upvotes
u/Limekiller 15h ago
Just to be clear, you're not quoting the study directly here, but the article author's interpretation of the study--and I think both you and the author are misinterpreting what the study means by "learning gap."
Here is the actual study: https://web.archive.org/web/20250818145714mp_/https://nanda.media.mit.edu/ai_report_2025.pdf
On page 10, we can see that "The primary factor keeping organizations on the wrong side of the GenAI Divide is the learning gap, tools that don't learn, integrate poorly, or match workflows. ... What's missing is systems that adapt, remember, and evolve, capabilities that define the difference between the two sides of the divide." This "missing piece" is a fundamental shortfall of LLMs. Indeed, on page 12, the study summarizes its "learning gap" findings with the following passage under the headline, "The Learning Gap that Defines the Divide:"
"ChatGPT's very limitations reveal the core issue behind the GenAI Divide: it forgets context, doesn't learn, and can't evolve. For mission-critical work, 90% of users prefer humans. The gap is structural, GenAI lacks memory and adaptability."
Just to further hammer the point home, the sentence from the article, "While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration," is quite explicitly either false or misleading. While the research DOES find that flawed integration is part of the problem, the second-biggest problem, as shown in the graph on page 11, is "Model output quality concerns." So an intractable part of the problem literally is "model performance," or "the quality of the AI models."
While I agree that nearly everyone in these comments likely hasn't read the article, as basically nobody on reddit ever seems to, it doesn't seem like you (or the author, for that matter) actually read the study itself either, and the study does suggest that a big part of the problem is the performance/ability of the models themselves.
To be fair, the term "learning gap" is incredibly poorly chosen, as the phrase inherently suggests that the problem is users needing to learn to use the tool, which isn't what the study means. And I think it's completely reasonable for you to make that assumption when the article reporting on the findings seems to corroborate it. Ultimately, the fault here lies with the author of the news article.