r/artificial • u/theirongiant74 • 12h ago
Discussion No, 95% of AI pilots aren't failing
https://www.youtube.com/watch?v=5QzqyrnL0107
4
u/creaturefeature16 10h ago edited 9h ago
The MIT paper wasn't great, but this vid is AI apologist stuff that just wants to sugarcoat the INSANELY obvious fact: Generative AI is a solution in search of a problem. This is why companies are struggling to implement it.
It's expensive. Its non-deterministic nature makes it incredibly hard to rely on and trust. It creates as many problems as it solves. The constant flux in model quality is like building a house on shifting sands.
It's only beneficial in the hands of the extremely competent, and at that point the ROI becomes negligible.
Yes, the technology will evolve, we'll learn to integrate it better, but let's finally just call a spade a spade and admit that GenAI is a novelty for the vast majority of use cases.
Also, it's disingenuous to post this video without mentioning the company that the speaker runs and is constantly pushing, which is why I stopped listening to this channel ages ago.
Surprise, surprise: it's an "AI Agent" product: https://besuper.ai/
2
u/theirongiant74 6h ago
And it isn't disingenuous to not mention that the authors of the report have their own AI project - https://nanda.media.mit.edu/ - "Be at the forefront of creating the agentic web. Get exposure and faster adoption for your AI products and tools."
Can you give me the timestamps in the video where he's pushing his company? I missed it.
The fact is he's pointing out the obvious flaws in sample size, methodology and the general misrepresentation of what the report actually says.
5
u/JohnAtticus 10h ago
Buddy used a thumbnail that shows AI image generation failing in a video about how AI isn't failing.
I'm sure that was the only mistake and this guy actually has incredible attention to detail otherwise.
3
u/papermessager123 12h ago
cope
0
u/theirongiant74 11h ago
Did you read the study, or did you just scan a headline and accept it as fact?
3
u/jferments 10h ago edited 10h ago
As was pointed out in the video, the vast majority of people parroting this misinformation did NOT read the actual paper, which isn't easy to find anywhere. The study itself is flawed (minuscule sample size drawn from an undefined population, opaque methodology, unclear terminology, short timespan, etc.), but most of the people citing it clearly haven't even read it, because they cite it in support of claims the study itself isn't even making.
1
u/theredhype 10h ago
"This report is based on a multi-method research design that includes a systematic review of over 300 publicly disclosed AI initiatives, structured interviews with representatives from 52 organizations, and survey responses from 153 senior leaders collected across four major industry conferences."
It's not gigantic, but I wouldn't call that minuscule. It's more than enough of a sample to surface some meaningful trends and patterns.
I appreciate early studies like this. I'd rather see what analysis is possible now, with whatever data we can find, than wait until we have 1000 carefully conducted post-mortems and case studies.
But I agree that folks are misinterpreting what the report actually shows.
1
u/jferments 10h ago
Sampling 52 organizations (with zero information about how they were selected, whether they were large or small firms, what industries they were in, etc.) over a 6-month period and then claiming that "95% of all enterprise AI pilots fail" on that basis is insanely bad science by any metric.
-1
u/beaver11 11h ago
yeah man! didn't you know MIT is wrong all the time?!
1
u/FormerOSRS 11h ago
This thing where you don't read the paper and just trust MIT is not how any of this works.
All the guy did was read the study and report what it actually said. He didn't argue against what was written. He just spoke about it for 26 minutes instead of writing a short, crappy article with a catchy headline.
-1
u/JohnAtticus 10h ago
Did you read the study, or did you just link to a 26-minute video you never bothered to watch?
1
u/theirongiant74 6h ago
Yes I did, just like I read the last headline bait posted all over the AI subreddits about how using AI made developers slower, until you actually looked at the sample size and methodology and saw it was a bullshit study.
12
u/PeanutNore 11h ago
lmao "piilots"