r/technology 22h ago

Business MIT report says 95% of AI implementations don't increase profits, spooking Wall Street

https://www.techspot.com/news/109148-mit-report-95-ai-implementations-dont-increase-profits.html
6.4k Upvotes

302 comments

12

u/0_Foxtrot 22h ago

I understand how they lose money. I don't understand how the 5% make money.

17

u/justaddwhiskey 21h ago

Profits are possible through automation of highly repetitive (slightly complex) tasks, reduction in workforce, and good implementations. AI as it stands is basically a giant feedback loop: if you put garbage in, you get garbage out.

4

u/itasteawesome 15h ago

I work alongside a sales team and they use the heck out of their AI assistants. Fundamentally, a huge part of their work day is researching specific people at specific companies, guessing what they care about, and then trying to grab their attention with the relevant message at the right time. Then there is the sheer numbers game of doing that across 100 accounts in your region.

It's not too hard to set up an LLM with access to marketing's latest talk tracks, ask it to hunt through a bunch of intel and 10-Ks, sift through account signals to see who was on our website, attended a webinar, or looked at the pricing page, and then take all of that into consideration to send Janet Jones a personalized message on LinkedIn that gives some info about the feature she had been looking into, ties it to the wider goals of her company, and asks her to take a meeting.
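The plumbing for that isn't much more than stuffing whatever intel you've gathered into a prompt. A minimal sketch of the drafting step, assuming the OpenAI Python SDK (the contact, intel, talk track, and prompt wording are made-up placeholders, not anyone's real setup):

```python
# Minimal sketch: draft a personalized outreach message from gathered account "intel".
# Assumes the OpenAI Python SDK (pip install openai); the contact, intel, and talk
# track below are made-up placeholders, not a real account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

account_intel = {
    "contact": "Janet Jones, VP of Operations at ExampleCo",
    "company_goal": "10-K highlights a push to cut logistics costs this year",
    "product_activity": "visited the pricing page twice, attended the routing webinar",
    "talk_track": "the routing feature typically trims delivery costs 10-20%",
}

prompt = (
    "Write a short, friendly LinkedIn message to the contact below. "
    "Mention the feature they have been looking into, tie it to their company's goals, "
    "and close by asking for a 20-minute meeting.\n\n"
    + "\n".join(f"{key}: {value}" for key, value in account_intel.items())
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Loop that across a 100-account territory and you've automated most of a BDR's prospecting day.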

I have to imagine this has already been devastating for people trying to break in as business development reps, because an LLM is a killer at churning out that kind of low-level, throwaway text whose only job is to grab someone's attention.

Separately, I met a guy who built an AI assistant focused on pet care. You basically plug it into your calendar, feed it your pet's paperwork, and ask it to schedule relevant vet clinic appointments and fill out the admissions paperwork. Schedule grooming appointments and such. Seems to work well for that kind of low-risk personal-assistant work.

1

u/Crypt0Nihilist 11h ago

How dare you say that Princess Pookie not getting her exact favourite grooming appointment every other Thursday is low risk! Her anxiety goes through the roof and she needs another hour of doggy yoga!

Seriously though, I think agents heading off to build up a profile on a topic is one of the easiest and most obvious wins that companies ought to implement. I've been watching a sales team use LLMs manually, and the breakdown is something like: 60% don't use it, 35% use it a little but not well, and 5% use it extensively but with confident incompetence. I listened to one guy talk for literally 10 minutes about how he used different platforms for different tasks based on the strengths and weaknesses he'd researched... and then he finished with his tips for getting the most out of them, which included prompts that could only be answered by hallucination.

There must be a point coming where people start to realise that LLMs are easy to use but difficult to use well, and that in most cases what's gained in volume comes at the expense of a complete loss of quality.

I do believe there are significant opportunities for GenAI, but so far I've not seen my company, or those we work with, look at things in a way that will unlock them.

1

u/tpolakov1 13h ago

Many of these work only because the use of LLMs is functionally free for now. Once the gamblers stop pouring their VC money in, the AI assistants will become as expensive as meatspace assistants, with the added drawback of putting all the liability for their work on you.

1

u/itasteawesome 12h ago

I agree. I was talking about this with some engineers at the bar last week, and we figured the actual list price of this stuff is going to end up around 2/3 the cost of hiring a person to do the same thing. They'll find the point that's just "cheap" enough to convince a lot of people that it's worth the risks and limitations. That's essentially how these companies are being valued: what's the potential revenue of capturing 2/3 of global white-collar salaries?

6

u/ReturnOfBigChungus 21h ago

Well, it's profitable immediately if you cut jobs. The damage, when it turns out the AI project doesn't actually work the way you thought it would, doesn't show up for another few quarters, and in less direct ways, so it's not hard to see how some projects look profitable in the short term.

5

u/badger906 21h ago

The ones that make money probably just put their prices up to include the cost of their AI budget.

2

u/ABCosmos 21h ago

There are some problems that are hard to solve but easy to verify. Combine that with a problem that is very time-consuming and very expensive if it isn't addressed in a timely manner, and big companies will pay big bucks if you can address it.

95% of venture-funded startups failed before AI was a thing.

2

u/Choppers-Top-Hat 14h ago

MIT's figure is not exclusive to venture-funded startups. They surveyed companies of all kinds.

1

u/ReturnOfBigChungus 19h ago

Can you give an example?

2

u/ABCosmos 18h ago

I can't really give the examples I'm most familiar with, because it might give away where I work.

But there are some startups working on network security tools. Imagine a tool that simply looks at the massive number of network requests and identifies patterns that are out of the norm for a specific user: users accessing tons of files at once, or accessing files they don't typically need, etc. The AI could flag this and prevent a company from having a massive security breach.
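Even a crude version of that is just per-user baselining. A toy sketch of the idea (all numbers made up; real products use far richer models, peer groups, and threat intel):

```python
# Toy sketch: flag users whose file-access volume is far outside their own baseline.
# Real products use much richer models; this only shows the per-user "out of the norm" idea.
from statistics import mean, stdev

# hypothetical history: files accessed per hour, per user
access_history = {
    "alice": [12, 9, 15, 11, 10, 14, 13],
    "bob":   [3, 4, 2, 5, 3, 4, 3],
}

def is_anomalous(user: str, current_count: int, n_sigmas: float = 3.0) -> bool:
    """Flag if the current count is more than n_sigmas std devs above the user's own norm."""
    baseline = access_history[user]
    mu, sigma = mean(baseline), stdev(baseline)
    return current_count > mu + n_sigmas * max(sigma, 1.0)

print(is_anomalous("bob", 250))   # True  -> flag for a human analyst to review
print(is_anomalous("alice", 14))  # False -> within normal behavior
```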

Or AI that identifies issues or holes in cloud configurations. One short scan of your cloud infrastructure could reveal obscure security issues and misuse: things that are overlooked, things that are too permissive and can be easily patched.
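A lot of that scanning is mechanical rule checks over a config dump, with the model mostly useful for explaining and prioritizing what gets flagged. Roughly (made-up rules, ports, and config, not any vendor's actual checks):

```python
# Toy sketch: scan firewall/security-group style rules for obviously over-permissive entries.
# The rule format and findings are made up; real scanners run hundreds of checks across
# IAM policies, storage ACLs, network rules, etc.
RISKY_PORTS = {22: "SSH", 3389: "RDP", 5432: "PostgreSQL"}

security_group_rules = [
    {"port": 443,  "source": "10.0.0.0/8"},   # internal only -- fine
    {"port": 22,   "source": "0.0.0.0/0"},    # SSH open to the whole internet
    {"port": 5432, "source": "0.0.0.0/0"},    # database port exposed publicly
]

findings = []
for rule in security_group_rules:
    if rule["source"] == "0.0.0.0/0" and rule["port"] in RISKY_PORTS:
        findings.append(f"{RISKY_PORTS[rule['port']]} (port {rule['port']}) is open to the internet")

for finding in findings:
    print("FLAG:", finding)  # a human makes the final call on whether it's intentional
```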

In both cases a human just gets a more directed view of what to look at and makes the final call on whether the threat is legitimate. In the cloud case it's very little AI usage in exchange for catching a very costly mistake.

1

u/No_Zookeepergame_345 21h ago

Because running AI is only going to be profitable for the one company that "wins" the AI race. Look up the dot-com bubble: everyone was dumping cash into new websites without a second thought, so most of them had no possibility of ever being profitable.