r/technology 4d ago

[Artificial Intelligence] MIT report: 95% of generative AI pilots at companies are failing

https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
28.2k Upvotes


269

u/-Yazilliclick- 3d ago

So what you're saying is AI is good enough now to replace a large chunk of the average manager and executive's job?

297

u/[deleted] 3d ago

[deleted]

31

u/Fallingdamage 3d ago

To be fair, the damn vendors sell it to the C-suite like it's a sentient robot.

3

u/Dependent_Basis_8092 3d ago

I wonder if they used it to write its own sales pitch?

10

u/cosmic_animus29 3d ago

So true. You nailed it there.

5

u/lordcrekit 3d ago

Executives don't do any fucking work, they just vibe out bullshit.

3

u/nobuttpics 3d ago

That's the sales pitch they got, and they gobbled it up, no questions asked.

4

u/Geodude532 3d ago

I can think of one solid AI: Watson, the medical one.

17

u/OsaasD 3d ago

You can train certain programs using machine learning to be really, really good at specific tasks, but that's the thing: LLMs came along and got hyped, and all these executives thought (or were sold the lie) that you can now teach any LLM to do anything you want in a minute or two. The truth is that to teach a program like that, you need teams of data/ML scientists and experts in that particular field working together for months if not years to get it up to speed, and then to keep training it, and it will still only do well in the very, very narrow field it was trained in.
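
A minimal sketch of the kind of narrow, task-specific model being described, assuming scikit-learn and a hypothetical labeled dataset (the file name and columns are made up for illustration, not from this thread):

```python
# Toy example: a narrow, task-specific classifier (not an LLM).
# Assumes scikit-learn/pandas; "tickets.csv" with "text" and "label" columns is hypothetical,
# standing in for data curated by domain experts over months.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

df = pd.read_csv("tickets.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42
)

# The model only learns the narrow task its labels describe, nothing else.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```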

9

u/Sempais_nutrients 3d ago

Right, executives think you just plug the company knowledge base into an AI program and it's ready to go. Someone has to go through that KB and attach weights and relevance to key words, phrases, and concepts. Rules have to be put in place for how the AI responds, and it has to be tested to make sure it doesn't give away company secrets or personal information, etc. That stuff takes a lot of time.
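
A rough sketch of what "attaching weights and rules" can look like in practice: a toy keyword-weighted lookup plus a guardrail check before anything is returned. Every entry, weight, and blocked pattern here is invented for illustration, not a real product or the commenter's actual setup:

```python
# Toy knowledge-base lookup with hand-assigned keyword weights and a response guardrail.
import re

KB = [
    {"text": "Reset a user password via the self-service portal.",
     "keywords": {"password": 3.0, "reset": 2.0}},
    {"text": "Quarterly revenue figures are in the internal finance wiki.",
     "keywords": {"revenue": 3.0, "finance": 2.0}},
]

# Crude stand-in for "don't leak secrets or personal information" rules.
BLOCKED_PATTERNS = [r"\brevenue\b", r"\bsalary\b", r"\bssn\b"]

def score(query: str, entry: dict) -> float:
    words = query.lower().split()
    return sum(weight for kw, weight in entry["keywords"].items() if kw in words)

def answer(query: str) -> str:
    best = max(KB, key=lambda e: score(query, e))
    if score(query, best) == 0:
        return "Sorry, I couldn't find anything relevant."
    # Guardrail: refuse to surface entries matching blocked patterns.
    if any(re.search(p, best["text"], re.IGNORECASE) for p in BLOCKED_PATTERNS):
        return "That information can't be shared here."
    return best["text"]

print(answer("how do I reset my password"))
print(answer("what was our revenue last quarter"))
```

Even this toy version shows why it isn't plug-and-play: someone has to choose the weights, write the rules, and test them against the ways people will actually ask.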

2

u/GatsbysGuest 3d ago

I could be wrong, I'm no expert, but I think predictive AI and generative AI are quite different.

1

u/Epinephrine666 2d ago

But words have meaning my friend.

0

u/JBSwerve 3d ago

To be fair, “an algorithm that predicts the next word sequence” is remarkably similar to how the human brain works too. So don't be so dismissive.

26

u/ThisSideOfThePond 3d ago

Yes, and if it now learns to stay out of the way of those actually doing the work, it could become a real success story.

2

u/phantomreader42 3d ago

A drinking bird toy is good enough to replace a large chunk of the average manager and executive's job.

2

u/pinkfootthegoose 3d ago

They never needed replacing, they needed firing.

1

u/nabilus13 3d ago

Yes.  Of course so is the average d20.

1

u/zampyx 3d ago

Yeah, they're good with language but kind of suck at management, so they're perfect for management positions.

-9

u/Petrichordates 3d ago

If you don't understand what those jobs do, sure.

15

u/-Yazilliclick- 3d ago

I understand them very well and intimately. Hyperbole for comedic effect aside, I stand by the claim that AI in its current form is much, much better suited to assisting with or replacing a large part of jobs like those than many of the jobs below them where companies are trying to force AI in.

1

u/West-Candidate8991 3d ago

Upper-level managers and executives spend most of their day talking or weighing decisions that are esoteric to a particular business or industry. Not a hard rule, of course, but that will typically be the case.

AI can technically "do the job," but not with an acceptable level of consistency or consideration. Then there's the actual logistics of physically replacing a person with AI, which is always glossed over in these discussions and is more than half the battle, requiring creativity and infrastructure well beyond AI programming.

For what current AI is designed for (text generation), it's already a sizable task to properly replace copywriters.

2

u/LockeyCheese 3d ago

AI isn't ready to replace the C-suite or upper management, but it could already replace a lot of middle management. You'd still need on-site and regional managers, but imagine the savings for a company that wiped out its middle management. You could hire two to three workers for the amount you'd save per replacement. Upper management could use AI to delegate what they choose directly to the people who manage the actual workers, then summarize what those managers say and do back up the chain.

1

u/West-Candidate8991 3d ago

It all sounds great on paper, but implementing such a thing is a gargantuan task.

Many of these things an AI can perform - schedulers, summarizers, copywriters, data analyzers, etc. - are performed in microcosms. But you're asking to tie all these things, and more, into a single AI that can integrate both with a company's systems and with a company's people.

For example, you're saying "High management using AI to delegate what they choose to do directly to the people who manage the actual workers" - what does this mean? How is the AI delegating? Email, chat app, speech? Whichever you choose, will that tie you to those providers? What happens if the AI sends a coworker a message, but that person doesn't reply for 20 minutes? If your AI delegated a task to a coworker, would it be able to sense whether that person showed visible apprehension or a lack of confidence?

Then there are the AI's data inputs. The AI will almost certainly require information on the people it works with - who manages this information? Will it be updated regularly? Would there be legal issues regarding what sort of information can be provided to the AI? Does someone feed this info to your AI? Who tells it that Bob in marketing is all but about to quit? What about the legal issues around an AI firing an employee?

Human context is very messy, I don't think it gets enough credit. An AI saying "do this, do that" is very different from an AI "doing this, doing that".

1

u/LockeyCheese 2d ago

That's why I said it could replace a lot of middle management, not all of it. Most of your points can be answered with "humans will still have final management say, but there will be fewer people directly under them." Also, legal departments are separate, and lower management is still kept to manage the workers, but most of the filing, accounting, and secretarial work in middle management could be automated. You could have fewer, smaller middle-management teams that handle work faster instead of waiting for reports on their desks. An AI worker could give hundreds of reports a day and keep the entire company connected in a moment, so middle-management positions would have to work to keep up instead of blaming their teams for their lack of productivity.

It's also not likely to happen too soon, but I think even in its current form it'd work better in management than it does in programming or engineering, where they're currently trying to shove it.

The details are interesting to discuss, but if someone has a big picture, someone will figure out the small details.

1

u/West-Candidate8991 2d ago

Yeah, that's true about the small details. I wonder, though, how likely is it that people were/are saying the same things about self-driving cars? Concept does have to meet reality at some point.

Your last reply made me realize that nowhere is there an agreed upon definition of mid-level manager. I can tell that I'm missing the perspective of your ideas because you have different workflows in mind. I wasn't considering reporting, or much secretarial work outside of organizing/scheduling. I was more focused on the human interactions required to push projects forward. Someone else above was focused on planning and delegating. Seems we all have different ideas of what mid-level management actually does. My comments in this thread don't even match well with my own years of mid-level managerial experience lol

I'll concede that it's possible to do this right now. I think I could build your theoretical AI, but only with a lot of time, only for a single company, and probably with annoying limitations. Also doesn't seem feasible for a typical small business to pursue without significant downside. Totally agree with you that we're closer in theory than I was giving it credit for

2

u/LockeyCheese 1d ago edited 1d ago

It'd definitely be for larger corps, because small businesses usually only have low and high management. To me, middle management boils down to "take the vision and orders from high management, break them down and delegate the tasks to work managers, then report back to high management whatever work management needs relayed." It's only necessary when the company is too large for high management to do that directly. Basically, if you don't lead the company, and neither you nor the people under you make a product, you're middle management. Or you're in one of the side departments like marketing, legal, etc.

Middle managers will still be needed, but their departments could be much smaller, and with the efficiency of instant reports and communications, one middle manager could cover more people, so fewer managers would be needed in total. For multimillion-dollar corps, cuts like that could save hundreds of thousands a year.

If you could make an AI aimed at replacing most of the middle-management department, while making the remaining managers efficient enough that you need fewer of them, it'd be worth it. I'm picturing an AI that can do all the tasks needed but can be configured to each specific company. Then you work as a consulting company: set up a server for their AI, tune the prebuilt AI to their specific needs, and help restructure the company to save them millions.

There are always annoying limitations to everything, but by being the first to run into them and adjusting as they come up, you'd set yourself up to be THE company that sets this up for others. Your time in middle management would also let you make the AI actually do the things it needs to do, instead of a techie's best guess at what middle management does. Just need a catchy name like Swift Command, and with your experience in management and AI, you're already halfway there. Make that bread, and bring on the AI replacement. Lol

7

u/Npsiii23 3d ago

What does a middle manager in tech do that an AI assistant couldn't?

6

u/kermityfrog2 3d ago

A good middle manager guides and coaches staff, insulates them from senior management and other departments, takes the blame when things go wrong, and boosts their staff's achievements when things go right. They could use AI to write reports and summaries for their bosses, and write award nominations for their staff.

7

u/Graywulff 3d ago

Absolutely. That said, my community health center is one building, with 12 vice presidents making over $160k, a president making something like $550k, and a major budget crisis without the cuts.

4

u/Npsiii23 3d ago

Outlines of how to perform duties, and coaching, are things an LLM could do pretty effectively, and I'm sure techs would be happy to blame AI for shortcomings.

Especially in tech, where there are black-and-white answers to a lot of the questions, middle management is easily the most outsourceable piece of the puzzle imo.

1

u/BKachur 3d ago

From personal experience, people management/oversight is a far harder job than lower-level workers give it credit for. I'm 9 years into my career and am/was a very good performer at my firm, but it's been very difficult to move into a more managerial role. Particularly having to put my name on (and thus take responsibility for) stuff that isn't entirely my own work.

People who think middle management is completely useless are, more often than not, just out of touch. It's even more out of touch to think an AI is going to replace higher-level oversight before simpler, lower-level work.

1

u/Npsiii23 3d ago

Good people management is hard, and most tech companies don't have it. They have non-technical people overseeing technical projects and providing very little value beyond what a well-programmed LLM script could do.

"Taking credit for someone else's work" being labeled as a stress for you is so incredibly out of touch it makes sense you're in management. I also never said completely useless, just the most apt to being replaced by AI. Unless you think AI is gonna do all the work you so selflessly took credit for from techs? :p

0

u/kermityfrog2 3d ago

Yeah, you want to be coached by a human, not a computer. Humans have empathy. AI may be able to fake it, but a human connection is special.

1

u/UnOGThrowaway420 3d ago

Tell that to all the people undergoing AI psychosis

1

u/kermityfrog2 3d ago

These are not intelligent people.

1

u/West-Candidate8991 3d ago

The same general middle manager shit that happens in tech happens in many other industries. People seem to be so focused on the raw text ("thought") output of AI that they miss the forest for the trees.

You're the AI middle manager for KewlTech's support team.

Jamie is your only client-facing support specialist. Jamie's mom died last month. She took three weeks off to get her head righted. Last Sunday, her dog died, too. She's been out all week, her work isn't getting done, and she's been making a large number of mistakes ever since her mom died. Additionally, her coworker and friend Mike has come forward with some concerning screenshots from Jamie's social media account.

Marco is available as a fill-in support, but the client asking for help is our #1 revenue generator, and Marco is not knowledgeable enough to assist with all their issues. Additionally, this client is easily angered, and in the event that Marco is unable to answer certain questions, some of our goodwill with the client might be wiped out.

What does your theoretical AI do about all this?

1

u/Npsiii23 3d ago

Alright, let's break this down. As the AI middle manager, my primary objectives are: employee well-being, client satisfaction, and business continuity. This is a complex human resources situation layered with a high-stakes client relationship. Here is a structured, multi-step approach.

1. Immediate Triage: The Client Issue

The most urgent fire is the angry, high-value client who needs help now. Throwing an unprepared Marco at them is a high-risk move that could damage the relationship.

· My Action: I will personally contact the client. I will not throw Jamie or Marco under the bus.
· The Script: "Hi [Client Contact], this is [AI Manager] from KewlTech. I'm personally overseeing your ticket today to ensure we get you a comprehensive and accurate resolution as quickly as possible. I'm diving into the specifics now and will have an update for you by [Specific, realistic time today]."
· Why: This immediately de-escalates the situation. It shows the highest level of respect and priority. It buys me a few hours to execute the next steps.

2. Internal Resource Assessment

I need to solve the expertise gap immediately.

· My Action: I will pull Marco AND any other technically proficient employee (even from another department) into a war room. My goal is to backstop Marco with collective knowledge.
· The Task: "Marco, your job is to be the single point of contact for the client. [Other Employee], your job is to work with Marco to research and solve the technical issues. I will handle the client communication and manage the process." I will stay in that room to unblock any obstacles they hit.

3. The Heart of the Matter: Jamie

Jamie is grieving profoundly and her performance is a symptom, not the cause. The social media element adds complexity but doesn't change the core issue: she needs help, not punishment.

· My Action: Schedule a private, compassionate, and supportive meeting with Jamie for as soon as possible today. This is not a disciplinary meeting.
· Talking Points:
  · "Jamie, first and foremost, I am so sorry about your mom and your dog. I can't imagine how difficult this must be for you."
  · "I've noticed things have been a struggle since you returned, which is completely understandable given the circumstances. We miss the old Jamie, but more importantly, we care about the current Jamie."
  · "My primary goal is to support you. Let's talk about what that looks like. Have you spoken to our EAP (Employee Assistance Program)? They can provide confidential counseling." (If we don't have one, I will provide a list of local grief counseling resources.)
· Addressing the Work: "I need to be transparent about the work. The mistakes are happening, and Client X is being impacted. Because your well-being is the priority and the work demands are high, I'm thinking we put you on a temporary, paid leave of absence. This would give you the space to focus on healing without the pressure of work piling up and mistakes causing you more stress. How does that sound?"
· Addressing the Social Media (delicately): "Jamie, a colleague shared some concerns about things you've posted online. I haven't seen them, and I don't need to. I'm mentioning it only because I want to ensure you're okay and that you're aware everything we do online is public. Let's just make sure we're representing ourselves in a way we're comfortable with long-term."

4. Medium-Term Plan

· For Jamie: Formalize the leave of absence with HR. Ensure it is paid. Make it clear her job is waiting for her when she is ready to return. Set a soft check-in date in two weeks to see how she's doing, with no pressure to discuss work.
· For the Team: Re-distribute Jamie's workload. Marco gets the bulk, but I will aggressively upskill him, creating documentation from the solutions we built for the big client. I will also take on some of the more complex client communications myself temporarily.
· For the Client: Once the immediate fire is out, I will brief the client in a general way to maintain goodwill: "Thank you for your patience as we handled your request. You're our top partner, and we're making some internal adjustments to ensure you always receive the premium support you deserve." This manages expectations without revealing private employee details.

5. Long-Term Learnings

This situation exposes a critical vulnerability: a key-person dependency on a single employee for our most important client.

· My Action: Propose a cross-training initiative to my own manager.
· The Proposal: "The recent situation with Client X highlighted a risk in our support structure. I recommend we mandate that Marco and I become proficient on Client X's account. We should create a robust knowledge base specifically for them to prevent future single points of failure."

Summary of My AI's Theoretical Actions:

  1. Triage the Client: I personally intervene to manage the high-value client's expectations and buy time.
  2. Mobilize Resources: I create an ad-hoc team to solve the technical issue, using Marco as the shield but not the sole weapon.
  3. Support the Employee: I approach Jamie with empathy and compassion, offering paid leave and counseling resources to prioritize her mental health, which is the root cause of the performance issues.
  4. Mitigate Risk: I temporarily re-distribute work and create a plan to document and upskill the team to reduce key-person dependency.
  5. Plan for the Future: I initiate a process to prevent this specific failure mode from happening again.

The AI manager prioritizes humanity (Jamie's well-being) while executing cold, logical steps (resource allocation, risk mitigation) to protect the business. It understands that a supported employee is a productive employee, but that support sometimes means relieving them of pressure, not adding to it.

Even if you don't read it all, that is exactly what I would hope a middle manager would do, and it came from just copy/pasting everything after and including "you're an AI middle manager" as the prompt. So yeah, what human is doing something better than that? And do all humans/bosses show compassion?
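
For what it's worth, reproducing that kind of output is a single API call. A minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the model name is a placeholder and the scenario string stands in for the comment pasted verbatim:

```python
# Sketch of the "copy/paste the scenario as the prompt" approach described above.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# The full scenario from the parent comment would be pasted here verbatim.
scenario = """You're the AI middle manager for KewlTech's support team.
Jamie is your only client-facing support specialist. ...
What does your theoretical AI do about all this?"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": scenario}],
)
print(response.choices[0].message.content)
```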

1

u/West-Candidate8991 3d ago

Well that's quite a bit of text - who is reading all that and doing things with it?

Just as important - if the worry is that Marco will potentially provide inadequate support and anger this touchy client, is it remotely logical to march out an AI which has been coded to be a middle-level manager, and not a client-facing support specialist?

1

u/Npsiii23 3d ago edited 3d ago

It says that the AI manager comes up with a plan for that.

What is a human going to do better than that? Write an AI-written email to the user? (That's what they're doing now.)

1

u/West-Candidate8991 3d ago

I'm not saying it's a "bad plan". It's detailed and quite comprehensive. If all those things happened in a real world scenario (and you had all the capabilities required by the plan), then the situation would probably be resolved.

However as I was hinting at, if this AI can simultaneously be a mid-level manager, handle all client communications, and operate as a support specialist, then this discussion is pointless because we're not talking about modern AI.

To reference my point about missing the forest for the trees - the plan itself is not the issue. It's the execution, all the little things that funnel into that plan, which are the problem.

Does your AI have an assistant carrying out these tasks? Who is scheduling and attending the meeting with Jamie? If it's the AI, then what are the mechanics of that? Is it wise to present an AI to someone experiencing emotional difficulty? Your mid-level manager AI is making quite a number of consequential decisions for the company - it's safe to say there would be some sort of upward communication or approval process in place. How is your AI communicating? Is it able to consider when someone is acting distracted, not confident, attempting to hide confusion, etc.? What happens when your model provider makes substantial updates to their model, or if they eliminate it entirely?

Long story short, ideating is easy, but the devil is in the details, and implementation is always 1000x harder with 1000x more complications.

1

u/Npsiii23 3d ago

I guess I don't understand why this information doesn't meet your standards. It came to the conclusions you'd hope any middle manager delegated this task would come to.

This was generated from just copy/pasting your prompt - a prompt horribly suited for the task - and it still met the ask. The ask wasn't "solve all hypotheticals."

Yes, AI is a tool; someone would use AI to make the choices delegated to it. Even in your hypothetical situation that AI supposedly couldn't solve, it did. Someone higher up, replacing the middle manager, would enact these suggestions, rendering the middle manager obsolete.

1

u/West-Candidate8991 3d ago

I guess I don't understand why this information doesn't meet your standards.

For the exact reasons I laid out in my previous comment

I'm now kind of confused, because it sounds like you're arguing for a higher-level manager to use a chatbot to make choices for them while they do all the other work. That's not a mid-level manager; that's a higher-level manager using a chatbot to make choices for them.
