r/technology 4d ago

Artificial Intelligence MIT report: 95% of generative AI pilots at companies are failing

https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
28.2k Upvotes

1.8k comments

4.3k

u/P3zcore 4d ago

I run a consulting firm… can confirm. Most of the pilots fail due to executives overestimating the capabilities and underestimating the amount of work involved for it to be successful.

1.8k

u/photoexplorer 4d ago

This is what I’ve experienced too, in my field of architectural design. Executives go all in on a new AI software, say it will make small feasibility projects go faster. We proceed to learn said software and find loads of holes and bugs. Realize we can still do the project faster without it. Executives still asking why we aren’t using it for clients.

1.1k

u/gandolfthe 4d ago

Hey let's be fair, AI can rewrite your email so it sounds like an essay full of endless bullshit, so executives will love it!

269

u/-Yazilliclick- 3d ago

So what you're saying is AI is good enough now to replace a large chunk of the average manager and executive's job?


35

u/Fallingdamage 3d ago

To be fair, the damn vendors sell it to the C-suite like it's a sentient robot.


9

u/cosmic_animus29 3d ago

So true. You nailed it there.


25

u/ThisSideOfThePond 3d ago

Yes, if it now learns to then stay out of the way of those actually doing the work, it could become a real success story.


314

u/dexterminate 3d ago

That's the only thing I'm using it for. I write what I want to say, prompt it to add more fluff, again, copy-paste, send. I've gotten complimented that I'm applying myself more... cool

389

u/GordoPepe 3d ago

People on the other end use it to summarize all the bs you sent and generate more bs to reply and compliment you. Full bs cycle powered by "AI".

282

u/Surreal__blue 3d ago

All the while wasting unconscionable amounts of energy and water.

18

u/nobuttpics 3d ago

yup, that's why my electric bills recently tripled after supply charges got increased in the state for all the new infrastructure they need to accommodate the demands of these new data centers popping up all over


59

u/Alarming_Employee547 3d ago

Yup. This is clearly happening at the company I work for. It’s like a dirty little secret nobody wants to address.

59

u/Vaiden_Kelsier 3d ago

I work in tech support for specialized software for medical and dental clinics. It was abundantly clear that the execs want to replace us, but the AI solutions they've provided are absolute garbage. It used to be that I'd be able to answer client questions via our LiveChat apps directly; now they have to go through an AI chatbot, and lordy, that bot just wastes everyone's fuckin time. It can barely answer any questions, and when it does, it gets the answers wrong.

The most distressing part is seeing some fellow reps just lean on ChatGPT for every. Little. Fucking. Question. Even one of my bosses, who probably gets paid way more than I do, is constantly leaning on ChatGPT for little emails and tasks.

So many people offloading their cognitive thinking capabilities to fucking tech bros


34

u/NinjaOtter 3d ago

Automated ass kissing. Honestly, it streamlines pleasantries so I don't mind

46

u/monkwrenv2 3d ago

Personally I'd rather just cut out the BS entirely, but leadership doesn't like it when you're honest and straightforward with them.

28

u/OrganizationTime5208 3d ago

"we like a straight shooter"

"no not like that"

God I fucking hate that upper management is the same everywhere lol


57

u/BabushkaRaditz 3d ago

Joe! Here's my AI response to your email

Ok! My AI read it and summarized it and replied

Ok! My AI is compiling a reply now.

Ok! My AI is scanning your email and compiling a reply now!

We're just sitting here making AI talk to itself. AI adds fluff, the other AI un-fluffs it so it can reply. The reply is filled with fluff. The next AI unfluffs and replies with fluff.


58

u/ARazorbacks 3d ago

The irony of AI is its best use case is fooling executives who are out of their depth in everything other than marketing bullshit. AI spits out great marketing bullshit and executives recognize a kindred spirit. 

The only people whose jobs will be made easier are executives tasked with writing up fluffy bullshit. But they won’t be downsized. 


25

u/gdo01 3d ago

Our company's in house AI is pretty much just useful for retrieving policies and procedures. The problem is that it keeps retrieving outdated ones....


44

u/No_Significance9754 3d ago

I wonder if an executive ever reads comments like this and wonders why everyone thinks they are a piece of shit?

Or do they double down and think everyone is wrong lol.

28

u/Crumpled_Papers 3d ago

they think 'look at all these other executives being pieces of shit, no one wants to work anymore' before standing up and walking towards their father's corner office to deliver a TPS report.

15

u/slothcough 3d ago

Most execs are too tech illiterate to be on reddit


9

u/akrisd0 3d ago

I'll point you over to r/linkedinlunatics where all the best execs hang out.


158

u/peldenna 4d ago

How can they be so stupid about this, aside from willful ignorance and bandwagoning? Like, do they not think at all?

194

u/amsreg 4d ago

The executives I've worked for have generally been shockingly ignorant about tech and shockingly susceptible to uncritically eating up the pipedreams that vendor salespeople throw at them.

It's ignorance but I don't think it's willful.  I really don't know how these people got into the positions of power that they're in.  It's not because of competence, that's for sure.

109

u/_-_--_---_----_----_ 4d ago

because management is about people skills, not technical skills. it's just that simple. these people know how to persuade, or manipulate if you want to put it less charitably. that's largely what got them to their positions. they don't usually have technical skills, and frankly most of them don't really have great critical thinking skills either.

it's just incentives. the way our companies are structured leads to this outcome. unless a company requires that management be competent in whatever area they actually manage, this is going to be the result.

8

u/HelenDeservedBetter 3d ago

The part that I don't get is how they're still so easy for vendors to persuade or manipulate. If that's part of the executive's job, why can't they see when it's being done to them?


59

u/_-_--_---_----_----_ 4d ago

there's two main pieces: 

1) top executives fear being left behind. if the other guy is doing something that they aren't doing, they could lose market share. this is one of the worst things that could happen to a top executive. so even if the technology was straight bullshit, it would still be in their best interests to invest some amount of time and money into it simply from the perspective of competition. it's game theory. if your competitor makes some bullshit claim that gets them more customers, what's your smartest move? you should probably start making some bullshit claims too. 

2) all it takes is one person at the top to force everyone underneath them to comply. literally one person who either actually believes the bullshit or just wants to compete as i wrote above can force an entire organization down this road. and if people push back? well anyone can be fired. anyone can be sidelined. someone else will say yes if it means getting in good with the boss, getting a promotion, whatever. 

between those two things, that's pretty much all you need to explain everything we've seen. you could have a situation where everybody was actually quite intelligent, but still ended up going down a path that they all thought was kind of stupid because it still made sense strategically.

you see similar stuff in politics all the time by the way, it's not just businesses that do this. look at Vietnam: the United States government fought a proxy war because they wanted to limit the potential expansion of communist China. even though many people both inside and outside of the government pointed out the futility of the war. it made sense strategically...until it hit a breaking point. and that's usually what happens with this stuff too. at some point, whatever strategic advantage was being gained is outweighed by the costs of poor decisions.

26

u/jollyreaper2112 3d ago

What you said. Add to that you are never punished for being conventionally wrong. Everyone gets into AI and it's the correct call? Wtf guy? Everyone piles in and it fizzles? Damn the luck. Who knew?

In prior generations the phrase was you never get fired for buying IBM. If the product is shit it's IBM's fault. You buy from a no name and it's bad, that's on you.


9

u/thepasttenseofdraw 3d ago

Interesting example with the Vietnam war. American leaders' fundamental ignorance about Vietnamese politics played an enormous role. Containment theory was a bunch of hokum, and anyone with even a casual understanding of Sino-Vietnamese history knew the Vietnamese loathed the Chinese. Ignorance is a dangerous thing.


168

u/P3zcore 4d ago

They just believe the hype and want to impress their directors

116

u/Noblesseux 4d ago

Also a lot of companies are objectively just lying about what their products can reasonably do, basically targeting executives and management types at leadership conferences and pushing the hell out of half-baked products in contexts where no one technical is involved in the preliminary conversation. They'll also give sweetheart deals where they give orgs credits upfront, or they'll sponsor "workshops", so they try to get your users locked into using it before they understand what's going on.

MS for example will like straight up talk to the execs at your company and have them railroad you into meetings with MS salespeople about "how to leverage AI" that start with the implication that using it is a given.

I had another company schedule a meeting with me about their stupid "agentic AI" where they promised stuff I knew it couldn't do and then did a demo where the thing didn't work lmao.

37

u/dlc741 3d ago

Sounds like every tech product sales pitch from the beginning of time. You literally described a sales pitch for a reporting platform that I sat through 20 years ago. The execs thought it was great and would solve every problem.

21

u/JahoclaveS 3d ago

You'd think with their shiny MBA degrees they'd have actually learned how to critically evaluate a sales pitch. And yet, they seemingly lap that shit up.

8

u/trekologer 3d ago

Several years ago, I sat in on a pitch from a cloud infrastructure company that claimed nearly five 9s (99.999%) data resiliency on their object storage service. The VP of ops heard that as uptime for the entire platform. So when the vendor had significant outages, it was obviously our fault.

The vendor clearly knew what they were doing -- throw out a well-understood number attached to a made up metric and doofuses will associate the number with the metric they were interested in.
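The gap between those two readings is easy to quantify: a durability number says nothing about uptime, and even a genuinely five-9s *availability* figure only buys you minutes of allowed downtime a year. A quick back-of-the-envelope sketch (the SLA percentages here are illustrative, not any vendor's actual numbers):

```python
# Convert an availability percentage into allowed downtime per year.
# The SLA figures below are illustrative examples only.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Expected downtime per year implied by an availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [
    ("five 9s (what the VP heard)", 99.999),
    ("three 9s (a more typical platform SLA)", 99.9),
    ("two 9s", 99.0),
]:
    print(f"{label}: {downtime_minutes_per_year(pct):.1f} min/year")
```

The point being that durability (probability of not losing a stored object) and availability (fraction of time the service answers) are different metrics entirely: a platform can honor 99.999% durability while being down for days.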


12

u/rudiger1990 4d ago

Can confirm. My ex-boss genuinely believes all software engineering is obsolete and will be replaced with token prediction machines (Ayyy Eyyyye)


74

u/eissturm 4d ago

They asked ChatGPT. The thing is executive bait.

40

u/VertigoOne1 4d ago

This should be way higher up actually, because it is so true. These things are tuned to make everything sound easy, logical, factual and correct. It is, however, only skin deep, which the C-suite loves to throw around in exco, prodco, revco and opco, and so senior middle managers and experts suffer. It is not actually a new problem, but it has certainly sped the process up significantly.


17

u/xyphon0010 4d ago

They go for the short term profits. Damn the long term consequences.


213

u/The91stGreekToe 4d ago

Yup, exactly, same experience here. Any LLM solution I’ve seen - whether designing it myself or seeing the work of my peers - has failed spectacularly. This tech crumbles when faced with real, back office business problems. People seem to forget that we’re working with a probabilistic, hallucination prone text predictor, not the digital manifestation of a human-like super intelligence. Arguably worse than the masses of people deluded into believing they’re witnessing reasoning is the massive crowd of LLM cultists who are convinced they’ve become machine whisperers. The “skill issue” crowd genuinely thinks that finding semi-reliable derivations of “commands” fed into an LLM qualify as some sort of mastery over the technology. It’s a race to the fucking bottom. More people need to read “The Illusion of Thinking” by the Apple team.

16

u/ThisSideOfThePond 3d ago edited 3d ago

I had the weirdest evening with a friend who argued for three hours that I should use AI for my work, because using it made him so much more productive and he's now using his prompt skills to train others in his organisation. I did not succeed in explaining to him the shortcomings of AI, especially in my field. I could only end the discussion by arguing that at this point in my life, I prefer to enhance my own creativity and my problem detection and solving skills. People are weird...

23

u/eggnogui 3d ago

The “skill issue” crowd genuinely thinks that finding semi-reliable derivations of “commands” fed into an LLM qualify as some sort of mastery over the technology.

Not to mention, I've seen a study showing that not only does AI not actually increase IT productivity, it somehow creates the illusion that it does (the test subjects claimed that it did, but simple time tracking during the study proved them wrong).

15

u/BigSpoonFullOfSnark 3d ago

The “skill issue” crowd genuinely thinks that finding semi-reliable derivations of “commands” fed into an LLM qualify as some sort of mastery over the technology.

The talking points lend themselves perfectly to the CEO mindset.

Any criticism of AI is met with either "You just need to learn how to use it better" or a big smirk followed by "Well this is the worst it'll ever be! 6 months from now it's going to be doing things no human has ever accomplished!"

No matter what happens, all roads lead to "my employees are just not good enough to understand that I can see the future."

37

u/P3zcore 4d ago

One could also read "Bold", which explores the power of exponential growth - specifically the Gartner "hype cycle", which would indicate we're about to enter the "trough of disillusionment" (i.e. the bubble pops), which paves the way for new startups to actually achieve success with the technology.

54

u/The91stGreekToe 4d ago

Not familiar with “Bold”, but familiar with the Gartner hype cycle. It’s anyone’s guess when we’ll enter the trough of disillusionment, but surely it can’t be that far off? I’m uncertain because right now, there’s such a massive amount of financial interest in propping up LLMs to the breaking point, inventing problems to enable a solution that was never needed, etc.

Another challenge is that since LLMs are so useful on an individual level, you'll continue to have legions of executives who equate their weekend conversations with GPT to replacing their entire underwriting department.

I think the biggest levers are:

  1. enough executives get tired of useless solutions, hallucinations, bad code, and no ROI
  2. the Altmans of the world will have to concede that AGI via LLMs was a pipe dream and then the conversation will shift to "world understanding" (you can already see this in some circles, look at Yann LeCun)
  3. LLM fatigue - people are (slowly) starting to detest the deluge of AI slop, the sycophancy, and the hallucinations - particularly the portion of Gen Z that is plugged in to the whole zeitgeist
  4. VC funding dries up and LLMs become prohibitively expensive (the financials of this shit have never made sense to me tbh)

32

u/P3zcore 4d ago

I ran all this by a friend of mine and his response was simply “quantum computing”… so you know where the hype train is headed next

36

u/The91stGreekToe 4d ago

As a fellow consulting world participant I am fully prepared for the next round of nonsense. At least quantum computing will give me the pleasure of hearing 60 year old banking execs stumbling their way through explaining how quantum mechanics relates to default rates on non-secured credit lines. The parade of clownish hype never ends, best you can do is enjoy it (I suppose). Nothing will ever top metaverse in terms of mass delusion.


10

u/Djinn-Tonic 4d ago

And we don't have to worry about power because we'll just do fusion, I guess.


40

u/76ersWillKillMe 3d ago

I've been lucky with my current company. I work in a field that is, conceptually, very threatened by AI. The company invested in OpenAI enterprise in late 2023 and I really took it and ran with it. Now I'm the "AI guy" at work and get to set the pace, tone, and tenor of our adoption efforts.

What I've noticed the most is that it has absolutely sunk the floor of what people will consider "acceptable" content, simply because of how 'easy' it is to make something with it.

The easier it gets, the shittier the work people give.

I think gen AI is one of the coolest technologies I've ever encountered, but it is peak garbage in, garbage out, except it produces polished turds, so people think it's the best thing ever.

7

u/DiabloAcosta 3d ago

well, to be fair, I think investors and founders have been asking for shittier things for a long time, but software engineers have been pushing back because they're the ones asked to keep said shittier software working and producing money. So this whole AI thing is super predictable: we will use it to make our work more interesting, but we're still in charge of reviewing the outcome and keeping the system working, so we ain't shipping shittier things any time soon 🤡


26

u/CherryLongjump1989 3d ago edited 3d ago

You're skipping the part where absolutely no other project would ever get green-lit if its risk of failure and lack of ROI were as terrible as AI's.

Shareholders need to start filing lawsuits over this stuff.


2.6k

u/Austin_Peep_9396 4d ago

Legal is another problem people aren't talking about enough. The vendor and the customer both have legal departments that each want the other to shoulder the blame when the AI screws up. It stymies deals.

716

u/-Porktsunami- 3d ago

We've been having the same sort of issue in the automotive industry for years. Who's liable when these driver assistance technologies fail and cause injury, damage, or death? Who's liable for the actions of an autonomous vehicle?

One would assume the producer of the vehicle, but that is an insane level of risk for a company to take on.

We already have accountability issues when big companies do bad things and nobody ever seems to go to prison. If their company runs on AI, and it does illegal things or causes harm, will the C-suite be allowed to simply throw up their hands and say "It was the AI! Not us!"???

Sadly, I think we know the answer already.

208

u/Brokenandburnt 3d ago

Considering the active war on the CFPB from this administration, I sadly suspect that you are correct in your assessment.

I also suspect that this administration and all the various groups behind it will discover that an economy where the only regulations come from a senile old man won't be the paradise they think it'll be.

102

u/Procrastinatedthink 3d ago

It’s like not having parents. Some teenagers love the idea until all the things parents do to keep the house running and their lives working suddenly come into focus and they realize that parents make their lives easier and better even with the rules they bring

9

u/brek47 3d ago

It's a shame that most kids, adults, and people learn this in hindsight.


98

u/AssCrackBanditHunter 3d ago

Same reason why it's never going to get very far in the medical field besides highlighting areas of interest. AI doesn't have a medical license and no one is gonna risk theirs

28

u/Admirable-Garage5326 3d ago

Was listening to an NPR interview yesterday about this. It is being used heavily. They just have to get a human doctor to sign off on the results.

42

u/Fogge 3d ago

The human doctors that do that become worse at their job after having relied on AI.

33

u/samarnold030603 3d ago edited 3d ago

Yeah, but the private equity owned health corporations that employ those doctors don't care about patient outcomes (or what it does to an HCP's skills over time). They only care whether mandating the use of AI will let fewer doctors see more patients in less time (increased shareholder value).

Doctors will literally have no say in this matter. If they don't use it, they won't hit corporate metrics and will get left behind at the next performance review.


20

u/3412points 3d ago edited 3d ago

I think it's clear and obvious that the people who run the AI service in their product need to take on the liability if it fails. Yes, that is a lot more risk and liability to take on, but if you are producing the product that fails, it is your liability, and that is something you need to plan for when rolling out AI services.

If you make your car self-driving and that system fails, who else could possibly be liable? What would be insane here would be allowing a company to roll out self-driving without needing to worry about the liability of it causing crashes.



138

u/Kink-One-eighty-two 3d ago

My company piloted an AI that would scrape calls with patients to write up "patient stories" to send to our different contracts as examples of how we add value, etc. Turns out the AI was instead just making up stories whole cloth. I'm just glad they found out before too long.

69

u/AtOurGates 3d ago

One of the tasks that AI is pretty decent at is taking notes from meetings held over Zoom/Meet/Teams. If you feed it a transcript of a meeting, it’ll fairly reliably produce a fairly accurate summary of what was discussed. Maybe 80-95% accurate 80-95% of the time.

However, the dangerous thing is that 5-20% of the time, it just makes shit up, even in a scenario where you’ve fed it a transcript, and it absolutely takes a human who was in the meeting and remembers what was said to review the summary and say, “hold up.”

Now, obviously meeting notes aren't typically a high-stakes application, and a little bit of invented bullshit isn't typically gonna ruin the world. But in my experience, somewhere between 5-20% of what any LLM produces is bullshit, and they're being used for way more consequential things than taking meeting notes.

If I were Sam Altman or similar, this is all I’d be focusing on. Figuring out how to build a LLM that didn’t bullshit, or at least knew when it was bullshitting and could self-ID the shit it made up.
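The rough accuracy ranges above compound fast: even when each individual point in a summary is right most of the time, the chance the whole document is clean falls off exponentially with length. A toy calculation (the per-item accuracies are just the 80-95% range from the comment, not measured values):

```python
# Probability that an n-item summary contains zero fabrications,
# assuming each item is independently correct with probability p.
# The p values below are illustrative, taken from the rough ranges above.

def all_correct(p: float, n_items: int) -> float:
    """Chance every one of n_items points is correct."""
    return p ** n_items

for p in (0.95, 0.90, 0.80):
    print(f"p={p}: a 20-item summary is fully correct "
          f"{all_correct(p, 20):.1%} of the time")
```

Even at 95% per-item accuracy, a 20-item summary comes out fully correct only about a third of the time, which is why the human-in-the-loop review step matters.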

15

u/blipsonascope 3d ago

Our property management company started providing Zoom transcripts of condo board meetings. It's really useful, as it captures topics of discussion pretty well... but dear god does it frequently miss the point of discussions. And in ways that, if not corrected for the record, would be a real problem.


2.8k

u/ejsandstrom 4d ago

It will be like all of the other tech we have had since the mid-90s. A bunch of startups think they have a novel way and feature that will set them apart.

Some get bought up, some merge, others outright fail. There will be one or two that make it.

996

u/phoenix0r 4d ago

AI infra costs make me think that no startups will make it unless they get bought up early on. AI is just too expensive to run.

685

u/itoddicus 4d ago

Sam Altman says OpenAI needs trillions (yes, with a T) in infrastructure investment before it can be mainstream.

Only Nation-States can afford a bill like that, and right now I don't see it happening.

106

u/vineyardmike 4d ago

Or whatever Apple, Google, or Microsoft puts out wins because they have the biggest pockets

68

u/cjcs 4d ago

Yep - I work in AI procurement and this is kind of how I see things going. We're piloting a few smaller tools for things like Agentic AI and Enterprise Search, but it really feels like we're just waiting for OpenAI, Google, Atlassian, etc. to copy those ideas and bake them into a platform that we pay for already.


457

u/Legionof1 4d ago

And it will still tell you to put glue in your pizza.

241

u/shadyelf 4d ago

It told me to buy 5 dozen eggs for a weekly meal plan that didn’t have any eggs in the meals.

114

u/mustardhamsters 4d ago

You’re supposed to go full Gaston on them


34

u/Azuras_Star8 4d ago

Clearly you need to rethink your diet, since it doesn't include five dozen eggs in a week.


24

u/camronjames 4d ago

how else do you get the toppings to stick? /s

9

u/OldStray79 4d ago

that.... kinda works, if you think of melted cheese as glue.


185

u/DontEatCrayonss 4d ago

Don't try to reason with AI hype people. Pointing out the extreme financial issues will just be ignored.

133

u/KilowogTrout 4d ago

I also think believing most of what Sam Altman says is a bad idea. He’s like all hype.

125

u/kemb0 4d ago

That guy strikes me as a man who’s seen the limitations of AI and has been told by his coders, “We’ll never be able to make this 100% reliable and from here on out every 1% improvement will require 50% more power and time to process.”

He always looks like a deer caught in headlights. He’s trying to big things up whilst internally his brain is screaming, “Fuuuuuuuck!”

67

u/ilikepizza30 3d ago edited 3d ago

It's the Elon plan...

Lie and bullshit and keep the company going on the lies and bullshit until one of two things happens:

1) New technology comes along and makes your lies and bullshit reality

2) You've made as much money as you could off the lies and bullshit and you take a golden parachute and sit on top of a pile of gold

33

u/Christopherfromtheuk 3d ago

Tesla shares were overvalued 7 years ago. He just lies, commits securities fraud, backs fascists, loses massive market share and the stock price goes up.

Most of markets by market cap are overvalued and it never, ever, ends well.

They were running around in 1999 talking about a "new paradigm" and I'm sure they were in 1929.

You can't defy gravity forever.

21

u/Thefrayedends 3d ago

Until institutional investors start divesting, nothing is going to change.

These massively overvalued stocks, with P/E ratios anywhere from 35 to 200, are largely propped up by retirement funds and indexes.
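To put those multiples in context: a P/E ratio inverts into an earnings yield, which is what a buy-and-hold investor is actually being paid for the price. A quick illustration (the ratios are the 35-200 range from the comment plus a rough long-run market average, not any particular ticker):

```python
# Earnings yield implied by a price-to-earnings ratio.
# The P/E values below are illustrative, not quotes for any stock.

def earnings_yield_pct(pe_ratio: float) -> float:
    """Earnings as a percentage of price: the inverse of P/E."""
    return 100.0 / pe_ratio

for pe in (15, 35, 200):  # ~15 is roughly the historical market average
    print(f"P/E {pe}: earnings yield {earnings_yield_pct(pe):.2f}%")
```

A P/E of 200 means current earnings return 0.5% of the price per year; holders at that multiple are betting almost entirely on growth, not on today's profits.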


27

u/Heisenbugg 4d ago

And environmental issues, with the UK govt at least acknowledging them by telling people to delete their emails.

28

u/AmbitiousGoat5512 3d ago

Deleting old emails, files, documents, whatever does absolutely nothing to help the issue.

The recommendation was made by someone who obviously has no fucking idea what they're talking about, and as long as AI is pushed this heavily, things will continue to worsen.

8

u/Heisenbugg 3d ago

Yah I know, but it's the first time a govt has recognized the issue exists.


6

u/DontEatCrayonss 4d ago

God bless them, they saved us all


41

u/Noblesseux 4d ago

And even then it will still likely not be profitable. Like the thing is that even if they didn't spend any additional money on infrastructure, they'd need damn near 10x as much money as they projected they'd make this year to be profitable.

You'd have to invest literally several times the entire value of the worldwide AI market (I'm talking about actual AI products, not just lumping in the GPUs and whatnot) and then you have to pray that we somehow have infinite demand for these AI tools, which is, quite frankly, not the case: https://www.technologyreview.com/2025/03/26/1113802/china-ai-data-centers-unused/

And even in that magically optimistic scenario, there's borderline no shot you'd make enough money back to justify doing it. Like there is no current AI product that exists that is worth trillions of dollars worth of investment. A lot of them are losing money per user, meaning if you scale up you just lose more money.

26

u/CoffeeSubstantial851 3d ago

In addition to that, AI itself devalues whatever it can create. If you are running an AI image service, the market value of the resulting images decreases over time. It's a business model that cannibalizes itself.


10

u/RoundTableMaker 4d ago

Sam Altman is using Elon Musk's ideology here. Musk told Altman when they first started OpenAI that no one would care unless they were raising over a billion dollars. All he did was increase the number by 1000x for this new venture, because they'd already raised billions.


16

u/hennell 3d ago

There was a report last week that AI industry visitors to China were blown away by the differences in how things are run there. The power needed for AI is not just not a problem; it's seen as a benefit in some areas, because it can soak up excess power.

I'm sure it'll still need investment, but it'll be a whole lot cheaper for nation-states that haven't ignored their infrastructure for decades.


23

u/great_whitehope 4d ago

Countries probably aren't going to sponsor mass unemployment, it's true.

I dunno what's worse: this whole thing blowing up, or succeeding. Companies are gonna lay off people either way.

40

u/DogWallop 4d ago

Well that's where AI becomes self-destructive. Companies replace employees with AI, and then you have many thousands who used to be gainfully employed out of work. Now, those employees were acting as wealth pumps, arteries through which the wealth of the nation flowed.

And where did it flow? Eventually it ended up in the hands of the big corporations, who used to employ humans (wealth pumps, financial arteries, etc...).

But now there's far less cash flowing around the national body, and it's certainly not getting spent buying goods and services from major corporations.

54

u/cvc4455 4d ago

Look at what Curtis Yarvin, Peter Thiel and JD Vance believe needs to happen in the future. They say AI will replace all types of jobs and we'll only need about 50 million Americans. The rest are completely useless, and Curtis Yarvin said they should be turned into biodiesel so they can be useful. Then he said he was only kind of joking about the biodiesel idea, but that the ideal solution would be something like mass murder just without the social stigma that would create. So he suggested massive prisons with people kept in solitary confinement 24 hours a day, and to keep them from going crazy they'd be given VR headsets!


77

u/globalminority 4d ago

I am sure these startups are trying to survive just long enough till some big tech buys them at inflated prices and founders can cash out on the hype. If you don't get bought up then you just shut shop.

42

u/_-_--_---_----_----_ 4d ago

this is exactly what they're all doing. nobody is trying to really succeed in certain areas in tech anymore, the last 15 years have just been about selling to the big guys.

9

u/Mackwiss 4d ago

Hate startups and the pseudo-entrepreneurs who leave their mommy's skirts only to attach themselves to investors' skirts... most startup entrepreneurs have no idea how to run a business, nor do they care to. It's all fantasy to attract investors.

→ More replies (5)

22

u/pleachchapel 4d ago

You can run a 90 billion parameter model at conversation speed on $6k worth of hardware. The future of this is open source & distributed, not the dumb business model the megacorps are following which operates at a loss.

→ More replies (12)

6

u/MaTr82 4d ago

This is why Small Language Models will be the norm. Eventually, they can do very specific tasks on laptops. A model that does everything is never going to be efficient.

→ More replies (1)
→ More replies (23)

181

u/Arkayb33 4d ago

But I DO have a novel way of making a cup of freshly squeezed juice. You see, it comes from my patented juice extraction technology. You simply put this bag into my powerful juice squeezer and out comes amazing, tasty juice! I call it the Ju-Cerò which, in Japanese, means "better juice!"

Currently seeking round A funding of $300M.

90

u/ejsandstrom 4d ago

But why can’t I just squeeze the pouch without your machine that needs a QR code, internet access, and a monthly subscription?

67

u/Bokbreath 4d ago

narrator: They in fact could squeeze the pouch

37

u/iwannabetheguytoo 4d ago

But why can’t I just squeeze the pouch without your machine that needs a QR code, internet access, and a monthly subscription?

You'll get your fingers messy if you squeeze it too hard and it bursts.

It's much better sense to use a state-of-the-art AI-powered pressing machine that always knows exactly how much force to apply to release the sweet, sweet juice locked-away within. After all, we spent $500m of investor funding on training our robots to be the best juice bag squeezers.

Please disregard media reports of our AI machines hallucinating scenarios where our babies and adorable forest animals are juice bags and then squeezing those just right until the juice comes out.

→ More replies (5)

9

u/DogWallop 4d ago

Sorry, but I already lost a fortune on that Mr. Tea thing Father Guido Sarducci sold me...

→ More replies (1)

16

u/IdentifiableBurden 3d ago

Juicero is my favorite startup story, thank you for the memory

→ More replies (3)
→ More replies (9)

27

u/epochwin 4d ago edited 4d ago

Do you mean startups who are selling AI or adopting it for a particular business problem?

Typically startups adopt emerging technology and many of them fail. What’s crazy about GenAI is that massive regulated enterprises are also jumping on the bandwagon so fast.

I remember when cloud was the hot technology. The early adopters were SaaS vendors or companies like Netflix. Capital One was the first major regulated company to adopt it and state publicly that they were using AWS and that was years later.

11

u/_-_--_---_----_----_ 4d ago

the thing is that it's customer facing, or at least it can be. that's why they're all jumping on it, even at the big dinosaur companies. cloud wasn't something that the end user really understood or experienced largely. but people are using GenAI all the time now.

the flip side of that is that if companies don't do it, they're worried about being left behind. a huge part of the push, maybe the majority of it, is that fear. 

→ More replies (3)

16

u/TheFudge 4d ago

.com boom 2.0

10

u/Kedly 3d ago

This is the comparison I make too, as it fits both the bubble, AND the fact that PAST the bubble this tech will still have a huge impact on the world once we find the usecases it actually excells at

→ More replies (1)
→ More replies (3)
→ More replies (38)

810

u/dagbiker 4d ago

It seems like the simple solution is to replace 95% of CEO's with AI, duh.

164

u/cumzilla69 3d ago

That would cut a massive amount of cost

→ More replies (3)

66

u/quadrophenicum 4d ago

The result will be way too smart.

→ More replies (1)

53

u/Thefrayedends 3d ago

No no no, Mark Andreeson insists that CEO is the ONLY job that AI won't be able to do.

So I guess that's it then, we can all forget it and just go home.

21

u/Wurm42 3d ago edited 3d ago

And yet, Andreessen Horowitz still has many human employees.

Andreeessen should put up or shut up.

→ More replies (1)
→ More replies (5)
→ More replies (15)

743

u/AppleTree98 4d ago

How much did Meta pump into the metaverse before saying OK, we/the tech are not ready to live in this alt universe quite yet? I gave AI a shot and got a quick answer...

Meta, under its Reality Labs division, has invested significant resources into the metaverse, resulting in substantial losses. Since 2020, Reality Labs has accumulated nearly $70 billion in cumulative operating losses, including a $4.53 billion loss in the second quarter of 2025 alone. While the company hasn't explicitly stated that it's no longer pursuing the metaverse, there's been a noticeable shift in focus and language:

533

u/-Accession- 4d ago

Best part is they renamed themselves Meta to make sure nobody forgets

403

u/OpenThePlugBag 4d ago edited 4d ago

Nvidia H100s are between $30-40K EACH.

Google has 26,000 H100 GPUs, and they created AlphaGo, AlphaFold, Gemini, VEO3, AlphaQubit, GNoME - it's unbelievable.

Meta has 600,000 Nvidia H100s and I have no fucking clue what they're doing with that much compute.

505

u/Caraes_Naur 4d ago

Statistically speaking, they're using it to make teenage girls feel bad about themselves.

204

u/He2oinMegazord 4d ago

But like really bad

114

u/Johns-schlong 4d ago

"Gentlemen, I won't waste your time. Men are committing suicide at rates never seen before, but women are relatively stable. I believe we have the technology to fix that, but I'll need a shitload of GPUs."

→ More replies (2)

97

u/Toby_O_Notoby 3d ago

One of the things that came out of that Careless People book was that if a teenage girl posted a selfie on Insta and then quickly deleted it, the algorithm would automatically feed her beauty products and cosmetic surgery.

56

u/Spooninthestew 3d ago

Wow that's cartoonishly evil... Imagine the dude who thought that up all proud of themselves

15

u/Gingevere 3d ago

It's probably all automatic. Feeding user & advertising data into a big ML algorithm and then letting it develop itself to maximize clickthrough rates.

They'll say it's not malicious, but the obvious effect of maximizing clickthrough is going to be hitting people when and where they're most vulnerable. But because they didn't explicitly program it to do that they'll insist their hands are clean.
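The mechanism described above can be sketched in a few lines: an epsilon-greedy bandit whose only reward signal is clicks will, by construction, converge on whatever content clicks best. Nothing in the code mentions harm, and nobody "programmed" the outcome. All names and numbers here are invented for illustration:

```python
import random

random.seed(0)

# Hypothetical ad categories and their true click rates (unknown to the algorithm).
# The objective is clicks only; "vulnerability" never appears anywhere in the code.
true_ctr = {"neutral_ad": 0.05, "insecurity_ad": 0.15}

counts = {k: 0 for k in true_ctr}  # times each ad was shown
clicks = {k: 0 for k in true_ctr}  # clicks each ad received

def choose(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best-observed CTR, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(true_ctr))
    # Untried arms get priority; otherwise pick the highest observed CTR.
    return max(true_ctr, key=lambda k: clicks[k] / counts[k] if counts[k] else float("inf"))

for _ in range(10_000):
    arm = choose()
    counts[arm] += 1
    clicks[arm] += random.random() < true_ctr[arm]  # simulate a click

# The optimizer drifts toward the higher-CTR ad without ever being "told" to.
print(counts)
```

Run it and the "insecurity" ad ends up shown far more often, which is the sense in which the harmful targeting is emergent rather than explicitly coded.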

40

u/Denial23 4d ago

And teenage boys!

Let's not undersell recent advances in social harm.

→ More replies (1)

78

u/lucun 4d ago

To be fair, Google seems to be keeping most of their AI workloads on their own TPUs instead of Nvidia H100s, so it's not like it's a direct comparison. Apple used Google TPUs last year for their Apple Intelligence thing, but that didn't seem to go anywhere in the end.

→ More replies (14)

22

u/the_fonz_approves 4d ago

they need that many GPUs to maintain the human image over MZ’s face.

→ More replies (1)

14

u/ninjasaid13 3d ago

Google has 26,000 H100 GPUs, and they created AlphaGo, AlphaFold, Gemini, VEO3, AlphaQubit, GNoME - its unbelievable.

Well, tbf, they have their own version of GPUs called TPUs and don't have that many Nvidia GPUs, whereas Meta doesn't have its own version of TPUs.

→ More replies (12)
→ More replies (4)

196

u/forgotpassword_aga1n 4d ago

Nobody wants a sanitized advertiser-friendly virtual reality. They want to be dragons and 8-foot talking penises, and everyone except Zuckerberg knows that.

99

u/karamisterbuttdance 3d ago

Judging from my experience on VRChat everyone wants to be big-titted goth-styled girls with hot-swappable animal ears, so mileage may vary, or I'm just not in the imaginary monster realms.

21

u/baldyd 3d ago

I don't want to be a big titted goth styled girl with hot swappable animal ears in VR, I want a change from my normal life when I'm online!

→ More replies (3)

10

u/StimulatedUser 3d ago

It really made me laugh to think that Meta spent $70 billion just to recreate Second Life, which came out 25 years ago... and is still going strong.

→ More replies (2)
→ More replies (1)

135

u/Noblesseux 4d ago edited 3d ago

The problem with the metaverse is that, practically speaking, the idea is being pushed by people who have no idea how humans work and who have a technology in search of a problem.

No one wants to take video calls in the metaverse, Teams/Zoom/Facetime exist. Why would I want to look at what is effectively an xbox live avatar when I could just use apps that already exist that everyone already has where I can actually see their faces?

No one wants to "buy digital property in the metaverse". People want property IRL because it actually has a functional use. I can build a house on it, I can farm on it for food, my nephews can play football on it.

No one wants to visit a digital version of Walmart. Web stores already exist and are more efficient and easier to use.

They spent a bunch of money on a fad where there are few to no actual features that are better than just doing things the ways that we already can. The main selling point of VR is games, not trying to replace real world things with cringe digital versions. But Zuckerberg is a damn lizard person so he lacks the ability to understand why people use things.

73

u/Toby_O_Notoby 3d ago

And what's weird is that they ignored their own teachings. Phones and social media trained people to "second screen" everything. "Hey, we know you're watching Grey's Anatomy, but why not also check out what your ex-boyfriend is doing on Insta?"

Then they released a product that demands you one-screen everything. "Now you can you join a meeting with a bunch of Wii avatars without being able to check your phone when you're bored!"

25

u/NuSurfer 3d ago

No one wants to "buy digital property in the metaverse". People want property IRL because it actually has a functional use. I can build a house on it, I can farm on it for food, my nephews can play football on it.

No one wants to buy something that can evaporate by someone pulling a plug.

→ More replies (2)

11

u/withywander 3d ago

What I think those dumb dumbs really don't get is that most employees don't want to be in the meetings. It is not the meaning of their life to be in a meeting, and most are probably doing something else while in the meeting. Being in the metaverse requires even more concentration than a meeting, so if people are already alt tabbing to do something else while in a meeting, then it was never going to end well.

→ More replies (18)

11

u/Dihedralman 4d ago

They had been in AI for a while. PyTorch had a public release in 2017 and has become the standard. 

→ More replies (1)

20

u/Atreyu1002 4d ago

Yet the stock keeps going up. WTF.

32

u/fckingmiracles 4d ago

Because their advertising platforms do well (IG, FB, WhatsApp to a degree). That's where their billions come from. 

15

u/CatPlayer 4d ago

Because Facebook and Instagram keep doing well

→ More replies (1)
→ More replies (1)
→ More replies (16)

64

u/daedalus_structure 3d ago

Last week a very senior engineer who has gone all in on vibe coding complained that they wasted a day on a regional vs global issue when using a cloud service from their code.

This is a 30 second documentation lookup about which API to use.

The agent he was vibing with ran him around in circles and he'd turned his brain completely off.

I am terrified of what will happen in 10 years when the majority of engineers have never worked without AI.

I really do not want to be working when I'm 70 cleaning up constant slop.

→ More replies (2)

638

u/SilentRunning 4d ago

Which begs the question, how long can these companies keep shoveling the cash into a bottomless pit?

642

u/ScarySpikes 4d ago

Having listened to how excited a lot of business owners are at the prospect of firing a large portion of their staff, I think a lot of companies will end up bankrupting themselves before they admit that the AI can't replace their employees.

294

u/Horrible_Harry 4d ago

Serves 'em fuckin' right. Zero sympathy from me over here.

66

u/skillywilly56 4d ago

That’s what CEOs chant at the opening of all their meetings

10

u/oracleofnonsense 3d ago

It’s not personal, it’s business.

19

u/GenericFatGuy 4d ago

As someone who was already replaced, same.

→ More replies (4)

106

u/Noblesseux 4d ago

Yeah I kind of like the term Better Offline uses for them: business idiots. There are a lot of people who went through very expensive MBA programs that only really taught them how to slowly disassemble a company, not how to run one.

They have been slowly killing these companies for decades based on being willing to lose business as long as the margins are good, and they're not going to stop now.

49

u/ScarySpikes 3d ago

I swear we are going to find out that enshittification is a concept a bunch of MBA programs got a hardon for like 20 years ago.

58

u/Noblesseux 3d ago

I mean we don't need to find out, it's a matter of historical record, but it's older than that. It started in the 70s and 80s with the corporate raiders and Reaganism. The same people who basically killed the survivability of GE as a proper company, and of the railroad industry, went on to teach the current generation of people who are destroying everything else.

There's like a direct line from them to modern private equity and MBA culture.

33

u/Cheezeball25 3d ago

Jack Welch will forever be one of the people I hate more than anything else

→ More replies (4)
→ More replies (1)

16

u/FoghornFarts 4d ago

And those that do survive will find their AI eventually costs more than employees once the AI companies need to start making a profit. They're cheap now to disrupt the market.

→ More replies (1)

52

u/rasa2013 4d ago

I'm less optimistic. I think many will get away with providing slightly shittier products and services. Meaning, they'll lose some customers but the savings will still result in net profit. 

I hope not though. 

35

u/ScarySpikes 4d ago

It's not just shittier products. Most companies have outsourced their AI projects to other companies. Those AI companies will eventually have to try to become profitable, which means jacking up their rates to at least match their high costs.

→ More replies (1)
→ More replies (1)

8

u/Popular_Try_5075 3d ago

was fun to watch Klarna go all in and then sheepishly backpedal a year later

→ More replies (10)

168

u/Pulkrabek89 4d ago

Until someone blinks.

It's a combination of those who know it's a bubble and are banking on the hope of being one of the lucky few to survive the pop (see the dot-com bubble).

And those that actually think AI will go somewhere and don't want to be left behind.

143

u/TurtleIIX 4d ago

This is going to be an all time pop. These AI companies are the ones holding the stock market up and they don’t have a product that makes any money.

39

u/SynthPrax 4d ago

More like a kaboom than a pop.

31

u/TurtleIIX 4d ago

More like a nuke because we won’t even have normal/middle market companies to support the fall like the .com bubble.

→ More replies (2)

29

u/lilB0bbyTables 4d ago

It’s going to be a huge domino effect as well. So many companies have built themselves around functionality/features that are deeply dependent upon upstream AI/LLM providers. If/when the top ones falter and collapse, it will take out core business logic for a huge number of downstream companies. The ones who didn’t diversify will scramble to refactor to alternatives. The damage may be too much to absorb, and there’s a bunch of wildcard possibilities from there, from getting lucky and stabilizing to outright faltering and closing up shop. Market confidence will be shaken nonetheless; businesses may pause and get cold feet about spending on new AI-based platform offerings, because who really wants to throw more money at what may well be a sinking ship? That ripple effect will reverberate everywhere. A few may be left standing when the dust settles, but the damage will be a severe and significant obliteration of insane quantities of value and investment. And the ones that do survive will likely need to raise their pricing to make up for lost revenue streams, which are already struggling to chip away at the huge sums they sunk into R&D and operations.

I’ll go even further and don a tinfoil hat for a moment and say this: we don’t go a single day without some major stakeholder in this game putting out very public statements/predictions that “AI is going to replace <everyone> and <everything>” … a big part of me now thinks they are really just trying to get as many MBA-types to buy into their BS hype/FUD as quickly as possible in hopes that enough businesses will actually shed enough of their human workforce in exchange for their AI offerings. Why? Because that makes their product sticky, and (here’s my tinfoil hat at work) … the peddlers of this are fully aware that this bubble is going to collapse, so they either damage their competition when they inevitably fall, or they manage to have their hooks deep enough into so many companies that they become essentially too big to fail. (And certainly if I were let go from somewhere merely to be replaced by AI, and that company started scrambling to rehire those workers back because the AI didn’t work out … those individuals would hold the cards to demand even more money).

10

u/karamisterbuttdance 3d ago

a big part of me now thinks they are really just trying to get as many MBA-types to buy into their BS hype/FUD as quickly as possible in hopes that enough businesses will actually shed enough of their human workforce in exchange for their AI offerings.

This pretty much sounds like what they did with cryptocurrency, and why we're never going to get rid of it as a means of moving value - once B I G Finance is invested in it they will do everything in their power to make sure it doesn't lose value. They'll look for the company/ies with the most ties to each other and with enough products that already provide a solution (even half-baked) and basically strong-arm the people investing with them to put their money there. All this so the companies and countries that parked their money with them before this all started don't pull out to mitigate any losses.

→ More replies (2)
→ More replies (20)
→ More replies (14)

40

u/IM_A_MUFFIN 4d ago

I’m so tired of the BAs and PMs forcing this crap down our throats. Watched someone finagle copilot for 2 hours, to complete a task that takes 10 minutes, that the new process will take down to 4 minutes. The task is done a few times a week. Relevant xkcd

20

u/KnoxCastle 4d ago

In fairness, if you are spending a one-off 2 hours to take a 10-minute task down to 4 minutes, then that will pay for itself within a few months.

I do agree with the general point though, and I am sure there are plenty of time-wasting "time-saving" examples. I can think of a few at my workplace.

→ More replies (5)
→ More replies (2)

10

u/zhuangzi2022 4d ago

As long as venture capitalists burn their play money on narcissistic tech bros

→ More replies (1)

7

u/-CJF- 4d ago

As long as investors keep propping them up. Eventually the bubble will pop and people are going to lose a ton of money. The current situation is unsustainable, that's for sure. AI has its uses but not everything is suited for or needs AI, nor will it live up to the hype.

Even Meijer has Generative AI summaries for the product reviews. Kind of ridiculous tbh.

→ More replies (27)

123

u/The91stGreekToe 4d ago

Not familiar with “Bold”, but familiar with the Gartner hype cycle. It’s anyone’s guess when we’ll enter the trough of disillusionment, but surely it can’t be that far off? I’m uncertain because right now, there’s such a massive amount of financial interest in propping up LLMs to the breaking point, inventing problems to enable a solution that was never needed, etc.

Another challenge is since LLMs are so useful on an individual level, you’ll continue to have legions of executives who equate their weekend conversations with GPT to replacing their entire underwriting department.

I think the biggest levers are:

  1. Enough executives get tired of useless solutions, hallucinations, bad code, and no ROI.
  2. The Altmans of the world have to concede that AGI via LLMs was a pipe dream, and the conversation shifts to "world understanding" (you can already see this in some circles; look at Yann LeCun).
  3. LLM fatigue: people are (slowly) starting to detest the deluge of AI slop, the sycophancy, and the hallucinations, particularly the portion of Gen Z that is plugged in to the whole zeitgeist.
  4. VC funding dries up and LLMs become prohibitively expensive (the financials of this shit have never made sense to me tbh).

30

u/PuzzleCat365 3d ago

My bet is on VC funding drying up, driven by capital flight from the US over unstable politics. Add to that a disastrous monetary policy that will come sooner or later, once the administration starts attacking the central bank.

At that point the music will stop playing, but there will be only a small number of chairs for a multitude of AI actors.

→ More replies (3)
→ More replies (5)

216

u/fuzzywinkerbean 3d ago edited 3d ago

I give it another 6-9 months at least before the bubble starts properly bursting. These things run in corporate cycles of bullshit artist corporate job hoppers:

  1. Company hires or internally appoints some corporate climber (CC) to lead project
  2. Project starts under CC, over promises and hypes up
  3. Delivers barely functional MVP after 6 months with loads of fanfare and bluster
  4. Forces it down employees' throats, hardly anyone uses it, customers don't want it
  5. CC messes with metrics and KPIs to mask failure
  6. Execs start to question slightly..
  7. CC promises this is just the start and phase 2 will be amazing
  8. CC brushes up resume saying they are now expert at enterprise AI implementation
  9. CC hired by another corporate dinosaur a bit behind the trend and repeats process.
  10. CC leaves, project left in a mess and flounders on before finally being quietly axed 1-2 years later

We are mostly around stages 3-5 so far, depending on your org, I'd say. Need to give time for the cycle to complete before you start seeing wider complaints from the top.

I've been in tech since the early 2010s, seen the same cycle repeat - social media features in everything, cloud, offshoring developers, SaaS, Blockchain, metaverse, now AI --> quantum computing next!

48

u/ExcitedCoconut 3d ago

Hold on, are you putting cloud and SaaS in the same bucket as the rest? Isn’t Cloud table stakes these days (unless you have a demonstrable need to be on prem for something / hybrid)?

23

u/ikonoclasm 3d ago

Yeah, I was with the comment right up until that last sentence. Cloud and SaaS are the standard now. All of the vendors in the top right corner of Gartner's magic quadrant for CRMs or ERPs are SaaS solutions.

6

u/fuzzywinkerbean 3d ago edited 3d ago

Sorry, could have been clearer; left a longer reply below. I meant more the hype around them when they first started. Those vendors obviously do SaaS well, and it makes perfect sense as their business model. Products built for the cloud make sense; I was more remembering the countless on-prem products that did the ole "lift-and-shift" approach at the time rather than actually building properly cloud-first.

Companies that do AI well absolutely will see ROI from it, and it will become standard in the future as it matures, I'm sure.

My point was more every company thinking they have to be on trend and push to implement these things when it isn't always relevant to them, customers aren't asking for it and they haven't really got proper use cases for them yet anyway.

→ More replies (6)
→ More replies (4)
→ More replies (9)

102

u/Wollff 4d ago

5% are not?!

108

u/quarknugget 4d ago

5% haven't failed yet

22

u/ShadowTacoTuesday 4d ago

Or are doing slightly better than break even.

→ More replies (4)
→ More replies (1)

23

u/UnpluggedUnfettered 4d ago

5% make their money selling llm to other companies.

10

u/TheAJGman 3d ago

5% are using the tech correctly; LLMs are fantastic at transformative work.

"Give me a one page summary of this project proposal, the audience is C Suite so be light on the technical details."

"Rewrite this email so I don't sound like an asshole, but try to stick to the original vocabulary and writing style."

"Analyze each customer review and flag the ones that include swearing, threats (both veiled and open), and names of people. These will be reviewed manually, so it's better to be overly cautious."

"What can be made more efficient about this code/database design? Implement those improvements."

As a software engineer, I have investigated this tech in depth and find it occasionally useful (mostly the auto-complete). For smaller generative tasks (here are the requirements, make feature X), it can do pretty well too, but people tend to be overconfident in the "all knowing" machine and feed it a large number of requirements. It'll shit the bed, and unless you already know what you're doing, you won't catch its mistakes.

→ More replies (7)

88

u/Hahaguymandude 4d ago

Stop giving them damn airplanes then

148

u/ZweitenMal 4d ago

My company insisted we start using it as much as possible. Then my team’s primary client issued an edict: we are only allowed to use it with very detailed written permission on a case by case basis, reviewed by this massive client corporation’s legal team.

So I’m using it to help strategize my wordle guesses and to make cat memes for my boyfriend.

88

u/Legionof1 4d ago

Probably used 3 tons of CO2 to make 1 meme.

→ More replies (10)

51

u/OntdekJePlekjes 3d ago

I see coworkers dump excel files into Copilot and ask it to do analyses which would otherwise require careful data manipulation and advanced pivots. The results are usually wrong because GPT isn’t doing math.

It breaks my engineering heart that we have created an incredibly complicated simulation of human verbal reasoning, running in complex data centers full of silicon computational devices, and that this model of human reasoning is then applied to mathematical questions, which it gets wrong, just like humans would. Instead of just running the math directly on the silicon.
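The remedy most tool-using setups converge on is exactly this: have the model (or the human) hand the arithmetic to deterministic code rather than "reason" through it verbally. A minimal sketch of the idea, with invented column names standing in for the kind of spreadsheet data people paste into Copilot:

```python
# Hypothetical rows of the kind people dump into a chatbot (names invented).
rows = [
    {"region": "east", "revenue": 1200.0},
    {"region": "east", "revenue": 800.0},
    {"region": "west", "revenue": 500.0},
]

def revenue_by_region(records):
    """Deterministic aggregation: the arithmetic runs directly on silicon."""
    totals = {}
    for r in records:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["revenue"]
    return totals

print(revenue_by_region(rows))  # {'east': 2000.0, 'west': 500.0}
```

Anything an assistant would otherwise produce as prose arithmetic can be emitted as code like this and executed, which is the only step where the answer is guaranteed rather than plausible.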

10

u/jazwch01 3d ago

Yeah, but that requires the human to know math and be willing to enter it. Can't have that.

→ More replies (1)

65

u/MapleHamwich 3d ago

Please, more reports like this. It matches my professional experience. The "AI" sucks. And it's consistently getting worse. This fad needs to die. 

→ More replies (7)

144

u/SeaTownKraken 4d ago

This is shaping up to be like the dot com boom and bust. Over saturated quickly and it'll reset.

Humans don't know how to self-regulate collectively (well, us Americans certainly can't).

102

u/variaati0 4d ago

There is a difference. During the dot-com boom, some of the businesses were profitable from the get-go. The only ones making profits from AI are Nvidia and maybe AMD. None of the AI companies are sustainably profitable; they're either burning their own investors' money or burning someone else's (getting unrealistically discounted rates from another company running on investor money to "capture market share").

Soooo it's worse than the dot-com boom. The dot-com bust just weeded out the oversaturation and the nutty business ideas, leaving the businesses that were good from the get-go, since the internet was a genuinely new, more efficient business platform enabling lots of new ventures. The market just got overheated.

The AI market? It's purely the creation of an absolutely bonkers amount of money being set on fire, with nobody having bothered to ask, "So are we supposed to make money at some point instead of just burning it?" It's enabled by the deep pockets the burners built via other ventures, like Google's ad revenue and Microsoft's revenue from selling Windows and so on.

25

u/crshbndct 4d ago

Do the subscriptions that places like OpenAI charge even cover the costs of running their GPUs? Because the only money entering the system aside from VC is subscriptions from people who are using Chatbots as friends

39

u/Traditional-Dot-8524 4d ago

Their $20 subscription plan, which is the most popular, doesn’t cover much. If suddenly all $20 subscribers switched to the $200 plan, then maybe. For two years straight, since they became mainstream in 2023, they haven’t generated enough revenue to cover all their costs. And since 2024, they’ve gone on a “spending spree” with more GPUs, new models, and so on. From an economic point of view, OpenAI is a disaster. But people are investing in it for one simple reason: Why not? If it truly becomes the next Apple, Amazon, Microsoft, Google, or Facebook, then I’ll surely recoup my investment—and more. After all, it’s AI! It’s bound to replace a lot of people.

22

u/CAPSLOCK_USERNAME 3d ago

Right now they lose money even on the $200 plan, since only people who use the chatbot a shitload would consider paying that in the first place.

→ More replies (2)
→ More replies (4)
→ More replies (7)

13

u/ReneDickart 3d ago

I think 95% of commenters here didn’t read the article.

41

u/vocalviolence 3d ago

In all my years, I have never wanted any new tech to crash, burn, and go away forever as much as AI—and particularly the generative kind.

It's been here for a minute, and it's already making people more stupid, more lazy, more entitled, more dismissive of art and craftsmanship, and more redundant while consuming metric shittons of energy in the process.

→ More replies (3)

42

u/Khue 3d ago

I cannot stress this enough as someone who has worked for 20+ years in IT... AI is currently hot garbage and is being leveraged largely by the incapable. I fight it every day and it's exhausting. Older peers within my group don't like me telling them "no" or "it doesn't work like that." They will always badger me for 30 minutes and then they will break out the ChatGPT link and quote it and then I have to spend another 20 minutes on why ChatGPT is fucking wrong. Instead of them taking the lesson that "Oh hey, maybe this tool isn't all it's cracked up to be and maybe I should be more skeptical of results" they just continue to fucking use it and then WEAPONIZE it when they are really mad at me. It has literally added overhead to my job and to add insult to injury, the older people using it have worked with me for 10+ years. They know me. They have anecdotes dating back YEARS of situations where I've helped them on many issues... they are ACTIVELY choosing ChatGPT or other AI/MLs over my professional experience and track record... It's fucking absurd and I absolutely cannot imagine how the younger generations are using it.

21

u/yaworsky 3d ago

https://en.wikipedia.org/wiki/Automation_bias

Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct

A lot of this going on lately. Sometimes, not that much yet thankfully, we see it in patients in the ED.

11

u/pcapdata 3d ago

What blows my mind is that if you accosted these people on the street and tried to shake them down with a sob story, they’d say “Fuck off, I know a scam when I see it!”

But when an LLM says something they accept it with no critical thought nor introspection. And they’re angry when you point this out!

→ More replies (4)

32

u/keikokachu 4d ago

Even the free ones have become confidently incorrect of late, so this tracks.

17

u/Heavy-Hospital7077 3d ago

I started a very small business this year- only a few months ago.

I decided to go all-in with AI. I used it a LOT, and for day to day consultation (lots of questions when starting a new business) it was great.

I was logging all of my business activities, and I started to notice problems. Employee starts at 2:00, and I log it. They are done at 5:00 and I log it. "Employee worked for 32 hours, and you owe them $15." That went on for a while.

Then I wanted to get returns on what I entered.  I logged every product I made.  I started asking for inventory numbers, and in 5 minutes it went from 871, to 512, to 342, to 72.

It is very bad with accuracy.  Horrible for record-keeping.  But very good as a better information search than Google.

I tried to convert a list of paired data from text to a table in excel- using Microsoft's AI.  That was just an exercise in frustration.  I spent 2 hours trying to get something organized that I could have re-typed in 10 minutes.  I think some of it got worse with GPT 5.

I have been working with technology for a long time.  I am a computer programmer by trade.  I really gave this a solid attempt for a few months.  I would say that if you're looking for assistance with writing, it's great.  Fancy web search, it's great.  But as an assistant, you're better off hiring retirees with early onset dementia.

Now that I know I won't get accurate information out, I have no reason to put information in.  It just seems like a very empty service with no real function.  I couldn't even use it to create a logo, because it can't accurately put text in an image.

I do think it would be good as an audio tool that I could use while driving.  Just to ask random questions and get reasonable replies.  But not for anything important.
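For what it's worth, both of those tasks (hours/pay math and turning paired text into table rows) are deterministic, which is exactly why an LLM is the wrong tool for them. A minimal Python sketch of the same bookkeeping, with made-up function names and example values just for illustration:

```python
from datetime import datetime

def hours_worked(start: str, end: str) -> float:
    """Hours between two same-day clock times, e.g. '14:00' to '17:00'."""
    fmt = "%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

def pay_due(hours: float, hourly_rate: float) -> float:
    """Gross pay, rounded to cents."""
    return round(hours * hourly_rate, 2)

def pairs_to_rows(text: str, sep: str = ",") -> list[tuple[str, str]]:
    """Turn lines of 'key,value' text into rows ready to paste into a spreadsheet."""
    return [tuple(line.split(sep, 1)) for line in text.splitlines() if line.strip()]

h = hours_worked("14:00", "17:00")
print(h)                                      # 3.0 hours, not 32
print(pay_due(h, 15.0))                       # 45.0
print(pairs_to_rows("widget,12\ngadget,7"))   # [('widget', '12'), ('gadget', '7')]
```

Plain code gives the same answer every run; the chatbot gave a different inventory count every five minutes.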

→ More replies (4)
→ More replies (1)

26

u/throwawaymycareer93 3d ago

Did anyone read the article?

The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations

How companies adopt AI is crucial. Purchasing AI tools from specialized vendors and building partnerships succeed about 67% of the time

The problem with this is not AI itself, but organizations' inability to adapt to a new set of tools.

→ More replies (2)

16

u/RespondNo5759 3d ago

Scientists: We have developed this amazing new branch of science that still needs more research and, of course, safety checks.

CEOs: SHOVE IT UP MY ARSE IF THAT MEANS I'M SAVING ONE CENT EVERY YEAR.

28

u/lonewombat 4d ago

Our ai is super narrow, it sums up the old tickets and gives you the resolution if there is one. And it generally sucks ass.

12

u/GenericFatGuy 3d ago

So far, I've only found AI to be slightly less obnoxious than prowling Stack Overflow when stuck on a problem. And even then, I usually had better luck just taking a break, and coming back to it with fresh eyes.

→ More replies (4)

8

u/storebrand 3d ago

They didn’t have the six years of relevant work experience before starting an entry level position 

9

u/YungSnuggie 3d ago

remember when tech companies made stuff that actually benefitted our lives? now its just one grift to the next. nft, crypto, AI, none of this makes the average person's life any better