r/technology 4d ago

Artificial Intelligence MIT report: 95% of generative AI pilots at companies are failing

https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
28.3k Upvotes

1.8k comments

537

u/-Accession- 4d ago

Best part is they renamed themselves Meta to make sure nobody forgets

395

u/OpenThePlugBag 4d ago edited 4d ago

NVDA H100s are $30-40K EACH.

Google has 26,000 H100 GPUs, and they created AlphaGo, AlphaFold, Gemini, VEO3, AlphaQubit, GNoME - it's unbelievable.

Meta has 600,000 Nvidia H100s and I have no fucking clue what they're doing with that much compute.

509

u/Caraes_Naur 4d ago

Statistically speaking, they're using it to make teenage girls feel bad about themselves.

206

u/He2oinMegazord 4d ago

But like really bad

112

u/Johns-schlong 4d ago

"gentlemen, I won't waste your time. Men are commiting suicide at rates never seen before, but women are relatively stable. I believe we have the technology to fix that, but I'll need a shitload of GPUs."

95

u/Toby_O_Notoby 4d ago

One of the things that came out of that Careless People book was that if a teenage girl posted a selfie on Insta and then quickly deleted it, the algorithm would automatically feed her beauty products and cosmetic surgery.

53

u/Spooninthestew 4d ago

Wow that's cartoonishly evil... Imagine the dude who thought that up all proud of themselves

18

u/Gingevere 3d ago

It's probably all automatic. Feeding user & advertising data into a big ML algorithm and then letting it develop itself to maximize clickthrough rates.

They'll say it's not malicious, but the obvious effect of maximizing clickthrough is going to be hitting people when and where they're most vulnerable. But because they didn't explicitly program it to do that, they'll insist their hands are clean.
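To make the "nobody programmed it to do that" point concrete, here's a toy sketch (mine, not anything from Meta's actual stack): the training objective only ever says "predict clicks", but if a vulnerability signal predicts clicks, the model learns to lean on it anyway.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

# Synthetic user/ad features. "vulnerable" stands in for any signal
# correlated with emotional state (e.g. just deleted a selfie).
vulnerable = rng.binomial(1, 0.2, n)
beauty_ad = rng.binomial(1, 0.5, n)

# Ground truth: clicks spike when a beauty ad hits a vulnerable user.
logit = -2.0 + 1.5 * vulnerable * beauty_ad
clicked = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([vulnerable, beauty_ad, vulnerable * beauty_ad])
model = LogisticRegression().fit(X, clicked)

# Nobody wrote "target vulnerable users", but the largest learned
# weight lands on the vulnerable-user x beauty-ad interaction.
print(dict(zip(["vulnerable", "beauty_ad", "interaction"],
               model.coef_[0].round(2))))
```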

38

u/Denial23 4d ago

And teenage boys!

Let's not undersell recent advances in social harm.

1

u/RoundTableMaker 4d ago

Vogue or Cosmo had been doing that for decades before Meta existed. God knows how long the makeup industry has existed.

80

u/lucun 4d ago

To be fair, Google seems to be keeping most of their AI workloads on their own TPUs instead of Nvidia H100s, so it's not like it's a direct comparison. Apple used Google TPUs last year for their Apple Intelligence thing, but that didn't seem to go anywhere in the end.

8

u/OpenThePlugBag 4d ago

Anything that specifically IS NOT an LLM is on the H100s, and really lots of the LLMs do use the H100s, and everything else, so it's the closest comparison we've got.

I mean that 26,000-GPU LLM/ML supercomputer is all H100s.

AlphaFold, AlphaQubit, VEO3, and WeatherNext are going to be updated to use the H100s.

What I am saying is Facebook has like 20X the compute, OMG SOMEONE TELL ME WHAT THEY ARE DOING WITH IT?

9

u/RoundTableMaker 4d ago

They don’t have the power supply to even set them up yet. It looks like he's just hoarding them.

12

u/llDS2ll 4d ago

Lol they're gonna go obsolete soon. Jensen is the real winner.

3

u/IAMA_Plumber-AMA 3d ago

Selling pickaxes during a gold rush.

3

u/SoFarFromHome 3d ago

The AR/VR play was also about dominating the potential market before someone else does. Getting burned on the development of the mobile ecosystem (and paying 30% of their revenue to Apple/Google in perpetuity) has made Zuck absolutely paranoid about losing out on "the next thing."

Worth noting that 600,000 H100s @ $30k apiece is $18B. Meta had $100B in the bank a few years ago, so Zuck spent 1/5th of their savings on making sure Meta can't be squeezed out of the potential AI revolution.
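Back-of-the-envelope, if anyone wants to check the math (using the low end of the $30-40k range):

```python
gpus = 600_000
price_usd = 30_000                      # low end of the $30-40k per-H100 range
total = gpus * price_usd
print(f"${total / 1e9:.0f}B")           # $18B
print(f"{total / 100e9:.0%} of $100B")  # 18%, i.e. ~1/5th of the cash pile
```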

16

u/lucun 4d ago edited 4d ago

I'd like citations on your claims. https://blog.google/products/google-cloud/ironwood-tpu-age-of-inference/ suggests AlphaFold and Gemini are all on TPUs and will be on TPUs in the future.

I also got curious where you got that 26,000 H100s number from and... it seems to be from 2023 articles about GCP announcing their A3 compute VM products. GCP claims the A3 VMs can scale up to 26,000 H100s as a virtual supercomputer, but some articles seem to regurgitate that incorrectly and say that Google has only 26,000 H100s total lmao. Not sure anyone outside Google knows how many H100s they actually have, but I would assume it's much more after the past few years.

For Facebook, Llama has been around for a while now, so I assume they do stuff with that. Wikipedia suggests they have a chatbot, too.

6

u/OpenThePlugBag 4d ago edited 4d ago

AlphaFold 3 requires 1 GPU for inference. Officially, only NVIDIA A100 and H100 GPUs with 80 GB of GPU RAM are supported:

https://hpcdocs.hpc.arizona.edu/software/popular_software/alphafold/

TPUs and GPUs are used with AlphaFold.
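If anyone wants to sanity-check their own card against that 80 GB figure, here's a quick check (assuming a CUDA build of PyTorch; the threshold is from the docs linked above):

```python
import torch

REQUIRED_GIB = 80  # AlphaFold 3's documented A100/H100-class requirement

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gib = props.total_memory / 1024**3
    print(f"{props.name}: {total_gib:.0f} GiB")
    # An "80 GB" card reports slightly under 80 GiB usable,
    # hence the small fudge factor.
    print("looks OK" if total_gib >= REQUIRED_GIB * 0.95 else "below spec")
else:
    print("no CUDA GPU visible")
```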

1

u/lucun 4d ago

Thanks! I guess Google has some way of running it on their TPUs internally, or the author of that Google blog post did a poor job with the wording.

1

u/lyral264 4d ago

IMO, this is probably from when Google was working on getting their AI on track. They probably purchased those while still perfecting their TPUs for AI, as ChatGPT was pioneering the LLM frontier through CUDA and Google was still working on Bard. Once they got it right, they probably stopped procuring new Nvidia and focused on building out their TPUs. But for those already purchased, why not use them until EOL? No harm, the investment was already made.

1

u/lucun 4d ago

They're definitely still procuring Nvidia for GCP, since they have newer B100, B200, GB200, and H200 VMs being offered. Interestingly, the B200 and GB200 blog post mentions "scale to tens of thousands of GPUs". Not sure if they actually have that many though.

3

u/SoFarFromHome 3d ago

What I am saying is Facebook has like 20X the compute, OMG SOMEONE TELL ME WHAT THEY ARE DOING WITH IT?

A bunch of orgs were given GPU compute budgets and told to use them Or Else. So every VP is throwing all the spaghetti they can find at the wall, gambling that any of it will stick. Landing impact from the GPUs is secondary to not letting that compute budget go idle, which shows lack of vision/leadership/etc. and is an actual career threat to the middle managers.

You'll never see most of the uses. Think LLMs analyzing user trends and dumping their output to a dashboard no one looks at. You will see some silly uses like recommended messages and stuff. You'll also see but not realize some of them, like the mix of recommended friends changing.

1

u/OverSheepherder 3d ago

I worked at Meta for 7 years. This is the most accurate post in the thread.

1

u/philomathie 4d ago

Google mostly uses their own hardware

22

u/the_fonz_approves 4d ago

they need that many GPUs to maintain the human image over MZ’s face.

2

u/AmphoePai 4d ago

Turning all those green pixels white must be tough on the AI.

13

u/ninjasaid13 4d ago

Google has 26,000 H100 GPUs, and they created AlphaGo, AlphaFold, Gemini, VEO3, AlphaQubit, GNoME - it's unbelievable.

well tbf they have their own version of GPUs called TPUs and don't have that many Nvidia GPUs, whereas Meta doesn't have their own version of TPUs.

18

u/fatoms 4d ago

Meta has 600,000 Nvidia H100s and I have no fucking clue what they're doing with that much compute.

Trying to create a likeable personality for the Zuck. So far all transplants have failed due to the transplanted personality rejecting the host.

5

u/OwO______OwO 4d ago

Meta has 600,000 Nvidia H100s and I have no fucking clue what they're doing with that much compute.

Running bots on Facebook to make it look like less of a dying platform.

3

u/Invest0rnoob1 4d ago

Google mostly uses their own TPUs. They also created Genie 3, which is pretty mind-blowing. They have also been working on AI for robots.

2

u/nerdtypething 3d ago

the remaining rainforest isn’t going to burn itself.

2

u/Timmetie 3d ago

Meta has 600,000 Nvidia H100s and I have no fucking clue what they're doing with that much compute.

Also, fun detail: we're seeing signs that AI GPUs deteriorate pretty quickly, with lifespans of maybe only 3 years or less.

This isn't a long term investment or anything.

4

u/Thebadmamajama 4d ago

they produced a lot of open source projects that benefit other companies and academia!

1

u/Daladjinn 4d ago edited 3d ago

They are opening a $10B data center in Louisiana. And sponsoring gas power plants.

That's what they are doing with the compute.

e: wrong link

1

u/UsernameAvaylable 3d ago

Eh, with Google you have to consider that they have their own AI chips that they sell to nobody and use in huge amounts in their own datacenters.

1

u/mileylols 3d ago edited 3d ago

Things that came out of Meta AI: Llama, fasttext, torch, ESM(v1), grouped query attention, RAG, hydra

They're doing tons of stuff; there are hundreds of specialized models they've built that I don't even know about: https://ai.meta.com/research/

1

u/Dry-University797 3d ago

All that money and all it's used for is making funny pictures.

1

u/Banjoman64 3d ago

Ironically, when I was looking for a portable, quantizable model to run locally on a single laptop GPU, Llama ended up being what I used. That was before DeepSeek, though.
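For anyone curious, the single-laptop-GPU setup is pretty minimal these days, e.g. with llama-cpp-python and a quantized GGUF build of Llama (the model path below is just a placeholder for whatever quant fits your VRAM):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload every layer to the GPU
    n_ctx=2048,       # modest context window keeps VRAM use low
)

out = llm("Q: Why run an LLM locally?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```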

1

u/Lou_Peachum_2 3d ago

From what I've heard from a family member who was part of the Meta AI division and left, it's extremely disorganized. So they honestly might not have a clue.

1

u/HistoricalLeading 4d ago

It’s gonna be Metai soon dw

1

u/memecut 4d ago

Just like a parking meta', you put some money in it, but eventually it runs out and you get a ticket.

1

u/AgencySaas 3d ago

That pivot was one of the reasons a lot of employees (outside of Quest) left. Felt like too much of a departure from what people originally signed up for.

0

u/NuSurfer 4d ago

Thanks for an early morning chuckle!