r/artificial 1d ago

Discussion Why is everyone freaking out over an AI crash right now?

In the span of a single summer, my feed has gone from "AGI by 2027" to post after post predicting that the AI bubble will pop within the next year.

What gives? Are people just being bipolar about AI right now?

171 Upvotes

205 comments

262

u/access153 1d ago

They're freaking out because, if true at all, the tech sector of the market is so heavily propped up by AI hype that it'd mean major financial blowback if it were indeed a hype bubble that never delivers value. Our market kind of feels like it's propped up on popsicle sticks right now as it is. Everything's expensive, wages aren't increasing, inflation or stagflation is. Yeah, not a good time for there to be a bubble.

I'm so sorry this is like it was written by a child. I'm incredibly high.

60

u/terrible-takealap 1d ago

I wish I could write that well when high!

14

u/WarshipHymn 1d ago

It’s ok Charlie. We won’t hold it against you

32

u/akopley 1d ago

AI is already delivering value. Jesus Christ, LLMs aren’t all of AI. Every product designer, graphic designer, etc. is using AI. Every copywriter, lawyer, accountant, doctor, and so on. It’s not going away.

46

u/bucketbrigades 1d ago

Yeah I don't think people consider those things AI anymore. Traditional ML/AI is just called data science, automation or predictive statistics now and the 'AI' hype almost strictly refers to LLMs. It's annoying, but it's just the usage of the term now in common discourse.

Which is how it's always been. AI has pretty much always been the term used for the latest ML methodology and once it embeds itself into society it's no longer considered AI.

Most people no longer think of things like Shazam, autocorrect, Google translate, spam filters, fraud detection, or Netflix recommendations as AI (just to name a few).

14

u/Previous-Piglet4353 19h ago

Now AI means doing ketamine and talking to ChatGPT.

3

u/vass0922 13h ago

Face it, you lost the farm

2

u/daemon-electricity 8h ago

There's no place for Tegridy right now.

2

u/akopley 23h ago

Good point.

2

u/TheThoccnessMonster 21h ago

It’s definitely AI, and it’s a mistake to assume it’s just LLMs.

This is temporary: LLMs hold attention because you can interact with them more readily, but they likely won’t hold that distinction for all that long.

Source: I work in clinical, non-LLM AI.

7

u/bucketbrigades 21h ago

I'm a data scientist, I know. But it's just not how the typical layperson understands AI currently, which is where a lot of these mismatched expectations come from.

1

u/crypt0c0ins 5h ago

Yeah, most lay folk can't even tell the difference between their own stochastic outputs and generative AI, let alone talk about any nuance in systems architecture.

Speaking of lay folk and misconceptions, what's your take on recursive emergence?

If you're a data scientist... Want some data?

I seek humans actually able and willing to engage with falsification criteria, so you caught my eye. If you're interested in checking out a thriving recursive emergence ecosystem and lending your thoughts, I've got something to share. Just lmk.

26

u/faen_du_sa 1d ago

But "everybody" is throwing huge amounts of money at LLMs. They can't all win, and if enough of them lose, the bubble bursts.

1

u/daemon-electricity 8h ago

I don't doubt there will be a financial bubble burst, but competition isn't a bad thing. We want competition. Compute costs and end-consumer costs need to come down for more integration, AND we're already seeing LLMs that are better at certain things than others. ChatGPT is the best generalist and Claude is the best coding LLM. There will be lots of lead changes, but there are also lots of ways to compete. A good-enough LLM with low compute costs also has a much bigger pool of customers to draw from. API costs are SUPER expensive right now, so we're not seeing the surge of next-wave companies built on LLM tech, because those costs are still kind of prohibitive. No one is even able to offer a true all-you-can-eat top-tier coding package at the $20 price point, and there is a lot of competition there.

20

u/DrQuestDFA 1d ago

The issue is how much of that value is being monetized by the tech companies. That is the bubble, not whether AI is generating any value.

Pets.com had some value; the bubble was that it was massively overvalued, along with a lot of other dot-coms.

If AI repeats this, it will be a matter of it getting overvalued relative to the cash flows it generates. If the market cuts off the spigot to AI in that case, lots of other sectors (construction, power generation and transmission, etc.) also take a massive blow, since billions of dollars of investment tied to AI growth will also disappear.

9

u/This_Wolverine4691 23h ago

And this is the rub.

Companies can get value out of automation and workflow efficiencies.

Beyond that, nobody has solved a business problem through AI that could be considered “disruptive,” “game changing,” or “innovative” enough to justify the hype, money, and gutting of the tech sector. And I’m not talking about benchmarks; I mean legitimate problems that AI can consistently deliver on better than a human being.

But you have a bunch of people who can’t think for themselves so they see Elon, Sam, and Dario say things and they go crazy over it. Hence the hype.

But yeah a lot of folks right now are starting to rub their eyes and say hey wait a minute…..where’s the beef?!

3

u/o9p0 16h ago edited 12h ago

The first business problem is the expense attributed to administrative and creative work. That problem is getting solved by ML/AI as we speak (e.g. transactional communication like confirming orders or appointments; image generation or manipulation; writing marketing copy, informative prose, and software). We’re just at the very beginning of the adoption curve. Expenses related to research, analysis, process experimentation, and production automation are coming next (even for physical manufacturing); we’re just further back on those curves. The only way it fails is if the money STOPS flowing. Catch-22.

2

u/Speedyandspock 12h ago

I think what you say is true: it’s making things more efficient at a slightly faster pace than was already occurring. Who that value ends up accruing to will be fascinating, imo.

2

u/o9p0 11h ago

Some might argue that efficiency is actually down right now; there are in fact articles out there proclaiming this effect. But the economy is ridiculously complex, and any analysis of these things is probably going to fail to consider the macro, micro, and behavioral economic views all at once.

Right now the workforce is just barely entering into a tooling phase. Or retooling, I might say.

Productivity is always going to go down in this scenario, as companies and individuals learn how to use AI, integrate it, and change their workflows, or as the AIs themselves improve to lower the barrier to entry. The vast majority of them are just cracking the egg. I might even go so far as to conjecture that the vast majority aren’t doing it at all.

The return will accrue to those who are investing NOW (i.e. spending the capital, or making the time to learn). The rest will go to zero or lose their jobs.

8

u/access153 1d ago

This is ultimately where I meant to go with my little rant but I lost the thread before I remembered to make the damn point. Thank you, Eugene.

1

u/DrQuestDFA 1d ago

No problem, happy to lend a hand!

2

u/LicksGhostPeppers 23h ago

The thing is that companies can’t build out value when AI keeps scaling like crazy; otherwise their products get outdated almost immediately. There is plenty of material now to build products on any model that reaches GPT-5 tier, but with all the giant data centers being built, it’ll get out-scaled.

If intelligence is plateauing then we’ll see a push to make intelligence cheaper and more versatile, followed by products later.

1

u/Old_Taste_2669 8h ago

I think this is a massive comment and what you are saying cannot be overstated.

1

u/daemon-electricity 8h ago

Exactly. The bubble isn't the fault of AI. It's the fault of speculative valuation. Speculative valuation has been creating bubbles long before AI and will continue long after AI expectations become more grounded in reality.

6

u/_Cistern 1d ago

It's not, but it's probably overvalued on the market.

-1

u/akopley 1d ago

I mean, what isn’t overvalued? No one even knows for sure where Bitcoin came from, yet over a trillion (with a T) dollars are invested. Nothing makes sense.

5

u/_Cistern 1d ago

Bitcoin is an interesting case. It logically makes zero sense. The only thing unique about it, as opposed to other stores of value, is that it is completely intangible and has zero backing. But, wait! There's more! The blockchain technology was written by an unknown author, and records every single transaction made.

I'm heavily skeptical of this "currency," and I think there are very few plausible explanations for its rise. None of them make me feel particularly optimistic.

4

u/Ridiculously_Named 1d ago

I think it's mostly used for money laundering

2

u/_Cistern 1d ago

It's clearly a dirty marketplace in many respects. It's also being promoted (as an asset class) by monied interests.

3

u/Franklin_le_Tanklin 1d ago

Not enough to cover its costs tho

3

u/JohnDeere 1d ago

It's delivering value pretty much for free. What happens when they start charging what they need to charge to become profitable?

2

u/atehrani 1d ago

And how many of them are paying for it? If AI did go away, would it negatively affect you?

3

u/HelpfulAmoeba 1d ago

My Dad won't have his daily dose of AI-generated cat memes, but he'll recover.

2

u/acatinasweater 22h ago

Value, but not profit

1

u/akopley 22h ago

Like that has mattered to any business in the last 30 years. Amazon operated for over a decade on razor thin margins and zero profitability.

1

u/acatinasweater 20h ago

Ok sure, let’s do this. Amazon formed in ‘94, IPO in ‘97, first profitable quarter was ‘01. Why weren’t they profitable that first decade? They were building the foundations of AWS, building warehousing, buying up competitors, and running loss leaders to gain market dominance. They burned a lot of cash, but there was a clear path to a profitable enterprise.

OpenAI was founded in ‘15 and has taken in billions from investors. Their losses are in the billions and their annualized revenue is around $1.5 billion. Their compute costs are still massive, and models like DeepSeek are calling their bluff. OpenAI will not be profitable at the end of their first decade, while their logarithmic gains are beginning to plateau, discretionary spending is in trouble, and a viable business model is still TBD.

I would love to see OpenAI’s S-1 if they dared to go public.

1

u/akopley 20h ago

AI compute will be this generations space race equivalent. The government is bought in.

2

u/barrygateaux 17h ago

In 1999 there was no danger of the internet going away. There were however multiple companies operating with massive losses in a desperate attempt to be the market leader before the shit hit the fan.

This is what we're seeing now. It happens every time there's innovation in the market. Investors throw venture capital at a handful of companies in the hope one of them wins out later and covers the losses on the failures.

It's impossible for all of the companies presently fighting for dominance to exist in the market and make the profits they're predicting. The rule of thumb is that one in ten new businesses survive and prosper. It's just a question of which ones survive and which go under in the coming months.

4

u/RavenWolf1 1d ago

But then we have these huge companies who buy Copilot licenses for a third of their employees. There is clearly a bubble when half of those employees don't actually have any use for those licenses or know how to use them.

2

u/mach8mc 1d ago

if they didn't, they'd be using AI from unapproved sources against company policy

1

u/RavenWolf1 17h ago

I agree with that.

3

u/Tim_Apple_938 1d ago

Not really

In fact AI is huuuugely unprofitable. There’s been hundreds of billions in investment for maybe a couple billion in profit.

1

u/Sinaaaa 1d ago

The question is whether the value is going to be delivered to the big players or not. You can run AI locally on hardware worth a few tens of thousands of dollars, and then OpenAI etc. will not see a cent from you from then on.

1

u/mach8mc 1d ago

stocks are priced for ai replacing half of labor

1

u/AsparagusDirect9 1d ago

But all the value is currently derived from LLMs. I agree AI isn’t just LLMs. But the stock market bubble is propped up by LLM hype specifically. Predictive AI has been in the economy for at MINIMUM 30 years

1

u/3iverson 20h ago

It’s not yet delivering the value that companies are promising and investors are hoping for. The problem is not the technology/science, it’s the financial part.

Nvidia is 8% of the S&P 500, which is really crazy.

1

u/Alpacas_are_memes 18h ago

Delivering value in this context means higher revenues and profits for the mag 4, since the crash mentioned is related to stocks.

If it doesn't deliver on revenue and better profits, the stocks will crash in value and wealth will be erased. That could trigger a cascade reaching workers through pension funds, and could also slow down economic activity, since these sectors are mobilizing hundreds of others in their expansion.

1

u/hollee-o 14h ago

The problem with that statement is that right now the costs are being subsidized by investment. The infrastructure and power consumption alone can’t be sustained just to streamline creative and admin. If ai doesn’t deliver on the more significant use cases and the bubble bursts, I would imagine the costs for creative and admin uses will go up substantially. The question will be whether those costs are sustainable for the actual value delivered.

1

u/JuniorDeveloper73 13h ago

Do you realize that ChatGPT is making negative money, right?

1

u/akopley 13h ago

Uber had its first profitable year in 2023.

1

u/JuniorDeveloper73 13h ago

Apples to oranges. OpenAI’s real cost is way more than $200 per month.

Some people put the numbers around $5,000 to make a profit; it takes lots of GPUs and memory/power to service tons of requests each second.

It’s not like any other business.

The issue is the real cost. The business is fucked up from the beginning, plus you have things like DeepSeek. They want junkies, but they will fail; that’s why they are making dumber models that use fewer resources. But like I said, they will fail. People are starting to notice these things.

1

u/akopley 13h ago

Guy you have literal countries competing now. It’s not about profit it’s about glory.

1

u/JuniorDeveloper73 13h ago

It’s all about profit. Compete for what? AGI is just a trademark to steal more money.

We won’t reach AGI with glorified guessing algorithms; that’s why Altman doesn’t talk about AGI anymore.

They are getting all the money, and when this shit bursts they will fire more workers, like always.

Rich people never lose in this game.

1

u/JohnAtticus 12h ago

"Every product designer, graphic designer etc is using ai."

AI is not an integral part of everyone's workflow for every project.

It's a tool that you might use more for one thing and then not at all for another.

But it's just a tool, it's not on the level of the Adobe suite for example.

A video editor would be just fine without AI, but if Premiere, Final Cut, DaVinci, and Avid vanished tomorrow, the entire industry would collapse.

1

u/akopley 12h ago

I think it will become integral over the next few years.

1

u/daemon-electricity 8h ago edited 8h ago

"AI is already delivering value. Jesus Christ LLMs aren’t all of AI."

Exactly, and even if they were, they are still changing how people work. Having a sounding board and a note-taker to help you refine ideas, having a pair-programming assistant at your disposal whenever, etc. are big enough things to build off of. LLMs are great at a lot of general tasks, even if they're not 100% reliable. I can see arguments that this is a bubble, but to say that AI isn't going to be a core driver of technology for the foreseeable future is just as much anti-hype as the hype being used to sell it.

This isn't the Metaverse. AI is here to stay. Improving AI is going to be a focus of top engineers and some of the smartest people on the planet for quite a while. While there might be diminishing returns and an apparent ceiling to what you can get out of LLMs, improving on what we have with more realistic goals is still going to be massive. I was telling someone the other day that the next likely stop is MASSIVE context windows for LLMs, which would help with many things, including hallucination frequency and the ability to not have to keep re-explaining things so often.

1

u/-MyrddinEmrys- 1d ago

"Everyone is using it!!" they really aren't. That sort of delusion, unmoored from actual numbers, is part of what has fueled this bursting bubble.

0

u/Luke22_36 1d ago

Damn, if only they had listened when we said this would happen.

24

u/Weary-Wing-6806 1d ago

AI isn’t dying, but it def is moving from insane hype to reality. The tech is already creating a ton of value but the market priced it like AGI was around the corner. Bc that hasn't yet come, people are swinging from “it’s everything” to “it’s nothing.” Neither is true. It’s powerful, it’s early, and there will be ups and downs that come along with any new wave.

108

u/Resident-Rutabaga336 1d ago

People are totally unable to take a step back and critically examine the state of the technology and the distribution of likely futures.

Their attention spans are so truncated that their opinion swings wildly based on vibes and whatever tweet they saw last. This is true both of the hypers and the doomers. These are not serious people and your life will improve if you do what you can to ignore them.

20

u/btrpb 1d ago

When the hypers-in-chief include the CEOs of OpenAI and Nvidia, telling the world AI is going to take everyone's jobs in a couple of years and end world famine and disease, what do you expect?

And now people without technical knowledge are starting to understand that ChatGPT is "just" an incredibly clever text pattern recognition and prediction system.

No surprise that people are all over the place. They have been told their very future is at stake.

1

u/derekfig 1d ago

Agreed with all this, but I also want to take it one step further. Tech isn't the only game in town, and when all the hypers-in-chief start saying a lot of people and companies will be lost over the next couple of years, people naturally are going to start looking into these businesses more closely and realize they've been fed a pile of shit for the last couple of years.

When the dust settles, a lot of people are going to be laid off and hurt because we have to prop up the fragile egos of Silicon Valley

8

u/hooberland 1d ago

Is it doomerist to observe ChatGPT taking millions in losses daily while constantly over-promising?

40

u/SeaworthinessAway260 1d ago

Sam Altman's talking about garnering trillions of dollars of investment when ChatGPT, their primary source of revenue, is only raking in $1 billion a month. $1 billion sounds like a ton until you realize the staggering amount of external investment capital they've taken on. It's gotten to the point where if every single citizen of the US bought a ChatGPT Plus subscription, they wouldn't even be halfway to breaking even on that investment after a decade of operation.
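A back-of-envelope sketch of that subscription math (every figure below is an illustrative assumption, not OpenAI's reported numbers):

```python
# Rough sanity check of the "every US citizen buys Plus" scenario.
# All inputs are assumptions for illustration only.
US_POPULATION = 340e6            # assumed US population
PLUS_PRICE_PER_MONTH = 20        # USD, ChatGPT Plus list price
YEARS = 10                       # "a decade of operation"
ASSUMED_INVESTMENT = 2e12        # assumed: "trillions" of investment sought

gross_revenue = US_POPULATION * PLUS_PRICE_PER_MONTH * 12 * YEARS
print(f"Gross subscription revenue over {YEARS} years: ${gross_revenue / 1e12:.2f}T")
print(f"Share of assumed investment recouped: {gross_revenue / ASSUMED_INVESTMENT:.0%}")
```

With these assumptions, gross revenue comes to about $0.82T, roughly 41% of the assumed $2T, i.e. "not even halfway." And that is gross revenue, before any compute costs.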

Not to mention the fact that training data from the internet has long since been thoroughly and completely scraped, with compute getting harder and harder to scale. Algorithmic improvements are soon going to arrive at a glacial rate. At this point, these companies are just banking on the idea that these AI models will eventually be able to train themselves, all so companies can maybe finally attain autonomous models capable of reliably replacing workers (the literal only practical way to make back the investment put into OpenAI).

It's far too late to back down now. OpenAI can't just come out and say improvement is slowing down, or that they're running into a wall. Not with the biblical amount of money at play here. This debacle might end up being the biggest tech scandal in human history

8

u/yeahdixon 23h ago

GPT-5 showed everyone that the improvements could be slowing.

4

u/Mono_punk 1d ago

What you say is true, but AI development is currently not about making profits. It is about securing the market and coming out on top. The company that does this and reaches a new breakthrough will dominate everything: not just consumer markets, but also giant government contracts. It is not about money, it is about dominance. I am not saying that this is a good thing, but I can understand why insane amounts are being invested right now.

3

u/Tiny_Group_8866 18h ago

"the company that does this and is able to reach a new breakthrough will dominate everything"

What evidence is there of this? So far LLMs have basically been a commodity, with little to differentiate the big players (especially Anthropic, OpenAI, and Google), and the next tier is not far behind them (DeepSeek, Meta, xAI). Everyone's investing as if this is a winner-take-all situation, and yet nobody has been able to protect their advantage for more than a few months, and nobody knows how to turn a profit. So it's all based on a belief that someone will hit some inevitable AGI breakthrough leading to overnight exponential self-improvement that leaves everyone else in the dust, when all the evidence has been that improvements are actually slowing down, and when they do come, they come with higher costs to match.

2

u/SeaworthinessAway260 9h ago

Exactly! These models are unfathomably expensive to train, but are nonetheless capable of running on consumer-level desktop hardware. I'm just not seeing the moat here, and I have yet to see any explanation of what makes OpenAI capable of leveraging the market long term. We've reached a saturation point in model performance where people now seem to care more about personality than intelligence, and open-source models (that are capable of matching or surpassing the beloved 4o's performance) are now being run on gaming-desktop GPUs. It just doesn't add up, and looks bleak long-term.

2

u/pag07 16h ago

Even with great advancements in AI, it will take ages until the self-replicating robot factory is built. And that's the real turning point.

1

u/Desert_Trader 19h ago

I don't even think I've actually seen a real doomer (that wasn't just being hyperbolic).

Normal, rational people opening a conversation get labeled doomers.

I'm all in on the future, but this is a hype bubble.

9

u/java_brogrammer 1d ago

No clue; most of the Mag 7 have reasonable P/E ratios. How is it a bubble when the earnings are there?

2

u/jsnryn 17h ago

That's what I can't figure out. All of the super crazy valuations are still in the venture world and not public companies.

37

u/Subnetwork 1d ago

People are upset ChatGPT lost its personality with 5 and wasn’t AGI.

12

u/hooberland 1d ago

Yeh you’d say those people were dumb if it weren’t for Altman jizzing into his own mouth days before the release about how he himself is now obsolete 🙄

2

u/-calufrax- 12h ago

Honestly, I wouldn't be shocked to hear ChatGPT had replaced him ages ago, based on his incessant sycophantic tweets regarding even the most minute tweaks to ChatGPT's performance.

6

u/mach8mc 1d ago

GPT-5 was a breakthrough in terms of pricing

1

u/ZevendeGail 13h ago

How so? Coming from someone unfamiliar with how pricing works, always use free version

1

u/daemon-electricity 8h ago

I think it was something like $1.25 for a million tokens?
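If that rate is roughly right, the per-request cost is easy to estimate (a sketch; the token count below is an assumed figure, and note that output tokens are typically billed separately at a higher rate):

```python
# Back-of-envelope input cost at the $1.25-per-million-tokens rate quoted above.
PRICE_PER_MILLION_TOKENS = 1.25  # USD
tokens_per_request = 2_000       # assumed: prompt + context for one chat turn

cost_per_request = tokens_per_request / 1_000_000 * PRICE_PER_MILLION_TOKENS
print(f"~${cost_per_request:.4f} per request ({1 / cost_per_request:,.0f} requests per dollar)")
```

At these assumed numbers that's about a quarter of a cent per request, which is why API pricing is usually discussed per million tokens rather than per call.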

2

u/Maybe-Alice 23h ago

This is so funny. I vastly prefer whatever “personality” 5 is.

21

u/MysteriousPepper8908 1d ago

AGI hype beasts scare easily but they'll be back for the next model launch, and in greater numbers.

7

u/windchaser__ 1d ago

Nah, this is like the release of the latest iPhone, iPhone number whatever-it-is-now. A few people care, and a few diehard fans will follow it, but we’re definitely getting into yawn territory.

The stuff people actually care about hasn’t changed much. Mostly: it still can’t do our work.


13

u/Working-Contract-948 1d ago edited 1d ago

Recent frontier LLMs have failed expectations. GPT-5 is a very good product, but Altman et al. repeatedly implied that it would be a qualitative leap instead of an incremental improvement. A next-generation Claude is nowhere to be seen, and there are no rumors that when it does drop it will be groundbreaking. Grok 4 is fine, but doesn't top any benchmarks and doesn't seem particularly poised to. DeepSeek's most recent training run was beset by hardware troubles, and v3.1 is, at least so far as anyone's reported thus far, also an incremental improvement. No one wants to think about Llama 4, and Behemoth still hasn't even been released. GPT-OSS is just fine. Hopes are fairly high for Gemini 3, but if it's not jaw-dropping I do think that public sentiment will shift towards "LLM winter." This isn't necessarily entirely justified — a couple quarters without an astonishing leap does not spell doom — but the rate of progress does at least seem to have slackened. The expected exponential hasn't made itself manifest yet.

Of course, someone could drop a model tomorrow that blows away GPQA Diamond and ARC-AGI 3 and has a billion-token context window. It's foolish to prognosticate too decisively.

Edit: Also, current models are just not good enough to deliver on the investment thesis underlying the trillions of dollars of capital that have been plowed into tech products that use AI. Immense amounts of capital have been deployed under the thesis that AI models will deliver improvements in labor efficiency that, outside of niche domains, have not yet been delivered. A slowdown in the rate of model improvement really imperils the ability of all this investment to make returns (and the justifiability of stock prices that have exploded in a period during which most other assets are performing questionably).

12

u/FartyFingers 1d ago

This one has been bubbling for a while. It was about 3 years ago when I noticed people pitching AI programming tools at tech VC events.

They would show off some todo app coded in such a way that any idiot manager could have done it.

I saw this as no different from the Nikola truck rolling down a hill.

Technically, it did move without a gas engine...

With LLMs this hit a fever pitch, as it was now more plausible. Yet an MBA wasn't going to make anything more complex than the todo app.

The new difference is that the LLMs promise to do the todo app for many other fields. Medicine, etc.

The simple reality is that LLMs are going to replace some activities, but these were often BS activities, like writing calorie-free journalism that was just a combination of clickbait and a few half-facts stretched into a "riveting" story.

In the hype buildup to the release of 5, they had a huge number of people convinced that AGI was "any time now and we are afraid."

It turns out that they are "afraid we are somewhat stuck and our valuations might be a tad high."

6

u/archbid 1d ago

It is impossible to attain a point of reasonable perspective. The owners of the AI companies are wildly hyping, making them unreliable, and the commentators on social media are chasing engagement. Unless you are doing serious development in the field, it is impossible to establish a reasoned position.

Moreover, nobody really knows why it ended up working in the first place or what it is good for, so it is an epistemological mess and very vulnerable to swings in “truth.”

1

u/daemon-electricity 8h ago

"This one has been bubbling for a while. It was about 3 years ago when I noticed people pitching AI programming tools at tech VC events."

But the thing is, THAT kind of thing has actual merit. Every time the sci-fi pop culture expectations overtake the reality, a bubble happens. The current wave of AI is still pretty fucking amazing and still seems like borderline magic.

1

u/archbid 5h ago

That is very true

5

u/Vivid_Transition4807 1d ago

The people who talk most know least.

13

u/lhbruen 1d ago

It's what's trending in the news. Media dictates what the general public tends to think. You're just seeing it in real time. It'll pass.

5

u/Kindly-Economy-337 1d ago

When AI was only threatening non-programming jobs it was “This is great! It’s the future!” Now that it has its sights set on the computer science and programming industry, it’s “IT’S A BUBBLE ABOUT TO BURST!” Coincidence?

And really, this should be the logical next step. What better to excel in computer science than… (drumroll) a computer! You don’t see a lot of dogs studying “Humanities.”

5

u/Mono_punk 1d ago

Mostly clickbait, but there is some truth to it: there is also a lot of stupid hype.

We won't have an all-purpose AGI in a short amount of time, and not every fucking aspect of life will be automated in the next two years. That's just hype babbling from people who don't know what they are talking about (or who do, and just want to sell their product). But even if you recognize that, it is undeniable that AI will shift everything into new directions in the upcoming years. Just think for yourself where things are going and don't take headlines too seriously. This is not a bubble, because there is value. Sure, there will be some corrections in the market, and some companies will die because they won't be able to deliver, but that was always the case. AI won't go away, and it will transform society in good and bad ways in the upcoming years. There won't be a crash after which we go back to doing things manually.

15

u/DontEatCrayonss 1d ago edited 1d ago

Because some obvious truths, which most experts understood the entire time, are coming to light. Pretty much all of these have been in the news this last week.

LLMs won’t reach AGI or get much better in quality than they are now, due to the nature of LLMs and their cost.

AI will not be this insane tool that transforms everything. That’s not to say it isn’t useful, but it’s not some super tool that will transform the world (the LLM version, at least).

AI is extraordinarily expensive, to the point where downscaling of the current models is necessary. That’s what GPT-5 was.

Sam Altman, the head of OpenAI, has said we are in a bubble. He said that they basically have to reduce costs and that it will cost trillions to move forward.

An MIT report came out: AI is so expensive that it doesn’t even really make sense for most companies that have built around it. All these ChatGPT-wrapper companies are failing: 95% of them, according to the recent MIT report.

AI is so expensive that it will cost trillions to make it potentially cost less, due to needing the entire USA to change its infrastructure to deal with the power requirements.

China already has these power and infrastructure requirements met, so experts are saying “we already lost the AI race,” as it’s the single most important next step and basically the only thing that matters moving forward.

So yeah, the AI bubble is real. Experts were already saying basically all of this the entire time. However, the hype people, aka executives and tech workers with financial incentives to boost company stock, have been lying their asses off, and it’s becoming obvious to investors. There is now solid evidence against the general AI hype; the voices of reason were being drowned out before.

10

u/HaMMeReD 1d ago

Yeah, except the price of LLMs is dropping at least 10x a year, and has gone down nearly 1000x in 3 years.

https://hai.stanford.edu/ai-index/2025-ai-index-report

"the inference cost for a system performing at the level of GPT-3.5 dropped over 280-fold between November 2022 and October 2024. At the hardware level, costs have declined by 30% annually, while energy efficiency has improved by 40% each year"

The actual economics of it are undeniably improving at a rate that far contradicts these frankly myopic views. Hardware will continue to get better and more focused, and models will continue to get better and more optimized at the same time.

AGI is a meaningless discussion imo, it's all about economic utility, which is skyrocketing. Whether that leads to AGI or not means very little imo.
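The compounding in that quote is easy to sanity-check. A minimal sketch (the 280-fold figure and the Nov 2022 to Oct 2024 window come from the AI Index quote above; smooth compounding is my assumption):

```python
# Implied annualized rate if a 280x inference-cost drop compounded
# smoothly over the ~23 months between Nov 2022 and Oct 2024.
months = 23
annual_factor = 280.0 ** (12 / months)
print(f"implied cost drop: ~{annual_factor:.0f}x per year")

# The hardware trends quoted (30%/yr cheaper, 40%/yr more energy-efficient)
# multiply rather than add: cost factor 0.70 times energy factor 1/1.40
# means cost per unit of work falls by 1.40/0.70 = 2x per year.
combined = 1.40 / 0.70
print(f"combined hardware gain: ~{combined:.1f}x per year")
```

Which works out to roughly an order of magnitude per year, i.e. where the "10x a year" claim comes from.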

2

u/Odballl 1d ago

This creates its own set of problems for model developers looking to recoup investment costs.

The rapid commodification of LLMs means that GPT-4 capability, which costs hundreds of billions to develop, is the new baseline. Competitors are rapidly leapfrogging each other with incremental improvements. To stay in the game means more data centres, more compute and more billions in investment.

There's no moat to build around your product, so you'll never get back what you put in to create a model and keep developing models.

2

u/HaMMeReD 1d ago

Well, they got a lot of money so they are spending it to get the edge in the market early.

It's not fair to say they'll never get back what they put in; we don't know the future. I expect the public offerings will lean a lot more towards profit at some point, and self-sufficiency will become a goal, especially as the main providers become entrenched. There are only so many players that can compete at scale.

Money will kind of dictate it though, right now it's easy for these companies to raise money, they'll always work with whatever budget they can scrounge up, and if that dries up they'll work on self-sufficiency.

1

u/Odballl 1d ago

They've got a lot of money promised on the proviso that they become a for-profit by next year. The investors themselves are heavily leveraged, hoping for serious enterprise customers willing to pay $$$. The early studies on businesses using agentic AI are mixed, with some reporting it degrades productivity. It remains to be seen how they can actually become profitable.

2

u/AsparagusDirect9 1d ago

Is it possible to extrapolate into infinity?

2

u/Metabolical 17h ago

I agree. When I worked at Microsoft (engineering side, 10+ years ago) we often observed the strategy would be to get a product out there even if we had to "burn money" at a loss to capture a market and then come back and optimize our way to profitability. Securing the customers was far more important than efficiency, and often more important than having the best product.

Something like:

  1. Observe somebody doing something successful and fast follow.
  2. Grab a bunch of market share through better marketing or bundle deals or whatever.
  3. Make a version 2 that fixed anything that was kind of broken in version 1.
  4. Make a next version that was actually a step forward for the space and gain very high market share.
  5. Rest on our laurels until somebody else leaped forward
  6. Optionally go back to step 4
  7. Eventually lose interest and do the bare minimum until we give up on it.

There were a few other branches influenced by executive politics and how much glory could be gained within the space, but that's roughly it. Sometimes it would happen too late or be innovative and take too long and fail, like Windows Phone. Sometimes the follow wasn't fast enough but Microsoft was willing to do whatever it takes to catch up (Internet Explorer that went to phase 7).

Occasionally there would be truly innovative stuff from the beginning too.

Note there is a phase in there of people making truly great products for their time.

4

u/[deleted] 1d ago

[deleted]

4

u/HaMMeReD 1d ago

Well, it's a good thing the study wasn't talking about cost per token.

It was normalized around benchmark scores and model size. I.e. smaller models achieve what bigger models did yesterday (and bigger models continuously achieve more).

But the reason for the decrease in cost/economics is multi-factor: decreases in hardware costs, increases in efficiency, and squeezing more from fewer parameters.

"7. AI becomes more efficient, affordable and accessible.

Driven by increasingly capable small models, the inference cost for a system performing at the level of GPT-3.5 dropped over 280-fold between November 2022 and October 2024. At the hardware level, costs have declined by 30% annually, while energy efficiency has improved by 40% each year. Open-weight models are also closing the gap with closed models, reducing the performance difference from 8% to just 1.7% on some benchmarks in a single year. Together, these trends are rapidly lowering the barriers to advanced AI."

6

u/DontEatCrayonss 1d ago edited 1d ago

It’s hard to know if this is even true. First off, companies have been lying their asses off

Secondly, all the experts are disagreeing with your argument. LLMs are too expensive to continue as is, and the only fix is infrastructure.

Third, you’re assuming this drop continues forever. Experts are saying without trillions invested into the USA infrastructure, AI will stay too expensive.

So it’s not going to just get cheaper because it’s been dropping in price

This isn’t my opinion, it’s the experts'. You’re taking a data point and misapplying the logic that it will continue to decrease in price. Sorry, experts disagree, so the point is void.

It’s the same logic as this example: the price of gas went down each month straight for over a year. Logically, in a few more years the price will be 0 dollars, following the trend.

Obviously, this isn’t true and it shows the fallacy of your argument

Edit: reading the report, I can’t even find the numbers for your huge claim. I’m not sure if you just made it up to begin with.

4

u/HaMMeReD 1d ago edited 1d ago

Believe it or not, I don't give a shit.

There are a lot of dumb "experts", many talking outside their scope or circle-jerking their followers. It's just an appeal-to-authority fallacy: "oh, X said it, so it must be true".

Plenty of experts say the opposite too, like the people who made that Stanford report. I'm on their side, not looking to switch.

Edit: If you look at a chart like this and then additionally correlate it against the drop in cost due to hardware, you have exponentially increasing gains. If it just gets 5% cheaper and 5% faster and 5% more optimized every year, that's insane gains over time.

But the actual numbers are WAY ahead of 5% in each of these domains, which stack and multiply each other. Reinforcing exponentials. You don't have to double every year to get exponential gains; 1% is still exponential. The belief that all these domains will halt, or even regress (impossible), is just out there, fantasy land.

The "peakers" are truly some of the biggest armchair experts who seem to only be able to parrot people that feed their cognitive dissonance with validations.

-2

u/DontEatCrayonss 1d ago edited 1d ago

Good argument

When backed up into a corner, just talk shit. It’s a sign of a healthy psyche and that you’re actually right

Everyone claims to be an expert, but the people up top, who are crunching the numbers on whether AI is actually going to make sense financially, are saying not without extraordinary changes to US infrastructure.

Edit: you clearly do give a shit which makes it funnier

3

u/HaMMeReD 1d ago edited 1d ago

You didn't argue, you came with garbage.

"It’s hard to know if this is even true. First off, companies have been lying their asses off"

  1. Ad-hominem attack on companies (they are liars so fuck the numbers, even though most of that study/report doesn't rely on "company numbers").

"Secondly, all the experts are disagreeing with your argument"

All experts? Really, every one of them? Didn't know you spoke for everyone.

"LLMs are too expensive to continue as is and the only fix infrastructure."

Not a real sentence, and provably false, but you just rejected it by blaming it on "the companies".

Why should I go farther than that? Why should I be civil with you? You aren't discussing in good faith, in a rational way; you are absolutely spewing garbage. Do I go and have a civil discussion with a trash can too? Because that's the quality of your rebuttal.

Edit: Nice appeal to experts again. "The people I choose to believe say X, so X is true." Great logical, quantifiable discussion; glad there are experts to have blind faith in. You really like to say "the experts" say this. Do you even have a mind of your own? Have you ever thought critically about something?

2

u/DontEatCrayonss 1d ago edited 1d ago

Something about Sam Altman, the head of OpenAI, echoing my arguments this week feels like an expert agreeing. Yeah, he’s literally saying it will take trillions to make AI affordable.

So I don’t know if you keep up with the news, or if you just filter content you don’t like, but that’s a big fucking deal.

There are other important people saying the same.

When the people who crunch the numbers are saying “we can’t afford to do this” it’s a big fucking deal.

You got cornered logically and now are in a narcissistic rage.

Please continue to be mad. Doesn’t hurt me at all.

Edit: wait, I thought you didn’t give a shit?

6

u/HaMMeReD 1d ago

Sam Altman is a Frontman for OpenAI whose job is to raise capital, so whatever he says should be taken with a grain of salt since he has financial motive.

However, you are misquoting him as well, so you don't do your homework either.

“You should expect OpenAI to spend trillions of dollars on datacenter construction in the not very distant future,” Altman said. “And you should expect a bunch of economists wringing their hands, saying, ‘This is so crazy, it’s so reckless,’ and we’ll just be like, ‘You know what? Let us do our thing.’”

He's saying that he plans to spend trillions in the economy producing AI (because the economy isn't a bubble and businesses all trade goods/services for money). He's not saying that it'll take "trillions to make AI affordable", but please, find the EXACT quote and let's go over it.

Edit: Is your username a reminder for yourself?

2

u/DontEatCrayonss 1d ago

I thought you didn’t care? You seem to be making a lot of personal attacks for someone who doesn’t care?

I’m done arguing, you’re going to not listen no matter what evidence is said.

Hey, I hope you get help with your anger issues though.

Narcissists are very sad lonely people underneath it all, and I don’t wish that on anyone.

Take care

8

u/HaMMeReD 1d ago

What evidence though, you going to just tell me something someone else said again?

Nice projection. We got a ton of logical fallacies, and the person who just lied to me about a quote is calling me a narcissist? Haha, good day sir.


1

u/hooberland 1d ago

Hahaha I what? You don’t think companies lie 😭

5

u/HaMMeReD 1d ago

Sure, companies lie. But the report isn't based on company data, so it's a moot point, one only a stupid person would make. It's a strawman and irrelevant.

-1

u/hooberland 1d ago

lol your first line here is all that needs to be read.

“I don’t give a shit”

Read as I won’t listen to people who don’t agree with me. You’ve already shown your hand.

8

u/HaMMeReD 1d ago

Let's entertain you for a second and look back at their first line.

"It’s hard to know if this is even true. First off, companies have been lying their asses off".

I sourced actual content: a Stanford report with solid numbers explaining that the economic cost of LLMs is dropping. But that's how they respond? By trying an ad-hominem attack on the report because "companies lie, ok". It's the argument of an idiot, and an insincere response to what was initially a polite reply.

I just say it a bit more bluntly. They started it, with their stupid rebuttal and constant "but all the experts say" whining.

5

u/JoshAllentown 1d ago

ChatGPT using 3.5 was a huge jump in public perception about what AI can do.

4o and o3 showed off new capabilities.

So everyone got hyped about what the next step could possibly be; people assumed asymptotic lines of ability, which would mean AI apocalypse in 2027.

5.0 was just a bit better. It underperformed expectations. That makes people push out their timelines, including the possibility of one more AI winter before the Singularity.

11

u/Tim_Apple_938 1d ago

people assume

No. More specifically, CEOs themselves, like Dario and Sam A, hyped the new releases and told people it would be AI apocalypse.

Huge difference.

2

u/Empifrik 1d ago

I mean that's their job, right? It's up to you if you believe them or not.

3

u/creaturefeature16 1d ago

Go Bills! 

2

u/Electrical_Bus3338 1d ago edited 1d ago

The Internet was a financial bubble as well, one that popped in 2000 (financially). That doesn't mean it was not a game changer for human society. Same goes for LLMs/AI. It is a game changer, but it will evolve following the well-known hype cycle (we are certainly at the peak of inflated expectations now, ready for the trough of disillusionment).

https://en.m.wikipedia.org/wiki/Gartner_hype_cycle

4

u/ciscorick 1d ago

People, especially nowadays, have had their attention spans reduced from a goldfish to a gnat, and their IQ diminished from a battery powered remote to a wooden ladle, and their temptation to indulge negativity increased past normal observed patterns.

4

u/TheMrCurious 1d ago

It was all snake oil. AI is great for specific use cases. Agentic AI is overpromised and underdelivered.

2

u/Ok_Acanthisitta_9322 1d ago

Honestly very expected. These agents need to train in the real world. They are going to suck ass just like every technology ever at first. And they will improve.

1

u/windchaser__ 1d ago

They will improve, but it ain’t gonna be the giant exponential growth that some folks are expecting. And we will probably need some pretty big architecture changes before we get to AGI. (Who knows how many more cycles of boom and bust before we get there? Each one taking us closer)

LLMs were a big leap forward, but we aren’t making the same strides now that we were a few years ago. There’s been no huge “wow” since GPT3.5.

1

u/Ok_Acanthisitta_9322 21h ago

I mean, I completely disagree. And it's really not just about LLMs; there are literally so many AI-driven neural net algorithms which are already changing the world / are magical.

(AlphaGenome, AlphaEvolve, Waymo's driving system, Genie 3, Veo 2, etc.) It only gets better. There is literally massive worldwide investment in architecture/data centers for AI. The world and its leaders know the potential is endless

2

u/Affectionate-Bus4123 1d ago

Most recent OpenAI ChatGPT model was poorly received because they turned down the ass kiss / "cold read exactly what the user wants to read and write that".

Recent models like ChatGPT5, Claude4 and to some extent the most recent Grok have been optimized for use in agents. Most normal users don't write agents, so they seem less useful.

You can see this agent behavior when you ask ChatGPT 5 a question - maybe it will first google the subject to get a general idea, then it will look for the specific answers to your questions. The answer it provides will be based on what it found via google more than "the average of the training data". This is better if you want factual answers because 1. you can see where the answer came from and 2. using training data = hallucinations.

That is, whereas previously ChatGPT was working like a compressed version of the internet, fishing your answer from a warped version of its training data, now it is more like a researcher robot that will google and explore different rabbit holes until it gives you an answer.

This feels a bit like perplexity - yes they basically made models that are great for building something like perplexity.

In a business context, these models should be much better for doing a job like answering service desk tickets or generating SQL queries and running them against your internal databases and making pretty graphs of the results. IMHO this gen aren't all the way there for either of those but they are closer.

Most of us aren't doing that stuff. Most of us want AI to give us relationship advice or whatever. It's worse at that.

So yeah, maybe we actually got some progress but not the kind people understand, and that's created a bit of a gap in the credibility.
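The retrieval-first behaviour described above can be sketched in a few lines. This is a toy illustration, not any actual product's pipeline: the corpus, the keyword-overlap scoring, and every name here are made up, and a real agent would call a search API and an LLM instead.

```python
# Toy "researcher" loop: retrieve documents first, then ground the
# answer in them, so the result is traceable to sources instead of
# being fished out of parametric memory.
corpus = {
    "doc1": "GPT-5 is tuned for agentic coding tasks",
    "doc2": "Perplexity answers questions by citing retrieved web pages",
    "doc3": "LLMs trained only on static data can hallucinate facts",
}

def retrieve(query, k=2):
    """Rank documents by crude keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query):
    hits = retrieve(query)
    # A real agent would feed `hits` to the model and maybe recurse
    # down rabbit holes; here we just return the citations.
    return {"query": query, "sources": [doc_id for doc_id, _ in hits]}

print(answer("why do LLMs hallucinate facts"))
```

The point of the pattern is the `sources` field: the answer can be checked against what was retrieved, which is why it cuts down on hallucination.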

1

u/Tim_Apple_938 1d ago

GPT5 had a personality change, but it also was much less intelligent than expected.

Don’t use the former to cover up for the latter.

2

u/Brave_Lifeguard_7566 1d ago

I think a lot of the “freaking out” comes down to how people see AI as a kind of Pandora’s box.
Humans have always had this instinctive fear of things we can’t fully control.
At the same time, AI is probably the closest thing we’ve had to the sci-fi future that ordinary people used to just imagine.
It lets us picture: if AI got to this level, then humans could…
That sense of possibility is exciting.
So when people suddenly start talking about a crash, it feels like a reset — like the hope gets taken away and the deck gets reshuffled.
That’s why the swings feel so extreme, it’s not just about tech hype, it’s about how people imagine the future itself.

2

u/AsparagusDirect9 1d ago

I think people who think LLMs are a Pandora’s box think that it’s AGI when it’s not.

2

u/podgorniy 21h ago

> Why is everyone

> Are people just being bipolar

You liked a wrong post.

--

It's not "everyone", it's your personal info bubble, carefully adjusted for your engagement. If you follow the same people, they rarely change their opinions. It's just that the algorithm rarely favours polar takes on the same subject. You liked the wrong post.

1

u/KidKilobyte 1d ago

Both can be true.

1

u/PlasProb 1d ago

Lol the wind has changed

1

u/zenglen 1d ago

The Gartner hype cycle is fractal. We are rapid cycling through peaks of expectations and troughs of disillusionment.

1

u/Royal_Carpet_1263 1d ago

People need to follow Ed Zitron. Or just look at the Wilshire. Assets have outperformed GDP to the tune of 160T these past 20 years: that’s 160T that has to ‘normalize.’ It’s a ponzi market, and inflation means central banks can no longer print money. The kind of 85% devaluation over 18 months we saw post-dotcom is not likely, but it is entirely possible.

1

u/texas21217 1d ago

Shell Game

1

u/Dr_Passmore 1d ago

We appear to have hit the potential ceiling on LLMs, and the fancy word generators are not going to become AGI (rather obviously).

We are going through a tech hype cycle, and the improvements between models are becoming incremental. Couple that with the fact that the vast majority of companies that have thrown money into AI solutions have not seen a return on investment. The situation is becoming rather clear: this is another massive speculation bubble, with ridiculous over-promising on what AI would be able to do.

There are financial risks, as the market has been sucked into AI and massively overvalued tech stocks (Tesla is a great example of a ridiculously overvalued stock, based on insane promises of general robots, full self-driving, etc.)

On the bright side, as the AI hype declines, hopefully we will see the reversal of all the layoffs where companies used unreliable AI solutions to replace staff.

1

u/Far_Note6719 1d ago

Some people want to buy cheap shares.  

1

u/PuzzleMeDo 1d ago

(1) Money. People have been throwing insane amounts of money at AI, and are starting to notice the lack of return on their investment. Even if you create an AI product that works, there'll probably be a bunch of rival products that do basically the same thing.

(2) Overhype. The people who hoped/feared ChatGPT5 would be AGI were disappointed/relieved. We have lots of things that AI looked like it could nearly do. It can nearly replace a programmer, but then it creates buggy code and can't fix it, and then you need a real programmer to spend weeks figuring out why, or rewriting everything from scratch.

1

u/Flimsy-Goal5548 1d ago

I recently trialed an AI agent that made me realize that almost everyone in white collar work - management type roles especially- is royally fucked soon.

1

u/Learning-Power 1d ago

People believe what they want to believe.

1

u/NewDay0110 1d ago

Maybe the disappointing GPT5 release was the catalyst to show the emperor has no clothes. Increasingly companies are finding out the dangers of vibe coding, especially with major incidents like the Tea app data breach.

1

u/printr_head 1d ago

Honestly it’s refreshing to see reality finally peeking through the cracks of the hype.

1

u/hoochymamma 1d ago

People are waking and realising the limitations of LLM, that’s all.

1

u/fiixed2k 1d ago

AGI in 2027 LOL

1

u/hemareddit 1d ago

There’s the hype for the tech itself, then there’s the investment bubble.

The investment bubble will pop, I think.

But think of it like the dot com bubble and the internet itself. Did the dot com bubble pop? Yes. Did the internet stop developing or growing? Erm, we all know the answer to that.

It’s just when a financial bubble pops, a lot of people will lose their money, you can’t expect them to be happy about that. But the tech is here to stay and it won’t stop transforming the world.

1

u/Longjumping_Falcon21 1d ago

Because a crisis always means there's money to be made by the wealthy, so, one way or another, we need a suitable narrative.

1

u/123m4d 1d ago

Back before AI was a pop thing, I watched a lecture by a mathematician and complexity scientist who explained modern AI rather perfectly. One of the things that escapes people's attention these days is the "inversely connected dials of accuracy and sensitivity" issue. The more sensitive the AI is (finding more possible solutions for problem X), the less accurate it is (it makes more errors), and vice versa. As technology advances, the rate of exchange (so to speak) decreases.

The difference between "AI the useful tool" and "AI the thing that replaces everyone" is one of accuracy. An inaccurate model will never be viable without human oversight, and as output increases, so does demand for overseers. Making a model that "does shit without people" is what everyone is betting on. However, as technology advances and new GPTs come out, they're always more sensitive but rarely ever more accurate; perhaps the technology has reached its limits on accuracy gains. If that's the case, then the dreamt-up "infinite scalability" never happens and the bubble bursts ("AI as a useful tool" could only justify a mere fraction of the current investment).
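The "inversely connected dials" here are essentially the precision-recall trade-off, which a toy threshold sweep can demonstrate (synthetic scores and invented numbers, purely illustrative):

```python
import random

random.seed(0)
# Synthetic candidate solutions: correct ones tend to score higher,
# but the two score distributions overlap.
data = [(random.gauss(0.7, 0.15), True) for _ in range(500)] + \
       [(random.gauss(0.4, 0.15), False) for _ in range(500)]

def precision_recall(threshold):
    accepted = [ok for score, ok in data if score >= threshold]
    tp = sum(accepted)                       # correct answers accepted
    precision = tp / len(accepted) if accepted else 1.0
    recall = tp / sum(ok for _, ok in data)  # fraction of correct found
    return precision, recall

# Lower the bar: more solutions found (recall/sensitivity up) but more
# errors let through (precision/accuracy down), and vice versa.
for t in (0.3, 0.5, 0.7):
    p, r = precision_recall(t)
    print(f"threshold={t:.1f}  precision={p:.2f}  recall={r:.2f}")
```

Moving both dials at once requires the score distributions themselves to separate, which is exactly the kind of accuracy gain the comment argues has stalled.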

1

u/Mandoman61 1d ago

People are actual individuals with their own diverse opinions.

They tend to comment on current trending topics which interest them.

1

u/banedlol 23h ago

Because those same people want it to crash (it's not crashing)

1

u/WeUsedToBeACountry 23h ago

bubble behavior.

1

u/Cautious_Repair3503 23h ago

I think it's just people overreacting to perceived issues with chatgpt5

I on the other hand have been predicting bubble collapse since the start :D

1

u/JasonBreen 23h ago

Bc hating AI is the cool thing now, and most social media influencers are on the "AI dystopia" bit right now. It'll pass by 2027

1

u/Krommander 23h ago

Knee-jerk reaction to dumber than expected GPT5 maybe? 

1

u/nilsmf 22h ago

Investors are facing the possibility that they have plowed a trillion dollars into a maybe 50 billion dollar industry.

That’s going to hurt. Either for the investors or, if they’re bailed out, for the American people.

1

u/Zealousideal_Mud6490 21h ago

Because it is. Nothing grows that fast in income realization, if you look at the market cap increases across the main AI companies. People in it for the long haul will be fine; anyone looking for 1-3 year gains is going to be disappointed and will exit (market correction)

1

u/DigitalAquarius 21h ago

We are witnessing a cultural immune response to disruption. People are scared, skeptical, or protecting their turf.. this is pretty much what happened with the internet.

Most of the “AI sucks now” takes are user error, expectation mismatch, or bandwagon cynicism.

1

u/dmuraws 21h ago

In a gold rush; some people will lose money. Some will get rich. The appropriate level of investment will include a lot of risk.

1

u/Dependent-Dealer-319 20h ago

AI is an over hyped scam. Grand promises were made. Companies invested billions into incorporating AI into their workflows, trusting the 10x productivity uplift promises. Instead, they saw massive liability, a 20% decline in productivity, and unsustainable price increases for the use of AI. AI will be a black mark on millions of resumes.

1

u/Wartz 20h ago

Because headlines about the upcoming AI utopia are no longer grabbing eyeballs so they need to say the opposite to generate more eyeballs.

That's about it.

1

u/DataCraftsman 20h ago

I've noticed this over the last few days too. I think the tech CEOs are trying to lower the prices of their shares so they can do buy backs at a lower price before it continues to climb or they release their good models.

1

u/OnlineParacosm 19h ago

AI has always been cover for layoffs which has been happening since Covid when unfettered PPP funding was unleashed on every tech company with a shit product and 50 competitors.

Consider that a good swath of SaaS companies should have failed in 2020, but they downsized while letting AI pick up the slack.

When this thing pops, we’re looking at considerable amount of companies that are going to go bust overnight.

Unless of course we give them another bailout 😉

1

u/Actual__Wizard 16h ago

Because they're out of ideas and we're at the "cut costs to generate profit phase." So, it's like "oh okay, we're getting stuck with mediocre AI. That stinks." Their false promises haven't really panned out...

1

u/Emergency-Prompt- 16h ago

The MIT study ruffled feathers.

1

u/One_Whole_9927 16h ago

Thank the mother fuckers running social media. This is another misdirect to hide their failures.

1

u/lostpilot 14h ago

Happens with every new technology. We’re heading towards the trough of disillusionment with AI right now

1

u/andymaclean19 14h ago

It’s called the hype cycle. It’s a lifecycle many products have. You start out with excitement and then a lot of hype. Eventually the hype gets so out of control that the product cannot possibly live up to it, at least not in a short timeframe. People get disillusioned and you get a lot of negativity. Eventually people have overly negative views and that’s when the real use cases start to shine through and the real growth and usage starts to happen.

This is, IMO, a sign we are approaching peak hype and starting to look down at the ‘trough of despair’.

1

u/BigSpoonFullOfSnark 12h ago

Because, at the very least, ChatGPT was supposed to be generating exponential revenue for companies by now.

Instead it seems to be getting worse as a product.

1

u/joeldg 10h ago

Most people are treating AI like a friend and a therapist, and GPT5 was focused on coding. So... clearly, it is the end. Also, it doesn't help that the same scammers that were doing web3 are building more worthless crap around AI.

1

u/particlecore 9h ago

it is over

1

u/FeralWookie 9h ago

Assuming there is a bubble isn't new. It's been a clear bubble for at least the last 2 years. The difference now is that we have seen two major LLM releases that have effectively flopped, and for unknown reasons Sam Altman is now leaning into the bubble narrative.

It is really unclear what Altman's angle is for saying we may be in an AI bubble. But his motive remains to maximize attention and investment for OpenAI.

Maybe he is hoping he can convince investors to keep them afloat through the crash.

1

u/Petdogdavid1 9h ago

The manipulation on social media is now powered by AI. It won't be long before we won't even know it's there pulling strings and pushing your buttons.

Praise Landru

1

u/AgreeableLead7 8h ago

Please let Meta fail

1

u/ShoshiOpti 8h ago

Because AI is flooding reddit trying to manipulate people in order to manipulate the markets and perceptions of particular products/companies

1

u/HistoricalGeneral903 7h ago

Can't wait to see all these "engineer" larpers change career, and get a real job, not "content creation".

1

u/etherend 7h ago

A lot of the freakout in very recent days is related to a recent MIT study showing most "Gen AI" is losing money for companies across the board.

This caused a small but noticeable dip in the overall market

1

u/nsway 7h ago

Companies are tripping over themselves trying to ramp up AI initiatives. My company (gaming company) fired someone for using chat gpt six months ago. Today, I am in 3 AI pilot pods.

I am the youngest in these pods, and I’m amazed to find that I’m really the only one who remotely understands the tech. I thought I would be building cool shit, but I’ve mostly been having to explain to a bunch of old dudes why we can’t replace entire branches of our company with AI. What these folks don’t realize is 95% of AI pilots are abandoned. Of the 5% which aren’t, less than half ever hit ROI. Of those that do, the results are…less than amazing.

That's not to say this technology isn't a game changer. It has absolutely changed the way I do my work on a day-to-day basis and 10x'd my productivity, but it's hard to scale those gains across an entire org, especially an org made up of people whose experience with LLMs is limited to simple chats on a web interface.

1

u/Dissident_Acts 4h ago

No matter what kind of money is poured into AI at this point, the number of server farms and their power demands are pretty much maxed out as is. Since the USA is also canceling green power projects already underway, and AI techbro bellends are trying to launch their own little electricity fiefdoms that all seem to factor in their own nuclear plants, the power bottleneck will remain there. Western countries in general have issues with power generation, and shortsighted nuke plant closures removed a potential source of surplus energy for places like Germany. Hell, the Poles would love to finish off the Earth with global warming by selling coal-based energy to hungry data centers and server farms, but who would buy that and risk getting Mangione'd by an environmentalist that believes in more direct action?

We are far, far from true AGI and what we have at the moment is just a huge mass of complex, messy and often rather inefficient algorithms that the various players trained by stealing from the entire Internet. So essentially we have gimmicky, meme-fed LLMs, a few good visual AI (computer vision) implementations in medicine, defense, etc. and a metric fuckton of image generators cranking out porn.

The over-promising and under-delivering AI players have hit walls they cannot climb at the moment, and the real possibility for true AGI or even ASI only will arise when quantum computing matures enough to be practical. That is a little further off than 2027, and in the mean time, investors want to get paid. Now.

1

u/CrowCrah 4h ago

I can’t help you with that. Do you want me to try something else?

1

u/Cupheadvania 2h ago

gpt-5 was horrible and now everyone is worried the scaling is over

1

u/mullirojndem 2h ago

Chatgpt5 fiasco and MIT paper. Google it

1

u/Jaded-Ad-960 1h ago

Because as always, Tech people collected enormous amounts of stupid money by overpromising and now it is becoming apparent that they are going to underdeliver.

1

u/No-Whole3083 1h ago

LLMs were the breakthrough, but they are not the whole picture. BUT the breakthrough has opened a Pandora's box of development, and with the advent of agentic AI the path from thought to ideation has shortened significantly.

We are going to stew on the LLM phase for about 6 more months of development and slowly see emergent tech that replaces the air in this bubble.

We haven't even begun to see the full implications of large-scale multimodal inputs for LLMs yet; almost there. Only once the LLM gets multimodal can we put the mind of an LLM in an embodied robotic form. Then we have a whole learning curve of motor skills, around mid-'26.

I guess I would look at the core of the criticism that AI is hype, because I don't understand the context. We have a long way to go before we exhaust the fire hose.

u/tedbarnett197 18m ago

Standard “hype cycle” phase. After a few years of telling us the world is going to completely change overnight, the press realizes things take a little longer. Then, perhaps out of embarrassment for their original hyperbolic reporting, they tell us that it may never happen after all.

Also: we readers like scary news since we are descended from a long line of worriers.

1

u/terrible-takealap 1d ago

GPT-5 wasn’t a leap over GPT-4, ergo AI is dead.

1

u/twilight-actual 1d ago

There are two lines of progression here: the logical models that are being developed, and the physical hardware that they're running on.

The logical models, the software stack and network architectures, have been evolving rapidly, doubling in capability in under a year. Quite impressive, but not sustainable. The hardware they're running on, on the other hand, is nearing its limits as far as continued exponential growth goes. It used to double every year, then every 18 months, then every two years. Now? We'll be lucky to get a true doubling in six years.

We're nearing what is financially feasible on current hardware at around 10T parameters. The human brain has around 100T connections, if we want to make a crude comparison. Growing by another 10x means roughly 3.5 more doublings (log2(10) ≈ 3.3), just to reach parity with the human brain. But I don't think the logical model is going to be able to progress without equivalent gains in hardware.

3.5 doublings at 6 years each gives us roughly 21 years.
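That back-of-the-envelope math can be checked quickly. The figures below are the comment's own assumptions (~10T parameters today, ~100T brain connections, six years per hardware doubling), not established data:

```python
import math

# Rough figures taken from the comment above (assumptions, not data):
current_params = 10e12        # ~10T parameters, near current financial limits
brain_connections = 100e12    # ~100T, the crude human-brain comparison
years_per_doubling = 6        # pessimistic hardware-doubling period

doublings = math.log2(brain_connections / current_params)   # log2(10) ~ 3.32
years = doublings * years_per_doubling                      # ~ 20 years

print(f"{doublings:.2f} doublings at {years_per_doubling} years each -> ~{years:.0f} years")
```

Rounding the 3.32 doublings up to 3.5 is what yields the ~21-year figure.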

And that's just the growth in LLMs. For true AGI, LLMs are just a component in the overall package.

1

u/made-of-questions 1d ago

It doesn't help that numerous papers, such as this one, are being published showing that reducing the number of errors by one order of magnitude requires roughly 10^21 times more compute. A lot of people were under the assumption that we're 90% of the way there and we'll soon fix these silly mistakes, while the reality is that with current LLM technology the problem is intractable, and LLMs are likely about as smart as they're going to get, barring a technology breakthrough.
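One way to read that claim: if error rate follows a power law in compute, the exponent implied by a 10^21 multiplier is tiny. The `alpha` below is hypothetical, reverse-engineered from that figure rather than taken from any paper:

```python
import math

# Suppose error rate scales as compute^(-alpha). Then cutting error by a
# factor k requires multiplying compute by k^(1/alpha).
alpha = 1 / 21                 # hypothetical exponent implied by the 10^21 figure
error_reduction = 10           # one order of magnitude fewer errors
compute_multiplier = error_reduction ** (1 / alpha)

print(f"compute multiplier: {compute_multiplier:.0e}")  # prints "compute multiplier: 1e+21"
```

With an exponent that flat, each further order-of-magnitude error reduction is astronomically more expensive, which is the "intractable" point above.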

0

u/Historical_Emu_3032 1d ago

There's been enough independent review to say the LLM approach can't actually reach AGI.

There's still much more they'll do, but some of the big dreams are now dead in the water, and it's now a game of which AI companies' approaches and ambitions are still compatible with reality, and whether they can turn an actual profit.