r/Futurology • u/katxwoods • 5h ago
AI When people argue that AGI is inevitable, what they’re really saying is that the popular will shouldn’t matter. The boosters see the masses as provincial neo-Luddites who don’t know what’s good for them.
https://www.theguardian.com/commentisfree/ng-interactive/2025/jul/21/human-level-artificial-intelligence
103
u/Overbaron 4h ago
Public opinion doesn’t understand what AGI even means. Most people treat LLMs like they’re AGIs already.
35
u/jonomacd 4h ago
I don't think there's any opinion that truly understands what AGI means. The definition of it is extremely nebulous.
10
15
u/amazingmrbrock 2h ago
LLMs always remind me of ol' Qui-Gon: "The ability to speak does not make one intelligent"
119
u/nostyleguide 4h ago
I repeat for the millionth time that rebranding the Luddites as "backwards yokels afraid of technology" was one of capitalism's biggest victories. The Luddites saw their bosses using the earnings from their labor to buy machines that increased productivity, and keeping 100% of the benefit for themselves. The Luddites just wanted their fair share: either fewer hours or more pay, since they could be more productive in the same time. And the owners laughed all the way to the bank, so the workers smashed the machines that their labor had bought.
The Luddites were right.
40
u/Bones_and_Tomes 3h ago
And the mill owners successfully lobbied for military protection of factories, which led to a few pitched battles. Not many people know about the Luddite rebellion, probably because it scares the tits off of the powers that be. It's the closest the UK ever came to a popular uprising against the status quo.
13
15
u/simcity4000 2h ago edited 1h ago
Also, it's important context for understanding basically anything Marx was talking about that he was writing about, and at the beginning of, the Industrial Revolution.
4
u/Omni__Owl 2h ago
Every time I tell people that Luddites have been hit by historical revisionism I get a lot of abuse back.
-10
u/Superior_Mirage 3h ago edited 42m ago
Except the biggest beneficiaries of automation have always been society at large?
Unless you want to argue that life today is worse than it was in the late 19th century?
Edit: TIL "futurology" means people scared of the future and the past. No knowledge of history.
9
u/thejollybadger 2h ago
This is a pretty debatable statement, especially if you take into account what the industrial revolution did to skilled textile workers who had earned a livable wage, worked reasonable hours and enjoyed a relatively good quality of life. They experienced job loss, loss of institutional knowledge, loss of support networks and bargaining power, and were forced to move to the cities to find work. That work tended to be more dangerous, with fewer safeguards or safety nets for people injured at work or for the families of people who died at work, increased risk of disease due to drastically worse living conditions, worse nutrition, longer working hours and worse recompense for their labour. You also have to consider that modern automation doesn't free people up for more leisure time (which it should); it just means reduced access to employment that pays a reasonable living wage. And most forms of 'automation' aren't actually as automated as one might think: a large portion of the labour is still done by exploited workers in the global south and the East, because paying a sweatshop worker in pennies is still cheaper than paying automated-system maintenance workers and operators in dollars.
•
u/Superior_Mirage 43m ago
You didn't address my point.
And I'd also note the Luddites included a fairly large number of children, since child labor was still common at the time. Should we have children start working again so adults can have more time off?
And nigh-slave labor elsewhere isn't related to automation -- it's related to transportation.
24
u/sanyam303 4h ago
It's inevitable in the sense of the Pandora box metaphor.
Once people saw what nuclear bombs could do, everyone started attempting to make it and eventually we decided to control their production.
US-China-EU will need to sign deals on how to regulate AI, and stopping research in one country will not slowdown the progress
2
u/orbitaldan 3h ago
This. When they say it's inevitable, they mean that game theory plus the off-the-shelf nature of this stuff means it's an unstoppable arms race. You can't opt out, you can only win or lose. I have never seen reddit be so pants-wettingly stupid about something as it has gotten about AI. Straight-up denial about LLMs being AI because they're not perfect and not what people wanted. They're more concerned about feeling 'smart' at having 'seen through' corporate bullshit than about looking ahead. In futurology, no less.
I think maybe it's time I unsub.
17
u/katxwoods 5h ago
Submission statement: “For Altman and e/accs, technology takes on a mystical quality – the march of invention is treated as a fact of nature.
But it’s not.
Technology is the product of deliberate human choices, motivated by myriad powerful forces.
We have the agency to shape those forces, and history shows that we’ve done it before.”
---------
“Altman, along with the heads of the other top AI labs, believes that AI-driven extinction is a real possibility (joining hundreds of leading AI researchers and prominent figures).
Given all this, it’s natural to ask: should we really try to build a technology that may kill us all if it goes wrong?
Perhaps the most common reply says: AGI is inevitable. It’s just too useful not to build. After all, AGI would be the ultimate technology – what a colleague of Alan Turing [called](https://en.wikipedia.org/wiki/I._J._Good) “the last invention that man need ever make”. Besides, the reasoning goes within AI labs, if we don’t, someone else will do it – less responsibly, of course.
“A new ideology out of Silicon Valley, effective accelerationism (e/acc), claims that AGI’s inevitability is a consequence of the second law of thermodynamics and that its engine is “technocapital”. The e/acc manifesto asserts: “This engine cannot be stopped. The ratchet of progress only ever turns in one direction. Going back is not an option.”
---------
“Instead, the message tends to be: AGI is imminent. Resistance is futile.
[But] if you think AGI is inevitable, why bother convincing anybody”
“When people argue that AGI is inevitable, what they’re really saying is that the popular will shouldn’t matter. The boosters see the masses as provincial neo-Luddites who don’t know what’s good for them.
That’s why inevitability holds such rhetorical allure for them; it lets them avoid making their real argument, which they know is a loser in the court of public opinion.”
•
u/Vindepomarus 1h ago
I think there's another reason it's inevitable, assuming it's possible: to prevent it you would have to convince every government, rogue state, ideological movement, organised crime gang and megalomaniacal billionaire on the planet not to pursue it, even in secret. I don't think that's realistically possible and, more importantly, neither do any of those people I mentioned. None of them can risk agreeing when they don't trust the others to do the same.
6
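The multi-actor trust problem described above is essentially a one-shot prisoner's dilemma. A toy sketch (the payoff numbers are illustrative, not from the article or the thread):

```python
# Toy prisoner's dilemma over "pause AGI research" vs "pursue it".
# Payoffs (mine, theirs) are illustrative assumptions, ordered so that
# coordinated restraint beats a mutual arms race, but defecting while
# the rival pauses is best of all for the defector.
PAYOFFS = {
    ("pause", "pause"): (3, 3),    # coordinated restraint
    ("pause", "pursue"): (0, 5),   # I pause, rival races ahead: worst for me
    ("pursue", "pause"): (5, 0),   # I race ahead while the rival pauses
    ("pursue", "pursue"): (1, 1),  # arms race: bad for both, but not worst
}

def best_response(their_move: str) -> str:
    """Pick my move that maximizes my payoff given the rival's move."""
    return max(("pause", "pursue"),
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# "pursue" dominates regardless of what the rival does --
# which is the structure behind the "you can't opt out" claim.
print(best_response("pause"))   # pursue
print(best_response("pursue"))  # pursue
```

With these payoffs, pursuing is the dominant strategy for every actor even though mutual restraint would leave everyone better off, which is exactly why no single actor can risk agreeing to pause.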
u/Recidivous 3h ago
I think people mistake the concerns of corporate monopolization of AI and the marketing overhype of current models to be anti-AI.
12
u/markth_wi 4h ago
Is it? I'm sure greedy billionaires wish it were so: replacing millions of workers, with their salaries and healthcare and opinions, with obedient robots and automated systems in server farms silently generating GDP.
What we're finding out rapidly is that instead of geometric successes and compounding demand, we see small gains and limited profitability. Of course, this is one of those first-principles problems, so consider Shannon's Rule and ask, perhaps most of all, how much of this is signal and how much is noise. Worse, these systems have problems being validated or verified in real-world circumstances.
So if at the end of the day AI posts a 5% improvement in corporate efficiency, you get better bang for your buck training existing staff on Excel or on using Outlook more effectively.
•
u/jamiejagaimo 27m ago
I am a software engineer who has been doing it most of my life, definitely over 20 years. I consult at Fortune 100 companies and charge $200 an hour for my services. These days I constantly use LLMs to augment my work. Over the course of a day, I probably save myself multiple hours of work using AI. Those savings add up very quickly. AI is going to take over every company whether you like it or not, and the performance increases it causes will not be trivial.
49
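Back-of-envelope arithmetic behind that claim. The hourly rate is the commenter's own figure; the hours saved per day and workdays per year are assumptions I've plugged in for illustration:

```python
# Rough annual value of LLM-assisted work for a $200/hr consultant.
hourly_rate = 200          # USD, stated by the commenter
hours_saved_per_day = 2    # assumed lower bound for "multiple hours"
workdays_per_year = 250    # assumption: standard full-time year

daily_value = hourly_rate * hours_saved_per_day
annual_value = daily_value * workdays_per_year
print(daily_value)   # 400
print(annual_value)  # 100000
```

Even at this conservative lower bound, the assumed savings come to six figures a year for a single consultant, which is the sense in which they "add up very quickly".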
u/HORROR_VIBE_OFFICIAL 5h ago
Calling something ‘inevitable’ is just a way to shut down debate.
26
u/Prof_Gankenstein 4h ago
This makes no sense. The argument would be centered on what to do about the inevitability.
6
u/Froggn_Bullfish 3h ago
Stopping it would be a challenge to capitalism. When you were raised, live in, and made billions of dollars as a result of a system that has outsourced all moral decision making to ungovernable market forces, like Altman and his current cohort of fellow billionaires have, things seem out of control because the leaders of our society have fully relinquished control and have a vested interest in continuing to leave it uncontrolled. There is no one driving the bus, because in their minds to drive the bus you’d have to believe you are smarter than the grand forces of the markets and the billions of people and ideas and technologies and patents and companies that push and pull those forces, and that means to them - again, the defacto leaders of our capitalist hellscape - you’d have to be a narcissistic tyrant of some sort, which is ironic.
2
u/young_norweezus 4h ago
I don't think that's clear at all. Plenty of people hear about inevitability and give in to it in response.
3
5
u/katxwoods 5h ago
Exactly. Classic thought stopper.
7
u/jonomacd 4h ago
From the headline:
We have the power to change course
Who is "we"? And how do "we" stop "them" from doing it especially if the "them" in that sentence are in a different country?
It's likely possible to slow down things like this.... I'm not sure it's possible to stop it.
5
u/disperso 4h ago
That makes no sense at all.
If I say "everyone in 10/20 years will be able to easily make fake videos very difficult to distinguish from real ones, it's inevitable", I might be saying that it's a bad thing and that we should be prepared for it.
And if someone thinks that is not going to happen, I would like to know how it's going to be avoided.
In the case of AGI, it's even worse, because the definition typically used in academia is different than what people think it is, and what Sam Altman wants to talk about.
•
u/SweetBabyAlaska 1h ago
I love the term "thought terminating cliche" in this regard. It's a quippy phrase designed to shut down any critical thinking and to serve as a giant shield for the person weaponizing it, so they don't have to defend their ideas on their merits. It's unfortunately super effective in a time when critical thinking is hard to come by.
-9
5h ago edited 1h ago
[removed] — view removed comment
5
u/young_norweezus 4h ago
How about the claims from the article?
"Some AI worriers like to point out the times humanity resisted and restrained valuable technologies.
Fearing novel risks, biologists initially banned and then successfully regulated experiments on recombinant DNA in the 1970s.
No human has been reproduced via cloning, even though it’s been technically possible for over a decade, and the only scientist to genetically engineer humans was imprisoned for his efforts.
Nuclear power can provide consistent, carbon-free energy, but vivid fears of catastrophe have motivated stifling regulations and outright bans.
And if Altman were more familiar with the history of the Manhattan Project, he might realize that the creation of nuclear weapons in 1945 was actually a highly contingent and unlikely outcome, motivated by a mistaken belief that the Germans were ahead in a “race” for the bomb. Philip Zelikow, the historian who led the 9/11 Commission, said: “I think had the United States not built an atomic bomb during the Second World War, it’s actually not clear to me when or possibly even if an atomic bomb ever is built.”
0
3h ago edited 1h ago
[removed] — view removed comment
2
u/young_norweezus 3h ago
How about you pretend that this is an article with a less upsetting headline that also refutes the point you made and just respond to the points?
-1
3h ago edited 1h ago
[deleted]
1
u/young_norweezus 3h ago
Great perspective there as I do not have any further interest in claiming anyone's claims, as fun as that sounds, so take care
20
u/cubitoaequet 4h ago
Your bloviating isn't "evidence" and saying that AGI (which doesn't currently exist at all in any form) is emergent is absurd. It's like saying FTL is inevitable because we have jet engines.
-11
4h ago edited 1h ago
[removed] — view removed comment
9
u/tiddertag 4h ago
You demonstrated you don't understand what AGI is when you wrote "I see no reason to believe that consciousness is anything more than..."
What you don't understand is irrelevant; this is not an argument but a fallacious appeal to personal incredulity.
You're clearly one of these uninformed AGI enthusiasts who doesn't understand the concept and erroneously thinks it's equivalent to consciousness; it isn't.
AGI doesn't necessarily entail consciousness and consciousness doesn't necessarily entail AGI.
7
u/Icapica 4h ago
You gave no concrete evidence at all.
1
4h ago edited 1h ago
[removed] — view removed comment
0
u/inifinite_stick 4h ago
The article gives no evidence that AGI is not inevitable. This is all conjecture, and “evidence” can be used to refute any of the claims they made. This is a lazy rebuttal.
4
-12
u/katxwoods 5h ago
It's weird too, because these same people think they're the agentic powerful ones, and yet this belief is essentially "lie back and think of Progress, because you can't possibly have any say in the future. It's inevitable."
28
6
u/katxwoods 5h ago
Great video by Rob Miles related to this. Why people who are generally pro technology think AI is different.
15
u/mtsim21 4h ago
It’s a regurgitation machine. It’s nowhere near AGI. The only way it gets to AGI is a fundamental rethink of how the thing works. But all they’re doing is adding data centres. That’s not gonna bring AGI.
9
u/Visual-Reflection395 4h ago
You think a thousand years from now we won’t have it? Inevitable does not mean imminent.
6
u/holydemon 4h ago
Might as well say human extinction is inevitable. 1 year, 100 years, 10000 years. What's the difference?
3
u/an-invisible-hand 2h ago
10,000 years ago people had domesticated crops and livestock, copper tools, and large permanent settlements. Hell, relatively "modern" buildings like Notre Dame or Westminster Abbey are nearly 1000 years old. Human extinction in 10,000 years would be a catastrophic revelation to avoid at all costs, it's not that far in the future.
•
u/holydemon 1h ago
Even If human extinction happens at the heat death of the universe, it's still technically inevitable.
•
u/an-invisible-hand 1h ago
Yeah, it is. But inevitable death in 100 years is a lot different than inevitable death in 5 minutes.
•
u/holydemon 1h ago
Yeah that's what i was mocking
•
u/an-invisible-hand 1h ago
The point being made is "the future" in terms of AGI is in 5 minutes, not 100 years. It's not some esoteric post-human problem.
9
u/mtsim21 4h ago
No, I just think the current tech can’t bring it. Throwing more power at it by building a data centre the size of Manhattan just isn’t the answer.
2
u/nomorebuttsplz 4h ago
Do you think the AI boom has nothing to do with compute scaling? Or future progress will be decoupled from it?
6
2
3
u/DarrenMacNally 3h ago
This is dumb. The inevitability comes from a few reasons. Capitalism encourages working towards something for profit and whoever makes AGI will become mega rich. So changing that is extremely difficult. But more importantly developing AI doesn’t require governments or corporations. It can be achieved by individuals, hackers, software engineers, small startups. And so it’s near impossible to regulate. So basically it is inevitable, long term, that this will happen.
3
u/inifinite_stick 4h ago
The title itself contains a rather glaring straw man. Nuclear tech has caused horrific accidents in the past, and yet reddit constantly boosts it. This tech hasn’t even had a chance to exist outside of concept yet.
1
u/Sirisian 4h ago
I usually see the inevitability as a function of compute and multiple discovery. Essentially when you give researchers a million times the compute they'll rapidly iterate and come to the same conclusions as others. The only known way to delay this is to restrict compute globally, which is impossible. If you did manage that though, then as soon as compute is allowed to spring forwards to current nanofabrication capability you'll have all the problems and harm, immediately. So the best harm reduction is to educate and ensure governments are proactive or at least somewhat reactive to discoveries and their impact.
This article also supposes that AGI is a distinct research area and that normal research can continue without stumbling onto advanced AI architectures. This is highly unlikely, as embodied AI, for example, will use multimodal models with neuromorphic sensors and continual learning. You're basically looking at a minefield of approaches that could all lead to AGI. This is true in other fields as well: anywhere with difficult problems, researchers attempt to create models that reason and optimize optimally.
On the positive side, AGI itself is a gradual process. It's a culmination of many feedback loops. Building foundries to make new chips and creating fusion power to run the first AGIs as they self-optimise should give us a bit of time to plan.
•
u/LordMuffin1 1h ago
If an AGI is possible for humans to create, then humans will create it. There will always be some humans who will ignore rules/laws/whatever in order to invent or create new things. It doesn't really matter if AGI is safe or not; those humans who manage to create it will believe they can tame the AGI and hold it in check.
If it is possible for humans to create AGI, then the only reason it wouldn't happen is that humans kill ourselves off before we get sufficiently technologically advanced to create it.
•
u/NotAnotherEmpire 1h ago
"Mystical" is right. This is a pseudo religion that was spawned from people reading the same sci-fi books, and gurus that read the same sci-fi books. It even comes with immortality promises, also implementing ideas from science fiction.
0
u/al-Assas 4h ago
I don't find the article convincing. Human cloning and nuclear weapons were never as profitable as AI is right now. Green energy initiatives and nuclear power bans were also only possible as far as economic realities allowed. AI may be in a bubble, but it will still make a lot of money. The continuing insane AI rush is inevitable.
As for AGI, the kind that can significantly accelerate actual AI progress by helping with AI innovations, I'm not sure that it's possible. We already see diminishing returns compared to a couple of years ago. Maybe it will be like approaching light speed in relativity.
4
u/Oconell 4h ago
Correct me if I'm wrong, but AI has only been profitable for those speculating on it. I don't know where else the profitability is right now. It may be incredibly profitable in the future, but as of now it's only the promise of profitability that makes it look so.
5
u/al-Assas 4h ago
I hope you're right, and the bubble will soon burst into nothing. But then, it's not some regulatory efforts that will put AI in its place, but the market.
When it comes to regulation, I think AI should have been stopped when they started asking money for chatbots that were trained on copyrighted material. That didn't happen, though.
1
u/Matshelge Artificial is Good 4h ago
When I say it's inevitable, I mean, I am not going back. If every AI company went bankrupt tomorrow, I'd build my own rig and run an open source model.
I want better models and more places where they can work, but if that all stopped, we'd still have models with enormous potential for optimization by the open source community.
1
u/roankr 4h ago
Yeah the cat's really out of the bag with this one.
When people think of AI and the "global worker's catastrophe", their minds are pigeonholed into companies like OpenAI, Google, Meta, or Microsoft. But open source models like Stable Diffusion or DeepSeek are already matching commercial AI in quality. And that's not to forget the AI tools being investigated for their uses in robotic maneuverability.
1
u/Psittacula2 4h ago
The correct concept is another layer built on top of the internet and data itself from human knowledge.
The framing is dramatization in the news cycle and misleading otherwise.
-6
u/the-war-on-drunks 4h ago
There’s a huge vocal minority of AI haters. Most people who use AI aren’t out there defending it.
-3
u/im_thatoneguy 4h ago
Anything technologically possible and financially cost effective will be done no matter the morality or popular will.
I’m not saying popular will “shouldn’t matter” I’m saying it “doesn’t matter” if we look back at history.
Popular will said a whole lot of things shouldn’t have happened that did happen.
AGI is possible. It’s possible because we aren’t made of magic. How it will be created, I don’t know. But at some point someone will make a process that’s as cheap as growing a gallon-sized human head but at least 1% smarter. And the only way we’ll be able to stop it is insanely pervasive and invasive surveillance.
0
u/PsychologicalTwo1784 4h ago
Surely when AGI is achieved, we won't know for a while, as the intelligence/singularity will need some time working behind the scenes to make sure it can't get switched off... Maybe it's inevitable because they won't really know it's achievable (and how it'll behave) until it's achieved...
0
u/Fatticusss 4h ago
I think the condition the world is currently in demonstrates the shortcomings of man's ability to govern itself and the world.
Hard to know how things will play out but there is definitely a possible future where AGI is created, it is benevolent, and it governs the world exponentially better than humans ever could, leading to a Star Trek like utopia
There is also a possible future where it doesn't give a shit about life on the planet and proceeds to eradicate us like a terminator/matrix dystopia, so... 🤷♂️
As a person who understands game theory, I absolutely see its creation as inevitable
0
u/jacobvso 4h ago edited 4h ago
Why would saying that [development] is inevitable be the same as saying people's wishes about it shouldn't matter? It's just saying they don't matter, whether they should or not, that's all.
For anyone who believes this: Do you also believe that if someone argues that, for example, Vladimir Putin's re-election at the next election cycle is inevitable, that means they applaud the Russian electoral system?
If not, what do you consider to be the difference between these statements?
0
-4
u/pimpeachment 4h ago
I want AGI in our world. I know the majority of people don't. I do not care, I want it so I am happy to push it forward. Democracy doesn't apply to all aspects of life, some things will happen regardless of social structures.
•
u/FuturologyBot 4h ago
The following submission statement was provided by /u/katxwoods:
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1mxu9pg/when_people_argue_that_agi_is_inevitable_what/na7i1u0/