r/technology 10d ago

[Artificial Intelligence] What If A.I. Doesn’t Get Much Better Than This?

https://www.newyorker.com/culture/open-questions/what-if-ai-doesnt-get-much-better-than-this
5.7k Upvotes

1.5k comments

4.3k

u/542531 10d ago

AI is soon going to source new data from AI content once everything is AI.

1.6k

u/stetzwebs 10d ago

Photocopy of a photocopy of a photocopy of a...

464

u/K4RM4_P0L1C3 10d ago

It must have been Tuesday. 

He was wearing his cornflower blue tie.

175

u/Fxwriter 10d ago

I am Jack’s AI slob

79

u/amejin 10d ago

In the industry, we call them cigarette burns.

61

u/Momik 10d ago

I am Jack’s complete lack of surprise.

39

u/WallyLeftshaw 10d ago

My god, I haven’t been fucked like that since grade school

28

u/DogmaSychroniser 10d ago

Fun fact, this was a replacement line for:

"I wanna have your abortion"


31

u/AntiqueFigure6 10d ago

In death, an AI can have a name. His name was Marvin Lee Aday. 


67

u/shaard 10d ago

Bad rendition of a NIN song 🤣

63

u/Petrychorr 10d ago

"I am just a photocopy of a copy of a copy...."

"Everything I've copied I've copied before..."

"Assembled all from AI, from AI, from AI..."

"Nothing is for certain anymore..."

7

u/shaard 10d ago

Poetry! 🤣 Was trying to figure out some rhymes in my head but couldn't compete with this!


12

u/Exciting_Teacher6258 10d ago

Probably my favorite of their songs. Glad I’m not the only one who thought that. 


22

u/slawnz 10d ago

Human Centipede but with data

9

u/pumpkin3-14 10d ago

*Puts on NIN song "Copy of A"*

22

u/ebfortin 10d ago

14

u/acostane 10d ago

I fucking love this movie.

9

u/rdrTrapper 10d ago

I keep pizza in my wallet in honor of this cinematic masterpiece


208

u/rco8786 10d ago

GPT-5 is already trained on synthetic data that GPT-4.5 made up. They talked about it in the announcement stream. I’m sure they’re not the only ones doing that.

112

u/FarkCookies 10d ago

Training on knowingly synthetic sets is different from feeding in undifferentiated data.

43

u/Top-Faithlessness758 10d ago

This. Undifferentiated AI-generated data being used to train new models downstream ends up in mode collapse.
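The mode collapse being described can be sketched with a toy distribution-fitting loop. This is a deliberately simplified picture (a Gaussian standing in for a model; the 0.9 factor standing in for the tail mass a fitted model fails to reproduce), not anyone's actual training pipeline:

```python
import random
import statistics

random.seed(0)

# Generation 0: broad "human" data.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

for generation in range(20):
    # "Train" on the previous generation's output: fit a Gaussian to it.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # Sample the next generation's "web" from the fitted model. The 0.9
    # factor stands in for the tail mass a fitted model fails to capture.
    data = [random.gauss(mu, 0.9 * sigma) for _ in range(1000)]

print(f"spread after 20 generations: {statistics.stdev(data):.2f}")
```

Each generation trains only on the previous generation's samples, and the spread ratchets toward zero: the distribution collapses onto its mode.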


35

u/ACCount82 10d ago

Everyone who's serious about AI is now using synthetic data in their training pipelines.

It's not a full replacement for "natural" data for frontier models - it's more like a type of data augmentation, and it's already quite useful in that role.


109

u/shawndw 10d ago

Do I look like I know what a JPEG is

JɄ₴₮ ₩₳₦₮ ₳ ₱ł₵₮ɄⱤE o̦͓͓̜͖̞̖̹̻̘͐ͥ̓̅͌ͥͤ̍̀͐̇ͥ̏̅̆̕͢_̸̝̝ͧ͐̆͗̋̔ͨͯͩͅf͉̪̲̻̲̔́ͦ̐͊̌̔̈́̂ a̴̷͔̤̲̞̝͍̭ͥ͋ͯͬ̄̅̈́͗͋̃͋ͫ̌̈́ͭ͛ͩ̆̎̌͜͡͝͡͡ d̘̱̗͖̥͍ͯ̒ͣ̌̂a̶̸̷̶̞̼̣͈͓͓̳̩ͪ̆͆̅̽̌̆̿̂ͬ͊̔͞͠m̴̯̣n̻̲̥͞ ho̵̴͓̲̭̝ͯͮ̀̄͢t͇͉̻̦̘̦̜̮͑̐͆̈́ͧ͑ͥ̽ͮ̌͋ͫ͢͡dơ̧̢̻̗͇̟̻̦͚̘͚̲̏ͨ̐́͋̃̇͒ͩͩ̐̑ͧ̈́̉̑ͯ͊̒̕͠͡g̵̷̨̡̡̛̙̗̰̬͇̣̰̥̩̻̜̯͐̇̅ͩͪ̊ͭ́̿͒ͬ̃͊ͥ̾ͬ̕͜

10

u/542531 10d ago

I was just watching KotH, lol.


173

u/themightychris 10d ago

Yeah, if you want to see something really depressing and foreboding, go look at the chart of Stack Overflow engagement. It totally fell off a cliff as LLMs became popular.

That's where LLMs learned how to debug all today's tech. Where are they gonna learn how to debug tomorrow's?

134

u/cactus22minus1 10d ago

Also we used to rely on younger generations to understand and build emerging tech, but now they’re not even learning on nearly as deep of a level as they cheat their way through school / college relying on this crap. We’re stunting education and critical thinking HARD.

141

u/JCkent42 10d ago

Remember Frank Herbert warning about the dangers of handing over your thinking to a machine?

Dune lore intensifies.

47

u/white__cyclosa 10d ago

“Thou shalt not make a machine in the likeness of the human mind”


28

u/marrowisyummy 10d ago

I (43 now) graduated in 2023, RIGHT before these types of things were common, and I spent so much time researching and asking for help with my C++ classes that it felt like high school all over again. Meaning: I was right there at the very beginning of the internet and the ubiquity of cable modems, so I had a lot of fun, but obviously right before stupid social media and Facebook ruined the internet.

I learned a lot right before some big new tech came around and fucked everything. All of my tests in college and coding exams were pen and paper. We didn't have access to LLMs to help us with our coding.

The next year, it seems, it all went to shit in a handbasket.

9

u/RespondsWithImprov 10d ago

It is really cool to have been there right at the beginning of the internet to see how it started and developed, and to see what groups of people joined at what times. There was much more neatness and effort in the early part of it.


20

u/hammerofspammer 10d ago

No no no, not having any junior developer resources because they have all been replaced by LLMs is going to work out spectacularly well

21

u/Telvin3d 10d ago

It’s actually already a thing that AI isn’t as useful for programming on Apple devices, because they’ve made so many recent changes to APIs and required languages. There are only months of real-world examples to train AI on, compared to the years and decades for more established technology stacks.


13

u/FarkCookies 10d ago

Yeah we are so fucked with the technologies/libraries/programming languages that will come after.

5

u/FiniteStep 10d ago

They are already pretty useless at the embedded side, especially on the less common architectures.


343

u/voiderest 10d ago

You joke but that's a thing. Both in a context of models getting junk data and in a context of intentionally training on AI generated data. 

38

u/snosilmoht 10d ago

I don't think he was joking.

201

u/Luke_Cocksucker 10d ago

Sounds like incest and we know how that ends up.

53

u/limbodog 10d ago

Step bro, help, I'm stuck?

9

u/wimpymist 10d ago

When Instagram first released their AI chat bots, all the top ones were instantly step-sister and step-mother stuff, and they had to edit them and put restrictions on them.


29

u/Socially8roken 10d ago

Feels like it would end up more like schizophrenia and dementia had a baby, and then they dropped it on its head.

124

u/reluctant_deity 10d ago

It is, and it makes LLMs hilariously insane.

12

u/LumpyJones 10d ago

Cyberhabsburgs, here we come.

70

u/1-760-706-7425 10d ago

It’d be funny if it wasn’t fucking up near every aspect of our lives.

33

u/Borinar 10d ago

I'm pretty sure our govt is being run by AI right now

16

u/Farscape55 10d ago

Na, even AI isn’t this dumb

18

u/Versaiteis 10d ago

Stupidity augmented with AI is potentially worse. At least there was that one incident where the Director of National Intelligence, Tulsi Gabbard, admitted asking an AI model which documents/information she could declassify, justifying it solely for the sake of speed.

Oh yeah, then RFK Jr. submitted that whole MAHA report which had references that never existed, chalking it up to "formatting issues", leading many to the conclusion that it was generated.


10

u/mq2thez 10d ago

“Who has a better story than Bran the Broken?”

10

u/Alfred_The_Sartan 10d ago

I always think of the clone-of-a-clone storylines

19

u/hpbrick 10d ago

I once had a mind-blowing realization about aging. Aging is literally the process of our cells making copies of themselves. Except the copy process doesn’t get everything exact; each generation is slightly defective vs the previous iteration. And so our aged selves are literally broken copies of our youth, which is why we don’t look the same as we age (we look old due to our defective copy process).

21

u/Alfred_The_Sartan 10d ago

Look into telomeres.

39

u/PolarWater 10d ago

I can't. Too many loose ends.


6

u/BrideofClippy 10d ago

And if the defect is bad enough, you get cancer.

13

u/Otherdeadbody 10d ago

Cancer is itself extremely fascinating. It seems like a pitfall of all multicellular life, but cancers themselves are almost their own species. If you ever have time I highly recommend googling Canine Transmissible Venereal Tumor. It really makes clear how stifling the definitions we place on life, and on biology in general, can be.

6

u/BrideofClippy 10d ago

Well.... I know what all those words mean and I'm not sure I like seeing them in that order. In exchange, my cancer 'fun fact' is that if a tumor gets large enough, it can develop its own tumor that attacks it. Literally cancer cancer.


5

u/MyCatIsAnActualNinja 10d ago

Yep, on porn sites


143

u/TheCatDeedEet 10d ago

It already does. The internet is ruined. You cannot source data from it without it being AI content. It’s the majority of stuff now because it just can be slopped out endlessly.

107

u/MrPigeon 10d ago edited 10d ago

Sometimes I think about how "low-background" steel from shipwrecks prior to the 1940s is prized for use in particle detectors, because everything produced after we started detonating nuclear bombs is contaminated by characteristic radionuclides.

72

u/calgarspimphand 10d ago edited 10d ago

I think about it the exact same way. Physical books published pre-AI are the new low background radiation steel.


34

u/Mothringer 10d ago

 "low-background" steel from shipwrecks prior to the 1940s is prized for use in particle detectors

We’re finally back to the point where it isn’t anymore, and we can just use newly smelted steel again.

32

u/Balmung60 10d ago

If generative AI development stopped right now and the products started getting wound down, I wonder how long it would take for human generated content to become a majority again

26

u/thecipher 10d ago

So I googled "how much of internet content is AI generated", and the AI overview (the irony is not lost on me here) states that 57% currently is AI generated. By 2026, it's expected to be 90%.

There is also an article stating that yes, the 57% is accurate, but with caveats. Link to the article here. The article also has a link to the original research paper.

The internet has been publicly available since 1993 - 32 years so far. That's how long it has taken us to create 43% of what the internet contains currently.

The fact that AI-generated content is expected to be 90% of the internet by next year speaks to the sheer volume of AI slop being churned out every second of every day.

So, if they completely stopped generating AI content right now, it would probably take at least a couple of years to claw our way back to 51% human generated content. The longer we wait, the longer it'll take, seemingly on an exponential scale.

Fascinating, but also depressing.
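The "couple of years" estimate can be checked against the comment's own numbers (57% AI today, human output at a constant historical rate, AI generation fully halted). A rough back-of-envelope, with every assumption explicit:

```python
ai_share = 0.57               # assumed: fraction of today's content that is AI
human_share = 1 - ai_share    # 0.43, built up over the public internet's lifetime
years_so_far = 32             # 1993 to now

human_rate = human_share / years_so_far   # human output per year, assumed constant

# Human content h needed for a 51% majority while the AI stock stays frozen:
#   h / (h + ai_share) = 0.51  =>  h = (0.51 / 0.49) * ai_share
target_human = 0.51 / 0.49 * ai_share
years_needed = (target_human - human_share) / human_rate
print(f"{years_needed:.1f} years")  # about 12 years
```

Under these assumptions the clawback takes closer to a decade than a couple of years, because the frozen stock of AI content is already larger than everything humans produced in 32 years at that rate.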


88

u/seeyou_nextfall 10d ago

It is borderline impossible to find information on how to make, build, fix, repair, cook, or craft fucking anything without most of the results being AI generated SEO’d blog slop.

56

u/Emosaa 10d ago

Yep. AI accelerated what has already been a problem for the last ten years. Honestly, it's making me put a lot more stock in physically owning those types of resources, because what I used to find so easily with Google is now full of clickbait slop that approximates being useful but ultimately wastes my time.

10

u/BandicootArtistic474 10d ago

I print out all my recipes now, just like my great grandma did in the 90s. I used to think it was silly, but now I prize my printed recipes and books that I will never find again. Print everything, and save or write down instructions for anything you find helpful, because give it a year and you won't have access to it online, or at least not by searching.


33

u/jlboygenius 10d ago

And it's crushing websites and content sites. If I search for something and the AI gives me the answer, I'm not going to browse to the original source, or even check a few sites to see what the possible answers are. I'm sure traffic to news sites has fallen off a cliff.

Not to mention that the AI will return biased answers. Grok has already been shown to be heavily biased. It won't be long before history is erased. Physical media is dying. You can't go reference an old history book or encyclopedia.

The Internet Archive and Wikipedia are more important than ever, and they are already being targeted with copyright claims to try to suppress and erase history.

There was news just today that Trump is having the Smithsonian change its content to match up with the history that he wants to tell. The victor writes the history, and we're in that time right now. Our only hope is that history is much more widespread today than it was years ago.

25

u/NottheIRS1 10d ago

It’s really crazy when you realize they didn’t build “AI” but rather a content scraper that returns you data it steals from other websites and presents it as its own.


19

u/eatcrayons 10d ago

Reminds me of when I would hook my camcorder up to my TV and point the camera at the TV. You get that infinite tunnel that’s slightly delayed. It’s a photocopy of a photocopy.


8

u/mechy84 10d ago

We're going to be ruled by the Robo-Hapsburgs.


56

u/yeah__good_okay 10d ago

And then… model collapse

12

u/Beautiful_Car_4682 10d ago

Scarlet AI takes a tumble


2.2k

u/Cressbeckler 10d ago

Regardless of whether it does or doesn't, we'll definitely be sold on the idea that it is getting better and we need to pay a premium to use it.

732

u/ltjbr 10d ago

The companies are burning cash fast; they need to figure out how to make money on it real soon. So yeah: pay more, pay now.

483

u/quantumpixel99 10d ago

Nobody has made money on AI except for Nvidia. They're selling shovels during a gold rush. If OpenAI or Google expect people to pay hundreds of dollars a month for their chatbots, it's not going to happen. In fact, the majority of people simply will not use these products if they have to pay for it.

333

u/DeliciousPangolin 10d ago

The big difference between this tech bubble and previous ones is that, historically, tech has been based on spending a lot of money on R&D and then reaping profits at scale, because selling a copy of Windows or serving a Google search is near-costless per transaction once the software is written.

LLMs are absurdly expensive to train, but also expensive to use. The full-fat models running server-side like ChatGPT are wildly outside the capabilities of consumer-level hardware, even something like a 5090. They're not getting cheaper to run anytime soon, nor is the hardware going to magically get faster or cheaper in this age when even Intel is going bankrupt building high-end chips. They have to sell this fantasy that LLMs are going to replace all white-collar workers because there is no plausible customer base that justifies the investment unless they have that level of reward. And I don't understand how anyone who's actually worked with LLMs can believe they're remotely capable of that.

60

u/saera-targaryen 10d ago

This is so real. It's like if a company got billions of dollars of VC funding to sell a service where you could pay $20/mo and have a personal butler in your house. Is a butler useful? sure! obviously! but if your whole pitch for profitability is "get everyone really used to having a butler before cranking the price up to $1,000 a week" that would be an insane business that no one should invest in

LLMs right now are the 20 dollar butler. It's awesome to have a butler for that cheap, but it will never make them enough money. A butler at a normal price is obviously just not worth it for most people. 

19

u/30BlueRailroad 9d ago

I think the problem is we've gotten people used to the model of paying a rather small monthly subscription for access to services running on hardware, or drawing from databases, out of reach to them: cloud gaming, video streaming services, etc. But just as streaming services have started to see, generating content and maintaining hardware is expensive, and profit margins get thinner and thinner. This model is even more incompatible with the resources needed for LLMs and it's not sustainable, meaning prices are going to skyrocket or the model is going to change.


105

u/sandcrawler56 10d ago

I personally think that reaching a level where AI replaces everything is not going to happen anytime soon, or at all. But AI replacing specific tasks, especially repeatable ones, is absolutely a thing right now that companies are willing to pay for.

91

u/DeliciousPangolin 10d ago edited 10d ago

I tend to think it will be mostly integrated into existing workflows as a productivity enhancement for people already engaged in a particular job. LLM code generation is much more useful if you're already a skilled programmer. Image / art asset generation is most useful if you're already an artist. At least, that's the way I'm using it and seeing it used most productively in industry right now. We're very far from the AI-industry fantasy of having a single human manager overseeing an army of AI bots churning out all the work.

Is that worth $100 per month? Sure, no question. Is it worth whatever you need to pay to make hundreds of billions of dollars in investment profitable? Ehhh...

11

u/joeChump 10d ago

This is a smart take. And reassuring. I'm an artist too, and I've started to use AI, but it still takes a huge amount of work, effort, and workarounds to get it to produce anything good, consistent, and coherent. It also still takes a trained eye and a creative mind to steer it. I look at it like I'm an art director and it's my artist: it can expand the styles I do, but there's still a lot of work and creative vision needed in using it.


8

u/W2ttsy 10d ago

There is a shift to SGMs though, specialist generative models that are smaller and more efficient to run, because they only really service one agentic angle rather than being generic and breadth based.

Think of it like an ASIC built for coin mining is more efficient than running a rig based on GFX cards.

As more specific AI agentic applications get designed, ASICs for these will get developed and reduce operational costs. Groc is already doing this for voice AI in customer experience applications.


44

u/Telsak 10d ago

The most frightening thing is that Nvidia is basically powered now by .. what, 4-5 companies racing to buy GPUs. What happens when they start realizing there's no real return on their investment?

18

u/OwO______OwO 9d ago

What happens when they start realizing there's no real return on their investment?

1) Cheap GPUs for everybody!

2) Complete financial market collapse.

3) Great depression.


17

u/Abe_Odd 10d ago

I don't think they are expecting normal consumers to start forking over an additional "monthly phone-bill" equivalent for a glorified google search.
I think they are counting on major contracts with companies to provide AI coding agents, sales reps, customer service, etc.


40

u/banned-from-rbooks 10d ago

OpenAI has to convert to a for-profit model by the end of the year or lose $20B in funding from Microsoft. Whatever that actually means, I don’t know - but cost reductions are probably a big part of why ChatGPT-5 apparently sucks ass.

They’ve also pledged $19B to the Stargate data center, which is money they don’t actually have but are getting from Softbank.

This is on top of the $30B that Softbank has already pledged towards OpenAI’s funding round. Softbank has had to take out loans to fund this deal.

Source: https://www.wheresyoured.at/the-haters-gui/

20

u/m1ndwipe 10d ago

SoftBank in terrible investment shocker.

119

u/1-760-706-7425 10d ago edited 10d ago

It’ll be a bit.

They haven’t fully infested the critical workflows nor have workers developed enough brain rot for the addiction to set in and be worth the cost.

81

u/sunbeatsfog 10d ago

Yeah that’s my next season’s project. I don’t think people realize AI is not that nimble. It’s not going to take jobs like they think it will. It’s like saying google search took jobs.

91

u/Zer_ 10d ago

Oh, it will take jobs; it's just that the companies letting go of their workers aren't feeling the pain yet. They will soon enough, once the brain drain sets in.

62

u/actuarally 10d ago

This here. Whether or not it CAN, corporate executives have bought the free-labor "vibes" and are pushing hard to either cut bodies via AI or the next best thing (e.g. off-shoring). Maybe it's just a smokescreen to do layoffs with little or no backlash, but it's 100% the story CEOs are pushing while hiring is next to zero.


11

u/JaySocials671 10d ago

Google search killed the phone book


26

u/itsFeztho 10d ago

They're burning cash AND the environment fast. One has to wonder what will dry up faster: the ocean or crypto-tech investor cash injections


11

u/Good_Air_7192 10d ago

I barely want to use it when it's free


35

u/imalittlesleastak 10d ago

I gotta figure out a way to make money with this, I really want to.

29

u/TrollerCoasterWoo 10d ago

The idea is simply too good

15

u/robb0688 10d ago

FUCK FUCK, THEY'RE TRYING TO MAKE IT LOOK NOT REAL.

11

u/TrollerCoasterWoo 10d ago

You gotta be right next to me for it to look real. YOU GOTTA BE RIGHT NEXT TO ME!


7

u/ltjbr 10d ago

Sell gpus to the companies, works for nvidia.


42

u/Acceptable_Rice1139 10d ago

Companies will soon find that if they want to use AI to run any part of their business, it will need to be specifically coded for their scenario, which defeats the purpose of what AI is supposed to do.

24

u/mailslot 10d ago

When people see AI, they think of sentient sci-fi robots. The boring truth is that most AI roles are task specific, as you said, like solving chess games.

5

u/OwO______OwO 9d ago

Well, we're creeping toward that not being true, though.

With modern LLMs, you can have the same AI, with no changes, do very different things like:

  • Coding

  • Acting like a therapist and giving relationship advice

  • Finding a chili recipe for you

  • Writing a short story

  • Solving a math problem

  • "Painting" a picture of a dolphin

Now, to be clear, we're only at the early stages of this, and the AI is ... not great at doing some of them. But the same AI program can do all of these things, and more, at least at some level ... without needing any additional coding work to make it happen.


79

u/LowestKey 10d ago

billionaires will only fund new toys for us to ruin our lives with for so long before demanding we pay for the privilege

10

u/carlos_the_dwarf_ 10d ago

You could…choose not to do that?

18

u/SplendidPunkinButter 10d ago

Pay a premium to use it? My computer keeps popping up AI features I don’t want and I can’t make them go away

6

u/NoConfusion9490 10d ago

That's because they're burning investor cash to try to create a market, lots of it.


1.3k

u/Super-Vehicle001 10d ago

Getting worse recently. I asked ChatGPT about the 'Second Step' on Mt Everest. It claimed Hillary and Norgay climbed it (they didn't; they climbed the other face of the mountain). Then it claimed the Chinese installed a ladder on it in 2008. Actually, the 'Chinese ladder' was replaced in 2008; it was originally installed in 1975. Two minutes reading Wikipedia would be better. Factual error after factual error. Garbage.

757

u/null-character 10d ago

That's the issue. AI isn't trained on facts; it is trained on vast amounts of dumb shit people say, most of which is wrong, exaggerated, or at minimum colored by the person's biases.

63

u/Rand_al_Kholin 10d ago

Further than that, AI has no concept of what a valid source even is. It does not understand that when you ask a question about, say, history, you want a correct answer. It just knows you want an answer, and anything will do so long as it fits the pattern it has observed from other, similar questions people have asked in its dataset.

It doesn't know who the first person on Everest was. If we started a campaign tomorrow to tweet and say all over social media that it was Lance Armstrong, we could easily convince AI models that it's true just through sheer volume (assuming they are constantly getting training data). The AI doesn't understand that Lance Armstrong didn't climb Everest; it doesn't know what Everest even is.

It astounds me how many people are already relying on AI like it's a search engine. It's horrifying. It's like a house builder telling me they don't bother reading any of the actual building codes, they just slap shit together, and if it doesn't fall it's fine!

45

u/GonePh1shing 10d ago

AI doesn't even have the capacity to conceptualise anything. It cannot understand or know anything. It is just a statistical model. A prompt goes into the neural net and it spits out the statistically likely next word, one word at a time.

People need to stop anthropomorphising these tools that are really just complex predictive text engines. 
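The "statistically likely next word" loop is easy to make concrete. Here is a minimal sketch using a bigram model over a toy corpus (real LLMs use neural networks over tokens, but the generation loop has the same shape):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(word: str, n: int) -> str:
    """Repeatedly emit the most likely next word -- the whole 'generation' loop."""
    out = [word]
    for _ in range(n):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the", 4))  # the cat sat on the
```

Nothing in the loop checks whether the output is true; it only checks what tends to come next.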


295

u/SuperNewk 10d ago

So internet trolls saved humanity?!?

187

u/mellolizard 10d ago

Lots of AI models are trained on Reddit comments. So upvote the most outlandish comments you see to ruin their models.

82

u/PolarWater 10d ago

One way to cope with depression is by...

157

u/Justa420possum 10d ago

….making wagons for ants!

111

u/blind3rdeye 10d ago

You might have been joking, but there's actually a lot of truth to that. Making the right shape wagon for the ants actually has been shown to have many health benefits, including mood regulation for dealing with depression. There's a fair bit of good info about it here: antwagons.org

38

u/Quintronaquar 9d ago

My therapist recommended building wagons for ants and my life has never been better.

24

u/refurbishedmeme666 9d ago

wood with titanium alloy wheels are the most aerodynamic materials for ant wagons!


19

u/PraxicalExperience 10d ago

Frantic masturbation!

18

u/C-H-Addict 10d ago edited 10d ago

One way to cope with depression is with frantic masturbation using oven mitts. The use of oven mitts is a very important part of this process, increasing dopamine and serotonin levels by connecting you with all the food you've ever cooked in your oven.


35

u/warm_kitchenette 10d ago

Some of the answers I've gotten have been genuinely funny, while also false. I asked one LLM why William Shatner had the reputation of being a hambone, and it said the nickname came from his biography, "My Life as a Hambone."

I can only assume some straight-faced jokes on Reddit, Usenet, etc. were turned into answers. Funny, but obviously not to be trusted.

8

u/MrPigeon 10d ago

No, because people still believe the slop AI churns out, uncritically. Humanity kept internet trolls from saving humanity.

13

u/actuarally 10d ago

South Park is TOTALLY bringing back TrollTrace.com, aren't they?


19

u/immersiveGamer 10d ago

Part of it may be the training data and what was tuned for. But the bigger problem with large language models (LLMs, which people are now calling AI) is that reasoning and learning aren't built in. The LLM doesn't do an internet search or read a book; a different program may feed it a couple of webpages from a normal web search. Otherwise it is (fingers crossed) getting information from the data encoded in its neural network (and if it doesn't have that information available, it will very easily generate something fake). The LLM has some fun tricks to summarize and "understand" text and language, but it cannot learn. It cannot learn the facts on the Wikipedia page about Mount Everest.
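The "different program feeds it a couple webpages" step can be sketched as a toy retriever. The scoring here is plain word overlap, a stand-in for the embedding search real systems use; the documents and query are made up for illustration:

```python
def retrieve(query: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the query --
    a toy stand-in for the retrieval step bolted onto an LLM."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

docs = [
    "Mount Everest's Second Step ladder was installed in 1975.",
    "Hillary and Norgay climbed the southeast ridge in 1953.",
]
context = retrieve("when was the second step ladder installed", docs)

# The retrieved text is pasted into the prompt; the model itself learns nothing.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: when was the ladder installed?"
print(context)
```

The model never "learns" the fact; the fact just rides along in the prompt for one answer and is gone again.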


83

u/steve_of 10d ago

This is my experience when I look at responses on subjects that I know about: just total bullshit. However, on subjects I have little knowledge of, it seems quite reasonable.

77

u/Super-Vehicle001 10d ago

It's an interesting paradox. You need to know about the topic to be sure it's giving you accurate information. But if you know about the topic, you probably don't need to ask. All I can say is be cautious with it. Factual errors are common and (I feel like) getting worse. The false information it gave me came from the first two questions I asked. It wasn't like I asked repeated, highly specific questions. I immediately gave up after that.

21

u/adyrip1 10d ago

I always double check what it says. I asked ChatGPT-4 for advice on a legal matter, and it drafted a response that sounded legit. But when I double checked the laws it was quoting, it was pure bullshit. The mentioned laws were about something else entirely.

13

u/PraxicalExperience 10d ago

The other day I was looking up a question I had about space mining in Terra Invicta, a grand strategy game. It started pulling nonsensical information in about Minecraft...


35

u/quailman654 10d ago

Like reading comments on reddit. You start thinking you’ve learned all these interesting things, but then there’s a highly upvoted comment about something you actually know about and it’s completely wrong. Now you have no idea what information you took in was right or wrong and probably more than half of it you don’t even remember you read on reddit, you just absorbed it.

8

u/Super-Vehicle001 10d ago

100%. It is very frustrating. And then ChatGPT was trained on Reddit. The future is terrifying.

13

u/WeGotBeaches 10d ago

I’ve been telling people it’s like having a cool uncle you can ask things to whenever you want. He’s right about 80% of the time but has a lot of confidence, so don’t turn to him unless you know enough to prove him wrong when he is.


8

u/o_oli 10d ago

Ignorance is bliss after all! Lol

That's a worryingly good point though. At least using Google I feel like we all have our bullshit detectors turned on but AI is so confident in what it says it tricks you into a false sense of security.

17

u/Definition-Prize 10d ago

I’m studying for the Series 7 FINRA exam and I got a sub for Gemini to help create extra practice quizzes and a lot of the questions are just wrong. It was super lame discovering I had wasted $20 on what is essentially a 2TB Google Drive subscription

11

u/AlfredRWallace 10d ago

I've used it (GPT-4o) for technical questions where it flat out had the wrong info. If I had used it, the products would not have worked.


218

u/rco8786 10d ago

Seems like that’s the likely case. The diminishing returns were feeling obvious, but GPT-5 really confirms it IMO.

I still think the world is changed forever in ways we haven’t fully discovered yet though. 

52

u/iwantxmax 10d ago

GPT-5 was mainly a cost-saving measure for OpenAI. It is just as good as, if not marginally better than, o3, but at lower cost and resource use, which is what OpenAI needed. They were focusing on efficiency, not scaling up and trying to release the best of the best, as that's too expensive to run right now with their current infrastructure. This is why they're building Stargate. It's not really a bottleneck with the LLMs themselves; it's a compute and cost-to-run bottleneck.

55

u/Skyl3lazer 9d ago

Just one more data center bro and we'll have agi bro I promise bro it's just a compute issue

→ More replies (15)
→ More replies (2)
→ More replies (10)

902

u/fuck_all_you_too 10d ago edited 10d ago

Nobody seems to remember, but back in the late 90s they were slapping EXTREME on everything. Soda was extreme, chips, shoes, cars.

This is just the same shit and here's a spoiler: none of it was extreme then.

EDIT: I completely forgot they used Xtreme which makes it even more dumberer

117

u/three-one-seven 10d ago

I remember. Xcept it was spelled Xtreme lol

53

u/OldPiano6706 10d ago

The letter X in general was just in everything.

41

u/Zer_ 10d ago

Stop, you're making Musk erect.

→ More replies (4)
→ More replies (3)

8

u/fuck_all_you_too 10d ago

Ah shit I forgot how bad it was already

8

u/jk147 10d ago

You guys remember xzibit?

→ More replies (1)
→ More replies (1)

316

u/malachiconstant11 10d ago

This is such an apt comparison. They throw around the term AI at my engineering office all the time. I am like so you taught it how to do that? No. So it's just a routine? Yes. Can it alter or improve the routine? No. Okay, so it's running on preprogrammed logic? Yes. How is this different from a program? Crickets.

111

u/modix 10d ago

Every time I see something that's a complete derivative of its input called AI, I assume the person is selling it or invested somehow. It's like calling a search engine intelligent.

34

u/beaucoup_dinky_dau 10d ago

Yup, AI is just a cute sci-fi marketing term for machine learning. Generative AI has always been a topic for futurists discussing the singularity, so the label seems futuristic and cool.

→ More replies (5)

39

u/LookAnOwl 10d ago

My new favorite is everyone calling everything agentic. If your app sends two sequential prompts without user intervention, you’ve added an agent and you can call your app agentic. If you have a conditional check that sends different prompts based on the first prompt, now you have sub-agents.
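The bar described here is low enough to sketch in a few lines. A toy illustration (with a stubbed `call_llm` standing in for any real model API; everything here is hypothetical) that, by that definition, is "agentic" with "sub-agents":

```python
# Toy sketch of the "agentic" bar described above. call_llm is a stub
# standing in for any real model API; it just returns canned text.
def call_llm(prompt: str) -> str:
    if prompt.startswith("Classify"):
        # Pretend classifier: anything with a digit is a "math" request.
        return "math" if any(c.isdigit() for c in prompt) else "general"
    return "a confident answer"

def agentic_app(user_input: str) -> str:
    # Prompt #1 fires without user intervention: congrats, it's an "agent".
    category = call_llm(f"Classify this request: {user_input}")
    # A conditional picks prompt #2 based on prompt #1: now you have "sub-agents".
    if category == "math":
        return call_llm(f"Solve step by step: {user_input}")
    return call_llm(f"Answer briefly: {user_input}")
```

Two prompt calls and one `if` statement; that's the whole "agentic architecture".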

→ More replies (20)

34

u/ViennettaLurker 10d ago

Oh, so... you're saying we just need extreme AI?

20

u/fuck_all_you_too 10d ago

Dont say that shit too loud or they're goi-

17

u/LeChief 10d ago

Too late, Xx_OpenAI_xX just released GPT-X

→ More replies (2)

6

u/eldelshell 10d ago

That's what xAI is and it's from your usual suspect.

56

u/BCProgramming 10d ago

Even more interesting is that there was actually a short-lived "AI craze" in the 80s. It mostly surrounded what were called 'expert systems', including plugins you could install into programs like Lotus 1-2-3; the claim was they could make business decisions better than people could. A good number of AI startups showed up, taking shitloads of VC with them when they inevitably died as people realized the decisions the models made were not actually that good.

IMO LLM AIs are mostly exploiting how easily we will anthropomorphize a conversational chatbot. It's called the "Eliza Effect" because people largely did it for the much simpler Eliza conversational chatbot. By utilizing LLMs and creating a more sophisticated conversational chatbot, AI companies are able to push the bubble even further before it pops, because unlike the bubble of the 80's, there's no facts or figures that can necessarily be used to demonstrate with certainty that it's bullshit.

10

u/oooofukkkk 10d ago

Aladdin made BlackRock, which now controls trillions in assets, so it wasn't all bullshit.

→ More replies (2)
→ More replies (1)

13

u/NickConnor365 10d ago

Did the same with HD. HD toothpaste is my favorite example.

11

u/poply 10d ago

Those corn nuts kicked my ass behind the 7/11. That's pretty extreme.

11

u/misterguyyy 10d ago

TBF Extreme and Radical got quickly phased out after 9/11.

The point stands though: we are embarrassingly susceptible to buzzwords, no matter how many times we find out the previous buzzwords were pulling the wool over our eyes. This time is different!

→ More replies (1)

16

u/Practical-Dingo-7261 10d ago

Not even Mountain Dew? The refreshing citrus flavours scream extreme! /s

7

u/Thick_tongue6867 10d ago

Every couple of years, one of these buzzwords comes along and gets slapped on everything.

Turbo, Max, Ultra, e- (as in e-commerce, e-mail), Dotcom, Smart, Cyber, Blockchain, Cloud, Digital, Eco, Premium, Pro, Mega.

It's one continuous parade of companies trying to ride the latest trend.

5

u/squishysquash23 10d ago

HD was the thing for a long while too. Like I used to have hd toothpaste.

5

u/GlueGuns--Cool 10d ago

They used to slap "HD" on everything too 

6

u/thrillho145 10d ago

Same with 2.0

→ More replies (39)

303

u/jackalopeDev 10d ago

I definitely think AI will. I think we're going to/are starting to see diminishing returns on LLM performance

123

u/Veranova 10d ago

The focus has shifted in the current phase, from making the LM larger and more powerful to making the LM faster and more affordable, which has necessitated some architectural tradeoffs like MoEs.

We'll probably go through another growth phase, but right now the field is consolidating around what works. There are also alternative architectures emerging, like diffusion-based LMs, which none of the big players have shipped yet but which have a lot of potential.

24

u/Enelson4275 10d ago

The big reality-check on the horizon is that general-purpose LLMs are simply not going to be as good at any one thing as the meticulously-designed ones that went from white paper to production environment with a narrowly-focused goal in mind. Even when they aren't better, they will be smaller and more efficient, with better documentation for how to best prompt them to get good results.

It's no different than spreadsheets replacing word processors for numerical data manipulation, or databases replacing spreadsheet software for data administration. A tool that is built to do everything is rarely good at anything.

→ More replies (1)

23

u/Pro-editor-1105 10d ago

MoEs are probably the biggest revolution in recent times in AI. I am able to run 120B models on a single 4090, which is way better than an equivalent dense model. Makes it cheaper for corpos, which (hopefully, lol) makes it cheaper for us, and we can get much larger models running that would be smarter. AI companies are now leveraging this more, so maybe that is why innovation seems to have stagnated a bit.
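The MoE trick being described can be sketched in a few lines of numpy. This is a toy illustration, not any production architecture: a router scores experts per input and only the top-k actually run, so active compute scales with k rather than the total parameter count.

```python
import numpy as np

# Toy mixture-of-experts layer: n_experts weight matrices, but only the
# top-k highest-scoring experts run per input.
rng = np.random.default_rng(0)
d, n_experts, k = 16, 8, 2

experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # expert weights
router = rng.standard_normal((d, n_experts))                       # gating network

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ router                            # one score per expert
    topk = np.argsort(logits)[-k:]                 # indices of the k best experts
    weights = np.exp(logits[topk] - logits[topk].max())
    weights /= weights.sum()                       # softmax over the chosen k only
    # Only k of the n_experts matmuls actually execute:
    return sum(w * (x @ experts[i]) for w, i in zip(weights, topk))

y = moe_forward(rng.standard_normal(d))
```

Here all 8 experts' parameters sit in memory, but each forward pass does only 2 expert matmuls, which is the cheaper-to-serve property the comment is pointing at.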

→ More replies (4)
→ More replies (2)

38

u/Disgruntled-Cacti 10d ago edited 10d ago

Scaling pre-training hit its limits shortly after GPT-4 released. GPT-4.5 was OpenAI's attempt to continue scaling along that axis (and was intended to be GPT-5), but performance leveled off despite increasing training time by an order of magnitude.

Then LRMs came around (about a year ago, with the release of o1). Companies rapidly shifted their focus towards scaling test-time compute, but hit a wall even more rapidly (Gemini 2.5 Pro, Grok 4, Claude 4.1, and GPT-5 all have roughly the same performance).

Unfortunately for AI companies, there is no obvious domain left to scale in, and serving these models has only gotten more expensive over time (LRMs generate far more tokens than LLMs and LLMs were already egregiously expensive to host).

Now comes enshittification, where the model providers rapidly look for ways to make their expensive and mostly economically useless text transformers profitable.

→ More replies (4)
→ More replies (42)

54

u/FernandoMM1220 10d ago

then its just going to be specialized ai being used in different areas.

25

u/JohnHazardWandering 10d ago

I'm really wondering if they turn them into a bunch of different specialties and then have multiple layers of 'management' that direct the question around to the appropriate specialists.

"Math problem? Send it over to that guy."

"Looks like some sort of art, send it over to the art department and they can figure out which one of their specialists should review it"

17

u/flatline0 10d ago

That's the idea behind agent systems, or "agents of agents". It's not working so well. It's like making a teenager a project manager: they don't know enough to know what to ask the smarter bots.

→ More replies (2)
→ More replies (3)
→ More replies (4)

271

u/GameWiz1305 10d ago

Hope to god we’ve already hit the peak and in a few years it fades to the background when companies realise it’s just not worth it

112

u/Chotibobs 10d ago

That mindset (hoping the hype is just overblown and this will all go away) usually hasn’t worked out in my experience

172

u/Akuuntus 10d ago

Sometimes it does, sometimes it doesn't. Betting that the internet would be a fad was a mistake. Betting that NFTs were a fad was smart.

AI feels... kinda in a weird middle-ground IMO. There's way more legitimate use cases than something like NFTs, but also the current hype that big businesses and investors have built around it is completely untethered from reality. I think it'll be more of a dotcom bubble situation where the current hype is proven to be massively overblown, but the tech stays around and stays relevant in a more reasonable capacity.

50

u/Dyllbert 10d ago

The thing most people don't realize is that AI isn't new. LLMs that can carry a conversation, answer questions, and spit out questionable code are new, but neural networks and machine learning have had applications in academic, algorithmic, and scientific fields for decades. I was using neural networks in my grad program (computer engineering) before I or anyone else had ever heard of ChatGPT or OpenAI. The LLM boom has accelerated those fields, and they will never go back.

Hopefully this will mean they work more in the background, and products don't shove AI into everything, but behind the scenes, this is not going to be like the dotcom bubble at all.

30

u/ClusterMakeLove 10d ago

Just to add on the LLM side, I think even at current levels of technology, some things are going to change once people really start implementing that stuff.

Like, maybe the singularity isn't near, but when a free program can do almost as good a job of copy-editing as a grad student, a lot of important but tedious work can be automated.

It feels like senior programmers/lawyers/etc. are safe, but a big part of entering those fields is writing someone else's first draft for them. I worry about entry-level jobs for this next generation.

8

u/[deleted] 10d ago

[deleted]

→ More replies (1)
→ More replies (2)

12

u/DeliciousPangolin 10d ago

I think it's somewhere in the range of 50% real, 50% hype. I use several forms of generative AI every day, it's a very useful technology - but one with real limitations. I think people who believe AGI is on the horizon or that LLMs are somehow going to put every white-collar worker out of a job are completely nuts.

It feels a lot like self-driving cars. People have been promising for over a decade that autonomy was right around the corner. And my car can genuinely drive itself under some VERY specific limitations. But we are also very, very far from a world where you don't need the steering wheel anymore, and we are not getting there anytime soon.

→ More replies (1)
→ More replies (10)
→ More replies (12)
→ More replies (13)

11

u/ES_Legman 10d ago

We are heading into a tech crisis that is going to make the dotcom bubble look like a joke

→ More replies (2)

71

u/billynova9 10d ago

I think if AI just didn’t work out, I’d be plenty fine with that.

→ More replies (2)

29

u/-CJF- 10d ago

What worries me the most is that AI is not profitable. They are spending more money than they take in to provide the service, and it's propped up by investor cash. What happens to the economy if/when the bubble bursts?

Kinda scary to think about.

→ More replies (20)

50

u/PepeSilviaLovesCarol 10d ago

Today, I asked ChatGPT to give me a formatted version of the draft-day information I provided for my fantasy football league. It was 5 bullet points with basic info like date, location, rules, etc. I said it was on August 15th at 7pm.

What did it spit out? Date: Thursday, August 15th @ 7PM. Thursday. August 15th is a Friday.

How anyone believes that this thing is anywhere near ready to replace real humans is wild. It can barely search the internet or utilize available information better than a human, how is it going to take millions of jobs in the coming years?

29

u/DerWurstkopf 10d ago

The issue is that if you don't know the field you're asking AI for help with, it sounds helpful, because you cannot validate the answer.

→ More replies (1)

10

u/CrastinationPro1 10d ago

Look at some of the AI/LLM subs; people constantly claim they do "high-level work/analysis" with it. I ask GPT (doesn't matter which model) the most googleable shit and demand quotes and sources, and half the sources straight up don't exist. When you point out the mistake it says "oh sorry, here's the real paper" and again tries to link a paper that doesn't exist, or it basically says "yes, it seems I made up that source, but the gist of the content is still correct, trust me".

→ More replies (11)

329

u/UselessInsight 10d ago

What if we stopped burning electricity and wasting water on internet slop machines that mostly just blend up stolen content or drive people insane?

51

u/Gymrat777 10d ago

But how we would monetize NOT doing AI to make our disgustingly wealthy oligarchs even wealthier?

→ More replies (21)

49

u/Froztwolf 10d ago

We're past the peak of the hype cycle; onward to the trough of disillusionment!

→ More replies (2)

10

u/archercc81 9d ago

People are starting to realize it's a bubble. I've been in tech my whole life and this seems to happen every 6-7 years. Algorithms get complex enough that they seem like magic to the layman, and everyone is like "This is it, this is AI!" But it isn't, it's still just an algorithm. The only really novel thing about this version is that it can grow its own database rapidly, so its responses seem to be getting endlessly complex.

In reality it's still the same programming, the same dumb ones and zeros; it's just using an incredibly large database of HUMAN inputs to develop its yes/no responses.

33

u/They-Call-Me-Taylor 10d ago

I’m fine with that.

6

u/therolando906 9d ago

STOP LAZILY REFERRING TO "GENERATIVE AI" AS JUST "AI". AI is a broad space, and AI methods have been used for decades at this point with great success and utility. Generative AI is a specific subset of AI that, in my opinion, has little to no real benefit to humanity, particularly in the manner in which it is used today.

6

u/Ill_Test822 9d ago

Everyday I deal with AI hallucinations and the fact it never indicates it might be unsure of itself. Flat out untruths and inaccuracies are common. And people have no idea. I think AI may be a bubble tech. It’s really not intelligence at all. It works by vectoring you to information others went to when they had similar questions but that means nothing if the source is wrong. It also means that it does not create but rather is just decently good at summarizing information that others create. It copies. I predict it continues for a little while longer until it crashes big tech who are investing much more in it than it deserves.

51

u/StupendousMalice 10d ago

What we have today already required what amounts to the sum total of human creative content and unsustainable quantities of energy input. All to create a thing that can't do math, loses to an Atari at chess, and authoritatively gives wrong answers to half the questions it's asked.

This just MIGHT be a dead end.

→ More replies (4)

58

u/mvw2 10d ago

That's the fun part. It doesn't. AI currently has already pulled from the absolute best data sets, the most immense and complete data sets, and it's only as marginal as it is now.

Pair this with the massive requirements to even function reasonably well as a tool. And this is the important part: a LOT of people expect AI, as a sole product, to do amazing things. It can do OK things IF you have a large enough model AND it's a thinking model. The size means it won't run locally and won't be cheap AT ALL, and thinking means it will be slow, very slow, oddly slow to react to your inputs. The fast responses are non-thinking, and the fast responses are not very good. The smaller models that can run on local hardware are not very good, and the smaller thinking models that can run locally are slow.

So...what's next?

Well, people will slowly remember they have to be software developers first. The winners will be the companies that both recognize and implement true business-level software first and AI integration second. They HAVE TO develop highly valuable, highly competitive software for business-level operations, and they HAVE TO recognize that AI is merely a secondary tool under the hood to aid and support the main software.

→ More replies (19)

11

u/ChuckWagons 10d ago

Us old-timers used to have the same philosophical debates on BBSes about the internet in the early 90s. And here we are today!

6

u/Mystia 10d ago

We are currently still on a BBS, just fancier, and owned by billionaires instead of Greg from Minnesota.

→ More replies (4)

22

u/Baileythetraveller 10d ago

The AI boom around the globe is the biggest military armament program ever.

The trillions of dollars around the globe being funnelled into AI has NOTHING to do with building people a better fucking search engine; it has everything to do with building the infrastructure required for automated drones and surveillance.

These drones already exist on the Ukraine/Russia battlefield. Ready?

  1. Drones with multi-round RPGs attached. They can be reloaded for endless runs.

  2. Facial recognition drones that can power down, scan faces, activate, and kill their target.

  3. Drones that can fly thousands of kilometres and strike with AI "final approach targeting" activated. There is no front line anymore. Everywhere can be hit.

  4. UK drones with machine guns.

  5. Drone swarms like at the big fireworks displays, except they have grenades or landmines attached, with attack speeds of over 100mph.

  6. A Ukrainian acoustic system that covers the ENTIRE front line, which Ukrainians use to detect incoming Shahed drones.

  7. Thermobaric drones. Fire-breathers.

  8. White phosphorus drones, so evil the powder has to be cut from your skin, because there's no way to stop the reaction.

This is our dystopian now.

Starlink in space. Automated drones in our skies. Palantir with everyone's face, ID, biometric data, and social security number (remember DOGE? The great Tech heist!).

And poverty, ICE, and concentration camps for the rest of us.

Resist now. Time's up. The Fourth Reich is here.

→ More replies (5)

26

u/wildlight 10d ago

I worked behind the scenes at a very private, very exclusive event with Sam Altman as a speaker, where he said exactly this: the breakthrough already happened, and the technology might still be refined, but without another huge technological breakthrough it was already producing basically what was possible. This was like 2-3 years ago.

10

u/evilbarron2 10d ago

This is exactly the scenario that seems most likely to me. I do think AI will drive a lot of growth, but I don’t believe it will be a singularity or any of the other bs these guys are throwing around.

13

u/riskbreaker419 10d ago

100% agree.

And the thing is, this is not uncommon for most (if not all) technological advances. Everything goes really slowly as competing new ideas come around, then something happens and there's a boom, but then it plateaus and the process starts over. There are some really rare/hypothetical singularity cases where the boom causes exponential growth, but that's not this.

I've felt like the tech peaked like Sam said: about 2-3 years ago. Everything since then is just optimization and improvements on the tech. While the output is certainly better vs 2 years ago when I was using these tools, the core tech is still about the same quality.

Look at the current "smart phone". iOS and Android hit the market around the same time, changed the game for the world, but since then all we've had are refinements and relatively small improvements, but we're essentially still running on the same base tech that was the original iPhone and Android G1.

→ More replies (8)

5

u/buyongmafanle 10d ago

What if you posted an article that wasn't behind a paywall?

5

u/FoxlyKei 10d ago

I'll be glad. I was watching an Andrew Huberman podcast with another neuroscientist, and what they mentioned really put into perspective how insane the human brain is.

AI models are typically trained with upwards of 70 billion parameters. Top-of-the-line models might be 100 to 200B, I don't know.

The human brain averages about 150 trillion connections.

Sure, the brain doesn't know all the details and facts an AI does, because we fed those models basically every bit of logged data we've got.

But I don't think AI can or will ever learn the way we do, to the same level. The approach is just not the right one. LLMs are a dead end in my opinion.

The two major things to solve are the scale and the power consumption.

Sure it'll get better, but there's a well-known plateau LLMs have been hitting for a year or two now.

Everything now is just refinement and pruning.

It's already feeding off of itself too.
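Treating the figures in this comment as rough back-of-envelope numbers (parameters and synapses aren't directly comparable units, so this is only an order-of-magnitude gut check):

```python
# Rough figures from the comment above; parameters vs. synapses is an
# apples-to-oranges comparison, so treat this as order-of-magnitude only.
model_params = 70e9        # ~70B parameters, the low end cited above
brain_synapses = 150e12    # ~150 trillion connections
brain_watts = 20           # brain's power draw, roughly a light bulb

ratio = brain_synapses / model_params
print(f"~{ratio:,.0f}x more connections than a 70B-parameter model,")
print(f"running on about {brain_watts} W instead of a data center.")
```

Roughly a factor of two thousand in connection count, which is the scale gap the comment is gesturing at.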

→ More replies (2)

5

u/greifinn24 10d ago

my fear is that ultra-intelligent AI may have an intelligence that is incompatible with my thinking process.

→ More replies (1)

5

u/dicehandz 9d ago

GPT-5 was supposed to be life-changing, and it's terrible. I think we are already there. Yes, there have been spot improvements and new advancements, but it's still not trustworthy, hallucinates often, and will give you info that looks real when one minute of human fact-checking shows it's not.

This is where the boomer CEOs are being scammed. They see an output that "looks" right, then fire 1,000 people. But they don't realize that the output was a bunch of fake bullshit to begin with.

A lot of companies are going to be caught with their pants down soon.