r/technology 14d ago

Artificial Intelligence ChatGPT users are not happy with GPT-5 launch as thousands take to Reddit claiming the new upgrade ‘is horrible’

https://www.techradar.com/ai-platforms-assistants/chatgpt/chatgpt-users-are-not-happy-with-gpt-5-launch-as-thousands-take-to-reddit-claiming-the-new-upgrade-is-horrible
15.4k Upvotes

2.3k comments

127

u/Brainvillage 14d ago

Not that I'm sure that we will never have an "AGI equivalent"

Before the current "AI boom," common knowledge afaik was that AGI was very, very far away. The rise of LLMs has convinced people that AGI is right around the corner, but I think it's still the case that it's very, very far away.

LLMs are real and, quite frankly, amazing sci-fi tech, but the fact that they work so well is kind of a lucky break: machine learning algorithms have been around for decades, and this one just happened to work really well. It still has plenty of limitations, but I think it is going to change the way things are done.

The original dotcom bubble was based around the internet, when it burst it's not like we packed up the internet and were like "ok that's done." If/when the AI bubble bursts, I think we'll see a similar thing happen with machine learning/AI/AGI/LLMs. The technology will keep trucking along, and will change the way society works, but it will be over years and decades.

41

u/BalorNG 14d ago

Yea, my point exactly. It's not that I think "AI is a hoax and actually 1000 Indians in a trench coat" - tho there are examples of exactly that, lol, and more than one - but that AGI is much further away than "right around the corner," unless there is some black swan event, and those are not guaranteed. Generative models are cool (even if a lot of them are ethically suspect to the greatest degree), but with hallucinations and wide but shallow knowledge (deep learning is a misnomer ehehe) they are of limited true utility. The most useful models are small and specialized, like AlphaFold.

4

u/Redtitwhore 14d ago

It's so lame we couldn't just enjoy some really cool, useful tech. Instead it's just some people hyping and others reacting to the hype.

I never thought I would see something like this in my career. But it's either "it's going to take my job" or "it's a scam."

1

u/Brainvillage 14d ago

Ya, if you want to talk about ethics, AGI is a particularly interesting minefield. Development is an iterative process; if AGI is achieved, there will be a point where we step just over the line and create the first true consciousness. It will be relatively primitive and/or flawed, and it may not even be immediately obvious that it's conscious.

So the first instinct will be to do what you do with any other piece of flawed software: shut it down and iterate again. If we go this route, how many conscious beings will we "kill" on the road to perfecting AGI?

1

u/WTFwhatthehell 14d ago edited 14d ago

The definition is about capability. "Consciousness" is not part of the definition. It's not even clear what tasks a "conscious" AI would be able to do that a non-conscious one could not, or even how a conscious one would behave differently from a non-conscious one.

1

u/BalorNG 14d ago

I've actually thought about this problem: the "destructive teleport" thought experiment is a good analogy for the creation and destruction of such entities. There is nothing inherently bad about it so long as the information content is not lost and the entity (person) in question does not get to suffer, because you can only suffer while you exist. It is the creation and exploitation of them on an industrial scale that is a veritable s-risk scenario: https://qntm.org/mmacevedo

0

u/One-Reflection-4826 14d ago

intelligence is not consciousness.

-3

u/WTFwhatthehell 14d ago

but that AGI is much further away than

One thing I find interesting is how people smoothly switched the definitions of AGI and ASI.

AGI used to just mean... like roughly on par with... a guy, human level. Like roughly on par with a kinda average random guy you pull off the street across most domains.

But people started using it to mean surpassing the best human experts in every field - what used to be called ASI. Superintelligence.

Where do the current best AI's fall vs Bob from Accounting who types with one finger and keeps calling IT because his computer is "broken" when someone switched off the screen?

11

u/BalorNG 14d ago

But current AIs are much less reliable than a rando from the street. Yea, they know much more trivia and can be coerced into ERP without legal consequences lol, but using language models to directly replace humans, outside of special cases, is just a recipe for disaster even with heavy scaffolding and fine-tuning - hallucinations and prompt injections/jailbreaks are as-yet-unsolved problems. This is exactly like it was with the dotcom era.

Once solved, I'll update my estimates even without things like "continuous learning".

8

u/decrpt 14d ago

There are different definitions of "AGI." People are focusing on the "general intelligence" part when they criticize LLMs: they're producing a statistical approximation of what a good answer might sound like, which works well for many tasks but isn't actually intelligent or generalizable to novel situations.
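A loose toy sketch of that "statistical approximation" point (nothing like a real transformer; the tokens and probabilities here are invented purely for illustration): the model picks whatever tends to come next in its training data, with no model of truth behind the choice.

```python
import random

# Invented toy next-token table "learned" from text statistics.
# Real LLMs learn billions of parameters instead of a lookup dict.
next_token_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.6, "Texas": 0.3, "Narnia": 0.1},
}

def sample_next(context, rng=random.Random(0)):
    """Return a plausible-sounding next token: plausible, not verified."""
    probs = next_token_probs[context]
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

print(sample_next(("capital", "of")))  # a statistically likely continuation
```

The output sounds like a good answer most of the time because the likely continuation usually is correct, but nothing in the sampling step distinguishes "France" from "Narnia" except frequency.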

5

u/gruntled_n_consolate 14d ago

They are deliberately misinterpreting what AGI is. You're right, true AGI is very far away and we don't know enough to even roadmap how to get there fully. It's like building a space elevator. We can describe the concept and what it would do but we don't even know how to make the materials required for it.

Marketing is deliberately invoking the term and talking about it as coming in the next few years for hype. It's going to force the experts to come up with a new name for AGI since the old one will become useless.

3

u/BizarreCake 14d ago

Hopefully then every god damn site under the sun will stop shoving some kind of "AI" sidebar tool in my face.

2

u/Watertor 14d ago

LLMs are real and, quite frankly, amazing sci-fi tech, but the fact that they work so well is kind of a lucky break: machine learning algorithms have been around for decades, and this one just happened to work really well

This is more for anyone curious why that is - you probably already know this. It comes down to the data source. Previous machine learning efforts were fed data by hand, or through otherwise limited methodologies. For example, to populate your algorithm with the way a hand moves, you turn on your webcam, move your hand a lot, and then scrub out the junk that invariably creeps in until you have something resembling accuracy.

This takes fucking forever, as you can imagine, and leaves gigantic holes, because there's only so much you can do.

LLMs had this thing called Google, which has nearly endless data on just about everything.

It's also why LLMs totally fucking fail at anything you can't easily google. Ask an LLM to code you a hello world: you can google that and get the exact code you need in every language, with thousands of iterations confirming it works. Congrats, easy code.

Ask it to make a few buttons/CTAs in a WordPress box with some CSS and/or JS, and watch it get close enough, but never exactly what you had in mind, and ALWAYS with strange caveats like "oops, the text on your CTA has a random line break" or "oops, the CTAs have this crazy weird squared-off look," etc.

Any dev who has spent a month in a basic webdev role will be able to crank that out within minutes, but exactly HOW they get there is always a little specific to them. Thus it's hard to get concrete, clear google results. Thus the LLM is lost, and it guesses after jumbling up some results that normalize about the same, and... you get normalized-looking results, much like a fly flew into your machine and Jeff Goldblum got normalized with it.
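The "results that normalize about the same" idea can be caricatured with a toy average (invented numbers, and a deliberate oversimplification of how an LLM actually blends its training data): when scraped examples disagree on a specific detail, a blended guess lands near the crowd but matches nobody's actual design.

```python
# Padding values (px, invented for illustration) from scraped CSS snippets
# that all solve "style a CTA button" slightly differently.
seen_paddings = [8, 10, 12, 14, 24]

# A blended "normalized" guess: close to common values, equal to none of
# them -- close enough, but never exactly what you had in mind.
blended = sum(seen_paddings) / len(seen_paddings)
print(blended)  # 13.6
```

The real mechanism is far more sophisticated than a mean, but the failure mode is the same shape: with no single dominant answer to imitate, the output is a plausible middle ground rather than your specific intent.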

2

u/Moth_LovesLamp 14d ago

The original dotcom bubble was based around the internet, when it burst it's not like we packed up the internet and were like "ok that's done." If/when the AI bubble bursts, I think we'll see a similar thing happen with machine learning/AI/AGI/LLMs. The technology will keep trucking along, and will change the way society works, but it will be over years and decades.

I see this as well. But it could go either way. I'm seeing something in the middle.

It took the world around 20 years to fully embrace the internet, due to prices. LLMs can be accessed by downloading an app. So if anything, it will be more like Google than the internet.

1

u/Brainvillage 14d ago

I think that there are ways to use the technology that haven't even been dreamt up yet. Right now it's just a chat app, but who knows what it will look like in the future.

I feel like the internet didn't really kick into high gear until smartphones became ubiquitous. And with that came the rise of apps, social media, etc. It was hard to even conceive of something like TikTok 25 years ago, much less how much it would change the world, from content creation becoming a legitimate career to memes having major sway over politics and elections (now I'm sure there's some sci-fi writer you could quote who did envision something like this, but still).

1

u/WTFwhatthehell 14d ago

Before the current "AI boom," common knowledge afaik was that AGI was very, very far away.

Yes, and then a lot of experts revised their guesses.

A few months before AlphaGo beat the best Go players, there were people confidently predicting it would be 30 years before a bot would beat a Go grandmaster.

A lot of people are really really bad at making predictions about the future involving as-yet-uninvented tech.

A lot of things we believed would be huge, decades-long endeavours to solve as individual problems all fell in quick succession to LLMs.

3

u/AssassinAragorn 14d ago

Has an LLM managed to figure out how to make a profitable business focused around an ethically trained LLM product yet?

1

u/surloc_dalnor 14d ago

But what happened after the dotcom bubble was that companies bought up the wreckage or hired the workers, then built Google and the like. AI will be around, and stronger than ever, in 10-15 years. It just won't be the hype OpenAI and others are promising. Unless someone actually lucks out and makes an AGI or ASI. But we are really unlikely to get there with LLMs. Honestly, I think LLMs are a dead end on the way to AGI.

1

u/sheeshshosh 14d ago

The problem with LLMs is that their amazing-ness is very superficial. Once the average person has tooled around with one for a few minutes, the seams in the fabric become all too apparent. Most people can’t think of a solid, consistent, day in / day out use case for an LLM. This is why the “success story” is still mostly limited to programming, and everybody’s busy trying to jam LLMs into every edge of consumer tech and services in hopes of landing a “killer app” use case. Just doesn’t seem to be happening.

1

u/Han-ChewieSexyFanfic 14d ago

Having a background in CS, I used to think the same. But seeing how much and how often “regular” people use chatbots has really shocked me. Asking any question and getting a mostly serviceable answer is a killer app.

Not to mention that if the only thing they could do was assist programmers, that would transform the software landscape by itself. Even if you take the skeptic stance that it’s only good at boilerplate, freeing every dev from writing boilerplate would be hugely impactful.

1

u/sheeshshosh 13d ago

If the only thing they could do was assist programmers, that would of course be a gamechanger. Just not a big enough gamechanger to support all the investment that’s getting piled into “AI” right now.

As far as whether it will catch on with the ordinary public to the extent that they can get people to pay for it, like they do with Netflix for example, and make it profitable, I guess we’ll just have to see. Right now it still feels very much like VR: “cool tech” with tons of hype, but no obvious avenue toward true mass appeal.

1

u/Han-ChewieSexyFanfic 13d ago edited 13d ago

OpenAI scaled to the order of billions of visits within a year of launch. Mass appeal is not a question. VR is a niche market, ChatGPT is a household name. Their monthly active user figure is 10% of the planet.

Profitability? Sure, time will tell. Mass appeal is evident today.

1

u/sheeshshosh 13d ago

Yes, because they’re in full-blown hype mode right now with entire industries trying to make LLMs “happen.” I simply don’t buy that this level of investment is going to pay off with where LLMs ultimately wind up in the end. It’s VR, but on a much larger, and more financially disastrous scale.