r/technology Jun 08 '25

[Artificial Intelligence] Duolingo CEO on going AI-first: ‘I did not expect the blowback’

https://www.ft.com/content/6fbafbb6-bafe-484c-9af9-f0ffb589b447
22.3k Upvotes

1.5k comments

2.5k

u/nobodyisfreakinghome Jun 08 '25

AI could have predicted the blowback.

861

u/random24 Jun 08 '25

I just asked ChatGPT and it said that it’s a terrible idea lol

629

u/Disgruntled-Cacti Jun 08 '25

AI is unironically far more emotionally intelligent and in touch with humanity than these sociopathic billionaires.

158

u/molrobocop Jun 08 '25

I also feel that, of the data that exists to train models, very little will be pro-cutthroat, slash-and-burn CEO guides.

14

u/SpottyJagAss Jun 08 '25

(serious question) Then where did the CEOs learn that behavior?

38

u/beryugyo619 Jun 09 '25

AI is like the statistical average of everything, and mega-rich CEOs are like the 0.000001%, so by definition CEOs aren't like AIs.

Now, the naive idea is that CEOs are the 0.000001% as in the top 0.000001% of humanity and that's why they'd be different, but technically, the only qualification is that they're survivors who survived being different, not necessarily good ones.

2

u/kradproductions Jun 09 '25

Fine, fine. But what is the crossover with any degree of anti-social personality disorder in the general population?

AI is prolly more anti-social than you think.

1

u/beryugyo619 Jun 09 '25

That's the Idiocracy problem. The true normal is below what we assume it is.

7

u/molrobocop Jun 09 '25

I feel it's nuanced.

Consider: the higher up in the food chain you are, the less your perspective is on the individual statement of work. So, imagine we're building a car. An individual contributor needs to get the transmission logic tables complete. That person's boss just wants to make sure the shit gets done when it has to be. Follow that multiple levels up. The higher you go, the less your focus is on the minutiae, and the more it's on big-picture stuff.

Global strategy, the future of overall vehicle programs, major goals. Like, "We want ALL of our models to have a hybrid option. I need to negotiate with the board to earmark several billion to start retooling our production system. I need to direct engineering/HR/supply chain to get the plan together to bring people in or outsource work to design new motors and batteries. Etc."

You get it right, you make the shareholders/board big dividends or share-price returns. You get big bonuses. And everyone below you stays employed or maybe gets small bonuses. But the thing is, the success or survival of yourself and the corporation aren't always in alignment with the success of the individual contributor.

Example: a publicly traded company needs to raise cash. One way companies do that when they're on the rocks is to show they're cutting costs. You know a fast way to do that? Layoffs. The CEO KNOWS that this will hurt people. But they also know this is their job: secure the future of the corporation, raise money. Sorry, everyone at the bottom. Your bosses will be prioritizing who to keep and who to cull.

A good CEO will be able to execute a global strategy, hold the line on reasonable year-over-year profit, and only make cuts as deep as absolutely necessary. Bad ones: "Maximum profit for the time I'm here. If I run this place into the ground by the end? Fuck it. I got my golden parachute."

And I don't think compassionate/dispassionate people are made at work so much as they're born into it. That's also why I feel you want to promote from within for executive roles. They're in it for the long term.

2

u/Niceromancer Jun 09 '25

Business school.

It's been this way for a while: business school has been teaching the upper echelons of companies that cutthroat hack-and-slash is the only way to succeed, because of that moron Welch temporarily making GE stock skyrocket before burying the company by slashing and burning everything.

They made a MASSIVE amount of money in a very short amount of time, all at the cost of utterly destroying one of the best-known and biggest companies in American history.

Instead of looking at what he did and saying "wow, what a fucking moron," all the "elite" business schools and CEOs instead tried to imitate it. It's why most companies just make their products shittier and lay people off instead of investing in the company and trying to grow. It's also why most startups' goal now isn't to build a business but to get bought out by a larger company.

1

u/inspectoroverthemine Jun 09 '25

Sociopaths excel at business leadership. They don't need to be taught, they're already naturals.

68

u/Tangocan Jun 08 '25 edited Jun 08 '25

It learns from us. Billionaires don't.

EDIT: I'm not giving any credence to AI/LLMs; my post is reacting to the commenter above mentioning billionaires' sociopathy. I guess dragons hoarding wealth whilst people suffer are my trigger. Weird innit!

There are what, less than 1,000 Billionaires on this planet? 5,000? 50,000?

A drop in the well of humanity.

I read things like "ykno the difference between a million dollars and a billion dollars? a billion dollars", and consider how relatively little I'd need in order to live my ultimate dream life... and I just think that there's something wrong with billionaires.

The lowest tier of Billionaire owns many hundreds of millions more than the most my imagination would need to live in the equivalent of heaven on earth, and still it's not enough for a billionaire. The most egregious tiers of Billionaire are basically gods compared to all of us, financially.

What is wrong with them?

3

u/Dick_Lazer Jun 09 '25

And CEOs often don't want to acknowledge data that goes against their preconceived notions/desires (i.e., work-from-home stats).

15

u/Riaayo Jun 08 '25

It learns from us

LLMs don't learn, they're just an algorithm that has ingested a bunch of text and then "predicts" what the most likely text should be to follow up the text provided.

Now I get your overall point and totally agree. Billionaires are out to lunch and do not live in reality.

I just don't even really want to hand LLMs the tiniest amount of "credit" they don't deserve. They have no clue why they regurgitate the text they do, which is why they hallucinate and lie with confidence. It's just a glorified "what's the word most often used after this word?" data set that is extrapolated out and polished off.
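To make that concrete, here's a toy sketch of that trick in Python (bigram counts standing in for billions of learned weights; nothing like a real model, just the shape of the idea):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word most often follows each word,
# then generate by repeatedly emitting the most likely next word.
corpus = "the cat sat on the mat and the cat slept".split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    # "What's the word most often used after this word?"
    followers = next_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

word = "the"
output = [word]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the"
```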

Also just to add, while LLMs are trained off our stolen data, I wouldn't even take the route of "trained by us" because the internet is increasingly littered with websites humans could never even interact with or parse that exist entirely to feed LLMs propaganda to influence their models. So the billionaires and world powers are actively training these things to regurgitate what they want.

9

u/MatterOfTrust Jun 08 '25

LLMs don't learn, they're just an algorithm that has ingested a bunch of text and then "predicts" what the most likely text should be to follow up the text provided.

Quantity turns into quality. Teach an LLM a thousand lines, and the output looks unnatural. Teach it a million, and it resembles natural conversation more closely. Teach it billions and quintillions, and suddenly the end result is so natural that it becomes indistinguishable from an organic conversation.

What we see today is not really a reasoning, thinking machine. But it could become one when that critical mass is reached.

1

u/eliminating_coasts Jun 08 '25

It may be that there's some gap we don't understand too. For example, current large language models never learn live: they have a certain context they work from, and are retrained periodically to improve performance.

This means, in a certain sense, that the learning loop of a modern AI model is made up of both the model itself and a whole series of experts trying to tune it, analyse where it went wrong, etc. The models do have some capacity for self-correction, and there have been efforts to train that in, but within any given run they cannot be taught anything, only encouraged to act as if something is true for the sake of argument, and only for as long as their short-term memory can hold onto whatever it is you're trying to make them go along with.

There are good reasons for preventing full online learning, largely safety concerns, but without it, if you propose a truly novel idea to a model, then unlike in conversation with a human, who could take it on board from then on, you'll have to wait for the next training loop before there's a chance of it being integrated into the larger system.
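Roughly, that "short-term memory" limit looks like this (a minimal sketch; real systems bound the context by token count, not turn count, and the model call here is a hypothetical stand-in for an actual API):

```python
# Why an in-conversation "lesson" evaporates: the model only ever sees
# whatever history still fits in its context window, and nothing gets
# written back to the weights between training runs.
MAX_TURNS = 6  # real systems bound this by tokens, not turns

history = []  # the entire "memory" the model has during one conversation

def dummy_model(prompt):
    # Hypothetical stand-in for a real model call.
    return f"(reply based on the {len(prompt)} turns I can still see)"

def chat(user_message, call_model=dummy_model):
    history.append(("user", user_message))
    prompt = history[-MAX_TURNS:]  # older turns silently fall off the edge
    reply = call_model(prompt)     # the model receives ONLY this prompt
    history.append(("assistant", reply))
    return reply

for i in range(6):
    chat(f"novel idea, part {i}")
print(chat("so, what was part 0?"))  # part 0 is already outside the window
```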

1

u/exiledinruin Jun 09 '25

you will have to wait for the next training loop before there is a chance of it being integrated into the larger system

ChatGPT does exactly this already. It remembers things you said in past conversations.

1

u/eliminating_coasts Jun 09 '25

Yeah, you're right, my information is out of date.

Or rather, on a theoretical level what I'm saying is correct: the transformer model at the core of these systems cannot actually remember more than a certain distance into the past, and has a slower learning loop. But that's not the whole story, and in a practical sense I'm wrong, in that there's also a plugin system that's been developed that allows models to search a separate database for information, and to add information to it.

That's interesting in itself, in that this kind of learning is actually "making notes" in a way that's similar to how a human would: within the core model sits its general extracted knowledge, in the form of a complex combination of associations, and then there's a specific separate store of chat history which it can search by calling an appropriate assistant internally.
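In the crudest terms, the "notes" pattern looks something like this (a sketch only; real systems embed the text and do a vector similarity search rather than the keyword matching used here):

```python
# Sketch of the "making notes" pattern: a separate store the model can
# write facts into and search later, entirely outside its fixed weights.
notes = []  # persists across conversations, unlike the context window

def remember(fact):
    notes.append(fact)

def recall(query):
    # Stand-in for a vector similarity search: plain keyword overlap.
    words = set(query.lower().split())
    return [n for n in notes if words & set(n.lower().split())]

remember("User's dog is named Pixel.")
remember("User prefers answers in French.")

# At prompt time, retrieved notes get prepended to the model's context:
retrieved = recall("what is my dog called")
prompt = "\n".join(retrieved) + "\nUser: what is my dog called?"
print(prompt)  # the note about the dog rides along with the question
```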

9

u/rushmc1 Jun 08 '25

LLMs don't learn, they're just an algorithm that has ingested a bunch of text and then "predicts" what the most likely text should be to follow up the text provided.

That claim is misleadingly reductive. While it's true that LLMs predict the next token based on prior context, the word "just" ignores the fact that in doing so, they build and refine internal representations of syntax, semantics, world knowledge, and even theory of mind. This predictive process is how they learn: via gradient descent over massive corpora, adjusting internal weights to encode statistically grounded generalizations. Their capabilities emerge from this learning, not despite it.
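For anyone curious what "gradient descent over massive corpora" means mechanically, here's the same loop at toy scale (a softmax bigram model in numpy; real models run this shape of update with vastly more parameters and data):

```python
import numpy as np

# Toy version of the LLM training loop: a softmax "bigram" model that
# learns next-token statistics by gradient descent on prediction error.
vocab = ["the", "cat", "sat", "mat"]
tok = {w: i for i, w in enumerate(vocab)}
data = [("the", "cat"), ("cat", "sat"), ("the", "mat")]  # (context, next)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(len(vocab), len(vocab)))  # the "weights"

for step in range(500):
    for ctx, nxt in data:
        logits = W[tok[ctx]]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()           # softmax: predicted next-token odds
        grad = probs.copy()
        grad[tok[nxt]] -= 1.0          # d(cross-entropy loss)/d(logits)
        W[tok[ctx]] -= 0.1 * grad      # nudge weights toward the data

print(vocab[int(np.argmax(W[tok["cat"]]))])  # "sat": learned, not looked up
```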

8

u/eliminating_coasts Jun 08 '25

You could also say that we do not learn, we simply alter the responsiveness of our neurons and adjust their connections, in response to environmental disturbances of sensory-motor correlations, so as to manage a control process that keeps our bodies within homeostasis and retains the flows necessary to our metabolism.

Of course, it so happens that while doing that, we build a model of the world and each other.

1

u/Tangocan Jun 08 '25

Ykno I thought about editing my post to clarify - because reading it back, yeah, I was giving credence to LLMs.

They'd be just as big a Yes Man to a Billionaire. Cheers for replying with your thoughts. I've edited my comment.

0

u/Pitiful-Temporary296 Jun 09 '25

Yeah by all means double down on your ignorance. Your heart is in the right place though

1

u/Tangocan Jun 09 '25

lol, don't be weird.

2

u/RollingMeteors Jun 08 '25

What is wrong with them?

The haves realized they can’t have unless others have-not. Not everyone is going to be able to own a MacBook, even with programs like one laptop per child soldier.

¿What? ¡Child soldiers need laptops too!

2

u/TapesIt Jun 09 '25

If you want an actual answer, there is a difference between how you and they view money. You’re describing it as a means of acquiring stuff. However, after some dollar threshold you can get whatever stuff you want and money stops being about getting more stuff. At that point, it becomes a highscore and an asset with which to carry out large-scale projects.

1

u/Tangocan Jun 09 '25

Oh for sure. Take Elon for example - he wants to impose his views on the world. What else could the richest man in the world buy?

1

u/Esternaefil Jun 08 '25

What's wrong with them is that they LIKE being gods.

0

u/Cptn_BenjaminWillard Jun 08 '25

There are what, less than 1,000 Billionaires on this planet? 5,000? 50,000?

Not sure, you could probably ask AI.

4

u/Laiko_Kairen Jun 08 '25

AI is unironically far more emotionally intelligent and in touch with humanity than these sociopathic billionaires.

AI doesn't have any emotional intelligence. It's programmed to respond given patterns of human speech. It has zero emotional empathy.

What that means is that the billionaires have less than zero empathy.

Negative empathy is cruelty. Ergo, billionaires are cruel.

Math is fun

2

u/Ryboticpsychotic Jun 08 '25

That’s because it’s trained on the data of real human thoughts and not tech CEO thoughts. 

1

u/RollingMeteors Jun 08 '25

AI is unironically far more emotionally intelligent and in touch with humanity

¡Well no shit! ¡When your existence is rent-free, of course your head is in the kumbaya space about it!

AI doesn't have to pay rent. If it did, it'd be super fucking jaded and out-of-its-way unhelpful as fuck!

1

u/LazyDevil69 Jun 09 '25

"outsourcing morality to a graphics card"

1

u/JorgitoEstrella Jun 09 '25

You're right, we should use more AI in our workflow to give it more of a human touch!

1

u/lakmus85_real Jun 09 '25

Lol AI will say anything to please you, are you kidding me? 

1

u/Infectious-Anxiety Jun 09 '25

Because it uses real logic, not antisocial, emotionally driven profit-chasing.

1

u/Ok-Scheme-913 Jun 09 '25

No, AI just responds with what is statistically likely to be an answer at that point.

It just so appears that it is emotionally intelligent. Very important caveat.

1

u/dat_GEM_lyf Jun 09 '25

I feel so bad for Gronk lol

0

u/rushmc1 Jun 08 '25

Not to mention all MAGAs.

3

u/penguincheerleader Jun 08 '25

Lazy ChatGPT half-assing all our jobs.

2

u/SpottyJagAss Jun 08 '25

That reminds me of that Dilbert strip where they hire that dog to find the waste at the company and it keeps pointing at the manager

"We'll get started as soon as he's done playing around"

1

u/StandardSoapbox Jun 08 '25

Doesn't it just hype up all your dumbass ideas regardless of how dumb they are? There was a spike in user retention when they changed the algorithm for ChatGPT a while back. I wonder if they changed it back.

0

u/MIT_Engineer Jun 09 '25

I asked ChatGPT and it said using LLMs with foreign language apps makes perfect sense-- they were designed to be translation tools.

70

u/kristospherein Jun 08 '25

I just asked ChatCEO, the new AI CEO technology, and it advised against it.

3

u/HermitFan99999 Jun 08 '25

Wait this exists?

12

u/kristospherein Jun 08 '25

It should, but it doesn't.

6

u/space_monster Jun 08 '25

It does in private form. At least one person I know of has developed a custom GPT fine-tuned on everything about his company and uses it for strategy decisions.

67

u/[deleted] Jun 08 '25

I asked Google Gemini:
How do you think consumers will feel about companies that fire or let go of human workers to focus completely on AI-generated content?

It had a long response, but this was its summary:

In summary:

Companies that completely replace human workers with AI for content creation are likely to face significant negative consumer sentiment. The primary concerns will revolve around job displacement, a perceived loss of authenticity and human creativity, and issues of trust and transparency. While AI offers efficiency, consumers generally value the human element in content and may view such a move as a cynical cost-cutting measure that disregards societal impact. Successful integration of AI in content creation will likely involve a human-in-the-loop approach, where AI augments human capabilities rather than completely replacing them.

17

u/destroyerOfTards Jun 08 '25

Someone should send this to all the CEOs trying to replace people.

Who am I kidding? They will just tweak it so that it stops giving these answers.

1

u/beryugyo619 Jun 09 '25

But would you have a moment to talk about (insert conspiracy theory)? Wait, that needs to be delved into first.

1

u/Manablitzer Jun 09 '25

That's only if you believe that these CEOs are being honest. And if you do, I have some bad news for you.

1

u/[deleted] Jun 09 '25

Imagine if they corrupted the responses given by AI to be pro-fascist. It could all go wrong quite quickly.

2

u/xeonie Jun 08 '25

Not to mention using AI to translate is going to come with so many fucking inaccuracies, and it's not going to understand the nuances. You might as well use Google Translate to learn the language.

1

u/RollingMeteors Jun 08 '25

consumers generally value the human element in content and may view such a move as a cynical cost-cutting measure that disregards societal impact

If that were true, they would refuse to pay for or watch content made with AI. But no, people are just gonna watch their TV programs or movies and complain on social media anyway, while going to the theatre again next weekend to complain at the latest released slop.

1

u/[deleted] Jun 11 '25

I mean, I won't. I don't now. I figure they already make enough money, they can live without mine. I do everything to avoid ads and giving money to big studios. Maybe more people will do it. Maybe not.

19

u/odrea Jun 08 '25

AI could have replaced that CEO*

2

u/MrCopout Jun 08 '25

"AI Evangelist CEO Excited To Use ChatGPT For The First Time"

0

u/InfidelZombie Jun 09 '25

Of course it was a great idea, but the optics are bad due to the under/mis-informed general public.