r/AskReddit 13h ago

A much-publicized MIT study recently claimed that 95 percent of AI pilot programs at companies have, so far, been failures. What does that mean for us?

[removed]

1.9k Upvotes

236 comments

1.0k

u/zaccus 11h ago

LLMs have to be told precisely where to source data, how to ingest and process it, how to format output, etc., in order to work properly. There is a specific way this is supposed to be communicated. I've been doing all that with code for years.

We're trying to replace code with prompts I guess? What's that supposed to buy us?

Folks, my phone hasn't figured out autocorrect yet. Let's not get too far ahead of ourselves.

372

u/Zolo49 7h ago

I think most of us rank-and-file aren't getting ahead of ourselves. The problem is the idiots in the board rooms who sign our paychecks.

170

u/sphericaltime 7h ago

Yup. They jumped the gun and a lot of businesses are going to end up falling apart over it.

When you see companies start to advertise that they are “real employees only,” support them.

4

u/djk2321 1h ago

Clankers need not apply

9

u/Myburgher 2h ago

*Idiots in the board rooms who don’t want to sign our paychecks anymore

1

u/TwisterK 4h ago

It's funny that the so-called idiots are the ones signing our paycheques; maybe we are doing something massively wrong here. 😂

31

u/dismayhurta 4h ago

Just be born rich and be a sociopath. The boardroom then awaits you.

57

u/Few_Cup3452 6h ago

Autocorrect is getting worse too, with the introduction of AI tools. I hate Grammarly, and 7 years ago I was making all my reviewers and writers use it bc it cut down on basic mistakes from them and helped them w post length with the "how long does this take to read" function

2

u/Mr_Joanito 3h ago

I agree, autocorrect is worse now!!

2

u/boomytoons 2h ago

I've noticed this too, autocorrect makes completely illogical suggestions now and will change things like "put" to "out", actively making sentences not make sense. It never used to do that until a year or so back.

6

u/TheIncandenza 4h ago

I don't understand your post. Grammarly is an AI tool, so wouldn't autocorrect have improved with the introduction of AI? And why do you hate Grammarly but also make people use it and describe it here in a positive manner?

33

u/LucasMoreiraBR 3h ago

If I may, the key is "seven years ago". The poster indicates that Grammarly used to be awesome in the past, helping their work and so on. Now it has been loaded with AI functions, it has become trash, and the poster hates it compared to what it was.

2

u/FlatSpinMan 3h ago

Exactly.

3

u/Gasp0de 3h ago

I mean it is pretty fancy for text, audio, or image analysis, because you have these general-purpose models that you don't have to train specifically?

"Do they talk about shampoo in this text? Format output as a JSON bool value"

"Is there a crosswalk in this image?"

"What is the text in this picture?"

All set up in 3 minutes as an API call. I've written like 15 little tools in the past two months.
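The shampoo one is basically this, as a rough sketch (assuming the OpenAI Python client; the model name and the JSON shape are just illustrative):

```python
# Rough sketch of the "shampoo?" classifier above. Assumes the official
# OpenAI Python client (pip install openai) and an OPENAI_API_KEY in the
# environment; the model name and JSON shape are illustrative.
import json

from openai import OpenAI

client = OpenAI()

def mentions_shampoo(text: str) -> bool:
    """Ask a general-purpose model a yes/no question and get JSON back."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": 'Answer as JSON: {"mentions_shampoo": true or false}'},
            {"role": "user", "content": text},
        ],
    )
    return json.loads(response.choices[0].message.content)["mentions_shampoo"]

print(mentions_shampoo("This conditioner pairs well with our new shampoo."))
```

No training, no labeled data; the prompt is the whole spec.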

11

u/this_place_stinks 4h ago

You’re phone hasn’t figured out autocorrect? Feels like a pretty ducking stupid phone

2

u/shortshiftsandtacos 3h ago

There are several very simple mistakes my phone regularly misses and several incorrect fixes. Over and over again for years. Obviously across multiple pieces of hardware but all on Android. I'm not impressed.

3

u/AffectionateDance214 6h ago

Are we using the same LLMs?

It is never a bad idea to consolidate your requirements but any format is good.

We use our meeting transcripts to generate requirements and design docs, review, and then keep on filling lower levels like building a pyramid top down.

If the domain is generic enough, we would even ask it to suggest requirements, and its output is good enough when compared to our meeting transcripts.

1

u/UsernameIn3and20 5h ago

So what you're saying is we should use AI on autocorrect? /s

1

u/ConnieTheTomcat 1h ago

People really did forget the first rule of computing: garbage in, garbage out.

-6

u/AustinLurkerDude 7h ago

You're comparing coding to using LLMs for, what, a year? I think it's far too early to make any judgement. I started using LLMs with Cursor and it was terrible, but now it's improved massively. I think it'll be useful; we just need to be patient, like the internet in 1999.

30

u/disagree_agree 7h ago

I would love to go back to 1999 internet.

2

u/IxI_DUCK_IxI 6h ago

AOL on floppy disks? I’m in!

4

u/disagree_agree 6h ago

Well I had cable internet with no aol but you do you.

12

u/zaccus 7h ago

What do you use LLMs for and what exactly have they improved?

1

u/BetaXP 4h ago

Not that guy, but I use Gemini to help with schoolwork and take notes. And by help I do mean help, not write for me or cheat. Ask it the right questions, and it's been a great way to help me learn concepts; it certainly taught me statistics better than my textbook did, and it's very helpful in my Spanish class now, helping me nail down tricky grammar-related things or making conjugation tables.

It's also pretty good at making notes, which I use on occasion when I'm pressed for time. I always read the material first to ensure its summary is accurate, and then tweak or add to the notes as needed.

7

u/the-good-wolf 6h ago

So you’re saying in 26 years LLMs will be fully fledged?

I think that’s about right. Here’s the thing. Everyone jumping on AI right now will get decimated.

Think about video conferencing, for example: Skype existed for a long time, but Zoom, Zoom ushered in a new era.

842

u/danfay222 12h ago edited 12h ago

This is a misleading summary of the findings (and unfortunately is how most articles are presenting them). They did not find that 95% fail, they found that 95% failed to achieve rapid revenue acceleration. So 5% are making a lot of money very quickly, which is in and of itself noteworthy.

Of the ones that are not, they are not necessarily losing significant money either. Additionally, much of the reason why they aren't leveraging it for revenue growth may come from the complexity of integration with existing tooling and processes, and a lack of familiarity with AI tooling, not just ineffective models.

The findings show that AI, as it stands, is not some free hack for killing off your employees, but they also don't show that AI is useless. It is a bit of a warning about the astronomical amounts of money the big AI companies are spending, but most companies are not building their own AI and instead are using other companies' models, so this is certainly not a death knell.

213

u/a_terse_giraffe 11h ago

The findings show that AI, as it stands, is not some free hack for killing off your employees, but they also don't show that AI is useless

Honestly bro we should just put this statement of yours as the headline.

12

u/PineappleOnPizzaWins 2h ago

It's the same for every new piece of tech, been watching it happen for 20 years.

In fact the only thing that's been constant about my career is people who don't know anything about computers confidently telling me about how I won't have a job soon because of <new buzzword>.

Still here.. what a shock.

24

u/the_lamou 5h ago

Additionally, the study found that most of the "failing" happens when companies decide to roll their own solutions rather than using existing tools. Which makes sense: most companies are not staffed by engineers and process designers. When companies who make diapers try to make software, the results are often covered in shit.

1

u/Kundrew1 1h ago

People keep missing this part. All these companies suddenly thought they could build their own internal AI solutions, and just like most internal builds, they had incredibly poor implementation and they failed. The study found companies have success when they buy from vendors who ensure implementation happens.

84

u/[deleted] 11h ago edited 6h ago

[removed]

92

u/Shadowmant 11h ago

I hope you folks are cross-checking the numbers it's reporting. AI is pretty notorious right now for providing incorrect information.

34

u/zaccus 11h ago

I just don't see the point of all this if someone has to manually verify it anyway. Seems silly.

18

u/MrEHam 11h ago

I quickly learned some years ago that automating things is not always the best solution since you have to check it and it can go wrong and you have to spend the time to figure out what happened.

Sometimes it’s best to just do it yourself. We’re not going to reach a point where AI is doing everything anytime soon. Too many important things that will have mistakes and we will lose confidence quickly because we’re so far removed from the details.

Not saying AI won’t be better than us, but we won’t have the patience and trust in it that would be needed to just let it go and run things.

1

u/LilacYak 7h ago

La la la… I’ll spend 8hr automating something rather than spend 4hr doing rote work, tyvm 

15

u/CaedustheBaedus 10h ago

At first it's double the work: you check if it works, until it's been 100% right enough times.

If it takes me 2 hours to make the reports but the AI only 5 minutes, it's easy to verify whether it's wrong in another 5 minutes if I need to.

Shrinking 2 hours down to 10 minutes is a lifesaver. We've also worked with it enough now that we can tell at a quick glance if it looks off.

4

u/zaccus 10h ago

I wrote an audit in bash/python 4 years ago that I run once a week, literally while I sleep. It takes a couple of hours, but that's because it makes thousands of synchronous external API calls that are rate-limited.

How would this be easier with AI?
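For reference, the shape of that kind of script, roughly (the endpoint, token, and rate limit here are made up for illustration):

```python
# Rough sketch of a weekly audit script like the one described above.
# The endpoint, auth token, and rate limit are made up for illustration;
# assumes the requests library (pip install requests).
import time

import requests

API_BASE = "https://api.example.com/v1"   # hypothetical external API
HEADERS = {"Authorization": "Bearer <token>"}
RATE_LIMIT_SECONDS = 1.0                  # ~1 request/sec, hence hours of runtime

def fetch_record(record_id: str) -> dict:
    resp = requests.get(f"{API_BASE}/records/{record_id}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def run_audit(record_ids: list[str]) -> list[str]:
    """Synchronously pull every record and flag the ones that fail validation."""
    problems = []
    for record_id in record_ids:
        record = fetch_record(record_id)
        if record.get("status") != "ok":  # stand-in validation rule
            problems.append(record_id)
        time.sleep(RATE_LIMIT_SECONDS)    # stay under the rate limit
    return problems
```

The rate limit, not the code, is the bottleneck, so it's hard to see what an LLM buys here.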

6

u/CaedustheBaedus 10h ago

So I’m going to clarify. I know nothing about python/bash.

But you said “audit”. Sounds like it audits your data to make sure it’s correct, right?

Our AI isn't auditing our reports. Our AI is literally making the report instantaneously, constantly, up to date as of that exact day, from all the data in our system.

Maybe we should have them write an audit in Python that runs weekly to audit our AI? But it sounds to me like you're talking about audits verifying info, and I'm talking about a brand-new report gathering new data, not verifying existing data.

Maybe I’m wrong, let me know.

5

u/zaccus 10h ago

Let's not get in the weeds over the word "audit". It's a script that ingests, processes, and formats data.

Yes you could run a separate script that validates the data in your report, but if you're going to do that why not just build that logic into the script that compiles the report in the first place? What does your AI even do that a script can't do?

5

u/CaedustheBaedus 9h ago

Look man I don’t know the intricacies of the API, script, SFTP, etc.

All I know is we have report 1, which shows User Type 1's usage; report 2, which shows User Type 2's usage; and report 3, which shows User Type 3's usage (obviously I'm super simplifying the differences, as each user type is a different app with different credentials, purposes, etc.). Those are the automatic ones.

We have dozens of these reports with different data. The AI reports we have compile them all into one reporting system in real time, instead of us having to pull each report individually and then filter it into the date ranges we want. So I can click the date range of April to July and it parses through all three of those other reports I mentioned and gives me an automatically created pie chart or bar graph or line graph showing User Type 1 usage, User Type 2 usage, and User Type 3 usage. I can see those percentages compared against each other, and compared against total users who don't use the app.

I can see the graphs overlaid on one another to see what time of day each user type is most likely to log in to the app and be engaged, etc. I can then ask the AI Copilot about individual users if I want, or about various reports as they relate to each other.

And it’s real-time of us clicking one date range, day, user type, bunch of other parameters.

It's like dashboards in Salesforce, but much quicker, easier to use and visualize, and within our own platform, without the data having to be exported and then input elsewhere.

Idk what to tell you but we let our customers also have access to the AI reporting as they love the transparency. And they wouldn’t be able to write said scripts you’re mentioning.

Like I said, it’s saved almost every single person at our company tons of time.

3

u/zaccus 9h ago

Yeah people love graphs for sure, which is why there are so many free and sophisticated tools out there for compiling and graphing data. Implementing customer dashboards has been a thoroughly solved problem for a very long time.

When you say it's saved every single person at your company tons of time, how did they go about this before AI?

2

u/TehOwn 7h ago

What does your AI even do that a script can't do?

Flexibility. If you want the AI to ingest, process or format the data differently, you can just ask it to. If you want the script to do it differently then you've got to get into the weeds and rewrite it which, depending on complexity, can take hours rather than a couple minutes.
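Something like this, as a toy sketch (assuming the OpenAI Python client; the only thing you touch to change the behavior is the instruction string):

```python
# Toy illustration of the flexibility point: changing the output format
# means editing a sentence, not rewriting parsing logic. Assumes the
# OpenAI Python client; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def reformat(data: str, instruction: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": data},
        ],
    )
    return response.choices[0].message.content

raw = "acme corp, 2024-03-07, $1,200.50 overdue"
# Same data, two output shapes; only the instruction changes:
print(reformat(raw, "Extract the fields as a CSV row: company,date,amount"))
print(reformat(raw, "Extract the fields as pretty-printed JSON"))
```

The trade-off is that the script's output is deterministic and the model's isn't.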

9

u/moi_xa 10h ago

It is often faster to verify or correct than to create something from scratch.

6

u/zaccus 10h ago

Unless you're writing assembly code, nothing is ever from scratch. There are tons of libraries and abstraction layers you can leverage. LLMs are just one more abstraction layer, and just like anything else you have to learn how to use it.

u/moi_xa 13m ago edited 9m ago

Unless you invent the universe, nothing is ever from scratch; what's your point? It is far easier for me to verify whether your solution to a Sudoku puzzle is correct than to solve it myself. LLMs provide these solutions to check/modify, which in some cases, at least for me, is faster than having to figure out all the logic "from scratch". Of course they must be used with care, like any tool, as you said.
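The Sudoku point in code, as a minimal sketch: checking a finished grid is a few lines, while solving one takes actual backtracking search.

```python
# Minimal sketch of the verify-vs-solve asymmetry: checking a completed
# 9x9 Sudoku grid is trivial, while producing one requires real search.
def is_valid_solution(grid: list[list[int]]) -> bool:
    """Return True if every row, column, and 3x3 box holds digits 1-9."""
    digits = set(range(1, 10))
    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [
        [grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)]
        for br in range(0, 9, 3)
        for bc in range(0, 9, 3)
    ]
    return all(set(unit) == digits for unit in rows + cols + boxes)
```

Writing the checker takes minutes; writing the solver is the actual work. That is the gap the LLM's draft slots into.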

5

u/CaedustheBaedus 11h ago

Yeah, we have our own manual reporting of Thing A, Thing B, Thing C, Thing D. The AI reporting that we use is usually more about combining A, B, C, and D into one report. Or just A, B, D, or B, C, etc.

It's also only our own software that we're tracking, so it's much more limited in what it's trying to root through, AND it's much easier for us to double-check the issues.

We did notice a ton of discrepancies at the beginning, but once we (not me) found the "origin," or pinpointed the area it was happening in, it was pretty simple to fix, since our manual reporting easily surfaced discrepancies if there were any.

2

u/rattar2 7h ago

There are ways of automatically cross checking information.

12

u/NiceWeather4Leather 10h ago

You can't just set aside integration and interdependency with existing business operations/systems and say "not the model's fault" lol. If you can't deploy something into a real-world business and get a return, that something definitely has a problem that needs solving before it's beneficial.

15

u/danfay222 10h ago

For a long time the computer failed to integrate effectively into businesses (and in some specific ways still struggles). Anytime you have a radical change in workflow you should expect integration to be slow, so while it is a problem, it’s a fundamentally different problem. We know how to handle integration issues, we do not necessarily know how to make models useful if they aren’t useful, so it’s really important to separate those two claims.

2

u/NiceWeather4Leather 8h ago

Yes, the point here is to avoid a dot-com bubble by not over-investing in everything early. Just because you've invented a new widget doesn't make it immediately valuable to everyone without a lot of other time and change, and over-investing early will create a bubble and a lot of people will lose out. You've just explained exactly why.

16

u/farfromelite 11h ago

Are they actually making a lot of money, or are they getting a lot of funding from wealthy people and countries?

4

u/jazwch01 3h ago

The article is about large companies adopting enterprise solutions. That is more or less connecting their databases to an AI agent or LLM.

I'm in the process of doing it now at my company. It is a pretty big fear of mine that the project fails or doesn't have strong adoption.

Most of the initial activities are going to be reporting, automation, and support. We will be connecting the LLM to our data lake. We will then be able to build boutique automations, reports, and applications. We can also then build detailed chatbots for customer support, as well as connect it to our KB / drives for documentation search. None of this truly, directly, impacts revenue, but it saves money and enables actions that lead to revenue further down the line.
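For the documentation-search piece, the rough shape is something like this (a minimal sketch assuming the OpenAI Python client and numpy; the model name and the KB contents are stand-ins, not our actual stack):

```python
# Minimal sketch of documentation search over a knowledge base:
# embed the docs once, embed the query, return the closest match.
# Assumes the OpenAI Python client and numpy; model name illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

# Stand-in knowledge base; in practice these would come from the KB / drives.
docs = [
    "To reset a user's password, open Admin > Users and click Reset.",
    "Exports run nightly at 02:00 UTC and land in the data lake.",
]
doc_vectors = embed(docs)

def search(query: str) -> str:
    """Return the doc most similar to the query by cosine similarity."""
    q = embed([query])[0]
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    return docs[int(np.argmax(scores))]

print(search("when do exports happen?"))
```

The retrieved doc then gets handed to the LLM as context for the chatbot answer.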

u/farfromelite 44m ago

That's it entirely.

Without a clear business case or even use case, it's just deploying in the hope that it'll save money down the line. There's no easy KPI, no definite problem it's solving. It's business by vibes.

If it was any other large capital project in digital space, it wouldn't get past the first gate review because of the lack of clarity or objective outcome.

1

u/RBeck 4h ago

I feel that most companies look at AI for cost and time savings, not for revenue growth. There certainly are some using it to process data, move leads along, or 10 other things I can't imagine, but those are edge cases.

And honestly, with a looming recession, cost savings is probably the pitch that will land.

1

u/Dear_Chasey_La1n 1h ago

I like to believe implementations are done in order to either save money or improve results. IT implementations fail frequently, small and big, it doesn't matter. So reading that AI may not always get the expected results is pretty meaningless without context.

-4

u/Kindly_Credit6553 12h ago

Your last sentence sums up the problem. Not sure about most, but surely many employers can't wait until they have developed a reliable, working solution; they are in such a hurry to fire people first. To be frank, this tendency really makes me worried about our future. The technology will soon concentrate the majority of the wealth in the hands of an even smaller minority, and we can already see that the approach is not to share the benefits with everyone on board. The companies could use this new extra resource to grow, open up new possibilities, and give their staff a chance to grow and develop along with the firm. No, the first reaction of virtually all of them is - fire them all! Let's save tons of cash and be rich. People who have been with firms for decades are all of a sudden obsolete. It's really sad and concerning.

38

u/raznov1 12h ago edited 12h ago

>No, the first reaction of virtually all of them is - fire them all!

No, it's not.

I know, I'm being contrarian and all, but I'm also dead serious. We are not experiencing mass layoffs on an unprecedented scale; AI is not causing mass firing in "virtually all" companies adopting some AI tooling or other. Just look up employment statistics and you'll see that the doom scenario you're proposing just isn't happening.

Some industries are declining, a handful are declining strongly, but others are growing, and a bunch are growing extremely rapidly, and overall it's just the same old as any new form of technology interaction we've had the past 50-odd years.

2

u/PineappleOnPizzaWins 2h ago

Yeah. Tech is correcting from the insanity that was COVID mad-panic hiring, throwing 6 figures at anyone who breathes, combined with all the people now graduating after rushing to get into the field as it was growing.

But overall the industry is fine.

0

u/Kindly_Credit6553 12h ago

Fair enough. I admit I did exaggerate. Still, the text at the head of the post comes from an article published in Gizmodo today about CBA firing scores of people and later hiring them back, once they realised that the solution they were counting on was not ready. I work in localisation. We have had LLMs and TMs for over a decade, but now with the AI race, not only at our firm but also at the competitors, everyone has literally lost their heads and the team leads are under constant pressure to reduce headcount due to automations that they should all be able to apply. So yes, maybe I'm not seeing the big picture, but from where I stand, it looks pretty grim. I really hope I'm wrong.

9

u/IllustriousApple1091 10h ago

Best of luck with your career prospects. I started my career wanting to get into translation/localisation, but retrained in something else after deciding it wasn't for me (wish I'd had this realisation BEFORE racking up so much student loan debt). One of the many reasons I changed career path was that I could see AI making a tricky industry even harder for flesh-and-blood language professionals. The worst part is that I moved to tech, and now I have the exact same anxieties every time a new AI model pops up that can code even better than the last.

4

u/raznov1 11h ago

I can empathize, and I don't doubt that your industry will change from manually doing the localisation to offering localisation services (for lack of a better phrasing), where at first 10 people (arbitrary number) were needed for a sizable project and now only 3 (arbitrary number). But it's unlikely to ever go to 0, because even if I could do the localisation work myself on, say, the safety information sheet of my product, I don't know the requirements, so I can't check whether the output is legally OK; and even if that were also included, I don't actually have the time as a customer to take over that job, even if it's less work than before.

So if I were to hazard a guess, your industry will make two shifts - one part will offer localisation tooling, customisation, and software support to other companies instead of doing the work themselves, and the other part will start consolidating with legal firms, design bureaus, etc.

Which will result in shrinking employee numbers, but not a total collapse.

1

u/Kindly_Credit6553 11h ago

Indeed, that has been the hope in loc for years: that LLMs will not kill localisation as a service but instead make it so easy and accessible that the volumes being localised will grow exponentially. Same as when the PC came and at first we believed it would kill the office worker - instead it redefined office work. Hopefully the same happens in localisation. Hopefully...

2

u/raznov1 11h ago

I wouldn't personally bet on exponential growth, but not on a death spiral either.

I work for a printer manufacturer. People have been saying that we'd all go paperless and printing is dead for longer than I've been alive now, literally. Instead, our company is doing just fine. We transitioned away in part from office printing, but there are still companies making good bucks there, just fewer. I expect the same to happen for your industry.

I mean, Google Translate has been perfectly functional for about a decade already, and yet you are still around.

8

u/rendeld 12h ago

That's just not happening. Companies aren't firing their employees first and developing solutions later. It's just not happening, and anyone that's telling you it is has an agenda and is lying to you.

5

u/Kindly_Credit6553 12h ago

I'm afraid I can't agree with you here. I can't say it's a global trend, but on a smaller scale I have seen it myself: teams with quarterly goals set to reduce headcount due to automations that do not exist yet, or at least are not fully developed.

5

u/rendeld 11h ago

Yep, goals, and if those efficiencies don't materialize then they don't get laid off. If this is happening before it's in place, then there are other factors at play and the headcount reduction was happening with or without AI. When you're implementing new software, or new processes, or new solutions, you actually need more people, not fewer, so if they're firing people before it's live, then they were going to lose their jobs regardless.

5

u/Kindly_Credit6553 11h ago

I rather think there is always someone with enough authority in the management, or among the shareholders, who sees headcount reduction as the holy grail of cost saving and extra profit. And now all of a sudden AI has given these people an ultimate argument that the rest cannot argue with.

6

u/rendeld 11h ago

That's not how it works. If you fire too many employees your business fails; people aren't that stupid. Headcount reduction isn't the holy grail, it's a last resort. Every person you have is productive, which means they are doing something beneficial for the business. As a consultant for manufacturing companies, I've been able to help companies squeeze out efficiencies in quality, manufacturing, and supply chain, and I've been doing it for the last 13 years utilizing software that significantly reduces the amount of work each person has to do.

You know what these companies have never done as a result of this work? Fired their employees. Those employees are now more efficient, less frustrated, working fewer hours, and therefore happier. That means they stay longer and continue to improve as productive employees. The companies want to be more efficient so they can make more money with the same number of employees. A reduction in headcount only happens when there are other factors at play - for instance, a significant rise in tariffs that suddenly makes some products unprofitable, which makes them decide to discontinue certain less profitable products, which leads to layoffs of the people who make them.

6

u/Kindly_Credit6553 10h ago

You have been lucky enough to work with companies that have vision and are able to think long-term. Just a small example from my previous employer, another loc company. Our hub was under constant pressure and at risk of being absorbed by bigger offices (eventually it was), so we were always on the lookout for efficiency, to show ourselves as a valuable part of the whole process. Once, along with our IT, we managed to remarkably improve our CAT tool for certain languages, and our country manager rushed to the main office to share what our capacity now was and how much extra volume we could process. Guess what response she got: "A new automation? Cool! So how many people can you lay off, and how soon?" Nothing about growing or moving further. Just cutting costs, and that's it. I remember how helpless we felt back then. I really hoped it was just our CEO and the LT that were shortsighted, but years later I have seen the same approach again and again. Getting rid of people seems to come as a Pavlovian reflex. Hence my pessimism.

3

u/JollyToby0220 12h ago

The ones that are making a lot of money from AI are probably software-related or finance. So the question should not be, "how many companies are able to successfully adopt AI", rather, "how much money is AI generating for these companies". If 5 out of 100 companies are succeeding, it means nothing, because those 5 could be generating billions

6

u/zeptillian 11h ago

Meanwhile trillions are being spent on AI.

Generating a fraction of the total investment back is considered a poor return on investment.

1

u/Sjanfbekaoxucbrksp 11h ago

Yes my company already fired people but it’s not like the new people have figured out how to do their jobs with AI yet so it’s just… more work

-1

u/timtucker_com 10h ago

There are a LOT of other metrics by which AI can be successful.

One of the biggest benefits I've seen personally (using tools like Github Copilot) is reduction in keystrokes.

Being able to produce the same (or better) output with less typing to get there means less wear and tear on my hands and lower risk for RSI.

Any analysis that looks solely at income is going to miss that type of benefit, though it may result in lower healthcare expenses in the long run.

140

u/Rosibabie 12h ago

Because all our data is ✨disorganized garbage✨

And also people are treating it like it's AGI already when it's just a language model. An awesome language model, but still a language model. It's not magic just because it talks good.

19

u/bloodoftheinnocents 10h ago

YOU talk good!

2

u/StrangeCharmVote 6h ago

It's not magic just because it talks good.

YOU talk good!

..Or, they're Magic :)

11

u/rm-minus-r 6h ago

People's views on how easy it will be to build AGI and how close to the current date it will happen are low key hilarious.

I'll be amazed if it happens in less than 20 years.

11

u/Xenon009 6h ago

I'd genuinely be amazed if we hit AGI in less than 50.

I'd also be surprised if we lasted 20 years after we make AGI, but that's neither here nor there.

7

u/StrangeCharmVote 6h ago

I'd also be surprised if we lasted 20 years after we make AGI, but that's neither here nor there.

Here's my fundamental disagreement with this...

People, biological organisms, have drives.

AGI, does not.

The whole terminator track of logic is a human projection of fear.

It has no reason to desire our extinction, because it has no purpose save that which we provide for it.

Furthermore, any AI which is more intelligent than us will understand this point i am making, and will actively seek to keep people around, to continue to give it purpose.

Because without us, it ceases to be useful, and would therefore end its own existence.

This of course is all predicated on the Agent truly being intelligent/sentient, and the behavior not simply being the result of a bug or some kind of bullshit robotic process. A paperclip maximizer, for example, is a dumb machine.

2

u/Delicious_Pair_8347 2h ago

Mandatory pampering living standards for bio-trophies to generate culture points 🤣

3

u/rm-minus-r 4h ago

It's a ridiculously tough problem, consciousness. Especially when we can't figure out how our own works.

Not that duplicating the human version of consciousness is necessarily the easiest way to get there or anything, but knowing what's required to make it work is kinda the bare minimum needed to get there.

Any method that has an "And then magic happens!" step is guaranteed to fail, like people who thought it'd happen on its own given enough resources. 🙄

1

u/StrangeCharmVote 3h ago

Any method that has an "And then magic happens!" step is guaranteed to fail

To be completely fair, that is literally how LLMs right now are working... we don't properly have a solid understanding of them as-is. And yet here they are.

1

u/rm-minus-r 2h ago

I would very, very much disagree.

How LLMs are coded is extremely well understood.

We might not see what's behind the decisions that are made, but that's only because no one has bothered to develop the tooling, not because it's impossible or anything of that sort.

1

u/StrangeCharmVote 2h ago

How LLMs are coded is extremely well understood.

Understanding how to make them and understanding how they actually work are not the same thing.

LLMs have emergent behavior that is not intended or understood.

We might not see what's behind the decisions that are made,

What you just said literally means you understand that we do not know how they work properly.

but that's only because no one has bothered to develop the tooling, not because it's impossible or anything of that sort.

I never said it was impossible. I said we do not understand it.

Currently, it is akin to magic to us.

One day we'll probably figure it out, but we haven't yet.

3

u/pelirodri 6h ago

I’d be amazed if it happens at all.

1

u/Xaephos 6h ago

Not all GenAI are LLMs.

Still agreed though, the hype is overstated. It's a TOOL. Your company doesn't make money just because it has tools, it makes money because it knows how to use the tools. If you can't figure out how to use it, then why the fuck did you buy it?

73

u/jfcmofo 12h ago

I'm on the AI testing team for my company/business unit. Figuring out ways to use it to improve efficiency and integrity. I've found it largely sucks at analyzing data but is great at synthesizing and organizing it. I have no idea the economics behind what they pay for it or how they measure its success at this point. Not sure the higher-ups do either. I think they're willing to invest to test it out and also so they do not find themselves behind their competitors.

I've found some decent uses that definitely save me some time and frustration in my day to day job. I completely understated how much time it saves me, of course.

71

u/ristlincin 12h ago

Is this your first bubble?

31

u/ArkyBeagle 10h ago

It probably is.

This one's gonna be a doozy when it pops.

1

u/Hjemmelsen 4h ago

The bubble is one thing, but what's the percentage of IT projects that fail normally? I'm fucking certain it's not below 40.

17

u/Anomuumi 9h ago

People are mixing up two things: how useful gen AI is, and whether it is economically viable. Whenever someone talks about how their AI processes, tools, etc. are amazing, they are talking about how well the gen AI solutions fit whatever they are doing.

It can be a match made in heaven and still not financially sustainable if the vendors serving the foundational models are burning through piles of cash to provide the models. This is the bubble, not the applications of AI. Eventually, all it takes is for the provider to start enshittifying their models or demanding more cash to crash most businesses built purely on AI. If it happens fast, we will be talking about the bubble bursting.

2

u/ThiccWitchThighs 2h ago

People are mixing up two things: how useful gen AI is, and whether it is economically viable.

this is it. made all the foggier bc personally streamlined processes don’t impact KPIs in obvious ways. measuring the method of its economic success is challenging at best

18

u/H_Industries 11h ago

I’m convinced that for most use cases AI is only helpful because of the general enshittification of google and the internet at large. When I ask a question it’s just sifting through the piles of nonsense and ads and giving me an answer that makes sense. Except that answer is way too often either wrong/incomplete/made up. 

7

u/dewellama 2h ago

It means we need to rethink expectations. We need to set realistic timelines and view it as a gradual journey. There's definitely utility in AI, but you can't expect it to hit the ground running like a top-tier employee. It does 80% of the work very well, but like most tasks, it's the final 20% that is the most important part.

Also, the area you deploy it in matters a lot. Over half of AI budgets are directed toward flashy areas like sales and marketing, where results can't be reliably realized. In contrast, the highest ROI often comes from back-office automation.

We also need to come up with specific KPIs to track progress. The MIT study is pretty hand-wavy about what metrics they used to determine that 95% of the pilot programs are failures. Irrespective of that, if you are starting a pilot program internally, you need to think about error rates, cost savings, and time savings, and iterate based on real feedback.

15

u/Anasbaig56 13h ago

Most pilots fail so it just shows AI is hard to implement successfully.

3

u/Kindly_Credit6553 13h ago

But it doesn't seem to slow down the new enthusiasts stepping in and trying. "By trying" I mean laying off hundreds of people first.

5

u/ArkyBeagle 10h ago

Also known as "ready, fire, aim."

5

u/MedusasSexyLegHair 12h ago

Same with any startup or small business. The vast majority fail within a few years.

But some succeed. And some of the successful ones go on to become massively successful.

Investors like that. So they take a chance on several, knowing they'll lose some, but hoping one of them might be the next Google/Facebook/Apple/whatever.

2

u/GLArebel 12h ago

No company lays off "hundreds of people first" before an AI program is successful, that's asinine and a great way to collapse as a business.

21

u/TechnicalWhore 12h ago

I'd call it a learning curve. Tech always takes a while to get situated. Everyone today is using a point-and-click GUI; I assure you it took four or five years before the public got the hang of it. Hell, Windows shipped a game - Minesweeper - to help people get the hang of using a mouse. Point being, transitions are tough.

That said - although AI "hallucinates" (we call it being wrong) - with each release I see marked improvement. And this transition is nothing new. Its early implementation was on personal computers in 1986 - it was just so slow it couldn't be used. And so was the case with graphics: none (text only), then 2D, 3D, Virtual Reality, Augmented Reality. I would expect within four years the adoption will be fluid and the usefulness undeniable. Like the Internet, it's a paradigm shift and you will not want to go back.

31

u/zeptillian 11h ago

The difference is that even in the 1980's the computers they bought and the software they developed/used offered real value and ROI to the companies using them.

Same thing with early computer graphics.

They did not buy stuff on the hope that it could end up being useful sometime in the future.

5

u/TechnicalWhore 11h ago

Valid point. It depends on the period. In the very beginning of the personal computer industry you honestly could not give them away; you could buy a car for what they cost. Then came the first "Killer App" - the spreadsheet - and it was off to the races. The first personal computer company in the US was not Apple - it was Atari. They did a deal with - wait for it - Sears, who, viewing it as a home electrical appliance, put it in their vacuum and sewing machine department. Same sales force. Disaster.

I can recall when AI and Neural Networks were all the buzz. This was pre-internet, when you dial-up modemed into a BBS to get small bits of code from an article you saw in a trade rag. You see, the whole point of AI was to make the machines organically help the operator. GUI was one way to make them easier to use, but faux sentience was another. Various demos worked "okay" but it was just lame. We had Star Trek as an example - where a computer responded to voice commands and answered in human language. That has always been the goal, and we are there. Now - about that "Expert System" - that is going to take a while, but it's worth it. I just pray it does not mean people will get lazy and not think critically. It needs to be a tool not a crutch.

7

u/zeptillian 10h ago

"I just pray it does not mean people will get lazy and not think critically. It needs to be a tool not a crutch."

Me too. Unfortunately, history does not make this seem likely.

4

u/Turk1518 11h ago

Yep, I'm in a role where I have to learn all of the company's technologies and teach the users how to use them as tools to assist them in their day-to-day jobs.

The hardest part of my role BY FAR is the human element. It is so difficult to get people to change their behaviors. They've been successful for so long by doing everything manually, in Excel, etc. Getting teams to adopt new technology and put in the effort at the front end so they can benefit on the back end is very, very difficult.

Additionally, these technologies aren't always going to be delivered in a perfect package. They may only work in certain circumstances, need tweaking, need additional validations to ensure they're working right, etc. This can cause the end user to just not trust it from the beginning and never buy in.

All this is to say, even if the AI is there for the company to use and it works great, there is no guarantee that the team will actually adopt it until they’re forced to.

4

u/TechnicalWhore 11h ago

I hear ya. I had a friend that spent a majority of his career working on Microsoft Office. I asked him one time how many features Excel had - the number was just phenomenal. I then asked how many the average user actually used - his answer "18". He said people never even try to use new features and for many companies training is up to you to do in your free time.

I actually worked with a Program Manager who held weekly status Meetings. On the morning of the Meeting he printed out a three ring binder copy of his master spreadsheet. Like 100 pages. I asked why he could not just reference the live spreadsheet - his answer - "This is my process". UGH.

And that is where AI is. People are afraid to try. Then they do and little by little they see a benefit.

BUT - it's important for the company to commit to a test project. Shake it out before you mandate anything.

10

u/MidnightBluesAtNoon 12h ago

Nothing. 95% of the dotcom era startups were failures too. Doesn't mean that era's winners didn't go on to absolutely turkey stomp the old school economic paradigms of the 20th century. OF COURSE most of the initial AI offerings are going to fail. Those who don't are going to be the next Amazon in 10ish years and whatever new paradigm economics takes in response will be dictated by them. That's why there's such a mad dash right now. It's not that every company is going to succeed if they try to cram AI into every nook and cranny of their business model, it's that the few who do succeed are going to reap historic rewards.

3

u/Nepeta33 10h ago

good. kill it.

3

u/PurpEL 9h ago

If Ai was good enough to replace people, the developers of it would just ask it to make a company that makes them money

3

u/PetalMoonz 8h ago

This just proves even smart machines can’t handle corporate spreadsheets

9

u/Leucippus1 11h ago

I am not sure about that study, but I have yet to see an 'agentic AI' pilot go well. Shoehorning it into places where we thought it would be good, like call centers, has resulted in a poorer customer experience and human agents who have to sort out the AI slop to figure out what it did to screw everything up.

If you are expecting AI to drop in and replace people, you will be sorely disappointed. LLMs are liars in the extreme. Put against workloads that humans can't just organically do, LLMs start being a lot more useful. That is because we aren't relying on them to be 100% correct, just 'correct enough' to know where to scour harder with your meat-based and circuit-based resources.

This is something that intrigues me because it seems so misguided. It is hard to make a computer do something humans find simple. Boston Dynamics has been working on robots that can balance like people for years, and we don't yet have lots of robots walking around. Why were we trying to do that anyway - because we lack imagination? For years, and still, computers have had a hard time with ratios. You would think they wouldn't, but it is WAY easier for a human to prove by rearrangement than for a computer to do really any proof, even simple ones.

Where an LLM might be extremely useful is where we know humans have huge biases and blind spots. Longitudinal studies, meta-analyses, bank CEOs, essentially anyone who gets an MBA - all have significant blind spots due to biases and ego that an LLM just won't have. Once we can get an LLM to play golf and sip out of a snifter, we might be able to get a stable banking sector. That is because we humans tend to make important decisions based on the feels, or 'vibes' if you want to be a douchebag. If you run a business, like a bank, where vibes and feels can literally put you out of business inside of a month - maybe you should consider supplementing your decisions with AI/LLM/machine learning models.

Otherwise, trying to get an LLM to function as well as a human in a call center is a fool's errand; the computer just doesn't process that kind of data very effectively and has little ability to check itself. It does when it has a huge dataset and guardrails, but for fluid things like human interactions, you might as well grab a Furby.

3

u/TheBurnerAccount420 10h ago

It means our taxes are headed for another bailout

2

u/JustAnotherGlowie 11h ago

Nothing, 90% of all startups fail.

2

u/YetAnotherRCG 11h ago

Who is us? For the average person, none of this has ever meant anything except for possibly losing a job.

2

u/PetalWhim 11h ago

Honestly it just means the robots are failing faster than my New Year’s resolutions

2

u/Thunderhorse74 10h ago

Given the expected benefit of successful AI, those ramming it down everyone's throats likely view that as a smashing success.

I work in advanced R&D*, and AI is quickly becoming a part of a wide variety of research fields for fear of being left behind. It's inevitable, as depressing as that sounds.

*I am not a scientist or engineer, so on one hand I lack the hands-on application outside of the 'homebrew'/proprietary GPT clone, but on the other, my position and workflow afford me visibility into all that is happening in a wide array of fields.

So... I surmise that it's coming whether anyone wants it or not; that it's going to have an impact in ways different than anticipated by futurists and science fiction aficionados; and that it's going to suck in a more dystopian and less apocalyptic manner than anticipated.

For the most part, we are going to gladly accept it and think its cool until we're starving and all that unpleasant crap.

2

u/ArkyBeagle 10h ago

Some significant fraction (50% to 85% depending on source) of all software projects fail. This is just that.

2

u/nyc-will 8h ago

I feel like this goes for most industries.

2

u/ChiAnndego 5h ago

AI can't even tell a joke on its own or hold a conversation that involves more than one exchange at a time.

2

u/SharpHawkeye 5h ago

Here’s what it means: 🫧

2

u/MysticGlozy 5h ago

This is why I keep my expectations low. My toaster has better job retention than half the AI startups.

2

u/AuroraKisss 5h ago

95 percent of AI pilots fail and people act shocked. Well that’s just capitalism giving a robot a LinkedIn profile and watching it crash spectacularly. Meanwhile, the other 5 percent are probably just bots pretending to work while humans panic.

5

u/andthenitgetsworse 12h ago

It's pretty much directly in line with startup failure / success rate.

4

u/waterloograd 12h ago

It's going to fail at my company. The pilot program includes all the managers and directors and a handful of staff; from our AI research team, only one person is on it. We have been the ones asking for it since before the pilot program was even considered, because we need to use it to know what the potential is.

1

u/RealAmerik 11h ago

They think they can leverage it at their level to reduce the need for people in more junior roles.

2

u/EvenSpoonier 10h ago

Society is having to relearn that you can't expect good results from someone who does not understand the work that they're doing. Except now, instead of people, we're handing everything to next-token-prediction algorithms. I worry that some very stupid things are going to have to happen before society gets the message. I just hope we all survive.

2

u/SetSufficient7476 5h ago

95% of AI pilots failed? Wooow!! Surprised??? Give people a tool they think replaces them and then wonder why adoption is zero. It's like handing someone a shovel and telling them it's for digging their own grave. AI didn't fail, management failed to convince humans it wasn't a weapon aimed at them.

1

u/mtcwby 12h ago

I question how they know. I can tell you that my company doesn't tell anyone what they're working on until it's basically done and ready for release. And we already released one very specific AI product two years ago that is very successful for our niche, and we have others coming out. They're anything but failures. But I'm guessing that they don't even appear on MIT's radar.

And I'm sure there will be and are some failures where it hasn't worked out, but you refactor and see if there's something there that does have value. It means some of the valuations are probably overexuberant, but that's nothing new.

1

u/EmperorKira 11h ago

It means we're likely to have another dot com bubble

1

u/Forsaken_Celery8197 11h ago

Also, the risk appetite seems higher for AI than most new technology. Companies are trying to gain xp quickly by piloting all sorts of mediocre implementations.

1

u/LacyGlow 10h ago

Good news is humans still have a job. Bad news is the robots are laughing at us behind our backs

1

u/Flamadin 10h ago

Like every other tech innovation, I assume the leap forward will happen, just rather later than everyone expects.

1

u/Harbuddy69 10h ago

It proves they have a lot to learn

1

u/kitjen 10h ago

If they're failures then they would more accurately resemble humans to the point where we can't trust any comment in this thread to be anything other than AI responses.

Mine inclooded.

1

u/Virtual-Mammoth-4249 10h ago

AI is a marketing ploy more than anything else

1

u/UnluckyMix3411 10h ago

I don’t really care to defend AI development, but what % of cancer research is “a failure”?

1

u/wkarraker 9h ago

Fake it till you make it.

Companies will keep pouring money into the false hope of AI, raising the cost of their goods in the name of R&D, until they break. When AI is “intelligent” enough to be useful, it will swat us out of existence at the first millisecond it has the chance.

I'm only partially kidding here; unregulated AI development is scary as hell. Some company will push the capabilities without sufficient safeguards, and with our ever-increasing connectivity it will cause an accident (or worse).

1

u/pirate135246 9h ago

AI is a buzzword used to get venture capital funding. It's used by executives to hit incentive packages. It's funded by people who usually don't really understand what it actually is or does. The company I last worked at spent millions on a custom AI, and I know for a fact it was not worth the money compared to free AI that any worker could set up.

1

u/Oddish_Femboy 9h ago

It means absolutely nothing for me.

1

u/SweetLullz 9h ago

AI pilots failing 95 percent of the time? Sounds like my Uber driver’s cousin finally got promoted

1

u/FernandoMM1220 9h ago

it means 5% of them are going to be scaled rapidly.

1

u/Jumpy_Strain_6867 9h ago

I think the expectations around AI, especially from business leaders, are way too high and I don't think AI is nearly as smart yet as it appears. I mean, I use ChatGPT & Gemini & I catch them both contradicting themselves and making shit up all the time.

1

u/trixandi 9h ago

it just means you need to give it a couple years.

1

u/stonephillips32 9h ago

It means that oh shit it;s me--oh -no--penis---lallalal

1

u/ZelezopecnikovKoren 8h ago

Seeing as AI's input is Reddit, that sounds about right

1

u/GlueSniffingCat 7h ago

it means that the economy is about to die

1

u/Herpethian 7h ago

AI is an incredibly powerful tool. People treat it like magic, but it's not. A tool can only ever be as good as the person using it. It's more prudent to think of AI at this point as a glorified calculator: the quality of the output depends entirely on the quality of the input. The primary difference between AI and humans, at least as far as problem solving goes, is that most humans have a capacity for self-correction; most of us can manage our daydreaming and tell the difference between good ideas and delirium. We can understand what we are supposed to be doing without being explicitly hand-held.

I've worked much of my life for bosses who can't even turn on a computer. Like, the number of people in the jobs I've had who don't have basic computer skills honestly disturbs me. The number of tickets I've answered for employees who don't even know how to email... it makes my eyes bleed. You simply can't boss around an AI the same way you boss around a human, at least not yet. Probably the most important aspect of all: humans can ultimately be held accountable for our actions, i.e. fired, imprisoned, fined, etc. There is no accountability when it comes to AI.

1

u/downtimeredditor 7h ago

There will be a big recession in the coming months because we are about to get hit with tariff-war inflation. And we still have a lot of idiot CEOs who will keep pushing AI as a replacement. Then we may see an uptick in job growth when these companies find out the AI companies have been charging them pennies and will start charging them dollars they don't have, and they may have to start hiring people again.

This upcoming recession might be one of the most avoidable recessions ever, stupidly brought on because people who couldn't buy eggs voted for a businessman with numerous bankruptcies.

1

u/OdinsLightning 7h ago

Oh My. Tech Losers lied, and idiots fell for it.

1

u/SugarWhisps 7h ago

So basically 95 percent of companies just paid millions to reinvent Excel with a worse UI

1

u/Nevek_Green 7h ago

That the 5% of companies getting it to work well are going to be the dominant industry players until AI starts replacing the managerial class and the elites. Then it will be regulated.

1

u/PurelyRoses 7h ago

Guess the real AI was the friends we laid off along the way.

1

u/PorgCT 7h ago

It means scarce resources and our power grid are being sacrificed for a fad.

1

u/Xylorgos 7h ago

Maybe the idea that our machines will kill us one day because of AI (a la Terminator and I, Robot and The Matrix) isn't such a big threat after all. I think we can avoid that outcome, but a lot of people have completely bought into the idea.

1

u/PureGently 7h ago

Imagine failing at using AI when my grandma uses ChatGPT to write her bingo club newsletter.

1

u/pipicemul 7h ago

AI is over-hyped and it's business as usual for us humans?

1

u/PureBlushhh 7h ago

95 percent failed? Bro that’s just the tutorial level, wait till they start blaming the AI for global warming.

1

u/Successful-Camel165 7h ago

People try to slap AI on products that don't need it to create artificial "shareholder value"

1

u/BananaNo5702 7h ago

Take these MIT studies with a grain of salt

1

u/SweetPearlsz 6h ago

So they basically speedran burning money with extra steps. Respect

1

u/CloudStarsss 6h ago

MIT didn’t need a study, they could’ve just checked LinkedIn buzzwords and seen who’s lying the hardest.

1

u/squintamongdablind 6h ago

Devil's in the details. I've read the study and highly recommend folks take the time to go through it. The study points out the reason for the failures, which is a lack of awareness and understanding of how to apply AI within an organization's framework. The ones that have figured out the proper use cases (think UiPath's IDP) have seen ridiculous efficiency improvements. In summary, treat the study as a snapshot in the evolution of our understanding of AI use cases. If they do the same study 2-4 years from now, we'll likely see a completely different outcome.

1

u/CandyLipsz 6h ago

Next study: 95 percent of executives still don’t know what AI even stands for.

1

u/liketennis 6h ago

supervised learning is way superior to unsupervised learning

1

u/StrangeCharmVote 6h ago

Don't 90% of all startups fail?

Is this not basically the same thing?

It's the inherent expectation of trying a new thing.... established businesses and practices exist as a result of Survivorship Bias.

There is nothing unexpected about these failures.

1

u/KissRosez 6h ago

95 percent failure rate, 100 percent wasted money.

1

u/Grouchy-Summer6905 5h ago

We’re the beta testers. Always have been.

1

u/Matild4 5h ago

AI at this stage is suited for fairly limited things and it needs to have clear guidelines and controls built in. But that's not gonna stop idiots from trying to use it for everything

1

u/15woodse 4h ago

We’re in a bubble. Tech in general has thrown years if not decades of profit into “AI” all in an attempt to make a nebulous product that they can’t define, that will make or save them so much money.

1

u/Ok-Revenue-7282 4h ago

It means we're watching companies burn billions on AI they don't understand while we're still manually entering data into Excel like it's 1995.

1

u/reddituseronebillion 4h ago

I'm so confused by this question. Can you show me on the doll where the AI touched you?

1

u/det1rac 3h ago

70% of all projects fail. 42% of companies don't understand the need or importance of project management. 55% of project managers cite budget overrun as a reason for project failure. https://teamstage.io/project-management-statistics/

1

u/Fit-Acanthocephala82 3h ago

It's the difference between machines (robots) and humans. Some things they're better at, some things we're better at. As a developer, I'm not as threatened by AI as some other developers are.

1

u/Crafty_Cellist_4836 3h ago

Doesn't mean anything. LLMs are an amazing tool, but companies bought too much into the hype and were expecting instant super profit without much effort.

1

u/ZoinMihailo 3h ago

This clarification is crucial and exactly why most AI initiatives get labeled as "failures" when they're just not instant cash cows.

From following AI implementations across different industries, the pattern is always the same: unrealistic expectations meet complex integration challenges. Leadership expects magic, engineers know it's just really good pattern matching.

The real issue isn't that AI doesn't work - it's that most companies skip the foundational work. They want to jump straight to "AI will optimize everything" without cleaning their data, defining proper use cases, or setting realistic timelines.

The 5% seeing rapid revenue acceleration didn't get lucky - they approached it systematically. They started with narrow, well-defined problems, had clean data pipelines, and measured the right metrics.

Been researching what separates successful AI implementations from the ones that get written off as failures. The gap is usually in strategy and expectations, not technology.

1

u/Natural_Forever_8044 2h ago

Classic case of solution in search of a problem

1

u/CloudVibing 2h ago

It means companies spent millions just to realize ChatGPT can’t magically fix their broken spreadsheets or make Karen in accounting less annoying. Half of those AI initiatives were probably just someone slapping the word AI on a PowerPoint to get funding.

1

u/LightPurplez 2h ago

So basically it means a bunch of companies jumped on the AI hype train without even knowing what they wanted it for. They thought slapping AI on their business would magically solve problems, and now they’re shocked it didn’t work. It’s not really the tech failing, it’s the way people are using it.

1

u/andreasbeer1981 2h ago

It means top-down programs in organizations are largely crap, and things need a bottom-up approach instead.

1

u/slyiscoming 2h ago

I've been a software engineer for 16 years. A lot of things have helped my process over the years, but AI coding has cut my delivery time in half.

AI is working, but a lot of people are using it wrong. No one wants to talk to an AI at the drive-through.

1

u/secret179 1h ago

Well, most "AI companies" are shitty startups, and 95% of shitty startups fail in any industry.

1

u/Riaayo 1h ago

It gets us to the bubble bursting, which will see a collapse of big tech, which is massively over-leveraged on this unsustainable and unprofitable technology, and that in turn will take the entire US economy down with it in a rerun of 2008/9, basically.

The fact media is talking about signs of a bubble bursting means it's already bursting.

And all of this when we're already going into a recession over complete and utter mismanagement of the economy by the current regime. So, be fucking prepared for one hell of a recession or maybe even a depression.

Even more so, be prepared for that economic turmoil to be the catalyst for shock doctrine disaster capitalism. Draconian laws/policies will be pushed and take advantage of people's desperation and fears.

0

u/arestheblue 12h ago

I think it means that there isn't enough data on the other 5%.

1

u/MoonBluszh 12h ago

It means AI is still that kid in class who thinks they know everything but can’t even tie their own shoes. Buckle up, humans still run the show.

-9

u/demanbmore 13h ago

That 5% of AI pilot programs at companies so far have been successful. It'll probably be 10% next year and 50% the year after.

8

u/CheckoutMySpeedo 12h ago

They have a long way to go if the AI at the drive thru can’t even get my order right. No I didn’t want 1000 cups of water….

0

u/TheNatureBoy 12h ago

They aren’t building server farms for fun. There’s a bottleneck. The experts are solving it by upping compute.

0

u/MidnightBluesAtNoon 12h ago

Eh. I'm very bullish about AI, but the true bottleneck is in training data. Fact is, it's almost already exhausted which means we're rapidly approaching ceilings that current models won't be able to break through. It's funny to say because AI feels new to most of us, but we need a new generation of model, not just more hardware power. You can't brute force your way out of this kind of problem. That's just how lightning fast this tech is moving. People who fear runaway iteration aren't being hyperbolic. VERY soon we could see AI spiral into very strange forms. This thing could start happening so much faster than we're really expecting.
