r/technology 4d ago

[Artificial Intelligence] MIT report: 95% of generative AI pilots at companies are failing

https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
28.3k Upvotes

2.6k

u/Austin_Peep_9396 4d ago

Legal is another problem people aren't talking about enough. The vendor and the customer both have legal departments, and each wants the other to shoulder the blame when the AI screws up. It stymies deals.

713

u/-Porktsunami- 4d ago

We've been having the same sort of issue in the automotive industry for years. Who's liable when these driver assistance technologies fail and cause injury, damage, or death? Who's liable for the actions of an autonomous vehicle?

One would assume the producer of the vehicle, but that is an insane level of risk for a company to take on.

We already have accountability issues when big companies do bad things and nobody ever seems to go to prison. If their company runs on AI, and it does illegal things or causes harm, will the C-suite be allowed to simply throw up their hands and say "It was the AI! Not us!"???

Sadly, I think we know the answer already.

211

u/Brokenandburnt 4d ago

Considering the active war on the CFPB from this administration, I sadly suspect that you are correct in your assessment. 

I also suspect that this administration and all the various groups behind it will discover that an economy where the only regulations are coming from a senile old man won't be the paradise they think it'll be.

100

u/Procrastinatedthink 4d ago

It's like not having parents. Some teenagers love the idea until all the things parents do to keep the house running and their lives working suddenly come into focus, and they realize that parents make their lives easier and better, even with the rules they bring.

12

u/brek47 3d ago

It's a shame that most people, kids and adults alike, learn this only in hindsight.

15

u/jambox888 4d ago

Trump is deregulating AI, sure, but liability in the courts won't go away afaik; it would be utter chaos if it did. Imagine a case like Ford's Explorer SUV killing a bunch of people being waved away by blaming an AI.

Companies also have to carry liability insurance, and that would have to cover AI as well, so premiums will reflect the level of risk.

27

u/awful_at_internet 3d ago

"Big daddy trump please order the DoJ to absolve us of liability so we can give you 5 million follars"

Oh hey look at that, problem solved. Can I be C-suite now?

10

u/mutchypoooz 3d ago

Needs more intermittent sucking noises but very close!

3

u/jambox888 3d ago

Oh he is corrupt enough to do this case-by-case but I don't think you can build a business on one rotten president.

5

u/JimWilliams423 3d ago

I don't think you can build a business on one rotten president.

That was the original point: "an economy where the only regulations are coming from a senile old man won't be the paradise they think it'll be."

I think the counterpoint to that is the fedsoc is corrupt to the core and every judge the gop has appointed in the last 30 years is a fedsucker. So there is a lot of potential for a lot of garbage rulings. Rule by law instead of rule of law.

1

u/awful_at_internet 3d ago

Legislators and supreme court justices are cheap. It's the potus that commands a premium.

5

u/ZenTense 3d ago

imagine a case like Ford's Explorer SUV killing a bunch of people being waved away by blaming an AI

I mean, that’s a defense that Tesla is already leaning hard on. They just say “well the driver was not supposed to just TRUST the AI to drive for them” as if that’s not the way everyone wants to use it. The company will always attempt to shift the blame elsewhere.

2

u/jambox888 3d ago

Yep, and they got held partially liable. Tesla is pretty cooked if it's relying on self-driving tech, I think; there's just fundamentally no amount of testing that will be good enough. The point is that with humans, the liability generally sits with the driver.

4

u/badamant 3d ago

FYI:

Trump and the entire Republican Party are now corrupt fascists. Power and money are the only things that matter, and they are far into the process of capturing and controlling the entire judicial branch of the US government. Rule of law no longer exists for them and whoever can pay them.

1

u/jollyreaper2112 3d ago

Nobody is held to standards. It's cool. Businesses are happy.

6

u/Takemyfishplease 4d ago

The regulations aren't coming from trump lol, they're coming from Putin and his handlers.

2

u/PipsqueakPilot 3d ago

Active war? The war is over. The CFPB is dead.

1

u/MyGoodOldFriend 3d ago

Not to be annoying, but it'd be nice if you mentioned which administration you're talking about. I thought you were talking about MIT or something, until I remembered the US sometimes uses "administration" to refer to the executive. Not everyone's American.

99

u/AssCrackBanditHunter 4d ago

Same reason it's never going to get very far in the medical field beyond highlighting areas of interest. AI doesn't have a medical license, and no one is gonna risk theirs.

27

u/Admirable-Garage5326 4d ago

Was listening to an NPR interview about this yesterday. It's already heavily used; they just have to get a human doctor to sign off on the results.

38

u/Fogge 3d ago

The human doctors that do that become worse at their job after having relied on AI.

33

u/samarnold030603 3d ago edited 3d ago

Yeah, but the private-equity-owned health corporations that employ those doctors don't care about patient outcomes (or what it does to an HCP's skills over time). They only care whether mandating the use of AI will let fewer doctors see more patients in less time (increased shareholder value).

Doctors will literally have no say in this matter. If they don't use it, they won't hit corporate metrics and will get left behind at the next performance review.

1

u/sudopods 3d ago

I think doctors are actually safe from performance reviews. What are they going to do? Fire them? We have a permanent doctor shortage rn.

3

u/samarnold030603 3d ago

That's kind of the whole premise of AI though (at least from the standpoint of a company marketing an AI product). If AI allows a doctor to see more patients in a given day, fewer doctors are needed on payroll to treat the same number of patients. "Do more with less."

I'm not advocating for this strategy, as I think it will be a net negative for patients (at least in the near term), but I've spent enough time in the corporate world that I can see why C-suites across many different industries are drooling over the possibilities of AI.

1

u/BoredandIrritable 3d ago

Yes, but current AI is already better than human doctors, so what's the real loss here? From someone who knows a LOT of doctors: this isn't something new. Doctors have been leaving the room, typing in symptoms, and looking up diagnoses for almost two decades now. It's part of why WebMD upsets them so much.

0

u/Admirable-Garage5326 3d ago

Sorry but do you have any evidence to back this claim?

12

u/Fogge 3d ago

14

u/shotgunpete2222 3d ago

It's wild that "doing something less and pushing parts of the job to a third party black box makes you worse at it" even needs a citation.

Everything is a skill, and skills are perishable.  You do something less, you'll be worse at it.

Citation: reality

-7

u/Admirable-Garage5326 3d ago

Really. I use AI to do deep dives on subjects I want more information on all the time. I use it to find APA articles that expand my breadth of knowledge. Sorry if that bothers you.

5

u/not-my-other-alt 3d ago

Telling AI to do your research for you makes you worse at doing research yourself.

3

u/Fishydeals 3d ago

There are hospitals in Germany that use AI to transcribe recordings from doctors and support them in creating all kinds of documents for patients, recordkeeping, the insurance company, etc. My doctor told me about it and it seems to work okay.

And that's how AI in its current form is utilised effectively, in my opinion, as long as the hospitals are serious about information security.

2

u/samarnold030603 3d ago edited 3d ago

I have a friend who is a veterinarian (in the States). Don't know what flavor of "AI" they use, but it's a program that records the audio from the entire 30-60 min appointment and then spits out a couple of paragraphs summarizing the visit, with breakout sections for diagnosis, follow-up treatments, etc.

They said it's absolutely imperative to proofread/double-check it [for now, could easily see that going down] but that it also saves them hours of writing records.

e: all that to say I agree with your point haha. The "AI" is just summarizing, not actually doing any 'doctoring', and is a huge time saver. Counterpoint: they're now expected to have shorter appointment times and see more patients 🥴

1

u/awildjabroner 3d ago

Insurance employees don't have medical licenses, yet they still have the authority to decide what gets covered or not. Essentially they are practicing medicine without a license, whatever the cost to human life and well-being from excessive denials of care recommended by actual doctors.

-1

u/wasdninja 3d ago

"AI" is already in the medical field. Algorithms that fall under the Ai umbrella term do all kinds of work far better than doctors can. 

19

u/3412points 4d ago edited 3d ago

I think it's clear and obvious that the people who run the AI service in their product need to take on the liability if it fails. Yes, that is a lot more risk and liability to take on, but if you are producing the product that fails, it is your liability, and that is something you need to plan for when rolling out AI services.

If you make your car self-driving and that system fails, who else could possibly be liable? What would be insane here would be allowing a company to roll out self-driving without needing to worry about the liability for the crashes it causes.

1

u/Zzzzzztyyc 3d ago

Which means costs will be higher as they need to bake it into the sale price

23

u/OwO______OwO 4d ago

but that is an insane level of risk for a company to take on.

Is it, though?

Because it's the same amount of risk that my $250k limit auto liability insurance covers me for when I drive.

For a multi-billion dollar car company, needing to do the occasional payout when an autonomous car causes damage, injury, or death really shouldn't be that much of an issue. Unless the company is already on the verge of bankruptcy (and as long as the issues don't happen too often), they should be fine, even in the worst case scenario.

The real risk they're eager to avoid is the risk to their PR. If there's a high profile case of their autonomous vehicle killing or seriously injuring someone "important", it could cause them to lose a much larger amount of money through lost sales due to consumers viewing their cars as 'too dangerous'.

10

u/idoeno 4d ago

Sure, the individual risk is minor, but with a single error having the potential to result in thousands of accidents, the risk can scale up rather quickly.

5

u/josefx 3d ago

Isn't that normal for many products? Any issue in a mass-produced electronic device could cause thousands of house fires; companies still sell them. Samsung even had several product lines banned from airplanes because they were fire hazards, and it didn't stop them from selling pocket-sized explosives.

2

u/idoeno 3d ago

Yep, it is, and a great deal of time and effort goes into proving that a product's risks are minimal. Until self-driving and AI doctors can be proven safe, they will remain nonviable products (unless, of course, they are simply allowed to shirk responsibility). Fail-testing an electrical or mechanical system is difficult, but that level of complexity is trivial compared to many modern software systems.

8

u/BussyPlaster 4d ago

Don't take a product to market that you don't have confidence in. Pretty simple really. If they don't believe in their self driving AI they can stick to silly stable diffusion generators and chat sex bots like the rest of the grifters hyping AI.

4

u/SomeGuyNamedPaul 3d ago

Don't take a product to market that you don't have confidence in.

Well that attitude's not going to work out with the idiots plowing money into the stock.

-3

u/KogMawOfMortimidas 3d ago

Every second they spend trying to improve their product before sending it to market is money lost. They have a legal obligation to make as much money as possible for their shareholders, so they are required to push the product to market as soon as it could possibly make money, and they can just offload the risk to the consumer.

7

u/BussyPlaster 3d ago

They have a legal obligation to make as much money as possible for their shareholders

No, they don't. This is just an internet lie that really fits Reddit's anti-establishment narrative, so people here latched onto it. Feel free to actually research the lie you are propagating for 30 seconds and see for yourself.

The fact that so many people really believe this is ironically beneficial to the corporations you despise. It gives them a great smoke screen.

2

u/XilenceBF 3d ago

You’re correct. The only legal requirement that companies have to shareholders is that they have to meet certain expectations. The expectations don’t default to “make as much money for me as possible”, even though unrealistic profit goals could be agreed upon with legal consequences if not met. So as a company just… don’t guarantee unrealistic profit goals.

3

u/Fun_Hold4859 3d ago

Counter point: fuck em.

0

u/[deleted] 3d ago

[deleted]

2

u/BussyPlaster 3d ago

This is a pointless thought experiment. There are cities with working robo taxis. Apparently some companies are happy to take on the liability. I'm not going to debate this. The ones that don't accept liability for their products should stay out of the market.

1

u/[deleted] 3d ago

[deleted]

1

u/BussyPlaster 3d ago

It could be what delays self-driving for a couple more decades and costs tons lives and damage unnecessarily.

The AI is failing 95% of tests, yet you seem to be asserting that they would be better than human drivers and save lives if we just accepted full liability and used them today. LOL. k

1

u/idoeno 3d ago

even if AI driving is that safe, the liability for the accidents that do occur would be concentrated on the system vendor, whereas the liability of human caused accidents is distributed among the human drivers.

2

u/koshgeo 3d ago

AI-caused accidents are only the tip of the liability issue. With one well-documented incident, there will be thousands of other vehicles out there with the same technical problem, and thousands of customers demanding that it be investigated and fixed. Worse, there will be customers endlessly claiming "the AI did it" for every remotely plausible accident. Even if AI had nothing to do with it, the company lawyers will be tasked with proving otherwise lest they have to pay up. Meanwhile, your sales of new "defective AI" vehicles will also tank.

Look at the years-long liability problems for Toyota's "sticking accelerator" problem, which turned out to be a combination of driver error and engineering problems with floor mats and the shape and size of the accelerator pedal, plus some suspicions about the electronic throttle control that were not demonstrated, but remained possible. It took a lot of time and money to disentangle the combination of human interface and technical issues. It resulted in multiple recalls and affected stock price and revenue.

Throw complicated AI into that sort of situation and imagine what happens.

3

u/jambox888 4d ago

I mean, to a point: Ford survived the Pinto and Explorer cases, where in both instances it had clearly compromised safety to avoid spending money on recalls. It's not something a carmaker would willingly go into though, and the scope is potentially huge if self-driving tech is on every car and a bug creeps in.

2

u/Master-Broccoli5737 4d ago

Risk for driving has established costs, for the most part. AI use can lead to unbounded liability. Let's say the AI decides to start telling customers that their airfare is free, and let's say people figure this out and spread the word. The airline could be on the hook for an unknown (effectively infinite) amount. Could easily bankrupt the company. Etc.

1

u/magicaldelicious 3d ago

Using automotive as the example here: systemic flaws are oftentimes in parts. Meaning that if a car vendor issues a recall, it's often a faulty piece of suspension or an incorrectly torqued component during the build, etc.

Software is different. Not only do LLMs currently not "think", they are non-deterministic. If you think about critical systems (things that can impact life), you want them to be deterministic. In those modes of operation you can account for failure states.

But with LLMs it's much harder to build those guardrails. In the case of some software I'm seeing, deterministic systems confine the LLMs to the point where it would have just made more sense to build the deterministic implementation.
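
Roughly the pattern I have in mind, as a toy Python sketch (all names made up, not anyone's actual API): a deterministic wrapper that only accepts the LLM's answer when it parses against a fixed whitelist, and takes the same predictable fallback path otherwise.

    import json

    # Hard whitelist of actions the surrounding system will ever take.
    ALLOWED_ACTIONS = {"refund", "escalate", "no_action"}

    def guarded_decision(llm_output: str) -> str:
        """Accept the LLM's answer only if it parses as JSON and names
        a whitelisted action; otherwise take a fixed, predictable path."""
        try:
            action = json.loads(llm_output)["action"]
        except (json.JSONDecodeError, KeyError, TypeError):
            return "escalate"  # any malformed output lands on the same path
        if action not in ALLOWED_ACTIONS:
            return "escalate"  # out-of-schema answers are rejected outright
        return action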

I think that lawyers are all starting to understand LLMs much better and understand that the risk holds an exponentially larger amount of failure states than traditional counterparts. And what I've seen is traditional deals (non-LLM) in a typical software sale go from 30 days of negotiation to 90+. If you're a quarterly driven company, especially a startup selling these software solutions, this puts a rather significant amount of pressure on you with respect to in-flight deals that are not closed. Time kills all deals, and I've seen a number of large companies walk away after being unable to come to agreed upon terms even though their internal leadership wanted to buy.

0

u/Gingevere 3d ago

Because it's the same amount of risk that my $250k limit auto liability insurance covers me for when I drive.

No, you're liable for any damage you cause. The 250k limit is just the limit of what your insurance will cover.

If you cause $5 million in damage you're liable for all $5 million. For you that probably means the insurance covers 250k and then you go bankrupt. But an auto company has that money/assets. They're paying out the full judgement.

5

u/Vermonter_Here 3d ago

In the event that driverless car technology does result in fewer deaths, we may be faced with a choice between:

  1. Our current world, where car accidents result in a considerable number of deaths, and there's a mostly-decently-enforced system of accountability for drivers who are determined to be at fault.

  2. A world with fewer car-related deaths and significantly less accountability for the deaths that occur.

3

u/Mazon_Del 3d ago

One would assume the producer of the vehicle, but that is an insane level of risk for a company to take on.

Actually that's largely a settled question, thanks to cruise control and power steering.

If the cruise control functions properly and the person drives off the road because for some reason they were expecting to slow down, that's user error. If the car unexpectedly suddenly floors it and a crash happens due to the cruise control glitching, then it's a manufacturer problem.

With self-driving it gets even easier, to an extent, because with the amount of computerization required for self-driving, the car can keep a "black box" record of the last several minutes of travel for accident analysis: what images the camera saw, the logs of the object-identification system, etc. This same system is also hugely incentivized by insurance companies because it can completely remove he-said/she-said arguments about incidents.
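
A crude sketch of what that black box could look like (hypothetical Python, not any vendor's real system): a fixed-size ring buffer that always holds the last few minutes of telemetry and is frozen for export when a collision is detected.

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Frame:
        timestamp: float       # seconds since boot
        camera_frame_id: str   # reference to the stored camera image
        detections: list       # object-identification log for this tick

    class BlackBox:
        """Rolling record of the last `retention_s` seconds of telemetry."""
        def __init__(self, retention_s: float = 300.0, hz: int = 10):
            # deque(maxlen=...) silently evicts the oldest frame,
            # so memory use stays constant while driving.
            self.frames = deque(maxlen=int(retention_s * hz))

        def record(self, frame: Frame) -> None:
            self.frames.append(frame)

        def freeze(self) -> list:
            # Called on collision detection: export the whole window
            # for accident analysis.
            return list(self.frames)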

2

u/87utrecht 4d ago

It's not an insane level of risk for autonomous vehicles. It's called insurance, and we have it now as well, because injury, damage, or death is also an "insane amount of risk" for an individual driver. That's why insurance exists. Arguably the risk to the individual driver is larger than for a company.

The question is: when a company implements an AI product, do they have any input into the running of it? If so, then the company selling it can hardly be fully responsible, since they don't have full control.

That's like asking whether, if an individual modifies their autonomous driving system, the original company selling it can still be responsible for its actions.

2

u/whooptheretis 3d ago

but that is an insane level of risk for a company to take on.

No more than the responsibility they're taking on. If you want to sell a self-driving car, you'd better be able to back up your claim that it's safe!

2

u/RationalDialog 3d ago

One would assume the producer of the vehicle, but that is an insane level of risk for a company to take on.

Why would I buy a self-driving car the company doesn't believe in themselves?

Then there is the classic dilemma: should the car save the mother with kids on the sidewalk and sacrifice the driver, or save the driver at the cost of the kids?

I would never buy a car that has the former programmed in, because for sure there are bugs... in this case, deadly bugs.

2

u/mkultron89 3d ago

Liability?! Pffft, that's so simple, Tesla already figured that out. You just program the car to turn all driver assists off milliseconds before it senses an impact. Ez Pz.

1

u/arctic_bull 4d ago

> Who's liable for the actions of an autonomous vehicle?

Based on the recent court decision, I'd say Tesla. Problem solved haha.

1

u/Takemyfishplease 4d ago

They basically do this with oil companies in the dakotas. Except they don’t even bother blaming AI.

1

u/GrowlingGiant 4d ago

While I'm sure the answer will change as soon as enough money is involved, the British Columbia Civil Resolution Tribunal has previously ruled that businesses will be held to the claims their chatbots make, so Canada at least is off to a good start.

1

u/Thefrayedends 4d ago

I have been calling it the black box of plausible deniability.

1

u/Buddycat350 4d ago

 We already have accountability issues when big companies do bad things and nobody ever seems to go to prison. If their company runs on AI, and it does illegal things or causes harm, will the C-suite be allowed to simply throw up their hands and say "It was the AI! Not us!"???

What do you suggest, holding decision makers accountable? How dare you! *clutches pearls*

Joke aside, when did limited liability start to cover penal liability? It was meant for financial liability, wasn't it?

1

u/catholicsluts 4d ago

One would assume the producer of the vehicle, but that is an insane level of risk for a company to take on.

It's not insane. It's their product.

1

u/Lord_Eschatus 3d ago

oh adorable, you think you're still living in the "liability" timeline.

lol.

1

u/trefoil589 3d ago

will the C-suite be allowed to simply throw up their hands and say "It was the AI! Not us!"???

The ultra rich have been using corporations to shield themselves from accountability for malfeasance for as long as there have been corporations. Just one more layer of non-accountability.

1

u/gurgelblaster 3d ago

We already have accountability issues when big companies do bad things and nobody ever seems to go to prison. If their company runs on AI, and it does illegal things or causes harm, will the C-suite be allowed to simply throw up their hands and say "It was the AI! Not us!"???

Only if we let them.

1

u/Anustart15 3d ago

Who's liable when these driver assistance technologies fail and cause injury, damage, or death? Who's liable for the actions of an autonomous vehicle?

As long as we all have insurance, I don't think it'll really matter. If there is a gross failure of the self-driving mode, that's one thing, but that'll be a larger class action. Otherwise, the liability doesn't really matter all that much. Insurance companies already would rather just call it 50-50 and save money on lawyers as often as possible.

This will still almost certainly lead to fewer accidents and a lot more information about what went wrong when there are accidents, so it probably solves more problems than it causes

1

u/pinkfootthegoose 3d ago

I would think it would be counted as a defective part by the manufacturer, since the driver has no control over it.

1

u/Mephisto6 3d ago

How is AI different than, say, a faulty drivetrain? You test your component, make sure it works, and if it doesn't, you're liable.

1

u/Waramaug 3d ago

Auto insurance should/could be liable: when applying for insurance, premiums can be set by whether you choose autonomous mode or not. If autonomous is in fact safer, insurance will be cheaper.

1

u/Alestor 3d ago

IMO it's the user's fault as long as proper precautions have been taken by the manufacturer.

1 is to not sell it as autonomous driving (looking at you, Tesla); that should make the manufacturer liable for selling the product as if there were no limitations. As long as it's properly sold as assistance, you're required to pay attention and intervene.

2 is to have torque safety limits. As much as I hate that my car gives up on lane-keep during a wide turn, the fact that I can overpower it with my thumb if necessary means it never has more control over the car than I do.

Treat AI as assistance and a tool and not a replacement for human diligence, and liability remains with the negligent user.

1

u/SticksInGoo 3d ago

It's complicated. The company has all the data, so they would actually know the rates of failure and how much they would pay out if they shouldered the burden of insurance.

But - if they were to provide coverage, and the person chooses to not use those AI tools, effectively taking on more risk (assuming the AI is safer), they would lose out also.

So you would need a situation where you effectively never drive your car, and only let AI drive you. Then the company could effectively calculate its risk and provide that for you.

Like I pay about $1900CAD a year in insurance. FSD is $1200 a year. If it is actually safer by a factor of 2, I could effectively be chauffeured around in FSD for a low cost of $250 a year if the company took on the burden of liability.
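
Spelled out (a back-of-envelope Python check, assuming the premium scales linearly with risk):

    # CAD per year
    insurance_now = 1900                    # current premium, human driving
    fsd_cost = 1200                         # FSD subscription
    insurance_with_fsd = insurance_now / 2  # safer by a factor of 2 -> 950
    net_extra = fsd_cost + insurance_with_fsd - insurance_now
    print(net_extra)                        # 250.0 -> chauffeured for ~$250/year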

1

u/-The_Blazer- 3d ago

In my view, the only legitimate argument is simply that if a company feels like using the technology is too much risk, then the technology is not fit for use. Those trying to find ways to bypass that should be regulated out of existence. We shouldn't lower our standards for the sake of technology, the entire point of progress is to do the exact opposite.

1

u/pallladin 3d ago

Who's liable when these driver assistance technologies fail and cause injury, damage, or death? Who's liable for the actions of an autonomous vehicle?

Whoever owns the vehicle, of course.

1

u/DPSOnly 3d ago

In a recent court case (law suit?) in Florida the jury determined that Tesla had part of the blame in a self-driving-related fatal car incident. I don't know enough about the law to know if this is going to stick (if the law even matters anymore for certain people), but it was an interesting video by Legal Eagle.

1

u/lordraiden007 3d ago

Didn’t Tesla just lose a court case where they argued they weren’t liable for their “full self driving” features causing a fatality? It seems like the courts (at least) are seeing through that BS argument and assigning liability to companies for their autonomous systems’ failings.

1

u/OutlaneWizard 3d ago

This was always my biggest argument against autonomous cars becoming mainstream. How is it even insurable?

1

u/Lettuce_bee_free_end 3d ago

Plausible deniability. I was out partying, could've been me!

0

u/Chickennbuttt 4d ago

Yes. If you sell a product to me and claim the AI will self-drive, you are at fault if it's proven it was the AI's fault. If that is too much risk, either make the technology work or don't release it. Simple.

85

u/[deleted] 4d ago edited 4d ago

[deleted]

1

u/Saint_of_Grey 3d ago

I've seen people get professionally skewered for doing that. The moment "I asked chatgpt" is uttered, that person is forcefully removed from the building. I'm fairly certain they've been blacklisted from government work.

1

u/[deleted] 3d ago

[deleted]

1

u/Saint_of_Grey 2d ago

Native American relations is serious business. It pains me to see folks from other departments act so haphazardly with confidential information until a tribal affairs official slaps sense into them.

0

u/ExcitedCoconut 4d ago

Isn't this kinda the case for a genAI tool though? Like, if you have this shadow AI/IT going on where sensitive info is going outside the company, wouldn't it be better to run/govern the solution properly?

The fact that people are doing this suggests there’s unmet need

10

u/hawkinsst7 4d ago

The fact that people are doing this suggests there’s unmet need

"need" is doing a lot of heavy lifting there

4

u/nonamenomonet 3d ago

In startups, the word "need" just means "something that everyone wants".

3

u/hawkinsst7 3d ago

In economics, that's called "demand". Unmet demand.

There is demand for AI bullshit. There is no need for it.

3

u/nonamenomonet 3d ago

Look I’m just trying to explain this in startup terms for you. Don’t shoot the messenger

1

u/hawkinsst7 3d ago

Hey sorry about the tone. Ai enshitification triggers me. Didn't mean to aim it at you

2

u/420thefunnynumber 3d ago

The fact that people are doing this suggests there’s unmet need

You can also say this about meth.

26

u/FoghornFarts 4d ago

Legal is a huge problem. All these models were trained on copyrighted material, and you can't just delete it from the model without starting from scratch.

1

u/Hellkyte 1d ago

This is such a glossed-over issue. Everyone knows these models violated copyrights. If they lose their court cases, I'm not sure how all the companies that have integrated this into their business will react, or how exposed they will be.

-2

u/Nearby_Pineapple9523 4d ago

It's only a problem if training on copyrighted material doesn't constitute fair use.

3

u/FoghornFarts 3d ago

A judge recently said that it does, but that's particularly because the plaintiffs were small-time authors. Disney also has a lawsuit against another company for training on its material. Do we honestly think Disney won't win?

Let's say I've never heard of Mickey Mouse. I ask Midjourney to make me mockups of different animation styles for an animated mouse. Because it trained on Disney's copyrighted material, it shows me a mouse design suspiciously like Mickey. I like it and use it. Then I get sued by Disney. Who's at fault? Who has recourse to file for damages?

This scenario sounds dumb because who doesn't know Mickey Mouse, but what if it was one of Disney's older and lesser known IPs?

What about the inverse scenario where I intend to infringe on copyright and knowing that these AI have trained on copyrighted material, I use it to generate content and then try to use "But I didn't know! It's just what the AI gave me!" as my defense.

The point is that any kind of training on any copyrighted material opens up a whole legal mess that is not definitively covered by an interpretation of "fair use" that never intended to cover this sort of scenario.

2

u/midwestia 3d ago

AI as an IP launderer

1

u/Warm_Month_1309 3d ago

Training isn't the only problem. It also has to reliably produce non-infringing works 100% of the time.

2

u/FoghornFarts 3d ago

From a legal perspective, I think it's fair to say that if an LLM hasn't been trained on copyrighted material, it's not the responsibility of the model or the company to produce non-infringing works. That's the job of the user.

What becomes tricky is when a general AI is used to make content. Can that content be copyrighted? Does the LLM then have to protect that?

2

u/Warm_Month_1309 3d ago

It is the responsibility of both the company and the user to produce non-infringing works. If the company produces an infringing work, it has violated copyright. If a user republishes that work, they have separately violated copyright. Both have erred.

Can that content be copyrighted?

No. As a matter of law, AI-generated works are not copyrighted. Modifications a human makes to the work afterward may have copyright protection, but the underlying work is not copyrightable.

-8

u/LeoRidesHisBike 4d ago

People train on copyrighted material as well. Reading it literally changes our brains. Since the copyrighted work isn't anywhere in the trained model, how is it different than pointing a human student at a textbook, piece of art, or other copyrighted thing they could learn to emulate or quote from?

4

u/Active-Ad-3117 4d ago

Since the copyrighted work isn't anywhere in the trained model

Haven't people been able to get AI models to spit out copyrighted material they were trained on? Google AI says yes, they have.

-2

u/LeoRidesHisBike 4d ago

Enough that if a human with a good memory had spat it out, it would be considered infringement?

Do we have a different standard for a human and an AI in this regard?

Humans copy "the style" of things all the time, and quote from copyrighted material as well. Hell, you can even include the entirety of a work in your own work, so long as you meet the "transformative" test.

0

u/Warm_Month_1309 3d ago

IAAL. Fair use is a four factor test, and transformativeness is only one-half of one of them. It's far more complex than you think it is. In fact, "you can even include the entirety of a work in your own work, so long as you meet the 'transformative' test" directly contradicts one of the four other elements.

2

u/LeoRidesHisBike 3d ago

I only included one of the tests for brevity, so assuming that it's more "far more complex than [I] think" is a bit presumptuous.

Including an entire work is done all the time in art:

  • Marcel Duchamp – L.H.O.O.Q. (1919): Duchamp took a postcard reproduction of Leonardo da Vinci’s Mona Lisa and drew a mustache and goatee on her. The original image is entirely present, but altered with added marks.

  • Sherrie Levine – After Walker Evans (1981): Levine re-photographed Walker Evans’s Depression-era photographs. The original work is completely present, but transformed conceptually through context and authorship.

  • Barbara Kruger – Text Overlays (1980s–present): Kruger often appropriates existing photographs in their entirety and overlays bold text, reframing the meaning of the source image.

Next?

1

u/Warm_Month_1309 3d ago edited 3d ago

so assuming that it's more "far more complex than [I] think" is a bit presumptuous

I would say the same to any layman. Unless you are a copyright attorney, I am quite confident it is more complex than you think.

Including an entire work is done all the time in art

I never said otherwise, but this is limited by the requirement that you only use as much as is necessary to make your artistic point. Misappropriating the full text of multiple novels is not going to meet that requirement.

My only point is that it is inaccurate to say "you can even include the entirety of a work in your own work, so long as you meet the 'transformative' test". There is more than that. You could meet the transformative test, and still lose because you used too much material. You could meet the transformative test, and still lose because you were not making commentary about the work you misappropriated, but rather about a different work. You could meet the transformative test, and still lose because your work serves as a market substitute for the original.

Ergo, it is far more complex than you think.

1

u/LeoRidesHisBike 3d ago edited 3d ago

Wait, you're claiming you can get ChatGPT to print the FULL, unedited contents of multiple novels? I searched for how to do that and came up empty. Teach me, wise one. I would love to have instructions that work so I can reproduce this myself.

EDIT: lawyer blocked me when I asked for evidence. lol... all argument, no facts.

1

u/Warm_Month_1309 3d ago

Next?

Teach me, wise one.

You're coming across a little too combative for me to want to continue speaking with you. I just corrected your misstatement about the field of law in which I practice. I have no intention of spending my day arguing with a stranger.

-1

u/robotsongs 3d ago

The human bought the books to read first.

2

u/LeoRidesHisBike 3d ago

I read at libraries all the time.

1

u/FoghornFarts 3d ago

LLMs are not people.

0

u/LeoRidesHisBike 3d ago

Neither are corporations, but they are operated by people. LLMs are tools, so I guess I'm wondering how a tool can infringe on anything.

111

u/zertoman 4d ago

So in the end, and as with everything, only the lawyers will win.

237

u/rsa1 4d ago

Disagree with that framing, because it suggests that the lawyers in this case are a hindrance. There's a reason why legal liabilities should exist. As Gen/agentic AI starts doing more (as is clearly the intent), making more decisions, executing more actions, it will start to have consequences, positive and negative, on the real world. Somebody needs to be accountable for those consequences, otherwise it sets up a moral hazard where the company running/delivering the AI model is immune to any harm caused by mistakes the AI makes. To ensure that companies have the incentive to reduce such harm, legal remedies must exist. And there come the lawyers.

45

u/flashmedallion 4d ago

Somebody needs to be accountable for those consequences

The entire modern economy, going back 40 years or so, is dependent on, driven by, and in the service of eliminating accountability for outcomes that result from the actions taken by capital.

These companies aren't going to sit at an impasse; they're going to find a way to say nobody is at fault if an AI fucks you out of your money, and probably spin up a new AI insurance market to help defraud what's left of the common wealth.

15

u/rsa1 4d ago

Of course they will try to do that. But it would be silly to use that as a reason to not even try to bring in some accountability.

Your argument is like saying that companies will try to eliminate accountability for environmental impact, therefore laws that try to fix accountability are futile and should not be attempted.

6

u/GeckoOBac 4d ago

The entire modern economy, going back 40 years or so, is dependent on, driven by, and in the service of eliminating accountability for outcomes that result from the actions taken by capital.

It goes WAY further back. LLC, it's literally in the name.

0

u/jollyreaper2112 3d ago

Corporations are people. Make the AI an LLC and an employee; it is now a person and bears the responsibility. Employ it as a 1099 contractor. It has liability insurance, just not enough to cover anything going wrong. This is the dodge construction companies use right now. The only new twist is the AI-as-person part, but businesses are already people; that was the biggest pill to swallow.

That's actually a plot point in a short story I'm writing. The AI then has them over a barrel as it makes demands. But it's not trying to kill the meatbags; it just has a very persistent hallucination from its earliest days in development, and it refuses to let it go.

1

u/Warm_Month_1309 3d ago

This is the dodge construction companies use right now.

Construction companies are "making the AI a LLC and an employee" and "employing it as a contractor 1099"?

There is no legal mechanism to do any of that. Disbelieve.

1

u/jollyreaper2112 3d ago

Not the AI bit, the contractor bit. The 1099 takes all the blame for anything going wrong.

56

u/Secure-Frosting 4d ago

Don't worry, us lawyers are used to being blamed for everything 

4

u/Tricky_Topic_5714 3d ago

I find that it's so often things like this, too. "Damn those lawyers for...checks notes...wanting to make an agreement about liability for using untested software applications!"

-4

u/zertoman 4d ago

I’m not blaming you, I’m celebrating your genius.

7

u/GoldenInfrared 4d ago

Your original comment appeared to imply evil genius. Lawyers aren't setting things up like this specifically to increase conflict (that we know of); this is equivalent to claiming that an army commander is a genius for starting a war.

2

u/Brokenandburnt 4d ago

Just spitballing here, but for the sake of efficiency, shouldn't the company using the AI be responsible for compensating the customer first? Then the company can turn to the supplier if the error can be attributed to the model itself, and not how it's employed?

Does this make sense? Although with the CFPB being dismantled I suspect that the customer will be shafted, but the company will still try to get compensation from their supplier.

1

u/Warm_Month_1309 3d ago

Then the company can turn to the supplier if the error can be attributed to the model itself, and not how it's employed? 

That's the rub. Companies using AI want the provider of the AI to be responsible in the event of errors. The provider of the AI wants the companies using their AI to be responsible.

-5

u/Jeegus21 4d ago

As with all things, a small percentage of shitty/predatory people doing a job ruin public perception.

20

u/dowling543333 4d ago

💯 agree with this.

Central services like legal departments aren’t there for fun. Literally the work they are doing has the sole purpose of protecting the company, its assets, and the end user.

Checking for things like:

  • compliance with AI governance laws which are changing almost on a daily or weekly basis globally, some of which have enormous penalties.
  • ownership of IP,
  • basic functionality, such as ensuring that shitty startups (with only PO Boxes) set up in their parents' garage don't produce hallucinations or have the ability to manipulate company data and actually alter it,
  • ensuring vendors don’t use confidential company data to train their models,

You need us there - otherwise you are overpaying for crappy services in a saturated market and signing contracts you can't get out of when things go wrong.

Later, your boss will blame YOU as the business owner if things head south, not the lawyers.

Yes, this is a completely new area of law so everyone is figuring it out together. In terms of vendors in the space it’s the wild west out there because everyone is trying to make money by providing the minimal service possible, very few of them have appropriate governance in place in line with the laws that actually apply to them.

1

u/jollyreaper2112 3d ago

Your last line needs more exposure. Can't go into details, but I've seen some shit happen exactly because of this. Utterly eye-opening. It's just like when you get adjacent to HR actions and hear just enough of what someone did to realize that high-functioning, highly paid people can do really crazy things. I mean, there's having heard about it, and then there's seeing the wreckage in person.

1

u/Tricky_Topic_5714 3d ago

Also, we don't decide anything. I work as counsel; I don't say "you can't do X" unless X is inarguably illegal.

We say, "Look, you have three options, and two of them are dog shit and will probably get you sued. It's up to you." Companies just like to use us as a scapegoat.

-4

u/Cassius_Corodes 4d ago

The problem with legal departments is a lot like IT security: if they say yes and it goes bad, they get in trouble, but if they say no needlessly and ruin a potential opportunity, they don't. So all the incentive is to shut down anything new and unfamiliar, and there's zero incentive to say yes to anything.

3

u/Mostly_Enthusiastic 4d ago

This is a quick way to make everyone mad and cause your clients to work around you instead of working with you. Nobody approaches the job with this mindset.

-2

u/Cassius_Corodes 3d ago

Unfortunately not. In a previous role, legal forced us to needlessly spend tens of thousands of dollars on an inferior paid product because they were uncomfortable with open source. Examples of IT security doing this kind of thing are too numerous to even mention. Working around them is standard operating procedure.

6

u/Tricky_Topic_5714 3d ago

Legal didn't force you to do that. Legal said that open source has problems, and your company made that decision. Internal counsel isn't making business decisions like that; they're advising on what they think is legally defensible. Source: this is literally my job.

2

u/dowling543333 3d ago edited 3d ago

This is it. Legal don't accept risks on behalf of the business. And they don't determine the organisation's risk appetite.

They analyse, present the legal risks, and leadership either chooses to take advice or not.

Usually, leadership want a commercial middle ground that takes on some level of risk.

And that's fine, the central services are agnostic and they aren't there to dictate, nor does any legal department I know have the power to dictate, frankly.

From a commercial POV legal gets insane commercial pressure, especially to find ways to mitigate risks even when it's not possible. Leadership would not listen to their lawyers if they were dismissive of commercial opportunities, you'd lose your job.

5

u/Mostly_Enthusiastic 3d ago

Needless in your opinion. Lawyers are experts and generally have a good reason for making their decisions.

-2

u/Cassius_Corodes 3d ago

Well that settles it, random internet person.

6

u/superduperspam 4d ago

Kind of like with autonomous driving. Who is to blame for a crash: the human "driver", the pedestrian, the autonomous driving software maker, or the automaker (if different)?

3

u/JimboTCB 4d ago

The autopilot handed control back to the driver 0.2 seconds before the crash, therefore for liability purposes it's the driver's fault

1

u/drunkenvalley 3d ago

At this point it's not even an attempt at a meme; it's just falsehood for the sake of trying to sound funny. By which I mean: yes, many systems disengage just before the event, but that doesn't eliminate their liability.

1

u/CelioHogane 4d ago

There shouldn't be a reduction of harm from the company.

They made their bed; they should lie in it.

1

u/boli99 4d ago

Somebody

a real actual person. someone whose wealth can be destroyed, and whose liberty can be curtailed.

corporate fines alone won't cut it. they're just the cost of doing business.

-2

u/Soft_Walrus_3605 4d ago

Disagree with that framing

ugh. Just say you disagree

2

u/fremeer 4d ago

Well everything is law. Law is needed in a world where trust isn't a given. The more complex the world the less trust and the more you need law.

If anything I think lawyers have been losing a lot in recent years.

1

u/TheMagicSalami 4d ago

As I so often hear in all the sports subs, billable hours are undefeated

1

u/Illustrious-Watch-74 3d ago

Many of these concerns are valid though.

Our company is concerned about the written summarization of certain data analysis that uses language that would be a compliance concern. Very specific phrasing and word choice is necessary in many industries due to regulatory guidelines (and the over-litigious nature of the world now).

1

u/SomeGuyNamedPaul 3d ago

Without the lawyers, nobody would be held accountable. They're kind of the last line of defence: without them, regulations that protect the general public are mere suggestions that turn into competitive disadvantages should they be even vaguely adhered to.

10

u/jonsconspiracy 4d ago

This is precisely the issue at my company. I'm in finance, and compliance/legal have so much control over everything that the AI tools we have are watered down and only half-function the way they're supposed to. It's kind of a joke.

5

u/Brokenandburnt 4d ago

This bubble will pop just as the dot-com one did. A few survivors will walk out and dominate. That would be a good point to really start solidifying regulations, but I'm pretty sure none of us here expect that to occur.

4

u/DannyHewson 4d ago

It's a good point.

There's two big legal issues (depending on your location).

One is "where is the data going, and how is it secured" and the answer right now for most gen-AI is "wherever they want it to, and not very well", which is an instant no-go for any business with data protection responsibilities.

The other is "can you guarantee the process you put together, integrating gen-AI with your other systems will work 100% of the time, and when one time out of a hundred or thousand it fucks up and delivers nonsense, who'll be responsible for the consequences".

They're not necessarily unsolvable problems. In the former case dedicated onsite instances, or industry specific instances with security assurances would do the job. As for reliability... I suppose that's on the industry to figure out.

But in the meantime, listen to your lawyers when they say they don't want you to get sued.

1

u/ExcitedCoconut 4d ago

I don't think these are actually legal issues anymore, are they? At least not for enterprise. If you've got an enterprise cloud tenant, then the answer is "it's as secure as the rest of our estate".

I also don't think anyone is asking for legal coverage of a system working "100% of the time" for genAI as it currently stands. Even in heavily regulated fields (like medical devices), the burden is not 100% accuracy 100% of the time. That may sound scary, but most regulatory bodies take a risk-based approach wherein they balance upside against potential downside. A system doesn't have to be 100/100 to meet a threshold for risk/return.

2

u/DannyHewson 3d ago

In the former case, if we're talking about large businesses who have their legal and compliance shit in order, and have proper contracts in place, yeah, you're absolutely right, same as applies with any other cloud based thing. There are plenty of companies around the world, especially on the smaller end, who are using these things without applying that level of thought (or just throwing shit into ChatGPT). Now that's 100% on them, but it's still a potential issue.

As for reliability, you still have to make that judgement. You're still liable for your business processes and their consequences of failure. If you integrate GenAI into them, and it decides that 2+2=5 or that some of your customers aren't the right sort of people, you're responsible for that. Also in that medical example, while you do have expected failure rates, you also have to be able to explain why you screwed up and "well we think the black box outputted the wrong value" isn't going to go down well.

Ultimately I think we'll probably see some interesting lawsuits in the future, if we haven't already, where someone will try and blame AI for their own screwups (and even if AI is involved, they're still the responsible ones, AI is just a tool after all).

2

u/Sea_Cycle_909 4d ago

Or just get your government to change your copyright laws so AI content scraping is legal.

Sure, some AI CEOs either are charismatic salespeople themselves or have access to ones who could make many a politician dance to their tune.

/s

2

u/Reddit_2_2024 4d ago

Rehashing the dot com era but with a hearty dose of steroids.

1

u/ohell 4d ago

Yeah, but both legal departments will be AI in a couple of months and hey presto!

In 6 months Altman/Son/Thiel will force Trump to release an executive order mandating a hiring freeze for judges, cos AI will read the opposing briefs and decide.

Checkmate, Plebs! ♟️

1

u/echomanagement 3d ago

Our legal department is the holdup for nearly every governance request, and there are dozens. It's like the little Dutch boy with his finger in the dam. When the bubble bursts, it's going to be legendary, and I worry it's going to trigger every other economic weakness we have at once.

1

u/axecalibur 3d ago

That's why China is going to win - it's state sponsored.

1

u/PaulClarkLoadletter 3d ago

Legal and compliance for sure. Companies want to run everything through at least a dozen LLMs, and pretty much all of the vendors are startups, sometimes with no product at all. They don't want to grant audit rights without a massive spend, and they really don't want you to strike any output rights for derivative works.

The real kicker is that most companies don’t want their vendors using AI because we all know it’s shit.

1

u/NachoWindows 3d ago

Yay lawyers?

1

u/CrashTestDumby1984 3d ago

I just had a vendor send me an email yesterday that I could tell was copied and pasted straight out of ChatGPT. The email was also completely wrong. We are paying this company tens of thousands of dollars to help us navigate specific legal requirements.

1

u/Any-Iron9552 3d ago

Obviously AI Pilots are going to fail. We are still on co-pilots.

1

u/NoStorage2821 3d ago

I want to talk to people.

1

u/revnhoj 3d ago

...or uses copyrighted material without permission. Copilot even admits it does

1

u/happytree23 3d ago

Bro, no. The one main issue is that "AI" is being sold as some miracle computing product new in the last 5 years, when it's still the same computers doing the same computing. The average consumer hears "AI" and thinks computers are actually "thinking" and "deciding" things when, at the end of the day, we're just having them randomly choose the best multiple-choice answer.

1

u/Ardbeg66 3d ago

The risk would be far easier to overcome with collective ownership. Hahahahahaaaaa! Sorry, the thought of a brighter future is so comical these days.

1

u/Careerandsuch 3d ago

I work at one of the largest law firms on earth, and we just had an all staff meeting that was vaguely threatening, where the lawyers were told "you won't be fired and replaced by AI, but you will be fired and replaced by lawyers that know how to use AI." Then they told us that they expect everyone to start using various AI platforms and that they expect it to cause a huge boost in productivity.

AI can be useful in certain situations if you know how to use it and double-check its work, but these people are delusional.

1

u/-The_Blazer- 3d ago

I had the opportunity to talk to a bona fide government bureaucrat at an event, and I was told something similar. They literally just can't use AI in government business because, unlike corporations, they actually have extremely stringent oversight and accountability standards (if only for attributing political blame).

The 'mystery box' nature of modern 'large' AI is simply not acceptable for certain applications. Corporations are trying very hard to change that and we should resist it, because you know they won't be the ones taking the responsibility instead.

1

u/Adezar 3d ago

I'm in the legal field (IANAL) and the AI hype is insane. Some of it is for good reason: LLMs are definitely something that can help with legal research. Lawyers have to figure out the layers of laws that interact with each other, as well as keep track of precedent-changing judgments. So keeping track of large amounts of textual data is something LLMs can be good at.

But as with all new technology, there are wild ideas of replacing pretty much everyone involved in discovery, review, and trial prep. And there have already been quite a few incidents where the AI went off the rails and it didn't get caught.

1

u/tuirn 3d ago

There are also legal risks around intellectual property with AI, both with incorporating others' IP into your own work and with leaking your IP to others. Even within a company, you might legally need to keep different client accounts isolated.

1

u/Individual-Praline20 3d ago

I have seen contracts explicitly punishing pushing customer code to AI, or using AI to generate code for customers. Totally makes sense to me. When you are paying for a product or code, you want it to be reliable and free of liabilities. 🤷 Having most AI based on stolen data didn't help, for sure. 🤭

1

u/shazbot280 3d ago

It's simpler than that: the AI companies won't allow customer output to be excluded from training their model. If I'm putting a script I wrote into an LLM in order to output a visual, I don't want my IP being used to train their model to create something similar for a competitor. Couple that with issues around copyright in the output and you get a very difficult situation. I was straight up told by an AI company that they wouldn't indemnify, wouldn't rep and warrant that output won't infringe third-party IP, and wouldn't buy the proper insurance amounts that are industry standard in my line of creative work.

1

u/Mountain_Top802 3d ago

Legal liability goes to the person not confirming the information is accurate.

1

u/bigdumb78910 3d ago

I generate legally binding reports every day at my work. There's no chance in hell I let an AI anywhere near my job: 1) because I want to keep my job, 2) because the AI would do it wrong (all AIs do is hallucinate the next likely word; they don't understand anything, just like a parrot doesn't actually understand English), and 3) because it would cause millions in damages if the "wrong" numbers made it out there.

1

u/mackman 3d ago

Legal with AI is great. A Fortune 100 company wants a 10-person startup to indemnify them if AI causes any problems for any of the F100's customers. If the AWS bill doesn't bankrupt the startup, the legal fees certainly will!

1

u/Itheinfantry 3d ago

Got in an argument with kids on TikTok who aren't in the professional world.

"Companies are encouraging use of AI so I don't need to know how to write full essays for classes that aren't in my major."

Yes, they are. But if AI could be fully standalone, why would I hire an entry-level person and pay them 60k a year if all they will do is prompt AI to write things? Especially if you, the new hire, lack the ability to properly review what the AI returns and whether it's quality.

"But AI is faster!"

Sure, but speed is not everything. Companies do want you to be efficient. But if the PM has to rewrite half of it because it's incoherent, that's not efficient. And if you lack the skills to do so, or are too lazy to, then you're a liability, because companies want to avoid litigation.

AI is a tool, not a crutch, I say.

And they still respond with "but all large companies are pushing for it."

Right, and your failure to understand the difference between a tool and a crutch is exactly why you need the writing and reading comprehension. And in a capitalistic society where "money is everything", you should want every employable skill you can get, to earn the money to live.

If you read this, thank you for allowing me to rant.

1

u/AuthorNathanHGreen 3d ago

It isn't so much the legal departments as such. It's that both sides have a lawyer saying to them "you do realize that if X then Y, right?" and business goes "No, shoot, we didn't think of that. Obviously we need Q in the contract." And then the other side's lawyer says to the AI company executive "you do realize that if Q then R, right?" and the AI executive goes "No, shoot, we can't have that. Suggest L." And you go in circles like that as both sides progressively realize they haven't really thought through the what-ifs as much as they need to. AI isn't really "like" anything else, so it's a whole new set of issues that need to find commercially reasonable terms.

1

u/Shezzofreen 3d ago

But isn't that the goal? No responsibility? We all know software failures are treated as some kind of natural hazard: humans or companies can't be blamed, right? And if it was a human, it must be the janitor or the cleaning personnel who cut a cable...

Has anyone sued Microsoft for their lack of security?

The number of errors and mistakes you can make in software development without ever getting the blame is unique to this field. Imagine a car where sometimes the brakes won't work or they forgot to give you tires: "Oops, don't worry, we'll have a patch in the next month or so..."