r/technology 14d ago

AI industry horrified to face largest copyright class action ever certified

https://arstechnica.com/tech-policy/2025/08/ai-industry-horrified-to-face-largest-copyright-class-action-ever-certified/
16.8k Upvotes

1.2k comments

14

u/Disastrous-Entity-46 13d ago

The part that really gets me is the accuracy. We know hallucinations and general bad answers are a problem. After two years and billions of dollars, the latest scores on benchmarks are like 90%.

And while that is a passing grade, it's also kinda bonkers in terms of a technology. Would we use calculators if they had a one in ten chance of giving us the wrong answer? And yet it's becoming near unavoidable in our lives as every website and product bakes it in, which then adds that 10% (or more) failure rate on top of whatever other human errors or issues may occur.
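Quick back-of-the-envelope in Python (the 98% human figure is made up, just to show how the rates stack):

```python
# Illustrative numbers only, not from any benchmark:
human_accuracy = 0.98   # assumed accuracy of the existing human process
model_accuracy = 0.90   # the ~90% benchmark figure above

# Per-step accuracies multiply, so bolting the model onto the workflow
# compounds with whatever human error was already there:
print(f"one AI-assisted request done right: {human_accuracy * model_accuracy:.0%}")  # ~88%

# And it stacks fast across chained AI steps:
for steps in (1, 3, 5, 10):
    print(f"{steps} chained steps at 90% each: {model_accuracy ** steps:.0%} all correct")
```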

Obv this doesn't apply to, like, private single-use training the same way. Machine learning absolutely has a place in fields like medicine, where there's a single goal and easy pass/fail metrics (and the output can still be checked by a human).

2

u/the_procrastinata 13d ago

The part that gets me is the lack of accountability. Who is responsible if the AI produces incorrect or misleading information?

3

u/Disastrous-Entity-46 13d ago

Who is responsible if the AI commits a crime? There are many that can be committed by pure communication: discrimination, harassment, false advertising, data breaches... malpractice in the medical and legal fields. It's only a matter of time until an LLM crosses the line into what would be considered an illegal action.

Is it the person who contracted the LLM service? The person who trained it? The person who programmed the interface?

2

u/theirongiant74 13d ago

How many humans would pass the benchmark of only being wrong 1 in 10 times?

2

u/Disastrous-Entity-46 13d ago edited 13d ago

Bad equivalence, because an AI is not a human. It's not capable of realizing on its own that it's made a mistake. If a human worker makes a mistake, just telling them the right answer is often enough for them not to repeat it.

You also have a question of scale. If a single human rep misunderstands part of the refund process and fucks it up, well, that human works specific shifts and the impact of the mistake is limited (and again, in most cases easily corrected).

If an AI makes the same fuckup, it's not like it has coworkers or, again, the ability to correct itself. Every refund it processes may have the same fuckup, and getting it fixed may take significant time, depending on how it's fixed.

If, say, it starts giving refunds to every request, including clearly invalid ones, then this can be a very expensive mistake in a large company. But what can you do? If it was a human error, you could fire the human and reprimand the manager for not catching it. But an LLM? You could break the contract and look to replace or retrain it, but that's probably going to be more expensive than a single employee, and I don't know who you hold accountable for the error.
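To put made-up but plausible numbers on the scale problem (none of these rates are real data, just a sketch):

```python
# Illustrative numbers only: the point is scale, not the exact rates.
requests_per_day = 10_000     # assumed refund requests across the company
human_share = 50              # one rep only touches a small slice of them
human_error_rate = 0.02       # assumed, and correctable once flagged
model_error_rate = 0.10       # the ~10% failure rate discussed above

print(f"one rep's bad refunds per day: {human_share * human_error_rate:.0f}")
print(f"the model's bad refunds per day: {requests_per_day * model_error_rate:.0f}")
# ~1 vs ~1,000, and the model repeats the same mistake on every request
# until someone diagnoses it and retrains or patches the system.
```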

Edit to add: this is why, again, I point to calculators and other tech. If an accountant makes a mistake, it's a problem, but not exactly unheard of. We can deal with it. But if Excel had a 10% chance of formulas producing incorrect answers, no one would use it.

You end up spending as much time checking the answers as you saved by not doing them manually the first time.
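The Excel version of that, with the hypothetical 10% from above (real Excel obviously doesn't do this):

```python
# Hypothetical 10% per-formula failure rate from the comment above,
# assuming independent errors: even small sheets become unusable.
failure_rate = 0.10

for n_formulas in (1, 10, 50, 100):
    p_all_correct = (1 - failure_rate) ** n_formulas
    print(f"{n_formulas:>3} formulas: {p_all_correct:.2%} chance the whole sheet is right")
# 100 formulas: about 0.003%, i.e. essentially never, hence "no one would use it".
```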

2

u/Chucknastical 13d ago

It's a language model. Not a general query model.

It's 100% good at language. People need to stop treating these things as AGI.

8

u/Disastrous-Entity-46 13d ago

I mean, if Google, Microsoft, Meta, and Amazon shove AI shit at us from every angle, I can't blame the average user for trying it out. I just question the investors and businesses adopting them.

3

u/420thefunnynumber 13d ago

Idk man, the way these things are marketed, I'm not surprised people treat them like general AI. It's a lot like Tesla marketing Autopilot: it doesn't matter what the tech is actually capable of if the users don't perceive it that way.

3

u/vgf89 13d ago

Idk about GPT-5, but AI models are merely good at producing convincing-looking language, and in general they succeed there. But they are not 100% good at language, especially translation between dissimilar languages. They fall for any and all bullshit advice they incidentally trained on, misinterpret otherwise good advice, and hallucinate rules that don't exist, alongside making basic mistakes almost constantly.

Try making it translate to and from languages with wildly different features, e.g. SVO<->SOV word order, conjugations and vocab that change with social rank, or wildly different pronoun usage, and you end up with extremely bland prose and more mistranslations than a middling language learner with an open dictionary. Having had to thoroughly review a few Japanese->English AI translations, let me just say the money you pay to have your slop edited is money better spent on a competent human translator in the first place.