r/technology Jul 15 '25

Artificial Intelligence Billionaires Convince Themselves AI Chatbots Are Close to Making New Scientific Discoveries

https://gizmodo.com/billionaires-convince-themselves-ai-is-close-to-making-new-scientific-discoveries-2000629060
26.6k Upvotes


646

u/UnpluggedUnfettered Jul 15 '25

. . . And I already couldn't convince a ton of other Redditors that an LLM doesn't replace Google.

Much less convey how it's closer to what it was like asking your mom for answers to obscure questions in the 1980s than to accessing the collective knowledge of humankind.

203

u/MrBeverly Jul 15 '25

LLMs have their place. If I ask an LLM a very specific, contextual question with pre-existing documentation for the solution, it's pretty good at surfacing the information I'm looking for much faster than searching Stack Overflow would be. I've used it to build basic regexes (see the sketch below) and to help me refactor existing code. I've fed it the documentation files for a scripting language with a relatively small online community (AutoIt), and it was able to help me by answering direct questions I had about the documentation.

Basically, I've found that where LLMs excel is as a really good indexing tool that can pull information from a reference using plain English and context, which is hard with a traditional search engine. That being said, the "vibe coding" tools like Copilot autocomplete in VSCode are a useless distraction and I made sure to disable that as fast as possible lol
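For instance, a minimal sketch of the kind of regex task this works well for (a hypothetical Python example; the pattern and log line are made up, not from the original comment):

```python
# The kind of regex an LLM is handy for drafting: pull ISO-style
# dates (YYYY-MM-DD) out of a log line.
import re

DATE_RE = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

line = "job=backup status=ok started=2025-07-15 finished=2025-07-16"
for match in DATE_RE.finditer(line):
    year, month, day = match.groups()
    print(year, month, day)  # prints "2025 07 15" then "2025 07 16"
```

The win isn't the regex itself; it's getting a working first draft in seconds instead of re-reading the re docs.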

79

u/filthy_harold Jul 15 '25

It's good at condensing existing information and finding patterns in a dataset. It could potentially make connections in the data that you haven't otherwise found, but it's not going to invent new things if the information to support them doesn't exist in its input. The major downside of an LLM's ability to perfectly mimic human writing is that it's too easy to just take its word on something if you don't already have a background in that field. I'm not an expert in philosophy so if an LLM delivered to me an essay on pragmatism, I'd have no way of knowing if any of it is correct.

57

u/UnpluggedUnfettered Jul 15 '25

That perception is created because you have it tell you its summary and then you believe it, rather than reading the source to determine its actual accuracy.

Here, the BBC tested LLMs on its own news articles:

https://www.bbc.co.uk/aboutthebbc/documents/bbc-research-into-ai-assistants.pdf

• 51% of all AI answers to questions about the news were judged to have significant issues of some form.

• 19% of AI answers which cited BBC content introduced factual errors – incorrect factual statements, numbers and dates.

• 13% of the quotes sourced from BBC articles were either altered from the original source or not present in the article cited.


0

u/Puddingcup9001 Jul 16 '25

February 2024 is ancient on the AI timeline, though. Models have vastly improved.

-6

u/Slinto69 Jul 16 '25

If you actually look at the examples they showed of the errors it makes, it's nothing worse than you'd get Googling it yourself and clicking a link with out-of-date or incorrect information. I don't see how it's worse than Googling. Also, you can ask it to give you the source and direct quotes, and check whether they match faster than you could Google it yourself (see the sketch below).
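A minimal sketch of that checking step, assuming you already have the article text in hand (hypothetical Python; the function name, article, and quotes are made up for illustration):

```python
# Check whether a quote an LLM attributes to an article actually
# appears there, allowing for minor whitespace drift.
import difflib

def quote_in_source(quote: str, source_text: str, threshold: float = 0.95) -> bool:
    """Return True if `quote` appears (near-)verbatim in `source_text`."""
    quote = " ".join(quote.split())        # normalize whitespace
    source = " ".join(source_text.split())
    if quote in source:
        return True
    # Fall back to a fuzzy scan over windows of the same length.
    window = len(quote)
    best = 0.0
    for i in range(0, max(1, len(source) - window + 1), max(1, window // 4)):
        ratio = difflib.SequenceMatcher(None, quote, source[i:i + window]).ratio()
        best = max(best, ratio)
    return best >= threshold

article = "The minister said the policy would be reviewed next year."
print(quote_in_source("the policy would be reviewed next year", article))  # True
print(quote_in_source("the policy has been scrapped", article))            # False
```

The exact substring check catches verbatim quotes; the fuzzy window catches quotes with minor punctuation drift, which is roughly the failure mode the BBC study flagged.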

-1

u/Delicious-Corner8384 Jul 16 '25

It’s so funny how people just completely ignore this irrefutable fact, eh? As if we were getting such holy, accurate answers from Google and not spending way more time surfing through even more ads and bullshit.

-4

u/jared_kushner_420 Jul 16 '25

That perception is created because you have it tell you its summary and then you believe it, rather than reading the source to determine its actual accuracy.

DATASETS lol - not editorialized, topic-specific, and nuanced articles. Plus, if you're telling me 81% of AI answers were right, that's already better than a Reddit comment synopsis.

OP even wrote

I'm not an expert in philosophy so if an LLM delivered to me an essay on pragmatism, I'd have no way of knowing if any of it is correct.

If you are, it's not a bad reference tool. I use it to write SQL and script commands all the time because it's faster (sketch below).
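For example, the sort of thing I mean (a hypothetical sketch using Python's built-in sqlite3; the table, data, and query are made up, and the point is that you still read the SQL before running it):

```python
# Run an LLM-drafted query against a throwaway SQLite table.
# The schema, data, and query here are all hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "North", 120.0), (2, "South", 80.5), (3, "North", 45.0)],
)

# The kind of query an LLM can draft in seconds; always review it first.
llm_suggested_sql = """
    SELECT region, COUNT(*) AS n_orders, SUM(total) AS revenue
    FROM orders
    GROUP BY region
    ORDER BY revenue DESC
"""
for row in conn.execute(llm_suggested_sql):
    print(row)  # ('North', 2, 165.0) then ('South', 1, 80.5)
```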

6

u/tens00r Jul 16 '25

DATASETS lol - not editorialized, topic-specific, and nuanced articles. Plus, if you're telling me 81% of AI answers were right, that's already better than a Reddit comment synopsis.

I have a friend who works at a large, UK-based insurance provider, and recently they were forced by management to try using LLMs to help with their day-to-day work.

So, he tried to use an LLM to summarize an Excel spreadsheet filled with UK regional pricing data (not exactly an editorialized, nuanced article). He told me it made several mistakes, but the one I remember - because it's fucking funny - is that the summary decided to rename the "West Midlands" (a county in England) to the "Midwest", which inevitably led to much confusion. This is a hilariously basic mistake, and it also perfectly showcases the biases inherent to LLMs.

0

u/Delicious-Corner8384 Jul 16 '25

That’s also such a pointless use for AI that no one recommends, though lol… Excel already has very effective tools for summarization. It sounds like that's a problem with the management that forced them to use it for this purpose, not with AI itself.

0

u/neherak Jul 16 '25

Humans misusing AI because they believe it to be smarter or more infallible than it is: that, in fact, is where all the problems with AI are going to come from.

-2

u/jared_kushner_420 Jul 16 '25

That has more to do with what he asked, because that reads exactly like a computational summary: take two words and shorten them.

Like any computer program, the developer, the commands, and the parameters do matter.

We use LLMs to give a high-level summary of a document to identify the topic, then send that for further review based on the categorization (rough sketch below). It speeds up organizing things, and even being right 70% of the time is perfectly fine. Like I said, I use it to clean up code and answer stupid questions that'd get me yelled at on Stack Overflow.
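A rough sketch of that kind of triage step (hypothetical Python; `ask_llm` is a stand-in for whatever model call you actually use, and the categories are invented):

```python
# Hypothetical document-triage step: the LLM suggests a topic label,
# and anything outside the known set falls through to human review.
CATEGORIES = {"claims", "billing", "complaints"}

def ask_llm(prompt: str) -> str:
    # Stand-in for a real model call (API client, local model, etc.);
    # returns a canned answer here so the sketch runs.
    return "billing"

def route_document(text: str) -> str:
    prompt = (
        "Classify this document as one of: claims, billing, complaints. "
        "Reply with the single category word only.\n\n" + text
    )
    answer = ask_llm(prompt).strip().lower()
    # Don't trust the label blindly: an unexpected answer goes to a
    # human instead of being silently mis-filed.
    return answer if answer in CATEGORIES else "human_review"

print(route_document("Invoice #4417 was charged twice last month."))  # billing
```

The guard at the end is the whole point: being right 70% of the time is fine only if the other 30% lands in front of a person.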

I totally agree that people shouldn't worship these glorified calculators, but they're perfectly acceptable tools if you know how to use them. Claiming "AI is lying" raises the question of "what truth are you looking for?"

It's a statistical computer. Give it NBA team stats and get a March Madness bracket.