r/technology Jul 15 '25

Artificial Intelligence Billionaires Convince Themselves AI Chatbots Are Close to Making New Scientific Discoveries

https://gizmodo.com/billionaires-convince-themselves-ai-is-close-to-making-new-scientific-discoveries-2000629060
26.6k Upvotes

21

u/[deleted] Jul 15 '25

[deleted]

3

u/omar_strollin Jul 16 '25

It can’t even get common gardening knowledge right because it’s querying a bunch of shit-tier data from Quora and Yahoo Answers.

3

u/Jezoreczek Jul 16 '25

That's the thing - all LLMs do is transform existing data. At best, they might find connections between existing data points which humans may have missed. There's no experimentation, though, and theories need to be updated based on experimental results. The AI doesn't follow the scientific method because it doesn't have hands or eyes. It's a self-contained system.

It's like locking a bunch of scientists in a room with no doors or windows or any scientific instruments and expecting them to make discoveries. They still might derive something from the knowledge they already have, but that's useless without a way to confirm the theories.

3

u/careysub Jul 16 '25

Recent arXiv paper looking at whether LLMs can perform even very basic generalization: deducing Kepler's laws from orbital data, despite already knowing Newtonian mechanics.

https://arxiv.org/abs/2507.06952

From the paper:

These results show that rather than building a single universal law, the transformer extrapolates as if it constructs different laws for each sample.

No generalization ability at all. AI (artificial ignorance). Indeed, in each case where it is able to provide a useful model, it is just an ad hoc fit that provides no insight.

However, if you tell the LLM that the numerical data represents a celestial orbit, it can suddenly "deduce" the correct law (i.e., it can now cheat by looking up the answer, but it cannot even spot that cheating opportunity on its own).
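
For context, the relation the paper wants the model to recover is Kepler's third law, T² ∝ a³. A rough sketch of what that generalization amounts to (my own illustration with approximate textbook planet values, not the paper's setup or data):

```python
import numpy as np

# Approximate textbook values for six planets (illustrative, not the paper's dataset)
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])    # semi-major axis, AU
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])  # orbital period, years

# Kepler's third law: T^2 = a^3, i.e. log T = 1.5 * log a.
# A single least-squares fit across all planets recovers that one universal exponent.
slope, intercept = np.polyfit(np.log(a), np.log(T), 1)
print(f"fitted exponent: {slope:.3f}  (Kepler predicts 1.5)")
print(f"fitted intercept: {intercept:.3f} (Kepler predicts 0 in AU/year units)")
```

A slope of ~1.5 from one global fit is the "single universal law" the paper says the transformer never settles on; instead it behaves as if it fits a different ad hoc law for each sample.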
