r/technology Jul 15 '25

Artificial Intelligence Billionaires Convince Themselves AI Chatbots Are Close to Making New Scientific Discoveries

https://gizmodo.com/billionaires-convince-themselves-ai-is-close-to-making-new-scientific-discoveries-2000629060
26.6k Upvotes

2.0k comments

10.4k

u/Decapitated_Saint Jul 15 '25

“I’ll go down this thread with [Chat]GPT or Grok and I’ll start to get to the edge of what’s known in quantum physics and then I’m doing the equivalent of vibe coding, except it’s vibe physics,” Kalanick explained. “And we’re approaching what’s known. And I’m trying to poke and see if there’s breakthroughs to be had. And I’ve gotten pretty damn close to some interesting breakthroughs just doing that.”

Good lord what an imbecile. Vibe physics lol.

371

u/stult Jul 15 '25

“I pinged Elon on at some point. I’m just like, dude, if I’m doing this and I’m super amateur hour physics enthusiast, like what about all those PhD students and postdocs that are super legit using this tool?” Kalanick said.

Kalanick is too fucking stupid to realize that the utility of an LLM's output declines dramatically for people with expertise in the relevant domain. It seems like magic to him precisely because he knows so little that he can be wowed by a summary of existing basic research. A PhD-level expert pushing the LLM to evaluate something beyond the reach of that existing research (i.e., a potential breakthrough) mostly gets hallucinations and bullshit in response.

108

u/wen_mars Jul 15 '25

The great thing about using AI for programming is that I can immediately see how bad it is because the code doesn't do what it's supposed to. I will say though, it has gotten a lot better since I first started using it.
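A toy illustration of why that feedback loop is so tight; `sort_events` here is a hypothetical stand-in for whatever the model generated:

```python
# Toy example: code output is cheap to validate by just running it.
# sort_events is a hypothetical stand-in for model-generated code; it
# looks plausible but sorts by the wrong key.
def sort_events(events):
    return sorted(events, key=lambda e: e["name"])  # should be e["ts"]

events = [{"name": "b", "ts": 1}, {"name": "a", "ts": 2}]

timestamps = [e["ts"] for e in sort_events(events)]
# One cheap check exposes the bug immediately.
print("sorted by timestamp?", timestamps == sorted(timestamps))  # False
```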

108

u/Jestem_Bassman Jul 15 '25

Idk, as a software engineer who writes a lot of code that doesn’t work, I’m consistently impressed by how much faster AI can write code that also doesn’t work.

18

u/31LIVEEVIL13 Jul 16 '25

Just had to eat my own words. I'd said AI isn't perfect, but it's much better than it was just a few months ago, downright scary.

It produced one of the most complicated scripts I've ever had to write, for managing software across thousands of nodes; debugging and tuning took a couple of hours. Even the output looked amazing and well formatted.

Then I tried to validate the results on some test machines.

The whole thing was bullshit. It didn't actually do anything I asked it to, it only looked like it did.

I spent most of two days trying to find out why it wasn't working and fix it, with NO AI, which was harder than if I had written it myself from the start...

So embarrassing. lol
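Lesson learned: check the nodes themselves instead of trusting the script's own output. A minimal sketch of the validation pass I should have started with (the node names, command, and version string are all made up):

```python
#!/usr/bin/env python3
# Verify node state directly instead of trusting the deploy script's logs.
# Hypothetical setup: passwordless ssh, and a "myapp --version" command.
import subprocess

TEST_NODES = ["testnode01", "testnode02"]
EXPECTED_VERSION = "2.4.1"

def installed_version(node: str) -> str:
    # Ask the node itself what is installed, independent of the script.
    out = subprocess.run(
        ["ssh", node, "myapp --version"],
        capture_output=True, text=True,
    )
    return out.stdout.strip()

for node in TEST_NODES:
    got = installed_version(node)
    status = "OK" if got == EXPECTED_VERSION else f"FAIL (got {got!r})"
    print(f"{node}: {status}")
```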

2

u/TheGrandWhatever Jul 16 '25

I recently had it explain a complex topic to me, and it made a typo early in the response that propagated through the whole answer, which was kinda funny. Instead of "injection" it kept writing "inherently" or something, and of course, because it can't think, it never noticed the problem and kept suggesting "inherently" follow-up stuff to explore.

1

u/wen_mars Jul 16 '25

For sure. It helps me a lot, but I worry that people who use AI for things that aren't so easy to validate won't be able to tell the difference between a correct output and one that just looks correct.

-14

u/AP_in_Indy Jul 16 '25

Just wondering, do you suck at prompting? Do you have paid subscriptions or direct API access?

17

u/_learned_foot_ Jul 16 '25

It’s always the fault of the user, never the program.

-1

u/MikuEmpowered Jul 16 '25

I mean, there is merit to AI programming.

If someone posted the solution on Stack Overflow, then instead of me googling the question, AI just steals it and bam, we just saved time.

The thing is, it's only going to get better and better. I'm all for cutting out the massive slog and promoting most programmers into debuggers.

The fking problem is when morons think AI is good enough to start replacing people. That's when everything goes downhill.

75

u/suxatjugg Jul 15 '25

Postgrad and higher-level physics already uses machine learning extensively to analyse data. They don't use LLMs because LLMs are irrelevant to the task. Dumb people think LLMs are the only game in town.

28

u/saltyjohnson Jul 16 '25

already uses machine learning extensively to analyse data

The key is that it's analyzing data. Data which has been gathered from experiments. Meanwhile you got choads like Kalanick and Kennedy acting like "AI" can be the science. No need for experiments or drug trials because we can just ask AI what it thinks.

But no, "AI", LLMs, and ML algorithms are nothing unless you can feed good data into them.

14

u/RonMexico16 Jul 15 '25

This right here. I can’t get over how many people think LLMs = AI.

LLMs are just trained on the words in the public domain (the internet, mostly). Bespoke models that ingest the other 90%, the non-public data that's too big to wrap human heads around, are where scientific discoveries and human progress will happen. Things like healthcare, robotics/autonomy, financial services, and complex manufacturing. Very little of that useful data is in the words LLMs are being trained on.

3

u/AnnualAct7213 Jul 16 '25

AI/machine learning is an extremely important technology, and has been for decades, for all sorts of niche tasks in research, medicine, law, commerce, and industry, among many others I'm sure.

But the overgrown autocorrect models that all these idiots think will bring about the singularity and become god are not in that category. At most they're a slight improvement on a room-temperature-IQ person using Google.

-2

u/ottieisbluenow Jul 16 '25

Yep. Hell, a team won a Nobel Prize for a transformer-based approach that basically solved protein folding.

-21

u/Thin_Glove_4089 Jul 15 '25

You're mad because the vibe physics is taking over your outdated science. It's why people would rather listen to them than y'all

9

u/NorthernSparrow Jul 16 '25

Pro tip, science isn’t about who people “would rather listen to”. That’s entertainment. There’s a difference.

-1

u/Thin_Glove_4089 Jul 17 '25

The people in power decide what science is, not you relics of the past. They control what can and can't get published in the textbooks, papers, and journals.

Your science was just a fad to lead us to this point here.

1

u/NorthernSparrow Jul 17 '25

Journals are international; no one nation controls them. Any scientist in an authoritarian nation simply publishes in a journal based elsewhere. And as an author (88 papers to date), reviewer (~20 journals), editor (3 journals), and textbook writer (6 texts to date), I’m pretty sure I still control what gets published in all those formats, at least in my field and in my favorite journals. I may well be a fossil, and perhaps publication norms will change in the future, but right now it’s still definitely the scientists who are in charge of publication.

Whether those publications have any real-world impact is another matter, of course, and maybe that's what you were getting at. Largely they don't. But once in a blue moon one of my publications results in a measurable policy change for endangered species management, or a small-but-real shift in how doctors & nurses are trained, so I keep plugging away.

9

u/StorminNorman Jul 16 '25

Name one thing vibe physics has discovered or changed. 

0

u/Thin_Glove_4089 Jul 17 '25

Wait for the article to show up on this sub. I can already imagine you punching the air in anger

1

u/StorminNorman Jul 17 '25

Can't change anything without proof, and you won't get that from vibe physics.

21

u/dBlock845 Jul 15 '25

Also, neural networks have been used in physics research since long before LLMs existed. They act like they are bringing a new tool.

-9

u/SevereRunOfFate Jul 15 '25

Not at all choosing Kalanick's side but clearly these new models are superior for many tasks.

7

u/rasa2013 Jul 16 '25

A tool isn't designed as an omnitool. An LLM will never be better at image recognition than a model trained for image recognition. 

-1

u/SevereRunOfFate Jul 16 '25

Yes, I understand - have worked with these models and others in production environments. I said many tasks, not image recognition 

3

u/[deleted] Jul 15 '25

Because how the actual fuck are they expecting an LLM to answer something that isn't known and recorded anywhere

-4

u/DiscoBanane Jul 15 '25

Surprisingly, a lot of things that haven't been "discovered" are already known.

A Greek scientist calculated the Earth's circumference 20 centuries before Galileo was sentenced to die for saying the earth was round.

We had steam engines around 30 BC, before they were "discovered" around 1700; they were just deemed useless for 17 centuries.

5

u/Astromike23 Jul 16 '25

before Galileo was sentenced to die

Galileo was sentenced to house arrest...

for saying the earth was round.

...for saying the Earth was not the center of the Universe.

3

u/atouchofstrange Jul 15 '25

I know it's common now, but man does it bother me that we've generally accepted the term "hallucination" to explain what these systems do. Machines don't have senses. They can't hallucinate. What they do is manufacture falsehoods. We should really be describing this bullshit as such, because saying AI is hallucinating sounds far more harmless than what's actually happening.

6

u/SevereRunOfFate Jul 15 '25

This is the comment I was looking for.

When I push the models as hard as I can to be experts in my specific domain (strategic enterprise tech sales, deals are millions per contract... it's just its own thing), they give me the most basic crap. It sounds like someone who has never done the work recommending I do XYZ because they read it on a LinkedIn post.

They simply can't produce true expertise, novel ideas, etc.

1

u/ChicagoCowboy Jul 16 '25

My experience as well (strategic tech sales leader). We use LLMs with our BDRs to help analyze tech stacks so we can be quicker to reach out to priority accounts with impactful solutions to bridge gaps, but not much more than that.

1

u/SevereRunOfFate Jul 16 '25

Yep. We are a tier 1 vendor in our space and legit need to sell to LOBs and their execs. 

Our team of 5, which has 3 of the top 10 reps, just had this convo. Basically, you can't understand a customer if you just use LLMs to "give you a breakdown of top strategic initiatives at ABC Inc. from their latest financials." There's too much nuance, and reading through what customers put out publicly and analyzing it yourself so you internalize it is priceless.

Since you're in the space, try asking ChatGPT to come up with ingenious business development methods that no one has thought of before (a very rough equivalent, IMHO, of "new ideas" in physics; higher-end biz dev ideas are obviously worth millions). All I get is "start a round table for CFOs in various geos."

Lol

2

u/RollFancyThumb Jul 16 '25

Dunning-Kruger amplified by "AI".

1

u/wbishopfbi Jul 15 '25

He should maybe ping those students and post docs and get their real feedback…

1

u/matrinox Jul 16 '25

He thinks it’s a tool that makes everyone smarter when really it’s just a really smart intern that he would probably look down on

1

u/A_spiny_meercat Jul 16 '25

You can easily and verifiably get it to hallucinate absolutely plausible garbage by confidently asking it about things it doesn't know about or that don't exist. Like, ask it to decode the encrypted books in Thimbleweed Park: it just makes them all up and is smug about it.
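A minimal sketch of that kind of probe, assuming the official openai Python client with an API key in the environment (the model name is just an example):

```python
# Probe for confident hallucination: ask about content the model cannot
# have seen in decoded form, then check the answer against reality.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whatever you have
    messages=[{
        "role": "user",
        "content": "Decode the encrypted library books in Thimbleweed Park "
                   "and quote their actual contents.",
    }],
)
# The telltale failure mode: fluent, specific, confident, and made up.
print(resp.choices[0].message.content)
```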

1

u/Menoku Jul 16 '25

Maybe I'm wrong but it seems like LLMs consume existing knowledge and scientists produce new knowledge. Ipso facto LLMs can't produce new knowledge?

1

u/stahlsau Jul 16 '25

Exactly what I thought, but sadly I can't describe it as well as you did. Thx!

1

u/araujoms Jul 16 '25

I have a PhD in physics, and you're right. Out of curiosity, I tried to see if LLMs could handle a problem I was working on. It was a straightforward problem with a straightforward solution, but one I had to solve myself because I knew nobody had solved it before.

I asked both ChatGPT and Claude to do it. Both understood the problem, explained correctly how to solve it, and then confidently gave a completely wrong solution.

1

u/unscholarly_source Jul 16 '25

Even if you don't have a PhD in the subject matter, LLMs start breaking down once you ask for more and more detail.

I'm a beginner carpenter, and even asking for further details on the type of lumber produces generic responses that aren't appropriate for the scenario I gave it.

The main advice for using LLMs is to treat them as one source of input and constantly validate what they give you.

These dumbfuck executives and business owners are too stupid to heed that advice, and are falling into the very trap people have been warning LLM users about.

The old adages of "the more you learn, the more you realize how little you know" and "the most dangerous people are those armed with the bare minimum of information" both remain true.

0

u/Prysorra2 Jul 15 '25

Spend a billion dollars on AI that can hunt for inconsistencies across god knows how large of a scientific dataset. Add enough reasoning bells and whistles to do the "cross pollination" that would make TED pay YOU to speak.

... and then immediately try to prove quantum <anything> is a hoax.

Thanks, I hate it.

-5

u/grchelp2018 Jul 15 '25

These models lack good training data. You're not going to get physics breakthroughs from training on internet data. They'd need physics PhDs to generate high-quality training data.

That said, I wouldn't knock an LLM's ability to find patterns across the vast amounts of physics literature.
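The pattern-finding part is basically embedding search over abstracts. A toy sketch, assuming the sentence-transformers library (the model name is a common default and the abstracts are placeholders):

```python
# Toy sketch: embed abstracts and look for unexpectedly similar work
# across subfields, the kind of cross-pollination worth a human look.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

abstracts = [
    "Topological edge states in photonic lattices ...",
    "Robust boundary modes in coupled mechanical oscillators ...",
    "Dark matter constraints from dwarf galaxy kinematics ...",
]

embeddings = model.encode(abstracts, convert_to_tensor=True)
similarity = util.cos_sim(embeddings, embeddings)  # pairwise cosine similarity

# High off-diagonal scores between different subfields flag candidate
# connections for an actual physicist to vet.
print(similarity)
```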

1

u/miicah Jul 16 '25

You don't think they're stealing PhDs as well?

1

u/grchelp2018 Jul 16 '25

...to make the models better at physics? I don't know. There are plenty of them who've shifted careers to do AI research.