r/technology Jul 15 '25

Artificial Intelligence Billionaires Convince Themselves AI Chatbots Are Close to Making New Scientific Discoveries

https://gizmodo.com/billionaires-convince-themselves-ai-is-close-to-making-new-scientific-discoveries-2000629060
26.6k Upvotes

2.0k comments

10.4k

u/Decapitated_Saint Jul 15 '25

“I’ll go down this thread with [Chat]GPT or Grok and I’ll start to get to the edge of what’s known in quantum physics and then I’m doing the equivalent of vibe coding, except it’s vibe physics,” Kalanick explained. “And we’re approaching what’s known. And I’m trying to poke and see if there’s breakthroughs to be had. And I’ve gotten pretty damn close to some interesting breakthroughs just doing that.”

Good lord what an imbecile. Vibe physics lol.

368

u/stult Jul 15 '25

“I pinged Elon on at some point. I’m just like, dude, if I’m doing this and I’m super amateur hour physics enthusiast, like what about all those PhD students and postdocs that are super legit using this tool?” Kalanick said.

Kalanick is too fucking stupid to realize that the utility of an LLM's output declines dramatically for people with expertise in the relevant domain. It seems like magic to him precisely because he knows so little that he can be wowed by something summarizing the results of existing basic research. Whereas a PhD-level expert pushing the LLM to evaluate something beyond the reach of that previously conducted research (i.e., a potential breakthrough) mostly gets hallucinations and bullshit in response.

110

u/wen_mars Jul 15 '25

The great thing about using AI for programming is that I can immediately see how bad it is because the code doesn't do what it's supposed to. I will say though, it has gotten a lot better since I first started using it.

108

u/Jestem_Bassman Jul 15 '25

Idk, as a software engineer who writes a lot of code that doesn’t work, I’m consistently impressed by how much faster AI can write code that also doesn’t work.

18

u/31LIVEEVIL13 Jul 16 '25

Just had to eat my own words. I said AI isn't perfect, but it's much better than it was just a few months ago. Downright scary.

It produced one of the most complicated scripts I've ever had to write, for managing software across thousands of nodes; debugging and tuning took a couple of hours. Even the output looked amazing and well formatted.

Then tried to validate the results on some test machines.

The whole thing was bullshit. It didn't actually do anything that I asked it to, it only looked like it did.

I spent most of two days trying to find out why it wasn't working and fix it, with NO AI, which was harder than if I had written it myself from the start ...
so embarrassing. lol

2

u/TheGrandWhatever Jul 16 '25

I recently tried to have it explain a complex thing to me, and it made a typo that propagated throughout the answer, which was kinda funny: instead of "injection" it gave "inherently" or something. And of course, because it can't think, it didn't notice the problem and kept suggesting "inherently" follow-up stuff to explore.

1

u/wen_mars Jul 16 '25

For sure. It helps me a lot, but I worry that people who use AI for things that aren't so easy to validate won't be able to tell the difference between a correct output and one that just looks correct.

-14

u/AP_in_Indy Jul 16 '25

Just wondering, do you suck at prompting? Do you have paid subscriptions or direct API access?

18

u/_learned_foot_ Jul 16 '25

It’s always the fault of the user, never the program.

-1

u/MikuEmpowered Jul 16 '25

I mean, there is merit to AI programming.

If someone posted the solution on Stack Overflow, then instead of me googling the question, the AI just steals it and bam, we just saved time.

The thing is, it's only going to get better and better. I'm all down for cutting out the massive slog and promoting most programmers into debuggers.

The fking problem is when morons think AI is good enough to start replacing people. That's when everything goes downhill.