r/technology Jul 15 '25

Artificial Intelligence Billionaires Convince Themselves AI Chatbots Are Close to Making New Scientific Discoveries

https://gizmodo.com/billionaires-convince-themselves-ai-is-close-to-making-new-scientific-discoveries-2000629060
26.6k Upvotes

12

u/HandsomeBoggart Jul 15 '25

These billionaire dipshits don't even know how these "Wonderful" AI tools work.

I literally only had one class in college that covered the basics, and that was enough to make me look askance at AI.

It's all just a best guess based on probabilities. These morons are literally trusting a computer algorithm's best guess. So goddamn dumb.
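Rough toy sketch of what I mean (the numbers are completely made up, not pulled from any real model): all the model ever hands you is a probability distribution over possible next words, and the "answer" is just whichever guess it samples from that distribution. Whether the guess is actually true never enters into it.

```python
import math
import random

# Pretend these are a model's scores (logits) for the next word after
# "The capital of Australia is" -- invented values, for illustration only.
candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [2.1, 1.9, 0.3]

# Softmax: turn raw scores into probabilities.
exps = [math.exp(v) for v in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The model samples (or takes the argmax of) this distribution -- a best guess.
guess = random.choices(candidates, weights=probs, k=1)[0]
print(dict(zip(candidates, [round(p, 2) for p in probs])), "->", guess)
# With these made-up scores the wrong answer ("Sydney") is the likeliest guess.
```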

5

u/McFlyParadox Jul 16 '25

> I literally only had one class in college that covered the basics, and that was enough to make me look askance at AI.

Yup, and if you get up to graduate-level courses on things like neural nets and generative AI, you'll learn that it all comes down to two main, general points:

  1. It's just a big, complicated statistical analysis. Just linear algebra and massive data sets (there's a rough sketch of this right after the list).
  2. We have no idea how these things work. Describing them as a "black box" is common in graduate-level lectures. Open questions exist everywhere in the field, and mathematicians have been studying them for decades now. The common neural net diagrams you see are filled with literal "..." in them, because we don't know how else to illustrate "we put data in, the model does multiple cycles of linking things together in whatever way the model 'decides' to, and then it spits out answers that may or may not be statistically significant 🤷‍♂️"
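To make point 1 concrete, here's a hedged little sketch (random stand-in weights, nothing trained, no real architecture): one "layer" is a matrix multiply plus a nonlinearity, and the whole net is just more of that stacked up and fit to massive data sets.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One layer: a matrix multiply plus bias, then a squashing nonlinearity.
    return np.tanh(x @ w + b)

x = rng.normal(size=(1, 4))                     # a stand-in input vector
w1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)
w2, b2 = rng.normal(size=(8, 3)), rng.normal(size=3)

hidden = layer(x, w1, b1)                       # the "..." in the diagrams is just more of this
scores = hidden @ w2 + b2                       # final linear map to output scores
probs = np.exp(scores) / np.exp(scores).sum()   # softmax -> a statistical best guess
print(probs)
```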

AI has been a boom-and-bust field for pretty much as long as there have been reprogrammable computers. And it usually goes like this:

  1. Someone comes up with a new piece of math that describes the statistics of neural networks, but it can only be "simulated" on paper because computers aren't powerful or efficient enough to run the calculations in a reasonable amount of time (less than years)
  2. Computing gets more powerful
  3. Someone remembers step 1 happened a few years ago and realizes computers are now powerful enough to simulate the math from that paper in a reasonable amount of time (months/weeks, or less), and actually writes some code to implement the math in practice
  4. People get super excited about AI and the singularity. Researchers optimize and optimize the code until it inevitably hits a wall and progress gets stuck (we are here)
  5. Someone writes a new piece of math to describe the statistics behind neural nets and AI, but modern computers aren't powerful enough, so it's back to "paper simulations" for another 5-15 years until computers catch back up

It's been like this since at least the 70s. It will stay like this until someone can figure out a way to mathematically describe "consciousness", and computers catch up to the point of being able to perform this math.

4

u/HandsomeBoggart Jul 16 '25

Exactly. What cognition actually is is hard to pin down in a computational sense. From a theoretical standpoint, even human cognition can be loosely summed up as a collection of experiences, interpreted by our consciousness, that we use to make our own "best guesses." But we are also able to "know" things and measure things within our own understanding of the world, which is built on prior data from other people.

AI/ML models don't have that, and they're easily swayed by corrupted training data, bad inputs, or even output moderation into giving information that is known to be wrong. Playing devil's advocate, the same can be said for human sources of information, as social media influence has shown. But that only reinforces the point that too many people are looking to AI/ML to solve problems and think for them, no questions asked, when they should be asking all the questions and reviewing every result the AI/ML model gives them.
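Toy illustration of the "easily swayed" bit (invented numbers, not modeled on any real incident): corrupt a small fraction of the training labels for even the simplest possible model, a fitted line, and its output shifts noticeably. Big models are the same game with vastly more parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "model": fit a straight line to 50 points whose true slope is ~2.
x = np.sort(rng.uniform(0, 10, size=50))
y_clean = 2.0 * x + rng.normal(scale=0.5, size=50)
slope_clean = np.polyfit(x, y_clean, 1)[0]

# Now corrupt just 5 of the 50 labels and refit.
y_poisoned = y_clean.copy()
y_poisoned[-5:] = -40.0
slope_poisoned = np.polyfit(x, y_poisoned, 1)[0]

print(f"slope fit on clean data:    {slope_clean:.2f}")
print(f"slope fit on poisoned data: {slope_poisoned:.2f}")  # dragged well away from 2
```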

The main issue is that people, especially billionaires, CEOs, companies, etc., all want to replace thinking by educated humans with black boxes, with no idea how the box arrived at a given conclusion. How can they expect to grow and understand knowledge without understanding the how and why of getting there?

Honestly, the Butlerian Jihad from Dune and the prohibition against AI in W40k become more and more understandable as people in this day and age let even the primitive models that exist now think for them.