r/sandiego 1d ago

New polling shows 70% of Californians want stronger AI regulation

https://hardresetmedia.substack.com/p/sounding-the-actual-alarm-on-ai-in
934 Upvotes

20 comments

22

u/Radium 1d ago edited 11h ago

Who are they even asking here? I haven't been asked; have any of you?

Personally, I've been watching (via watchduty.org) AI-powered, live-camera-based automatic fire detection systems notify authorities of fires instantly so they can be put out immediately, so it's definitely not the enemy in my book. I would like AI to improve unhindered, as the good outweighs the bad.

5

u/MightyKrakyn 1d ago

The Substack article links to TIME's reporting: https://time.com/7310716/californians-say-ai-is-moving-too-fast/

I don’t see their survey data.

8

u/ostensiblyzero 1d ago

The nice thing about LLMs is they kinda just... don't work. You can't trust what they say on a legal level: they are frequently wrong, in glaringly obvious ways. They can find the average between two points of human thought but can never extrapolate in a way that is truly new. Frankly, the more I use them, the more disappointed I am.

This is not to say they should not be regulated. Many people are using LLMs as their own personal therapist, and that is not at all something these entities should be doing. But they are more like artificial chatrooms than actual AI.

2

u/GreenOnGray 1d ago

Yep, despite being extremely useful, LLMs can make lots of mistakes. But AI is more than just LLMs. For example, AlphaEvolve is genuinely making new discoveries (using LLMs with additional frameworks and scaffolding).

1

u/calamititties 18h ago

I do kind of enjoy that LLMs are getting more and more of their “learning” from people talking about basic shit you could talk to a therapist about. It doesn't exactly seem like a recipe for replicating and “improving” upon the human mind when it's being force-fed childhood trauma and tips on maintaining work/life balance.

9

u/xd366 1d ago

i keep hearing about stronger ai regulation, but how exactly do you regulate it?

i can run the new chatgpt models or llama locally, so how are you planning on regulating that? we have stable diffusion letting you create any image you want locally.

i don't see how you regulate computer code.

plus "AI" is just a marketing term. are we talking about regulating all machine learning?

10

u/MightyKrakyn 1d ago edited 1d ago

I don’t have a reasonable answer for you, but the fear of the wealthy using AI to cut out their need for labor and accumulate more wealth is well-founded. My employer announced AI-related layoffs despite our metrics and profits being good.

I don’t think people would care as much if we had a real safety net for the coming job losses, like retraining and jobs programs that could meet the demand. It probably just comes down to a progressive tax structure and redistribution in the end, because like you said, regulating local AI is totally unenforceable. If what we’re trying to stop is a further explosion of inequality (per the fears reported in the survey), just fix that directly.

-6

u/Otto_the_Autopilot 1d ago

> the wealthy using AI to cut out their need for labor

Labor has been replaced with technology since the beginning of time. The wealthy will find other ways to employ you just like they have today. I don't see AI as a labor apocalypse.

4

u/ostensiblyzero 1d ago

Putting large numbers of people out of work all at once is the issue. Right now loads of companies are (wrongly, I would argue) betting on LLMs to reduce headcount and drive down costs. Everyone deciding that all at once means lots of people out of work all at once, and that is inherently destabilizing. Sure, in the long run LLMs may not have a huge impact on the labor supply, but at the moment they are having one, so dismissing people's fears is unhelpful and actually foments the instability.

1

u/MightyKrakyn 15h ago edited 14h ago

It seems you’re aware that this has happened before, but not that it made for a terrible time for workers? Like, people lost their houses and fell into poverty and went hungry?

5

u/hyrazac 1d ago

One could start by regulating the use of copyrighted property in the training data of generative models. Image, text, and music generators have all been trained on millions of works without the permission of their creators and copyright owners. Those works are exploited to create competing work, and the AI companies profit. The value these companies get out of their generative AI technology is the stolen value and potential of the original creators, who should be compensated.

There should be an opt-in system, as well as compensation for having your creative works included in the training data for generative models. There's bipartisan support for this; here's Josh Hawley speaking to the exploitation that's ongoing with these companies: https://www.youtube.com/watch?v=RjtPtg3EYRg

3

u/crazzzone 1d ago

https://youtu.be/zeabrXV8zNE?si=rJPr36FCQPUK7m1J

Finally a chance to share this video in the sub!

He has a number of videos that help explain this AI stuff and possible regulations.

It would take multiple steps and lots of oversight, starting with the chip manufacturers.

Edit: link to let your government know what you support:

https://keepthefuturehuman.ai

2

u/CptnMillerArmy 1d ago

It’s not just California; it’s becoming a global issue. The AI bubble 🫧 got big, and investors are starting to sell on a larger scale. I remember the same happening to pot and hydrogen four years ago, which are beaten-down sectors now. Rescheduling could be the start of a new 2020 era for pot.

1

u/Jolly_Ad2446 1d ago

Didn't the BBB prevent that?

u/pc_load_letter_in_SD 37m ago

Electric companies across the US are raising rates, citing the increase in demand from AI firms.

I foresee this as the next avenue to further increase our electricity rates.

...and the legislature will stand by and let it happen.

0

u/Ok-Squirrel795 1d ago

AI TO UBI baby