Tech billionaire Elon Musk and Ethereum co-founder Vitalik Buterin have voiced support for California's SB 1047 AI safety bill, with Musk noting that he has advocated for AI regulation for over 20 years.
In a post on his social media platform X on Aug. 27, Musk said it was "a tough call" that would "make some people upset," but added that he advocates for AI regulation "just as we regulate any product/technology that is a potential risk to the public." Buterin has echoed Musk's call as well.
This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill.
For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk…
— Elon Musk (@elonmusk) August 26, 2024
California Democrats proposed the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (SB 1047) in February. SB 1047 would require AI developers to implement safety protocols to prevent mass casualties or major cyberattacks, including an "emergency stop" capability for AI models.
Many in Silicon Valley have voiced opposition to the new proposed legislation.
Guillaume Verdon, leader of Big Tech's 'e/acc' movement and founder of stealth AI startup Extropic, criticized the bill, saying "[It] sets an awful precedent at a national level and will open up the door for govt lawfare against companies that don't offer govt LLM backdoors."
Verdon also compared the policies with the dynamics of free speech and decentralization, saying it’s “like the Telegram situation but for AI.”
Dan Hendrycks, director of the Center for AI Safety, however, laid out the bill's thresholds and implications, noting that if organizations don't train a model with $100 million in compute, and don't fine-tune a ($100m+) model with $10 million in compute (or rent out a very large compute cluster), the law does not apply to them.
You’re the best, Elon!
TLDR of 1047:
1. If you don’t train a model with $100 million in compute, and don’t fine-tune a ($100m+) model with $10 million in compute (or rent out a very large compute cluster), this law does not apply to you.2. “Critical harm” means $500 million in… pic.twitter.com/aDnU95XqBb
— Dan Hendrycks (@DanHendrycks) August 26, 2024
The bill, however, has also faced opposition from prominent figures and organizations, including Speaker Emerita Nancy Pelosi, Representative Zoe Lofgren, Silicon Valley Representative Ro Khanna, Andreessen Horowitz, and Meta.