California’s latest AI safety bill is driven by fears that another Donald Trump presidency would overturn federal efforts to rein in the technology — and it’s backed by people connected to a potential Trump ally: Elon Musk.
Dan Hendrycks, director of the Center for AI Safety and also the safety advisor at Musk startup xAI, is one of the chief backers of the sweeping California plan, and told Semafor that mitigating risks posed by AI is a critical bipartisan issue.
“If [Trump] does take a very strong anti-AI safety stance, it will obviously make things difficult and it maybe makes sense not to ally with [him],” said Hendrycks, speaking in his role at CAIS, which is part of the effective altruism movement and has received millions in donations from billionaire Facebook co-founder and Asana CEO Dustin Moskovitz.
Trump’s campaign declined to comment.
Musk has warned multiple times that AI poses a threat to humanity and has donated millions to the Future of Life Institute, another EA-friendly group that supports the state plan. He and Trump have also discussed an advisory role for Musk if the Republican presidential candidate wins another term, and met with him recently about a voter-fraud prevention plan, the Wall Street Journal reported.
The Silicon Valley-connected EA movement focuses on the potential existential risk posed by artificial intelligence, and is beginning to make forays into politics. CAIS and other related groups worry that Trump would undo the work of his predecessor as he did the last time he was in the White House. In 2023, President Joe Biden signed an executive order requiring companies building powerful foundational models to conduct safety audits and share test results with the government, which would help mitigate national security risks as AI continues to advance.
California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, proposed by state Senator Scott Wiener, is even tougher and focuses on tackling the most hazardous issues. The legislation would require developers to swear under oath that their models aren’t capable of carrying out nuclear, chemical, biological or cyber attacks on critical infrastructure that result in mass casualties or at least $500 million worth of damages. Developers must also be able to shut down their models if they wreak havoc.
“Executive orders do not have the same force of law as a statute does. Any executive order can be changed by any president. So having an actual law in place is important and makes sense,” Wiener told Semafor. “We know Donald Trump and his team [couldn’t] care less about protecting the public [from AI], so we think it’s important for California to take steps to promote innovation and for safe and responsible deployment of extremely large models.”
Many players in the tech industry, however, are concerned that the bill is draconian and will hamper innovation in the leading state for building AI. Wiener is now trying to strike a balance, and is in talks to adjust the legislation as the proposal heads to the Assembly.