By Fox News
15 Nov 2024
California is known for taking on regulatory issues like data privacy and social media content moderation, and its latest target is AI. The state's legislature recently passed SB 1047, one of the US's first and most significant frameworks for governing artificial intelligence systems. The bill contains sweeping AI safety requirements aimed at the potentially existential risks of "foundation" AI models trained on vast swaths of human-made and synthetic data.
SB 1047 has proven controversial, drawing criticism from the likes of Mozilla (which expressed concern it would harm the open-source community); OpenAI (which warned it could hamper the AI industry's growth); and Rep. Nancy Pelosi (D-CA), who called it "well-intentioned but ill informed." But particularly after an amendment that softened some provisions, it garnered support from other parties. Anthropic concluded that the bill's "benefits likely outweigh its costs," while AI pioneer and former Google researcher Geoffrey Hinton called it "a sensible approach" for balancing the risks and advancement of the technology.
Governor Gavin Newsom hasn't indicated whether he will sign SB 1047, so the bill's future is hazy. But the biggest foundation model companies are based in California, and its passage would affect them all.
The California State Assembly and Senate have passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), one of the first significant regulations of artificial intelligence in the US.
The bill, which has been a flashpoint for debate in Silicon Valley and beyond, would obligate AI companies operating in California to implement a number of precautions before they train a sophisticated foundation model. Those include making it possible to quickly and fully shut the model down, ensuring the model is protected against "unsafe post-training modifications," and maintaining a testing procedure to evaluate whether a model or its derivatives are especially at risk of "causing or enabling a critical harm."
In a new letter, OpenAI chief strategy officer Jason Kwon insists that AI regulations should be left to the federal government. As reported previously by Bloomberg, Kwon says that a new AI safety bill under consideration in California could slow progress and cause companies to leave the state.
The letter is addressed to California State Senator Scott Wiener, who originally introduced SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
A second California state senator has introduced bills meant to regulate AI systems, particularly those used by state agencies.
Senator Steve Padilla, a Democrat, introduced Senate Bills 892 and 893, which would establish a public AI resource and create a "safe and ethical framework" around AI for the state. Senate Bill 892 would require California's Department of Technology to develop safety, privacy, and non-discrimination standards for services using AI. It would also prohibit the state of California from contracting for any AI services "unless the provider of the services meets the established standards."
A California lawmaker will file a bill seeking to make generative AI models more transparent and start a discussion in the state on how to regulate the technology.
Time reports that California Senator Scott Wiener (D) has drafted a bill requiring "frontier" model systems, usually classified as large language models, to meet transparency standards once they exceed a certain threshold of computing power. Wiener's bill would also propose security measures so AI systems don't "fall into the hands of foreign states," and would establish a state AI research center outside of Big Tech.