7 min read

Governor Gavin Newsom signed SB 53 into law, requiring major AI companies like OpenAI, Google, Meta, and Anthropic to disclose how they plan to mitigate potential catastrophic risks from their AI models.
This makes California the first state to enforce such specific AI safety disclosures, directly targeting companies at the forefront of AI development.
The law could influence how other states approach AI oversight and may even inspire federal discussions. It raises questions about whether public disclosure will push companies to take extra precautions or change the way they design their AI systems.

SB 53 applies to companies with annual revenues above $500 million, requiring them to assess risks such as AI escaping human control or contributing to bioweapons development. Fines can reach $1 million per violation, giving companies a strong incentive to comply.
This focus on the largest players might set a precedent, potentially creating pressure for smaller firms to voluntarily adopt similar safety standards. It also highlights California’s role as both an economic powerhouse and a safety regulator in the AI space.

Newsom’s office says the law addresses a gap left by the U.S. Congress, which has yet to pass broad AI legislation. SB 53 serves as a model that could guide future national standards and encourage discussions on how state and federal rules might align.
By acting first, California creates both an opportunity and a challenge for other states and the federal government. It raises the possibility of a national framework that incorporates state-level lessons without causing regulatory conflicts.

Newsom emphasized that AI regulation should not stifle innovation, aiming to protect communities while maintaining California’s attractiveness to AI companies. The law demonstrates a deliberate effort to combine safety oversight with support for industry growth.
This balance could encourage AI developers to invest in responsible innovation, but it also leaves room for debate about how strict regulations should be to prevent future risks without slowing technological progress.

Last year, Newsom vetoed a previous AI bill requiring annual third-party audits for companies spending more than $100 million on AI models. The veto reflected concerns about feasibility and pushback from the industry.
This history shows that creating workable AI laws is challenging. It also suggests that lawmakers might continue to experiment with regulations, adjusting requirements as technology and industry practices evolve.

Anthropic co-founder Jack Clark praised SB 53 as a strong framework balancing public safety with innovation. Companies like Anthropic and OpenAI may now have a clearer path to show that they are acting responsibly.
The positive reception by some industry leaders highlights that regulation does not necessarily mean opposition. There’s a chance that safety-focused frameworks could build public trust and support long-term growth.

Newsom mentioned that California should ensure alignment with any future federal standards while keeping SB 53’s high safety bar. This could prevent conflicting regulations if Congress eventually passes national AI laws.
This approach opens the door to cooperative governance between the state and federal levels. Companies may benefit from clarity, while lawmakers can maintain flexibility to adjust regulations as AI continues to evolve.

SB 53 follows similar legislation in New York and Colorado, signaling growing state leadership on AI oversight. It could inspire additional states to develop their own rules or adopt similar disclosure requirements.
However, a patchwork of state laws might make compliance more complicated for startups. The law highlights the ongoing tension between innovation and consistent regulation across the country.

The law requires companies to publicly share risk assessments, giving the public insight into AI safety measures. Transparency could pressure firms to prioritize risk mitigation in development processes.
This could also influence investor decisions and public perception, as companies seen as taking safety seriously might attract more support. There’s a possibility that disclosure becomes a competitive advantage for responsible AI developers.

While SB 53 applies in California, AI companies still hope for federal legislation that could create uniform standards nationwide. A federal framework could simplify compliance and reduce regulatory confusion.
It also raises the possibility that federal rules could override state requirements, creating a single cohesive system. California’s law may serve as a benchmark for what such national regulations might include.

According to the Stanford AI Index Report 2025, governments are stepping up on AI with both rules and big investments. In 2024, U.S. federal agencies introduced 59 AI-related regulations, more than double 2023’s total, and twice as many agencies were involved.
Globally, mentions of AI in legislation rose 21 percent across 75 countries, marking a ninefold increase since 2016.
At the same time, countries are investing heavily, from Canada's $2.4 billion pledge and India's $1.25 billion commitment to Saudi Arabia's $100 billion Project Transcendence initiative.
This global push shows why California’s new law requiring AI companies to disclose safety plans makes sense, reflecting a broader trend of governments treating AI safety as a serious priority.

Some Democrats and Republicans are exploring federal AI standards, reflecting the urgency of consistent oversight. Representatives like Ted Lieu stress the choice between fragmented state laws or a unified national approach.
These discussions suggest that lawmakers are aware of the risks of inconsistent regulation. Federal intervention could eventually harmonize rules, but the path forward remains uncertain, leaving states like California in a pioneering role.

SB 53 may push companies to enhance their risk assessment processes and internal safety protocols. Transparency and accountability could foster a culture of responsible AI development.
At the same time, firms might adapt in creative ways to meet disclosure requirements without slowing innovation. The law presents both a challenge and an opportunity for AI companies to demonstrate leadership in safety.
New York lawmakers are aiming to follow California’s lead, proposing similar AI safety legislation in response to SB 53.
Senators Andrew Gounardes and Alex Bores say the law proves that commonsense safeguards for advanced AI are both possible and necessary, helping AI stay safe while still driving innovation.
Their RAISE Act aims to do something similar in New York, requiring top AI developers to have plans for avoiding risks like bioweapons, automated crime, or loss of control over powerful AI systems.
With federal guidance lagging, these state laws could set a de facto national standard for AI safety.

By requiring safety disclosures, California sets an example for both regulators and industry leaders. The state is shaping the conversation on how AI should be governed in high-stakes environments.
This influence could extend nationally or even globally, as other regions consider California’s approach when designing their own AI regulations. The law highlights the state’s unique position at the intersection of technology, economy, and public safety.
Whether California's new law will meaningfully improve AI safety or amount to a regulatory formality remains an open question.
Dan Mitchell has been in the computer industry for more than 25 years, getting started with computers at age 7 on an Apple II.