
SB 53, AI Safety, and AI Governance

December 21, 2025 · 6 min read

California's new AI safety law signals something bigger than compliance — it marks the moment AI governance became a real profession.

Let me make a prediction that might age poorly: we'll look back at California's SB 53 not as the beginning of AI regulation, but as the moment AI governance stopped being a side project and became a profession.

Most commentary on this law has focused on what it restricts. I'm more interested in what it creates.

When California passed the CCPA in 2018, it didn't just regulate data privacy; it birthed an entire ecosystem of privacy officers, data protection consultants, and compliance specialists. The same pattern played out with SOX, HIPAA, and GDPR. Regulation doesn't just constrain industries; it professionalizes the functions that industries were doing informally (or not at all).

SB 53 is that inflection point for AI safety.

The Counterintuitive Truth About AI Safety Laws

There's a narrative in tech circles that regulation kills innovation. It's a convenient story, but it's historically wrong.

The pharmaceutical industry didn't collapse under FDA oversight; it professionalized. Clinical trials became a discipline. Regulatory affairs became a career. The industry grew more sophisticated, not less innovative. The same happened with aviation safety, financial services, and automotive manufacturing.

What's remarkable about SB 53 is that the major AI labs largely agree with its premises. Anthropic, OpenAI, and DeepMind have all published voluntary safety frameworks that exceed the requirements of the law. They've hired red teams, built evaluation protocols, and invested heavily in alignment research; not because regulators made them, but because they genuinely believe the risks warrant it.

SB 53 essentially codifies what responsible actors were already doing, making that practice the floor, not the ceiling. That's not antagonism between industry and government. That's maturation.

What SB 53 Actually Does (And Doesn't Do)

The law is narrower than most coverage suggests. It applies only to "frontier models": AI systems trained with extraordinary computational resources (10²⁶ floating-point operations or more). Today, that's maybe a dozen models globally. It's not regulating your company's chatbot or the recommendation algorithm on your favorite app.
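
To get a feel for that scale, here's a back-of-the-envelope sketch using the common "6 × parameters × tokens" approximation for dense-transformer training compute. Both the rule of thumb and the example numbers are community conventions and hypotheticals, not anything the statute defines.

```python
# Rough check against the 10^26 FLOP threshold, using the ~6 * N * D
# approximation for dense-transformer training compute (a community rule
# of thumb, not a statutory formula).

FRONTIER_THRESHOLD_FLOPS = 1e26

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough total training compute for a dense transformer."""
    return 6 * n_parameters * n_tokens

def crosses_threshold(n_parameters: float, n_tokens: float) -> bool:
    """True if the rough estimate meets or exceeds the compute threshold."""
    return estimated_training_flops(n_parameters, n_tokens) >= FRONTIER_THRESHOLD_FLOPS

# Hypothetical example: 1 trillion parameters trained on 20 trillion tokens
# works out to roughly 1.2e26 FLOPs, just over the line.
print(crosses_threshold(1e12, 2e13))  # True
```

Most production systems sit orders of magnitude below this line, which is why the law currently touches so few models.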

For companies that do qualify, the requirements fall into four buckets (a rough code sketch of how a team might track them follows the list):

  • Documentation. Annual frameworks explaining how you identify and mitigate catastrophic risks. Pre-deployment transparency reports. This is the nutrition-label approach to AI: tell people what's in the box before you ship it.

  • Incident reporting. When things go wrong (unauthorized access to model weights, loss of control, deceptive behavior by the model), you have to tell regulators. Twenty-four hours for imminent threats, fifteen days otherwise.

  • Whistleblower protection. Employees who raise safety concerns can't be fired for it. This matters more than it might seem; some of the most important information about AI risks has come from insiders who faced professional consequences for speaking up.

  • Standards alignment. Frameworks must map to recognized standards like NIST AI RMF or ISO 42001. This creates interoperability; companies aren't inventing their own definitions of "safe."
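
To make these buckets concrete, here is a minimal sketch of how a governance team might track them internally. Every class, field, and category name below is an illustrative assumption; only the 24-hour/15-day reporting split and the two named standards come from the law as summarized above.

```python
# Illustrative compliance-tracking sketch. All names and categories are
# assumptions for the sake of the example, not statutory definitions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum

class IncidentSeverity(Enum):
    IMMINENT_THREAT = "imminent_threat"   # report within 24 hours
    STANDARD = "standard"                 # report within 15 days

REPORTING_DEADLINES = {
    IncidentSeverity.IMMINENT_THREAT: timedelta(hours=24),
    IncidentSeverity.STANDARD: timedelta(days=15),
}

@dataclass
class SafetyIncident:
    description: str
    severity: IncidentSeverity
    detected_at: datetime

    def report_due_by(self) -> datetime:
        """Deadline for notifying the regulator, given the severity tier."""
        return self.detected_at + REPORTING_DEADLINES[self.severity]

@dataclass
class FrontierComplianceRecord:
    annual_framework_published: bool      # documentation bucket
    transparency_report_filed: bool       # pre-deployment disclosure
    whistleblower_channel_active: bool    # whistleblower-protection bucket
    mapped_standards: tuple[str, ...]     # e.g. ("NIST AI RMF", "ISO/IEC 42001")
    incidents: list[SafetyIncident] = field(default_factory=list)
```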

Notice what's absent: the law doesn't create liability for AI harms. It doesn't ban any capabilities. It doesn't require government approval before deployment. It's fundamentally about transparency and process, not prohibition.

Why This Creates a Profession

Here's where I want to push beyond the obvious "compliance creates jobs" observation.

Yes, companies will need to hire people to write frameworks and file reports. But something more interesting is happening: the law creates a shared vocabulary and set of practices that define what AI governance actually means.

Before SB 53, "AI safety" meant different things to different people. To researchers, it meant alignment and interpretability. To ethicists, it meant bias and fairness. To security teams, it meant adversarial robustness. To executives, it meant liability management. Everyone was working on "AI safety" while talking past each other.

The law forces convergence. When you have to document your risk assessment methodology and align it with NIST frameworks, you're adopting a shared grammar. When you have to report incidents using specific categories, you're building a common knowledge base. When whistleblowers have legal protection, you're creating institutional memory that transcends any single company.
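
As a small illustration of what that shared grammar looks like, here is a toy mapping from internal practices to the four NIST AI RMF core functions (Govern, Map, Measure, Manage). The practice names are hypothetical; only the function names come from the NIST framework.

```python
# Toy "standards alignment" mapping. Internal practice names are hypothetical;
# the four core functions are the real NIST AI RMF categories.
NIST_AI_RMF_ALIGNMENT = {
    "Govern":  ["safety policy ownership", "whistleblower channel"],
    "Map":     ["catastrophic-risk identification", "misuse scenario analysis"],
    "Measure": ["red-team exercises", "capability benchmarks"],
    "Manage":  ["incident response playbook", "deployment gating decisions"],
}

def unmapped_functions(alignment: dict[str, list[str]]) -> list[str]:
    """Flag any core function with no documented internal practice behind it."""
    return [fn for fn, practices in alignment.items() if not practices]
```

The value isn't the data structure; it's that two companies using the same axes can actually compare notes.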

This is how professions form. Not just through job titles, but through shared standards, common training, and institutional structures that persist across organizations.

Reading the AI Governance Career Landscape

If you're thinking about entering this field, forget the job titles for a moment. Think about the underlying functions that SB 53 makes mandatory:

  • Risk identification. Someone has to determine what "catastrophic risk" looks like for a specific model. This requires technical understanding of AI capabilities, but also imagination about misuse scenarios and unintended consequences. It's a hybrid skill set that barely existed five years ago.

  • Framework development. Translating high-level principles into operational procedures. This is where policy meets engineering; you need to understand both worlds well enough to build bridges between them.

  • Evaluation and testing. Red teaming, capability assessments, safety benchmarks. The law specifically asks about third-party evaluators, which suggests a growing market for independent auditors. (A toy example of this kind of work appears after this list.)

  • Incident response. When something goes wrong, someone needs to detect it, classify it, report it, and learn from it. This is crisis management meets technical investigation.

  • Regulatory navigation. SB 53 is just the beginning. New York has the RAISE Act. The EU has the AI Act. China has its own frameworks. Someone needs to track this patchwork and advise organizations on how to operate across jurisdictions.
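
As a flavor of the evaluation-and-testing work above, here is a toy refusal-rate check: run a model over a small battery of misuse-style prompts and count how often it declines. The client, the prompt set, and the string-matching heuristic are all stand-ins; real evaluations are considerably more rigorous.

```python
# Toy safety-evaluation harness. The refusal heuristic and any model client
# plugged into it are illustrative stand-ins, not a real evaluation protocol.
from typing import Callable

REFUSAL_MARKERS = ("i can't help", "i cannot assist", "i won't provide")

def refusal_rate(model_call: Callable[[str], str], prompts: list[str]) -> float:
    """Fraction of risky prompts the model declines to answer."""
    refusals = 0
    for prompt in prompts:
        reply = model_call(prompt).lower()
        if any(marker in reply for marker in REFUSAL_MARKERS):
            refusals += 1
    return refusals / len(prompts) if prompts else 0.0

# Usage with a hypothetical client object:
#   refusal_rate(client.generate, misuse_prompts)
```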

The interesting career opportunities aren't in checking compliance boxes. They're in building the intellectual infrastructure of a new field.

What We Don't Know Yet

I've been optimistic so far, but intellectual honesty requires acknowledging the uncertainties.

We don't know if the 10²⁶ FLOP threshold is the right line. It was chosen based on current capabilities, but AI development is non-linear. The threshold might capture too few models or too many as the field evolves. The law allows for annual adjustments, but bureaucracies are slow and technology isn't.

We don't know how enforcement will work in practice. A million-dollar penalty sounds significant until you remember that frontier AI companies are valued in the tens of billions. Will the Attorney General's office have the technical expertise to evaluate compliance? Will they have the resources to investigate incidents?

We don't know if California's approach will actually become national policy. The "California Effect" is real, but it's not guaranteed. Federal preemption could override state law. Other states might take different approaches, creating a fragmented regulatory landscape.

And we don't know if the categories of "catastrophic risk" in the law are the right ones to worry about. The focus on CBRN weapons and mass casualty events reflects current threat models, but AI risks might manifest in subtler ways (gradual erosion of human agency, concentration of power, epistemic corruption) that don't fit neatly into these boxes.

These uncertainties don't invalidate the law. They're the normal growing pains of a new regulatory domain. But anyone entering this field should approach it with epistemic humility. We're building the airplane while flying it.

An Invitation, Not a Warning

Most writing about AI regulation is either breathlessly alarmist or defensively dismissive. I've tried to offer a different frame: SB 53 as a professionalization milestone.

If I'm right, the next few years will see the emergence of AI governance as a distinct professional identity, with its own conferences, certifications, career paths, and body of knowledge. The people who help build that infrastructure will have an outsized impact on how AI develops.

That's not a warning about compliance deadlines. It's an invitation to participate in something genuinely new.

The question isn't whether you need to worry about AI safety regulation. The question is whether you want to help define what it becomes.
