Mud being thrown at the poor SB1047

The slander of SB1047, redressed.

Lessig
Sep 24, 2024


There is endless misinformation about California’s SB1047. I get it. Bills — even short bills — are hard to read. In the face of endless lobbyist messaging, they’re even harder to understand.

So here, in a single thread, is all you need to understand the bill. (If you want to read along, here’s a clear version of the bill, nicely indented.)

§22602:

The bill begins with definitions:

  • See especially the target of the bill: “covered models” (those trained at $100M or more, or fine-tuned at $10M or more)
  • And the trigger for liability — “CRITICAL HARM”

“Critical harm” means

  • (A) chemical and bioweapons attacks causing “mass casualties”;
  • (B) cyber attacks on critical infrastructure causing $500M in damage;
  • (C) “mass casualties” or $500M in damage from conduct with “limited human oversight” that “would, if committed by a human, constitute a crime”;
  • (D) other comparable harms.

§22603 (the heart of the bill, IMHO):

The bill then requires “developers” of “covered models” to adopt a series of “protections” and “safety and security protocols”, including:

Before Training:

  • “cyber security protections” to prevent unauthorized access to, and misuse of, the model
  • shutdown capability
  • effective protocols to avoid COVERED MODELS that pose an unreasonable risk of causing CRITICAL HARM
  • publish a redacted version of the protocols and give an unredacted copy to the Attorney General (AG)
  • a “take reasonable care…” provision I’ll describe more below (see tort liability)

Before Use:

  • “assess” whether “capable of causing CRITICAL HARM”
  • “record” and “retain” tests to assess whether “capable of causing CRITICAL HARM”
  • some “take reasonable care” provisions I’ll describe more below
  • don’t use if “unreasonable risk” it will cause CRITICAL HARM
  • after 2026, get an auditor to verify compliance with these obligations
  • annually certify compliance to the AG
  • report safety incidents within 72 hours
  • “consider” “best practices” when complying with these requirements.

This is the heart of the bill. Why is it needed? Isn’t every single tech company producing a COVERED MODEL doing essentially all this right now?

Yes, every company promises it is doing this right now. The bill makes those promises meaningful. This section requires auditors, starting in 2 years, to verify that companies do what they say; §22607 below protects whistleblowers who report that they’re not doing what they say.

So, companies opposing the bill are not opposing it because it requires “safety and security protocols.” They have those. They oppose it because they don’t want those protocols to be meaningful or operative or actually constraining. (I get it; auditors can be a hassle.)

§22604:

Creates obligations for computing clusters that train covered models (basically, know-your-customer obligations).

§22605:

A hole in the bill where there was once a cool pricing rule, since eliminated (score one for the lobbyists).

§22606:

Gives the AG the power to enforce the law.

§22607:

Creates critical whistleblower protections, so a company knows it can’t promise one thing and do another. (See the extraordinary letter of 100+ AI corporation employees supporting the bill, in part because of this protection.)

But what about tort liability?

Doesn’t the bill create tort liability for COVERED MODELS that unreasonably cause CRITICAL HARM?

Yes and no. (And this is the most misunderstood part of the bill.)

YES, the bill codifies what lawyers call negligence liability for developers of covered models.

But NO, it doesn’t “create” that liability. Anyone unreasonably creating harm in society is subject to tort liability right now. (Except gun companies, which have paid millions in lobbying fees and campaign contributions to be exempted from this ordinary sort of liability, and some other favorites of the law.)

Indeed, you might say the bill potentially lessens tort liability by clarifying that the liability is negligence, not strict liability. Traditionally, businesses with inherently dangerous products face “strict liability”; one might well argue that a technology capable of producing mass casualties and $500M in harm is inherently dangerous. But not if this bill is signed into law.

What about open source models? Does it regulate them?

Yes, developers of covered models (remember, $100M training or $10M fine-tuning) have tort obligations — but again, they would with or without this bill.

Limiting that liability to developers arguably lessens liability for those who adopt and use open-source models. By requiring a shutdown capability, it likely lessens that liability even more. (An obligation to include circuit breakers in housing makes housing safer; does it also “chill” housing development?)

The law does not create liability for downstream misuse of open-source models; it expressly limits its shutdown obligations to models within the developer’s control. (See more on open-source risks in my article in The Nation.)

Bottom line:

SB1047 creates an obligation for developers of covered models to deploy meaningful and enforced safety and security protocols to avoid critical harm.

The bill clarifies — it does not create — the tort liability that developers of covered models face if their models unreasonably cause critical harm.

So, does it “chill innovation” in AI?

  • Not by creating tort liability; that liability already exists; this bill just codifies it.
  • Not by clarifying that liability is negligence; strict liability would be much more burdensome for developers.
  • Maybe by chilling developers who want to pretend to have “safety and security protocols” but don’t want them to be meaningful or effective.

So sure, maybe it chills that. But how exactly is that a bad thing?
