William Saunders: Why I Support SB1047

Lessig
5 min read · Sep 27, 2024


Guest essay

William Saunders is a former OpenAI technical researcher who resigned from the company because of its inadequate safety practices and lack of transparency. I was honored to represent him at that time. He recently testified before the US Senate about the company’s attempts to stifle employee dissent, and has written a piece explaining why, in light of his concerns, he supports California’s SB 1047. I’m grateful that he has allowed me to share that essay here. I encourage you to take the time to read the piece. Former employees like William who are willing to critically examine their time at large AI companies offer a vitally important perspective that you simply can’t get anywhere else. I hope @GavinNewsom listens. — @lessig

In high school, I competed in computer science competitions and trained for the International Olympiad in Informatics for years. In my last year, I made it to the final Canadian qualifying round, but just missed the score needed to qualify to compete internationally. As a trained computer scientist and former researcher at OpenAI, it was especially significant to me when OpenAI’s new model, o1, recently attained a gold medal on that same exam. It also passes 80% of questions based on the interviews for my job at OpenAI. It is also the first AI system that could provide material assistance to experts in planning the deployment of known biological threats. Artificial intelligence still underperforms humans on many important tasks, but I’ve watched the number of domains where this is true shrink dramatically over just a few years.

I recently testified at a hearing before the U.S. Senate on the topic of the oversight of AI. I was invited as a witness because of my insider perspective as a scientist formerly at a major AI company. While that hearing focused on general principles aimed at informing regulation in Congress, the arguments I made bear directly on a major political battle occurring on the other side of the country: Senate Bill 1047 in California.

SB 1047 would be one of the world’s first binding regulations on companies building the most powerful AI systems. It currently sits on the desk of Governor Newsom, who must choose whether or not to sign it. He should sign it, for the sake of both the AI industry and the public interest.

OpenAI, along with other leading AI companies like Google, Meta, and Anthropic, aims to build Artificial General Intelligence (AGI) — AI systems that are broadly smarter than humans. We don’t know when they will succeed, but when I was at OpenAI many of us believed that vision would likely be achieved in the next ten years, or could plausibly happen in as little as three years. AGI has the potential to bring great benefits, but it will also come with significant risks. A machine that is smarter than humans could enable bad actors to conduct cyberattacks or teach novices how to create biological weapons. It’s simple logic to recognize that it could even get out of human control entirely. My former team at OpenAI, which has since been eliminated, focused on the challenge of supervising and evaluating AI systems that might be more intelligent than us: how to tell if an AI really is safe or is simply smart enough to cheat on the safety test.

The stark reality is that we are unprepared for this. OpenAI and other companies have repeatedly prioritized fast deployment over rigorous evaluation. While there are many individuals at these companies who want to do the right thing, my experience at OpenAI has convinced me that the companies cannot be trusted to handle this issue on their own.

I support SB 1047 for several reasons, but here I will describe how it addresses three key recommendations I made in my Senate testimony.

First, if future powerful AI systems were to be stolen by adversaries, this could be very damaging for U.S. national security. In my testimony, I described long periods of time when technical vulnerabilities would have allowed me or hundreds of others at OpenAI to steal its most advanced systems. One insider described OpenAI as “China’s leading AGI lab”, because it would be easy for a foreign government to steal our technology if they tried. Internally, the company prioritizes “research velocity” over investing in security before incidents happen, because developing new technology is glamorous and taking security seriously is not. SB 1047 requires AI developers to implement cybersecurity protections that would be stronger deterrents to theft and document their security protocols.

Second is transparency and the public interest. Put simply, the public has a clear right to know how AI companies are managing the risks of their systems. Without public awareness, AI companies will keep being tempted to release systems before safety evaluators have understood their full capabilities. SB 1047 requires AI developers to publish safety and security protocols that describe how they manage the most severe risks from their systems, and it requires developers to actually adhere to those protocols.

Third is protection for employees to speak out about risk when necessary. Company insiders have the best information about emerging risks. Under certain circumstances, they may need to call for outside attention in order to make their concerns heard. In my testimony, I described the restrictive non-disparagement agreement that I was presented when I left OpenAI. I was told that if I did not sign the agreement, which prohibited me from speaking negatively about the company, I would lose all of my equity. I was not even permitted to speak about the agreement. If employees don’t feel safe to talk about problems, then nobody outside of the company will find out until they become a full-blown crisis that will inevitably hurt the entire industry.

My experience convinced me that we need strong whistleblower protections for cases when AI companies fail to report serious risks. It is important that those protections extend not only to blatant violations of the law, but also to highly risky activities that do not necessarily violate the law. SB 1047 includes robust protections against retaliation for reporting issues to the government, including for safety concerns that are not violations of the law, and I strongly support these provisions.

I originally joined OpenAI because I am optimistic that if we do the right thing, we can and will develop AGI safely and it will be used to benefit humanity. Many AI companies, including my former employer, OpenAI, have lobbied against SB 1047, telling us that we should trust them to manage the risks to the public on their own. My experience shows that we should not, and cannot, trust them to do that, and that’s why I resigned. That’s what I told the Senate, and that’s why I urge Governor Newsom to sign SB 1047.
