For a 2024 moratorium on political LLM AIs

Lessig
May 18, 2023


Two months ago, many within the tech community called on AI developers to pause AI development for six months, to give the industry time "to jointly develop and implement a set of shared safety protocols." As that demand came just as most of the public was getting its first taste of large language model (LLM) AI, few were eager to see the experiment stopped. And when one of the most prominent signatories of the letter, Elon Musk, was reported to be developing his own AI company, cynicism quickly overwhelmed good faith.

But there is a narrower context in which we should be able to achieve universal agreement, at least from the dominant AI platforms: every platform should commit to keeping its technologies out of the 2024 campaign season. LLM technology has the potential to radically change how political campaigns operate, and none of that change would be an improvement to politics in America today.

Already, of course, AI affects politics dramatically. The models that drive the targeting of advertising, both online and on cable, leverage extraordinarily rich profiles of data about their audiences, and those profiles are used to aim advertising where it will have its maximal effect. But that advertising is still largely human-crafted, even if machines generate tweaks and alternatives. It is plainly the expression of a campaign, and the campaign is the work of human beings.

LLM-driven targeted advertising promises to be something wholly different. The machine engages with its audience directly, choosing its own words to trigger the response it is seeking. A Democratic campaign's AI could be set to suppress the Republican vote: its LLM would engage Republican voters, while a complementary AI measured each voter's response and recalibrated the messaging accordingly. Working in tandem, these systems could efficiently drive whatever result the campaign deploying them prefers.
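
To see the structure concretely, here is a deliberately toy sketch, in Python, of the loop just described. Everything in it is hypothetical (the framings, the numbers, the simulated voter); no real campaign tool or model API is depicted. The point is only the shape of the machine: generate, measure, recalibrate.

```python
# Purely illustrative sketch of the closed feedback loop described above.
# All names and values are hypothetical; the "LLM" and the "measurement AI"
# are stubs. The structure is what matters: generate, measure, recalibrate.

import random

FRAMINGS = ["distrust", "apathy", "futility"]  # hypothetical message angles

def generate_message(framing: str) -> str:
    # Stand-in for an LLM call; a real system would craft bespoke text per voter.
    return f"(message built around the '{framing}' angle)"

def measure_response(framing: str) -> float:
    # Stand-in for the complementary AI that scores each voter's reaction.
    # Here we simulate a noisy signal in which one framing secretly "works" best.
    base = {"distrust": 0.3, "apathy": 0.5, "futility": 0.4}[framing]
    return max(0.0, min(1.0, random.gauss(base, 0.1)))

def campaign_loop(rounds: int = 1000) -> dict:
    # Epsilon-greedy recalibration: mostly exploit the best-scoring framing,
    # occasionally explore alternatives. The "optimizing machine" in miniature.
    scores = {f: 0.0 for f in FRAMINGS}
    counts = {f: 0 for f in FRAMINGS}
    for _ in range(rounds):
        if random.random() < 0.1 or not any(counts.values()):
            framing = random.choice(FRAMINGS)  # explore
        else:
            framing = max(scores, key=lambda f: scores[f] / max(counts[f], 1))  # exploit
        generate_message(framing)
        reward = measure_response(framing)
        counts[framing] += 1
        scores[framing] += reward
    # Average observed effect per framing
    return {f: scores[f] / max(counts[f], 1) for f in FRAMINGS}

if __name__ == "__main__":
    print(campaign_loop())
```

Even this toy converges on whichever framing "works", without anyone having to understand, or even see, why it works.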

But how it drives that result, no one really knows. Its messages are not public. We don't know which psychological weaknesses it will exploit. For all we know, AIs could construct wholly fabricated worlds to produce the result they are seeking; if deepfakes work best, AIs would deploy them. At this point, no one in the AI community can say with confidence how such technology would achieve its results, nor what values would constrain it.

It might well be that AI eventually proves to be a valuable and edifying technology for politics. We could well imagine systems that begin to understand the public better than polls or simple online reactions to banner ads do. The technology has enormous potential to lower the costs of law and the rule of law; it could have equally valuable potential in the communication space around politics.

But we should know much more before we allow the technology to stand in the middle of an extraordinarily important national election. We should have confidence that it would be bound by even the weak constraint of truth that operates in politics today. But we don't. We should understand the psychological weaknesses, or strengths, that it exploits before it exploits them. But we don't. Many within platforms such as Facebook expressed enormous anxiety about how their platforms were being used in the 2016 and 2020 elections. The potential disruption by LLM AIs in 2024 is much, much greater.

AI is extraordinary. We should celebrate its potential. But the industry needs to embrace the ancient principle of medicine long associated with Hippocrates: first, do no harm. Before this technology is set loose within the domain of democracy, let us at least understand how it would work. We don't have that understanding now. Every platform should therefore commit that its tools will not operate in the 2024 election.

Those commitments should then be complemented by regulation from Congress. Such rules would immediately be challenged under the First Amendment, and resolving that challenge would likely take years. Yet long before the courts resolve whether democracy can protect itself through law from these optimizing machines, the makers of those machines can exercise simple restraint. That is the very minimum we should expect from these extraordinary innovators.
