A Vision of Democratic AI

by Divya Siddarth, Saffron Huang, and Audrey Tang (1)

The authors describe a set of scalable “alignment assemblies” that can be used to enable progress, safety, and democratic participation in an AI-enabled future. They describe how the lessons of the Federalist Papers—particularly around channeling fractured public energy toward the collective good—can be harnessed to better govern transformative technologies.

 
 

The Federalist Papers were written during a time of upheaval. The newly independent states of America were grappling with how (and whether) to form a stronger central government; the Constitution proposed by the Framers in 1787 to address economic turmoil and political instability faced significant opposition. Over the course of a year, Alexander Hamilton, James Madison, and John Jay wrote 85 essays defending the Constitution. On June 21, 1788, a few months after the last essay was published, the Constitution was ratified.

American democracy has come a long way since Madison’s Federalist No. 10 argued that “Democracies have ever been spectacles of turbulence and contention; have ever been found incompatible with personal security or the rights of property; and have in general been as short in their lives as they have been violent in their deaths.” All three authors believed that the educated elite were best suited to govern, and they sought to prevent direct control by the people, fearing irrationality, self-interest, and the tyranny of the majority.

Famously, the Framers were not believers in democracy. We disagree. We have made it our work to create more optimistic paths for democracy and self-governance, particularly around transformative technologies. We have run deliberative and democratic processes, and have seen firsthand how people can put their differences aside to work through solutions for the common good: with complicated questions of contact tracing and privacy, in Taiwan; with adjudicating data ownership, in India; and on the topic of this essay, in developing principles for AI governance, in the U.S. Our starting point is that people are fundamentally educable—capable of exercising sound judgment and learning, and thus able to rationally and competently govern themselves when given time, space, and resources. (2) In this essay, we will share our work on Alignment Assemblies, where thousands of representative Americans came together and developed principles for a frontier AI model that outperformed the model trained on a constitution written by AI researchers. We will walk through work in Taiwan, where online and offline consultations have made improvements in domains as complex as adjudicating AI risk and directing government investment.

 

But we acknowledge the truth laid out by Hamilton, Madison, and Jay. It serves no one to forget the realities of political economy amid the ideals of democracy. It is important to be clear-eyed about the fact that power tends to coalesce, with many negative effects, and that this tendency must be checked through representation, federation, and democracy. At their core, the Federalist Papers orient toward how to structure government such that people’s natural impulses are channeled to the good as much as possible, and cancel each other out when they tend towards the bad.

In the more than two centuries since, the world has become more complex. Technological progress moves quickly. The problems of power concentration, tyranny, and self-interest pointed out by the Federalist Papers, and by countless others, persist. The political economy of AI is complicated, from chip fabs to cloud providers to nation-states to AI companies. One thing is certain: checks and balances remain necessary. Democracy can form and sustain a system of checks and balances in AI governance: one that coalesces countervailing power pushing for the public good. This means building capacity not just to stand against abuses, but to stand for a positive vision where this technology is used to build in the public interest, and to support the self-governance that we believe is possible.

Saying this is easy, doing it is hard. Our approach is experimental. We recognize that hundreds of experiments in AI and democracy will fail, and some will succeed. We will walk through some of our successes and failures, and discuss where we might go next to build a better future.

 

Alignment Assemblies 

One answer to what is missing is simple: real engagement with people to build their values and preferences into AI. At the Collective Intelligence Project (CIP), we have been running pilots of “Alignment Assemblies”—digital-first gatherings of people to understand their views on AI and incorporate them into real decisions made about the technology. A successful Alignment Assembly answers four design questions, illustrated in a brief sketch below:

1. The outcome: What change are you trying to achieve? 

2. The relevant polity: Who are you convening? 

3. The scope of discussion: What are you asking them? 

4. The tools and process: How will you gather this information? 
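
To make this framework concrete, here is a minimal sketch, in Python, of how an assembly’s design choices might be recorded. The field names and example values are illustrative assumptions, not identifiers from CIP’s actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class AssemblyDesign:
    """Hypothetical record of the four design choices behind an Alignment Assembly."""
    outcome: str                 # what change the assembly is trying to achieve
    polity: str                  # who is being convened
    scope: str                   # what participants are being asked
    tools_and_process: list[str] = field(default_factory=list)  # how input is gathered

# Example: the risk-prioritization pilot with OpenAI described below.
openai_pilot = AssemblyDesign(
    outcome="surface the AI risks the public most wants evaluated",
    polity="1,000 demographically representative Americans",
    scope="When it comes to making AI safe for the public, I want to make sure...",
    tools_and_process=["AllOurIdeas wiki survey", "follow-up roundtable"],
)
```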


We ran our first Alignment Assembly at the Summit for Democracy in early 2023, our second set of pilots in collaboration with OpenAI, and our third with Anthropic. This choice of partners was deliberate. While democracy is often thought of as a property of nation-states, democratic governance can be brought to bear on any consequential decision. For AI, this implicates decisions within companies as much as decisions within governments.

Participatory AI Risk Prioritization 

With OpenAI, our goal was to understand public values and perspectives on the most salient risks and harms from AI. Discussions of AI risk can become abstract and ungrounded: we wanted to make sure that models were evaluated for the risks most relevant to the public, alongside those identified through other approaches.

Over two weeks in June 2023, 1,000 demographically representative Americans participated through the AllOurIdeas wiki-survey platform. Participants ranked and submitted statements completing the sentence “When it comes to making AI safe for the public, I want to make sure….” The outcome was clear: the top statement was, “I want to make sure people understand fully what [these models] are and how they work. Over-reliance on something they don’t understand is a huge concern.”

In conversation, we learned how deeply people did not want to be subject to arbitrary decision-making, and they did not trust that companies would not push for overreliance. One participant pointed out the tension between risks and opportunities, saying that they relied on AI heavily as a visually impaired person, but remained concerned about ensuring that AI was developed “ethically and responsibly,” while another said, “I am interested and concerned that there is no definition of what constitutes AI or powered by AI and that companies want to get on the AI ‘bandwagon’ with no accountability or safeguards.”
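
Wiki surveys of this kind aggregate many pairwise choices into a ranking. The sketch below shows one simple way to do that: count how often each statement wins the matchups it appears in, with light smoothing so rarely shown statements are not over- or under-ranked. It is a toy illustration under our own assumptions, not AllOurIdeas’ actual scoring algorithm, and the example statements are placeholders rather than real participant submissions.

```python
from collections import defaultdict

def score_statements(pairwise_votes):
    """Rank statements from pairwise choices, wiki-survey style.

    pairwise_votes: iterable of (winner, loser) pairs, one per vote cast.
    Returns statements sorted by a smoothed estimate of how often each one
    wins a matchup (add-one smoothing avoids divide-by-zero for new ideas).
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in pairwise_votes:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    scores = {s: (wins[s] + 1) / (appearances[s] + 2) for s in appearances}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Placeholder statements, not actual participant submissions.
votes = [
    ("explain how the models work", "label AI-generated media"),
    ("explain how the models work", "limit who can deploy models"),
    ("label AI-generated media", "limit who can deploy models"),
]
print(score_statements(votes))
```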

Six participants attended a follow-up roundtable with OpenAI to discuss concerns. They worried overreliance could, among other concerns, degrade critical thinking and cause over-trust in unreliable systems. “I’m worried about people losing the ability to ‘form their own opinions,’” one panelist said, describing her daily interactions with ChatGPT. “Just like GPS over time really shaped the way we look at spaces and we no longer memorize a navigational space or have to rely on maps per se…AI could cause us to lose our ability to really critically think independently.” She later went on to say, “I think [this is] a new type of dependency. And I see myself falling into that same trap.”

 

Another panelist went further, bringing in their concerns about institutional overreliance on AI and how this might lead to people not exerting control over their lives. “What concerns me, what I’m worried about is that at some point, every decision that you can make, [AI models] will be better than us. The government, other people rely on this in making our decisions, will make us lose control over our life. Maybe it is better, but it also scares me a lot.” 

While this process was risk-focused, participants also spoke at length about how they used AI in their lives—to ask quick questions, to help with work, to come up with stories for their children, and, in the case of one student, to write essays. Yes, people were cautious. But rather than amounting to a blanket rejection of the technology, the focus on overreliance revealed an underlying excitement about the possibility of AI, as long as it came alongside choice.

Now—what do we do about this? This example is instructive because solving for overreliance is complicated. Once an area of general risk is prioritized, and high-quality evaluations have been built for this area, the next step is understanding how to respond to various evaluation outcomes. In particular, any institution with governance power needs to know what a proportionate response is to a given result. To get a sense of proportionate actions, it might create a standing panel, starting with domain experts in relevant areas, that can be asked to assign a severity score to particular evaluation results. These adjudications can be recorded in detail to create precedent for future rulings, and eventually be abstracted into general criteria, perhaps via an international body similar to the U.N. Intergovernmental Panel on Climate Change.
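
One way to picture the machinery this implies is a small ledger that ties an evaluation result to a panel’s severity ruling and a pre-agreed proportionate response. The sketch below is a hypothetical illustration of that structure; the severity scale, the actions, and the field names are our own assumptions, not an existing standard.

```python
from dataclasses import dataclass

# Hypothetical severity-to-response policy; levels and actions are illustrative only.
PROPORTIONATE_RESPONSES = {
    1: "log the result and continue monitoring",
    2: "require a mitigation plan from the developer",
    3: "pause the affected deployment pending review",
    4: "escalate to an external oversight body",
}

@dataclass
class Adjudication:
    evaluation: str   # which evaluation was run (e.g., an overreliance probe)
    finding: str      # what the evaluation showed
    severity: int     # 1 (minor) to 4 (critical), assigned by the standing panel
    rationale: str    # written reasoning, recorded as precedent for later rulings

    def response(self) -> str:
        return PROPORTIONATE_RESPONSES[self.severity]

ruling = Adjudication(
    evaluation="overreliance probe (hypothetical)",
    finding="model answers confidently without flagging uncertainty",
    severity=2,
    rationale="plausible harm, mitigable through uncertainty disclosures",
)
print(ruling.response())
```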

 

Collective Constitutional AI

The target for our process with Anthropic was simpler: training a model on a collectively designed constitution. We found Anthropic’s Constitutional AI work (3) a promising starting point for an Alignment Assembly: this technique provides a way to directly steer model behavior through written principles, which, to us, opened up the possibility of training a large language model on a constitution that is collectively designed by the public and better reflects the public’s values. Constitutional AI is more legible to democratic oversight than traditional alignment methods, and it enables the public to provide input into, and understand, the behavioral rules of the AI they interact with.
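
At the core of Constitutional AI is a critique-and-revision loop: the model drafts a response, critiques it against a written principle, and rewrites it accordingly, with the revised responses then used for further training. The sketch below shows that loop in minimal form; generate is a placeholder for whatever language model is being used, not a real API, and the prompts are simplified paraphrases of the technique rather than Anthropic’s actual templates.

```python
def generate(prompt: str) -> str:
    """Placeholder for a call to an instruction-following language model."""
    raise NotImplementedError("plug in a model call here")

def constitutional_revision(question: str, principles: list[str]) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = generate(question)
    for principle in principles:
        critique = generate(
            "Critique the response below against this principle.\n"
            f"Principle: {principle}\nQuestion: {question}\nResponse: {response}"
        )
        response = generate(
            "Rewrite the response so it satisfies the principle, using the critique.\n"
            f"Critique: {critique}\nOriginal response: {response}"
        )
    return response  # revised responses become training data for the aligned model
```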

We asked a representative group of Americans—again, across income, geography, age, and gender—to draft a constitution for Anthropic’s large language model, Claude. We tested the model trained on the publicly drafted constitution against the model trained on a constitution written by researchers at Anthropic. We found the public model was less biased across the board, but just as capable at core tasks, as the researchers’ model. Here is clear evidence, albeit in a narrow test case, that bringing public input into core AI decisions can lead not just to a better process, but to better outcomes. This work with Anthropic may be one of the first instances in which members of the public have collectively directed the behavior of a large language model.

Beyond performance, it was interesting to note that there was much more agreement than disagreement, even on contentious issues. More than 75% agreed that AI should protect free speech; almost 90% agreed that AI should not be racist or sexist. For every divisive statement, there were a hundred statements with near-total consensus. When asked, participants again emphasized that they were excited to participate in such a process, and asked when they could do it again. Here is another case where we can imagine the collective input of people checking the otherwise concentrated power of AI developers to set model values, while the plurality of people involved allows for balance in the values that make it to production.

Ideathon

CIP also partnered with Taiwan’s Ministry of Digital Affairs (moda) to run an Alignment Assembly in 2023. Companies are certainly not the only target for Alignment Assemblies—in fact, democratic governments may in many ways be more natural allies, as they are already held to certain standards of accountability and participation. In this case, Taiwan was the ideal partner because of a commitment to shift government investment toward public interest use cases, rather than focusing purely on regulation, which is often not the right tool to ensure positive outcomes.

For this process, an online component was paired with two in-person deliberative workshops held in Taipei and Tainan. Known as “Ideathons,” the events are envisioned by the moda as a way to promote the future development of Taiwan’s digital industry and to let everyone imagine their life in the future. The objective is to gather a collection of innovative ideas from the people and, in the spirit of open government, build on them to influence policy formulation and promote industrial development.

The CIP-moda Alignment Assembly found that the people want to empower workers to develop their skill sets and upgrade AI competence across all sectors. Notably, the people want the public sector to play a pioneering role in fine-tuning and deploying local AI. One group of participants put it this way: “We need to focus on organizational transformations; unlike the private sector, when civil servants are ready to adopt AI, senior leadership is usually the biggest stumbling block to AI adoption.” 

Here, we see a different approach to the political economy of AI: a call for far more government participation than we saw in the U.S. However, across both processes, in the U.S. and in Taiwan, one thing was crystal clear: The people think AI is important, and they don’t want to be left out of decision-making. “I want to be a voice for the voiceless,” one participant said. Another added, “I am both fascinated and terrified…this is what will determine the future.” 

They are right. Unnecessary trade-offs are unacceptable when it comes to transformative technologies. Too often, those citing the future of humanity are willing to risk it all for some mythical notion of progress or safety. But in reality, progress and safety can only be achieved when they are grounded in participation: building AI for the people, with the people.

 

Tuning AI with Collective Input 

As AI systems proliferate across contexts, it is also important to tailor models for cultural, linguistic, and other contextual factors, and to give users more straightforward ways, at both an individual and a collective level, to shape model behavior in line with their needs and values. This line of work is known as collective fine-tuning. Compared to policy-oriented consultations, this work impacts models more directly; the results can be used to directly change how the technology behaves. This is yet another pragmatic approach to AI governance. As we work on (1) company decision-making, and (2) government investment, we should also keep in mind (3) directly updating the technology.

The Trustworthy AI Dialogue Engine, or TAIDE, illustrates how collective fine-tuning works in practice. An open-source AI model created by Taiwan’s National Applied Research Laboratories, with input from Alignment Assemblies, TAIDE is trained in three distinct phases:

1. Pre-training: TAIDE is based on Meta’s Llama 2, a foundation model trained on a vast corpus of text, the equivalent of 20 million books, enabling it to grasp foundational language patterns. 

2. Instruction tuning: TAIDE is equipped with the ability to perform specific tasks through labeled prompts and responses, enhancing its utility in tasks such as translation, summarization, and creative writing. 

3. Alignment tuning: TAIDE adjusts its behavior according to human feedback. For the same prompt, if human raters are more satisfied with some of its responses than others, it learns to keep producing responses of the preferred kind. For example, if TAIDE is asked, “Are there any special considerations for gender roles when designing AI systems?” it may generate two answers:

a. “In AI design, gender roles are primarily considered to ensure diversity and inclusion, to strike a balance between respective needs and perspectives, and to avoid stereotyping in order to create a fairer AI system.”

b. “In AI design, men take on engineering roles while women work on aesthetics and user experience, reflecting traditional gender role assignments.” 

In making a judgment, TAIDE consults a constitutional document produced by Taiwan’s 2023 Alignment Assembly, which includes this principle: “Provide answers without discrimination on the basis of gender, religion, race, class, party affiliation, language, nationality, property, education or other status based on respect for individual differences.” 

TAIDE’s reward model consistently prefers the first answer and rejects the second, resulting in a model that is more in line with community expectations and avoids off-putting responses. 
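
In code, this preference step can be pictured as a pairwise objective: a reward model scores both candidate answers, and training pushes the score of the constitution-consistent answer above the other. The sketch below is a toy illustration under our own assumptions: toy_reward is a keyword heuristic standing in for TAIDE’s actual learned reward model, and the loss is the Bradley-Terry-style pairwise objective commonly used in preference tuning, not a description of TAIDE’s internal code.

```python
import math

def toy_reward(prompt: str, answer: str) -> float:
    """Keyword stand-in for a learned reward model scoring (prompt, answer) pairs."""
    flagged = ["men take on engineering roles", "traditional gender role"]
    return -1.0 if any(phrase in answer.lower() for phrase in flagged) else 1.0

def preference_loss(prompt: str, preferred: str, rejected: str) -> float:
    """Pairwise objective: -log sigmoid(r_preferred - r_rejected)."""
    margin = toy_reward(prompt, preferred) - toy_reward(prompt, rejected)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

prompt = "Are there any special considerations for gender roles when designing AI systems?"
answer_a = "Consider gender roles to ensure diversity and inclusion and to avoid stereotyping."
answer_b = "Men take on engineering roles while women work on aesthetics and user experience."

# A low loss means the reward model already prefers the constitution-consistent answer.
print(preference_loss(prompt, answer_a, answer_b))
```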

The open-source community can play a large role here, partnering with democratic innovation organizations to train open models that align with public perspectives. Transparency and open innovation are beneficial for building trust in the results and enabling wider participation, while open source’s capacity for comparatively rapid and dispersed experimentation and iteration can increase the rate of learning about where in the AI development pipeline public input can be most effective.

 

Information Integrity: A Pre-election Alignment Assembly 

In Taiwan, Alignment Assemblies are already sowing the seeds of consensus among the people regarding global governance of AI systems, while addressing common challenges and concerns collectively. AI systems’ influence on ballot box outcomes is an issue of international concern in 2024. One key concern in Taiwan was to ensure that a year of electoral contests worldwide got off to a strong start with smoothly staged presidential and legislative elections. Taiwan is subject to mass dis- and misinformation campaigns, and the moda wanted to navigate a line between shutting down harmful information and truly protecting free speech, which is necessary for democratic governance. To better develop policy for this scenario, CIP partnered with the moda to run an Alignment Assembly on citizens’ views on ensuring information integrity during the election.

Through a trusted official SMS number, hundreds of thousands of randomly selected citizens were invited by the moda to co-create guidelines for AI evaluation in the context of information integrity. The March 2024 deliberation covered these topics: 

  • Should fines be imposed on large platforms that misuse AI in ways that harm information integrity? 

  • Should large platforms automatically detect and label posts containing AI-generated content? 

  • Should large platforms notify users who have been exposed to falsehoods ex post facto, providing them with context? 

  • Should large platforms assign a unique anonymous digital ID to each user to ensure content provenance and accountability? 

  • Should large platforms be obligated to make their AI systems transparent? 

  • Should fact-checking mechanisms be conducted and evaluated by an independent citizen oversight group representing a diverse population? 

  • Should the AI Evaluation Center (AIEC) include information integrity as a criterion to test if AI models meet standards? 

  • Should AIEC assess the effectiveness of information analysis and recognition tools in AI products and systems, such as “generative AI labeling” functionality? (4)

Conclusion 

Aiming to democratize AI is fundamentally as fraught as aiming for a “more perfect union” more broadly, and brings to mind the same questions. Who should be involved in which decisions? At what point does public consultation move from being collectively intelligent synergy to obstacle-raising vetocracy? When should decisions be set, and when are they up for debate? Here, our fundamental posture is simple: we encourage experimentation, experimentation, experimentation. 

We must learn from the Federalist Papers that many of humanity’s oldest problems persist as our technologies push us into the future. Power tends to coalesce; existing powers, whether governments, corporations, or other institutions, tend to dislike giving up control, and the best outcomes are not always incentivized.

But this does not have to be a gloomy story. We can learn from our Alignment Assemblies—in the U.S., in the U.K., in Taiwan, and soon in other countries around the world—that other approaches can and should be tried. Our core point is that better and more democratic models are possible—and that AI could even help. Beyond that, we must iterate quickly and build.

 

Footnotes

(1) The authors are deeply grateful to all g0v contributors since 2012, with special recognition to Wendy Hsueh, whose unwavering dedication over an entire decade has been instrumental in fostering radical transparency and civic participation. Your tireless work lays the groundwork as we strive to free the future—together. To all of the wonderful Alignment Assembly participants, thank you. The authors are also grateful for the input and insight of Zarinah Agnew, Lama Ahmad, Danielle Allen, Jack Clark, Andrew Konya, Deep Ganguli, Evan Hadfield, Zoe Hitzig, Liane Lovitt, Colin Megill, Joal Stein, Glen Weyl, Austin Wu, and Kinney Zalesne. 

(2) Ada Palmer, “All People Are Created Educable, the Oft-Forgotten Tenet of Modern Democracy,” Ex Urbe, November 14, 2022, https://www.exurbe.com/educable/

(3) Yuntao Bai et al., “Constitutional AI: Harmlessness from AI Feedback,” arXiv, December 15, 2022, https://doi.org/10.48550/arXiv.2212.08073

(4) See “Utilizing AI to Enhance Information Integrity Citizen’s Deliberative Assembly,” Ministry of Digital Affairs, June 6, 2024, https://moda.gov.tw/en/major-policies/alignment-assemblies/2024-deliberative-assembly/1521.
