Introduction: Artificial Intelligence and Democracy in America
by Erik Brynjolfsson, Alex Pentland, Nathaniel Persily, Condoleezza Rice, and Angela Aristidou
Volume 1: Artificial Intelligence and Democracy in America
I. What would the Federalist Papers say if they were written in the 21st Century?
In the late 18th century, the convergence of transformative innovations—technological and economic—with political revolutions in the United States and Europe reshaped the way people lived, worked, and governed themselves. In the United States, the ineffectiveness of national government under the Articles of Confederation led to a Constitutional Convention to re-envision the promise of the American Revolution through new institutions tailor-made for the American context. Alexander Hamilton, James Madison, and John Jay wrote 85 essays under the pseudonym “Publius” to promote the ratification of the Constitution agreed to at the convention. The publication of these Federalist Papers represented a unique moment in political history, both in the United States and for other aspiring democracies, when political leaders analyzed the great challenges of the day and provided a roadmap of institutional innovation for the young nation.
Today, we need a similar ambition of imagination. We, too, stand at technological, economic, and political crossroads that demand the creative rebuilding of existing institutions or the invention of new ones. The political pressures confronting democracy in America and around the world reveal widely held anxieties that lead citizenries to lash out for change, but in uncertain directions. Economic changes feed those political anxieties by diminishing individuals’ sense of stability and control over their economic future. And as the political and economic tectonic plates shift radically, a powerful new technology, artificial intelligence, explodes onto the scene and threatens to transform, for better or worse, all legacy social institutions. The convergence of these political, economic, and technological forces requires an ambitious and fundamental rethinking of existing principles and institutions of governance.
Surveys suggest that the American public is skeptical and concerned about the impact of new technology on governance and society. The techlash of the last decade has led to pessimism about the potential of innovation to fuel human progress without also creating significant, and perhaps even existential, risks. Concerns about social and economic disruption, election disinformation, monopoly power, surveillance, privacy violations, and algorithmic discrimination, to name just a few, have fed into a common presumption that our technological future trends toward dystopia. These fears, while sincere and understandable given recent history, should not feed into nihilism or hopelessness. We can build new systems of governance and guide technological development with an eye toward supporting and even enhancing democratic principles, rather than undermining them.
The Digitalist Papers series presents an array of possible futures that the AI revolution might produce. As such, the Papers do not advocate for a particular system of governance, as did Hamilton, Madison, and Jay. Indeed, technological innovation in AI proceeds at such a rapid pace that no single prescription could capture the full set of alternative paths this transformative technology might take. Moreover, the innovation is not only rapid but pervasive; it extends to all aspects of social and economic life. We are in dire need of analyses and diagnoses that spring from different methodological traditions and reflect different normative commitments.
The Digitalist Papers series aims to bridge domains and disciplines by assembling experts from multiple fields—including economics, law, technology, management, and political science—alongside industry and civil society leaders. First, because a thoughtful examination of AI-enabled developments in relation to institutional structures requires us to turn the spotlight on our current institutions: whether they remain fit for purpose, and whose purpose they serve. This daunting exploration demands both deep domain expertise and collaboration across academic disciplines. Second, because an essential component of building a vision for a desirable future is convening voices from a diverse set of stakeholders and their respective domains, while maintaining openness to various perspectives. Third, because this multistakeholder and multidisciplinary approach is necessary for providing comprehensive insights and actionable strategies that are legible to, and may benefit, society as a whole. The Digitalist Papers create a space for ambitious vision-setting, thoughtful examination, and deliberate strategizing.
II. The First Volume: Setting the Stage
Amid AI-enabled developments that forecast changes in how people organize and govern themselves, and in the democratic institutions that facilitate governance, the contributors to this first volume of The Digitalist Papers series were asked to bring their disciplinary tools and domain expertise to bear on two questions:
(1) How is the world different now because of AI, and what does that mean for democratic institutions, governance, and governing?
(2) What is your vision, and what is your strategy for reaching it?
Across the volume’s twelve contributions, these questions forced a closer consideration of the role of AI in the evolving U.S. social and democratic landscape. Each author in this volume offers a unique perspective, and yet their works collectively put forward a vision in which our democratic institutions and society may not only survive, but thrive, in a world of powerful digital technologies such as artificial intelligence. At a time when national and international initiatives focus more on the risks of AI than on solutions, our contributors open up landscapes of future possibilities and offer an array of well-constructed strategies, each through the lens of its author’s discipline and domain and grounded in their expertise.
The authors do not limit themselves to political or state-sponsored solutions to challenges posed by emerging technology. All recognize that charting the future for AI requires a broad conception of governance. Coming from different methodological traditions and ideological commitments, they focus on different sources of power—in the market, government, civil society, or even within the technology itself. Some offer new legal standards, new public bodies and institutions, new duties on AI platforms, new rights, or new codes of conduct for people in government, policy, and industry, and for everyday citizens. By design, the prescriptions and arguments found in these chapters should be credited to the authors, not the editors. All works underwent a rigorous review process that drew on the input of a wide pool of academics and nonacademics, including leading voices in philanthropy, civil society, and industry. Along with our many shared values, our spirited disagreements are a source of strength for a volume that must humbly recognize that no single person has all the answers and that these uncertain times require a diverse set of perspectives on what is to be done.
III. The Contributions in this Volume
Some of our authors urge significant transformation of foundational aspects of U.S. democracy.
Lawrence Lessig unpacks the assumptions underpinning our current democratic system and pinpoints the key vulnerabilities that AI will affect: the dependence of our democratic representatives on private resourcing, and polarization. Lessig brings a refreshing, philosophically grounded look at real-world challenges. He concludes by envisioning a powerful form of “protected democratic deliberation”: shifting deliberation into a more protected space to enable the resolution of an issue that is not being addressed through regular processes, a practice similar to “citizen assemblies” or “deliberative polls.” Given AI’s potential effect on democracy’s vulnerabilities, Lessig proposes this strategy as a more effective way to make at least some critical democratic determinations, and as essential to protecting democracy in an AI-empowered world. This work opens a much-overdue substantive debate on the very essence of the U.S. democratic system, and it fittingly opens the volume.
The practice of digitally enabled “citizen assemblies” is a reality for Divya Siddarth, Saffron Huang, and Audrey Tang, and it informs and shapes their vision of citizens’ direct involvement in policy formulation. Siddarth, Huang, and Tang emphasize that citizens can and should be highly engaged. In a fascinating analysis of the Taiwanese experience with a form of digitally enabled citizen assembly—the Alignment Assembly—the authors outline their strategy for promoting direct citizen engagement, specifically toward collaboratively defining the future of AI. By extension, they put to their audience the possibility that if powerful digital technologies like AI can profoundly amplify and affect political processes, we can also leverage them to design policies and institutions that are more transparent, accountable, and better suited to the contemporary needs of peoples and societies.
This opportunity for new forms of civic engagement and direct democracy at scale is foregrounded in the work by Lily L. Tsai and Alex Pentland. Tsai and Pentland suggest that, if AI raises the voices of constituents by representing them and their communities directly in the broader political sphere, then it may also deliver on the promise of direct democracy at scale. To reach this long-term vision, the authors propose a strategy for reigniting civic engagement by designing digital civic infrastructure for citizens. Digitally mediated civic spaces may create new threads to weave into the fabric of civic life, helping people (re)learn how to interact and engage in ways that cross lines of “difference,” whether political, ideological, or other. This sets out an ambitious agenda: educating the “demos”—the ongoing generations of people who live together and govern together—to acquire the skills and capacities they will need to participate in our evolving societies.
The strategy of digitally mediated engagement as a scaffold for broader, bigger missions in our analog societies is prominent also in the work of Sarah Friar and Laura Bisesto. Friar and Bisesto remind us that people have a strong local compass in the geographical scope of their activities. Creating digital forms of connection among neighbors plays an important role in strengthening the civic fabric of communities, turning attention to problems that affect the common fate of a region and sparking collaboration and advocacy. At the same time, their analysis suggests that digitally enabled platforms, and even more so AI-enabled platforms, thrive at scale, and it would be easy to forget that they are also embedded in our local communities and in our homes, a profoundly intimate setting. These grounding realizations suggest that how we govern ourselves remains both place-based and context-dependent.
For Jennifer Pahlka, the world’s advanced democracies are faced with the challenge of diminished state capacity—the ability of a government to achieve its policy goals. Pahlka argues that there is a strong link between diminished state capacity and civic disengagement: as governments fail to meet the need for public services and rising public expectations around the delivery of those services, large segments of the voting public become alienated. Further accentuating concerns, the gap between public sector and private sector capacity increases as the private sector adopts AI-enabled technologies but the public sector remains risk-averse. Through a powerful metaphor, the “cascade of rigidity,” and grounded in recent cases of U.S. government public sector decisions, Pahlka underscores the need to first understand how mandates and constraints actually operate in the real world of bureaucracy, and then suggests strategies for AI deployment in the public sector. She articulates a vision for flexing a set of governance “muscles” that act to enable and build capacity within government rather than to mandate and constrain specific public sector actors or actions.
Eric Schmidt bluntly makes the case that changing the existing model of organizing within the U.S. government is imperative to achieving its purpose. Given AI’s overall potential, Schmidt highlights the importance of deploying AI to support democratic governance, which may differ from AI deployments under other forms of governance. This underscores the need for intentional efforts to envision and articulate what the role of AI in a democratic government would look like, from the local to the global level. Schmidt homes in on critical areas of government responsibility—executive, judicial, and legislative—to identify daunting real-world challenges. He then leverages fascinating practical examples, such as “AI in the situation room,” “AI in the courtroom,” and “AI in Congress,” to propose actionable strategies for successfully transitioning to a new era of governance in which AI recommends courses of action to the humans in charge of these crucial areas. In this vision, he underscores the importance of good leadership and the necessity for future generations of government leaders and policymakers to acquire the skills needed to govern in the age of AI.
John H. Cochrane argues that “it is AI regulation, not AI, that threatens democracy.” He makes the case that the institutional machinery of regulation cannot artfully guide the development of one of the most uncertain and consequential technologies of our century. While he readily acknowledges that emerging technologies often have turbulent effects, dangers, and social or political implications, he also argues that putting the brakes on AI is supported by neither historical consensus nor directly applicable scientific evidence, and that doing so risks forfeiting the technology’s benefits. Rather than focusing predominantly on regulating AI use preemptively, Cochrane argues for continuing to trust competition and for strengthening our institutions (beyond regulatory agencies), including all of civil society, media, and academia, to detect and remedy AI-generated effects as they occur.
In his essay, Nathaniel Persily expresses concern that undue panic over AI might, itself, constitute a democracy problem. He argues that exaggerating AI’s impact on the information ecosystem may undermine trust in all media, which would pose a greater cost to democracy than the occasional deepfake that might persuade small groups of voters in a given election. Persily also worries that our assessment of AI’s democracy problem inappropriately grafts onto AI a set of concerns developed eight years ago in relation to social media. Although both technologies may raise concerns about disinformation, surveillance, antitrust, and bias, the nature of these problems and the necessary policy responses are quite different. He concludes that ensuring a democratic future for AI requires a new regime of transparency and accountability, as well as massive public investment to ensure that civil society has the tools to work with government to steer technological development toward democratic ends.
Eugene Volokh critically reassesses the risks associated with concentrated power among entities that provide information on public affairs. He posits that political life will likely be influenced by leading AI companies through the proliferating use of their large language model (LLM) platforms for political inquiries. Volokh points to the stark contrast between the “user sovereignty model” of previous media and the contemporary “public safety and social justice model” of AI tools. The former, exemplified by online search engines, involved technology companies offering tools that let users access the information they sought. The latter involves AI companies implementing guardrails against outputs considered harmful, and it thus vastly amplifies concerns about concentrated power. In his vision, Volokh advocates firmly for a renewed emphasis on the user sovereignty model, pursued through competition and market mechanisms alongside select structural governmental mandates and legal frameworks.
Mona Hamdy, Johnnie Moore, and E. Glen Weyl advocate for a more inclusive, participatory framework that will integrate diverse perspectives and foster collaboration between technology and human society. They emphasize the importance of ecological and religious considerations in shaping a balanced, sustainable future. Presenting a plurality framework of the sort epitomized by the authorship team itself sets this work apart from the vast majority of technology policy and governance works. These typically seek only to solve an identified policy problem while accepting the underlying philosophical premises that created the issues in the first place, such as the dominant techno-ideologies of Libertarianism and Technocracy. In contrast, this team provides a refreshing alternative perspective.
Reid Hoffman and Greg Beato dive into the governance structures of the AI technology itself. To realize AI’s potential to drive widespread innovation and economic benefits, they argue, it is crucial to consider broad and open access, and to emphasize individual agency and participatory governance approaches. They draw insightful comparisons between the successful and widely beneficial GPS technology and the emerging technologies of generative AI and LLMs, noting that while GPS handles objective data, LLMs deal with subjective information. Hoffman and Beato leverage this comparison to propose actionable strategies to create fair and inclusive AI systems that enhance societal trust and deliver equitable benefits across communities, mirroring the successful integration of GPS into daily life.
James Manyika’s essay concludes the volume with a look to the future and an ambitious agenda. Suppose that in 2050 we look back from a society where AI proved broadly beneficial. What went right? Manyika lays out a set of grand challenges that must be addressed. He echoes themes of governance but presents a broader vision for governance across the AI pipeline, from design and development to use, which is markedly different from regulation. Manyika’s essay offers both a vision of and a roadmap to a different type of society: one in which human and technological flourishing go hand in hand. Ensuring that AI technologies are developed and deployed ethically is crucial for maintaining democratic integrity and public trust.
— The Editors
Erik Brynjolfsson
Alex Pentland
Nathaniel Persily
Condoleezza Rice
Angela Aristidou