Democracy 2.0
by Eric Schmidt (1)
The author highlights the potential for AI to empower rather than usurp human intelligence, showing how AI can improve decision-making in government and scale the practice of democracy itself.
As the Nazi armies descended on Europe, H.G. Wells wrote a book called The New World Order, outlining a path to world peace. Wells was not only the best-known science fiction writer of his time, but also a utopian socialist. And in fact, according to Wells, the two went hand in hand, with scientific and human progress moving in lockstep. Even before the creation of the internet, Wells held a deep belief in the transformative power of technology to expand human knowledge, advance world peace, and connect societies across geographies.
In an almost complete inversion of today’s battle lines, the West in the 1940s saw a lively debate between socialist techno-optimists and capitalist individualists. Those on the left imagined a central decision-maker that would ultimately be able to make optimal allocations based on a rational cost-benefit analysis. One Austrian philosopher of science, Otto Neurath, advocated socialist planning where the economy would be treated “as if it were one factory.” (2) Libertarians like Austrian economist Friedrich von Hayek disagreed. (3)
The potential of a true “singularity” today gives rise to a similar hope. Twenty-first-century techno-optimists, now often coming from among the world’s most successful capitalists, imagine a superintelligent artificial general intelligence (AGI) that can outperform all human intelligence and be leveraged to solve the world’s greatest challenges. The more utopian of the techno-optimists envisage a world in which AGI agents replace human policymakers altogether.
Like many revolutionary thinkers, today’s techno-optimists take things too far. They misunderstand what is required for a government to be perceived as legitimate by citizens who have become accustomed to democratic processes; worse, they overlook how democracies exist not merely to fulfill particular administrative goals but to engender a sense of equality and empowerment across society.
While we should not wholesale replace democracy with “algocracy”— rule by algorithms—the techno-optimists have indeed identified something essential: AI will radically transform the way governments make every decision, from the local to the global. For perhaps the first time since democracy’s modern inception in the late 18th century, the age of AI will force a reckoning with that very system. Citizens will need to reaffirm why democracy remains important in their lives. Leaders will need to reevaluate what is working and what is not, and innovate accordingly. If democracy is to survive this century and beyond, it must evolve.
This new reality, however, should not evoke fear or regret. It is cause for excitement and hope. The opportunities afforded by AI to make governance better are unprecedented in human history.
So far, the military domain has led the way on the public adoption of AI. This has played out most clearly on the battlefields of Ukraine, where the Ukrainian military has tried to keep pace with the much larger Russian military through technological innovations. From open-source intelligence collection to AI-powered drones, the technology has already begun to rewrite how wars are fought. Soon, however, other sectors—from healthcare to education—will follow. As AI systems continuously improve, governments will have to make tough calls—balancing equity, speed, and cost—about which decisions to delegate to these new technologies and which to reserve for human oversight.
AI has enormous potential to improve the way local and national governments operate across a range of responsibilities—executive, judicial, and legislative. With the help of new predictive models and novel data collection tools, it will allow executive branch leaders to make more informed and data-driven decisions, especially under time constraints in moments of crisis. Judges, too, will be able to make better, fairer, and faster decisions when assisted by AI models built on past cases and adjusted for human and data-based biases. And on a legislative level, policymakers will summon expert research at any moment and access tools that can help draft and amend laws in seconds.
Beyond improving decision-making processes and government efficiency, AI can also help build support for and faith in democratic governance itself. Through new crowdsourcing mechanisms enabled by AI, governments can more easily aggregate citizen preferences and encourage direct citizen participation, facilitating more people to engage in their own governance and scaling the practice of democracy. By expanding the possibilities for new forms of collective decision-making, AI could fundamentally change what it means to be a politician, a citizen, and everything in between.
At the same time, deep-rooted norms of procedural justice and human leadership should give pause to those who foresee a full-scale victory of an AI social planner. After all, 20th-century socialist planners similarly proved only too fallible. The coming of AGI may herald less of a new world order and more of an improved version of our current liberal order: Democracy 2.0.
The Case for AI
Whether to use AI in governance is an active choice. Why, then, should we involve new technologies in areas so sensitive to the functioning of our democratic process? The simple answer is that our current methods just aren’t good enough.
Our current tools of governance are riddled with human bias, leading to suboptimal outcomes and public discontent. Among other things, we know that political decisions are swayed by availability and recency biases—politicians spent billions on counterterrorism operations after 9/11 but paid little heed to the risk of a global pandemic prior to COVID-19. Experiments consistently show that highly educated and partisan individuals—in other words, the politicians in charge—are least likely of all to update their political opinions with new information. (4) This does not bode well for an already polarized political class, especially in the United States.
As if that were not enough, human decision-making is also negatively influenced by physical constraints: military operators perform worse when tired; judges are less lenient right before lunchtime. With the help of AI, governments will be able to deliver more services better, cheaper, and faster. In short, AI can help those in charge make better decisions.
For instance, AI will prove an important weapon in the fight against corruption and fraud. In 2022, the U.S. IRS was able to respond to less than a third of calls made during tax season. (5) In the future, large language models and AI agents should alleviate the burden by managing taxpayer inquiries, and algorithms will augment human analysts’ ability to detect fraud and tax evasion. Visa decisions are another promising use case for sophisticated algorithms. Consular officers overseas spend the bulk of their time adjudicating visa cases, taking resources away from other diplomatic functions. Canada has pioneered the use of AI to triage visa cases, reducing processing times for low-risk approvals by 87%. (6) AI should drastically reduce the backlog of routine visa decisions in the United States, as well.
Social services could also be more efficiently allocated with the help of state-of-the-art algorithms. Many important social programs suffer from low uptake and are prone to human error. In the United States, for example, eligibility decisions for the Supplemental Nutrition Assistance Program—food stamps—had a startlingly high error rate of 45% in 2023, (7) in part because of numerous and antiquated forms. AI-driven enrollment and AI-assisted eligibility decisions would likely improve the delivery of these services.
Artificial intelligence is by no means without flaws: algorithms, too, are made by humans and thus vulnerable to many of the same biases. COMPAS, perhaps the most widely used AI tool in the U.S. judicial system, has faced criticism for its perceived racial bias in determining recidivism risk. (8) But for AI, biases can be less static. The models are trained on particular sets of data and can be retrained on better, more inclusive data if the algorithms display inadequacies over time. Even imperfect algorithms can still be useful in pointing out human biases as they appear.
Adoption of AI will take different shapes across the various branches of government, at times replacing human decisions, though more often enhancing and informing them. Indeed, the process of developing an AI to make autonomous decisions is actually quite distinct from developing an AI to complement human decision-making—and the latter will likely prove to be more impactful in the public sphere. The integration of AI in governance will thus more often be a response to human fallibility than an assertion of algorithmic prowess.
AI in the Situation Room
At a time when the world is facing a convergence of geopolitical crises, it is especially promising that AI has the potential to substantially improve executive decision-making. Its role here follows from its unique ability to process massive amounts of data at warp speed. Imagine the commander-in-chief being faced with the decision of whether to order a retaliatory missile strike on a position in a hostile country. Currently, the president would likely call on advisors, who at most would have quickly solicited memos from their departments with differing degrees of knowledge about the issue at stake. With the help of AI, however, the White House could summarize all incoming intelligence reports from various agencies and historical diplomatic cables, analyze a treasure trove of open-source information, and then arrive at an informed recommendation.
AI can also vastly improve the predictive capabilities of our executive apparatus. It could simulate different scenarios in a matter of minutes, helping those in charge better prepare for various contingencies. By relying on local sentiment analysis, sensor data, and historical datasets, AI has already helped augment human abilities to predict political trends and crises and will surely grow ever better with time.
Decision-makers could also rely on large language models to “red team” U.S. foreign policy, dynamically simulating how other actors would react to U.S. moves. Beyond running a war game once or twice, one could simulate a war game thousands of times and ask the model to summarize its findings. One could also imagine a “Putin chatbot” trained on public and classified information based on the Russian president’s actions, beliefs, and pronouncements. In these ways, AI can improve contextual understanding and recommend courses of action to the humans in charge.
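The repeated-simulation idea above can be sketched in miniature. The snippet below is a toy stand-in, not a real LLM pipeline: `simulate_wargame` and its hand-picked outcome probabilities are hypothetical assumptions, where a real system would query a language model conditioned on the adversary's doctrine, beliefs, and statements. The point is only the aggregation step—running the game many times and summarizing the outcome distribution.

```python
import random
from collections import Counter

# Hypothetical stand-in for an LLM-driven adversary simulation.
OUTCOMES = ["de-escalation", "limited retaliation", "escalation"]

def simulate_wargame(us_move: str, rng: random.Random) -> str:
    """Return one simulated adversary response to a U.S. move.

    Placeholder dynamics (illustrative only): a sanctions-style move
    skews toward de-escalation; a strike skews toward retaliation.
    """
    weights = {"sanctions": [0.6, 0.3, 0.1],
               "strike":    [0.1, 0.5, 0.4]}[us_move]
    return rng.choices(OUTCOMES, weights=weights, k=1)[0]

def run_wargames(us_move: str, n: int = 1000, seed: int = 0) -> Counter:
    """Run the war game n times and tally the outcome distribution."""
    rng = random.Random(seed)
    return Counter(simulate_wargame(us_move, rng) for _ in range(n))

if __name__ == "__main__":
    tally = run_wargames("strike")
    for outcome, count in tally.most_common():
        print(f"{outcome}: {count / 1000:.0%}")
```

The value of the loop is exactly what the essay describes: instead of one or two war games, the model is queried thousands of times, and only the summarized distribution reaches the decision-maker.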
AI in the Courtroom
The preeminent case of AI compensating for lapses in human judgment is in the courtroom. While courts have historically enjoyed higher levels of public confidence compared to political branches, in recent years, they have become mired in perceptions of bias and partisanship, often with good reason. AI could significantly increase the speed and fairness of judicial decisions.
In the courtroom, AI will largely augment rather than replace human decision-making. AI would be able to detect patterns of bias that judges themselves cannot see and assess risks based on predictive capabilities. Leveraging its ability to analyze large numbers of existing precedents, AI systems could compile decisions handed down by courts in similar cases and help judges make more informed, more consistent, and more equitable choices. Judges could even tailor the model’s inputs to their particular needs, deciding which precedents to base their decisions on, and use large language models to summarize relevant rulings.
In some instances, AI models may be able to substitute for low-level and routine judicial decisions about traffic violations or other misdemeanors. But judges and juries should always have the final say. What is more, the models they rely on should be auditable and transparent, with the information encoded in them both archived and accessible. Such a requirement may be hard for AI companies to accept and deliver but will be critical to ensure public trust in judicial sentencing. Encryption methods and evaluative algorithms can protect companies’ trade secrets while forcing models to reveal the rationales behind their recommendations. To protect technology providers, third-party auditors could be designated to validate algorithmic standards.
AI in Congress
Perhaps most utopian is the idea that AI could help the U.S. Congress and similar bodies emerge from perpetual legislative gridlock. In time, though, AI can equip lawmakers with powerful tools to make better-informed decisions, predict outcomes, streamline administrative tasks, and potentially reach agreement.
By processing large datasets from a variety of sources including public opinion surveys, social media, economic indicators, and historical voting patterns, AI algorithms can identify trends that will help legislators understand the potential impact of their proposed laws and predict public reactions to them. For example, AI can analyze the economic effects of proposed taxes by simulating scenarios and providing legislators with a range of possible outcomes. Such a tool would have been immensely helpful ahead of President Macron’s attempted carbon tax in France in 2018, which ended up generating immense public uproar through the Yellow Vests movement. (9) With enhanced foresight, AI could help policymakers preempt and redesign legislation before a backlash ensues.
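To make the "range of possible outcomes" idea concrete, here is a minimal Monte Carlo sketch of a proposed tax's revenue. Every number in it—the baseline demand, the elasticity distribution, the tax rate—is an illustrative assumption, not an estimate for any real economy; a legislative tool would substitute empirically fitted models.

```python
import random
import statistics

def simulate_tax_revenue(rate: float, n: int = 10_000, seed: int = 0) -> list[float]:
    """Monte Carlo sketch: distribution of revenue from a proposed tax.

    Illustrative assumptions: baseline demand is drawn around 100 units,
    and demand falls with the tax according to an uncertain elasticity.
    """
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n):
        baseline = rng.gauss(100.0, 10.0)         # uncertain baseline demand
        elasticity = rng.gauss(-0.5, 0.2)         # uncertain price elasticity
        demand = baseline * (1 + elasticity * rate)
        outcomes.append(max(demand, 0.0) * rate)  # revenue = taxed quantity x rate
    return outcomes

if __name__ == "__main__":
    revenues = simulate_tax_revenue(rate=0.10)
    cuts = statistics.quantiles(revenues, n=20)   # 5th..95th percentile cut points
    print(f"median revenue: {statistics.median(revenues):.1f}")
    print(f"90% range: {cuts[0]:.1f} to {cuts[-1]:.1f}")
```

Presenting legislators with the full range, rather than a single point estimate, is what would let them weigh downside scenarios before a backlash materializes.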
AI can also improve the efficiency and accuracy of legislative processes. The Congressional Research Service, which provides comprehensive research and analysis to members of Congress, could leverage AI to significantly augment its capabilities. Natural language processing algorithms can assist in drafting, amending, and analyzing legislative texts by quickly identifying inconsistencies or redundancies with existing laws. This can significantly reduce the time and effort required for legal research to ensure that proposed bills are coherent and legally sound.
For something as simple as constituent services, AI could analyze communication from the public, categorize issues, and prioritize responses based on urgency and relevance. AI-driven platforms could similarly facilitate more effective public consultations, allowing citizens to provide input on legislative proposals through user-friendly interfaces. Overall, AI could make governments more open and responsive to the needs of the public, strengthening the link between the state and the people.
The Problem of Legitimacy
With such extensive applications of AI across government, were the techno-optimists right after all? Should algorithms simply take over for flawed human decision-makers? Not quite. The challenge of AI in governance is more than just a technical one. In a deeper sense, governance by AI risks a crisis of legitimacy. Already, trust in traditional institutions is at a record low. Opaque decisions at the behest of unaccountable AI systems may only further erode trust: one can easily imagine the populist vitriol against an administrative state beholden to Big Tech. So far, the paradox of AI is that people do not trust it—but are eager to use it. Surveys find that AI is seen as more effective but less trustworthy than human systems.
Governance by AI falls into the same trap as governance by experts. It presumes that all political decisions are simply technical decisions that can be clearly and cleanly solved with the right information. Indeed, it is true that certain policy decisions objectively promote human flourishing more effectively than others. It is also true that there are many administrative functions that can be easily sorted, and which would produce profound inefficiencies if the government were to consult the public on each one. But many political decisions are reached through some compromise of competing values and priorities. As Aristotle put it in the Nicomachean Ethics, “No one deliberates about things that are invariable, nor about things that it is impossible for him to do.” Even if AI can perfectly simulate citizen preferences, public legitimacy rests as much on an inclusive and transparent process as it does on the end result.
Delegating important decisions to a superintelligent black box seems to take technocracy to its logical extreme. Government by AI threatens to further concentrate power in the hands of unaccountable bureaucrats, technocrats, and, in some cases, autocrats. Rather than converting citizens’ preferences into policies by means of a transparent political process, AI takes an inconceivably large number of inputs and, with the help of often opaque algorithms, arrives at a seemingly unexplainable output. Even if its suggestions significantly outperform humans on some metrics, the process of arriving at them does not inspire public trust. In fact, delegating power to AI may offer a convenient way for elected politicians to deflect criticism and provide an excuse for incompetence at best and authoritarianism at worst.
Democracy Through AI
Rather than outsourcing decision-making power to AI, we can use AI to improve the democratic process itself, yielding new forms of lawmaking and legislative engagement. In this way, AI will not only enable better decisions among individuals in government; it could even help foster a more democratic civic culture on a systematic level, with increased participation, deliberation, and social cohesion dispersed throughout society.
Historically, democratic legitimacy has rested on two pillars: deliberation and mass participation. Different institutional configurations prioritize these ideals to varying degrees, and they are at times in tension with each other. At the dawn of democracy in ancient Athens, a balance was sought by pairing a popular assembly—a form of direct democracy, where all citizens could choose to participate—with smaller deliberative bodies that were composed of citizens selected at random. The tension between deliberation and mass participation grew as large nation-states replaced homogeneous city-states and elections emerged as the locus of modern democracy in the 18th century. Voting and occasional plebiscites were thought to satisfy the need for mass participation, with deliberative principles shaping parliamentary debates upstream, though seldom reaching the average citizen.
Today, as populist waves proliferate across the world and citizens demand a more central role in public deliberations, advocates of deliberative democracy hope that sortition-based citizens’ assemblies can empower citizens and revitalize our political discourse. But so far, experiments in deliberative democracy are limited in size to a few hundred individuals at most. It is logistically infeasible to have humans moderate and participate in much larger fora.
This is where AI comes in. AI has the ability to scale deliberative democracy and thus help resolve the fundamental tension between deliberation and mass participation. A virtual town hall—or hundreds of simultaneous town halls—could be run across an entire country through an AI platform that crowdsources inputs. An online forum could elicit comments from citizens and then aggregate common-ground perspectives with the input of experts. Each citizen could have an AI chatbot in their pocket that serves as a deliberative partner, forcing them to contend with opposing views and sharpen their justifications. Variations of these ideas may suit different governments, and policymakers should experiment with pluralistic approaches.
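The "aggregate common-ground perspectives" step can be sketched loosely. The function below is a simplified, Pol.is-inspired illustration, not that platform's actual algorithm: it clusters participants by their vote patterns with a plain k-means and flags the statements that every resulting opinion cluster agrees with on average.

```python
import numpy as np

def find_common_ground(votes: np.ndarray, k: int = 2, threshold: float = 0.5,
                       iters: int = 20, seed: int = 0) -> list[int]:
    """Flag statements that every opinion cluster leans toward agreeing with.

    votes[i, j] is participant i's vote on statement j:
    +1 agree, -1 disagree, 0 pass. The clustering is a plain k-means,
    a deliberate simplification of a production deliberation platform.
    """
    rng = np.random.default_rng(seed)
    centers = votes[rng.choice(len(votes), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each participant to the nearest cluster center.
        dists = np.linalg.norm(votes[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as its cluster's mean vote vector.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = votes[labels == c].mean(axis=0)
    # "Common ground": every cluster's mean agreement exceeds the threshold.
    return [j for j in range(votes.shape[1])
            if all(centers[c, j] > threshold for c in range(k))]

# Toy example: two opposing camps that nonetheless share statement 0.
votes = np.array([[1, 1, -1],
                  [1, 1, -1],
                  [1, -1, 1],
                  [1, -1, 1]])
print(find_common_ground(votes))
```

At scale, this kind of aggregation is what lets a forum surface points of consensus from millions of participants instead of drowning them in raw comments.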
An early example is Pol.is, an AI platform that has managed to engage as much as half of Taiwan’s population, aggregating and analyzing citizen feedback in real time. Taiwan has used this platform to help achieve public alignment on the question of AI regulation, bolstering the legitimacy of its government. (10) In a way, the solution to too much AI may be more AI. Taiwan and other countries are not shifting power to unaccountable algorithms but instead using AI as a tool to devolve power to their own citizens.
AI for the People
The crisis of legitimacy is real when it comes to AI, and if new technologies are to be used in government, they will require a careful balancing of promise and risk. AI works best when it is human-centered, constantly improving its algorithms based on human feedback and communication. Only if guided by clear values will AI be able to restore trust in political decision-makers. To promote public trust and legitimacy, AI-assisted decision-making should be governed by principles of accountability and explainability.
First, humans must remain the final arbiter when it comes to the core functions of the state. Consequently, humans must also be the ones accountable. People acting on the basis of algorithmic recommendations, be they a low-level judge or a president, must still be held to account when their decisions go awry. Affected citizens should be able to appeal government decisions that were made with the help of AI.
Second and relatedly, bureaucratic decisions in critical areas must remain explainable—no easy feat when large language models are involved, though one that will hopefully become more tractable with time. Politicians and bureaucrats need the technical expertise to be able to understand the output of the algorithms that inform their decisions. Those in charge have to be able to justify decisions to impose a trade embargo or commute a criminal sentence even, and especially, if those decisions are made with the help of AI.
Third, AI systems used in the public sector must be subject to stringent controls related to nondiscrimination as well as privacy. China’s social credit score system conjures fears of a dystopian panopticon where every step is tracked and any click can be used against us. While in office, President Donald Trump proposed having the Social Security Administration monitor social media to detect disability fraud. (11) Sadly, the United States is not immune to the allure of mass surveillance, either.
Innovation From Within
These challenges are daunting, to say the least. As technically advanced as new AI models may be, they will not be able to overcome crises of trust and legitimacy unless they are understandable to the general public. A successful transition to a new era of governance means that not only will AI models have to become better at communicating their insights—political leaders will too. The humans in charge will need to proceed with caution in introducing these new models of decision-making, always remaining transparent about the risks and challenges ahead.
But leaders should not be so afraid of AI that they foreclose its applications altogether. AI systems do not act on their own; ultimately, it is a human choice whether to use AI in government. We maintain agency over how we design it, and in doing so can set the parameters for how it engages in the world. As discussed, governments can accrue tremendous benefits through the use of AI, leading to faster and fairer decisions across executive, judicial, and legislative domains. Even more important than any singular use case, the very process of interacting with and shaping government can be improved by AI, with new deliberative and participatory mechanisms that make the democratic ideal of self-rule accessible to all.
To prepare for a future where we embrace the promise of AI for good governance, we first have to ensure that the public sector does not fall behind. The U.S. government needs to invest in resources—data, software, compute—that enable the rapid innovation, adoption, and scaling of explainable, transparent, and reliable AI. Building on the AI Executive Order, (12) the White House should continue to encourage federal agencies to adopt these new technologies.
Above all, good leadership starts with the right people. The U.S. government should undertake an all-out effort to recruit the country’s top talent. Organizations like Horizon, Partnership for Public Service, and the Nobel Reach Foundation have been leading the charge to recruit top technical talent for public service. Our best and brightest should help build and implement AI tools that can augment and assist human decision-making. If we get this right, this new age of AI should make our politics more data-driven and more democratic, less arbitrary and less polarized.
Footnotes
(1) The author would like to give a special thanks to two members of his research team, Andrew Sorota and Johannes Lang, who were instrumental in writing this essay.
(2) See Otto Neurath, “Economic Plan and Calculation in Kind,” in Otto Neurath Economic Writings Selections 1904–1945, Vienna Circle Collection Volume 23 (Springer Dordrecht, 2023), 405–465, https://doi.org/10.1007/1-4020-2274-3_14.
(3) See Friedrich von Hayek, Individualism and Economic Order (Routledge & Kegan Paul, 1948).
(4) See, e.g., Dan M. Kahan, Ellen Peters, Erica Cantrell Dawson, and Paul Slovic, “Motivated Numeracy and Enlightened Self-Government,” Behavioural Public Policy 1, no. 1 (2017): 54–86, https://doi.org/10.1017/bpp.2016.2, and Shanto Iyengar and Kyu S. Hahn, “Red Media, Blue Media: Evidence of Ideological Selectivity in Media Use,” Journal of Communication 59 (2009): 19–39, https://doi.org/10.1111/j.1460-2466.2008.01402.x.
(5) “National Taxpayer Advocate Delivers Annual Report to Congress; Focus on Taxpayer Impact of Processing and Refund Delays,” National Taxpayer Advocate Service (January 11, 2023), https://www.taxpayeradvocate.irs.gov/reports/2022-annual-report-to-congress/newsroom/.
(6) CIMM – Advanced Data Analytics to Sort and Help Process Temporary Resident Visa Applications – Feb. 15 & 17, 2022, Government of Canada, accessed July 1, 2024, https://www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/committees/cimm-feb-15-17-2022/advanced-data-analytics-sort-process-trv.html.
(7) SNAP Case and Procedural Error Rates, USDA Food and Nutrition Service, accessed July 1, 2024, https://www.fns.usda.gov/snap/qc/caper.
(8) See, e.g., Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin, “How We Analyzed the COMPAS Recidivism Algorithm,” ProPublica, May 23, 2016, https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.
(9) See Bate Felix, “France’s Macron Learns the Hard Way: Green Taxes Carry Political Risks,” Reuters, December 2, 2018, https://www.reuters.com/article/world/frances-macron-learns-the-hard-way-green-taxes-carry-political-risks-idUSKBN1O10AS/.
(10) See Flynn Devine, Alex Krasodomski-Jones, Carl Miller, Shu Yang Lin, Jia-Wei ‘Peter’ Cui, Bruno Marnett, and Rowan Wilkinson, Recursive Public: Piloting Connected Democratic Engagement with AI Governance (Recursive Public, November 2023), https://vtaiwan-openai-2023.vercel.app/Report_%20Recursive%20Public.pdf.
(11) See Robert Pear, “On Disability and on Facebook? Uncle Sam Wants to Watch What You Post,” New York Times, March 10, 2019, https://www.nytimes.com/2019/03/10/us/politics/social-security-disability-trump-facebook.html?smid=url-share.
(12) Exec. Order No. 14110, 88 Fed. Reg. 75191 (October 30, 2023).