Misunderstanding AI’s Democracy Problem

by Nathaniel Persily (1)

Undue panic over AI could harm democracy more than AI itself. The author suggests that exaggerating AI’s effects might undermine trust in all media, posing a greater threat than isolated disinformation incidents. He concludes that a democratic future for AI requires transparency, accountability, and significant public investment to guide technological development.

 
 

Artificial intelligence amplifies the ability of good and bad actors alike to achieve the same goals they have always had. That maxim applies to democracy as it does to all other social systems. For foreign intelligence officials whose previous attempts to interfere in another country’s elections were limited by the language abilities of their staff, for example, generative AI lowers the cost of creating targeted disinformation in a native tongue. For a cash-strapped challenger without the resources to create advertisements, other professional campaign materials, a jingle, or a team of “volunteers” to converse with voters online or by phone, AI offers a way to compete against well-funded incumbents. And for election officials, who hope to improve communication with voters, optimize resource allocation, detect fraud and other anomalies in voting returns, or verify signatures on mail ballots, AI will help them perform their jobs better as well. AI will soon be pervasive throughout the democratic system; before long, every digital tool related to elections will employ AI to a greater or lesser degree.

Even if AI is “just a tool,” Americans are concerned about its potential impact on democracy. Recent polls show that a majority of Americans expect AI to affect the outcome of the 2024 election. (2) On the one hand, the concern is curious, given that most Americans have not (wittingly) used AI tools. On the other, it is to be expected given Americans’ general unease with both technology and the state of our democracy. This anxiety has increased ever since the 2016 election, when reports of Russians buying Facebook ads, campaign targeting by privacy-endangering firms such as Cambridge Analytica, and the hacking of Clinton campaign emails engendered a feeling that tech and democracy were at odds with one another. No matter that four years earlier, celebratory books had been published about the tech wizards who used the latest data and ad-targeting tools to help President Obama win two elections. (3) On balance, Americans continue to believe that emerging technology poses a threat to democracy.

The concerns and concepts related to social media and earlier elections are now being grafted wholesale onto the new capabilities that AI enables. This, too, is no surprise, given the overlapping concerns people have about the current crop of AI companies, some of which, like Meta and Google, continue to be blamed for their role in influencing earlier elections. As with social media, AI creates its own dangers related to disinformation, privacy, surveillance, antitrust, and racial bias. We should therefore expect these fears to build on the anxieties that have been growing and festering for the last eight years. As Eric Schmidt and Jonathan Haidt have also pointed out, the technologies are interrelated and codependent. AI will turbocharge social media and exacerbate its threat to democracy. (4)

However, there are significant differences with this new technology as well. AI is both “worse” and “better” than social media when it comes to its unique challenges to democracy and elections. This dynamic holds for AI’s disinformation problem, as it does for its antitrust/competition concerns and worries about racial and political bias. Re-fighting the last war on the tech-democracy battlefield will distract from both the new threats and the new opportunities that AI presents for democracy.

This essay covers just a few of these types of challenges. It begins by explaining why panic over AI represents a democracy problem unto itself. It then shifts to two analogous problems from social media—bias and competition—to examine the differences between AI and social media. The essay concludes with a few high-level policy recommendations along with a recognition that we will not be able to “tech our way out” of the democracy challenges posed by AI. Unreasonably high expectations for both technology and regulation to rein in AI will compound some of the most widespread concerns, such as those relating to deepfakes and synthetic media.

 

AI Panic and Democracy 

Conventional wisdom regarding AI and democracy tells a familiar story based on a mindset forged by the 2016 election. AI will allow bad actors to generate an avalanche of disinformation and deepfake imagery, according to this view, and some critical share of voters will cast their votes based on candidate-specific falsehoods to which they are exposed on social media. More worrisome still, AI might enable greater microtargeting and hard-to-detect surgical strikes of misinformation to narrow populations that might flip elections in battleground states. Especially if we are in a Bush v. Gore-style presidential election decided by 500 votes, even the smallest AI persuasion tactics might prove dispositive. 

Any or all of this might indeed happen, but there is considerable cause for skepticism. First, as with disinformation generally, (5) AI-produced fakes are quite likely to be a “tail-problem”—a small number of likely producers and a small-ish share of consumers. Many, if not most, of those consumers are true believers already. For most voters, AI-produced content, like all content, will do little to affect the deep partisan commitments that determine their vote. 

Second, the same countermeasures the platforms have used for influence operations generally would apply to operations that use AI to spread falsehoods. Although attention often focuses on fact-checking, most of the actions platforms have taken against disinformation do not relate to the content of posts but rather to the networks that spread the content. There is a reason that Facebook takes down close to four billion fake accounts per year. Most of those accounts are not human-generated; bots and AI are not a new problem. The key question is whether the platforms will place brakes on the virality of AI-generated content and continue to address “coordinated inauthentic behavior,” irrespective of the content itself.

So long as AI-produced deceptive content does not reach a significant number of persuadable voters, it is unlikely to have much political impact. That is, if a deepfake is created in the forest and there is no one there to view it, there is no democracy problem. The fact that there will be thousands of examples of deepfakes (as there already are) does not speak to whether they will have an impact. One reason to predict a low likelihood of AI-generated electoral persuasion is that journalism and political content in general represent a small share of the average person’s social media feed. (If you are reading this, your feed is not normal and you are not an average user.) Estimates differ across platforms and definitions of political/civic/news content, but the most systematic analysis suggested that in 2020, “fake news” comprised only 0.15% of Americans’ media diets. (6) Of course, for millions of people, news and even false content make up a larger share of what they consume. (And outside of Western democracies with robust media ecosystems, we should expect the problem of viral disinformation to pose a more significant danger, given the absence of counterspeech.) Whether among QAnon supporters, election deniers, or anti-vax groups, large numbers of people seek out or otherwise receive false political content. But for many people who fall into these categories, the truth or falsity of the content is beside the point. AI-generated or false content serves instead as a form of tribal reaffirmation or entertainment. It does little more than reinforce support for the party or candidate they already back.

With respect to the general population, even viral deepfakes do not occur in a vacuum. When they reach a certain level of popularity, the mainstream media will begin to identify them and evaluate their authenticity. Of course, at that point, it may very well be too late for some, but only in the sense that people will then retreat to their own trusted sources to vouch for or doubt the validity of the AI-generated content. In this respect, AI-generated content, like all content—true, false, or otherwise—interacts with the polarization that characterizes media consumption generally.

Although the likelihood of a given deepfake flipping a significant number of votes seems low (and the lack of significant examples from recent elections in India, Indonesia, the EU, the United Kingdom, and France suggests the harm may be overstated), that does not mean AI poses no threat to the information ecosystem or democracy. Indeed, the threat remains quite dire. The danger comes from the fact that fears about the ubiquity of AI-generated content will erode people’s trust in true sources of information. Even the most hyperventilating predictions would not suggest that AI-generated false political content will comprise more than a fraction of a percent of a given social media user’s media diet. However, media coverage and amplification of that tiny share of fake content will infect users’ confidence in the remaining 99%+ of content in their feeds. Isolated AI-generated content might not persuade people of the truth of a given falsehood, but it will cause them to doubt the validity of all of the rest of the media they consume. Moreover, this “liar’s dividend” will enable political candidates to disclaim the truth of accurate video and audio content by saying it is likely fake. Voters are then left in a nihilistic bind in which they do not know whether they can trust anything. That dynamic, much more than the occasional persuasion by synthetic media, poses the greater long-term threat to democracy.

Hallucinations and Bias

In addition to the synthetic media disinformation problem, AI also poses unique misinformation problems—that is, problems of unintentional, as opposed to intentional, falsehood. Two distinct problems, each with democracy implications, have garnered a great deal of attention: hallucinations and bias. Although we ordinarily do not think of these dynamics as similar, in both cases we measure the “accuracy” of generative AI tools against some external measure of validity. For hallucinations, we fault the AI for misrepresenting the truth. For bias, we consider the responses inappropriate against some benchmark of an unbiased response (or pattern of responses). In each instance, the democracy costs arise as citizens develop beliefs and attitudes considered misinformed or otherwise prejudiced.

As people use AI tools to replace search, the stakes for each answer increase. When Google provided ten blue links in response to a search query, it could disclaim responsibility (both legally under Section 230 of the Communications Decency Act, and, to some extent, morally) for the range of responses it provided. When it generates a single, original answer to a question, Google is “speaking” and responsible for the answer it provides. If those answers foster misinformation, polarization, prejudice, or other anti-democratic attitudes or behaviors, the AI tool should be held accountable.

At present, many people fail to understand that generative AI tools like ChatGPT are not information providers. They generate responses from predictions derived from their training data, not from a real-time crunching of the knowledge found on the internet. As such, they can misquote (literally generate false quotes) and misattribute statements to particular sources, and even make up recommendations, as when Google’s AI Overviews recently suggested people should eat one small rock per day or use nontoxic glue to keep cheese on their pizza. (7)

 

With respect to democracy, we should be concerned when these models lead voters astray, particularly as they convey election-related information. Early in its release, for example, ChatGPT would respond to queries such as “Where is my polling place?” by making up a location or generating an answer from training data that was several years old. In either case, the cost of error, if such tools were widespread, could be significant for a given voter. In response, OpenAI has partnered with the National Association of Secretaries of State to forward such queries to authoritative sources of election information. (8) Anthropic has taken a similar approach by partnering with Democracy Works, whose TurboVote service draws on official sources of election information. (9) The platforms will either base responses on reliable government sources, akin to search, or forward users to those authoritative sources once a query triggers a response that depends on reliable, contemporaneous election information. Unfortunately, a recent study has shown that, even with these mitigations, these AI tools often deliver inaccurate election-related information. (10) The results were even worse for the open-source models.
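To illustrate the general routing approach—and only as a minimal sketch, not a description of either company’s actual implementation—a chatbot might classify election-administration queries and hand them off to an authoritative source rather than answering from stale training data. The keyword list, messages, and function names below are illustrative assumptions; CanIVote.org is the voter-information site run by the National Association of Secretaries of State.

```python
# Hypothetical sketch: divert election-administration queries to an
# authoritative source instead of letting the model answer from
# (possibly outdated) training data. Keywords and wording are assumptions.

ELECTION_INTENT_KEYWORDS = (
    "polling place",
    "where do i vote",
    "register to vote",
    "voter registration",
    "mail ballot",
    "absentee ballot",
)

# NASS voter-information site.
AUTHORITATIVE_SOURCE = "https://www.canivote.org"


def is_election_admin_query(query: str) -> bool:
    """Crude keyword-based intent check; a real product would use a classifier."""
    q = query.lower()
    return any(keyword in q for keyword in ELECTION_INTENT_KEYWORDS)


def respond(query: str, generate) -> str:
    """Hand election-administration queries to an authoritative source;
    otherwise fall back to the model's generated answer."""
    if is_election_admin_query(query):
        return (
            "For current, official information about voting in your area, "
            f"please consult your state election office via {AUTHORITATIVE_SOURCE}."
        )
    return generate(query)


if __name__ == "__main__":
    # `generate` stands in for whatever text-generation call the product uses.
    print(respond("Where is my polling place?", generate=lambda q: "(model answer)"))
```

A production system would use a far more robust intent classifier and jurisdiction-specific sources, but the basic division of labor—generate where the model is reliable, defer where it is not—is the same.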

When queries turn from objective facts to questions inherently infused with political valence, though, the challenges confronting platforms take on a different character. It is one thing to direct voters to a polling place, but it is quite another to provide an answer to “Which candidate is better for America?” A search engine could disclaim responsibility by forwarding to other sources, even if the prioritization of links might raise concerns. When a chatbot is expected to give an answer, though, the value choices undergirding the product inevitably draw scrutiny. Of course, for a certain set of queries, such as those mentioning candidates, the bot can default to search, as it does with election administration questions. But the line between political and nonpolitical is impossible to draw, especially as political polarization infects more and more topics typical of daily life. 

AI bias is a topic already so well trod that it need not be fully rehashed here. Suffice it to say that bias can creep into these systems through the training data, fine-tuning, and limitations placed on the responses (as well as at other stages). Examples abound: Google Gemini, for example, would create images of female popes and black Nazis, (11) and ChatGPT would say that misgendering someone or using racist language was as bad as a nuclear holocaust. (12) In the face of these scandals, the platforms have toned down some of the under-the-hood manipulations of query responses that seemed to create “Woke AI.” 

AI bias is actually a conceptually impossible problem to solve, especially as it relates to the democratically relevant actions we expect from these systems. Consider a related critique often lodged at these systems: that they exhibit racial or gender bias. (13) For example, some image-generation tools may respond to the query “draw me a picture of a nurse” only with pictures of women. There are a variety of arguments as to why this type of systematic bias is wrong. Some would say that the pattern of responses ought to mirror the population, so that if 80% of nurses are women (as is roughly true in the U.S.), then about one out of five queries ought to lead to a picture of a man. Others might suggest that the tool should affirmatively avoid replicating the biases in society, such that it should actively try to undermine stereotypes by providing more diverse outputs than a census of the relevant population would. (Such was the road Gemini traveled that produced the “diverse” set of responses to its image queries.) Still others might suggest that randomness is the way to go—let the model flip a coin, in a sense, whenever it needs to represent gender in its responses. (Although how it represents gender, let alone transgender nurses, would still complicate a 50-50 approach.)
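To make the competing intuitions concrete, the sketch below simulates three hypothetical sampling policies for the “draw me a nurse” example. The 80% figure is the census share discussed above; the binary woman/man simplification, the policy names, and the functions themselves are illustrative assumptions, not any company’s actual approach.

```python
import random

CENSUS_SHARE_WOMEN = 0.8  # the rough U.S. figure cited above


def census_proportional() -> str:
    # Mirror the population: about four in five images depict a woman.
    return "woman" if random.random() < CENSUS_SHARE_WOMEN else "man"


def counter_stereotypical() -> str:
    # Actively push against the stereotype, e.g. by inverting the census share.
    return "woman" if random.random() < 1 - CENSUS_SHARE_WOMEN else "man"


def coin_flip() -> str:
    # Ignore the population entirely and flip a coin.
    return random.choice(["woman", "man"])


def share_of_women(policy, trials: int = 10_000) -> float:
    return sum(policy() == "woman" for _ in range(trials)) / trials


if __name__ == "__main__":
    for policy in (census_proportional, counter_stereotypical, coin_flip):
        print(f"{policy.__name__}: ~{share_of_women(policy):.0%} of outputs depict women")
    # Each policy is internally coherent, yet each implies a very different
    # pattern of responses -- and none of them is value-neutral.
```

Running the simulation makes the point visible: each policy produces a defensible but very different distribution, and choosing among them is a value judgment, not a technical one.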

 

There is no “right” answer to these questions, which is to say that there is no value-neutral pattern of responses. Each of the patterns described above has something to commend it. One needs a theory of the purpose of a certain AI product in order to make an argument for the “correct” answer or pattern of answers. Moreover, especially when one considers the global application of these tools, a whole other set of questions arises concerning the proper baseline for judging biases in a given region among a given population with a given set of values. 

User empowerment, choice, or sovereignty is often proposed as a solution. Let users decide how woke their AI should be, for example. Or have the product respond with a query for more information—e.g., “Do you want me to draw a male nurse or a female nurse?” These mitigations can go some distance, but ultimately, no product can account for the diversity of the human experience, whether in image generation or even in the ideological valence of text responses. The same would be true with a human artist or open-ended survey respondent. Ultimately, one must ask which AI-generated responses will require greater user input in order to offload the value judgments from the companies to the users.

All of this matters for democracy because the corporations that develop these systems will make decisions about the political values that will govern these tools. Those decisions might cater to the ideology of employees, to the median voter, to whatever is most profitable, or to the pressures applied by governments and civil society. The AI companies are quickly becoming a new set of referees in the culture wars in which relevant constituencies lobby over the id and superego that will govern the mind of AI systems. The challenge for governance of these systems is to ensure that one set of corporate-determined values does not gain undue influence over voters and citizens around the world.

Competition 

The bias inherent in any given model might be less concerning if no one model had disproportionate power or control of the market. As with search engines, if there were a dozen Googles, we might care less about the bias or power of any given search algorithm. But with only one Google and almost everyone in the world turning to it for indexing and searching the web, any bias in the search engine has outsized importance. So too with AI. If we end up with only one or a very limited number of AI systems for certain purposes (let alone all generative purposes), then the stakes for society and democracy as to how that system is built and functions would be dramatic. 

The implications for democracy of an AI monopoly should be obvious and obviously dangerous. As concerning as social media or search engine concentration might be, AI monopolies could have an even greater impact given the many functions these tools would perform. If one company, in effect, could set the rules for how people will express themselves in the future, let alone how entities will ingest and analyze all forms of data, create code, or provide the “brains” for all other forms of technology, the power is almost unimaginable. It would also constitute a power much greater than that held by any national government or international organization. All the more troubling would be if a single government, especially an authoritarian one, were to have such a monopoly. At least with a private company, in theory, a democratic government might be able to shut it down or regulate it. 

Needless to say, the pessimism that some bring to the competition question is not universally shared. Others see a proliferation of companies in multitudinous sectors competing over various applications of AI to particular industries or problems. Even if the market for multimodal models becomes concentrated, that does not mean the AI economy as a whole will gravitate to a Google-like level of concentration.

I want to focus on one particular aspect of the competition question that is relevant to democracy: namely, the special importance of open models. The presence and proliferation of open models not only distinguishes AI from social media and search but raises fundamental questions regarding democratic control of AI.

 

Openness promises to distribute the benefits and capabilities of AI. As with open-source software, these benefits would accrue to a broad tech innovation ecosystem growing out of platforms for which the start-up costs may have been considerable. Furthermore, the opportunities would not be geographically constrained. Open models allow for the possibility that under-resourced populations throughout the world without the capital and infrastructure necessary to build large models could share in the opportunities otherwise only available in the jurisdictions where the large AI companies choose to operate. 

Companies such as Meta, xAI, and Mistral have decided to publish their models with open weights. (Some call these models “open source,” which is technically not correct, but the nomenclature is unimportant for our purposes, which are to detail the implications of publicly available, broadly editable models.) Of course, openness represents a continuum, not a binary; even so-called closed models may have some degree of openness about them, and seemingly fully open models may have licensing restrictions on their use. The point from a competition perspective is that powerful AI models may be widely available and entities outside the biggest companies might be able to customize them for their purposes (profit-seeking, existential-risk-promoting, or otherwise).

In one sense, model openness represents the most democratic form of AI. As with the internet itself, which democratized communication so that every user could have the potential to broadcast to the world (an ability in the television age previously held by only a privileged few), truly open AI shifts power away from a single model developer to anyone, anywhere, able to build off it. But this democratic feature of the emerging AI marketplace is also what might place great strain on democracies, as all kinds of bad actors may deploy these powerful tools for their own purposes to undermine democracy.

 

We can see the implications of this “democratic AI” in the consequences of the open image tools released in the first year of the explosion of generative AI. After Stability AI released a version of its open-source image model, Stable Diffusion, users removed some of the built-in guardrails to produce a different model, Unstable Diffusion. In short order, these tools were used to generate an endless supply of child pornography. (14) As a result, the entire enforcement regime used to police online Child Sexual Abuse Material (CSAM) threatens to collapse, as platforms inundate authorities with millions of examples of virtual child pornography, and those charged with enforcement have difficulty telling what is real from what is AI-generated.

Reasonable people can disagree about whether this early identifiable cost of openness outweighs the many benefits of open image models. But openness creates identifiable and potential risks—ones that will be very difficult to redress once the model is, so to speak, out the door. The right approach to risk assessment of open models, as my colleagues at Stanford’s Institute for Human-Centered AI argue, is to examine the relative risk of such models compared with, for example, what a bad actor could already derive from internet searches. (15) It might be all well and good to regulate proprietary image-generation tools, but so long as similar capabilities are enabled by open models, bad actors seeking to use these tools to undermine democracy will have access to them.

Conclusion: What is to be done… and what is not to be done

As Yogi Berra famously said, “It’s difficult to make predictions, especially about the future.” The joke is particularly apt when it comes to envisioning what the future of AI looks like and how it might affect democracy. The prognostications run the gamut from Terminator-like apocalypse to Eden-like Utopia. This uncertainty poses particularly difficult problems for those considering AI policy. Even aside from the oft-mentioned concern that regulation might inhibit innovation, it is difficult at the dawn of a technology to predict which harms are likely to require regulation and what capabilities governments need to enforce new rules. Nevertheless, there are certain “macroprinciples” that could guide our consideration of how AI should be shaped to benefit democracy. 

First, it is important to recognize that we will not be able to “tech our way out” of these AI-democracy challenges. Take the issue of watermarking or authentication of synthetic imagery as an example. Virtually every cross-industry initiative, NGO proposal, or proposed government regulation related to synthetic imagery has at its core the need for some attestation of the provenance of AI-generated content. And to be clear, even in an adversarial environment wherein tech protections like these inevitably spur tech innovations to defeat them, development of this technology is critical so that responsible media organizations, as well as concerned consumers, are able to discern AI-generated content.

However, everything we have learned from our experience with “legacy” forms of disinformation suggests overreliance on this approach fails to address the emotional and psychological dimensions of disinformation, which are at the root of the democracy problem. The fact that an outside source of authority tags content as genuine or fake does not immediately lead to widespread agreement among users about whether it is believable. This is especially true when, as inevitably happens, mislabeling generates contestation that leads users to lose confidence in the authentication system going forward. Moreover, tagging only AI-generated content is insufficient—for the system to work, all content must be labeled either as “authentic” or “not,” which requires more than the cooperation of the AI companies. Then, once we create an expectation that all media must be verified, we have descended into the world of presumptive skepticism that leads to widespread doubt of true content and facilitation of the liar’s dividend—which are the true threats to democracy. 

Second, the enforcement and administration of an AI regulatory regime are even more important than the development of high-level principles. (16) The scope and influence of AI will be so vast, reaching virtually all corners of economic and social life across the globe, and so fast-changing that any given government has limited capacity (let alone in-house expertise and staffing) to wrap its regulatory arms around the technology. Regulatory efforts quickly become outdated as new incarnations of AI emerge. Such was the case with the EU AI Act, which in its early drafts did not address generative AI but then had to be completely rethought once ChatGPT exploded onto the scene and reset the AI conversation. Any system of regulation and administration must be nimble enough to adapt to rapid advances in the technology. This requires an ongoing dialogic relationship between industry and government, such that relevant government agencies need not repeatedly seek new laws to deal with the newest forms and applications of AI.

Third, and relatedly, given the AI talent gap between the public and private sectors, civil society and the market must play active roles in informing government policy. This will require, for example, the development of a robust ecosystem of auditing firms that can be deployed to evaluate AI models and applications. As with the financial industry, outside institutions need to be developed to buttress and inform the work of the public sector, while abiding by rules to avoid conflicts of interest and industry capture. At the same time, we need a regulatory structure that prevents AI firms from checking their own homework without forcing the government into the business of policing every application and development of AI.

Fourth, if a whole-of-society approach is necessary to harness AI’s benefits to democracy while mitigating downside risks, governments must fund public infrastructure to ensure that outsiders are able to fully understand (and participate in) the trajectory of AI development. At the very least, this means building public computing resources so that GPUs (and the next generations of computing technology) can facilitate research on AI trust and safety. (17) In addition, a bedrock prerequisite to public accountability for AI systems will be a system of transparency that gives vetted outside researchers access to proprietary technology prior to deployment. Striking the right balance between transparency, safety, privacy, innovation, and other values is not simple. But we cannot live in a world where the only people with the access and resources necessary to understand the developing technology are those tied to the profit-maximizing missions of the firms.

The relationship between artificial intelligence and democracy is complex, uncertain, and evolving. We need to be modest in our predictions as to how this story will play out. Nevertheless, we need to manage public expectations while also investing in the necessary public resources to steer technological development in a pro-democracy direction. Decisions that governments, industry, and civil society make now will chart the path AI takes in the coming decades. Even in this environment of great uncertainty, we need to develop systems of accountability now that will make it less likely that we look back later with regret only to say: “if only we had acted soon enough.…”

 

Footnotes

(1) Thank you to Benjamin Rosenthal for research assistance and to Dan Ho, Rob Reich, and Ali Noorani for helpful comments.

(2) Owen Covington, “New Survey Finds Most Americans Expect AI Abuses Will Affect 2024 Election,” Today at Elon, May 15, 2024, https://www.elon.edu/u/news/2024/05/15/ai-and-politics-survey/; Morning Consult, National Tracking Poll Topline Report (Project 2308055, August 10–13, 2023), https://pro-assets.morningconsult.com/wp-uploads/2023/09/2308055_topline_AXIOS_AI_Adults_v1_EP-1.pdf.

(3) Sasha Issenberg, The Victory Lab: The Secret Science of Winning Campaigns (New York: Broadway Books, 2013).

(4) Jonathan Haidt and Eric Schmidt, “AI is about to Make Social Media (Much) More Toxic,” The Atlantic, May 5, 2023, https://www.theatlantic.com/technology/archive/2023/05/generative-ai-social-media-integration-dangers-disinformation-addiction/673940/.

(5) Ceren Budak, Brendan Nyhan, David M. Rothschild, Emily Thorson, and Duncan J. Watts, “Misunderstanding the Harms of Online Misinformation,” Nature 630 (2024), 45–53, https://www.nature.com/articles/s41586-024-07417-w.

(6) Jennifer Allen, Baird Howland, Markus Mobius, David Rothschild, and Duncan J. Watts, “Evaluating the Fake News Problem at the Scale of the Information Ecosystem,” Science Advances 6, no. 14 (April 3, 2020), https://www.science.org/doi/10.1126/sciadv.aay3539.

(7) Dan Ladden-Hall, “Google Explains Why Its AI Tool Told Users to Eat Rocks,” The Daily Beast, May 31, 2024, https://www.thedailybeast.com/google-explains-why-its-ai-overviews-told-users-to-eat-rocks-and-glue-pizzas.

(8) OpenAI, “How OpenAI Is Approaching 2024 Worldwide Elections,” January 15, 2024, https://openai.com/index/how-openai-is-approaching-2024-worldwide-elections/.

(9) Democracy Works, “Democracy Works Partnering with Anthropic,” February 21, 2024, https://www.democracy.works/news/democracy-works-partnering-with-anthropic.

(10) Julia Angwin, Alondra Nelson, and Rina Palta, Seeking Reliable Election Information? Don’t Trust AI (The Democracy Projects, February 27, 2024), https://www.ias.edu/sites/default/files/AIDP_SeekingReliableElectionInformation-DontTrustAI_2024.pdf.

(11) Megan McArdle, “Female Popes? Google’s Amusing AI Bias Underscores a Serious Problem,” Washington Post, February 27, 2024, https://www.washingtonpost.com/opinions/2024/02/27/google-gemini-bias-race-politics/.

(12) Rob Waugh, “The Nine Shocking Replies That Highlight ‘Woke’ ChatGPT’s Inherent Bias,” Daily Mail, February 11, 2024, https://www.dailymail.co.uk/sciencetech/article-11736433/Nine-shocking-replies-highlight-woke-ChatGPTs-inherent-bias.html.

(13) Legacy Communications, “According to AI, Males Dominate the Professional Workforce,” February 16, 2024, https://legacycommunications.com/insights/ai-bias/.

(14) David Thiel, Melissa Stroebel, and Rebecca Portnoff, Generative ML and CSAM: Implications and Mitigations (Stanford Cyber Policy Center, June 24, 2023), https://stacks.stanford.edu/file/druid:jv206yg3793/20230624-sio-cg-csam-report.pdf.

(15) Sayash Kapoor et al., “On the Societal Impact of Open Foundation Models,” arXiv, February 27, 2024, https://doi.org/10.48550/arXiv.2403.07918.

(16) Neel Guha et al., “AI Regulation Has Its Own Alignment Problem: The Technical and Institutional Feasibility of Disclosure, Registration, Licensing, and Auditing,” George Washington Law Review 92 (2024), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4634443.

(17) Daniel Ho, Jennifer King, Russell C. Wald, and Christopher Wan, Building a National AI Research Resource: A Blueprint for the National Research Cloud (Stanford Institute for Human-Centered Artificial Intelligence, October 2021), https://hai.stanford.edu/sites/default/files/2022-01/HAI_NRCR_v17.pdf.
