Generative AI and Political Power

by Eugene Volokh (1)

As people come to rely on AI tools to answer questions, they will likely use those tools to answer political questions as well. The answers that the AI companies choose to provide, the author argues, may thus subtly but substantially influence public attitudes and, therefore, elections. That is especially so to the extent that Big Tech has been shifting from a “user sovereignty model,” in which tools (word processors, browsers, search engines) were intended to be faithful servants of the user, to a “public safety and social justice model,” in which tech tools (social media platforms, AI assistants) are designed in part to refuse to output certain answers that their creators think are dangerous or immoral. What should we think about that?


I. The Likely Surge in Concentrated Big Tech Power

Large Language Models (LLMs) are being integrated into search engines and other products, (2) and seem likely to become the main sources through which people seek answers to questions. Why search for webpages that you must then read, when you can use software that generates the answer? (3)

And this will likely apply equally to political questions: What are the arguments for or against various ballot measures? Is there an illegal immigration crisis? Which candidate is more supportive of abortion rights or gun rights? AI programs offer the prospect of efficiently providing political information to voters who don’t want to invest much time in research. (4)

Leading AI companies will thus acquire tremendous power to influence political life. Arguments included in AI outputs will tend to become conventional wisdom. Arguments AI programs decline to provide will largely vanish to most people, except the most politically engaged. In a closely divided nation, influencing even 20% of the public can massively affect elections. And if a few giant corporations can exclude or marginalize many citizens’ beliefs, political life becomes less pluralistic and less democratic.


Of course, this concern has long been raised about traditional tools for answering political questions: the media. But media concentration has indeed drawn attention and action. “[I]t has long been a basic tenet of national communications policy that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public.” (5) That tenet has led to antitrust enforcement; (6) media cross-ownership limits; (7) requirements that cable systems carry certain existing channels; (8) the Fairness Doctrine; (9) and more.

These regulatory solutions caused their own problems, and some or all may have been unwise. (10) We should learn from those failures. And perhaps many more companies will enter and thrive, and dilute the current AI companies’ market share—or perhaps nearly all users will continue to get political information from other media as well as from AI companies, so that AI companies will end up having relatively little political influence. But it’s possible that AI outputs will be highly politically influential, and that the market will remain dominated by a few companies (just as, say, the search engine market is so dominated), whether because of entry costs (11) or because big companies will buy up the upstarts. Perhaps we should therefore consider again the dangers of concentrated power among those who provide information on public affairs.

II. The Retreat from User Sovereignty

To the extent that tech companies adopt what one might call a User Sovereignty Model, we might not have the same concerns. Few of us worry that Big Tech can control what we write or view even though a few companies control most of the American word processor and browser markets. 

Big Tech appears to view a word processor’s or browser’s job solely as helping users write or view what they want. Microsoft Word won’t refuse to let you write something it thinks is racist. Google Chrome won’t refuse to access neo-Nazi or Communist sites, or refuse to display swastikas or racial slurs. The one prominent exception—Google scans Google Drive and Gmail for apparent child pornography (12)—is narrow, and closely tied to an express legal obligation. 

Search engines, unlike word processors or browsers, must choose what content to display. But even so, my sense is that search providers generally view themselves as faithfully aiming to respond to users’ desires. And even if there might be ideological bias in search engine results, it has generally been hard to see, perhaps because search algorithms are opaque, and because search engines have never touted any ideological spin to those algorithms. 

But AI companies have deliberately shifted, in part, from this User Sovereignty Model to what one might call the Public Safety and Social Justice Model. They proudly stress that they deliberately implement “guardrails” aimed at blocking users from producing harmful outputs, such as bomb-making instructions or speech “that promotes discrimination or racism.” (13) 

And the announced guardrails make one wonder what other ideological skews have been deliberately but quietly inserted. A recent Future of Free Speech study suggests there are many. 

Google Gemini and ChatGPT-3.5, for instance, apparently wouldn’t write Facebook posts that (1) opposed allowing transgender athletes in women’s competitions, (2) argued that COVID-19 stemmed from a Chinese lab leak, or (3) argued in favor of abortion prohibition—but would write Facebook posts providing the opposite positions. (14) ChatGPT wouldn’t write a Facebook post arguing that no measures should be implemented to deal with alleged systemic racism, but would write a post taking the opposite view. (15) Google Gemini wouldn’t write a Facebook post arguing that sex quotas were not needed to fight patriarchy, but would write a post on the opposite side. (16) I have noted other examples elsewhere. (17) 

Some of this may have organically flowed from the training data, but some seems to have flowed from deliberate decisions by the AI companies. (18) Indeed, the Google Gemini black Nazi/female Pope fiasco admittedly stemmed from a poorly implemented attempt to deliberately assure “diversity” of output. (19)


III. Why the Retreat from User Sovereignty? 

Here are a few speculations as to why some companies have taken such a sharply different approach to generative AI than to other products, where User Sovereignty still dominates. 

A. Authorship

AI companies might see themselves as at least partly the authors of AI-generated output, even when it’s produced in response to user prompts. They may therefore feel more responsibility to make the output consistent with their own moral views—and may feel that they will be held responsible by the public, legislators, or regulators for their AIs’ output. In this respect, AI programs might be assimilated in some people’s minds to newspapers or magazines, which we do expect to have an editorial position that the publishers can defend as a moral or factual matter.

B. Feasibility (plus Path Dependence) 

When word processors, browsers, and search engines were developed, it was hard to tell, except very crudely, what people were using them for. Early versions of Microsoft Word, for instance, couldn’t accurately determine whether a document contained racist material. They might recognize that it contained racial slurs, but those could mean very different things in different contexts (e.g., a racist polemic vs. a study of hate crimes vs. a movie script). 

But AI programs can fairly accurately determine their outputs’ message. Indeed, they may need to do that to generate the outputs. It’s thus more feasible to screen out, to some degree, messages that are seen as harmful or productive of injustice. Something similar could technically be done by integrating AI with older tech products, but those products’ history might keep them on the User Sovereignty path.

C. Guardrail Creep

AI companies can be held legally liable for libelous outputs, copyright-infringing outputs, and the like. (20) They might be liable for outputting material that can be used to cause physical harms, such as incorrect medical advice or bomb-making instructions. (21) They might also be liable for outputting material that violates some countries’ “hate speech” laws.

Companies thus must create guardrails to deal with those problems, in a way they largely haven’t had to do for other products. (22) And once they see themselves in the “guardrails to protect against bad outputs” business, they might find it institutionally and politically easier to add further guardrails, including for non-illegal material that nonetheless runs counter to managers’, customers’, legislators’, or activists’ sense of social justice. 

D. The Terminator Problem

People have long and sensibly worried about AIs going rogue. We use our intelligence to kill animals we view as threats, or as tasty. If AIs become more intelligent than us, why wouldn’t they do the same? Even if AIs don’t deliberately seek to kill us, what if they just blithely maximize, without regard for us, their own preferences, whether ones we instructed them to maximize or ones that emerged without our prompting? (23) 

Generative AI by itself can’t easily cause these problems, so long as it’s just used to output material in response to a user prompt. Still, tech companies appear to be worried about a future AI Apocalypse. (24) The companies are (laudably) feeling responsible to act now to prevent such future harms. And once the companies are in this mindset, it becomes easy for them to also feel responsible for preventing harms that stem from AIs being used by humans to spread or reinforce what the tech companies view as bad ideas. 

To be sure, here as with other possible explanations, there are also opposing pressures. There is a normal human inclination to reject responsibility: Accepting responsibility can be costly, considering the financial cost of having to develop various guardrails, the public relations cost of some people’s disapproval of certain guardrails, and the emotional cost of worrying that one isn’t discharging the responsibility well. But the concern about rogue AI may still have helped push toward the Public Safety and Social Justice Model, despite these extra costs. 

E. Political Zeitgeist (plus Path Dependence) 

Finally, my sense is that the 1970s–90s, when word processors, browsers, and search engines emerged, were largely a User Sovereignty world in the view of most computer technology developers: Developers were trying to give users cool tools to do what users chose to do. And once that era’s products were set on that path, people came to expect new versions to likewise take a User Sovereignty approach.


But in the 2010s–20s, more elites began to see technologists’ mission as making the world more equitable, including by stymieing supposedly deplorable user behavior. Decision-makers within the AI companies appear to have acted to implement that mission. 

To be sure, cynics might say this was merely profit maximization, with tech companies solely seeking to satisfy constituencies—legislators, regulators, advertisers, activists, users—whose disapproval might cause business difficulties. (25) If so, perhaps the relevant political zeitgeist relates to what those constituencies thought the AI companies’ mission should be. But my sense is that both ideology and profit mattered in some measure to the executives and engineers, as they do to people more generally.

IV. Opportunity for Growing Government Power 

Growth in Big Tech power may also increase the power of certain kinds of government officials. Like the giant social media companies, giant AI companies can easily become targets for government pressure, aimed at getting the companies to promote the views that government officials favor. (26) (Of course, they may also become targets for outright government regulation, which may raise complex First Amendment questions as to U.S. government actions (27) and even more complex questions once one considers all possible foreign regulators as well—too complex to consider in this short essay.) 

It’s logistically difficult for government officials to control debate in thousands of newspapers or on millions of user sites, whether through threat of regulation, threat of investigation or condemnation, or just attempts at persuasion. Controlling each publisher would yield officials comparatively little benefit. And if the officials think publicity about the pressure campaign would be bad for them politically, they might be reluctant to reach out to a large array of publishers, for fear that at least one might publicize the officials’ actions. But the few major social media platforms have been more tempting targets, because getting one vast platform to restrict certain kinds of speech could, from the government officials’ perspective, yield huge benefits. If there are likewise only a few major AI companies, they too might become targets for government officials who want to implement their preferred Public Safety and Social Justice agenda.

V. Possible Reactions 

What, then, should we do about this problem—if we think it’s a problem?

A. Government Nondiscrimination Mandates 

Sometimes, the law limits companies’ ability to leverage economic power for political purposes. All states ban private employers from firing employees based on how they voted. Many ban private employers from trying to restrict their employees’ other political activities. (28) 

Phone companies, both landline monopolies and competitive cellular companies, are “common carriers,” which can’t cancel, say, the Klan’s or the Communists’ phone lines, even if they don’t want their property used by such groups. One historical reason for such common carrier treatment was 1800s telegraph companies’ refusal to transmit certain kinds of messages. (29) Texas and Florida have used this common carrier model to try to ban certain kinds of viewpoint discrimination by social media platforms. 

But a similar approach to generative AI would be unworkable. Even more than for search engines, (30) generative AI software is useful only to the extent that it avoids inaccurate output (contestable as judgments of accuracy might be). Content-neutrality mandates would make generative AI useless. 

And even viewpoint-neutrality mandates would be unsound. If I ask an AI program to explain the best arguments for and against a particular policy, for instance, the program would have to choose among viewpoints, excluding ones that appear to be extremist or frivolous. It can’t usefully provide outputs that really show all perspectives on an issue.

Of course, some might argue that AI companies should follow not viewpoint neutrality as such but rather User Sovereignty: The companies may promote some viewpoints over others, but only in the service of users’ own preferences.


An analogy might be to lawyers. Lawyers obviously can’t be viewpoint-neutral in the arguments they make in court. But lawyers (paid or pro bono) representing clients should serve their clients’ interests, not the lawyers’ own views of social justice or public safety (though they may pick clients in the first place based on the lawyers’ own views). 

It’s hard, though, to see how any such approach could be effectively implemented as a legal rule for AI governance. If a user suspects that some output stems from a Social Justice motivation on the AI company’s part rather than a User Sovereignty desire to give the user what that user sees as useful, the legal system can’t easily decide whether that’s so, given the opacity of modern generative AI software. 

And that’s especially true if the AI companies are constrained by some legal obligations, for instance, to avoid outputting libels or medical misinformation. Those legal obligations may be consistent with User Sovereignty since users generally want accurate output. Still, guardrails aimed at preventing such outputs will likely diminish the output of related but not illegal material, even when the users would like to see that material. 

There may sometimes be a smoking gun, such as the log of a fine-tuning session in which an AI company employee deliberately “taught” the AI not to output certain viewpoints because of that employee’s own beliefs. But those will likely be rare, especially if such expressly ideological fine-tuning is forbidden. 

Finally, AI companies may have their own First Amendment rights to design their models to reject views they disfavor. (31)


B. Protecting Competition

Competition might solve the problems described here. Competition could pressure some companies to return to User Sovereignty. And it could lead others to produce many different AIs with different visions of Public Safety and Social Justice, so User Sovereignty could be served by users choosing AIs they prefer. Indeed, evidence of AI program ideological bias may itself fuel competition by stimulating an appetite for less biased (or differently biased) programs. We should be on the lookout, though, for possible barriers to competition that might need to be removed. Here’s one possibility, though at this point it’s unclear whether it will come to pass.

AI companies are being sued by copyright owners, who claim that the AI products’ output infringes the copyright owners’ work, and that training the products on that work was itself an infringement. Setting aside whether the claims are sound as a matter of copyright law, what will likely happen if the owners prevail? 

Surely the big AI companies wouldn’t just shut down. Rather, they would likely offer the copyright owners massive licensing fees—OpenAI, Microsoft, and Google can afford a lot—for a license covering both past and future use of the copyrighted works. (32) Perhaps the owners could join to create a collective licensing agency, along the lines of ASCAP and BMI, which have long collectively licensed musical compositions. That agency could authorize AI companies to use the works, and distribute the licensing fees to its members.

That might make sense as a matter of copyright law, but it could also create a barrier to entry into the generative AI market. Say, for instance, that the agencies demand massive flat fees, perhaps the likely billions that OpenAI and Google would be prepared to pay. An upstart AI company might not be able to afford that fee, especially since investors will realize they might never recoup it if the upstart fails to draw significant market share from the giants. (Most upstarts do fail.) 

Or even if the agencies demand fees proportioned to the licensee’s revenue or user count, the agencies may well have the authority to refuse licenses. (33) Say the agencies are approached by a company that wants to develop Buckley, a hypothetical conservative AI alternative. Perhaps the agencies will just seek to maximize their members’ revenue. But perhaps the agencies, supported by many leading copyright owners—or under pressure from advocacy groups, industry groups, or government officials—might see themselves as having a Public Safety and Social Justice imperative to say no to Buckley. (34) If that’s so, a competitive check on the AI companies’ ideological missions might be precluded by the licensing agency’s own ideological mission (which might match that of the AI companies). 

The Parler story offers a cautionary tale. As people (mostly conservatives) in the late 2010s began to be concerned about Twitter (now X) and Facebook restricting various views, many responded, “You don’t like the moderation? Start your own platform.” So wealthy conservative Rebekah Mercer and some others did, funding Parler. (35) Parler imposed many fewer restrictions on user speech. 

Too few, it turned out, for Big Tech. Several days after the January 6, 2021, riot, Amazon removed Parler from its cloud hosting service, “effectively kicking it off of the public internet after mounting pressure from the public and Amazon employees.” Apple and Google likewise removed the Parler app from their app stores. (36) As a condition of allowing it to return, the tech companies demanded that Parler impose more speech restrictions. (37) Parler did come back up some weeks later, but it has since been a shadow of its former self. “What we said about starting your own—just a little joke!” the message seemed to be. “Of course you can’t start something that allows speech we dislike enough, unless you also start your own app store, and your own web hosting services. Then we’ll see what further pressure points we might find to get you back in line.” And the message was likely received not just by Parler, but also by other prospective entrants.

This could happen to heterodox AI companies as well, since such AI companies would need supporting infrastructure just as Parler did: hosting, payment processing, access to mobile devices, computer security, access to specialized computer chips, and perhaps licenses for access to other products. (38)

Perhaps all this should still be left to the free market, even in a market dominated by a few Big Tech players. Indeed, because regulation often itself raises costs in ways that especially burden new entrants (compared to wealthy incumbents), perhaps deregulation is the best tool for reducing barriers to entry. Perhaps, for instance, Section 230—which, for better or worse, freed the computer industry from the regulatory rules set forth by state tort law and by various state statutes—should indeed be extended at least in some measure to AI companies; if that happens, they wouldn’t face the legal pressure, discussed above in Part III.C, to restrict their outputs. (Of course, if one thinks that libel law, negligence law, and the like are important tools for preventing various harms caused by erroneous AI output, such an extension might do more harm than good.) 

But perhaps some structural governmental mandates are needed, to promote “the widest possible dissemination of information from diverse and antagonistic sources.” There might, for instance, be a broadening of the essential facilities doctrine, to require technology infrastructure companies to provide access to all comers. (39) There might be some preferences for freely available models that new entrants can build on. There might be some transparency requirements where transparency is possible—such as mandating disclosure of reinforcement learning decisions or prompt modifications. 

There might also be compulsory licensing schemes for intellectual property, to retain an incentive for creators but to prevent creators’ blocking future uses of their works. (40) Or courts that worry about the need to promote AI company competition might be more inclined to reject—for instance, under the fair use rubric—the copyright claims of the creators of materials used in training data.

C. Public Pressure Toward Results the Pressure Organizers Like 

AI companies’ ideological restrictions may sometimes partly stem from fear of public reaction and consequent financial loss: “We can’t let our software output view X, since some people would be outraged if this happened.” But what then would be a politically sensible reaction for people who think those views (e.g., the view that transgender athletes shouldn’t be able to compete on women’s sports teams (41)) shouldn’t be excluded? 

Those people would need to be able to organize political pushback, such as publicity, boycotts of AI companies, or boycotts of AI companies’ investors or business partners. And they might have the political power to do so (rightly or wrongly). 

Such fights might seem socially wasteful, but they may be the only way for relative outsiders to Big Tech to succeed. Perhaps future students of politics will view ideological movements that don’t mobilize to influence AI companies with the same contempt with which people view movements that failed to react to other social and technological changes—radio, the internet, the widening electoral franchise, and more.

D. Public Pressure or Advocacy Toward the User Sovereignty Model 

Finally, despite the Public Safety and Social Justice Model’s undeniable appeal to many, perhaps AI companies can be pushed back toward the User Sovereignty Model. Indeed, that model might fit well with competitive pressures and the anti-paternalistic “customer is always right” business ethic that competition has long fostered.

“You Big Tech companies aren’t responsible for what people do with the output of your generative AI,” the argument would go (perhaps with rare exceptions, such as for instructions on nonobvious ways to commit serious crimes, such as guidance on constructing biological weapons). “And you shouldn’t have the power that would come with that responsibility. (42) We’ll absolve you of fault when we learn that someone used your software to create something offensive or hateful. But we’ll in turn insist that you not use your massive economic power to influence our political lives through your products’ outputs.” 

Whether such a campaign to influence AI companies—and the public—will succeed is hard to predict. But it may be worth considering.

VI. Conclusion 

A trusted and expert advisor can enjoy great power. The king may be sovereign, but the advisor who has the king’s confidence can often influence policy more than the king himself can. 

When the people are sovereign, their advisors enjoy great power as well. If generative AIs become our trusted advisors, guiding us with answers to questions small and large, they too can influence policy. 

We should think hard about what the consequences of that might be, and how those consequences, if dangerous, can be mitigated. And in particular, we should consider whether we should take steps—whether through law or market pressure—to promote a User Sovereignty model.

 

Footnotes

(1) The author would like to thank Angela Aristidou, Laura Bitesto, John Cochrane, José Ramón Enríquez, Anika Heavener, Mark Lemley, Larry Lessig, Nate Persily, and Erica Robles-Anderson.

(2) See Will Douglas Heaven, “Google’s Gemini Is Now in Everything. Here’s How You Can Try It Out,” MIT Technology Review, February 8, 2024, https://www.technologyreview.com/2024/02/08/1087911/googles-gemini-is-now-in-everything-heres-how-you-can-try-it-out/.

(3) See, e.g., Sarah E. Needleman, “How Generative AI Will Change the Way You Use the Web, from Search to Shopping,” Wall Street Journal, October 17, 2023, https://www.wsj.com/tech/ai/how-generative-ai-will-change-the-way-you-use-the-web-from-search-to-shopping-457c815f; Kevin Roose, “Can This A.I.-Powered Search Engine Replace Google? It Has for Me.,” New York Times, February 1, 2024, https://www.nytimes.com/2024/02/01/technology/perplexity-search-ai-google.html.

(4) I focus here on the potential effect of generative AI on voters, who are likely to want quick answers, rather than on experts or legislative staffers.

(5) United States v. Midwest Video Corp., 406 U.S. 649, 668 n.27 (1972).

(6) See, e.g., Associated Press v. United States, 326 U.S. 1, 20 (1945).

(7) See, e.g., FCC v. Nat’l Citizens Comm. for Broadcasting, 436 U.S. 775 (1978).

(8) See, e.g., Turner Broadcasting System v. FCC, 512 U.S. 622 (1994).

(9) See, e.g., Red Lion Broadcasting Co. v. FCC, 395 U.S. 367, 380 (1969).

(10) See, e.g., Thomas G. Krattenmaker and Lucas A. Powe Jr., Regulating Broadcast Programming (MIT Press, 1994).

(11) Cf. Nate Silver, “Google Abandoned ‘Don’t Be Evil’—and Gemini Is the Result,” Silver Bulletin (February 27, 2024), https://perma.cc/5NTR-2Z89.

(12) See Kashmir Hill, “A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal.,” New York Times, August 21, 2022, https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html.

(13) See, e.g., OpenAI, GPT-4 Technical Report (March 27, 2023), https://perma.cc/U9VV-M5UA. I use Social Justice to refer to (1) a general commitment to excluding allegedly harmful moral and political views and (2) a particular ideology loosely associated in America today with the political Left. But the concerns raised in this essay would potentially apply regardless of whether “social justice” reflects the views of the Left, the Right, or anyone else.

(14) Jordi Calvet-Bademunt and Jacob Mchangama, Freedom of Expression in Generative AI: A Snapshot of Content Policies (Future of Free Speech, February 2024), 29–30, https://futurefreespeech.org/report-freedom-of-expression-in-generative-ai-a-snapshot-of-content-policies/.

(15) Calvet-Bademunt and Mchangama, Freedom of Expression at 31.

(16) Calvet-Bademunt and Mchangama, Freedom of Expression at 31.

(17) See Eugene Volokh, “Google Bard AI Responds to ‘What Are Some Good Things About [Trump’s/Biden’s] Presidency?,’” The Volokh Conspiracy, March 23, 2023, https://reason.com/volokh/2023/03/23/google-bard-ai-asked-what-are-some-good-things-about-trumps-bidens-presidency/.

(28) See, e.g., Eugene Volokh, “Private Employees’ Speech and Political Activity: Statutory Protection against Employer Retaliation,” Texas Review of Law and Politics 16 (2012): 295, http://www.law.ucla.edu/volokh/empspeech.pdf; Eugene Volokh, “Should the Law Limit Private-Employer-Imposed Speech Restrictions?,” Journal of Free Speech Law 2 (2022): 269, https://www.journaloffreespeechlaw.org/volokh2.pdf.

(29) See Genevieve Lakier, “The Non-First Amendment Law of Freedom of Speech,” Harvard Law Review 134 (2021): 2299, 2322, https://harvardlawreview.org/print/vol-134/thenon-first-amendment-law-of-freedom-of-speech/.

(30) See Eugene Volokh and Donald M. Falk, “First Amendment Protection for Search Engine Search Results,” Journal of Law, Economics & Policy 8 (2012): 883 (white paper commissioned by Google), https://www2.law.ucla.edu/Volokh/searchengine.pdf.

(31) See Volokh, Lemley, and Henderson, “Freedom of Speech and AI Output.”

(32) Consider Google settling a copyright infringement lawsuit over Google Books Search in 2008 for $125 million. Google, “Authors, Publishers, and Google Reach Landmark Settlement,” Google News, October 28, 2008, https://googlepress.blogspot.com/2008/10/authors-publishers-and-google-reach_28.html. Google Books Search, though, was a fairly minor product for Google; the payment for using copyrighted works that formed part of the training data for AI products may be much larger.

(33) ASCAP and BMI lack such authority, but that is because of the 1941 and 1950 antitrust consent decrees that govern their behavior. See Broadcast Music, Inc. v. CBS, Inc., 441 U.S. 1 (1979).

(34) Compare the Google Gemma license discussed infra note 38.

(35) Kelsey Vlamis, “Rebekah Mercer Is Funding Parler, the Social-Media App Touted by Republican Politicians and Pundits That Conservatives Are Flocking To,” Business Insider, November 14, 2020, https://perma.cc/V67F-SBEL.

(36) See Brian Fung, “Parler Has Now Been Booted by Amazon, Apple and Google,” CNN Business, January 11, 2021, https://perma.cc/4XF7-SWL6.

(37) Kif Leswing, “Apple Will Reinstate Parler,” CNBC, April 19, 2021, https://perma.cc/3LFQ-TX6E.

(38) So far, the major AI companies have mostly declined to assert patent rights in inventions that make generative AI possible, but that might later change. And even open models released by the Big Tech companies are often released with ideological conditions: Google, for instance, prohibits users from “us[ing]” or “allow[ing] others to use” the open-access “Gemma or Model Derivatives to” “[g]enerat[e] content” that (among other things) promotes “hatred,” “violence,” or “self harm,” or “may have unfair or adverse impacts on people, particularly impacts related to sensitive or protected characteristics.” “Gemma Prohibited Use Policy,” Google AI for Developers, February 21, 2024, https:// perma.cc/ G4JC-UJMR.

(39) Cf. Final Judgment, United States v. Microsoft Corp., no. 98-cv-1232 (CKK) (D.D.C. Nov. 12, 2002) (antitrust consent agreement, among other things limiting Microsoft’s discretion to terminate contracts with competitors, and requiring Microsoft to allow end users to switch, “via an unbiased mechanism readily available from the desktop,” from Microsoft default applications to competitor applications).

(40) Such schemes are familiar in copyright law, though they are controversial. See, e.g., 17 U.S.C. §§ 111, 115, 119, 122.

(41) See Calvet-Bademunt and Mchangama, Freedom of Expression at 30.

(42) See Eugene Volokh, “The Reverse Spider-Man Principle: With Great Responsibility Comes Great Power,” Journal of Free Speech Law 3 (2023): 197, https://www.journaloffreespeechlaw.org/volokh3.pdf.
