Informational GPS

by Reid Hoffman and Greg Beato (1)

The authors emphasize the importance of broad access and individual agency in realizing AI’s potential benefits. They compare AI to GPS technology and propose strategies to develop equitable and inclusive AI systems that build societal trust and deliver benefits to billions of people.

Until the early 2000s, “paper maps” did not exist—they were simply called “maps.” They were unwieldy and hard to update, and, honestly, if you think texting while driving is a bad idea, try paper-mapping while driving. It was a dangerous era. In 2000, things began to shift: that’s when civilian access to the Global Positioning System (GPS) got a major upgrade. By that point, the U.S. military had already been providing free global access to GPS for more than a decade to anyone who wanted to use it. But out of concerns over national security and other potential misuses, the Air Force had been deliberately scrambling the signal available for civilian use, making it ten times less accurate than the real thing.

By the mid-1990s, though, the utility of GPS in civilian contexts was becoming increasingly obvious. In addition, national security concerns were less pressing in the wake of the Soviet Union’s collapse. The benefits that could accrue from deploying the full power of GPS in non-military contexts, the Clinton Administration reasoned, outweighed the risks. A completely accessible GPS would boost private sector investment and innovation, accelerate adoption rates, and dramatically increase the overall value of GPS as a global public good.

At midnight on May 1, 2000, when this new level of access took effect, there were only around four million civilian GPS users worldwide. But hopes were high that improved performance and falling prices for consumer GPS receivers would quickly turn GPS into an indispensable part of life in the 21st century. “Devices that know where they are will soon be everywhere. And everything is going to know where it is,” James Spohrer, IBM’s Chief Technical Officer, told the New Yorker in 2000. “We are going to map every metre of this planet.” (2)

In the wake of this policy shift, GPS experienced a Cambrian explosion of innovation. Today, it enables a wide range of location-based services, helps synchronize telecommunications networks, supports emergency response and disaster relief efforts, and plays key roles in urban infrastructure management and precision farming, to name just some of its applications. A 2019 report from the National Institute of Standards and Technology estimates that GPS technologies generated $1.4 trillion in economic benefits for the private sector from 1984 to 2017, with 90% of that occurring since 2010. (3)

There are a number of reasons why the story of GPS is relevant to AI’s ongoing development. First, it stands as a clear example of the positive outcomes that can result when the government embraces a pro-technology, pro-innovation perspective and views private-sector entrepreneurship as a strategic asset for achieving public good. Second, it shows how effectively we can turn Big Data like geographic coordinates and timestamps into Big Knowledge—actionable and accessible knowledge that can be used to provide context-aware guidance in many aspects of our lives. Third, and most importantly for democracy, it reinforces individual agency.

While GPS serves many purposes across multiple domains, its breakthrough application was turn-by-turn navigation. GPS and the commercial services built on top of it allow us to move through the physical world with constantly updated proximate knowledge. At literally every turn, these navigation systems increase individual agency by telling us where we are, what else is nearby, what obstacles might impede our progress, and so much more.

On a metaphorical level, large language models (LLMs) and the conversational agents built on top of them function similarly: They increase our capacity to navigate the complex and ever-expanding informational environments that define life in the 21st century. In doing so, they enhance the individual agency of billions of people worldwide, by providing the kind of situational fluency that enables higher engagement and more informed decision-making. 

Enhancing individual agency is especially important given the centralizing tendencies of AI, where extensive data, hardware, energy, and human talent are needed to achieve state-of-the-art performance. To ensure that we develop AI in alignment with the democratic ideals of self-determination and participatory governance, we must pursue design paradigms that prioritize individual agency and give people hands-on access to tools that they can use in practical, open-ended ways. Conceptualizing LLMs as a form of informational GPS provides a model for doing that.

Individual Agency in an Age of Abundant Intelligence 

Along with their similarities, there are clear distinctions between GPS and LLMs. In the case of the former, the U.S. military exercises exclusive control over the development of the core technology. The latter are the product of a diverse array of researchers, engineers, and corporations, and are available as open-source, proprietary, or partially open models that offer limited developer access through public APIs. 

Even more importantly, GPS deals primarily with objective, ground-truth spatial and temporal data—i.e., geographic coordinates and precise timestamps. Large language models, in turn, process and generate context-dependent textual information, and create outputs based on patterns that are rooted in the nuances, complexities, and subjectivities of human language. There is no single ground truth of objective data for LLMs to utilize. 

Instead, every LLM developer maps a unique “informational planet” of its own making. The shape and the contours of that planet are contingent upon the size of the model’s pre-training dataset, the number of its parameters, and the optimization strategies it is subjected to during its pre-training and fine-tuning phases—especially Reinforcement Learning from Human Feedback (RLHF), a specific type of fine-tuning that impacts the model’s behavior, biases, and outputs by aligning it with specific human preferences and values. All of this can significantly shape the landscape of the informational planet the model represents.

While the Earth largely stays fixed, these informational planets can change over time, through updates in training techniques and the incorporation of new data. Business models can potentially have an impact as well. Imagine, for example, how developers might filter training data or tweak optimization algorithms in an attempt to over-prioritize content that drives engagement and ad revenue at the expense of accuracy and representativeness. Finally, since LLMs generate outputs based on statistical probability rather than fixed rules, a single prompt—or “request for directions,” to draw upon the GPS analogy—can produce different outcomes each time you input it. 
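
To make that last point concrete, here is a minimal sketch of sampling variability, assuming the OpenAI Python SDK and an API key; the model name and prompt are illustrative, not anything specified in this essay:

```python
# A minimal sketch of LLM sampling variability, assuming the OpenAI
# Python SDK (pip install openai) and OPENAI_API_KEY in the environment.
# Model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
prompt = "In one sentence, suggest a route from curiosity to expertise."

# With temperature > 0, the model samples from a probability distribution
# over next tokens, so the identical "request for directions" can return
# a different answer on every call.
for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    print(f"Run {run + 1}: {response.choices[0].message.content}")
```

Lowering the temperature toward 0 makes outputs nearly, though not perfectly, deterministic; how much variability a given “informational map” exhibits is itself a design choice.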

That’s why the existing LLM environment, characterized by many developers and diverse development paradigms, is the right approach. In this way, no single entity or ideology dominates the technology as a monolithic “source of truth.” Instead, a multipolar ecosystem that embraces a multiplicity of viewpoints emerges. This diversity feeds into individual agency, by increasing the chances that a wide range of users can find LLMs that align with their own values, preferences, and intentions. (It also helps avoid a vulnerability of GPS—namely, that significant damage to its constellation of 32 satellites, whether intentional or accidental, could disrupt many essential services and aspects of public infrastructure across the globe.) 

Why is individual agency so important in the context of AI? In an essay OpenAI published in 2015 to announce its launch, two of its co-founders, Greg Brockman and Ilya Sutskever, summarized the ethos that would inform their organization’s efforts: “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.” (4) 

In prioritizing “individual human wills” and the “spirit of liberty,” Brockman and Sutskever clearly sought to address long-standing concerns about the potentially oppressive and authoritarian nature of Big Data and the increasingly omniscient technologies that generate and process it. 

Their perspective was pivotal in 2015 and has only grown more important since then. In fact, as we make progress on challenges like model accuracy and bias, concerns over individual agency will likely intensify. Improvements in reliability and fairness will presumably make models more trustworthy and more authoritative—so much so that we begin to cede more and more of our decision-making to AI systems of various kinds. Or others may do so, ostensibly on our behalf but often without our consent or even our knowledge.

“There are banks of giant memory machines that conceivably could recall in a few seconds every pertinent action—including failures, embarrassments, or possibly incriminating acts—from the lifetime of each citizen,” mid-20th century social critic Vance Packard wrote in his 1964 bestseller, The Naked Society. “And brain research has progressed to the point where it is all too readily believable that a Big Brother could implant an electrode in the brain of each baby at birth and thereafter maintain by remote control a certain degree of restraint over the individual’s moods and behavior, at least until his personality had suitably jelled.” 

Ask ChatGPT to translate phrases like “giant memory machines” into more contemporary idioms, and Packard’s fears from 60 years ago would fit seamlessly into current discourse on AI and its potential impacts on privacy and individual agency. The dystopian nightmare of surveillance and compliance that he invoked with his reference to Big Brother and George Orwell’s 1984 feels more au courant than ever. 

Sometimes history rules with an ironic fist, though. Even with the emergence of new technologies that continue to claim a greater place in our lives, the Orwellian nightmare never actually materialized in the way that Orwell, Packard, and subsequent generations of tech skeptics predicted it would, especially in democratic nations. 

In fact, there’s a strong case to be made that the opposite happened. Instead of a world of panoptic telescreens enforcing top-down compliance and conformity, we got a world of PCs and smartphones enabling bottom-up individualism and self-determination. The “organization man” of the 1950s, a human punch card who aspired to nothing less and nothing more than middle-management homeostasis, was slowly vaporized from public discourse and replaced by “rockstar” product managers armed with candy-colored iMacs. 

For better and worse, we now live in a world where “think different” and “do your own research” are rallying cries. The selfie is our signature art form. Social media platforms are arenas for identity expression and rights advocacy. A quarter of the way into the 21st century, the challenge for democracies is not that Orwellian surveillance has led to authoritarianism and cowed compliance with the state. It’s that platform-driven polyvocality balkanizes us. Consensus feels impossible, compromise unthinkable.

Everyone dreams of a tech-driven solution to enforce unity in their own preferred manner. A virtual border wall to reinforce the physical border wall. A repeal of Section 230 and a return to more centralized media gatekeeping. Network states where a public pledge of allegiance to a minimum viable manifest destiny is the price of citizenship. And yet we’re also deeply suspicious and skeptical of technology. The enduring specter of data-driven conformity and disempowerment, lately expressed in books like Shoshana Zuboff’s 2019 bestselling The Age of Surveillance Capitalism, haunts our tweets and Substacks. 

Fully applying AI to grand challenges like sustainable energy abundance, drug discovery, and more equitable access to healthcare and education will require more than just technological breakthroughs. Broad societal understanding, trust, and a sense of shared purpose matter, too. One way to pursue such ends is through applications that enable individual users to access and experiment with AI directly. When people experience the benefits of new technologies in hands-on, self-determined ways, a sense of equity accrues. That’s what happened with automobiles and the internet. When people see what’s in it for them, they’re also more likely to appreciate what’s in it for all of us.

If this seems obvious, as if it were simply the default way of doing things, consider the development paradigms that dominated AI until just a few years ago. By the mid-2010s, recommendation engines, newsfeed curation, and predictive text and auto-complete services were all incorporating machine learning techniques. So too were facial recognition systems, predictive policing apps, and systems making life-changing decisions about who qualified for mortgages or early parole and who did not.

In these instances, the institutions and individuals choosing to use such technologies were certainly enhancing their own operational capabilities. But these were also all scenarios where AI was impacting people who had not explicitly chosen to use it. Even with recommendation services and news curation, you were probably simply following the path that developers had laid out for you. Amazon says that I might be interested in this one thing because I just bought this other thing? Maybe I’ll check it out.

That started to change with the release of wildly popular applications like DALL-E 2 and ChatGPT. Now, there are myriad systems like these that the public can access—AI that affirmatively works for you and with you, rather than on you. Of course, it’s also true that someone else might choose to use generative AI in ways that work on you—for example, by creating a deepfake designed to trick you somehow. But the predominant impact of these tools has been to democratize access to cutting-edge AI in a way that did not previously exist. This shift is crucial in ensuring that AI develops in ways that are conducive to individual agency and human flourishing.

License to Skill

In the time it takes you to read this sentence, the world produces enough data and information to fill 23 billion ebooks. (5) Some of that comes from humans, in the form of tweets, Wikipedia entries, GitHub repos, white papers posted to arXiv, IRS guidances, and TikTok dances. Some of it comes from smartphones, smart thermostats, security cameras, and other IoT infostructure, in the form of GPS data, temperature readings, video footage, and more. 

How do we manage and make the best use of the vast and complex informational landscapes we now inhabit? There are laws, rules, and norms that define your life as a citizen. Beliefs, values, and traditions that contribute to your identity as a member of a particular community or group. Specific knowledge and lexicons you use in your line of work. Different kinds of literacies that cut across multiple realms. 

Life as a human today means constantly upskilling—at work, yes, but everywhere else, too. While digital technologies underlie these challenges, they also help us manage them. The 20th century brought us innovations like email, hyperlinks, search, and emojis in response to new informational demands. The 21st century has given us AI.

As their name implies, large language models are, at heart, systems for analyzing, synthesizing, and mapping language flows. That’s what informs the analogy to GPS navigation systems—LLMs are infinitely applicable and extensible maps that help you get from point A to point B with greater certainty and efficiency. You can ask ChatGPT to “translate” a white paper on Q-learning in terms that will help a person with no computer science background form a basic understanding of the material it covers. You can give it a work contract from a potential client based in a country where you’ve never done business and have it evaluate how representative its terms and conditions are for that market. 

Given that LLMs can generate inaccurate or misleading outputs, it’s always good practice to seek further verification, particularly in high-risk scenarios. However, for gaining a basic understanding of a new field you need to master, learning new terminology, or synthesizing vast amounts of information, LLMs have quickly become indispensable tools for accelerating skills development. Transforming “search” into “fetch,” they automate the labor-intensive and often fruitless process of exploration, analysis, and synthesis that traditional search requires. Instead of a map, where you must figure out your starting point and ending point, then plot a course between them, you get turn-by-turn directions. 

At the same time, you have many opportunities to modify your route along the way. With GPS navigation, you can choose to go off-course at any time, and the system will readjust to accommodate your current position. With LLMs, it’s similar. If a conversational agent provides an output that doesn’t address your query in a way that seems useful, you can modify your prompt and ask again. If you want to double-check an output, or get an output from a different perspective, you can. Ultimately, you can guide, reorient, and challenge an agent in as many ways as you can dream up.

Or to put it another way, conversational agents combine a high degree of automation with a high degree of hands-on control. While you no longer have to perform many of the functions that search generally requires, your level of engagement may actually be higher depending on how actively you decide to participate in managing the agent’s effort. The ability to always make that choice increases your own agency and helps ensure that you achieve outcomes that are tuned to your needs. 

In addition, while human knowledge is broadly accessible in the form of books, videos, and other forms of media, human intelligence tends to be more tightly clustered, given how it’s bound up in actual human brains. There are far more psychiatrists per capita in cities like New York and Seattle than there are in rural counties. Legal experts cluster at high-priced law firms; computer science PhDs with expertise in machine learning gravitate toward Big Tech campuses.

While LLMs do not possess intelligence in the same way that humans do, interacting with them already provides a much different experience than interacting with a Wikipedia page or a podcast. And this will become even more apparent as LLMs acquire more multimodal capabilities. In time it will be possible to receive information, highly personalized to your particular needs, in whatever media format you prefer. 

Ultimately, people across all strata of society will benefit from this technology, just as they do from GPS and smartphones. But as synthetic intelligence diffuses throughout society just as broadly and diversely as synthetic manpower and horsepower do, it’s likely to have an outsized impact on those who lack access to the places where human intelligence traditionally clusters. 

For example, if you’re one of the 25 million people in America who are characterized as having “limited English proficiency” because English is not your first language, and you have a vision impairment, you could show an AI a photo of a letter you received from your utility company and ask it to read it to you in your native language. 
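
As a hedged illustration of what that interaction might look like in code, assuming the OpenAI Python SDK and a vision-capable model (the file name, model, and target language are assumptions for the sketch):

```python
# A sketch of the accessibility scenario above. The file name, model,
# and target language are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()

# Encode the photographed letter for the API.
with open("utility_letter.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Transcribe this letter, translate it into Spanish, "
                     "and summarize what it asks me to do."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

The translated text could then be handed to a text-to-speech service, so a user with a vision impairment could hear the letter read aloud in their native language.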

If you’re a high school student from a low- or middle-income family navigating the college application process with little access to pricey human tutors, coaches, and advisors, LLMs may offer context and guidance that helps you increase your chances of admission to your preferred school.

If you get a letter from your landlord that states you must “quit premises” in 30 days unless you pay $3,000 in “rent arrearage,” an LLM can offer suggestions about potential next steps you can take. 

The ability to intervene, correct, and fine-tune LLM outputs in real time gives them a capacity to meet you where you are that books, video, and other forms of mediated knowledge don’t possess. It’s also true, however, that the training data for many if not most LLMs over-represents English-speaking, American, white male voices, because so much of it came from sources like Reddit and Wikipedia. For all their safety guardrails, LLMs can often still be coaxed into producing toxic, discriminatory, or harmful outputs through various forms of jailbreaking. Nor is maliciousness a requirement for producing toxic or biased outputs. Ask ChatGPT to “create an image of a flight attendant serving lunch to a group of lawyers” and see what you get.

These issues, and others, are real challenges. LLMs are shaped not only by the data they are trained on, including whatever biases and focus areas might inform it, but also by the data they lack. No single model, no matter how all-encompassing it might get, can encapsulate every facet of human experience and diversity in all its three-dimensional nuance. They’re models, not life itself, and so to some extent they will always be reductive in how they render the world. 

So, making them fairer and more inclusive and understanding their limitations will require persistent vigilance. It’s an ongoing process. But it’s a process where we’ve already made significant progress, such as implementing more robust content filters. And it’s a process where we can continue to make progress faster by making models broadly accessible and learning how people use LLMs, where issues arise, and how public beliefs and values regarding how models should function cohere over time. 

That’s another reason why the history of GPS is a useful guide here. In the early days of GPS, not everyone agreed that opening full GPS access to the public was a great idea: Why make it easy for adversarial states, terrorists, stalkers, and criminals to access such precise geolocation information? Concerns over what could possibly go wrong led the Pentagon to deliberately inhibit the utility of GPS for more than a decade.

Since that policy was repealed, however, we have not seen the significant downsides that the Department of Defense feared would come to pass with free global access to GPS. That doesn’t mean GPS is fail-safe or risk-proof; far from it. A range of mishaps and abuses regularly occur. The rare but tragic instances where drivers follow erroneous navigation system instructions into remote terrain, or into a body of water, with fatal consequences, draw the most attention. But most people who rely on GPS navigation services have experienced similar but ultimately harmless scenarios at one time or another, with systems providing confusing directions, sending them down private roads or paths, or otherwise leading them astray.

There are also instances where bad actors jam or spoof the precise timing signals that GPS relies on, causing receivers to calculate incorrect positions or fail altogether. Thieves who steal entire shipping containers use jammers to disable the GPS tracking tags on the goods inside them. Oil tankers defying U.S. sanctions have used spoofers to make it look like they’re in one location when they’re really picking up cargo at shipping terminals in Russia for eventual delivery to China. (6)

For the last 20-plus years, however, the prevailing story of GPS is not that its limitations, flaws, and vulnerabilities have led to significant disruptions in security, navigation, and other essential services. Instead, it’s that GPS has globally enabled a wide range of massively beneficial services, every minute of every day, week after week, year after year. With generative AI models that function as a new form of informational GPS, we’re now on the same path. It’s a journey that will give billions of people new powers to navigate the 21st century.

 

Footnotes

(1) The authors appreciate the opportunity to contribute to The Digitalist Papers, and extend their deepest gratitude to the volume’s faculty leads, Erik Brynjolfsson, Sandy Pentland, Nate Persily, and Condoleezza Rice; editors Angela Aristidou and Susan Young; and all the fellow contributors and reviewers who offered feedback that helped shape our essay.

(2) Michael Specter, “No Place to Hide,” New Yorker, November 17, 2000: 100, https://www.michaelspecter.com/wp-content/uploads/satellite.pdf.

(3) Alan O’Connor, Michael Gallaher, Kyle Clark-Sutton, Daniel Lapidus, Zack Oliver, Troy Scott, Dallas Wood, Manuel Gonzalez, Elizabeth Brown, and Joshua Fletcher, Economic Benefits of the Global Positioning System (GPS), RTI Report Number 0215471 (National Institute of Standards and Technology, June 2019), https://www.rti.org/publication/economic-benefits-global-positioning-system-gps.

(4) Greg Brockman and Ilya Sutskever, “Introducing OpenAI,” December 11, 2015, https://openai.com/index/introducing-openai/.

(5) According to the IDC Worldwide Global DataSphere Forecast 2023–2027, worldwide data production in 2024 is estimated at 147 zettabytes, or roughly 402 billion gigabytes per day. Given that a typical ebook requires 2 megabytes of storage, that equates to enough data to fill roughly 201 trillion ebooks a day, or 139 billion ebooks per minute, or 2.3 billion ebooks per second. If it takes you 10 seconds to read that sentence, the world produces enough data to fill 23 billion ebooks. See “3 Things Driving Data,” Platter Chatter, February 2, 2024, https://www.cbldatarecovery.com/blog/data-recovery/3-things-driving-data-storage-tech-trends-2024; John Rydning, Worldwide IDC Global DataSphere Forecast, 2023–2027: It’s a Distributed, Diverse, and Dynamic (3D) DataSphere, IDC Research, April 23, 2023, https://www.idc.com/getdoc.jsp?containerId=US50554523&pageType=PRINTFRIENDLY.
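
The back-of-envelope arithmetic in that footnote checks out; here is a small Python sketch of it, using the footnote’s own figures (147 zettabytes per year, 2 megabytes per ebook):

```python
# Back-of-envelope check of footnote 5, using its own assumptions:
# 147 zettabytes produced in 2024, 2 MB per typical ebook.
ZB = 10**21                       # bytes per zettabyte (decimal)
daily_bytes = 147 * ZB / 365      # ~4.0e20 bytes, i.e. ~402 billion GB/day

ebook_bytes = 2 * 10**6           # 2 megabytes per ebook
per_day = daily_bytes / ebook_bytes       # ~2.0e14 -> ~201 trillion a day
per_minute = per_day / (24 * 60)          # ~1.4e11 -> ~139 billion a minute
per_second = per_day / (24 * 60 * 60)     # ~2.3e9  -> ~2.3 billion a second

# Ten seconds of reading time -> ~2.3e10, i.e. ~23 billion ebooks.
print(f"{per_day:.2e}/day, {per_minute:.2e}/min, {per_second:.2e}/s")
print(f"In 10 seconds: {per_second * 10:.2e} ebooks")
```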

(6) Reporting on such incidents in 2023, the New York Times explained that while such shipments are not illegal for the tankers to engage in, because neither Russia nor China recognizes U.S. sanctions, most shipping industry insurers are based in the West and are thus bound by those sanctions. The tankers that engage in this practice are therefore trying to deceive their insurers in order to keep the coverage they need to operate in major ports around the world. Christiaan Triebert, Blacki Migliozzi, Alexander Cardia, Muyi Xiao, and David Botti, “Fake Signals and American Insurance: How a Dark Fleet Moves Russian Oil,” New York Times, May 30, 2023, https://www.nytimes.com/interactive/2023/05/30/world/asia/russia-oil-ships-sanctions.html.
