AI Meets the Cascade of Rigidity
by Jennifer Pahlka (1)
Diminished state capacity feeds civic disengagement as risk-averse governments fail to meet the public’s expectations for service delivery. The author argues for understanding how bureaucratic constraints actually operate and for building the capacity and competence government needs to use AI for more effective governance.
A quarter of the way into the 21st century, we face multiple overlapping crises, what Adam Tooze calls the polycrisis. (2) One factor quietly underpins or amplifies each of these individual crises: the diminished state capacity of the world’s advanced democracies. State capacity is simply the ability of a government to achieve its policy goals, and its decline is visible in everything from the botched COVID-19 response to our inability to build green infrastructure in the face of an impending climate collapse. In these cases and many others, the famously dysfunctional U.S. Congress (and its state and local counterparts) did in fact act, passing much-needed if imperfect legislation. But the intent of that legislation was only partially realized. Elected leaders who pay attention to outcomes feel like they are trying to steer the ship of our nation, but the rudder is loose from the helm. Even when we pass laws, we fail to implement them.
This crisis of state capacity isn’t as simple as a need for more or better technology, but it does implicate our government’s failure to adopt internet-era tools and ways of thinking. The consequences of these failures are hard to overstate; they are fundamentally eroding our democracy. The trifecta of a rise in need for public services, a rise in the public’s expectations around the delivery of those services, and government’s failure to meet both has profoundly alienated large segments of the voting public. Political scientist Joe Soss made the connection: he showed that participating in means-tested benefit programs significantly reduces the chance that people will vote. The process of accessing benefits is confusing, often insulting, and, for many people, impossible to complete successfully. “Because clients interpret their experiences with welfare bureaucracies as evidence of how government works more generally, beliefs about the welfare agency and client involvement become the basis for broader political orientations,” Soss explains. (3) But Soss’s work was done in the 1990s. How many of these same alienated clients still simply skip voting today, now that they have a candidate who promises to destroy the administrative state that showed them such disregard?
In recent years, the U.S. government at all levels has made significant but incomplete progress catching up to the expectations and ways of working of the internet era. Nowhere near done with its first digital transformation, though, it has now been jolted rudely into the age of AI. Government’s reaction so far has looked a lot like its reaction to past paradigm shifts: words, hundreds of thousands of them, describing emerging (and hotly contested) dos and don’ts to guide this transition.
AI is risky, both in ways we understand and because there is so little we understand, given its emergent properties. Because managing risk is the effective raison d’être for many government institutions, they’ve swarmed to weigh in like ants to a picnic. The result has been the usual mix of controls and mandates designed primarily to keep bad things from happening. This well-intentioned guidance also mentions the opportunities of AI in the public sector, but rarely talks about an equally powerful opposing risk: the risk of further widening the gap between public sector and private sector capacity.
The internet era coincided with a decline in state capacity. It’s debatable how much technological change contributed to this decline, but what’s not debatable is that the AI era must see a reversal of this trend if we are to confront our polycrisis, and AI itself is going to have to be part of that reversal, whether we like it or not. Unfortunately, so far, we are meeting this transition with the same tools we employed the last time: mandates and constraints. But we deeply misunderstand how mandates and constraints operate in a bureaucracy already subject to conflicting constraints and managed for conformity to process instead of outcomes.
This essay seeks not only to predict more realistically how the current safeguards on government use of AI will play out in practice, but also to show what we miss when we focus exclusively on how much or how little we should constrain AI’s use. I make the case for a far greater focus on how much or how little capacity and competency we have to deploy AI technologies thoughtfully, and for flexing a different set of governance muscles that act to enable and build capacity within government generally rather than to mandate and constrain specific public sector actors or actions.
The Cascade of Rigidity
To get AI in the public sector right, we need to understand how mandates and constraints actually operate in the real world of bureaucracy. When we are weighing risks and benefits, we worry that we have not been sufficiently detailed about what the state must or must not do in the use of any technology. It is the wrong worry. Safeguards that sound eminently reasonable on paper act very differently than expected when operationalized within risk-averse bureaucracies. The result can be not at all what the framers of those safeguards intended.
Take FISMA, the Federal Information Security Management Act. (4) FISMA provides a menu of some 300 distinct “controls” that government tech teams can choose from to secure software and data from hackers. Competent developers should, in theory, create an informed, thoughtful security plan that chooses the controls most relevant to the circumstances and focus their efforts on implementing and testing those choices. But technologists in government will tell you that’s not an option for them. Routinely, they are forced to implement every one of the 300 controls before their software is allowed to ship. Even if you have a skilled security team, they’ll have to march through a massive checklist, much of it meaningless for their project, instead of focusing on the specific controls that will actually secure their system. Implementing all 300 and verifying that they are implemented will add months, sometimes even years, to the development schedule, making compliance extremely costly. It will also detract from the time spent on both features and testing—and testing, of course, is critical to the real-world security of software. FISMA, as written, is a fine law. But as practiced, it doesn’t just make the software worse; it actually impairs the security of our systems.
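To make the cost asymmetry concrete, here is a minimal, hypothetical sketch. The control identifiers are real entries from the NIST SP 800-53 catalog that FISMA compliance draws on, but the particular selection, the hours-per-control figure, and the `compliance_hours` function are invented for illustration; this is not a description of any actual agency’s process.

```python
# Hypothetical illustration of the tradeoff described above: a security plan
# tailored to a simple static website versus marching through the full
# checklist. Control IDs below are real NIST SP 800-53 controls; the
# relevance judgments and effort estimates are invented for illustration.

FULL_CATALOG_SIZE = 300  # rough size of the control "menu" cited in the essay

# Controls a team might judge most relevant to a static website.
TAILORED_PLAN = {
    "SC-7": "Boundary Protection",
    "SI-2": "Flaw Remediation",
    "CM-6": "Configuration Settings",
    "AU-2": "Event Logging",
    "RA-5": "Vulnerability Monitoring and Scanning",
}

HOURS_PER_CONTROL = 8.0  # assumed documentation/verification effort per control


def compliance_hours(control_count: int, hours_per_control: float = HOURS_PER_CONTROL) -> float:
    """Crude proxy: paperwork effort scales linearly with the number of controls documented."""
    return control_count * hours_per_control


if __name__ == "__main__":
    tailored = compliance_hours(len(TAILORED_PLAN))
    blanket = compliance_hours(FULL_CATALOG_SIZE)
    print(f"Tailored plan ({len(TAILORED_PLAN)} controls): ~{tailored:.0f} hours")
    print(f"All-{FULL_CATALOG_SIZE} checklist: ~{blanket:.0f} hours, "
          "most of it spent on controls irrelevant to a static site")
```

Under these illustrative assumptions, the all-300 checklist consumes roughly sixty times the effort of the tailored plan, time the essay argues would be better spent on testing and resilience.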
How does this happen? It’s perhaps more helpful to understand what would have to happen to ensure the better outcome: the tech team would use their discretion to employ the appropriate controls and put their efforts toward testing and resilience. The launch of a piece of software, even one as simple as a static website, requires sign-off from people at multiple layers of the hierarchy. Other than the team on the ground, generally no one in the layers above has the technical expertise or domain knowledge to know which controls are most appropriate, but each will need to put their name on paper (yes, often paper) stating they approve the launch. The tech team might be able to convince their direct supervisors that their plan was sound, but to go forward with it, each successive layer above will also have to essentially take the word of a team whose work is foreign to them and who is often very distant from them not only organizationally but physically, culturally, and even temporally (it can take a long time for these approvals to even get to the higher-ups). In the meantime, there are compliance officers whose job is defined around “better safe than sorry” and who are strongly disincentivized from approving anything other than all 300 controls in order to protect the agency. If something goes wrong (they will remind the others in the chain), it won’t matter that the team thought controls 210–244 weren’t relevant. It will only matter that the agency appeared to skimp on controls. These processes function as a vetocracy, in which it takes all thumbs up in order to accept the risk, and only one thumbs down to stick with the less-risky option.
Anyone in technology knows that there is no such thing as 100% secure—only better and worse practices, some of which change over time as threat vectors change. And that security is a function of ongoing practice and resilience, not a thing you can certify and move on from. And yet, the effect of well-informed and well-meaning security legislation is to force the bureaucracy into rigid, static, maximalist practices that degrade security while forcing agencies to pretend that their systems are 100% secure. Well-meaning and well-written legislation originates at the top of a very tall hierarchy, and as it descends, the flexibility that its authors intended degrades. Laws often have an effect entirely different from what lawmakers intended because of this cascade of rigidity.
Culture Eats Policy
This is not some edge case of malfunctioning bureaucracy explainable by government’s lack of technology expertise. It is the predominant dynamic. You see it in the civil service rules governing hiring, to take another flagrant example that has little to do with technology (but deeply affects our technical capabilities and capacities). Today’s civil service dates back to reforms in the mid to late 19th century, when positions in government were filled through patronage. Starting with passage of the Pendleton Act in 1883, (5) reformers have sought to ensure that federal employees are no longer hired because of who they know but on the basis of their skills and job performance. But the practices that have evolved over the years to implement these laws now have a very different effect.
If you were in a federal agency today, and trying to hire someone with tech expertise through a competitive process that was open to the public, your experience might resemble what the team at the Defense Digital Service faced when trying to hire Jack Cable. (6) In 2017, DDS held a contest to see who could find the most security flaws in its software. Jack won the contest, beating out 600 other security researchers. The DDS team was delighted to find out that Jack was actually open to the idea of working for the Pentagon, and they encouraged him to apply for a job. But the resume Jack submitted described his experience developing “mobile applications in IonicJS, mobile applications using Angular, and APIs using Node.js, MongoDB, npm, Express gulp, and Babel.” The job description called for “experience that demonstrated accomplishment of computer-project assignments that required a wide range of knowledge of computer requirements and techniques pertinent to the position to be filled,” and the HR staffer did not see a connection between what looked like a grab bag of gobbledygook on Jack’s resume and the job requirements. Winning the contest did not even merit giving him the benefit of the doubt, and he was cut in the first downselect. When the team intervened and asked that HR speak with Jack, he was advised to get a job selling computers at Best Buy for a few years and come back, because then he might be qualified for the job he was applying for.
Why does this happen? In a risk-averse culture, rules intended merely to guide processes are interpreted very rigidly. In accordance with law, HR rules try to reduce bias in hiring. The safest way to do that, the logic goes, is to allow only HR professionals, who are specially trained in complex, obscure rules and processes, to review resumes and assess candidates. Involving domain experts in any part of the process except the final step is considered risky, as they may introduce bias or fail to follow one of many safeguarding procedures. Over the years, it has become so uncommon to allow nurses to assess nurses, or data scientists to assess data scientists, for example, that many in government believe the practice to be illegal. But merit system principles do not limit candidate reviews to HR professionals. In fact, the U.S. Digital Service and the Office of Personnel Management (OPM) have successfully piloted a new hiring process that employs subject matter experts to assess candidates, resulting in higher-quality hires and more satisfied hiring managers.
The point of the legacy process is not to select the best candidate, but to be able to defend the ultimate selection from criticism through strict adherence to a process in which no judgment can be questioned, because no judgment was used. What this means is that in implementing laws written to reduce nepotism and patronage, we have created a system in which only those who know someone on the inside to guide them have a hope of getting past the first screen. (Jack did eventually get hired, but only after repeated interventions by increasingly high-level officials.) At every step down the ladder from the high-level principles of law to public servants’ day-to-day practices, the process was drained of judgment and common sense, which were replaced with a bizarre literalism in the service of defensibility. This cascade of rigidity perverts the intentions of lawmakers.
AI Meets the Cascade
The federal government announced an AI talent surge at the time of the Executive Order on AI. (7) In a tacit recognition of the problems with its standard hiring procedures, OPM granted agencies “direct hire authority” for these jobs, which removes some of the strictures on HR managers and should allow hiring managers greater discretion in selection. My point in describing how the cascade of rigidity affects hiring is not to raise concern about unqualified people being hired into AI roles, though that could happen for a variety of other reasons. I describe these dynamics because I fear the same cascade of rigidity will overtake the safeguards that governments are now putting in place on the use of AI.
In the U.S., the primary source of these safeguards is the Executive Order on AI that President Biden signed in October 2023, but states and many other governmental entities are issuing similar documents. (To be clear, these documents also usually discuss government regulation of private sector use of AI, but my concern here is exclusively the use of AI within government operations.) When I read these documents, my first response is that the safeguards they’ve put in place sound eminently reasonable. But my second response is to imagine how they are going to be operationalized as lower-level government offices issue additional guidance, which will be a bit more specific than the executive order, and each department, agency, subagency, bureau, and division in turn translates that guidance into their own memos, again, a bit more specifically and ever more prescriptively. The cascade of rigidity is beginning.
This rigidity sometimes manifests as extremely narrow, literal readings of guidance that harden into strict but off-base rules; at other times it expresses itself through overly broad interpretations of the same guidance. As guidance about AI began to roll out last year, for example, gatekeepers began to get the message that AI carried risks and needed to be constrained. In one instance, a policymaker responsible for a healthcare data analysis program told researchers that certain programs that submitted data to their agency could not use “algorithms.” Either the policymaker was unaware that algorithms are fundamental to basic mathematical analysis, necessary to the work of health IT analysis, and not exclusive to AI, or the fear of not being able to distinguish between AI and non-AI algorithms led them to take the “better safe than sorry” route and attempt to ban the use of algorithms broadly. This error in judgment was ultimately resolved, but now imagine that kind of disruption to the operations of the program occurring over and over in various forms across government.
Now imagine this guidance applying to existing uses of AI in government where the risks are well-understood and minimal and the benefits clearly established, or even where the technology is so firmly embedded and noncontroversial that de-authorizing its use would be devastating. There are many of these uses, as Dan Ho and Nick Bagley point out, including the Postal Service’s long-standing use of handwriting recognition. (8) The reason my barely legible scrawl on an envelope arrives without delay at its destination is that the post office has been using a form of AI to read addresses on envelopes since the 1960s. Is that use now subject to the rules imposed by the recent executive order from the White House? (9)
The set of procedures required by the executive order includes public consultation with outside groups, studies to demonstrate the equity impacts of the application of any AI-enabled technology, the creation of a mechanism to appeal the AI’s decision, and a requirement to allow individuals to opt out of any use of AI. But how—and why—would we allow members of the public to opt out of having their handwritten addresses on envelopes read by machines, or to appeal the decisions of those machines? The new guidance from the White House seems to require it, but pausing this use of AI until all the executive order’s provisions have been met would cripple the Postal Service’s ability to function.
In the abstract, these procedures are all thoughtful, reasonable, and desirable safeguards against bias and harm. In practice, they are likely to function not as safeguards but as barricades. Public consultation, for example, could in theory be conducted thoughtfully and expeditiously. But there are established models for public consultation in the federal government, and the executive order alludes to notice-and-comment rulemaking and public hearings. According to a report from the Government Accountability Office, it takes an average of four years to conduct rulemaking through a notice-and-comment process. (10) There’s little reason to believe it will take less time when used to consult the public about uses of AI. It is more reasonable to assume it will take longer, both because much of civil society objects to AI’s use in government contexts and because AI’s novelty will mean extra-thorough review of the public consultation process itself by internal actors taking a “better safe than sorry” approach. In the context of AI, four (or more) years, just for one step of an approval process, is not a delay; it is a death sentence. The technology in question will be outdated in four months, never mind four years.
Even if the review could take place in a matter of weeks or months, rather than years, rigid interpretations of guidance may make the point moot. For example, the draft guidance to agencies issued by the Office of Management and Budget (OMB) regarding implementation of the AI executive order implied that “agencies must consider not deploying the AI” upon receipt of “negative feedback” from members of the public. Again, this sounds reasonable, but operationalized in a literalist, maximally risk-averse environment, it is very easy (for those who’ve lived the absurdities of this environment, at least) to imagine any negative feedback at all effectively stopping a deployment, no matter how much time and energy had gone into understanding and mitigating potential harms (not to mention documenting those mitigations and jumping through process hoops to establish the other required safeguards). A later revision of the OMB memo clarifies that negative feedback provided in consultation does not automatically require the termination of the AI system, but in a highly risk-averse culture, the threat of a vetocracy remains real.
The constraints we are imposing today also interact poorly with constraints that have accumulated over many decades. Take the requirement for equity studies. These studies can also take years, though exact averages are hard to calculate because so many studies are still in progress and face serious challenges to their completion. In response to a different Biden executive order from 2021, on Advancing Racial Equity, (11) federal agencies were required to file equity action plans. One study of these plans, conducted two years later, concluded that out of 25 agencies reviewed, 21 “noted serious data challenges to conduct the required equity assessment.” These challenges are largely the result of other guardrails put in place to prevent violations of data privacy, in the form of laws like the Privacy Act of 1974, and to reduce burden on the public, in the form of the Paperwork Reduction Act. (12) Those guardrails interact with constraints on building internal capacity, like OMB Circular A-76, which required agencies to outsource wherever possible, and on hiring, as previously discussed, to create a low-capacity environment where digital technology is concerned. (13) Low capacity and adjacent strict guardrails in turn create an environment in which what sound like reasonable constraints necessary for the safe use of AI could in effect stop its use.
Beyond Mandates and Constraints
We need constraints on the use of AI. But we should understand the direction in which their impacts will drift: not toward cavalier attitudes but toward overly risk-averse ones; not toward irresponsible use, but toward a potentially irresponsible lack of use. In guidance, it would be helpful to explicitly promote the use of judgment and discretion on the part of civil servants, and to acknowledge that taking no risk effectively means stasis, and that stasis has its own risks that must also be considered. But tweaking guidance just fiddles with the dials, tuning between stricter and looser controls along one narrow dimension. And loosening the controls too far is neither practical nor desirable.
There is another dimension we pay far too little attention to. Fine-tuning between strict and loose controls is like obsessing over the safety features of cars while entirely neglecting driver education and licensing. The guardrails are in place, but the drivers don’t know how they work—nor how to actually drive. Responsible, effective use of AI will be a function of government’s competencies and capacities far more than its rules.
Our digital competence and capacity deficit exists not because government technologists are bad, but because they are understaffed and overburdened. People who understand both the systems in question and the possibilities of technology are far outnumbered by lawyers, compliance officers, and oversight bodies whose default is to stop rather than to go. They must spend far more time reporting (often to an absurd level of detail) on what they will do, what they are doing, and what they have just done, and seeking approvals from sometimes dozens of stakeholders, than actually building or deploying technology. Improving government’s capacity starts with correcting these glaring imbalances between watching and doing (to borrow from Mark Schwartz), between stop energy and go energy. (14)
Mandates are meant to be the gas to constraints’ brakes. But telling an agency to do something doesn’t help it do that thing. In theory it could help the agency prioritize it, but mandates aren’t priorities. You can’t have unlimited priorities, by definition. You can, and in government do, have unlimited mandates. At any given point in time, the priority may seem to be the mandates that the current Congress or the party in charge cares about. But the reality is that agencies must comply, all the time, with all the mandates that have piled up over the decades. The soft mandates encouraging the use of AI in the federal executive order and others like it are no more likely to result in the responsible and effective use of AI than the constraints they detail. Again, we must look to building competence and capacity.
AI Competence Is Operational Competence
Competency and capacity have not been entirely ignored. The federal government, for instance, announced an AI hiring surge along with the executive order. It does not appear to be living up to expectations. Conventional wisdom blames pay, and it’s true that the skill of building AI models garners sky-high salaries in the private sector right now. But government’s need is not primarily building models so much as using existing ones, something a far greater number of people can do. In fact, much of the novel use of AI today is done by people with relatively low technical skills. The skill these people have is in understanding a domain or problem, being able to judge where AI could uniquely add value, and availing themselves of the plethora of options now commercially or freely available to try out solutions. These successes are driven less by expertise in the inner workings of AI models than by curiosity and the desire to solve a real problem or create a real benefit.
One problem with the hiring surge is that agencies don’t know what they would use AI for. Government has long outsourced much of its operations, and often the people who run agencies don’t know how their own systems work. When I was working on the pandemic unemployment insurance crisis at the California Employment Development Department during the summer of 2020, I saw firsthand how little grasp the department had on its own operations. At the end of our engagement, I lamented to a colleague that no more than a handful of the department’s 5,000 people understood how its IT systems worked. My colleague corrected me. No, a handful of people knew how individual pieces of the systems worked, but there was no one who understood how it all worked together. There were a great many people who understood the request for proposal for a new system they were trying to procure—the department had been working on that procurement for eleven years when we arrived. This is because instead of developing digital competency, government has developed extensive processes and procedures for purchasing digital work. What that means is that when asked what they might use AI for, most government officials simply ask their vendors. The conversation then becomes about a new contract, not a new hire. It’s entirely possible that procurement expertise is more specialized and “technical” than AI expertise in a certain sense; in part because of this, we have a lot of the former and little of the latter.
The executive order mandated that each federal agency have a Chief AI Officer, so many of those positions have indeed been filled (or, in some cases, the Chief Information Officer has taken on that role). But at the level of the operations of a particular program or service, the lack of internal competence breeds a lack of demand for internal competence. And the lack of AI demand at the program or service level means that much of the work of those Chief AI Officers is to be another gate through which procurements must pass and to craft additional guidance for the use of AI in their particular agencies—in other words, to be the next step in the cascade of rigidity.
There is a term often used in the context of the AI hiring surge that’s deceptively helpful: AI-enabling. Strictly speaking, AI-enabling positions are those that build the foundation for the use of AI. Anything having to do with the quality of or access to data, for instance, would fit, because of course AI is nothing without data to ingest, and access to relevant and reliable data is a huge problem across government at all levels. But positions like product manager should also be given priority under the banner of the surge. Any role that increases government’s ability to understand its own operations and spot where AI can responsibly improve outcomes will better position government to get on the innovation curve it’s been missing over the last two decades.
How to Build Capacity
The middling results of the AI hiring surge should tell us that we need to dig deeper to understand and address what holds government back from harnessing the power of AI. Mandates and controls can only get us so far. But legislative and executive branch leaders can learn to operate in an enablement and capacity-building framework.
I was recently asked by a Congressional office what they might do to force a particular federal agency to perform better. “I think you’re asking me what mandates and constraints you might impose on them,” I replied. “But this agency has been subject to a never-ending stream of these orders and rules for decades, and their performance isn’t improving.” The staffers agreed. Rather than assuming they’d been imposing the wrong ones, and that someone with greater digital expertise might help find the right ones, I asked them to consider asking entirely different questions. “What is keeping this agency from delivering? What constraints might we remove?” It turned out this agency had almost no flexibility in how they used their funds. What they spent on which projects was determined far in advance by processes poorly suited to understanding actual needs. A working capital fund might give them a start on that flexibility. To my delight, the Congressional staffers agreed. This move won’t solve all the agency’s problems, but it’s an important step toward enablement.
Mandates and constraints trap us in a downward cycle: by assuming incompetence on the part of the bureaucracy, they ironically encourage incompetence, as the people responsible for delivery are held accountable for fidelity to process rather than outcomes and are progressively stripped of the right to use their own discretion. Overuse of these controls also degrades trust between those imposing them and those being controlled. There is shockingly low trust and poor communication between executive agencies and Congress, for example.
Enablement begins to reverse those negative spirals. In this framework, the focus shifts from greater specificity around process to accountability for outcomes. Instead of assuming incompetence, this framework revolves around asking the agency being acted on what is needed to gain the appropriate competence. Instead of asking “what is wrong with these people?” an enablement framework assumes something is wrong with the system, and that the people who understand the system are often the key to fixing it. Instead of adding control after control (in addition to those that were added decades ago, and not well understood by those seeking to add new ones), leaders edit or reduce the controls so that the agency is no longer trapped in a halting, handicapped, “Mother may I?” mode of operation, in which permission is needed from Congress or another oversight body before any action is taken. Instead of constantly eroding trust, enablement builds it.
Something as small as granting a working capital fund won’t necessarily help the agency embrace AI, but many more changes like it could begin to build the foundation. The benefit is not just increased flexibility for agencies. The muscles we build when we enable insights and information to flow up the hierarchy, not just down, are the muscles we’ve needed (and had too little of) in the transition to the internet era, and the ones we really can’t do without as we make this next transition. Internet-era software has called for iterative cycles of build-measure-learn, so different from the cascade natural to hierarchies. AI, with its relentless dynamism, requires them.
Choosing Competence
The past year has seen both gains and losses in the public’s faith that government can deliver on its promises. On the plus side, the IRS launched Direct File, a pilot tax-filing tool for people with low incomes. When surveyed, 90% of respondents who’d used the tool ranked their experience as excellent or above average, citing ease of use and trustworthiness as reasons for their satisfaction; 86% of them said that their experience with Direct File increased their trust in the IRS. (15) On the minus side, the Department of Education badly botched the rollout of the new form for applying for federal student aid and lost track of 70,000 emails from undocumented parents of student applicants containing the proof of income needed to qualify their children. (16) The emails were found, but so late in the process that some schools could not issue financial aid packages to these students in time for them to enroll. In other words, an agency known for taking money from the public inspired trust, while another known for giving money away broke trust. How does that happen?
The difference between these two outcomes is clear: the Department of Education (technically, Federal Student Aid, or FSA, an office within the department) was visibly focused on policies around student loans and the politics of changing them, whereas the IRS, under the leadership of newly confirmed Commissioner Danny Werfel, focused on building the capacity to deliver. FSA relied on traditional, rigid contracting mechanisms (despite having the flexibility to do otherwise, suggesting that the cascade of rigidity was at play), while the IRS assembled, from within the agency and across government, an internal development team capable of fast build-measure-learn cycles. People, not rules, build state capacity.
Outsiders to government technology blame politics for an environment hostile to building good software. But the failure to build good software has also deeply influenced our politics. When the child of immigrants is told that good grades and a little bit of paperwork can make them the first in their family to go to college, and that promise turns out to be false, that’s one more person likely to seek hope in strongman rule. Conversely, when even the interaction of collecting taxes demonstrates respect for the taxpayer through clarity and ease, our democracy may live to see another day.
To my knowledge, neither of these projects attempted to use AI, but AI is not the goal. The ability of our government to deliver on its promises is the goal. Government will ultimately need to employ AI, because the magnitude and complexity of the challenges we face continue to grow and because the public’s expectations continue to grow. Tom Loosemore, one of the founders of the UK’s Government Digital Service, defines digital as “applying the culture, processes, business models and technologies of the Internet era to respond to people’s raised expectations.” (17) The culture, processes, business models, and technologies of the AI era will raise expectations even further. To meet those expectations with only an ever more persnickety set of rules and orders is to allow unintended consequences to dictate our future. Luckily, we have a choice.
Footnotes
(1) The author wishes to thank Dan Ho, Nick Bagley, and Cass Madison.
(2) Adam Tooze, “Welcome to the World of the Polycrisis,” The Financial Times, October 28, 2022, https://www.ft.com/content/498398e7-11b1-494b-9cd3-6d669dc3de33.
(3) Joe Soss, “Lessons of Welfare: Policy Design, Political Learning, and Political Action,” American Political Science Review 93 (1999): 363–380, https://api.semanticscholar.org/CorpusID:149448187.
(4) Federal Information Security Management Act of 2002, 44 U.S.C. §3541 (2002), https://csrc.nist.gov/CSRC/media/Projects/Risk-Management/documents/FISMA-final.pdf.
(5) Pendleton Civil Service Reform Act, ch. 27, 22 Stat. 403 (1883).
(6) See Jennifer Pahlka, “Culture Eats Policy,” Niskanen Center, June 21, 2023.
(7) Exec. Order No. 14110, 88 Fed. Reg. 75191 (October 30, 2023), “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
(8) Daniel E. Ho and Nicholas Bagley, “Runaway Bureaucracy Could Make Common Uses of AI Worse, Even Mail Delivery,” The Hill, January 16, 2024, https://thehill.com/opinion/technology/4405286-runaway-bureaucracy-could-make-common-uses-of-ai-worse-even-mail-delivery/.
(9) “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,” The White House, October 30, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.
(10) Mariano-Florentino Cuéllar, Daniel E. Ho, Jennifer Pahlka, Amy Perez, Kit Rodolfa, and Gerald Ray, letter to Director Shalanda D. Young and U.S. Office of Management and Budget Colleagues, Stanford RegLab, November 30, 2023, https://dho.stanford.edu/wp-content/uploads/OMB_Letter.pdf.
(11) Exec. Order No. 13985, 86 Fed. Reg. 7009 (January 20, 2021), “Advancing Racial Equity and Support for Underserved Communities through the Federal Government,” https://www.federalregister.gov/documents/2021/01/25/2021-01753/advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government.
(12) Jennifer King, Daniel Ho, Arushi Gupta, Victor Wu, and Helen Webley-Brown, “The Privacy-Bias Tradeoff: Data Minimization and Racial Disparity Assessments in U.S. Government,” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (Association for Computing Machinery, 2023), https://doi.org/10.1145/3593013.3594015.
(13) “Federal Acquisition Regulation; OMB Circular A-76.” Federal Register 70, no. 142 (July 26, 2005): 43107-43109, https://www.federalregister.gov/documents/2005/07/26/05-14569/federal-acquisition-regulation-omb-circular-a-76.
(14) See Mark Schwartz, The Art of Business Value (IT Revolution Press, 2016).
(15) “Updates for the Direct File Team,” IRS Direct File, accessed July 15, 2024, https://www.irs.gov/about-irs/strategic-plan/irs-direct-file-pilot-news.
(16) Erica L. Green and Zach Montague, “Inside the Blunders That Plunged the College Admissions Season Into Disarray,” New York Times, March 13, 2024, https://www.nytimes.com/2024/03/13/us/politics/fafsa-college-admissions.html.
(17) Tom Loosemore, “Our Definition of Digital,” Public Digital, June 28, 2017, https://public.digital/about-pd/our-definition-of-digital.