Procurement officials are leading the federal adoption of AI


Earth view from space at night with lights and connections of cities. (World map courtesy of NASA)

Contracting professionals and agency leaders are key to deploying ethical AI processes.

More than 50 years ago, in the science fiction film 2001: A Space Odyssey, the spacecraft's crew included the HAL 9000 computer, which became a malicious artificial intelligence (AI) system that, motivated by its own survival, sought to kill the human crew members who distrusted it.

This cautionary tale is an example of the potential for unintended, albeit extreme, AI behavior when using AI solutions to enhance human capabilities. How can the government prevent AI from acting against human intentions and expectations? How can the procurement process and agreements be used to mitigate potentially undesirable behavior?

Government procurement processes are beginning to mitigate this potential through new specialized principles and methodologies that promote the responsible use of AI. There is much to learn and do on this mission-critical journey.

Academia and the private sector are at the forefront of developing AI technologies. Widespread government adoption of AI will enable agencies to achieve improved outcomes at scale.

Using AI to augment human performance can also enable efficiency gains at levels that were once unattainable, orders of magnitude better than the status quo.

But harnessing the power of private-sector AI technology for good government can create challenges for public procurement, as Cary Coglianese and Eric Lampman have written in their work on procurement as a tool of AI governance. They point out that the risks of AI must be weighed against its potentially game-changing benefits.

In an article with Alicia Lai, Coglianese goes on to point out that there is no perfect, unbiased system against which to compare AI. Human designers, decision makers, and government officials bring lifelong, often underestimated, cumulative biases to their work. These prejudices will not be eliminated by AI as long as humans remain in the process, as they will and should. In addition to residual human judgment, algorithmic bias, introduced, for example, through choices about training data and embedded in algorithms through repeated training, can keep human biases out of sight.

For the government to reap the benefits of AI without creating new problems, how the technology is applied must improve. At stake are thorny issues in cybersecurity; equity, diversity, and inclusion; and adaptation to a life-altering climate crisis. But innovation in harnessing the technology for such critical use cases will be discouraged if still more rules are imposed on overburdened public buyers and entrepreneurs.

The federal purchasing system rests on a vast library of rules and regulations interpreted by each agency and its acquisition professionals. The Federal Acquisition Regulation (FAR) and its agency supplements are the sheet music for a bureaucratic orchestra and choir in which the contracting officer is the conductor. The high complexity of government procurement regulations is intended to sustain public confidence in the fairness of the system, and it has largely met that goal, though with hidden and unobservable opportunity costs in the form of lost performance.

Part 1 of the FAR provides that contracting officials should exercise sound business judgment on behalf of taxpayers. In practice, however, Part 1's discretion is often overwhelmed by cultural norms of compliance with the extensive regulations.

Many contracting professionals recognize that these regulatory libraries present obstacles and barriers to companies at the forefront of inventing and adopting technology to work with government. This realization has led to an avalanche of procurement innovation among buyers responding to new market opportunities in the changing landscape.

The Office of Federal Procurement Policy and the Federal Acquisition Institute partnered with my ACT-IAC team of volunteers to create an easy-to-use knowledge repository for such innovations, the Periodic Table of Acquisition Innovations, promoting the adoption of successful acquisition techniques in government and industry. One such innovation is the PilotIRS program. It cleverly combines the authorities of FAR Parts 12 and 13 to allow the Internal Revenue Service (IRS) to buy like a venture capitalist. The current ceiling for such contracts is $7.5 million, which the IRS is looking to raise.

The U.S. Congress, sensing a similar need, expanded the 60-year-old Other Transaction Authority (OTA), which is designed to remove FAR rules and encourage experimentation with new technologies such as AI. The use of OTAs has been heavily concentrated in the U.S. Department of Defense in recent years.

These authorities were essential in advancing the art of AI procurement by the Department of Defense's Joint Artificial Intelligence Center (JAIC). Using them requires greater experience on the part of contracting staff, who, when crafting OTAs, should selectively apply sound business acumen rather than the rules per se embedded in the FAR.

To its credit, the JAIC intentionally created an AI contracting "golf course" called Tradewind, where the tees, pins, traps, and fairways combine the business acumen of the FAR with the relative freedom of OTAs. Tradewind is available for use across the federal government to enable better and faster AI acquisition.

Responsible AI (RAI) forms a set of new, AI-specific principles from the JAIC's enterprise-wide AI initiative. The Department of Defense's commitment to RAI begins with top department leadership.

The new Chief Digital and Artificial Intelligence Office (CDAO) is the single point of contact for executing the Defense Department's AI strategy. RAI principles will guide the development and accelerate the adoption of AI through innovative acquisition approaches: a new acquisition pathway based on OTA and related authorities, plus an infrastructure of contract vehicles, such as test and evaluation support, described by the Defense Innovation Unit. OTA-based agreements can be executed in 30 to 60 days to quickly prototype and deploy new techniques.

On the other hand, many important civilian missions are essentially about resource allocation.

Missions in the U.S. Department of Health and Human Services, for example, must mitigate unintended socioeconomic bias that is unlawful. In guiding procurement teams to avoid such bias, the National Institute of Standards and Technology (NIST) is remarkably ambitious, addressing data, tests and evaluations, and human factors. NIST's analysis of future standards includes legal, business, and technical guidelines to prevent and uncover socioeconomic bias when using AI solutions.

Echoing the JAIC, NIST argues that traditional contractual testing approaches are not sufficient to eliminate socioeconomic bias from AI solutions. It recognizes that the "explainability" challenge posed by powerful but opaque machine learning techniques should be factored into contracts.

Bias mitigation requires deep and transparent insight into the data used to train the solutions. NIST presents an approach to testing socio-technical AI solutions that procurement teams should consider. NIST’s work is the first systematic map of this unexplored territory. At stake is public trust in AI.

Contracting staff are pioneering the brave new world of acquiring AI for government use. At the same time, they drive industry engagement, ensure AI solutions are accountable and free from unwanted bias, and augment human effort in mission performance. The ethical use of AI technology starts with procurement. Contracting staff conduct the symphony of evolving federal procurement of AI.

This paper is part of a nine-part series entitled Artificial Intelligence and Procurement.
