Experts react: What does Biden’s new executive order mean for the future of AI?

https://www.atlanticcouncil.org/blogs/new-atlanticist/experts-react/experts-react-what-does-bidens-new-executive-order-mean-for-the-future-of-ai/

By Atlantic Council experts

“Can machines think?” The mathematician Alan Turing posed this question in 1950, imagining a future human-like machine that observed the results of its own behavior and modified itself to be more effective. After observing the rapid development of artificial intelligence (AI) in recent months, US President Joe Biden issued an executive order on Monday intended to modify how humans use these “thinking machines.” The thinking behind the order is to make AI safer, more secure, and more trustworthy. Will it be effective? Below, our own “thinking machines”—that is, Atlantic Council experts—share their insights.

Click to jump to an expert analysis:

Graham Brookie: What stands out are the implications for AI use in the US government

Lloyd Whitman: Executive action alone won’t get the job done

Rose Jackson: The US still must have hard conversations about AI

Trisha Ray: Establishing AI ethics is a task the US must tackle with allies and partners

Newton H. Campbell: This aggressive but necessary order will introduce regulatory burdens on AI

Frances G. Burwell: The order lacks the legislation with enforcement of Europe’s AI Act

Maia Hamin: A one-two punch to put the US on a path toward standardized testing of AI models

Rachel Gillum: A potential catalyst for responsible private sector innovation


What stands out are the implications for AI use in the US government

The Biden administration’s executive order on AI is a simple, pragmatic step forward in coherent and connective tech policy. The proliferation of AI governance efforts this year at nearly every level, including local, national, multinational, multi-stakeholder, and more, has been a natural extension of the rapid deployment of AI and industry reorientation around it. This executive order is an opening salvo not meant to be comprehensive or final, but it sets a significant policy agenda as other bodies—including Congress and aligned international partners—consider next steps. It is a clear signal from the United States ahead of the AI Safety Summit in the United Kingdom later this week.

What stands out the most is not necessarily the rules set out for industry or broader society, but rather the rules for how the government itself will begin to consider the deployment of AI, with security being at the core. As policy is set, it will be extremely important for government bodies to “walk the walk” as well.

Graham Brookie is the vice president and senior director of the Atlantic Council’s Digital Forensic Research Lab.


Executive action alone won’t get the job done

The Biden-Harris administration has taken strong action with the comprehensive executive order on safe, secure, and trustworthy AI. But an executive order can only do so much, limited by the existing authorities and appropriations of the executive branch agencies. While priority-setting, principles and best practices, frameworks, and guidance across the federal AI landscape are important, much of the teeth of this order will require rule-making and other administrative actions that take time, are subject to judicial review, and can be revoked by a future administration. US leadership on AI will require bipartisan recognition of the opportunities and challenges AI presents for our economic security and national security, and thoughtful legislation ensuring a balanced, transparent, and accountable approach to promoting and protecting this critical emerging technology.

Lloyd Whitman is the senior director of the Atlantic Council’s GeoTech Center. He previously served at the National Science Foundation as assistant to the director for science policy and planning. He also held senior positions at the White House Office of Science and Technology Policy in the Obama and Trump administrations.


The US still must have hard conversations about AI

The White House’s executive order comes days before world leaders head to the United Kingdom for a major summit on “AI Safety.” Amid a flurry of partner government and multilateral regulation, convenings, and conversations, the administration is clearly trying to both make its mark in a crowded space and begin to make sense of the AI landscape within the powers it has. It’s worth noting that this massive executive order builds on a few years of action from the administration, including the Commerce Department’s release of a Risk Management Framework, the more recent voluntary principles negotiated with major AI companies, and the White House’s Blueprint for an AI Bill of Rights.

We’ve seen these existing actions serve as the basis for US engagement on the Group of Seven’s (G7’s) Guiding Principles and Code of Conduct on Artificial Intelligence, which was released just this morning. We should expect to see echoes of the same in the commitments to come out of the AI Safety Summit in London later this week.

However, this executive order is more than just posturing. By requiring every government agency to examine how and where AI is relevant to their jurisdictions of policy and regulation, the United States is taking a major step in advancing a sectoral approach to AI governance. With nods to data privacy action and a clear call for Congress to pass legislation, there are plenty of hooks for meaningful action here. This is a substantive move that sets up the United States to have the hard conversations required to ensure AI is leveraged toward a better future.

Rose Jackson is the director of the Democracy + Tech Initiative at the Atlantic Council’s Digital Forensic Research Lab. She previously served as the chief of staff to the Bureau of Democracy, Human Rights, and Labor at the State Department.


Decoding Artificial Intelligence


Establishing AI ethics is a task the US must tackle with allies and partners

The Biden administration’s executive order is a timely signal of the United States’ intent to lead the global conversation on AI ethics by example. The order’s emphasis on international engagement is welcome, given the current moment of convergence of several trends in AI development and geopolitical tensions. In this vein, the US government should prioritize supporting existing multilateral and multi-stakeholder processes and recommendations. With the United States having rejoined the United Nations Educational, Scientific, and Cultural Organization (UNESCO) earlier this year, this includes UNESCO’s “Recommendation on the Ethics of Artificial Intelligence,” adopted in November 2021. Similarly, the executive order also calls for “the development of a National Security Memorandum that directs further actions on AI and security.” In doing so, the order finally, albeit only partially, addresses the void left by the 2023 US policy on “Autonomy in Weapons Systems” regarding the use of AI in law enforcement and border control, among other applications outside conflict. This memorandum could serve as an important signal to democratic allies and partners in a sphere that is often treated as an exception to broader principles of AI ethics.

Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center.


This aggressive but necessary order will introduce regulatory burdens on AI

Today’s executive order from Biden on a safe, secure, and trustworthy artificial intelligence is quite aggressive and will likely encounter some hurdles and court challenges. Nonetheless, direction was needed from the executive branch. The order is necessary to strike a balance between AI innovation and responsible use in the federal government, where new AI models, applications, and safeguards are constantly being developed. It emphasizes safety, privacy, equity, and consumer protection, which are essential for building trust in AI technologies. I see the emphasis on privacy-preserving technologies and the focus on establishing new international frameworks as a positive step for global AI governance.

The order directs every federal agency to regulate and shape AI’s growth to protect the public, national security, and the economy. But with limited power (the improbability of Congress passing any real laws that align funded activity to accommodate these new constraints and responsibilities), the order will introduce regulatory burdens, potentially slowing AI development and other AI-impacted processes due to an evolving skills gap in the government. The potential misalignment of new government programs and funding is of significant concern, and will likely be used to reinforce political narratives of government inefficiency.

Newton H. Campbell is a nonresident fellow at the Atlantic Council’s Digital Forensic Research Lab and the director of space programs at the Australian Remote Operations for Space and Earth Consortium.


The order lacks the legislation with enforcement of Europe’s AI Act

The new White House executive order is a notable step forward toward protecting Americans from the biggest risks of advanced AI. The European Union (EU) is about to conclude negotiations over its own AI Act, and the similarity in ambitions between the two initiatives is remarkable. Both call for testing and documentation, greater security against cyberattacks, safeguards against discrimination and deception, and transparency for consumers, along with other measures. But the EU AI Act is legislation with enforcement, including significant fines, while the executive order depends on the market influence of the federal government.

Will developing standards and best practices aimed at preventing algorithmic discrimination, for example, and pushing these through federal programs and procurement, be sufficient? It will be some time before we know, but it is a worthwhile experiment. In the meantime, this executive order gives the US administration credibility as it works with other countries, in the G7 and elsewhere, to ameliorate the risks of AI and focus on the opportunities.

Frances G. Burwell is a distinguished fellow at the Atlantic Council’s Europe Center and a senior director at McLarty Associates.


A one-two punch to put the US on a path toward standardized testing of AI models

The executive order directs the National Institute of Standards and Technology (NIST) to develop standards for red-teaming (adversarial testing for risks and bad behavior in AI models), and then separately proposes using the Defense Production Act to compel AI companies to disclose the results of their own red-teaming to the government. This one-two punch could be a path to getting something like a regime for pre-release testing for highly capable models without needing to wait on congressional action. Hopefully, the NIST standards will encompass both the cybersecurity of the model (e.g., its susceptibility to malicious attacks and circumvention) and its usefulness for malicious cyber activity. It will also be important to test models as integrated with other systems, such as code interpreters or autonomous agent frameworks, that give AI systems additional capabilities, such as executing code or taking actions autonomously.

The direction for the Department of Commerce to develop standards for detecting AI-generated content is important: any regime for AI content labeling that can be used by many different AI companies and communications platforms will rely on standardization. I’m glad to see the executive order mention both the watermarking of AI-generated content and authentication of real, non-AI generated content, as I suspect both may be necessary in the future.

I admire the White House’s goal to build AI to detect and fix software vulnerabilities, but I’ll be curious to see how they think about managing risks that could arise from powerful AI systems custom-built to hunt for vulnerabilities. I also hope they’ll tie new tools into existing efforts to “shift the burden of responsibility” in cyberspace to ensure AI vulnerability finders create secure-by-design software rather than endless patches.

It’s good to see privacy mentioned, but, as always, painful that no path appears but the congressional one, which has remained at an impasse for years now. However, the presence of privacy-preserving technologies is excitingthese technologies may help secure a policy that balances painful tradeoffs between individual privacy and innovation in data-hungry spaces like AI.

Maia Hamin is an associate director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab.


A potential catalyst for responsible private sector innovation

The Biden administration’s executive order on AI is an important step toward steering the fast-moving AI sector toward responsible development. Its impact will largely depend on how the private sector reacts to its incentives and enforceability.

The order rightly focuses on safeguarding societal and consumer interests, such as identifying misleading or deceptive AI-generated content. However, an effective technological solution to this critical issue is still needed. Ideally, this directive will serve as a catalyst for investments in this space. Similarly, the inclusion of the National AI Research Resource pilot has the potential to democratize AI advancements, reducing reliance on major tech companies and encouraging innovations that prioritize societal benefits.

I welcome the executive order’s focus on immediate-term societal risks, especially its efforts to empower the government to enforce existing anti-discrimination laws. These efforts should incentivize developers to build these protections into their systems by design rather than consider them after the fact. However, effective enforcement will only be feasible if agencies are adequately equipped for this work. The executive order attempts to address this by attracting the desperately needed AI talent to government positions, but more needs to be done to facilitate interagency coordination to avoid fragmented policymaking and inconsistent enforcement.

Lastly, the order wisely aims to relax immigration barriers for skilled AI professionals, a bipartisan issue often overlooked yet strongly advocated for by the private sector. Nevertheless, equal emphasis should be placed on domestic education and retraining programs to create a comprehensive talent pipeline and support today’s workforce.

Rachel Gillum is a nonresident fellow at the Atlantic Council’s Digital Forensic Research Lab. She is also vice president of Ethical and Humane Use of Technology at Salesforce and served as a commissioner on the Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation.