On May 15, the Bipartisan Senate Artificial Intelligence (AI) Working Group (“Working Group”) released a report titled “Driving U.S. Innovation in Artificial Intelligence” that lays out a detailed policy roadmap for approaching the risks and benefits of AI. The Working Group, consisting of Majority Leader Chuck Schumer (D-NY) and Sens. Mike Rounds (R-SD), Martin Heinrich (D-NM) and Todd Young (R-IN), released the policy roadmap following several educational briefings and the nine AI Insight Forums it held last year.
The policy roadmap is organized into eight key priority areas, modeled on topics discussed during the AI Insight Forums last fall:
- Supporting U.S. Innovation in AI: The Working Group outlines support for emergency appropriations to reach $32 billion in annual non-defense spending on AI.
- AI and the Workforce: The Working Group details the need for adequate training programs to upskill the workforce in addition to ensuring key stakeholders are engaged in job displacement conversations and policymaking efforts.
- High Impact Uses of AI: The Working Group encourages consideration of whether there is adequate enforcement of existing laws when applied to high-risk AI use cases and supports legislative action to address enforcement gaps.
- Elections and Democracy: The Working Group recognizes the need to address the impact of deepfakes and other nonconsensual AI-generated content on elections.
- Privacy and Liability: The Working Group outlines steps to address whether, and when, AI developers and deployers should be held accountable for harms caused by their AI models. The Working Group also calls for comprehensive national data privacy legislation.
- Transparency, Explainability, Intellectual Property and Copyright: The Working Group identifies policies to enhance transparency, particularly around disclosing the use of AI in the workplace and providing digital content provenance information.
- Safeguarding Against AI Risks: The Working Group encourages the development of standard practices for risk testing and evaluation of AI models, and outlines support for a “capabilities-focused,” risk-based regime.
- National Security: The Working Group remains sensitive to the unique intersection between AI and national security, which requires collaboration and innovation to remain ahead of adversaries and deter emerging threats. The Working Group identifies areas for legislation and investment to augment research and maintain U.S. global competitiveness.
Addressing the Spending Gap for AI Innovation
The Working Group calls for emergency appropriations language to reach the $32 billion in annual non-defense spending on AI recommended by the National Security Commission on Artificial Intelligence (NSCAI). It recommends that this language include funding for AI and semiconductor research and development (R&D) across federal agencies, outstanding AI programs under the CHIPS and Science Act (Public Law 117-167), the National Institute of Standards and Technology’s (NIST) U.S. AI Safety Institute and the modernization of federal government information technology (IT) infrastructure, among other things. The Working Group further supports the inclusion of defense-related AI spending in emergency appropriations language as needed.
R&D efforts referenced in the policy roadmap include those at the Department of Energy (DOE), Department of Commerce (“Commerce”), National Science Foundation (NSF), NIST, National Institutes of Health (NIH), National Aeronautics and Space Administration (NASA) and Department of Defense (DOD). Outstanding CHIPS and Science Act programs recommended for funding include the Regional Tech Hubs Program, DOE National Labs and Microelectronics Programs and the Advanced Technological Education (ATE) Program, to name a few.
Recommended Legislative Efforts
The Working Group recognizes the need to apply existing laws to AI models while also expressing concrete support for new legislative efforts. It emphasizes that existing laws should apply consistently to the AI ecosystem, particularly in high-impact areas (civil rights, consumer protection, employment, etc.). Further, the Working Group calls for legislation to address any gaps in applying existing laws effectively to AI systems, developers, deployers and users. In doing so, it underscores the importance of addressing the potential for disparate impact.
The policy roadmap identifies several existing bills that align with the Working Group’s priorities:
- Artificial Intelligence Advancement Act of 2023 (S. 3050): The Working Group expresses support for Section 3 of this bill that addresses regulating AI in the financial services industry. Section 3 calls on key financial services agencies—including the Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency, and others—to produce a report identifying knowledge gaps in their agency’s use, governance and oversight of AI systems. The bill was introduced in October 2023 by all four leaders of the Working Group.
- CREATE AI Act (S. 2714): The Working Group expresses support for the passage of the CREATE AI Act that would authorize the National AI Research Resource (NAIRR). The bill was introduced in July 2023 by all four leaders of the Working Group.
- Future of AI Innovation Act (S. 4178): The Working Group expresses support for Section 202 of the bill that provides funding for “AI Grand Challenges” that have a goal of expediting AI development and commercialization. This bill was introduced in April 2024 by Sens. Maria Cantwell (D-WA), John Hickenlooper (D-CO), Marsha Blackburn (R-TN) and Todd Young (R-IN).
- AI Grand Challenges Act (S. 4236): The Working Group expresses support for this bill that provides funding for “AI Grand Challenges,” as previously identified. This bill was introduced in May 2024 by Sens. Cory Booker (D-NJ), Mike Rounds (R-SD) and Martin Heinrich (D-NM).
- Small Business Technological Advancement Act (S. 2330): The Working Group expresses support for this bill that authorizes the Small Business Administration (SBA) to provide small businesses with loans for software or cloud computing services used in normal business operations. This bill was introduced in July 2023 by Sens. Jacky Rosen (D-NV), Ted Budd (R-NC), Jeanne Shaheen (D-NH), John Boozman (R-AR), John Hickenlooper (D-CO) and Todd Young (R-IN).
- Workforce Data for Analyzing and Tracking Automation (DATA) Act (S. 2138): The Working Group expresses support for this bill that would allow the Bureau of Labor Statistics (BLS) to assess automation’s impact on the workforce. This bill was introduced in June 2023 by Sens. Gary Peters (D-MI) and Todd Young (R-IN).
The Working Group further encourages the consideration of additional policymaking to address several key areas, including:
- Public-private partnerships and broader impacts on small businesses.
- Workforce training programs and the development of career pathways at both the private-sector and federal levels.
- U.S. immigration policies for STEM workers.
- Online child sexual abuse material (CSAM), nonconsensual intimate images and deepfakes.
- Liability standards for AI developers, deployers and end users.
- Transparency requirements and policies around the use of private, sensitive personal data to train AI models.
- Autonomous vehicle (AV) testing standards and guidance on automation-level requirements across a wide range of AI systems.
- The development of a capabilities-focused risk regime for AI model testing and evaluation.
- Bolstering R&D efforts.
- Foreign adversaries’ development and deployment of AI models and broader cybersecurity concerns.
- Energy use of AI systems.
Recommended Executive Branch-Level Action
In addition to directing Senate committees of jurisdiction to review proposed legislative efforts, the policy roadmap includes recommendations for agency-level action to address the opportunities and risks posed by AI. These include proposals for the federal government to use AI to modernize service delivery, as well as recommendations for programs to upskill the federal workforce and enhance recruitment of highly skilled employees. The roadmap also recommends that the federal government examine its AI procurement processes, particularly around testing, evaluation and procedural efficiencies.
The policy roadmap emphasizes the need for advanced R&D efforts and the study of AI at the federal level. This includes considering the advancement of Federally Funded Research and Development Centers (FFRDCs) and an AI-focused Information Sharing and Analysis Center (ISAC). The Working Group specifically mentions the following programs and resources as equipped to address some of its priorities: the U.S. Digital Service, Presidential Innovation Fellows, Presidential Management Fellows, the AI Toolkit for Election Materials and the Cybersecurity Toolkit and Resources to Protect Elections. The Comptroller General of the United States is also encouraged to release a report identifying regulations that potentially impede AI innovation.
Defense and National Security
The Working Group acknowledges the unique considerations involved in addressing the opportunities and risks AI poses to national security and deterrence, devoting a full section of the report to national security considerations. The Working Group emphasizes the need for robust investments in AI research, development and deployment to augment defense capabilities, including cyber capabilities, drones, next-generation fighter aircraft and other conventional military platforms. The Working Group also calls for the inclusion of defense-related funding in its recommended emergency appropriations, specifically prioritizing funding to advance DOD’s AI capabilities. The report supports funding for testing and evaluation at the National Nuclear Security Administration (NNSA); Chemical, Biological, Radiological and Nuclear (CBRN) risk mitigation; and the development of Special Access Programs (SAPs) to ensure DOD’s data can be effectively utilized for machine learning and other AI use cases.
The Working Group focuses on three key areas of policy and regulation when addressing national security concerns: talent and workforce development, U.S. competitiveness abroad and transparency requirements.
- Talent and Workforce Development: The Working Group encourages the creation of career pathway and workforce training programs for AI in addition to enhanced oversight of security clearance applications to expedite the process for developing and securing AI-focused talent for defense. The report also calls for the DOD and DOE to partner with commercial AI developers to enhance AI development and prevent sensitive data and AI models from being leaked or reconstructed by adversaries.
- U.S. Competitiveness Abroad: The Working Group encourages the intelligence community to continuously monitor and examine AI developments by foreign adversaries, specifically regarding artificial general intelligence (AGI). The policy roadmap also specifically references and calls for consideration of recommendations from the National Security Commission on Emerging Biotechnology and NSCAI on preventing adversaries from utilizing AI to advance bioweapons programs. The Working Group additionally emphasizes the role of export controls in maintaining U.S. competitiveness and encourages the consideration of “on-chip security mechanisms for high-end AI chips.” The report specifically encourages collaboration with the private sector to address rising energy demands for AI systems to ensure the United States will remain competitive with the Chinese Communist Party regarding energy costs.
- Transparency: The Working Group calls on relevant committees to assess whether DOD policy on fully autonomous weapons systems should be codified or whether progress on the development of such weapons systems should remain classified.
The impact on defense and national security of legislation and regulations stemming from the policy roadmap is largely distinct from the impact on other industries. Policymaking in this space is likely to focus on addressing the security risks and opportunities of AI and on methods to enhance U.S. competitiveness in AI development and implementation.
Health Care
The Working Group provides detailed recommendations on addressing AI’s impact on the health care industry. This includes a call for legislation to implement transparency requirements for the use of AI in medical devices and address the deployment of AI in health care settings. The Working Group emphasizes the need to embrace a patient-forward approach to regulating AI use by the health care industry, specifically calling for a prioritization of consumer protection and safety.
On the regulatory side, the Working Group identifies a need for additional support for AI efforts at the Department of Health and Human Services (HHS), Food and Drug Administration (FDA), Office of the National Coordinator for Health Information Technology and NIH. The Working Group encourages this support to focus on data governance, biomedical data availability for AI R&D efforts and agency access to tools for weighing the benefits and risks of AI use. The Working Group also calls for consideration of how AI can be advanced to improve both efficiency and health outcomes, specifically referencing the need to review how AI can be used to streamline reimbursement processes at the Centers for Medicare and Medicaid Services (CMS).
Big Tech and Privacy
Big Tech would likely feel the impact of legislative efforts stemming from the policy roadmap, particularly its recommendations for consumer transparency requirements, watermarking, digital content provenance disclosures, liability and the testing and evaluation of AI models. Importantly, the Working Group and the Biden administration have remained engaged with Big Tech on AI policymaking and regulatory efforts, signaling that both will likely remain open to engagement with these stakeholders on legislative proposals moving forward.
Privacy is another key element of the policy roadmap, with the Working Group calling for comprehensive federal data privacy legislation. Several legislative proposals for a national data privacy standard are currently circulating in Congress, such as Sen. Maria Cantwell (D-WA) and Rep. Cathy McMorris Rodgers’ (R-WA) American Privacy Rights Act. However, the Working Group does not lend its support to any existing proposal.
The Working Group also recognizes the need for legislation to protect children online. This includes a call for legislation addressing both CSAM and the harms AI poses to children more generally. Similarly, the Working Group does not explicitly mention any existing children’s online safety or privacy proposals in the roadmap, such as the Kids Online Safety Act (KOSA) (S. 1409) or the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) (S. 1418). It is unclear at this time whether the Working Group will come out in support of either bill, though all of its members except Sen. Mike Rounds (R-SD) have signed on as co-sponsors of KOSA.
Overlap with the House AI Task Force
On Feb. 20, House Speaker Mike Johnson (R-LA) and Minority Leader Hakeem Jeffries (D-NY) announced the establishment of the bipartisan House Task Force on AI. The Task Force consists of 12 Democratic and 12 Republican members drawn from AI-related committees and will be led by Chairman Jay Obernolte (R-CA) and Co-Chair Ted Lieu (D-CA). The Task Force will also develop a report detailing policy proposals and recommendations, which is likely to address subjects similar to those in the Senate Working Group’s report. In announcing the Task Force, Speaker Johnson, Chairman Obernolte and Co-Chair Lieu said the group will primarily focus on maintaining U.S. technological competitiveness, protecting national security and addressing regulatory concerns. These shared areas of interest, and any similar findings between the House and Senate AI groups, should provide significant bicameral momentum for subsequent legislation.
Brownstein’s Outlook
The Working Group’s release of its policy roadmap provides ample opportunity for engagement with key stakeholders across a wide range of industries. Both the Senate Working Group and the Biden administration have previously emphasized engagement with private-sector leaders when crafting AI policy, and both will likely remain amenable to continued engagement as policymaking efforts stemming from the roadmap and the expected House AI Task Force report begin to take shape. Majority Leader Schumer has also indicated that the policy roadmap will not lead to a single large AI package, but instead to piecemeal legislation introduced as committees of jurisdiction are ready. He further explained his intention to prioritize legislation addressing the threat AI poses to elections, though it remains unclear how quickly such legislation could advance through the full Senate; the Senate Rules Committee advanced three AI-focused election bills last week.