Hearing Wrap Up: Federal Government Use of Artificial Intelligence Poses Promise, Peril
WASHINGTON—The Subcommittee on Cybersecurity, Information Technology, and Government Innovation held a hearing titled, “How are Federal Agencies Harnessing Artificial Intelligence?” Members asked witnesses about the federal government’s current and planned use of artificial intelligence (AI), and discussed risks associated with the widespread integration of AI into various operations of the federal government. Members inquired as to how the government is preparing to safeguard against those risks, including potential violations of the civil liberties and privacy rights of citizens.
Key Takeaways:
Artificial Intelligence, if harnessed correctly, creates an opportunity for federal agencies to better achieve their mission.
- Dr. Arati Prabhakar—Director of the White House Office of Science and Technology Policy—spoke on the importance of the work being done within federal agencies to harness the capabilities of AI: “Our work is only becoming more important as AI’s capabilities advance, especially as AI is increasingly integrated into society. This is the moment for our government to take bold and decisive action, so that America can continue to lead the world in harnessing AI to do big things in ways that strengthen our values. I look forward to working with you to realize this tremendous opportunity.”
The federal government must responsibly govern its use of AI systems, and it must do so with appropriate oversight and accountability.
- Dr. Craig Martell—Chief Digital and Artificial Intelligence Officer at the Department of Defense—discussed the need for proper oversight tools to safeguard national security interests from AI-related risks and to use AI to our nation’s advantage over adversaries: “AI-based technologies can help take advantage of opportunities and improve our capabilities and effectiveness. However, they also pose significant challenges and risks that require careful oversight, management, governance, and accountability. DoD is committed to developing and deploying AI technologies in a manner that upholds our Constitution, laws, policies, and values. We will lead in the global AI field thanks to our democratic principles, not in spite of them.”
The Biden Administration is delinquent in complying with laws and regulations intended to facilitate the appropriate use of AI by federal agencies. The Subcommittee has pressed, and will continue to press, for action on this issue.
- Eric Hysen—Chief Information Officer at the Department of Homeland Security—emphasized the role of Congress—including this Committee—in establishing the authorities and responsibilities under which federal agencies will incorporate AI into their operations: “As the Chief Information Officer for DHS, I understand that this technological transformation requires growth and adaptation within my role and the DHS IT community, as I apply authorities often authored or shaped by this Subcommittee and full Committee, such as the Clinger-Cohen Act, the Federal Information Technology Acquisition Reform Act, and the Federal Information Security Modernization Act. DHS looks to become a leader in the interagency as we confront this opportunity, but that will only be possible with continued support and leadership from Congress, especially this Subcommittee.”
Member Highlights: Subcommittee on Cybersecurity, Information Technology, and Government Innovation Chairwoman Rep. Nancy Mace (R-S.C.) pressed for answers on why the Office of Management and Budget is two years late in issuing AI guidance to agencies, after a White House witness insisted the guidance is an urgent priority.
Rep. Mace: “OMB is more than two years late in complying with a Congressional mandate to give federal agencies guidance on the acquisition and use of AI. The law requires OMB to coordinate with your office in drafting that guidance. Why is the process stalled? When can we expect to see some guidance?”
Dr. Prabhakar: “The Office of Management and Budget is working in a very focused manner on what they clearly understand is an important priority.”
Rep. Mace: “Two years late?”
Rep. Nick Langworthy (R-N.Y.) spoke on the need for transparency as the federal government implements AI across federal agencies and raised concerns that criminals will be able to harness AI for nefarious purposes.
Rep. Langworthy: “Do you agree that the public has a right to know for what purposes AI is being used by the federal agencies and that it’s important that these inventories are done consistently, completely, and accurately, and will you pledge to continue to work to ensure that is the case?”
Dr. Prabhakar: “I share your focus on the value of those use cases for all the reasons that you mentioned. It’s important for the public to know, across government, for people to understand how AI is being used. There’s important progress that we are making and will continue to make as a federal government on those AI use cases.”
Rep. Langworthy: “Mr. Hysen, are you concerned that as AI systems become more mature and complicated that criminals will have greater opportunity to commit heinous crimes…”
Mr. Hysen: “We absolutely are concerned there; however, we are also looking to harness AI to combat those crimes.”
Rep. William Timmons (R-S.C.) discussed national security applications of AI and why the national security apparatus should not view AI as a monolithic capability.
Rep. Timmons: “Can we talk about possible uses of AI within either DoD or our adversaries’ military capabilities?”
Dr. Martell: “AI is not monolithic. When we say AI, what we really mean is a specific AI-based technology, or a specific statistically based technology. It’s important to differentiate that because we may be doing really well for one use case and very poorly in another, and that may be so for our adversaries as well. If we focus mostly on AI as a monolithic thing, where if we have it we win and if they have it we lose, then we are actually missing where we should be aiming our attention: particular activities that we want to deliver and particular capabilities that we want to defend against.”