Hearing Wrap Up: U.S. Must Not Stifle AI Innovation in Establishing Guardrails for Use
WASHINGTON—The Subcommittee on Cybersecurity, Information Technology, and Government Innovation held a hearing titled, “White House Policy on AI.” Subcommittee members discussed with industry experts the scope and impact of the nearly 150 requirements associated with the White House’s AI Executive Order (EO), and how the EO and corresponding draft guidance from the Office of Management and Budget (OMB) could shape the Federal government’s acquisition and use of AI systems.
Key Takeaways:
White House policy is being rolled out with insufficient industry input.
- Ross Nodurft, Executive Director at the Alliance for Digital Innovation, spoke to the challenges companies face under the truncated timeline within which OMB and the White House expect them to comply with AI guidance: “ADI also is very supportive of OMB’s outreach to industry to solicit comments and feedback on its draft memo, although the short turnaround time is limiting the amount of thoughtful and constructive feedback from industry. The rushed nature of the response means OMB will be finalizing its guidance to agencies without the full benefit of insights from the industry partners that are developing and deploying AI capabilities, which is concerning given the importance of the topic.”
There is significant risk that the government will fail to adopt AI in a timely manner.
- Samuel Hammond, Senior Economist at the Foundation for American Innovation, highlighted how the federal government must move quickly with regulation and integration of AI in order to keep pace with innovation: “The question is whether governments will keep up and adapt, or be stuck riding horses while society whizzes by in a race car. The risks from adopting AI in government must therefore be balanced against the greater risks associated with not adopting AI proactively enough.”
If the U.S. does not lead on AI, it will open the door for hostile foreign nations to control the values driving the technology. It is paramount that the U.S. lead in AI development and production so that U.S. values are imbued into its fabric at an early stage.
- Kate Goodloe, Managing Director at The Software Alliance, emphasized how critical it is for the U.S. to lead in the AI space and what U.S. AI policy should focus on for best results: “The United States needs a strong, clear, and thoughtful approach to AI policy. US companies are at the leading edge of developing and using AI technologies, as businesses of all sizes and in all industries leverage digital tools including AI to improve safety and competitiveness. The US Government must also be a leader in promoting the responsible development and deployment of AI. The United States’ AI policy should promote responsible and trusted uses of AI, including enabling beneficial government uses of AI that help agencies work more effectively and efficiently in serving all Americans.”
- Dr. Daniel Ho, William Benjamin Scott and Luna M. Scott Professor of Law at Stanford Law School, discussed how correct decisions on AI made by the federal government now will set the U.S. up for future success: “There are three possible futures of AI. One is a future of AI abuse unchecked by government regulation. Nefarious actors use AI voice cloning to scam citizens, bot-generated text impersonates people, and deep fakes erode trust. Another is where the government harms citizens because of improper vetting of AI. But a third future is one where the government protects Americans from bad actors and leverages AI to make lives better—like the VA’s use of AI to enable physicians to spend more time caring for veteran patients, and less time taking notes. To get there, we must make the right decisions today.”
Member Highlights:
Subcommittee Chairwoman Rep. Nancy Mace (R-S.C.) raised the risks of the government moving too slowly to use AI to bolster security. She also asked whether federal agencies will be able to carry out the tasks assigned to them in the White House EO in a timely manner.
Chairwoman Mace: “Could slow and reluctant government adoption of AI jeopardize the cybersecurity of federal systems? Is this a national security issue? Where do you see it?”
Mr. Hammond: “Yes, I think it’s both a national security issue and a sort of good government issue. We lived through the pandemic and when you saw those lineups around the block to claim unemployment insurance, a big part of that was because state unemployment insurance systems are built on mainframe computing technology from 50-60 years ago.”
Chairwoman Mace: “I’d like to ask a question of Dr. Ho, you’ve written about the lack of AI talent in government and the failure of this Administration to timely implement AI related mandates…under this new EO it has 150 new tasks to perform based on your count, do you expect federal agencies to meet the timetables for action set out in the EO?”
Dr. Ho: “The question you raise is a really important one. This Subcommittee has been so important in really providing good oversight and transparency over implementation. I think follow up is going to be necessary, and I think the talent pipeline that you mentioned is going to be absolutely critical for ensuring that the right folks are in place to be able to implement these requirements faithfully and in a way informed by the technology.”
Rep. William Timmons (R-S.C.) expressed concern that overregulation of AI by the federal government could drive companies out of the United States.
Rep. Timmons: “How can Congress create a regulatory framework that protects against potential harm associated with AI while not impeding the development and implementation of all the benefits that AI has to offer? I’m concerned that businesses will just relocate abroad if our regulatory framework is too restrictive. What can we do to strike the right balance?”
Dr. Ho: “I think it is critical that we lead with values. There are values that are embedded within technology, and one of the big questions facing us is whether we want a small number of Silicon Valley firms to embed those values, whether we want our foreign adversaries to embed those values, or whether we want broader forms of democratic input to embed those values.”
Rep. Eric Burlison (R-Mo.) questioned whether the E.O. could squelch AI innovation.
Rep. Burlison: “Do you have any concerns? Everybody on the panel has said they support this new executive order, but do you understand my concerns about how this might throttle back innovation?”
Mr. Hammond: “Yes sir, I mean one of the reasons that we are leaders in software is because software has been the exception to the rule of our physical industries. You don’t need to get permission to build a new app the same way you do to build a transmission line or to build a refinery. That is why we are a leader in AI. Many of the issues that are being brought up with deepfakes and so on, those will get market solutions. No company wants AI users to be flooding their services; they are going to be developing tools faster than we can iterate standards.”