Thune, Klobuchar Lead Commerce Committee Colleagues in Introducing Bipartisan AI Bill to Boost Innovation and Strengthen Accountability
Senators’ bipartisan legislation would bolster innovation and increase transparency and accountability for higher-risk AI applications
WASHINGTON — U.S. Sens. John Thune (R-S.D.), Amy Klobuchar (D-Minn.), Roger Wicker (R-Miss.), John Hickenlooper (D-Colo.), Shelley Moore Capito (R-W.Va.), and Ben Ray Luján (D-N.M.), all members of the Senate Committee on Commerce, Science, and Transportation, today introduced the Artificial Intelligence (AI) Research, Innovation, and Accountability Act of 2023. The bipartisan legislation establishes a framework to bolster innovation while bringing greater transparency, accountability, and security to the development and operation of the highest-impact applications of AI.
“AI is a revolutionary technology that has the potential to improve health care, agriculture, logistics and supply chains, and countless other industries,” said Thune. “As this technology continues to evolve, we should identify some basic rules of the road that protect consumers, foster an environment in which innovators and entrepreneurs can thrive, and limit government intervention. This legislation would bolster the United States’ leadership and innovation in AI while also establishing common-sense safety and security guardrails for the highest-risk AI applications.”
“Artificial intelligence comes with the potential for great benefits, but also serious risks, and our laws need to keep up,” said Klobuchar. “This bipartisan legislation is one important step of many needed to address potential harms. It will put in place common-sense safeguards for the highest-risk applications of AI – like in our critical infrastructure – and improve transparency for policymakers and consumers.”
“Artificial Intelligence technologies hold incredible potential for the future of our country, and it is important for Congress to provide guidance to industry as new applications emerge,” said Wicker. “This legislation would provide a light-touch framework to help protect Americans and ensure this promising capability is not crushed by an overly burdensome and reactive approach.”
“We’re entering a new era of Artificial Intelligence,” said Hickenlooper. “Development and innovation will depend on the guardrails we put in place. This is a commonsense framework that protects Americans without stifling our competitive edge in AI.”
“I am glad to partner with my colleagues to introduce a bipartisan first step towards addressing the development of AI,” said Capito. “Our bill will allow for transparent and commonsense accountability without stifling the development of machine learning. I look forward to working with my colleagues to advance this important legislation.”
“Artificial Intelligence is advancing faster than ever. AI presents immense opportunities, but these high-tech capabilities also pose significant risks,” said Luján. “That’s why I’m proud to join my colleagues in introducing this bipartisan legislation to support research and innovation in AI and to establish accountability and safeguards for its use. Congress must keep pace with the development of AI systems with the American people in mind, and this bill will do just that through the creation of risk-based guardrails, transparency requirements, and corporate accountability.”
Read additional statements of support here.
Find additional information on the AI Research, Innovation, and Accountability Act of 2023 here:
Content Provenance and Emergence Detection Standards: To provide clearer distinctions between human and AI-generated content, the bill would require the National Institute of Standards and Technology (NIST) to carry out research to facilitate the development of standards for providing both authenticity and provenance information for online content, similar to the efforts of the Coalition for Content Provenance and Authenticity. The bill would also direct NIST to support standardization of methods for detecting and understanding emergent properties in AI systems in order to mitigate issues stemming from unanticipated behavior.
AI Definitions: The bill would provide new definitions for “generative,” “high-impact,” and “critical-impact” AI systems. A clear distinction would also be made between “developer” and “deployer” of the AI system for purposes of requirements outlined below.
Generative AI Transparency: To ensure consumers are properly notified when they are interacting with generative AI, the bill would require large internet platforms to provide notice to users when the platform is using generative AI to create content the user sees. The U.S. Department of Commerce would have the authority to enforce this requirement.
NIST Recommendations to Agencies: NIST would be required to develop recommendations to agencies for technical, risk-based guardrails on “high-impact” AI systems, in consultation with other agencies and non-government stakeholders. The Office of Management and Budget would be tasked with interagency implementation of such recommendations.
Risk Management Assessment and Reporting: Consistent with the structure of NIST’s AI Risk Management Framework, the bill would require companies deploying critical-impact AI to perform and report detailed risk assessments. These reports would provide a comprehensive outline of how the organizations understand, manage, and mitigate risk. Deployers of “high-impact” AI systems would be required to submit transparency reports to the Commerce Department.
Critical-Impact AI Certification: The bill would require critical-impact AI systems to be subject to a certification framework, in which critical-impact AI organizations would self-certify compliance with standards prescribed by the Commerce Department. The structure of the certification process is outlined below:
- AI Certification Advisory Committee: The bill would establish an advisory committee, composed of industry stakeholders, to provide input and recommendations on the issuance of proposed critical-impact AI certification standards.
- Submission of Certification Plan: Before any standards for critical-impact AI could be prescribed, the Commerce Department would be required to submit to Congress and the advisory committee a five-year plan for testing and certifying critical-impact AI. The plan would outline not only proposed standards and a timeline for their implementation, but also a process for (1) evaluating whether standards are necessary and appropriate, including whether a prevailing international standard is sufficient, and (2) gathering objective and transparent information. The Commerce Department would be required to regularly update the plan and consult with other government agencies, the advisory committee, and non-governmental standards organizations in the process.
- Critical-Impact AI Standards: The Commerce Department, upon submitting a certification plan to Congress and the advisory committee, would have the authority to prescribe testing, evaluation, validation, and verification (TEVV) standards for critical-impact AI. Critical-impact AI deployers would self-certify compliance with the standards. All standards would have to be submitted for public comment.
- Exemptions: The Commerce Department would have the authority to issue exemptions from certain TEVV standards, as deemed appropriate, in response to exemption applications from critical-impact AI deployers.
- Noncompliance and Enforcement: Upon discovering noncompliance with an applicable TEVV standard, critical-impact AI organizations would be required to immediately notify the Commerce Department and provide a plan for addressing the noncompliant aspect of the AI system. In response, the Commerce Department could order additional actions to be taken by the organization.
AI Consumer Education: The bill would require the Commerce Department to establish a working group to provide recommendations for the development of voluntary, industry-led consumer education efforts for AI systems.
###