The United States works with domestic and international AI communities to establish frameworks that advance trustworthy AI for all

Over the past year, dramatic advances in generative Artificial Intelligence (AI) and its rapid availability in products and services have catapulted AI into the public’s imagination with images and predictions that are both enormously promising and deeply concerning. To respond to these trends, the United States has sought to address AI technologies holistically by focusing on the potential of AI to boost economic prosperity, help overcome major societal challenges, and close the digital divide. It also recognizes that trustworthy AI requires strong governance tools that engage all relevant stakeholders to create a safe, secure, and productive AI ecosystem.

As a starting point, in 2022, the White House published a Blueprint for an AI Bill of Rights to help counter harms that AI can perpetuate, including discrimination in hiring processes or credit decisions and violations of individual privacy. The Blueprint identified five principles to guide the use and development of AI, in large part inspired by the OECD AI Principles that the United States has endorsed.  In January 2023, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) also released its AI Risk Management Framework (AI RMF).

More recently, on October 30, 2023, President Biden issued a landmark Executive Order (E.O.) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence to ensure that the United States leads the way in seizing the benefits and managing the risks of AI. The E.O. aims to establish new standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and advance American leadership around the world. Importantly, the E.O. also builds on previous White House work to establish voluntary commitments from 15 leading U.S. companies to drive safe, secure, and trustworthy development of AI.

The E.O. emphasizes the Department of Commerce’s leading role in the U.S. Government’s efforts to advance safe, secure, and trustworthy AI and to promote AI innovation.

Commerce will play a major role in several lines of work under the auspices of this E.O.:

  • Guidance and Testing – NIST will develop guidance on red-teaming, safety, and cybersecurity and provide testing environments to evaluate AI models against these guidelines;
  • Transparency with Frontier Models – Commerce will require industry to share information on their development of frontier models with the U.S. Government, including disclosure of information on frontier AI computing resources and frontier AI development by foreign companies on U.S. cloud infrastructure;
  • AI and Intellectual Property – Commerce will clarify how the use of AI in the inventive process affects patent inventorship and will call for recommendations on potential executive actions relating to copyright and AI;
  • AI Model Weights – Commerce’s National Telecommunications and Information Administration (NTIA) will solicit public input and issue a report relating to the benefits and risks of widely available model weights (such as open-source AI models); and
  • International Engagement – Commerce will work in collaboration with the Department of State to expand bilateral, multilateral, and multistakeholder collaborative AI engagements, prioritizing work with likeminded nations to harmonize policies on AI, accelerate the development of vital AI standards, and promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges.

U.S. Department of Commerce bureaus have a variety of roles in shaping emerging global norms and regulatory interoperability in AI governance, including through this E.O.

I. NIST: a federal laboratory for innovation and trust in technology

As a federal laboratory focused on driving U.S. innovation and supporting economic security, the National Institute of Standards and Technology (NIST) has a broad research portfolio and a long-standing reputation for cultivating trust in technology. NIST has been working with the AI community in the U.S. and abroad to provide scalable, research-based methods to manage AI risks and advance trustworthy approaches to AI.

The NIST AI Risk Management Framework (RMF) is a case in point. Called for by Congressional statute and released in January 2023, the NIST Framework provides a voluntary, practical resource to organizations that design, develop, deploy, or use AI systems. Its main objective is to help them manage the many risks of AI while promoting trustworthy and responsible development and use. A companion AI RMF Playbook offers concrete actions to help organizations implement the AI RMF.  Organizations are already using the Framework for their own AI systems, and work has begun on guidance, including “profiles,” which specialize the AI RMF for entire sectors, technologies, or other broad contexts.

NIST also launched the NIST Trustworthy and Responsible AI Resource Center, a one-stop shop for foundational content, technical documents, and AI toolkits. As a common forum, AI actors can also engage and collaborate here to develop and deploy trustworthy and responsible AI technologies and standards.

NIST launched the Generative AI Public Working Group in June 2023 with the short-term goal of developing a profile describing how the AI RMF may be used to manage the risks of generative AI technologies.

Finally, NIST will launch the U.S. Artificial Intelligence Safety Institute (USAISI) to lead the U.S. government’s efforts on AI safety and trust. Part of this will be to establish evaluation methodologies for the most advanced AI models.

There is much more to be done to cultivate trust in AI. AI systems are socio-technical in nature: they are a product of the complex human, organizational, and technical factors involved in their design, development, and use. Accordingly, measurements beyond computational and system accuracy and functionality are needed to evaluate and assess the risks and impacts of AI systems. There is an urgent need for clear specifications, standardized measurement methodologies, and metrics for trustworthy AI.

NIST will continue to work with the AI community to conduct research and develop technically sound standards, interoperable evaluations and benchmarks, and usable practice guides. We invite you to learn more at https://www.nist.gov/artificial-intelligence and contact us at ai-inquiries@nist.gov.

II.  To advise the President, NTIA consulted the AI community on accountability

The National Telecommunications and Information Administration (NTIA) serves as the Executive Branch agency responsible by statute for advising the President on telecommunications and information policy issues, including artificial intelligence.

To support the Biden Administration’s work on AI, on April 11, 2023, NTIA launched an inquiry into what AI accountability policies will foster earned trust in AI systems. The “AI Accountability Policy Request for Comment” (RFC) sought input on topics including:

  • The data access, credentialing, disclosure, and documentation necessary to conduct audits and assessments;
  • How regulators and other actors can incentivize and support credible assurance of AI systems along with other forms of accountability; and
  • What different approaches to AI accountability might be needed in different industry sectors.

The RFC closed for comment on June 12, 2023, and the 1,400+ comments are available online. NTIA will release a report and recommendations soon.

International engagement to ensure interoperability

In addition to its domestic policy work, NTIA works with other Commerce bureaus – including NIST and the International Trade Administration (more below) – to engage internationally to convey the United States’ views on promotion of trustworthy AI and risk management in international settings like the G7 Hiroshima process, G20, OECD, the United Nations, and the U.S.-EU Trade and Technology Council (TTC), and to ensure that the United States’ vast and vibrant AI ecosystem continues to drive global competition.

All three agencies participate in the OECD Working Party on AI Governance (WPAIGO) and are encouraged by its fruitful discussions on critical issues that will impact how countries will govern AI in the short and long term. The OECD recently provided an updated official definition for “AI systems,” which has international implications as a fundamental building block for legal and regulatory initiatives. NIST and NTIA are committed to contributing to these conversations and building frameworks that work for all parties.

NTIA is encouraged by the G7’s work to develop the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems and the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems. The Department of Commerce looks forward to advanced AI developers endorsing the Code and continues to encourage partners outside the G7 to support the Code. The Department of Commerce will continue to work with allies and partners to ensure the Code remains fit for purpose with stakeholder input.

III. As AI use expands, ITA supports U.S. business abroad

The International Trade Administration’s (ITA) mission is to strengthen U.S. industry’s international competitiveness by promoting trade and investment and ensuring fair trade through compliance with trade laws and agreements.

ITA is an important stakeholder in the emerging AI policy space, not least of all because AI technologies are exportable products and services. In international trade, AI technologies play direct and supporting roles as companies leverage them internally to streamline operations and customer engagement and, more broadly, to gain competitive advantages.

ITA is educating staff on the AI ecosystem and driving AI engagement in foreign markets to ensure U.S. competitiveness globally

As AI use expands, ITA’s Industry & Analysis (I&A) unit deploys an AI Policy Team to support efforts by U.S. businesses to build a competitive edge in AI technologies worldwide, understand the global policy landscape for AI technologies, and drive regulatory interoperability with trading partners.  This includes equipping teams around the world with information to help them understand the basic tenets of AI technologies and the global AI ecosystem, as well as tools to help promote U.S. competitiveness in AI.  The AI Policy Team also leads or supports AI policy engagements with Argentina, Brazil, Canada, Japan, Singapore, and the United Kingdom.

Consultations with businesses guide ITA’s efforts for U.S. competitiveness in AI

The AI Policy Team engages with industry in the United States to solicit views on AI regulations to better understand and represent their interests in light of international policy developments, and to unlock business opportunities. 

As opportunities unfold, new regulations and policies that govern trade in AI directly impact U.S. business. ITA solicits businesses’ feedback on policies that drive global trade in AI systems, which helps it better understand the challenges those regulations pose to businesses. In Fall 2022, ITA facilitated a 60-day public consultation, receiving responses from 20 U.S. industry associations.

In their responses, businesses came together on three issues, emphasizing that governments should: 1) acknowledge the cross-border nature of most AI development, especially data collection and training, and ensure that AI governance laws do not hinder this cross-border innovation; 2) leverage technology-neutral standards developed through rules-based processes to ensure these enhance interoperability and facilitate trade and innovation in AI; and 3) protect AI intellectual property and enact clear requirements on algorithmic transparency.

Respondents also flagged barriers that disproportionately impact small- and medium-sized enterprises (SMEs), including: 1) restrictions on data transfers that limit SMEs’ ability to process the quality and quantity of data needed for commercially viable AI applications; 2) differential pre-market conformity assessments across markets that burden SMEs; and 3) compliance disadvantages in markets where the United States does not have an underlying trade agreement.

As part of its work to support U.S. industry competitiveness in foreign markets, ITA has prioritized key trading partners for engagement on AI policy in Africa, East and Southeast Asia, Europe, and Latin America based on respective market size and importance to U.S. companies. It also looks to the level of activity in AI regulatory and policy development and existing trade agreements or commercial and economic dialogues with the United States.  ITA uses these engagements to communicate the views and experiences of U.S. industry.   

ITA approaches these engagements from the trade policy perspective, focusing on promoting U.S. Government policy and standards tools that enable interoperability with key trading partners, clear rules of the road, and common compliance mechanisms for our companies. These tools include the White House AI Voluntary Commitments, the NIST AI Risk Management Framework, and multilateral, government-backed privacy certifications like the Global Cross-Border Privacy Rules (CBPR) System.

The key message: Interoperability at a global scale is crucial

AI technologies continue to develop at lightning speed worldwide, and governments are moving quickly to keep pace with legislation and policies to regulate these technologies effectively. As regulation across countries expands, ITA, NIST, and NTIA will work together to promote broader adoption of interoperable AI regulatory and standards frameworks to ease the complexity for U.S. businesses and enhance transparency and safety for consumers. Their efforts will focus on advocacy in multilateral fora like the OECD, which the United States views as its preferred international venue for discussions around AI policymaking, and the G7, G20, and the Asia-Pacific Economic Cooperation (APEC). As the United States and its partners move from principle to practice in responsible use of trustworthy AI, Commerce will leverage its policy and technical expertise and deep relationships to ensure that stakeholders from civil society, industry, and the technical community play a role in these activities.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.