18.09.2024
Balancing Innovation and Regulation: U.S. Approaches to AI Governance
While U.S. corporations account for about 70 percent of global R&D investment in artificial intelligence, the country has not led the way in regulating AI. Until recently, there was significant political reluctance to regulate AI at all. Although this stance has softened, existing regulation remains decentralized and sector-specific, especially compared with the European Union. Different U.S. states have also adopted varying approaches to AI regulation and industrial policy.
Although the future of AI regulation remains somewhat uncertain, especially with the presidential and congressional elections coming up in the fall of 2024, it is fairly clear that the U.S. leans toward a light-touch, market-friendly regulatory approach. In contrast, the EU’s framework is more comprehensive and precautionary. While the U.S. approach promotes innovation, the European model provides clearer guidelines on ethical issues and a more defined stance on the political and social acceptability of AI-related risks.
However, the U.S. is moving toward more centralized AI oversight. In late 2023, President Biden issued an Executive Order aimed at promoting AI security and trustworthiness through the development of standards, tools, and testing. Key aspects of the order include requiring each federal agency to appoint a Chief AI Officer; funding national AI research institutes through the National Science Foundation (NSF); and implementing the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (RMF) across all critical infrastructure sectors.
Of these measures, NIST’s RMF may have the most lasting impact on the shape of AI regulation. It serves as a guideline for businesses and research institutions to evaluate AI technologies, with a focus on risk management, accountability, and fairness. Compliance with the RMF is also crucial for federal contractors and subcontractors, given its role as a federal guideline. The framework provides operational definitions for key concepts such as transparency, fairness, trustworthiness, safety and security, and accountability.
The relative lack of federal regulation is seen by some as a risk, as companies and research entities face uncertainty when bringing innovations and applications to market. In response, several states—such as California, Colorado, Texas, Virginia, New York, and Illinois—have enacted their own AI legislation, which varies significantly in scope and in the severity of sanctions.
Critics argue that the absence of a unified national AI framework could result in lower actual AI usage rates in the U.S. compared to the EU and other competitors, despite the U.S. having the greatest potential due to its vast resources. They claim that a clear political consensus, such as the one achieved in the EU, helps focus public investment and guides AI development from a well-defined ethical standpoint.
Nevertheless, the U.S.’s substantial human and financial resources still make it a formidable competitor to any other country or region in the AI race. In August 2024, the U.S. was still slightly ahead of China in its R&D expenditure on AI, at about USD 800 billion in total (USD 200 billion in public investment). While earlier development was driven largely by U.S. investments in basic research, the major movers and shakers now come from private industry. How this balance evolves is one area where the outcome of the upcoming U.S. elections could make a difference.
Petri Koikkalainen
TFK Senior Specialist/Science Counselor, Washington DC