US lawmakers advance bill to close loopholes in AI export controls

The House Foreign Affairs Committee has advanced a bill that would enhance the White House’s ability to regulate the export of AI systems, amid ongoing efforts to tighten its grip on key technologies.

“The US-China Economic and Security Review Commission reported last year that China is using commercial AI advancements to prepare for military conflict with Taiwan,” Representative Michael McCaul, a co-author of the bill, said in a statement. “We must understand that whoever sets the rules on its application will win this great power competition and determine the global balance of power.”

McCaul emphasized the critical role of AI, stating, “AI will permeate every facet of our economy and military, serving as a bedrock upon which our prosperity and security rests. This is why safeguarding our most advanced AI systems, and the technologies underpinning them, is imperative to our national security interests.”

The Bureau of Industry and Security (BIS) is currently responsible for approving or denying exports of items such as advanced semiconductors and related tools. It recently revoked export licenses from Intel and Qualcomm for selling to Huawei, a decision McCaul praised as “long overdue.”

However, McCaul highlighted a gap in the BIS’s mandate: the agency lacks clear legal authority over AI systems. He cited this gap as a key reason for introducing the bill.

Potential to hurt US businesses

The proposed restrictions may serve the government’s interests, but analysts point out that while they may be effective in the short term, they will likely push countries such as China to accelerate development of their own AI models.

“This pattern is similar to what has been observed in the semiconductor sector, where restrictions on exports have spurred the development of local ecosystems,” said Pareekh Jain, CEO of Pareekh Consulting. “Although these restrictions might seem detrimental to China or other countries initially, they are likely to have a long-term negative impact on US companies.”

However, regulations like these could help major tech companies like Microsoft, Google, and OpenAI improve public trust, according to Thomas George, President of Cybermedia Research.

“These firms may find themselves with an advantage and disadvantage,” George said. “While the laws may restrict some of their operations, their extensive resources and established compliance frameworks also position them as more stable and trustworthy in the eyes of global consumers and regulators than newer entrants.”

Impact on operations and cost

The restrictions could particularly affect the cross-border collaboration that is often vital for large corporations. Companies will have to navigate these complexities while ensuring that their operations comply with local laws, which may differ markedly from one jurisdiction to another.

“The overall cost implications of these regulatory challenges are also significant,” Jain said. “With the implementation of these laws, US companies may see an increase in fixed costs, previously spread over a large user base. As the potential user base shrinks due to regulatory restrictions, the per-unit cost for remaining users is likely to increase, affecting the financial dynamics in the medium term. This could hinder the global competitiveness of US firms.”

The financial burden can be significant for big corporations, but it will be disproportionately high for smaller companies and startups, according to George.

“The smaller entities must be particularly strategic about resource allocation, focusing on niche markets or specific AI applications where compliance costs are manageable,” George said. “The resource disparity between large firms and startups underscores a potential widening of the competitive gap due to regulatory costs.”

Intensifying AI regulations

As the use of AI increases, the US is seeing a variety of responses at different levels of government. The surge in AI adoption has led to calls for robust regulatory frameworks to safeguard individuals from potential negative outcomes of automated decisions.

In response, regulatory bodies are crafting a complex array of laws and guidelines. More than a dozen states have enacted AI regulations, with further legislative debates on AI underway.

Meanwhile, the EU and the US have agreed to enhance their collaboration in the development of AI technologies, with a specific focus on improving safety and governance measures. Major companies have also rallied behind the need for AI regulations. Sixteen major users and creators of AI, including Microsoft, Amazon, Google, Meta, and OpenAI, have endorsed the Frontier AI Safety Commitments. This initiative introduces a new set of safety guidelines and development standards for AI technologies.

© Foundry