No Adversarial AI Act

On June 25, 2025, U.S. lawmakers introduced the bipartisan No Adversarial AI Act to bar federal agencies from using AI models developed by foreign adversaries, including China, Russia, Iran, and North Korea. Advocates say the move will keep hostile AI out of sensitive government systems. Let’s break down what the No Adversarial AI Act is and why it matters.

What Is the No Adversarial AI Act?

The No Adversarial AI Act bans AI systems from “adversarial nations” from being used by U.S. federal agencies unless Congress or the Office of Management and Budget issues an exemption.

In plain terms, the No Adversarial AI Act is a bill that aims to stop government agencies from using artificial intelligence (AI) tools created by countries considered threats to national security, such as China, Russia, Iran, or North Korea.

Lawmakers introduced this act to make sure foreign AI does not end up controlling or influencing sensitive government systems. They want to protect U.S. data and decision-making from outside interference.

The bill tells federal agencies they can only use AI from trusted sources, and the Federal Acquisition Security Council will keep a list of banned tools. If an agency still wants to use a tool from a restricted country, it must get an exemption and prove the tool is safe.

In short, the No Adversarial AI Act seeks to protect American systems by keeping untrusted foreign AI out of the government’s hands.

  • Bipartisan support: Introduced by Rep. John Moolenaar (R-MI) and Rep. Raja Krishnamoorthi (D-IL); co-sponsors include Senators Rick Scott and Gary Peters.
  • Restricted nations: China, Russia, Iran, North Korea.
  • Implementation: The Federal Acquisition Security Council will maintain a list of banned AI tools, updated regularly.

The act also provides a path for a tool to be removed from the banned list if it can be shown to be free of foreign influence.

Why the U.S. Seeks This Ban

Lawmakers introduced the bill after concerns arose about the Chinese AI model DeepSeek, which reportedly obtained large volumes of NVIDIA chips and had potential ties to Chinese military efforts.

They argue that foreign-developed AI may:

  • Threaten national security.
  • Compromise government data.
  • Serve foreign intelligence goals.

Rep. Moolenaar stated, “Hostile AI systems have no business operating inside our government.”

What Are the Benefits of This Act?

  • Strong national protection: The act creates a formal firewall against adversarial AI.
  • Clear compliance structure: Federal agencies will reference a maintained list of approved AI tools.
  • Enables exemptions: Agencies can keep using certain tools for research or national security if cleared by Congress or OMB.

Who Are the Critics?

1. AI Startups and Open-Source Advocates

A report from Andreessen Horowitz, a top Silicon Valley venture firm, notes that the new AI Security Board seated major tech companies while deliberately excluding startups. They warn this move could suppress smaller developers by imposing compliance burdens that only large firms can afford.

2. Open-Source Supporters Like Meta and Hugging Face

Meta CEO Mark Zuckerberg and Yacine Jernite, an open-source leader at Hugging Face, both argue that open-source AI fosters innovation and improves security. They believe heavy regulation could stifle tech progress, even as they acknowledge the need for transparency and safeguards.

3. Academic and Policy Experts

Experts such as Markus Anderljung at the Centre for the Governance of AI and policy analyst Adam Thierer of the R Street Institute say licensing AI tools, even powerful ones, could create unfair barriers, limit innovation, and invite excessive surveillance.

What These Critics Warn

  1. Open-source tools might get blocked
    They say restricting AI from specific countries may stop users from accessing free, widely used open-source models built anywhere, even within the U.S.
  2. Costs could soar
    If U.S. tools develop more slowly, agencies and companies may have to pay more or wait longer for reliable AI, which hurts startups the most.
  3. Global tech tensions may rise
    Naming countries in the law could add to political friction, shutting down collaboration and harming worldwide innovation.
  4. Lawmakers risk favoring big tech
    Critics see the policy as favoring large companies (who can handle compliance) at the cost of smaller developers and open innovation.

In summary, these critics (top venture firms, tech CEOs, AI policy experts, and open-source advocates) warn that the No Adversarial AI Act may unintentionally slow innovation, overburden smaller creators, and raise global tensions without meaningfully improving security.

What AI Tools Could Be Affected?

Any AI system from China’s DeepSeek, or from developers in Russia, Iran, or North Korea, would fall under the ban. Even companies with international partnerships may need to prove that their systems operate independently of adversarial influence.

The Road Ahead

  • The bill has cleared its first procedural steps in Congress; the Senate version is led by Senators Rick Scott and Gary Peters.
  • It will shape future AI governance and export control strategies.
  • This act may inspire similar legislation in other democratic countries aiming to secure their digital infrastructure.

Final Thoughts

The No Adversarial AI Act aims to protect U.S. institutions by restricting AI built by foreign adversaries. With bipartisan support, it marks a significant shift in how the government approaches AI national security and digital autonomy. As it progresses, everyone from tech companies to researchers should monitor how it shapes the future of AI tools and global cooperation.

How do you feel about banning foreign AI? Share your thoughts below!

