US-China Begin High-Stakes AI Governance Talks

The two countries that build the world’s most powerful artificial intelligence systems are about to sit down and talk about it formally. After years of escalating tech competition, export controls, and a hardening Cold War-style rivalry, Washington and Beijing are reportedly preparing to launch formal discussions on AI governance, safety, and the contested rules of global competition.

The talks come at a moment of extraordinary tension. Tariffs, semiconductor export bans, data centre rivalries, and competing visions of how AI should be governed have pushed the two superpowers further apart. Yet both sides increasingly recognise a hard truth: AI’s most dangerous risks (biosecurity threats, autonomous weapons, infrastructure attacks, and model misuse) do not respect borders. A breakthrough in either country could become a catastrophe in both.

Here is what is actually on the table, what each side wants, and why the world is watching.

The Diplomatic Opening

The formal momentum traces back to the APEC summit, where Donald Trump and Xi Jinping agreed to put a floor under spiralling China-US trade relations and committed to consider cooperation on AI in the year ahead. With Trump and Xi planning an exchange of visits in 2026, that commitment is now driving a flurry of preparatory diplomacy.

This is not the first attempt at a US-China AI dialogue. Earlier intergovernmental AI talks were held in Geneva, and the two governments have already reached an informal understanding that AI should not control nuclear weapons launch decisions. The 2023 Bletchley Declaration, signed by both nations, acknowledged shared concerns about AI’s risks to human rights, privacy, fairness, and the potential for catastrophic harm from misuse or loss of control.

What is different this time is the stakes. AI capabilities have advanced dramatically since those early conversations. The technology has moved from narrow, task-specific systems to increasingly general-purpose models whose behaviours are sometimes unpredictable even to their creators. Both governments now treat AI as a force-shaping factor in national security planning.

Two Very Different Visions

The biggest obstacle to meaningful talks is that Washington and Beijing fundamentally disagree about what global AI governance should look like.

The Trump administration’s AI Action Plan, released in July 2025, takes a strongly sovereign-first stance. It is sceptical of multilateral efforts, wary of technological cooperation with Beijing on advanced systems, and openly critical of “burdensome regulations” and what it characterises as foreign “cultural agendas that do not align with American values.” The administration’s strategy prioritises setting de facto global standards through the diffusion of an American technology stack, exporting chips, models, and governance frameworks together to secure US dominance.

China’s Global AI Governance Action Plan presents itself as the multilateral alternative. It calls for inclusive global governance under the UN framework, support for the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance, and explicit action to help developing countries bridge the digital divide. Beijing has pitched this as governance for “all nations” rather than what it describes as “a game of the club of wealthy nations.”

When the UN launched the Global Dialogue on AI Governance in September 2025, set to convene annually starting at the 2026 AI for Good Global Summit in Geneva, the United States came out in strong opposition at a Security Council debate the day before launch. China, in contrast, aligned closely with the G77 and developing countries to back the framework.

The two visions are not just different. They reflect competing theories of what global order should look like in the AI era.

What Both Sides Actually Agree On

Despite the public divergence, recent academic analysis comparing US and Chinese policy documents has found at least moderate overlap on most major AI risk categories. Both governments are concerned about the same core dangers:

Dangerous capabilities: Both worry that advanced models could lower the barrier for outsiders to plan sophisticated cyber operations or design biological and chemical weapons. Both are concerned that more autonomous systems could behave unpredictably once deployed.

Responsible design: Both recognise the need for rigorous testing requirements before brittle AI is woven into critical infrastructure or financial markets.

Misuse and deception: Both are alarmed by the possibility of AI being used for large-scale fraud, impersonation, election interference, and the manipulation of public opinion.

Cybersecurity and model theft: Both governments have publicly emphasised the need to protect AI model weights from theft and unauthorised access.

The International Dialogues on AI Safety (IDAIS), a track-II diplomatic effort, has already produced two consensus statements involving Chinese and Western experts: the Ditchley Statement calling for coordinated global action on AI risks, and a subsequent Beijing Statement establishing specific technological “red lines” including autonomous self-replication and AI systems’ deception of regulators.

These are real, substantive agreements. The question is whether they can be converted from expert-channel statements into binding governmental commitments.

What’s Likely to Be on the Agenda

Analysts at Brookings, the Center for Strategic and International Studies, and other think tanks have identified three “baskets” most likely to shape any formal US-China AI dialogue.

Military AI and strategic stability: Both countries already embed AI in military systems: autopilots in aircraft, computer vision in targeting, and pattern recognition in intelligence analysis. The realistic goal is not to ban military AI but to build common boundaries: shared expectations on testing rigour before deployment, prohibition of AI control over nuclear launch decisions, and possibly limits on autonomous weapons that operate without human oversight.

Testing and evaluation processes: Even without sharing access to sensitive systems, the two governments could agree on common processes: appropriate error rates, contamination-detection methods, multilingual evaluation tests, and remediation procedures when AI systems cause harm. Drawing a parallel with international airline safety standards, which require investigation and remediation after fatal incidents, analysts have proposed similar frameworks for AI failures.

Capacity building and global access: A crucial question is how both countries treat AI access in the developing world. China’s framework explicitly emphasises bridging the digital divide; the US prefers market-driven diffusion of its own technology stack. Some convergence may be possible on issues like open-source compliance standards, technical safety guidelines for open-source communities, and frameworks for handling cross-border AI services.

The Institutions Driving the Talks

Beijing has invested in new institutional infrastructure for these conversations. In February 2025, China launched the Chinese AI Safety and Development Association (CnAISDA) on the sidelines of the Paris AI Action Summit, placing China among a small number of jurisdictions with dedicated AI safety institutes. CnAISDA is currently focused primarily on international engagement rather than domestic supervision, essentially serving as China’s voice in global AI governance discussions.

The institute’s constituent members, particularly Shanghai AI Lab, are reportedly conducting genuine technical work on testing and evaluation, signalling that China’s frontier AI safety community is becoming more coherent. The US has its own AI Safety Institute infrastructure, though its role under the current administration’s deregulatory agenda is in flux.

Meanwhile, the International AI Safety Report 2026, chaired by Turing Award recipient Yoshua Bengio and drawing on contributions from over 100 AI experts nominated by more than 30 countries, including China, has provided the most comprehensive shared evidence base to date. The report does not make policy recommendations but synthesises the science behind AI risks, giving both Washington and Beijing a common factual foundation to negotiate from.

Why This Matters Now

Three forces make 2026 a uniquely consequential year for US-China AI diplomacy.

First, frontier capabilities are accelerating. Models are growing more capable, more autonomous, and more difficult to predict. The window to set norms before runaway capabilities lock in irreversible dynamics is narrowing.

Second, the geopolitical environment is hardening. Tariff battles, chip export controls, and trade tensions are pushing both economies toward decoupling in critical technology areas. Without insulated dialogue channels, even basic safety cooperation could fall victim to broader political ruptures.

Third, the rest of the world is choosing sides. Smaller nations, developing economies, and middle powers are being pulled into competing US and Chinese AI ecosystems. The norms set by Washington and Beijing separately or together will shape the rules of AI for everyone else.

The Realistic Outlook

No serious analyst expects these formal discussions to produce a grand treaty. The trust deficit is too deep, the competitive incentives too strong, and the visions of governance too divergent.

What is achievable is something narrower but still significant: insulated expert channels that survive political turbulence, modest technical agreements on testing and evaluation, hard rules around the most dangerous applications such as nuclear command and biological weapon design, and shared evidence bases that prevent each side from being surprised by the other’s capabilities.

History offers useful precedent. During the Cold War, US-Soviet arms control talks operated through insulated expert channels that continued even when broader political relations collapsed. Those talks did not end the arms race, but they made it less likely to end the world. The Advanced Encryption Standard, developed through an open international competition, shows how cooperation can grow first where sharing poses little risk and where both sides fear the same disasters.

That may be the most realistic path forward for AI. Not a global regulator. Not a shared technology stack. But a quiet, technical, durable channel that keeps the worst outcomes off the table while the competition continues everywhere else.

For the rest of the world watching this unfold, the message is clear: the two countries that built this technology will shape its rules, together or separately. Whether they choose collaboration or collision will define the AI century.