HomeNews
Request a Demo

Request a Demo

Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.

DeepSeek and the new arms race

Published on
May 22, 2025
Ideas
Back to news

The New Arms Race

The United States stands at a critical crossroads in the development of artificial intelligence. As policymakers, major technology companies, and AI research institutions grapple with whether to keep large language models (LLMs) proprietary or openly available, the stakes have never been higher. Against this backdrop, Meta remains an outlier among U.S. tech giants by releasing cutting-edge LLMs under permissive licenses. Yet the sudden emergence of China’s DeepSeek v3 — a model claimed to outperform GPT-4 at a fraction of the cost — has changed the competitive landscape. For developers worldwide, access to a more powerful and cheaper option is compelling, raising a crucial strategic question: Will restricting open-source AI in the U.S. simply shift innovation and market share to Chinese competitors?

The Debate Over Openness

The struggle between open-source and closed-source paradigms in software is hardly new. Advocates of openness emphasise that publicly sharing model architectures and weights accelerates innovation, enhances community feedback, and reduces reliance on corporate gatekeepers. They also argue that advanced AI is too vital to remain locked within the walled gardens of a few large companies. Historically, open-source efforts — like Linux or Kubernetes — have proven their value in driving down costs and expanding adoption across industries.

Critics maintain that open LLMs pose serious risks: they can be misused for malicious ends, infringe on copyrights, or devalue significant corporate R&D efforts. For many businesses, proprietary systems promise stronger guardrails, clear accountability, and robust monetisation. This is an old argument; Microsoft’s internal memos from the 1990s (the “Halloween Documents”) reveal how entrenched players can see open-source as a direct threat to both market share and revenue. Today, that same wariness persists, especially as LLMs grow more powerful and ubiquitous.

Figure 1 Elon Musk, owner of xAI supports regulating opensource AI.

U.S. legislative efforts like SB1047 — which would have placed an unprecedented accountability for application use cases on the US developers of advanced AI systems — reflect these tensions, with supporters insisting on stricter oversight to safeguard national security and the public interest, while opponents worry about stifling innovation and driving researchers overseas. Although the bill was ultimately vetoed, similar proposals highlight ongoing concerns over how to balance open knowledge and technological leadership.

B2B Preference for Open Solutions

Long before the rise of contemporary LLMs, businesses worldwide embraced open-source software as a cornerstone of their tech stacks. Linux, once a scrappy upstart, now dominates servers and supercomputers. Google’s open-source Android and community-driven projects like Kubernetes demonstrate how collective expertise can surpass the development cycles of any single company.

Enterprises value open-source for several reasons:

  1. Lower Costs: Eliminating expensive proprietary licenses.
  2. Vendor Independence: Avoiding lock-in to specific platforms.
  3. Community Collaboration: Rapid innovation and bug fixes from a global talent pool.

In B2B environments, reliability, security, and interoperability are paramount. Open-source solutions can be rapidly tailored to meet diverse needs — an advantage that becomes even more compelling in the AI arena.

For instance, at Automatise, we rely on open-source LLMs (Llama 3.3 at 70B and Llama 3.1 at 405B) self-hosted on Nvidia H200 GPUs for our legal AI platform Cicero. By bypassing per-token economics, we enable 20–50 times more AI calls at roughly 20% of the proprietary API cost. This has led to faster deployments, completely new architectures enabling more robust use cases, and significantly greater customer value.

The larger point remains: if businesses have historically favoured open solutions across various domains, why should AI be any different?

China’s Game-Changer

The U.S. dispute over open-source AI has been abruptly reframed by the advent of China’s DeepSeek v3. Early indications suggest that DeepSeek v3 challenges GPT-4 in raw capabilities — ranging from complex reasoning to nuanced linguistic analysis — while undercutting it significantly on cost: $0.28 per million tokens for inference vs GPT4o’s $10.00 via Azure, a roughly 35x cost reduction. For developers and enterprises operating in cost-sensitive environments, that pricing is hard to overlook.

The affordability factor alters the economics of AI adoption. Companies in regions with limited R&D budgets — or where large-scale AI experimentation was previously too costly — now have a more powerful, budget-friendly option. Given how critical language AI is becoming for everything from customer service to analytics, even modest cost savings can rapidly translate into substantial competitive advantages.

Deepseek v3 also raises a critical strategic point. If a high-performance and inexpensive Chinese model remains freely available, domestic U.S. restrictions on open-source AI may prove self-defeating. Limiting open access at home does little to prevent global developers from adopting foreign models — potentially ceding both technological leadership and market share to competitors with fewer constraints.

Figure 2 Deepseek v3 offers the most affordable frontier model by a factor of 100.

Global Economics & Soft Power

The disruptive potential of Deepseek v3 in cost-conscious markets — Africa, Southeast Asia, and Latin America — echoes earlier success stories like TikTok or Huawei. Yet there is more at stake than pricing. Recent comparative data suggests that Western and Chinese AI models embed distinct value orientations into their outputs.

One such analysis compares how LLMs respond to various ideological tags — for example, “Worker Rights,” “National Way of Life,” or “Collective Harmony” — depending on the language of the prompt (English vs. Chinese) and the origin of the model (Western vs. non-Western).

  • Western-trained LLMs often emphasise concepts like freedom, human rights, and worker protections.
  • Chinese LLMs may prioritise collective harmony, economic control, or national identity in how they interpret language and propose solutions.

Such differences reflect deeper ethical and cultural underpinnings, shaping how these AI systems make judgments or suggest policy ideas. This introduces a dimension of soft power. If a Chinese-developed LLM becomes a go-to choice for millions of users in developing nations, its embedded worldview could subtly influence norms, ethics, and problem-solving approaches. Over time, AI could become a platform for cultural and geopolitical sway — a digital channel for reinforcing particular value systems.

While Western policymakers and corporate leaders debate the merits of open or closed AI models, a fundamental shift may already be happening on the ground. Cost, performance, and availability drive adoption, but the values coded into these LLMs may exert a far-reaching influence on how societies and industries evolve.

Security Threat Models & Hidden Capabilities

As LLMs become more deeply embedded in critical infrastructure — from chatbots to supply-chain analytics — the risk of hidden vulnerabilities grows. One concern is the potential for “delayed activation” or back doors, where a model might appear benign under normal usage yet can be triggered to spread disinformation, leak sensitive data, grant unauthorised access or simply stop working.

Competition and transparency are crucial to mitigating these threats. When multiple reputable models are available — and openly scrutinised — malicious code or anomalous behaviours can be caught more easily. By contrast, a single dominant or secretive provider would face fewer checks on possible misuse.

This highlights the importance of open-source or at least transparent approaches. While secrecy can offer short-term security benefits, broad public review is often the strongest safeguard for ensuring LLMs remain trustworthy over time.

Corporate Interests, Legal Grey Areas, and Commercially Aligned Trust

Even though many Western companies tout the importance of AI transparency, their core motivation is typically profit rather than geopolitics. This focus on commercial gain actually makes them more appealing in regulated fields like law or insurance: reputational and liability concerns incentivise them to avoid scandals that would jeopardise trust. While not immune to controversies (e.g., Cambridge Analytica), these firms generally prioritise monetising user data over advancing state agendas.

In high-stakes sectors, this notion of “commercially aligned trust” offers a peculiar reassurance. Companies aiming to protect their market image and revenue streams are less likely to introduce covert back doors or intentionally violate user privacy. Thus, even as large Western corporations maintain trade secrets and intellectual property, their motivation to remain profitable can align with user interests — especially in an era where AI drives daily business operations.

Ensuring Open Alternatives Remain Viable

China’s rapid progress with models like Deepseek v3, alongside ongoing ethical and security concerns, underscores the urgency of preserving a robust ecosystem of open-source or transparent LLMs in the West. If access to advanced AI is heavily restricted domestically, developers and enterprises will inevitably look elsewhere. That exodus risks undercutting not only America’s commercial competitiveness but also its influence on the global norms that govern AI.

To keep open models viable, policymakers and industry leaders should:

  1. Support Public–Private Collaborations: Invest in and incentivise open-source AI research.
  2. Clarify Legal Frameworks: Establish clear guidelines on copyright, data usage, and liability to reduce uncertainty.
  3. Promote Transparent Governance: Encourage a culture of auditing and peer review to identify hidden vulnerabilities.

For businesses, adopting open-source AI offers flexibility, transparency, and security — critical advantages when hidden back doors or external political agendas are a concern. By relying on a wide community of developers and researchers, companies can detect security risks faster and adapt solutions to unique enterprise challenges.

Conclusion: A Strategic Imperative

The future of AI will be shaped not just by raw computational power or model size, but by who controls the technology, how widely it can be accessed, and whose values it ultimately represents. Models like Deepseek v3 show that powerful alternatives will keep emerging — often beyond the reach of U.S. regulations. If the West hopes to maintain technological leadership and safeguard its ethical frameworks, preserving open-source AI is essential.

In practice, this means offering attractive, transparent alternatives that can be vetted and adapted by a global community. It does not entail dismantling profit motives or ignoring the need for sensible regulation. Instead, the goal is to foster an environment where multiple players — corporate, academic, and government — can develop, scrutinise, and improve upon advanced AI.

Only through competition, openness, and diversity of approaches can we ensure that no single proprietary or state-sponsored platform monopolises how the world interacts with and interprets information. In this emerging AI landscape, the West’s capacity for open innovation and collaborative checks and balances may well be its strongest asset. If it is sacrificed for the sake of tighter control, the West might not only lose the AI race but also cede an entire generation’s worth of ethical, economic, and cultural influence to new contenders.

By Joseph Rayment - Founder and CEO, Automatise

Featured

News

Cicero is one of the first to achieve ISO42001 certification

Cicero is one of the first AI companies globally to achieve a stringent ISO/IEC 42001, an international standard that specifies requirements for managing Artificial Intelligence System within organisations.
Read original post
Read more
Ideas

DeepSeek and the new arms race

In this article we discuss the crossroads that face AI development with an introduction of DeepSeek and geopolitics into the AI race. We talk about the strategic risks of restricting open-source AI in the U.S., which could shift innovation and market share to Chinese competition.
Read original post
Read more
Partnerships

Colin Biggers & Paisley partners with leading AI legal platform Cicero

Colin Biggers & Paisley is proud to announce its participation in the trial of Cicero, a secure AI-enabled technology with the potential to transform workflows in our litigation, disputes and investigations teams.
Read original post
Read more