China’s DeepSeek AI, and the missing discourse on privacy and ethics

As the global West and emerging economies obsess over DeepSeek’s disruptive commercial popularity in the AI development space, the more value-laden considerations of privacy and ethics cannot be swept under the rug.

As U.S. President Donald Trump began his second term, among his first orders of business was the launch of the Stargate Project—a $500 billion artificial intelligence infrastructure partnership with large technology firms such as OpenAI, SoftBank, Oracle, Microsoft, and NVIDIA.

What enraptured the world, however, was the disruptive introduction of the Chinese AI company DeepSeek, which sent shockwaves across the tech industry and exposed vulnerabilities in AI governance, national security, and global data privacy frameworks.

DeepSeek’s rapid success in the AI innovation space has ignited fierce debates on technological sovereignty. It has already been alleged that DeepSeek relied on OpenAI’s training data without explicit permission. The allegation raises ethical questions—not only about data integrity but also about the hypocrisy of existing AI leaders.

For years, OpenAI and other Western firms have been criticized for scraping vast amounts of data from the internet—often without user consent—to train their large language models. Now, DeepSeek’s approach seems to mirror those same methods, challenging the very foundations upon which OpenAI and other tech behemoths built their dominance.

This raises a critical question: Does OpenAI itself scrape data without permission? If so, is it now falling victim to its own methods?

The largely West-led backlash against DeepSeek underscores the grave consequences of leaving the AI development space largely unregulated. Beyond regulation, the industry seems content to ignore data ethics until geopolitical pressure compels greater scrutiny.

Meanwhile, regulatory bodies worldwide are scrambling to understand, adapt, and respond to the implications of this new AI arms race. 

AI Rivalry: Tariffs or Bans?

Tariffs appear to be the Trump administration’s go-to mechanism for countering economic and technological threats, but it is doubtful whether they alone can curb China’s rapid AI expansion. The U.S. has already taken aggressive action against Chinese tech firms, banning Huawei from 5G networks and forcing TikTok to divest its U.S. operations over concerns about data privacy and potential state surveillance. Now, DeepSeek presents another challenge, one that may not be as easy to address through tariffs alone.

Banning DeepSeek could further escalate tensions between the U.S. and China. But failing to act could enable AI models that potentially train on unauthorized datasets and pose privacy risks to U.S. users.

Recently, Dario Amodei, the Chief Executive Officer of Anthropic, an AI public-benefit startup, asserted that blocking AI chip exports to China is of “existential importance”. For the U.S., AI development is evidently not merely an economic issue but a matter of national security.

But is limiting access to advanced chips enough? 

The Expanding Chinese AI Ecosystem

In China, besides DeepSeek, Alibaba Cloud, a subsidiary of Alibaba, has also unveiled an upgraded AI model, Qwen 2.5 Max, which promises enhanced reasoning, mathematical capabilities, and multimodal processing.

Moonshot AI, an Alibaba-backed startup, has also entered the large language model race with claims of superior performance in logic-based tasks and computational reasoning.

DeepSeek’s rapid success in the AI innovation space has ignited fierce debates on technological sovereignty.

However, these expansions bring with them the need for heightened scrutiny of privacy, security, and ethics.

In Europe, Italy’s data protection authority blocked DeepSeek from processing Italian users’ data after it was dissatisfied with the Chinese company’s responses on its data collection and processing practices. Earlier, in December 2024, Italy had fined OpenAI $15.6 million for violating personal data protection laws through ChatGPT.

Taiwan has also put restrictions on DeepSeek, banning its use in all government departments due to national security risks. 

AI-powered information warfare could become a significant geopolitical tool, with adversarial states leveraging AI-driven misinformation, surveillance, and cyberattacks to destabilize governments.

The crucial dilemma facing the world is how to balance technological progress with the need for stronger safeguards against AI-enabled surveillance, data exploitation, and regulatory non-compliance.

What Is the Chinese AI App DeepSeek?

DeepSeek is an AI development firm based in Hangzhou, China, that specializes in advanced language models and artificial intelligence technologies. The company was founded in May 2023 by Liang Wenfeng, co-founder of High-Flyer, a hedge fund. 

One of DeepSeek’s standout offerings is its DeepSeek-R1 reasoning model, released in January 2025. The model has garnered widespread attention for surpassing OpenAI’s ChatGPT in several benchmark tests. As a result, DeepSeek-R1 quickly became the most downloaded productivity app across major platforms, including the Google Play Store and Apple’s App Store.

What sets DeepSeek apart is cost-efficiency. Compared with OpenAI’s reported $100 million investment in ChatGPT, DeepSeek’s model reportedly cost only $5.6 million. Awe and concern swept through the global tech industry, sending stock markets everywhere into a spiral, most prominently on Wall Street.

As an indication of its material impact, NVIDIA’s market value plummeted by $600 billion, while its co-founder Jensen Huang lost $20.1 billion in personal wealth. Larry Ellison, Oracle’s co-founder, lost $22.6 billion.

The Shift from OpenAI to DeepSeek

DeepSeek’s biggest draw is that it offers enterprises looking to integrate AI solutions a far more affordable price point than OpenAI. It is capturing considerable attention among smaller firms that previously could not afford advanced AI services. Its pricing is estimated to be 20 to 40 times cheaper than OpenAI’s.

DeepSeek is capturing considerable attention among smaller firms who previously couldn't afford advanced AI services.

“It marks a significant step toward democratizing AI,” Seena Rejal, chief commercial officer of British firm NetMind.AI, which has integrated DeepSeek’s technology for its predictive analytics services, told the South China Morning Post. "With the affordable costs, we’re now able to explore more possibilities with AI in sectors that were once too costly for such solutions."

This shift is not without its challenges. As more enterprises and startups adopt DeepSeek's cost-effective services, concerns around the ethical implications and security of training large AI language models are rising.

If training data is not carefully curated or privacy standards remain unmet, harmful biases and inaccuracies could threaten millions of users. With businesses potentially compromising on the protection of sensitive information in the rush to cut costs, security concerns abound.

DeepSeek has made AI more accessible, and the industry must now grapple with the responsibility of ensuring that its widespread deployment does not outpace the ethical safeguards needed for safety and fairness.

Data Privacy and Ethics

News bulletins dwell on the impact of AI on stock markets and corporate competition. Missing amidst this noise is a more value-laden concern: the threat of gross data privacy violations and unforeseen ethical dilemmas.

Corporate interests frequently sideline the ethical concerns surrounding user data. Data is the new currency and its unhindered exploitation poses serious challenges to individual consent, privacy, and control over personal information. 

Recently, Wiz Research, the research arm of cloud security firm Wiz, discovered a major data exposure at DeepSeek, with over one million sensitive records left publicly accessible.

Stakeholders must work together to create frameworks that balance innovation with privacy. Stricter data protection laws, like the European Union’s General Data Protection Regulation, are a crucial starting point. The tech industry needs to actively embrace privacy by design and ensure that ethical considerations are embedded in AI development.

Users must be empowered with the knowledge and mechanisms to control their own personal data and to make informed decisions about what they share and how it is used.

Reactions and Challenges

On January 31, 2025, Texas Governor Greg Abbott became the first U.S. official to ban DeepSeek, citing potential risks posed by foreign influence on American infrastructure. He stated:

"Texas will not allow the Chinese Communist Party to infiltrate our state’s critical infrastructure through data-harvesting AI and social media apps." 

Abbott’s move was seen as part of a broader trend of states taking unilateral action to protect sensitive data and curb the influence of foreign technology companies. But are such statewide bans enough, or do we need more robust federal AI regulations? 

Missing amidst this noise is a more value-laden concern—the threat of gross data privacy violations and unforeseen ethical dilemmas.

Lawmakers and government officials have expressed the need for oversight, while tech firms in the U.S. are pushing for self-regulation. The reluctance to impose strict federal regulations could leave the U.S. vulnerable in the global race for AI dominance, especially when compared to the EU and countries like China, which have implemented more stringent control mechanisms. 

The European Union has already taken decisive action to regulate AI, most notably with the enactment of the Artificial Intelligence Act (AI Act). The EU's approach classifies AI models like DeepSeek under “High-Risk AI” and mandates full transparency on data sources, training methodologies, and ethical compliance. AI systems that affect critical areas such as healthcare, justice, or transportation face stringent requirements for safety, fairness, and accountability. This regulatory approach aims to mitigate the potential harms posed by AI while promoting its ethical use.

What Comes Next?

At the core is the challenge of ethics, and what type of ethics informs these AI models. AI must be built on the pillars of transparency, fairness, and accountability, ensuring that technological advancements are aligned with humanistic values. Ethical guidelines should be integrated from the outset of AI development, not merely as an afterthought or response to crises.

The missing chapter in AI development is privacy and ethics. As AI systems become more embedded in our lives, they will increasingly influence how we work, interact, and even think. Without privacy protections and ethical oversight, AI could become a tool for surveillance, discrimination, and exploitation.

The challenges ahead are how to hold AI enterprises accountable for violations of data privacy, and how governments can enforce global ethical standards for AI.
