DeepSeek: China's AI Startup Challenges OpenAI Amidst Investigation
DeepSeek's R1 AI Model Sparks Investigation Over Possible Data Misuse

In January 2025, DeepSeek, a Chinese artificial intelligence (AI) startup, introduced its R1 model, an open-source AI assistant that quickly gained traction, overtaking OpenAI's ChatGPT as the top free app on the Apple App Store. This rapid ascent prompted OpenAI and Microsoft to investigate whether DeepSeek made unauthorized use of their proprietary technology.
DeepSeek's R1 Model: A Competitive Leap
DeepSeek's R1 model has been lauded for its advanced reasoning capabilities, particularly on mathematics and coding tasks. The startup claims that R1 achieves these results at significantly lower cost and with far less hardware than its competitors. For instance, while leading AI companies train their chatbots on supercomputers with as many as 16,000 graphics processing units (GPUs), DeepSeek reportedly needed only about 2,000 of Nvidia's H800-series GPUs and trained for around 55 days at a cost of US$5.58 million.
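A quick back-of-envelope calculation shows how the article's figures fit together; the numbers below are the round values reported above, not official DeepSeek accounting:

```python
# Sanity-check DeepSeek's reported training figures
# (assumed round numbers from the article: 2,000 GPUs, 55 days, US$5.58M).
num_gpus = 2_000            # Nvidia H800-series GPUs (as reported)
days = 55                   # reported training duration
total_cost_usd = 5_580_000  # reported cost, US$5.58 million

gpu_hours = num_gpus * days * 24                 # total GPU-hours consumed
cost_per_gpu_hour = total_cost_usd / gpu_hours   # implied per-GPU-hour rate

print(f"{gpu_hours:,} GPU-hours, ~${cost_per_gpu_hour:.2f}/GPU-hour")
# → 2,640,000 GPU-hours, ~$2.11/GPU-hour
```

The implied rate of roughly $2 per GPU-hour is in line with typical cloud rental pricing, which is why the headline cost figure struck observers as plausible yet remarkably low.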
Investigation into Potential Unauthorized Use
The success of DeepSeek's R1 model has raised concerns among industry leaders. Microsoft and OpenAI are investigating whether DeepSeek utilized OpenAI's proprietary technology to develop its AI model. Microsoft's security team identified suspicious activity involving the use of OpenAI's API, suggesting that DeepSeek may have bypassed restrictions to acquire large amounts of data, potentially violating OpenAI's terms of service.
Implications for AI Governance and Export Controls
The situation has prompted broader discussions about AI governance and export controls, with particular concern over military applications and competitive advantage. The investigation remains ongoing, and neither Microsoft nor DeepSeek has provided further comment.
Conclusion
As the AI industry continues to evolve, the case of DeepSeek underscores the challenges in protecting intellectual property and ensuring ethical practices in AI development. It also highlights the need for robust international cooperation to address potential misuse of AI technologies.