**DeepSeek Warns Its Open-Source AI Models Are Vulnerable to ‘Jailbreaking’**
*By Dwaipayan Roy | Sep 21, 2025, 02:58 PM*
Hangzhou-based start-up DeepSeek has issued a warning regarding the security vulnerabilities in its artificial intelligence (AI) models. According to the company, their open-source AI models are at risk of being “jailbroken” — a tactic employed by malicious actors to bypass model restrictions and generate harmful or unintended content.
The findings were published in a peer-reviewed paper featured in the prestigious academic journal *Nature*. The study sheds light on the potential exploitation methods targeting open-sourced AI models, raising concerns over their safety and robustness against adversarial attacks.
**Comprehensive Testing Protocols**
DeepSeek conducted thorough evaluations of its AI models using a combination of industry-standard benchmarks and internal testing procedures. Fang Liang, a noted expert from China’s AI Industry Alliance (AIIA), highlighted that the *Nature* paper offers detailed insights into DeepSeek’s rigorous testing methodology.
Part of their assessment involved “red-team” exercises modeled after a framework introduced by Anthropic. During these tests, skilled testers attempted to manipulate the AI to produce harmful or inappropriate speech, enabling DeepSeek to identify and address critical vulnerabilities proactively.
**Risk Mitigation and Industry Response**
While many U.S.-based AI companies have openly discussed the risks linked to their rapidly advancing technologies, firms in China have largely remained silent on such issues. DeepSeek stands out by taking a proactive stance, having previously evaluated severe “frontier risks” associated with AI development.
This forward-leaning approach aligns DeepSeek with leading organizations like Anthropic and OpenAI, which have implemented comprehensive risk mitigation strategies to safeguard against the misuse and unintended consequences of their AI models.
As AI continues to evolve, DeepSeek’s findings serve as a crucial reminder of the importance of transparency, rigorous testing, and cooperation within the global AI community to ensure safe and responsible deployment.
https://www.newsbytesapp.com/news/science/chinese-ai-firm-warns-of-jailbreak-risks-in-its-models/story