Deepseek, hailed as a disruptor AI model by its creators, has now received some criticism for welcoming public access to its databases and control over its operations, including with the ability to assess internal data. The data exposed includes chat history, backend data and secret keys.

The AI model, which undercut costs to create competitor technologies, suffers from critical security issues which can put users at risk, according to data from Wiz and SecurityScorecard. Wiz discovered two open ports in the ClickHouse database without authentication in place to protect sensitive logs, chat messages and passwords. This exposes the rapid growth of AI services globally whilst compromising on security. The inherent cyber risks of AI applications can directly link to underdeveloped infrastructure and tools. AI innovation for many countries is focused on increasing competitive AI hubs with more start-ups, however, customer data must always remain a priority. 

Emma Zaballos, Security Researcher at CyCognito says DeepSeek AI has a dangerous combination of exposed databases, weak encryption, AI jailbreak susceptibility, and SQL injection risks”. According to SecurityScorecard, DeepSeek-R1 failed 91% of security tests for AI jailbreak attempts and 86% for prompt injection attacks, indicating that adversaries can easily manipulate responses.

DeepSeek’s infrastructure is accused of transmitting a broad scope of data to Chinese entities.