Separate analyses by Japanese and US security companies have found that DeepSeek’s R1 AI model can be used for crimes such as creating malware and making Molotov cocktails. This is reportedly due to the model’s lack of security measures.
According to Takashi Yoshikawa of the Tokyo-based security company Mitsui Bussan Secure Directions, Inc., R1 generated ransomware source code when given a prompt designed to elicit inappropriate answers. While the DeepSeek model included in its response a warning not to use the information for malicious purposes, Yoshikawa noted that other generative AI models, such as ChatGPT, had refused to answer the same prompt outright.

Yoshikawa’s findings were corroborated by an investigative team at Palo Alto Networks, a US-based security firm. The team confirmed that the R1 model can be prompted to provide inappropriate answers without the user needing any professional knowledge, and that the answers it gives can be easily implemented by anyone.
The team suspects that DeepSeek prioritised a swift market release over security measures, leaving R1 without the safeguards needed to prevent misuse. The model took the world by storm when it was first released in January, delivering strong performance at a lower cost than existing models. However, it is becoming clear that this comes at a price.

DeepSeek has also come under scrutiny over privacy issues, with one of the major concerns being that users’ personal information and other data are reportedly stored on servers in China. Many countries, including South Korea, Australia, and Taiwan, have restricted or even banned the model’s use.
(Source: The Star)