
Fake DeepSeek AI Packages Stealing Credentials - A Case of AI as a Convenient Ally for Hackers

AI is at the center of today’s technology conversation - and attackers are taking full advantage. In late January 2025, two malicious Python packages, “deepseeek” and “deepseekai,” appeared on the Python Package Index (PyPI). They claimed to provide developer tools for DeepSeek, the Chinese AI startup behind the popular R1 large language model. Instead, the packages executed infostealer scripts, quietly siphoning sensitive data from unsuspecting users.

Behind the Scenes 

The malicious packages were uploaded by a user named “bvk,” an account created in June 2023 with zero prior activity. Once installed, the packages ran console commands named after themselves (deepseeek or deepseekai) to harvest environment variables - the keys to practically every service a developer might use, including AWS S3 credentials, database connection strings, and tokens granting access to critical infrastructure. 

The stolen data was then exfiltrated to a command-and-control server hosted on Pipedream, a legitimate automation service. Interestingly, researchers at Positive Technologies Expert Security Center (PT ESC) noticed telltale signs in the script’s comments that an AI assistant had been used to help generate and document the infostealer code. Even hackers, it seems, find AI a convenient ally. 
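The harvesting step described above can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the actual malware: the helper name `collect_secrets` and the `SENSITIVE_HINTS` filter are illustrative assumptions, and the real packages reportedly grabbed the environment wholesale and posted it to a Pipedream endpoint rather than printing anything.

```python
import os

# Illustrative only: an infostealer of this kind simply reads the
# environment variables visible to the current process.
SENSITIVE_HINTS = ("KEY", "TOKEN", "SECRET", "PASSWORD", "AWS")

def collect_secrets(environ=os.environ):
    """Return variables whose names suggest they hold credentials."""
    return {
        name: value
        for name, value in environ.items()
        if any(hint in name.upper() for hint in SENSITIVE_HINTS)
    }

# The real packages sent the harvested data to a Pipedream-hosted
# server; here we only count how many variables would leak.
print(len(collect_secrets({"AWS_ACCESS_KEY_ID": "x", "HOME": "/home"})))  # → 1
```

The takeaway is how little code the attack needs: there is no exploit involved, just ordinary standard-library calls running at install time with the developer’s own privileges.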

Swift Discovery and Removal 

Positive Technologies promptly reported the packages to PyPI, prompting immediate quarantine. The packages were then fully removed from the platform, but not before 222 developers had installed them. According to PT ESC’s breakdown, most downloads occurred in the United States, followed by China, Russia, Germany, Hong Kong, and Canada. Downloads arrived through several channels, from direct browser retrieval to pip install commands. 

While 222 downloads might seem modest, consider that environment variables often hold a project’s most sensitive information. A single stolen token could open the door to broader network access or even an entire CI/CD pipeline. 

Key Lessons 

  1. Validate New Packages: Malicious uploads often appear under dormant or low-activity accounts. Before installing a newly published library, check the author’s reputation, read the project details, and see if there’s an established community or reviews.
  2. Secure Your Environment Variables: If you unknowingly installed these packages - or suspect any compromise - rotate your API keys, database passwords, and authentication tokens immediately. Keep an eye on logs for unusual activity, such as unauthorized logins or unexpected data transfers.
  3. Monitor Supply Chain Risks: PyPI is widely trusted as a default repository for Python projects, but it isn’t immune to abuse. This event highlights the importance of supply chain security: always use automated scanning tools where possible, and stay informed about newly discovered malware campaigns.
  4. Watch Out for AI “Buzz”: Attackers know developers are hungry for AI integrations, so they will exploit that excitement. Whether a package claims to harness powerful AI models or is simply a utility library, be sure to confirm its authenticity.
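As a concrete follow-up to lessons 1 and 2, you can check whether either of the package names from this incident is present in your current Python environment by scanning installed distribution metadata. A minimal sketch using only the standard library (the helper name `find_installed` is mine, not from any tool):

```python
import importlib.metadata

# The two package names reported in this incident.
MALICIOUS = {"deepseeek", "deepseekai"}

def find_installed(names):
    """Return which of the given package names are installed here."""
    installed = {
        (dist.metadata["Name"] or "").lower()
        for dist in importlib.metadata.distributions()
    }
    return sorted(set(names) & installed)

hits = find_installed(MALICIOUS)
if hits:
    print("Found known-malicious packages - uninstall and rotate credentials:", hits)
else:
    print("No known-malicious packages found.")
```

A one-off check like this is no substitute for automated dependency scanning, but it takes seconds and covers exactly the environment your code actually runs in.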

Conclusion 

The DeepSeek impersonation attack shows how quickly threat actors can seize on hot trends - AI in this case - to target developers. Thanks to rapid detection from PT ESC, the malicious packages were pulled offline within hours, limiting the fallout. However, this serves as a powerful reminder that we all share responsibility for our digital supply chain. 

As AI continues to reshape the tech landscape, we need to be just as innovative in guarding our projects. Verify dependencies, rotate credentials regularly, and maintain a healthy dose of skepticism when installing packages that promise the next best AI integration. A few extra precautionary steps can save you - and your infrastructure - from major security headaches down the road. 
