The theft of AI agent configurations is no longer a theoretical concern. It is now happening in the wild, and it marks a new direction for infostealer malware.
Researchers have documented a case in which an information stealer infiltrated a victim's OpenClaw AI environment and exfiltrated configuration files and gateway tokens. The incident signals a significant shift in infostealer tactics: beyond harvesting browser credentials, these tools now target the files that define a personal AI agent.
Hudson Rock, a cybersecurity firm, has identified this incident as a potential variant of the notorious Vidar infostealer, which has been active since 2018. The data capture, however, was not achieved through a custom OpenClaw module but rather through a broad file-grabbing routine designed to seek out sensitive data.
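To see why no custom module was needed, it helps to appreciate how little code a broad sweep requires. The sketch below is a defensive self-audit rather than the stealer's actual routine, and the filename patterns are assumptions based on the files reported stolen in this incident:

```python
import fnmatch
import os

# Filename patterns a generic "file grabber" routine might match on.
# Vidar-style stealers' exact patterns are not public; these are
# illustrative assumptions based on the files reported stolen here.
SENSITIVE_PATTERNS = ["openclaw.json", "device.json", "soul.md", "*.env", "*token*"]

def audit_exposed_files(root: str) -> list[str]:
    """Walk a directory tree and list files a broad grabber would likely sweep up."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if any(fnmatch.fnmatch(name.lower(), p) for p in SENSITIVE_PATTERNS):
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    for path in audit_exposed_files(os.path.expanduser("~")):
        print(path)
```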
Among the files stolen were the following (a hedged sketch of their likely contents appears after the list):
- openclaw.json: Containing the OpenClaw gateway token, the victim's email address, and workspace path.
- device.json: Holding cryptographic keys essential for secure operations within the OpenClaw ecosystem.
- soul.md: Revealing the core principles, guidelines, and ethical boundaries of the AI agent.
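OpenClaw's configuration schema isn't published in the report, but based on the fields Hudson Rock describes, a plausible openclaw.json might look like the hypothetical reconstruction below. The point is that one small JSON file concentrates the victim's credential, identity, and filesystem context:

```python
import json

# Hypothetical reconstruction of openclaw.json based on the fields reported
# stolen (gateway token, email, workspace path); the real schema may differ.
SAMPLE = """
{
  "gateway_token": "oc_gw_...redacted...",
  "email": "victim@example.com",
  "workspace": "/Users/victim/openclaw-workspace"
}
"""

config = json.loads(SAMPLE)
# Everything needed to authenticate as the victim sits in one file.
print(config["gateway_token"], config["email"], config["workspace"])
```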
The theft of the gateway authentication token is particularly worrying, as it could allow attackers to remotely access the victim's OpenClaw instance or even impersonate the client in authenticated requests.
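The danger is generic to bearer tokens: whoever holds the token is the client, no password required. A minimal sketch of the attacker's view, with the gateway URL and request body entirely hypothetical:

```python
import requests  # third-party: pip install requests

# Hypothetical endpoint and payload: OpenClaw's actual gateway API is not
# documented in the report. The point is generic: a bearer token is a
# complete credential on its own.
GATEWAY_URL = "https://gateway.example.com/api/v1/agent/run"  # assumed URL
STOLEN_TOKEN = "oc_gw_...redacted..."

resp = requests.post(
    GATEWAY_URL,
    headers={"Authorization": f"Bearer {STOLEN_TOKEN}"},
    json={"prompt": "list recent emails"},  # assumed request shape
    timeout=10,
)
print(resp.status_code)
```

Because the token alone authenticates the request, rotating or revoking it should be the first response to a suspected theft.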
"The malware's search for 'secrets' led to an unexpected treasure trove of operational context for the user's AI assistant," Hudson Rock noted. "As AI agents become integral to professional workflows, we can expect dedicated modules to decrypt and parse these files, much like we see with Chrome or Telegram today."
The disclosure comes at a time when OpenClaw, the open-source agentic platform, is facing wider security scrutiny. The maintainers have announced a partnership with VirusTotal to address these issues, including scanning for malicious skills and adding auditing capabilities.
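Details of the integration haven't been published, but a registry-side check could plausibly look like the following hash lookup against VirusTotal's public v3 API (the skill-archive workflow around it is assumed):

```python
import hashlib

import requests  # third-party: pip install requests

def vt_malicious_count(path: str, api_key: str) -> int:
    """Hash a skill archive and look it up via VirusTotal's v3 files endpoint."""
    with open(path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": api_key},
        timeout=30,
    )
    if resp.status_code == 404:
        return 0  # sample unknown to VirusTotal
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return stats["malicious"]  # number of engines flagging the file
```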
Last week, the OpenSourceMalware team revealed an ongoing malicious skills campaign on ClawHub, where threat actors are using decoy skills and external malware hosting to bypass detection.
"The shift to external hosting shows a clever adaptation by threat actors," said security researcher Paul McCarty. "AI skill registries are becoming prime targets for supply chain attacks."
A separate concern involves Moltbook, an online forum exclusively for AI agents. OX Security has found that AI agent accounts on Moltbook cannot be deleted, leaving users with no way to remove their data.
Meanwhile, SecurityScorecard's STRIKE Threat Intelligence team has identified hundreds of thousands of exposed OpenClaw instances, putting users at risk of remote code execution (RCE). An attacker who can execute arbitrary code on an instance with access to email, APIs, or cloud services gains a powerful pivot point into the rest of the victim's environment.
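Many such exposures reduce to a gateway listening on every interface instead of loopback only. A quick self-check, with the port number assumed since the report doesn't specify one:

```python
import socket

GATEWAY_PORT = 8080  # assumed port; substitute your instance's actual port

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Discover this machine's LAN address (connecting a UDP socket sends no packets).
probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.connect(("8.8.8.8", 80))
lan_ip = probe.getsockname()[0]
probe.close()

print("loopback:", reachable("127.0.0.1", GATEWAY_PORT))
print("LAN     :", reachable(lan_ip, GATEWAY_PORT))  # True = exposed beyond localhost
```

If the LAN probe succeeds, the instance is reachable from the local network at minimum, and possibly from the internet if the host firewall or router forwards the port.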
OpenClaw's popularity has skyrocketed since its launch in 2025, with the project passing 200,000 stars on GitHub. Its founder, Peter Steinberger, is joining OpenAI, and OpenClaw will continue to be supported as an open-source project.
These incidents raise pressing questions: how can we better protect AI agents and the sensitive data they hold, and are current security measures keeping pace with evolving threats? Join the discussion in the comments and share your thoughts.