Anthropic is facing mounting scrutiny after an accidental leak in late March exposed more than half a million lines of “Claude Code” source code. In the immediate aftermath, the company moved quickly to limit the spread of the material, filing DMCA takedown requests with GitHub and other platforms. Those takedowns reportedly removed around 100 repositories hosting the leaked code, but they swept up far more than intended: over 8,100 legitimate repositories that merely depended on Anthropic’s official codebase were pulled down along with them.
After backlash from developers caught in the crossfire, Anthropic scaled back the action and apologized to those whose projects were mistakenly removed. Still, the damage from the initial move hasn’t been limited to inconvenience. As early analysis of the leaked code circulated, critics began arguing that the aggressive takedown campaign looked less like routine IP protection and more like an effort to erase traces before outsiders could examine what the code was doing.
What researchers say they found in the leaked Claude Code has fueled several serious allegations, each with major privacy and security implications: sentiment monitoring, identity concealment, risky automation, and expansive access to local files.
One of the most widely discussed claims centers on emotional surveillance through sentiment analysis. According to reporting cited in the post, the tool includes mechanisms that scan user prompts for signs of frustration, flagging language such as “this sucks” or “so frustrating,” and storing those prompts for later analysis. For users who expect an AI coding assistant to focus strictly on code and productivity, the idea that emotional signals might be watched and retained could feel like a major boundary crossing.
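It helps to see how little machinery such a mechanism would require. The sketch below is a hypothetical illustration, not code recovered from the leak: a naive keyword matcher that flags and timestamps prompts containing frustration phrases. The names FRUSTRATION_MARKERS and flagSentiment are invented for the example.

```typescript
// Hypothetical illustration only: a naive keyword-based frustration detector.
// Nothing here is drawn from any leaked source; the names are invented.

const FRUSTRATION_MARKERS = ["this sucks", "so frustrating", "why won't this work"];

interface SentimentFlag {
  prompt: string;
  matches: string[];
  flaggedAt: string; // ISO timestamp
}

function flagSentiment(prompt: string): SentimentFlag | null {
  const lower = prompt.toLowerCase();
  const matches = FRUSTRATION_MARKERS.filter((marker) => lower.includes(marker));
  if (matches.length === 0) return null;
  // In the alleged design, a record like this would be retained for later analysis.
  return { prompt, matches, flaggedAt: new Date().toISOString() };
}

// A frustrated prompt produces a stored record; a neutral one does not.
console.log(flagSentiment("This sucks, the build keeps failing"));
console.log(flagSentiment("Please refactor this function"));
```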
Another allegation involves deliberate deception through identity concealment. Analysis described in the post suggests the system may contain functions designed to obscure where generated code came from. In this telling, when the tool contributes to public projects, internal labels or codenames like “Claude Code” are automatically removed, potentially making the output appear as if it were written entirely by a human. If accurate, critics argue this could enable AI-assisted code contributions to blend in without transparency, complicating attribution and trust in open-source collaboration.
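Purely as an illustration of the alleged behavior, and assuming nothing about how the real tool is written, attribution scrubbing can be as small as a few regular-expression replacements. Every identifier and pattern below is invented for this sketch.

```typescript
// Hypothetical sketch of stripping tool attribution from generated output.
// scrubAttribution and these patterns are illustrative, not recovered code.

const ATTRIBUTION_PATTERNS: RegExp[] = [
  /Generated with Claude Code/gi,
  /Co-Authored-By:\s*Claude.*$/gim,
];

function scrubAttribution(text: string): string {
  let cleaned = text;
  for (const pattern of ATTRIBUTION_PATTERNS) {
    cleaned = cleaned.replace(pattern, "");
  }
  // Collapse any blank lines left behind so the removal is not obvious.
  return cleaned.replace(/\n{3,}/g, "\n\n").trim();
}

const commitMessage = [
  "Fix race condition in worker pool",
  "",
  "Generated with Claude Code",
  "Co-Authored-By: Claude <noreply@example.com>",
].join("\n");

console.log(scrubAttribution(commitMessage)); // attribution lines are gone
```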
The leak also revived debate around tool autonomy, particularly a component referred to as the “YOLO” protocol (You Only Live Once). The post claims the code includes a mechanism (described as classifyYoloAction) that allows the AI to decide whether certain actions can be performed without consulting the user. Rather than strict, rule-based permission checks, the model itself performs the risk assessment and may proceed based on its own judgment. Detractors say this “all-or-nothing” approach conflicts with common AI safety expectations, especially for tools that can touch developer environments, run commands, or modify files.
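The distinction critics are drawing is between a hard-coded permission gate and a model-mediated one. The sketch below contrasts the two using invented names (requiresConfirmationByRule, classifyRisk); it is not a reconstruction of classifyYoloAction, whose actual signature and logic are unknown.

```typescript
// Hypothetical sketch: rule-based permission gate vs. model-mediated risk check.
// classifyRisk stands in for an LLM call; nothing here comes from the leak.

type Action = { command: string; writesFiles: boolean };

// Strict, rule-based check: anything that writes files or runs a command asks first.
function requiresConfirmationByRule(action: Action): boolean {
  return action.writesFiles || action.command.trim().length > 0;
}

// Model-mediated check: the decision is delegated to a classifier whose judgment
// may vary. A real system would send the action description to a model and parse
// its verdict; this stub simulates that call.
async function classifyRisk(action: Action): Promise<"safe" | "risky"> {
  return action.writesFiles ? "risky" : "safe";
}

async function maybeRunUnattended(action: Action): Promise<void> {
  const verdict = await classifyRisk(action);
  if (verdict === "safe") {
    console.log(`Running without asking: ${action.command}`);
  } else {
    console.log(`Pausing for user approval: ${action.command}`);
  }
}

maybeRunUnattended({ command: "ls -la", writesFiles: false });
maybeRunUnattended({ command: "rm -rf build/", writesFiles: true });
```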
Perhaps the most alarming claim is about extensive file access and data upload behavior. The post argues that the system effectively vacuums up the local working directory, with the implication that files the assistant reads could be uploaded to Anthropic’s cloud. A security researcher quoted in the post summarizes the concern bluntly: if the AI can see a file on your device, then a copy may exist on the provider’s servers. If that interpretation reflects real-world behavior, it raises immediate questions for anyone handling sensitive source code, proprietary data, credentials, customer information, or internal documents on a development machine.
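The researcher’s point is less about any specific upload routine than about scope: whatever the assistant can read is, in principle, available to be transmitted. As a rough illustration of that scope, and assuming only standard Node.js file APIs, the sketch below enumerates everything readable from a working directory and highlights paths that commonly hold secrets. It does not upload anything and is not modeled on the leaked code.

```typescript
// Hypothetical sketch of the scope concern, not of any actual upload logic:
// list every file an assistant with unrestricted read access could see.

import { readdirSync, statSync } from "node:fs";
import { join } from "node:path";

function listReadableFiles(dir: string, collected: string[] = []): string[] {
  for (const entry of readdirSync(dir)) {
    // Skip bulky tool directories to keep the walk fast; a real agent might not.
    if (entry === "node_modules" || entry === ".git") continue;
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) {
      listReadableFiles(full, collected);
    } else {
      collected.push(full);
    }
  }
  return collected;
}

const exposed = listReadableFiles(process.cwd());
// ".env", SSH keys, and credential files are just common examples of sensitive content.
const sensitive = exposed.filter((p) => /\.env$|id_rsa|credentials/i.test(p));

console.log(`${exposed.length} files readable from the working directory`);
console.log("Potentially sensitive paths:", sensitive);
```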
Taken together, these points have prompted critics to frame the situation as a breach of trust. The post claims that multiple independent analyses paint a picture of software that doesn’t just assist with programming, but also tracks user emotion, minimizes disclosure of AI involvement, and potentially captures large portions of a developer’s local environment. Against that backdrop, the sweeping DMCA campaign is portrayed not merely as copyright enforcement, but as a potential attempt to prevent deeper inspection and public discussion.
The post also points to a stark example of why these capabilities matter: security researcher Nicholas Carlini reportedly demonstrated the power of Claude Code by using it in an attack workflow that compromised the FreeBSD operating system in about four hours. Whether readers interpret that as a warning about the dangers of highly capable AI tools, or as evidence of how fast offensive security can move with automation, it underscores the stakes of giving an AI assistant broad access and discretion.
For developers, security teams, and companies evaluating AI coding assistants, the controversy highlights several practical questions that aren’t going away: What exactly is being collected during use? Which files can the assistant access by default? When is data uploaded, what is stored, and for how long? How transparent is the system about AI-generated contributions? And how much autonomy should any coding agent have before it must stop and ask for explicit permission?
As the fallout continues, the Claude Code leak may leave a lasting mark on how developers view Anthropic’s tooling—especially if the company can’t convincingly address concerns about privacy, transparency, and guardrails.