Image: the Bluesky icon.

Bluesky Publishes First Transparency Report Amid Surge in User Complaints and Legal Requests

Bluesky has published its first full transparency report, offering a detailed look at how the platform handled safety, moderation, and compliance over the past year. The report goes beyond simple content removals, covering everything from age-assurance efforts and automated labeling to the detection of coordinated influence operations and the growing volume of legal requests.

The timing makes sense: Bluesky is scaling fast. In 2025, the decentralized social network grew nearly 60%, rising from 25.9 million users to 41.2 million. That total includes people using Bluesky’s own infrastructure as well as accounts running on independent servers connected through the AT Protocol.

User activity grew just as quickly. People published 1.41 billion posts during the year, which Bluesky says represents 61% of all posts ever made on the platform. Media sharing surged, too: 235 million posts included media, accounting for 62% of all media posts shared on Bluesky to date. In other words, 2025 wasn’t just a growth year for signups—it was the year a large portion of Bluesky’s entire historical content was created.

A major focus of the report is moderation pressure, and the numbers show a platform dealing with a rapidly expanding workload. User-submitted moderation reports increased 54% year over year, climbing from 6.48 million in 2024 to 9.97 million in 2025. Bluesky notes that this rise closely matches its overall user growth for the same period, suggesting that reporting volume scaled with adoption rather than spiking disproportionately.
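
To put those two growth rates side by side, here is a quick arithmetic check using only the figures quoted above; it is a reader's sketch for verifying the comparison, not anything taken from the report itself.

```python
# Sanity check on the year-over-year figures cited above.
# All inputs come straight from the numbers quoted in this article.

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

users_2024, users_2025 = 25.9e6, 41.2e6        # total users
reports_2024, reports_2025 = 6.48e6, 9.97e6    # user-submitted moderation reports

print(f"User growth:   {pct_change(users_2024, users_2025):.1f}%")    # ~59.1%
print(f"Report growth: {pct_change(reports_2024, reports_2025):.1f}%") # ~53.9%
```

The user base grew about 59% while reports grew about 54%, which is the basis for the claim that reporting volume tracked adoption rather than outpacing it.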

About 3% of users, roughly 1.24 million people, submitted reports in 2025. The most common reason was “misleading” content (a category that includes spam), which made up 43.73% of all reports, followed by harassment at 19.93% and sexual content at 13.54%. Another 22.14% landed in an “other” bucket for reports that didn’t fit neatly into those categories or into smaller ones such as violence, child safety, rule-breaking, and self-harm.

Spam was a standout within the misleading category. Out of 4.36 million misleading reports overall, 2.49 million were spam-related. In the harassment category, Bluesky says hate speech was the most frequently cited specific reason within a 1.99 million total, with around 55,400 reports. Other notable harassment-related issues included targeted harassment (about 42,520 reports), trolling (29,500 reports), and doxxing (about 3,170 reports). At the same time, Bluesky points out that many harassment reports sit in a gray area of antisocial behavior: content that feels toxic or unpleasant but doesn’t always fit cleanly into narrower definitions like hate speech.

For sexual content, Bluesky says most reports weren’t about the existence of adult content itself, but about mislabeling—cases where adult material wasn’t properly tagged with metadata. Those labels matter on Bluesky because they power user-controlled moderation tools, letting individuals filter what they want to see. The report also cites smaller report volumes for nonconsensual intimate imagery (about 7,520), abuse content (about 6,120), and deepfakes (just over 2,000).
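
For readers unfamiliar with that metadata: adult content on Bluesky is meant to carry self-applied labels inside the post record itself, which clients then use to honor each viewer’s filtering settings. The sketch below shows roughly what such a record looks like under the AT Protocol’s self-label convention; treat the exact fields and values as illustrative rather than as the canonical schema.

```python
# Illustrative only: a simplified AT Protocol post record carrying a self-label,
# so clients can filter it according to each viewer's adult-content settings.
# Field names follow the protocol's self-label convention; values are examples.
post_record = {
    "$type": "app.bsky.feed.post",
    "text": "example post with adult media attached",
    "createdAt": "2025-06-01T12:00:00Z",
    "labels": {
        "$type": "com.atproto.label.defs#selfLabels",
        "values": [{"val": "porn"}],   # self-applied adult-content label
    },
}
```

When that metadata is missing or wrong, the content still reaches people who have asked not to see it, which is why mislabeling dominates the sexual-content report category.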

Violence-related reports were comparatively low at 24,670 in total, including threats or incitement (around 10,170), glorification of violence (6,630), and extremist content (3,230).

User reports weren’t the only signal Bluesky relied on. Its automated systems flagged 2.54 million potential violations. And one of the more notable takeaways in the report is what happened after Bluesky introduced a feature designed to reduce the reach of toxic replies. By identifying potentially harmful responses and putting them behind an extra click (reducing immediate visibility), Bluesky says daily reports of antisocial behavior dropped 79%. The platform also saw steady improvement throughout the year: reports per 1,000 monthly active users fell 50.9% from January to December.
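
The per-1,000-MAU figure is a simple normalization: divide a month’s report count by that month’s active users in thousands. The report’s underlying monthly numbers aren’t given here, so the inputs below are hypothetical placeholders chosen only to illustrate the calculation, not Bluesky’s actual data.

```python
# Minimal sketch of the per-1,000-MAU normalization behind the 50.9% figure.
# The monthly inputs are hypothetical placeholders, not the report's data.

def reports_per_1k_mau(reports: int, mau: int) -> float:
    """Moderation reports per 1,000 monthly active users."""
    return reports / (mau / 1_000)

jan = reports_per_1k_mau(reports=820_000, mau=26_500_000)  # hypothetical January
dec = reports_per_1k_mau(reports=640_000, mau=42_000_000)  # hypothetical December

drop = (jan - dec) / jan * 100
print(f"Jan: {jan:.1f} per 1,000 MAU, Dec: {dec:.1f} per 1,000 MAU, change: -{drop:.1f}%")
```

Normalizing this way lets a fast-growing network show whether reports are falling relative to its audience even as the raw count rises.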

Beyond everyday moderation, Bluesky says it removed 3,619 accounts tied to suspected influence operations, with the activity most likely originating from Russia. This is part of the report’s wider focus on platform integrity, especially as Bluesky becomes a larger target for coordinated manipulation.

The report also suggests Bluesky leaned into tougher enforcement in 2025. The company removed 2.44 million items over the year, including both accounts and content. That represents a major step up compared with the prior year’s figures, which included 66,308 accounts removed, 35,842 accounts taken down via automated tooling, 6,334 records removed by moderators, and 282 records removed by automated systems.

Enforcement didn’t stop at removals. Bluesky issued 3,192 temporary suspensions in 2025 and carried out 14,659 permanent removals tied specifically to ban evasion. The company says most permanent suspensions targeted inauthentic behavior, spam networks, and impersonation.

Even with the increased takedown activity, the report makes it clear that Bluesky often prefers labeling over outright removal when possible. In 2025, it applied 16.49 million labels to content—up 200% year over year—while account takedowns grew 104% from 1.02 million to 2.08 million. The majority of labeling was tied to adult or suggestive content and nudity, reinforcing the company’s direction: give users more control through labeling and filters while reserving removals for clear violations and abuse patterns.

Finally, Bluesky reported a sharp rise in outside legal pressure. Legal requests from law enforcement, regulators, and legal representatives jumped more than fivefold, to 1,470 in 2025 from 238 in 2024. For a fast-growing social network, that’s another sign it’s moving into a new phase, one where scaling trust and safety isn’t just about moderation queues, but also about compliance, transparency, and responding to formal demands.

With this first comprehensive transparency report, Bluesky is positioning itself as a platform willing to show its work: how much content is being created, what users are reporting, where automation steps in, how enforcement is changing, and how the company balances removals with labeling. As its user base expands, these metrics will be closely watched as an indicator of whether Bluesky can keep the network open and expressive while still limiting spam, abuse, and coordinated manipulation.