The Hidden Patterns in 10 Billion Content Moderation Decisions
What the EU's Digital Services Act reveals about how different types of platforms really work
The biggest news in content moderation in recent years has been the EU's Digital Services Act (DSA). The regulation requires tech platforms operating in the EU to report every content moderation decision they make to a public transparency database, providing an unprecedented window into how the internet really works.
In just a six-month period, over 10 billion decisions were reported across 154 platforms. There’s a trove of insights buried within that data, but the DSA transparency database isn’t really designed for discovery.
I started digging into the data by grouping the 154 platforms into 17 intuitive categories to map the trends by vertical/industry. There are countless ways to categorize these platforms, but after a few different iterations, I stopped fiddling and settled on the following:
Note that these are only the platforms falling within the regulatory scope of the EU. If we were to include the Americas or Asia, this list would not only grow but would also have a very different distribution of platform categories.
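To make the grouping concrete, here is a minimal sketch of what that mapping looks like in code. The vertical names come from this piece; the handful of platform entries and the `platforms.csv` file are placeholders for illustration, not the full 154-platform mapping.

```python
import pandas as pd

# Illustrative slice of the platform-to-vertical mapping. The real mapping
# covers all 154 platforms across 17 categories; only a few entries are shown.
CATEGORY_MAP = {
    "X": "Social Media",
    "Amazon Store": "E-Commerce & Marketplaces",
    "Udemy": "Education",
    # ... remaining platforms ...
}


def categorize(platform_name: str) -> str:
    """Return the vertical for a platform, defaulting to 'Other'."""
    return CATEGORY_MAP.get(platform_name, "Other")


# `platforms.csv` is a hypothetical export of the 154 platform names
# pulled from the DSA transparency database.
platforms = pd.read_csv("platforms.csv")
platforms["vertical"] = platforms["name"].map(categorize)
print(platforms["vertical"].value_counts())
```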
Then I built a query tool that makes it easier to see patterns across detection systems, use of automation, and raw counts of enforcement decisions by platform category. You can try it at https://www.kamanda.org/dsa (the password is ‘dsa2025’):
I analyzed some of the query results spanning June 2024 to June 2025. After dumping the results into a spreadsheet, I discovered a few things.
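Under the hood, the query tool is just a front end over aggregations like the sketch below. Treat it as an assumed outline: `statements.csv` stands in for a local dump of statements of reasons, and the column names (`platform_name`, `category`, `automated_detection`, `automated_decision`) are shorthand for the fields the transparency database exposes, not necessarily its exact schema.

```python
import pandas as pd

# Hypothetical local dump of statements of reasons for June 2024 - June 2025.
# Column names approximate the transparency database's fields; adjust them to
# match whatever your export actually contains.
sor = pd.read_csv("statements.csv")

# Attach the vertical using the `categorize` helper from the previous snippet.
sor["vertical"] = sor["platform_name"].map(categorize)

# Raw enforcement counts by vertical and reported violation category.
counts = (
    sor.groupby(["vertical", "category"])
       .size()
       .rename("decisions")
       .sort_values(ascending=False)
)
print(counts.head(15))

# How automated the final decision was, per vertical (e.g. fully automated,
# partially automated, not automated).
automation = (
    sor.groupby("vertical")["automated_decision"]
       .value_counts(normalize=True)
       .rename("share")
)
print(automation)
```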
Social Media: Policing speech is still subjective
Very little of it is actually illegal: Of the 428M enforcement decisions for “Illegal Or Harmful Speech,” only 31,500, well under 0.01%, were actually flagged as illegal. That means more than 99.99% of social media content moderation targets not illegal content, but platform rules and community standards.
We still need humans to make decisions: Nearly 655 million enforcement actions on social media platforms, 43% of all enforcement decisions, involved humans at least in part. In some cases, automation detected the content and a human made the final call; in others, humans handled both detection and the enforcement decision.
Self-harm content is hard to detect: Only 235K instances of self-harm were detected automatically, while 12.5M decisions were logged as "partially automated + not detected," which suggests this is the subject area that demands the most nuanced human judgment.
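Two quick checks on the numbers above, using the same assumed dump and column names as before. The first is plain arithmetic (31,500 out of 428M lands well under 0.01%); the second is the kind of detection-versus-automation cross-tab where combinations like "partially automated + not detected" show up. The `social_media_self_harm.csv` slice is hypothetical.

```python
import pandas as pd

# Sanity check on the headline figure: 31,500 illegal flags out of 428M
# "Illegal Or Harmful Speech" decisions.
print(f"{31_500 / 428_000_000:.4%}")  # ~0.0074%, i.e. well under 0.01%

# Hypothetical pre-filtered slice: social media statements in the self-harm
# category. Column names and label values are assumptions, as before.
self_harm = pd.read_csv("social_media_self_harm.csv")

# Detection method vs. degree of automation in the final decision, the view in
# which "partially automated + not detected" combinations stand out.
print(pd.crosstab(self_harm["automated_detection"], self_harm["automated_decision"]))
```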
E-Commerce & Marketplaces: Striving to keep you safe from harmful products
In stark contrast to social media, the number one issue for e-commerce platforms is “Unsafe and Illegal Products,” which accounts for 55% of all the enforcement reasons reported. Over 205M decisions were made to prevent the sale of these products, a share of enforcement roughly double that of social media’s top issue.
Additionally, there were about 20M decisions related to IP violations, suggesting that counterfeit products, trademark violations, and unauthorized brand usage play a big role in e-commerce moderation.
By comparison, “Violence” (~1,000 decisions) and “Illegal/Harmful Speech” (69K) barely register.
Education: Welcome to IP Infringement 101
Intellectual property issues completely dominate enforcement on education platforms. Although the platforms in this category report far fewer violations than social media and e-commerce, 60% of their reported violations are for “Intellectual Property Infringements.” Textbooks and course materials are expensive and, as such, ripe for piracy. This may point to a broader tension within academic publishing between democratizing learning and enforcing copyright.
31% of violations are for “Scams and Fraud,” which is surprising. But there’s plenty of evidence that academia is susceptible to fake degrees, fraudulent courses, academic paper mills, fake tutoring services, and scholarship scams.
Adult Content: There are rules here too
“Scope of Platform Service” is the top moderation category (38% of enforcement decisions). This may reveal that even within adult content, there are strict boundaries. Adult platforms are carefully curated experiences with specific rules about what type of adult content fits their brand/business model.
Protecting minors is the top legal priority. 85% of all illegal content decisions are for “Protection of Minors.”
Most of the enforcement is manual. 76% of all decisions involve human judgment, echoing some of the trends we saw in social media enforcement. It should come as no surprise that context matters here. Perhaps platforms have determined that automated enforcement is not yet capable of the nuanced cultural, legal, and ethical judgments adult content requires.
The Bigger Picture
The DSA has created the most comprehensive view of content moderation we've ever had. But the real insights aren't in the raw numbers; they're in the patterns that emerge when you can compare how different types of platforms handle different types of problems.
Content moderation challenges are universal. Whether you're running a social network, an online marketplace, or a travel booking site, you're dealing with the full spectrum of human behavior.
The platforms that succeed won't be the ones that eliminate all bad content (which is impossible), but the ones that build systems nuanced enough to handle the complexity of human expression at scale.




