Free Toxicity Dataset (September 2022)

Scott Heiner
Sep 17, 2022
A team of content moderation Surgers, labeling social media toxicity

tl;dr: Download our latest toxicity dataset here!

Wouldn’t it be great if the Internet were less toxic? From Twitter threads to Facebook comments, the number of places on the web where mean, insulting content proliferates seems endless. As a result, we released a free toxicity dataset earlier this year to help machine learning teams train their own toxicity classifiers and make the Internet a healthier place.

Yet language is constantly changing. 12 months ago, for example, the Internet was filled with social media posts using “let’s go Brandon” as a political slogan to mean “fuck Joe Biden”.

Recently, however, Biden supporters have embraced and reclaimed these anti-Biden references through a new meme of their own: Dark Brandon.

The problem: a machine learning model trained only on older datasets might start mistakenly classifying all tweets with “Brandon” as toxic and negative in sentiment!

This is the way language works. It’s constantly changing, and the same memes used as insults one day can be reclaimed as symbols of solidarity the next. That’s why our team updated the toxicity dataset with a sample of fresh social media posts from this past month. Download the updated dataset here!

The Toxicity Dataset

The updated toxicity dataset contains 500 toxic and 500 non-toxic comments across social media sites like Twitter, Facebook, YouTube, Instagram, Reddit, and LinkedIn.
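If you want to sanity-check the label balance after downloading, a minimal sketch with pandas might look like the following. Note that the column names (`text`, `is_toxic`) and label values (`Toxic` / `Not Toxic`) here are assumptions for illustration; check the actual CSV header of the file you download. The inline sample stands in for the real file:

```python
import pandas as pd
from io import StringIO

# Hypothetical two-row sample mimicking the dataset's assumed schema;
# in practice you would pass the downloaded CSV's path to read_csv.
sample_csv = StringIO(
    "text,is_toxic\n"
    '"Have a great day!",Not Toxic\n'
    '"You are an idiot.",Toxic\n'
)

df = pd.read_csv(sample_csv)

# Count rows per label to confirm the toxic / non-toxic split.
counts = df["is_toxic"].value_counts().to_dict()
print(counts)
```

On the full dataset, the same `value_counts()` check should show the 500/500 split described above.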

How we labeled it

Data Labeling Workforce

Labeling toxic speech can be surprisingly tricky. In order to do a good job, you need to understand memes, slang, reclaimed speech, political nuances, and more. For example, if you’re not familiar with US politics and the state of the economy, would you understand what “Stock market in freefall, food up 11%, CPI up 8.3 % . Let's go Brandon !!!!!” means?

That’s why having data labelers with the right skills and backgrounds is essential to creating quality datasets.

For this project, we used our content moderation team of Surgers, who work on many of our other toxicity, hate speech, misinformation, and spam projects.

More Surge AI Datasets

Want to build a custom content moderation dataset? Sign up and create a new labeling project in seconds, or reach out for a platform walk-through.

Interested in learning more about content moderation? Check out our other content moderation blog posts:

Scott Heiner

Scott runs Business Development and Operations at Surge AI, helping customers get the high-quality human-powered data they need to train and measure their AI. Before joining Surge, he led operations and marketing teams in the media industry.
