How Surge AI Helps NYU Study the Impact of Social Media


The Center for Social Media and Politics (CSMaP) is an academic research institute dedicated to studying social media's impact on politics, policy, and democracy.

Maggie Macdonald is a second-year postdoc at CSMaP. She studies the online behavior of American political figures and how they leverage social media to advance their policy goals. As part of her research into the online behavior of elites during the 2020-21 Georgia runoff elections, Maggie and CSMaP used the Surge AI platform and workforce to label a large dataset of tweets.

Georgia on Our Minds

If you lived in the United States last December (and had an internet connection), you probably remember the campaigns leading up to the Georgia runoff elections. Maybe you even tweeted about the election or retweeted one of the candidates (some highlights for us were Ossoff accusing Perdue of insider trading and Warnock's beagle).

But unless you lived in the Peach State, were one of the many people from out of state who donated to the campaigns, or were Andrew Yang, your participation in the election probably started and ended on social media.

One of the many tweets that Maggie and her team at CSMaP studied. The message here is framed around national policy: 'vote in Georgia to flip the Senate.'

So the question is: did national attention influence the outcome of the election? Or did Georgians vote while the rest of the country yelled itself hoarse from outside the ballot box?

In late December, when the campaigns were reaching their fever pitch, a team of researchers at NYU's CSMaP released a data report that sought to measure exactly which issues mattered to Georgian voters. The team, composed of second-year postdoc Maggie Macdonald, co-director Jonathan Nagler, professor Joshua Tucker, and professor Richard Bonneau, was interested in the following four questions:

  • Were Georgians discussing the elections in nationalized or strategic terms? And did they mention national Republican or Democratic figures when discussing the candidates?
  • How did social media shape the conversation around the election, and what role did attack ads play?
  • Was there variation across ideological, ethnic, and gender lines in these behaviors?
  • Did the four candidates effectively reach their audience?

“This was an attempt to understand what regular people are saying. We wanted to know what regular people in Georgia were tweeting about, in an election that has huge implications on their life and on the country.” -Maggie Macdonald

Using Tweets to Measure Issue Discussion

CSMaP was uniquely poised to explore the difference between what Georgians and the rest of the country were tweeting during the election: they maintain a random collection of geographically located Twitter accounts across the country. From this collection, they could compile a dataset of tweets from Georgia users that referenced the election, mentioned one of the four candidates, or used a hashtag associated with the election (a minimal sketch of this kind of filtering follows the lists below). These tweets were then sorted into four categories:

  • Substantive policy areas
  • Mentions of claims made in attack ads
  • Mentions of national Democratic politicians
  • Mentions of national Republican politicians

Within the substantive policy category, they examined tweets on:

  • The economy
  • The Covid-19 pandemic
  • Education
  • Racial justice
  • Law and order
  • Health care
  • Abortion
  • The environment
  • Immigration
  • LGBT issues
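
To make that pipeline concrete, here is a minimal Python sketch of keyword-based filtering and bucketing along these lines. This is not CSMaP's actual methodology; the keyword lists, function names, and sample tweets are all illustrative assumptions.

```python
# Illustrative sketch only: these keyword lists and names are assumptions,
# not CSMaP's actual classification pipeline.

ELECTION_TERMS = {"ossoff", "perdue", "warnock", "loeffler", "#gasen", "#gapol", "runoff"}

POLICY_AREAS = {
    "economy": {"economy", "jobs", "stimulus"},
    "covid": {"covid", "pandemic", "vaccine"},
    "health care": {"health care", "healthcare", "medicare"},
    # ...the remaining substantive areas would follow the same pattern
}

def mentions_election(text: str) -> bool:
    """True if a tweet references the runoff, a candidate, or an election hashtag."""
    lowered = text.lower()
    return any(term in lowered for term in ELECTION_TERMS)

def tag_policy_areas(text: str) -> list[str]:
    """Return every substantive policy area whose keywords appear in the tweet."""
    lowered = text.lower()
    return [area for area, terms in POLICY_AREAS.items()
            if any(term in lowered for term in terms)]

tweets = [
    "Vote @ossoff and @ReverendWarnock to protect health care! #gasen",
    "Loved the beagle ad. Also, the stimulus checks matter. #gapol",
]
election_tweets = [t for t in tweets if mentions_election(t)]
for t in election_tweets:
    print(tag_policy_areas(t), "->", t)
```

Of course, keyword matching alone misses sarcasm and context, which is exactly why categories like these need human judgment; that's where the labeling work described below comes in.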

The team's report found that: 

"The narrative that (Georgian) voters are viewing this as a nationalized election may be false: most voters are not mentioning any national party figures when tweeting about each of the senate candidates." -Maggie

You can read CSMaP's full report here.

Expanding the Project

Maggie and the NYU team have been working to expand the report into an academic paper, which will study what both Georgians and the rest of America were saying about the election on Twitter. This means they have significantly expanded the number of tweets they need to label.

For the initial discussion report, CSMaP had exclusively engaged undergrad research assistants to annotate their tweets. This approach worked for the smaller dataset of Georgian tweets, but with the larger dataset, unpredictable student availability was causing delays:

“We had academic deadlines that we were trying to reach, and my lab manager suggested that I use Surge as an option since Surge had helped NYU label other datasets before.
So I started with 500 tweets, three labelers each. My main questions were: how user-friendly is this for me? How long does it take? How accurate is it? And overall: is this easier than hiring undergraduate labelers?
It was much quicker: in four hours, we finished the project that would have taken the undergraduates months to complete.”
-Maggie

Surge AI’s platform and workforce helped Maggie and her lab instantly launch her project — just by uploading a CSV of data — without needing to spin up her own labeling tools or recruit and manage more undergraduates herself.

An example of a labeling task from Maggie’s project.
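
That pilot of "500 tweets, three labelers each" implies some way of reconciling the three judgments per tweet. Here is a short sketch of one standard approach, majority vote; the field names are illustrative, not Surge AI's actual export schema.

```python
from collections import Counter

# Sketch of collapsing three independent labels per tweet into one label
# by majority vote. Field names ("tweet_id", "label") are illustrative.

raw_labels = [
    {"tweet_id": 1, "label": "economy"},
    {"tweet_id": 1, "label": "economy"},
    {"tweet_id": 1, "label": "covid"},
    {"tweet_id": 2, "label": "health care"},
    {"tweet_id": 2, "label": "health care"},
    {"tweet_id": 2, "label": "health care"},
]

def majority_vote(labels):
    """Group labels by tweet and keep the most common answer for each."""
    by_tweet = {}
    for row in labels:
        by_tweet.setdefault(row["tweet_id"], []).append(row["label"])
    return {tid: Counter(votes).most_common(1)[0][0]
            for tid, votes in by_tweet.items()}

print(majority_vote(raw_labels))  # {1: 'economy', 2: 'health care'}
```

Collecting several labels per item and taking the majority is a common way to smooth over individual labeler mistakes before analysis.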

Quality Control That Meets Peer Review Standards

Labeling platforms often suffer from quality control issues, so Maggie knew it was important to consider the possibility of labeling mistakes in her dataset — especially since, as a researcher, her work needs to stand up to the scrutiny of peer review.

Our top priority at Surge AI has always been labeling quality. We specialize in the realm of language and social media, where datasets are full of context and nuance like community-specific slang, sarcasm, hidden meanings, and more.

Here are just a few of the ways we've built quality into our platform:      

  • Trustworthy workers. Before we allow labelers onto our platform, they must pass a series of tests that verify their skills across a wide variety of areas. This means that only the highest-quality workers make it onto the platform in the first place: many of them are Ivy League graduates, Ph.D. researchers, or teachers looking to earn extra money on the side.
  • Gold standards. Users can create gold standards (questions with known answers) and randomly insert them into projects to measure worker accuracy and remove workers who fall below the quality threshold (see the sketch below).
  • Custom labeling teams. Projects often require special skills or knowledge; Maggie’s, for example, required labelers familiar with Georgia politics and social media trends. Our platform allows you to build and train your labeling teams so you can get super high-quality labels. You can also bring in an existing labeling force onto our platform.
  • Demographic diversity. Know who is behind your labels! Researchers can filter for demographic targets to understand and mitigate bias.

Can you spot the sarcasm?
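
To illustrate the gold-standards bullet above, here is a minimal sketch of scoring workers against questions with known answers and flagging anyone below a quality threshold. The data layout and the 0.8 cutoff are assumptions for illustration, not Surge AI's internal implementation.

```python
# Sketch of the gold-standard idea: compare each worker's answers on
# questions with known labels, then flag workers below a quality bar.

GOLD = {"t1": "sarcastic", "t2": "not_sarcastic", "t3": "sarcastic"}

responses = [
    {"worker": "w1", "tweet_id": "t1", "label": "sarcastic"},
    {"worker": "w1", "tweet_id": "t2", "label": "not_sarcastic"},
    {"worker": "w2", "tweet_id": "t1", "label": "not_sarcastic"},
    {"worker": "w2", "tweet_id": "t3", "label": "not_sarcastic"},
]

def worker_accuracy(responses, gold):
    """Fraction of gold questions each worker answered correctly."""
    correct, total = {}, {}
    for r in responses:
        if r["tweet_id"] in gold:
            total[r["worker"]] = total.get(r["worker"], 0) + 1
            if r["label"] == gold[r["tweet_id"]]:
                correct[r["worker"]] = correct.get(r["worker"], 0) + 1
    return {w: correct.get(w, 0) / total[w] for w in total}

scores = worker_accuracy(responses, GOLD)
flagged = [w for w, acc in scores.items() if acc < 0.8]
print(scores, flagged)  # {'w1': 1.0, 'w2': 0.0} ['w2']
```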


--

Good data leads to better models and better science. We’re excited to help researchers like Maggie in their efforts to understand social media’s influence on our world. If you’re interested in our platform, schedule a demo or reach out to us here; we’d love to chat.