At present, most journalists treat social sources like they would any other: individual anecdotes and single points of contact. But to do so with a handful of tweets and Instagram posts is to ignore the potential of hundreds of millions of others.
Many stories lie dormant in the vast amounts of data produced by everyday consumers. Here's a guide and toolbox that may help you. Below you'll find a number of scripts developed to mine data from APIs, personal archives, and the live web.
Slides that explain the workflow can be found here. I'm currently writing more thorough resources on the subject of social media data mining. Feel free to reach out with questions on Twitter: @lamthuyvo!
This is a growing list of scripts we've put together to make social data mining easier.
There are broadly three different ways to harvest data from the social web:
- APIs
- Personal archives
- Scraping
The data that official channels like API streams provide is very limited. Despite harboring warehouses of data on consumers' behavior, social media companies only provide a sliver of it through their APIs (for Facebook, developers can only get data for public pages and groups; for Twitter, access is often restricted to a set number of tweets from a user's timeline or to a set time frame for search).
Scripts and instructions related to APIs can be found in the `01-apis` directory of this repository.
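For instance, here is a minimal sketch (not one of the repository's scripts) of what pulling tweets from a user's timeline with the `tweepy` library might look like; all credential values and the account handle are placeholders:

```python
# A minimal sketch of pulling a user's recent tweets with tweepy,
# assuming you already have Twitter API credentials (see the setup
# section below). All values here are placeholders.
import tweepy

CONSUMER_KEY = "YOUR_CONSUMER_KEY"
CONSUMER_SECRET = "YOUR_CONSUMER_SECRET"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
ACCESS_TOKEN_SECRET = "YOUR_ACCESS_TOKEN_SECRET"

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)

# Twitter caps how far back a timeline request can reach, so this
# returns at most the most recent tweets the API allows.
for tweet in api.user_timeline(screen_name="BuzzFeed", count=200):
    print(tweet.created_at, tweet.text)
```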
There are ways for users of social media platforms to request and download archives of their own online persona and behavior. Some services like Facebook or Twitter will allow users to download a history of the data that constitutes their posts, their messaging, or their profile photos.
Scripts and instructions related to personal archives can be found in the `02-personal-archives` directory of this repository.
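As a hedged example, here is a minimal sketch of reading through such an archive, assuming it contains a `tweets.csv` file with `timestamp` and `text` columns (older Twitter archives did; the exact format varies by platform and changes over time):

```python
# A minimal sketch of reading tweets out of a downloaded Twitter
# archive, assuming the archive contains a tweets.csv file with
# "timestamp" and "text" columns. Adjust the filename and column
# names to match what your platform actually ships.
import csv

with open("tweets.csv", encoding="utf-8") as infile:
    reader = csv.DictReader(infile)
    for row in reader:
        print(row["timestamp"], row["text"])
```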
While there's plenty of social media data on display on the sites you browse, extracting it through scraping is often against a platform's terms of service. Scraping a social media platform can get users booted from a service and potentially even result in a lawsuit.
If you still end up wanting to harvest data from the social web this way, you can find related information in the `03-scraping` directory of this repository.
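For illustration only, here is a minimal scraping sketch using `requests` and `BeautifulSoup` (both installed in the setup below); the URL is a stand-in, and you should check a site's terms of service and robots.txt before scraping it:

```python
# A minimal scraping sketch: fetch a page and list its links.
# The URL is a placeholder; check a site's terms of service
# and robots.txt before scraping it for real.
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com")
soup = BeautifulSoup(response.text, "html.parser")

# Print the text and destination of every link on the page.
for link in soup.find_all("a"):
    print(link.get_text(strip=True), link.get("href"))
```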
Below is a set of instructions you can follow to get your machine ready to run any of the Python scripts in this repository. While Python is one of the most powerful languages for data gathering and analysis, it can take a few tries to get it installed and running properly. If you're a beginner, don't despair: these growing pains are normal and can vary from machine to machine. We promise the payoff is worth it!
- If you don't already have Python installed, start by getting Python up and running. There are two Python versions: Python 2 and Python 3. Please install Python 3, as it handles modern Internet language and characters (think emoji and other non-ASCII text) better. Also have `git` installed. A helpful guide to getting a brand new machine set up can be found here, courtesy of NPR's Visuals Team.
- You should also make sure you have `pip` installed.
- You need to get developer OAuth credentials from the social media platforms you want to tap into. OAuth credentials are like an ID and password (often referred to as an app ID and secret, respectively) that you create for an app or a script to access the data stream that a social media company provides. This data stream, also known as a company's Application Programming Interface, or API, is often accessible using these credentials through a URL (for example, this is what one of these queries could look like: https://graph.facebook.com/v2.6/BuzzFeed/posts/?fields=message/&access_token=YOURID|YOURSECRET). Here's where you can get them:
  - Twitter: https://apps.twitter.com/
  - Facebook: https://developers.facebook.com/
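To make that concrete, here is a minimal sketch of making the example Facebook query above from Python with the `requests` library; `YOUR_ID` and `YOUR_SECRET` stand in for your own app credentials:

```python
# A minimal sketch of calling the Facebook Graph API with requests,
# mirroring the example query above. YOUR_ID and YOUR_SECRET are
# placeholders for your own app ID and secret.
import requests

APP_ID = "YOUR_ID"
APP_SECRET = "YOUR_SECRET"

url = "https://graph.facebook.com/v2.6/BuzzFeed/posts/"
params = {
    "fields": "message",
    "access_token": "{}|{}".format(APP_ID, APP_SECRET),
}

response = requests.get(url, params=params)
for post in response.json().get("data", []):
    print(post.get("message"))
```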
- Open up your Terminal and go to the folder where you want to clone this repository of code using the `cd` bash command:
```
git clone https://github.com/lamthuyvo/social-media-data-scripts.git
cd social-media-data-scripts
```
- Then install all the dependencies, i.e. the Python libraries we are using for these scripts, by running the following command:
```
pip install -r requirements.txt
```
or
```
sudo pip install -r requirements.txt
```
If you have problems with installing the dependencies through `requirements.txt`, you can install them one by one:
```
pip install requests
pip install tweepy --ignore-installed six
pip install beautifulsoup4
```
or
```
sudo pip install requests
sudo pip install tweepy --ignore-installed six
sudo pip install beautifulsoup4
```
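To confirm the installs worked, one quick sanity check (a sketch, not part of this repository) is importing each library from Python:

```python
# Quick sanity check that the three dependencies installed correctly.
# If any import fails, rerun the pip command for that library.
import requests
import tweepy
import bs4

print("requests", requests.__version__)
print("tweepy", tweepy.__version__)
print("beautifulsoup4", bs4.__version__)
```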
Hooray! You're ready to get your data now. We have created a directory of scripts for each data source.
You can follow the directions for each script in its sub-folder:
- To gather data from APIs, you can use the scripts in the `01-apis` directory.
- To gather data from personal archives, you can use the scripts in the `02-personal-archives` directory.
- To gather data from live web sites, you can use the scripts and instructions in the `03-scraping` directory.
There are numerous useful resources and tools out on the web for social media data gathering. Below is an incomplete list that I'll continue to update.
- Data & Society: Media, Technology and Society
- Sockpuppets, Secessionists, and Breitbart
- The Atlantic: How the Like Button Ruined the Internet
- Linguistic data analysis of 3 billion Reddit comments
- Gun emoji pairings
- How Russian & Alt-Right Twitter Accounts Worked Together to Skew the Narrative About Berkeley
- Click fraud
- Your data is being manipulated
- Smartphone addiction dystopia
- Delete: The Virtue of Forgetting in the Digital Age by Viktor Mayer-Schönberger
- Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O'Neil
- This Is Why We Can't Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture by Whitney Phillips
- Dataclysm by Christian Rudder
- The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think by Eli Pariser
- Realtalk about fake news
- Limited individual attention and online virality of low-quality information
- Competition among memes in a world with limited attention
- On confirmation bias
- Mining the Social Web (O'Reilly)
- The Digital Methods Initiative (University of Amsterdam)
- TrackerTracker - to extract widgets, analytics and more general trackers embedded in sites
- Netvizz - Facebook data extraction tool for groups, pages and search
- TCAT - tool to collect and analyze Twitter data
- Issue Crawler and Hyphe - for hyperlink analysis, to see relations between websites based on how they link to one another