Materials for my Analyzing Census Data with Pandas workshop for PyCon 2019.
This tutorial is meant to be followed on mybinder.org, but if you choose to download the materials and follow along locally, here are the instructions.
If you know git, the easiest way to get a copy of this repository is to clone it:

```bash
git clone https://github.com/chekos/analyzing-census-data.git
```
You can also download it straight from GitHub.
Only 2 packages are essential for this workshop:
- Pandas
- Jupyter (notebooks or lab)
You can either `pip` install them:

```bash
pip install pandas jupyterlab
```
or use `conda`:

```bash
conda install -c conda-forge pandas jupyterlab
```
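To confirm the installation worked, you can check that both packages import and print their versions. This is just a quick sanity check, not part of the workshop materials:

```python
# Quick sanity check that the two workshop dependencies are installed.
import pandas as pd
import jupyterlab

print("pandas:", pd.__version__)
print("jupyterlab:", jupyterlab.__version__)
```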
Once you have the materials and the necessary Python packages, head over to the exercises directory and launch JupyterLab:

```bash
cd analyzing-census-data
cd exercises
jupyter lab
```
This tutorial will guide you through a typical data analysis project using Census data acquired from IPUMS. It is split into two notebooks:
In the first notebook you will:
- Work with compressed data in pandas.
- Retrieve high-level descriptive analytics of your data.
- Drop columns.
- Slice data (boolean indexing).
- Work with categorical data.
- Work with weighted data.
- Use Python's `pathlib` library, making your code more reproducible across platforms.
- Develop a reproducible data prep workflow for future projects.
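If you'd like a preview of what that prep work looks like in pandas, here is a minimal sketch. The file path and column names (`STATEFIP`, `AGE`, `SEX`, `SERIAL`, `PERWT`) are placeholders standing in for an IPUMS extract, not the actual exercise data:

```python
from pathlib import Path

import pandas as pd

# pathlib keeps the file path portable across operating systems.
data_path = Path("data") / "census_extract.csv.gz"  # hypothetical file name

# pandas can read gzip-compressed CSVs directly.
df = pd.read_csv(data_path, compression="gzip")

# High-level descriptive look at the data (these display nicely in a notebook).
df.info()
df.describe()

# Drop columns you don't need.
df = df.drop(columns=["SERIAL"])  # hypothetical column

# Boolean indexing: keep only rows meeting a condition, e.g. adults.
adults = df[df["AGE"] >= 18].copy()

# Treat coded variables as categoricals.
adults["SEX"] = adults["SEX"].astype("category")

# Weighted data: person weights (PERWT) sum to population estimates.
population_estimate = adults["PERWT"].sum()
```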
On top of that, in the second notebook you will:
- Aggregate data.
- Learn about `.groupby()`.
- Learn about cross-sections with `.xs()`.
- Learn about `pivot_table`s and `crosstab`s.
- Develop a reproducible data analysis workflow for future projects.
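And here is a rough sketch of the aggregation tools covered in the second notebook, using a toy stand-in for the prepared data (column names and values are placeholders, not the workshop's IPUMS extract):

```python
import pandas as pd

# Toy stand-in for the prepared data from the first notebook.
adults = pd.DataFrame({
    "STATEFIP": [6, 6, 48, 48],           # hypothetical state FIPS codes
    "SEX": ["Male", "Female", "Male", "Female"],
    "PERWT": [120.0, 95.0, 80.0, 110.0],  # hypothetical person weights
})

# .groupby(): weighted population estimate by state.
by_state = adults.groupby("STATEFIP")["PERWT"].sum()

# .xs(): pull a cross-section out of a multi-indexed groupby result.
by_state_sex = adults.groupby(["STATEFIP", "SEX"])["PERWT"].sum()
california = by_state_sex.xs(6, level="STATEFIP")  # 6 is California's FIPS code

# pivot_table: states as rows, sex as columns, summed weights as values.
table = adults.pivot_table(values="PERWT", index="STATEFIP", columns="SEX", aggfunc="sum")

# crosstab: tabulate one variable against another (here, weighted by PERWT).
counts = pd.crosstab(adults["STATEFIP"], adults["SEX"], values=adults["PERWT"], aggfunc="sum")

print(by_state, california, table, counts, sep="\n\n")
```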