From aefbd3c8df29404216eccece8dc293de8b6d5143 Mon Sep 17 00:00:00 2001
From: Emma Marshall <55526386+e-marshall@users.noreply.github.com>
Date: Fri, 8 Dec 2023 15:54:26 -0700
Subject: [PATCH] Add tidying material (#229)

* move data tidying to fundamentals and add pages from separate tidying
  jupyter book

* move data tidying from fundamentals to intermediate

* text edits to intro, fix velociy example link

* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci

* Update 05.5_scipy_talk.md

  fix spelling mistake

* chaneg data_cleaning.md loc

* fix toc to have data tidying in intermediate

* Update _toc.yml

  fix datacleaning.md filename

* Update _toc.yml

  path fixes

* rename data tidying dir

* Update _toc.yml

  fix dirpath

* Update _toc.yml

  fix data cleaning paths

* Update 05.4_contributing.md

  fix header

* Update 05.2_examples.md

  fix ase example link

* Update 05.1_intro.md

  add missing link ref

* Update _toc.yml

  Co-authored-by: Deepak Cherian

* Update intermediate/data_cleaning/05.1_intro.md

  Co-authored-by: Deepak Cherian

* Update intermediate/data_cleaning/05.5_scipy_talk.md

  Co-authored-by: Deepak Cherian

* Update intermediate/data_cleaning/05.1_intro.md

  Co-authored-by: Deepak Cherian

* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci

* Update 05.1_intro.md

  add note wtih link to examples in intro

* Update _toc.yml

  remove duplicate intro data cleaning page

* Update intermediate/data_cleaning/05.5_scipy_talk.md

  Co-authored-by: Deepak Cherian

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Deepak Cherian
---
 _toc.yml                                      |  7 +-
 intermediate/data_cleaning/05.1_intro.md      | 83 +++++++++++++++++++
 intermediate/data_cleaning/05.2_examples.md   | 16 ++++
 .../data_cleaning/05.3_ice_velocity.ipynb     |  0
 .../data_cleaning/05.4_contributing.md        |  5 ++
 intermediate/data_cleaning/05.5_scipy_talk.md | 13 +++
 .../data_cleaning/05_data_cleaning.md         |  0
 7 files changed, 122 insertions(+), 2 deletions(-)
 create mode 100644 intermediate/data_cleaning/05.1_intro.md
 create mode 100644 intermediate/data_cleaning/05.2_examples.md
 rename data_cleaning/ice_velocity.ipynb => intermediate/data_cleaning/05.3_ice_velocity.ipynb (100%)
 create mode 100644 intermediate/data_cleaning/05.4_contributing.md
 create mode 100644 intermediate/data_cleaning/05.5_scipy_talk.md
 rename data_cleaning/data_cleaning.md => intermediate/data_cleaning/05_data_cleaning.md (100%)

diff --git a/_toc.yml b/_toc.yml
index d7be07c1..aa80237a 100644
--- a/_toc.yml
+++ b/_toc.yml
@@ -46,9 +46,12 @@ parts:
       - file: intermediate/xarray_ecosystem
       - file: intermediate/hvplot
       - file: intermediate/cmip6-cloud
-      - file: data_cleaning/data_cleaning.md
+      - file: intermediate/data_cleaning/05.1_intro.md
         sections:
-          - file: data_cleaning/ice_velocity
+          - file: intermediate/data_cleaning/05.2_examples.md
+          - file: intermediate/data_cleaning/05.3_ice_velocity
+          - file: intermediate/data_cleaning/05.4_contributing.md
+          - file: intermediate/data_cleaning/05.5_scipy_talk.md
 
   - caption: Advanced
     chapters:
diff --git a/intermediate/data_cleaning/05.1_intro.md b/intermediate/data_cleaning/05.1_intro.md
new file mode 100644
index 00000000..c3ccfd43
--- /dev/null
+++ b/intermediate/data_cleaning/05.1_intro.md
@@ -0,0 +1,83 @@
+# Data Tidying
+
+Array data represented by Xarray objects are often multivariate, multi-dimensional, and complex. Part of the beauty of Xarray is that it is adaptable and scalable enough to represent a large number of data structures. However, this flexibility can also make it difficult (especially for newer users) to arrive at a workable structure that best suits one's analytical needs.
+
+```{seealso}
+Look for examples [here](05.2_examples.md).
+```
+
+This project is motivated by community experience that the hardest part of learning and teaching Xarray is often conceptual: knowing how best to structure data as Xarray objects. We hope to leverage the experiences of Xarray and geospatial data users to arrive at a unifying definition of 'tidy' data in this context, and at best practices for 'tidying' geospatial raster data represented by Xarray objects.
+
+This page discusses common data 'tidying' steps and presents principles to keep in mind when organizing data in Xarray. We also point out helpful extensions that simplify and automate this process for specific dataset types such as satellite imagery.
+
+A great first step is familiarizing yourself with the [terminology](https://docs.xarray.dev/en/stable/user-guide/terminology.html) used in the Xarray ecosystem.
+
+## A brief primer on tidy data
+
+Tidy data is a concept developed by Hadley Wickham for tabular datasets in the R programming language. Many resources comprehensively explain it and the ecosystem of tools built upon it. Below is a very brief explanation:
+
+**Data tidying** is the process of structuring datasets to facilitate analysis. Wickham writes: "...tidy datasets are all alike, but every messy dataset is messy in its own way. Tidy datasets provide a standardized way to link the structure of a dataset (its physical layout) with its semantics (its meaning)" (Wickham, 2014).
+
+### Tidy data principles for tabular datasets
+
+[Tidy data](https://vita.had.co.nz/papers/tidy-data.pdf) is a set of principles to guide structuring tabular data for analysis.
+
+{attribution="Wickham, 2014"}
+> "Tidy datasets are all alike, but every messy dataset is messy in its own way."
+
+Wickham defines three core principles of tidy data for tabular datasets. They are:
+
+1. Each variable forms a column
+2. Each observation forms a row
+3. Each type of observational unit forms a table
+
+## Imagining a 'tidy data' framework for gridded datasets
+
+### Common use-case: Manipulating individual observations to an x-y-time datacube
+
+Data downloaded or accessed from DAACs and other providers is often (for good reason) separated into temporal observations or spatial subsets. This minimizes the services that must be provided for different datasets and allows users to access just the material they need. However, most workflows involve some sort of spatial and/or temporal investigation of an observable, which usually requires the analyst to arrange individual files into spatial mosaics and/or temporal cubes. In addition to duplicating effort, these steps introduce decision points that can be stumbling blocks for newer users. We hope a tidy framework for Xarray will streamline the process of preparing data for analysis by providing specific expectations of what 'tidied' datasets look like, as well as common patterns and tools for arriving at a tidy state.
+
+## Tidy data principles for Xarray data structures
+
+These are guidelines to keep in mind while you are organizing your data. For detailed definitions of the terms mentioned below (and more), check out Xarray's [Terminology page](https://docs.xarray.dev/en/stable/user-guide/terminology.html).
+
+**1. Dimensions**
+
+- Minimize the number of dimensional coordinates
+
+**2. Coordinates**
+
+- Non-dimensional coordinates can be numerous. Each should exist along one or multiple dimensions
+
+**3. Data Variables**
+
+- Data variables should be observables rather than contextual. Each should exist along one or multiple dimensions.
+
+**4. Contextual information (metadata)**
+
+- Metadata should only be stored as an attribute if it is static along the dimensions to which it is applied.
+- If metadata is dynamic, it should be stored as a coordinate variable.
+- Metadata `attrs` should be added so that the dataset is self-describing (following CF conventions)
+
+**5. Variable and attribute naming**
+
+- **Wherever possible, use CF conventions for naming**
+- Variable names should be descriptive
+- Variable names should not contain information that belongs in a dimension or coordinate (i.e. a variable name should be reduced to only the observable the variable describes)
+
+**6. Make use of & work within the framework of other tools**
+
+- Specification systems such as [CF](https://cfconventions.org/) and [STAC](https://stacspec.org/en), and related tools such as [Open Data Cube](https://www.opendatacube.org/), [PySTAC](https://pystac.readthedocs.io/en/stable/), [cf_xarray](https://cf-xarray.readthedocs.io/en/latest/), [stackstac](https://stackstac.readthedocs.io/en/latest/) and more make tidying possible and smoother, especially with large, cloud-optimized datasets.
+
+## Other guidelines and rules of thumb
+
+- Avoid storing important data in filenames
+- Non-descriptive variable names can create and perpetuate confusion
+- Missing coordinate information makes datasets harder to use
+- Elements of a dataset's 'shape'/structure can sometimes be embedded in variable names; this will complicate subsequent analysis
+
+## Contributing
+
+We would love your help and engagement on this project! If you have a dataset that you've worked with that felt particularly messy, or one with steps you find yourself thinking back to as you work with new datasets, consider submitting it as an example! If you have input on tidy principles, please feel free to raise an issue.
diff --git a/intermediate/data_cleaning/05.2_examples.md b/intermediate/data_cleaning/05.2_examples.md
new file mode 100644
index 00000000..3abdb943
--- /dev/null
+++ b/intermediate/data_cleaning/05.2_examples.md
@@ -0,0 +1,16 @@
+# Examples
+
+This page contains examples of 'tidying' datasets. If you have an example you'd like to submit, or an example of an anti-pattern, please raise an issue!
+
+## 1. Aquarius
+
+This is an example of tidying a dataset composed of locally downloaded files. Aquarius is a sea surface salinity dataset produced by NASA and accessed as network Common Data Form (NetCDF) files.
+You can find this example [here](https://gist.github.com/dcherian/66269bc2b36c2bc427897590d08472d7). This example focuses on data access steps and organizing data into a workable data cube.
+
+## 2. ASE Ice Velocity
+
+Already integrated into the Xarray tutorial, this example uses an ice velocity dataset derived from synthetic aperture radar imagery. You can find it [here](05.3_ice_velocity.ipynb). This example focuses on data access steps and organizing data into a workable data cube.
+
+## 3. Harmonized Landsat-Sentinel
+
+This [example](https://nbviewer.org/gist/scottyhq/efd583d66999ce8f6e8bcefa81545b8d) features cloud-optimized data that does not need to be downloaded locally. Here, packages such as [`odc-stac`](https://github.com/opendatacube/odc-stac) are used to accomplish much of the initial tidying (assembling an x, y, time cube). However, this example shows that additional formatting is frequently required to make a dataset analysis-ready.
diff --git a/data_cleaning/ice_velocity.ipynb b/intermediate/data_cleaning/05.3_ice_velocity.ipynb
similarity index 100%
rename from data_cleaning/ice_velocity.ipynb
rename to intermediate/data_cleaning/05.3_ice_velocity.ipynb
diff --git a/intermediate/data_cleaning/05.4_contributing.md b/intermediate/data_cleaning/05.4_contributing.md
new file mode 100644
index 00000000..aa47b7d9
--- /dev/null
+++ b/intermediate/data_cleaning/05.4_contributing.md
@@ -0,0 +1,5 @@
+# Contributing
+
+This project is an evolving community effort. **We want to hear from you!** Many workflows involve some version of the examples discussed here. The solutions you've developed in your work could help future users and help the community move toward more established norms around tidy data. Please consider submitting any examples you may have. You can create an issue [here](https://github.com/e-marshall/tidy-xarray/issues/new?assignees=&labels=&projects=&template=data-tidying-example-template.md&title=). If you have any questions or topics you'd like to discuss, please don't hesitate to create an issue on GitHub.
+
+_note: issue template has some errors currently, need to fix_
diff --git a/intermediate/data_cleaning/05.5_scipy_talk.md b/intermediate/data_cleaning/05.5_scipy_talk.md
new file mode 100644
index 00000000..d1cfb076
--- /dev/null
+++ b/intermediate/data_cleaning/05.5_scipy_talk.md
@@ -0,0 +1,13 @@
+# Presentations
+
+## SciPy 2023
+
+This project was initially presented at the 2023 SciPy conference in Austin, TX. You can check out the slides and a recording of the presentation below.
+
+### Slides
+
+The presentation slides are available through the [2023 SciPy Conference Proceedings](https://conference.scipy.org/proceedings/scipy2023/slides.html) and can be downloaded [here](https://zenodo.org/records/8221167).
+
+### Recording
+
+A recording of the presentation is available [here](https://www.youtube.com/watch?v=KZlG1im088s).
diff --git a/data_cleaning/data_cleaning.md b/intermediate/data_cleaning/05_data_cleaning.md
similarity index 100%
rename from data_cleaning/data_cleaning.md
rename to intermediate/data_cleaning/05_data_cleaning.md
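
The per-observation tidying workflow the added pages describe (move information out of filenames into coordinates, use descriptive CF-style variable names, store static metadata in `attrs`, then assemble an x-y-time cube) can be sketched in a few lines of Xarray. This is a minimal illustration, not code from the patch: the variable names (`VX`/`vx`), dates, and attribute values are invented for demonstration, and it assumes `numpy` and `xarray` are installed.

```python
import numpy as np
import xarray as xr

def tidy_single_observation(ds: xr.Dataset, date: str) -> xr.Dataset:
    """Prepare one per-date dataset for concatenation along time."""
    # 1. Move the acquisition date out of the filename into a coordinate.
    ds = ds.expand_dims(time=[np.datetime64(date, "ns")])
    # 2. Use descriptive, CF-style variable names.
    ds = ds.rename({"VX": "vx", "VY": "vy"})
    # 3. Store static metadata in attrs so the dataset is self-describing.
    ds["vx"].attrs.update(standard_name="land_ice_surface_x_velocity", units="m year-1")
    ds["vy"].attrs.update(standard_name="land_ice_surface_y_velocity", units="m year-1")
    return ds

# Stand-in for individually downloaded files: one Dataset per acquisition date.
rng = np.random.default_rng(0)
raw = {
    date: xr.Dataset(
        {"VX": (("y", "x"), rng.random((3, 4))), "VY": (("y", "x"), rng.random((3, 4)))},
        coords={"x": np.arange(4), "y": np.arange(3)},
    )
    for date in ["2020-01-01", "2020-02-01"]
}

# Assemble the individual observations into an x-y-time cube.
cube = xr.concat(
    [tidy_single_observation(ds, date) for date, ds in raw.items()], dim="time"
).sortby("time")
```

From here, CF-aware tooling such as `cf_xarray` can key off the `standard_name` attributes rather than whatever the provider happened to call each variable.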