Commit

wip

floriscalkoen committed Dec 12, 2024
1 parent f315a09 commit c2ae9c1
Showing 12 changed files with 362 additions and 2,111 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -57,7 +57,7 @@ az storage blob download-batch \

 The Coastal Grid dataset provides a global tiling system for coastal analytics. It
 supports scalable data processing workflows by offering coastal tiles at varying zoom
-levels (5, 6, 7) and buffer sizes (500 m, 1000 m, 2000 m, 5000 m, 10000 m, 15000 m).
+levels (5, 6, 7, 8, 9, 10) and buffer sizes (500 m, 1000 m, 2000 m, 5000 m, 10000 m, 15000 m).

## Usage

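The README change above expands the published zoom levels from three to six. Under the file-name convention used elsewhere in this repository (`coastal_grid_z{zoom}_{buffer}.parquet`), the full release matrix can be enumerated; the helper below is an illustrative sketch, not part of the repository:

```python
# Sketch: enumerate the tile files implied by the README's zoom levels and
# buffer sizes. The name pattern mirrors the regex in make_coastal_grid.py;
# treat the helper itself as a hypothetical convenience, not a published API.
from itertools import product

ZOOM_LEVELS = [5, 6, 7, 8, 9, 10]
BUFFER_SIZES = ["500m", "1000m", "2000m", "5000m", "10000m", "15000m"]


def coastal_grid_filenames():
    """Return the expected Parquet file name for every zoom/buffer combination."""
    return [
        f"coastal_grid_z{zoom}_{buffer}.parquet"
        for zoom, buffer in product(ZOOM_LEVELS, BUFFER_SIZES)
    ]


filenames = coastal_grid_filenames()
print(len(filenames))  # 6 zoom levels x 6 buffer sizes -> 36 files
```

With six zoom levels and six buffer sizes, the dataset now spans 36 tile files per release.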
1 change: 1 addition & 0 deletions ci/envs/312-coastal-blue.yaml
@@ -170,6 +170,7 @@ dependencies:
 - rio-tiler
 - rioxarray
 - spatialpandas
+- spyndex
 - stac-vrt
 - stackstac
 - stactools
1 change: 1 addition & 0 deletions ci/envs/312-coastal-full.yaml
@@ -170,6 +170,7 @@ dependencies:
 - rio-tiler
 - rioxarray
 - spatialpandas
+- spyndex
 - stac-vrt
 - stackstac
 - stactools
26 changes: 0 additions & 26 deletions scripts/python/make_coastal_grid.py
@@ -165,32 +165,6 @@ def load_data():
         raise


-def parse_file_paths(files_to_process):
-    """
-    Parse file paths to extract zoom levels and buffer sizes.
-
-    Args:
-        files_to_process (set): Set of file paths to process.
-
-    Returns:
-        dict: A dictionary grouped by zoom levels, where each key is a zoom level
-        and the value is a list of buffer sizes.
-    """
-    pattern = re.compile(r"coastal_grid_z(\d+)_(\d+m)\.parquet")
-    zoom_to_buffers = defaultdict(list)
-
-    for file_path in files_to_process:
-        file_name = file_path.split("/")[-1]  # Extract the file name
-        match = pattern.match(file_name)
-        if match:
-            zoom, buffer_size = match.groups()
-            zoom_to_buffers[int(zoom)].append(buffer_size)
-        else:
-            logger.error(f"Failed to parse file name {file_name}. Skipping.")
-
-    return zoom_to_buffers
-
-
 VALID_BUFFER_SIZES = ["500m", "1000m", "2000m", "5000m", "10000m", "15000m"]
 COLUMN_ORDER = [
     "coastal_grid:id",
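For reference, the deleted `parse_file_paths` helper grouped buffer sizes by zoom level via a file-name regex. A self-contained, runnable version of the same logic (stdlib only, with the module's `logger` replaced by a plain `print` since the logging setup lives outside this diff) behaves like this:

```python
# Standalone sketch of the removed helper: extract (zoom, buffer) from tile
# file names and group buffers by zoom level. The example paths are
# hypothetical; only the regex pattern comes from the original script.
import re
from collections import defaultdict

PATTERN = re.compile(r"coastal_grid_z(\d+)_(\d+m)\.parquet")


def parse_file_paths(files_to_process):
    """Group buffer sizes by zoom level, parsed from tile file names."""
    zoom_to_buffers = defaultdict(list)
    for file_path in files_to_process:
        file_name = file_path.split("/")[-1]  # Extract the file name
        match = PATTERN.match(file_name)
        if match:
            zoom, buffer_size = match.groups()
            zoom_to_buffers[int(zoom)].append(buffer_size)
        else:
            print(f"Failed to parse file name {file_name}. Skipping.")
    return zoom_to_buffers


result = parse_file_paths({
    "release/coastal_grid_z6_500m.parquet",
    "release/coastal_grid_z6_5000m.parquet",
})
print(dict(result))  # zoom 6 maps to both buffer sizes
```

Note that iterating a set gives no stable order, so the buffer list under each zoom key is unordered.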
4 changes: 2 additions & 2 deletions scripts/python/stac_coastal_grid.py
@@ -28,7 +28,7 @@
 storage_options = {"account_name": STORAGE_ACCOUNT_NAME, "credential": sas_token}

 # Container and URI configuration
-VERSION = "2024-12-10"
+VERSION = "2024-12-11"
 DATETIME_STAC_CREATED = datetime.datetime.now(datetime.UTC)
 DATETIME_DATA_CREATED = datetime.datetime(2023, 2, 9)
 CONTAINER_NAME = "coastal-grid"
@@ -44,7 +44,7 @@
 DESCRIPTION = """
 The Coastal Grid dataset provides a global tiling system for geospatial analytics in coastal areas.
 It supports scalable data processing workflows by offering structured grids at varying zoom levels
-(5, 6, 7) and buffer sizes (500m, 1000m, 2000m, 5000m, 10000m, 15000m).
+(5, 6, 7, 8, 9, 10) and buffer sizes (500m, 1000m, 2000m, 5000m, 10000m, 15000m).
 Each tile contains information on intersecting countries, continents, and Sentinel-2 MGRS tiles
 as nested JSON lists. The dataset is particularly suited for applications requiring global coastal