All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- The `static-analysis` GitHub Actions workflow now uses `ruff` rather than `flake8` for linting.
- Add a new https://hyp3-opera-disp-sandbox.asf.alaska.edu deployment with an `OPERA_DISP_TMS` job type for generating tilesets for the OPERA displacement tool.
- Upgrade to Amazon Linux 2023 AMI for Earthdata Cloud deployments
- All failed jobs now have a `processing_times` value of `null`.
- Resolve a regression introduced by the previous release (v8.0.0) in which a processing step could report a negative processing time if the underlying AWS Batch job had a failed attempt that did not include a `StartedAt` field. Fixes #2485
- Upgrade from Flask v2.2.5 to v3.0.3. Fixes #2491
- Specify our custom JSON encoder by subclassing `flask.json.provider.JSONProvider`. See pallets/flask#4692
- A job step can now be applied to every item in a list using a new `map: for <item> in <items>` syntax (a sketch of this syntax follows this list). For example, given a job spec with a `granules` parameter, a step that includes a `map: for granule in granules` field is applied to each item in the `granules` list and can refer to `Ref::granule` within its `command` field.
- If a job contains a `map` step, the processing time value for that step (in the `processing_times` list in the job's API response) is a sub-list of processing times for the step's iterations, in the same order as the items in the input list.
- A new `SRG_TIME_SERIES` job type has been added to the `hyp3-lavas` and `hyp3-lavas-test` deployments. This workflow uses the new `map` syntax described above to produce a GSLC for each level-0 Sentinel-1 granule passed via the `granules` parameter and then produces a time series product from the GSLCs. See the HyP3 SRG plugin.
- The `SRG_GSLC` job type now includes parameter validation.
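As a hedged illustration of the `map` syntax described above, here is a minimal job spec fragment; the job name, step name, and overall layout are invented for this example and are not taken from a real HyP3 job spec:

```yaml
EXAMPLE_JOB:                         # hypothetical job type
  parameters:
    granules:
      api_schema:
        type: array
        items:
          type: string
  steps:
    - name: PROCESS_GRANULE          # hypothetical step name
      map: for granule in granules   # run this step once per item in granules
      command:
        - --granule
        - Ref::granule               # resolves to the current list item
```

For a job submitted with two granules, the `processing_times` entry for this step would then be a two-item sub-list, e.g. `[[1042.7, 998.3]]`, in the same order as the input granules.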
- Changes to custom compute environments:
  - Custom compute environments are now applied to individual job steps rather than to entire jobs. The `compute_environment` field is now provided at the step level rather than at the top level of the job spec.
  - If the value of the `compute_environment` field is `Default`, then the step uses the deployment's default compute environment. Otherwise, the value must be the name of a custom compute environment defined in `job_spec/config/compute_environments.yml`.
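A minimal sketch of the step-level `compute_environment` field described above; the step names and the custom environment name are invented for illustration:

```yaml
steps:
  - name: PREPROCESS                            # hypothetical step name
    compute_environment: Default                # the deployment's default environment
  - name: PROCESS_ON_GPU                        # hypothetical step name
    compute_environment: ExampleGpuEnvironment  # must be defined in job_spec/config/compute_environments.yml
```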
- Other changes to the job spec syntax:
  - The `tasks` field has been renamed to `steps`.
  - Job parameters no longer contain a top-level `default` field. The `default` field within each parameter's `api_schema` mapping is still supported.
  - Job specs no longer explicitly define a `bucket_prefix` parameter. Instead, `bucket_prefix` is automatically defined and can still be referenced as `Ref::bucket_prefix` within each step's `command` field.
- The `hyp3-its-live` deployment now uses a greater variety of `r6id[n]` instances.
- The `INSAR_ISCE_BURST` job type is now available in the `hyp3-avo`, `hyp3-bgc-engineering`, `hyp3-cargill`, and `hyp3-carter` deployments.
- The `AUTORIFT` job type is now available in the `hyp3-bgc-engineering`, `hyp3-cargill`, and `hyp3-carter` deployments.
- Added a new `INSAR_ISCE_MULTI_BURST` job type for running multi-burst InSAR. Currently, this job type is restricted to a special `hyp3-multi-burst-sandbox` deployment for HyP3 operators. However, this is an important step toward eventually making multi-burst InSAR available for general users.
- Job validator functions now accept two parameters: the job dictionary and the granule metadata.
- Granule metadata validation now supports `reference` and `secondary` job parameters in addition to the existing `granules` parameter.
- Burst InSAR validators now support multi-burst jobs.
- Replaced the step function's `INSPECT_MEMORY_REQUIREMENTS` step with a new `SET_BATCH_OVERRIDES` step, which calls a Lambda function to dynamically calculate Batch container overrides based on job type and parameters.
- Added missing `cloudformation:DeleteStack` permission to the CloudFormation deployment role in `ASF-deployment-ci-cf.yml`.
- Deleted the `hyp3-pdc` deployment in preparation for archiving the hyp3-flood-monitoring project.
- Copied CloudFormation permissions from the user role to the CloudFormation deployment role in `ASF-deployment-ci-cf.yml` to address a breaking AWS IAM change when deploying nested stacks via a CloudFormation role.
- The `SRG_GSLC` job now takes in a `--bounds` argument that determines the extent of the DEM used in back projection.
- The `ARIA_AUTORIFT.yml` job spec now specifies the optimum number of OpenMP threads and uses a dedicated compute environment with `r6id[n]` spot instances.
- The `AUTORIFT_ITS_LIVE.yml` job spec now specifies the optimum number of OpenMP threads.
- The `INSAR_ISCE.yml` job spec now reserves 16 GB of memory for running the DockerizedTopsApp task.
- The `hyp3-a19-jpl-test`, `hyp3-a19-jpl`, `hyp3-tibet-jpl`, and `hyp3-nisar-jpl` ARIA deployments now use on-demand `m6id[n]` instances.
- The `hyp3-its-live-test` deployment now uses a greater variety of `r6id[n]` instances.
- Upgraded to flask-cors v5.0.0 from v4.0.1. Resolves CVE-2024-6221.
- Allow overriding certain AWS Batch compute environment parameters (including instance types and AMI) within a job spec.
- Allow job spec tasks to require GPU resources.
- The `SRG_GSLC` job type now runs within a GPU environment.
- Revert ARIA HyP3 deployments back to the C-instance family, including the job spec CLI parameter `omp-num-threads`, to ensure multiple jobs fit on a single instance.
- Deployments with `INSAR_ISCE.yml` job specs will now use a dedicated compute environment with on-demand instances instead of spot instances for INSAR_ISCE jobs.
- Renamed the `SRG_GSLC_CPU` job to `SRG_GSLC`.
- Changed the `SRG_GSLC` job to use the `hyp3-srg` image rather than `hyp3-back-projection`, since the repository was renamed.
- The `ESA_USERNAME` and `ESA_PASSWORD` secrets have been removed from the job specs that no longer require them (those that use the `hyp3-gamma`, `hyp3-isce2`, `hyp3-autorift`, or `hyp3-back-projection` images).
- `ARIA_AUTORIFT.yml` job spec for Solid Earth offset tracking in the ARIA JPL deployments
- Increased throughput for `hyp3-a19-jpl` (0 -> 4,000 vCPUs) to support continued processing of ARIA GUNW products.
- The `hyp3-a19-jpl` and `hyp3-nisar-jpl` deployments now use the `m6id[n]` instance families to reduce the high number of spot interruptions seen with the `c6id` instance family.
- Increased available vCPUs for DAAC deployments.
- The `INSAR_ISCE_TEST.yml` job spec, which only differed from `INSAR_ISCE.yml` in support of different instance families, has been removed now that all ARIA JPL deployments are using the same instance families again.
- Reduced throughput for `hyp3-its-live` to prevent Sentinel-2 processing from being rate limited (10,000 -> 2,000 vCPUs).
- The `SRG_GSLC_CPU` job spec
- The `SRG_GSLC_CPU` job type to the `hyp3-lavas` and `hyp3-lavas-test` HyP3 deployments
- The `hyp3-tibet-jpl` deployment now uses the `m6id[n]` instance families and includes the `ARIA_RAIDER` job spec
- The `hyp3-lavas` and `hyp3-lavas-test` enterprise HyP3 deployments.
This release adds support for access codes. If a user specifies an active access code when they apply for HyP3 access, they will be granted automatic approval without the need for a HyP3 operator to review their application.
If you operate a HyP3 deployment, you can create a new access code by adding an item to the `AccessCodesTable` DynamoDB table for your deployment, with any string for the `access_code` attribute and an ISO-formatted UTC timestamp for the `start_date` and `end_date` attributes, e.g. `2024-06-01T00:00:00+00:00` and `2024-06-02T00:00:00+00:00` for an access code that becomes active on June 1, 2024 and expires on June 2, 2024.
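For example, an item for the access code described above could carry attributes like these (shown as YAML for readability; the code value is invented):

```yaml
access_code: example-code-2024           # any string
start_date: '2024-06-01T00:00:00+00:00'  # ISO-formatted UTC activation time
end_date: '2024-06-02T00:00:00+00:00'    # ISO-formatted UTC expiration time
```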
- The `PATCH /user` endpoint now includes an optional `access_code` parameter and returns a `403` response if given an invalid or inactive access code.
- Turn off hyp3 ACCESS spend by zeroing the max vCPUs in the associated deployment.
- Reduce product lifetime in hyp3 ACCESS deployment to 14 days.
- Added missing `requests` dependency to `lib/dynamo/setup.py`. Fixes #2269.
This release includes changes to support an upcoming user whitelisting feature. A new user will be required to apply for HyP3 access and will not be able to submit jobs until an operator has manually reviewed and approved the application. As of this release, all new and existing users are automatically approved without being required to submit an application, but this will change in the near future.
- Changing a user's application status (e.g. to approve or reject a new user) requires manually updating the value of the `application_status` field in the Users table.
- The response for both `/user` endpoints now automatically includes all Users table fields except those prefixed by an underscore (`_`).
- The following manual updates must be made to the Users table upon deployment of this release:
  - Add field `application_status` with the appropriate value for each user.
  - Rename field `month_of_last_credits_reset` to `_month_of_last_credit_reset`.
  - Rename field `notes` to `_notes`.
- A new `PATCH /user` endpoint with a single `use_case` parameter allows the user to submit an application or update a pending application. The structure for a successful response is the same as for `GET /user`.
- A new `default_application_status` deployment parameter specifies the default status for new user applications. The parameter has been set to `APPROVED` for all deployments.
- The `POST /jobs` endpoint now returns a `403` response if the user has not been approved.
- The response schema for the `GET /user` endpoint now includes:
  - A required `application_status` field representing the status of the user's application: `NOT_STARTED`, `PENDING`, `APPROVED`, or `REJECTED`.
  - An optional `use_case` field containing the use case submitted with the user's application.
  - An optional `credits_per_month` field representing the user's monthly credit allotment, if different from the deployment default.
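Putting those fields together, a `GET /user` response might now look something like this sketch (values invented; only the fields described above are confirmed by these entries):

```yaml
user_id: example_user
application_status: APPROVED      # NOT_STARTED, PENDING, APPROVED, or REJECTED
use_case: Example use case text.  # optional
credits_per_month: 5000           # optional; present only if it differs from the deployment default
```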
- The `reset_credits_monthly` deployment parameter has been removed. Credits now reset monthly in all deployments. This only changes the behavior of the `hyp3-enterprise-test` deployment.
- Reduced `start_execution_manager` batch size from 600 jobs to 500 jobs. Fixes #2241.
- A `hyp3-its-live-test` deployment to `deploy-enterprise-test.yml` for ITS_LIVE testing in preparation for some significant ITS_LIVE project development
- A `hyp3-a19-jpl-test` deployment to `deploy-enterprise-test.yml` for ARIA testing of the `m6id[n]` instance families
- An `ARIA_RAIDER` job spec that allows RAiDER processing of a previous INSAR_ISCE job that either did not include a weather model or failed on the RAiDER step. `ARIA_RAIDER` jobs are now available in the `hyp3-a19-jpl` and `hyp3-a19-jpl-test` deployments.
- The `INSAR_ISCE_TEST.yml` job spec now only differs from `INSAR_ISCE.yml` with respect to the `++omp-num-threads` parameter, because the value is specific to a particular instance family
- Job specs are no longer required to include the `granules` parameter.
- The `AUTORIFT_ITS_LIVE_TEST.yml` job spec, which supported running test versions of the AUTORIFT jobs in the production hyp3-its-live deployment
This release marks the final transition to the new credits system. These changes apply to the production HyP3 API at https://hyp3-api.asf.alaska.edu. Read the announcement for full details.
- Each type of job now costs a different number of credits, as shown in the table here.
- Users are now given an allotment of 10,000 credits per month.
- Added a Lambda function that sets `Private DNS names enabled` to false for the VPC endpoint in EDC.
- A `publish_bucket` parameter to `AUTORIFT_ITS_LIVE` and `AUTORIFT_ITS_LIVE_TEST` that specifies whether the product should be uploaded to the ITS_LIVE open data bucket or the test bucket.
- Access key secrets to `AUTORIFT_ITS_LIVE` and `AUTORIFT_ITS_LIVE_TEST` that allow for S3 upload of products.
- Update throughput for ACCESS deployments by a factor of 4 (from 1,000 to 4,000 vCPUs).
- Reduced vCPU limits for EDC deployments from 1,500/3,000 to 1,200/2,400.
- The `disable-private-dns` Lambda function added in v4.3.2 has been removed; the underlying issue has been resolved in the Earthdata Cloud platform. Fixes #1956.
- The `/costs` API endpoint now returns a list of job cost dictionaries, instead of a dictionary of dictionaries.
  - Cost table parameters are now contained within the `parameter_value` dictionary key.
  - Cost table costs are now contained within the `cost` dictionary key.
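For illustration, the reshaped `/costs` response might contain entries like the following; the overall shape is a guess, and only the `parameter_value` and `cost` keys are confirmed by this entry:

```yaml
- job_type: EXAMPLE_JOB        # hypothetical job type
  cost_table:                  # hypothetical key name
    - parameter_value: 30.0
      cost: 1.0
    - parameter_value: 10.0
      cost: 5.0
```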
HyP3 is in the process of transitioning from a monthly job quota to a credits system. HyP3 v6.0.0 implemented the new credits system without changing the number of jobs that users can run per month. This release implements the capability to assign a different credit cost to each type of job, again without actually changing the number of jobs that users can run per month.
Beginning on April 1st, the production API at https://hyp3-api.asf.alaska.edu will assign a different cost to each type of job and users will be given an allotment of 10,000 credits per month. Visit the credits announcement page for full details.
- A `/costs` API endpoint that returns a table that can be used to look up the credit costs for different types of jobs.
- The https://hyp3-test-api.asf.alaska.edu API now implements the credit costs displayed on the credits announcement page.
- The `hyp3-a19-jpl` and `hyp3-tibet-jpl` deployments' max vCPUs have been reduced to 1,000 from 10,000 because of persistent spot interruptions.
- Upgraded to `cryptography==42.0.4`. Fixes CVE-2024-26130.
- Previously, the `job_parameters` field of the `job` object returned by the `/jobs` API endpoint only included parameters whose values were specified by the user. Now, the field also includes optional, unspecified parameters, along with their default values. This does not change how jobs are processed, but gives the user a complete report of what parameters were used to process their jobs.
- Increased maximum vCPUs from 0 to 10,000 in the hyp3-tibet-jpl deployment.
- Decreased product lifetime from 60 days to 30 days in the hyp3-tibet-jpl deployment.
HyP3's monthly quota system has been replaced by a credits system. Previously, HyP3 provided each user with a certain number of jobs per month. Now, each job costs a particular number of credits, and users spend credits when they submit jobs. This release assigns every job a cost of 1 credit, but future releases will assign a different credit cost to each job type. Additionally, the main production deployment (https://hyp3-api.asf.alaska.edu) resets each user's balance to 1,000 credits each month, effectively granting each user 1,000 jobs per month. Therefore, users should not notice any difference when ordering jobs via ASF's On Demand service at https://search.asf.alaska.edu.
- The `job` object returned by the `/jobs` API endpoint now includes a `credit_cost` attribute, which represents the job's cost in credits.
- A `DAR` tag is now included in Earthdata Cloud deployments for each S3 bucket to communicate which contain objects that are required to be encrypted at rest.
- The `quota` attribute of the `user` object returned by the `/user` API endpoint has been replaced by a `remaining_credits` attribute, which represents the user's remaining credits.
- The non-functional CloudWatch alarm for API 5xx errors has been removed from the `monitoring` module. See #2044.
- `INSAR_ISCE_BURST` jobs are now available in the azdwr-hyp3 deployment.
- Addressed breaking changes with upgrade to `moto[dynamodb]==5.0.0`
- Fix how the `INSAR_ISCE_BURST` antimeridian error message is formatted.
- A validation check for `INSAR_ISCE_BURST` that will fail if a granule crosses the antimeridian.
- Upgrade the `openapi-core`, `openapi-spec-validator`, and `jsonschema` packages to their latest versions. This is now possible thanks to the pre-release of openapi-core v0.19.0a1, which fixes python-openapi/openapi-core#662. Resolves #1193.
- The `legacy` option for the `dem_name` parameter of `RTC_GAMMA` jobs. All RTC processing will now use the Copernicus DEM.
- The description of the INSAR_ISCE_BURST job's `apply_water_mask` parameter now states that water masking happens BEFORE unwrapping.
- `output_resolution` in the `INSAR_ISCE_TEST` job spec is now correctly specified as an int instead of a number, which can be a float or an int.
- Update `INSAR_ISCE` and `INSAR_ISCE_TEST` job specs for GUNW version 3+ standard and custom products:
  - `frame_id` is now a required parameter and has no default
  - `compute_solid_earth_tide` and `estimate_ionosphere_delay` now default to `true`
  - `INSAR_ISCE_TEST` exposes custom `goldstein_filter_power`, `output_resolution`, `dense_offsets`, and `unfiltered_coherence` parameters
- Updated the `WATER_MAP` job spec to point at the HydroSAR images instead of the ASF Tools images, as the HydroSAR code is being migrated to the HydroSAR project repository.
- Reverted the new AWS Batch job retry strategy introduced in HyP3 v4.1.2. Fixes #1944
- Removed the unused `RIVER_WIDTH` job spec.
- Removed the `WATER_MAP` job spec from UAT, as it's not expected to be available in HyP3 production anytime soon.
- INSAR_ISCE_BURST job to EDC production deployment.
- Added a Lambda function that sets `Private DNS names enabled` to false for the VPC endpoint.
- The `ESA_USERNAME` and `ESA_PASSWORD` secrets have been added to all of the job specs that require them.
- The `iterative_min_size` and `minimization_metric` parameters have been moved from the `WATER_MAP_TEST` job spec to the `WATER_MAP` job spec. The default `minimization_metric` value has been changed from `fmi` to `ts`.
- The `known_water_threshold` parameter for the `WATER_MAP` job type is now nullable, with a default value of `null` instead of `30.0` percent. A water threshold is computed when the value is `null`.
- Use Amazon Linux 2023 AMI in non-Earthdata Cloud environments
- Reduced the memory reservation of some job types due to slightly less memory being available for AWS Batch jobs on the AL2023 AMI
- All deployments now use the `SPOT_PRICE_CAPACITY_OPTIMIZED` allocation strategy for AWS Batch. This includes JPL deployments, reverting the temporary change to On Demand instances in HyP3 v3.10.8
- The `WATER_MAP_TEST` job spec
- The `ami_id` for EDC platforms now uses the original AMI.
- Added `phase_filter_parameter` for the `INSAR_GAMMA` job type.
- Removed the `INSAR_GAMMA_TEST` job type from the `hyp3-avo` and `hyp3-enterprise-test` deployments, now that the `phase_filter_parameter` option is available for the `INSAR_GAMMA` job type.
- AWS Batch jobs are now retried twice, once after 10 minutes and once after 60 minutes, to reduce the number of jobs that fail due to transient errors, such as Earthdata Login and Sentinel-1 distribution outages.
- New DEM coverage map that allows COP90 tiles to fill the COP30 gaps over Azerbaijan and Armenia.
- Pinned `Werkzeug==2.3.7` in `requirements-apps-api.txt`. Mitigates #1861 pending a fix for logandk/serverless-wsgi#247
- New `parameter_file` parameter for the `AUTORIFT_ITS_LIVE` and `AUTORIFT_ITS_LIVE_TEST` job types.
- The Subscriptions feature has been removed.
  - Removed the `/subscriptions` API endpoint.
  - Removed the `subscription_id` query parameter from the `GET /jobs` API endpoint.
  - Removed the `subscription_id` field from the response body of the `GET /jobs` API endpoint.
- Reduced vCPU limits for `hyp3-tibet-jpl` to 0 from 10,000.
- The public key for the JWT auth provider is now specified as a GitHub Secret. Fixes #1765
- HyP3 deployments at JPL now use On Demand instances instead of Spot instances to prevent `INSAR_ISCE` jobs from being interrupted. This should be a temporary change.
- The `INSAR_ISCE_BURST` job type now validates that polarizations and burst IDs are the same.
- Increased vCPU limits for `hyp3-a19-jpl` and `hyp3-tibet-jpl` from 1,600 to 10,000.
- Updated INSAR_ISCE job specification for DockerizedTopsApp v0.2.4
- Added larger `c6id` instance types to hyp3-a19-jpl and hyp3-nisar-jpl deployments
- The `hyp3-edc-uat` and `hyp3-edc-prod` deployments now use the latest Earthdata Cloud AMI with additional software installed.
- Increased product lifetime for hyp3-tibet-jpl deployment from 14 days to 60 days.
- The Subscriptions feature has been deprecated and will be removed as early as `2023-09-05` (September 5, 2023). Please read our Subscriptions docs for more details and take the recommended actions to avoid data loss. You can also follow our Jupyter notebook tutorials to learn how to reproduce subscription-like behavior using the HyP3 SDK.
- Updated default public key used to verify authentication cookie
- Added `INSAR_ISCE_BURST` job type to the `hyp3-test` deployment.
- Added larger `c6id` instance types to hyp3-tibet-jpl deployment
- Set `++omp-num-threads=4` for `INSAR_ISCE_TEST` jobs
- Removed `c5d.xlarge` instance types from hyp3-tibet-jpl and hyp3-nisar-jpl deployments.
- Added `r6idn` instance types and removed `r5d` and `r5dn` instance types from most deployments.
- `PermissionsBoundaryPolicyArn` stack parameter; this setting is no longer required for Earthdata Cloud deployments
- `apply_water_mask` option for `INSAR_ISCE_BURST` jobs
- `POST /jobs` now returns HTTP 400 for Sentinel-1 Burst granules that do not exist in CMR
- `POST /jobs` now returns HTTP 400 for INSAR_ISCE_BURST jobs for burst granules that do not intersect the Copernicus GLO-30 Public DEM.
- Modified `start_execution_manager` to submit no more than 2 batches of 300 jobs, in order to reduce payload size. Fixes #1689.
- Reduced `start-execution-worker` concurrency to address AWS Batch `Too Many Requests` errors. Fixes #1676.
- Added jobs query pagination for `subscription-worker` so that all jobs will be retrieved when constructing the list of processed granules.
- Reverted `asf_search` to v6.0.2. Fixes #1673.
- Invalid `install_requires` clause in `dynamo/setup.py`. Fixes #1666.
- Added a new `hyp3-pdc` deployment.
- Added `INSAR_ISCE_BURST` job spec to the `hyp3-enterprise-test` deployment.
- Added the `S1_CORRECTION_ITS_LIVE` job spec to the `hyp3-enterprise-test` and `hyp3-its-live` deployments.
- The hyp3-autorift plugin now specifies the optimum number of OpenMP threads through the global `++omp-num-threads` argument
- Added the `WATER_MAP_TEST` job spec to the `hyp3-watermap` deployment.
- The `flood_depth_estimator` parameter in both the `WATER_MAP` and `WATER_MAP_TEST` job specs is now nullable.
- Increased the `hyp3-tibet-jpl` vCPU limit from 0 to 1600.
- The `RIVER_WIDTH` job spec from the `hyp3-streamflow` deployment.
- The `GET /jobs` endpoint now includes a `user_id` parameter, which allows retrieving jobs submitted by another user. If `user_id` is not provided, jobs are returned for the current user.
- Added the `WATER_MAP_EQ` job spec to the `hyp3-watermap` deployment.
- Added 20m resolution to the `WATER_MAP_EQ` job spec.
- Increased memory available to INSAR_GAMMA jobs in azdwr-hyp3 deployment.
- Job `status_code` field should only switch to `RUNNING` if the current value is `PENDING` (fixes #1539).
- Added `resolution=20.0` option for `RTC_GAMMA` jobs.
- Added a `WATER_MAP_EQ` job spec to the `hyp3-streamflow` and `hyp3-enterprise-test` deployments.
- Added `resolution=20.0` option for `WATER_MAP` jobs.
- Added `hyp3-carter` deployment.
- An `INSAR_GAMMA_TEST.yml` job spec has been added, exposing the adaptive phase filter parameter used when processing InSAR products. The `INSAR_GAMMA_TEST.yml` job spec has been added to the HyP3 Enterprise Test and HyP3 AVO deployments.
- Increased the `hyp3-streamflow` product lifecycle from 14 days to 90 days.
- Increased the `hyp3-streamflow` vCPU limit from 640 to 1600.
- `job_spec`s can now specify a required set of secrets and an AWS Secrets Manager Secret ARN to pull the secret values from. Notably, secrets are now externally managed and not part of the HyP3 stack.
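A hedged sketch of what such a job spec might contain; the field names below are illustrative guesses rather than the actual schema:

```yaml
EXAMPLE_JOB:                   # hypothetical job type
  required_secrets:            # hypothetical field name
    - EXAMPLE_USERNAME
    - EXAMPLE_PASSWORD
  # hypothetical field name: ARN of the externally managed Secrets Manager secret
  secret_arn: arn:aws:secretsmanager:us-west-2:123456789012:secret:example
```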
- `INSAR_ISCE_TEST` jobs now accept a `compute_solid_earth_tide` option to compute the solid earth tide ionosphere correction layer.
- `INSAR_ISCE` and `INSAR_ISCE_TEST` jobs no longer accept the unsupported `NCMR` weather model; see RAiDER#485.
- `INSAR_ISCE_TEST` jobs now accept a `frame_id` parameter. GUNW products are subset to this frame.
- `INSAR_ISCE_TEST` jobs now accept an `estimate_ionosphere_delay` option to apply ionosphere correction.
- `INSAR_ISCE_TEST` jobs now accept an `esd_coherence_threshold` parameter to specify whether or not to perform Enhanced Spectral Diversity (ESD), and what ESD coherence threshold to use.
- `INSAR_ISCE` and `INSAR_ISCE_TEST` jobs now accept all weather model parameters allowed by `RAiDER`.
- Added `hyp3-bgc-engineering` deployment.
- `WATER_MAP` and `RIVER_WIDTH` jobs are now run as a series of multiple tasks.
- The `flood_depth_estimator` parameter for `WATER_MAP` jobs is now restricted to a set of possible values.
- Changed the default value for the `flood_depth_estimator` parameter for `WATER_MAP` jobs from `iterative` to `None`. A value of `None` indicates that a flood map will not be included.
- Reduced `ITS_LIVE` product lifetime cycle from 180 days to 45 days.
- Removed the `include_flood_depth` parameter for `WATER_MAP` jobs.
- `INSAR_ISCE` and `INSAR_ISCE_TEST` jobs now accept a `weather_model` parameter to specify which weather model to use when estimating tropospheric delay data.
- Increases the memory available to `AUTORIFT` jobs for Landsat pairs
- Made `resolution=10.0` parameter option for RTC_GAMMA and WATER_MAP jobs available in all deployments
- Updated hyp3-enterprise-test, hyp3-watermap, hyp3-streamflow, and hyp3-cargill deployments to include larger EC2 instance types capable of running multiple jobs per instance.
- `INSAR_ISCE` and `INSAR_ISCE_TEST` jobs will now only accept SLC scenes with a polarization of VV or VV+VH.
- Set `++omp-num-threads 4` for RTC_GAMMA, INSAR_GAMMA, WATER_MAP, and AUTORIFT jobs to drastically reduce CPU contention when running multiple jobs on the same EC2 instance.
- Updated DAAC deployments to include larger EC2 instance types capable of running multiple jobs per instance.
- In addition to `power` and `amplitude`, `decibel` can now be provided as the `scale` for `RTC_GAMMA` jobs
- Added `lambda_logging` library for re-usable Lambda logging functionality.
- Increase vCPU and monthly budget values for test and production EDC deployments.
- Drop the `c5d` instance family due to disk space limitations for GUNW product generation in the ACCESS19 JPL deployment.
- Added hyp3-cargill deployment
- AUTORIFT jobs for Sentinel-2 scenes can now only be submitted using the ESA naming convention.
- Reduced hyp3-tibet-jpl deployment to 700 maximum VCPUs.
- Included `r5d.xlarge` EC2 instance types in most deployments to improve Spot availability
- A new `AUTORIFT_TEST` job type for the hyp3-its-live deployment running the `test` version of the container.
- Batches of step function executions are now started in parallel using a manager to launch one worker per batch of jobs (currently up to 3 batches of 300 jobs for a total of 900 jobs each time the manager runs).
- AutoRIFT jobs now allow submission of Landsat 4, 5, and 7 Collection 2 scene pairs
- Subscription handling is now parallelized using a manager to launch one worker per subscription, in order to help prevent timeouts while handling subscriptions.
- Upgraded Batch compute environments to the latest generation `r6id`/`c6id` EC2 instance types
- Added `processing_times` field to the Job schema in the API in order to support jobs with multiple processing steps.
- Removed `processing_time_in_seconds` field from the Job schema.
- New `RIVER_WIDTH` job type.
- Job specifications can now specify multiple tasks.
- `WATER_MAP` and `WATER_MAP_10M` now can run up to 10 hours before timing out.
- `WATER_MAP` and `WATER_MAP_10M` now can run up to 19,800 seconds before timing out.
- `scale-cluster` now temporarily disables the Batch Compute Environment to allow `maxvCpus` to be reduced in cases when the current desired vCpus exceeds the new target value.
- `scale-cluster` now adjusts the compute environment size based on total month-to-date spending, rather than only EC2 spending.
- Granted additional IAM permissions required by AWS Step Functions to manage AWS Batch jobs.
- Changed the Swagger UI page header and title to reflect the particular HyP3 deployment.
- The `next` URL in paginated `GET /jobs` responses will now reflect the correct API hostname and path in load balanced environments. Fixes #1071.
- Added support for updating subscription `start` and `intersectsWith` fields.
- Increased `MemorySize` for the `process-new-granules` function to improve performance when evaluating subscriptions.
- `BannedCidrBlocks` stack parameter to specify CIDR ranges that will receive HTTP 403 Forbidden responses from the API
- Granules for jobs are now validated against the updated 2021 release of Copernicus GLO-30 Public DEM coverage.
- GitHub action to scan Python dependencies and rendered CloudFormation templates for known security vulnerabilities using Snyk.
- AUTORIFT jobs can now be submitted for Sentinel-2 scenes with 25-character Earth Search names. Fixes #1022.
- `GET /jobs` requests now accept `name` parameters up to 100 characters. Fixes #1019.
- Fix how we fetch jobs that are waiting for step function execution, so that we actually start up to 400 executions at a time.
- Added default values for `logs` and `expiration_time`, which should prevent failed jobs from remaining in `RUNNING`.
- Monthly job quota checks can now be suppressed for individual users by setting `max_jobs_per_month` to `null` in the users table.
- A user can now be given a fixed Batch job priority for all jobs that they submit by setting the `priority` field in the users table.
- Handle missing log stream when uploading logs, which should prevent jobs from remaining in `RUNNING` status after failure.
- Don't write error messages to the `processing_time_in_seconds` field.
- Added `execution_started` field to the job schema to indicate whether a step function execution has been started for the job.
- Job status code doesn't switch from `PENDING` to `RUNNING` until the Batch job starts running.
- Added flood depth options to water map job (currently limited to `hyp3-test`).
- Increased job name length limit to 100 characters.
- Compute processing time when one or more Batch attempts is missing a `StartedAt` time.
- Convert floats to decimals when adding a subscription.
- Process new granules for `WATER_MAP` subscriptions.
- Step function now retries the CHECK_PROCESSING_TIME task when errors are encountered.
- Step function now retries transient `Batch.AWSBatchException` errors when submitting jobs to AWS Batch. Fixes #911.
- Expose CloudFront product download URLs for Earthdata Cloud environments via the HyP3 API.
- `OriginAccessIdentityId` stack parameter supporting content distribution via CloudFront.
- Upgraded AWS Lambda functions and GitHub Actions to Python 3.9
- Require `HttpTokens` to be consistent with EC2 instance metadata configured with Instance Metadata Service Version 2 (IMDSv2).
- Cloudformation stack parameters that are specific to Earthdata Cloud environments are now managed via Jinja templates, rather than CloudFormation conditions.
- A `JPL-public` security environment when rendering CloudFormation templates which will deploy a public bucket policy. To use this environment, the AWS S3 account-level Block All Public Access setting must have been turned off by the JPL Cloud team.
- The `JPL` security environment, when rendering CloudFormation templates, will no longer deploy a public bucket policy, as this is disallowed by default for JPL commercial cloud accounts.
- New `InstanceTypes` parameter to the CloudFormation template to specify which EC2 instance types are available to the Compute Environment
- Added `r5dn.xlarge` as an eligible instance type in most HyP3 deployments
- The `job_spec_files` positional argument to `render_cf.py` has been switched to a required `--job-spec-files` optional argument to support multiple open-ended arguments.
- Set S3 Object Ownership to `Bucket owner enforced` for all buckets so that access via ACLs is no longer supported.
- The HyP3 API is now implemented as an API Gateway REST API, supporting private API deployments.
- AutoRIFT jobs now allow submission with Landsat 9 Collection 2 granules
- Add `processing_time_in_seconds` to the `job` API schema to allow plugin developers to check processing time.
- Encrypt Earthdata username and password using AWS Secrets Manager.
- Documentation about deploying to a JPL-managed commercial AWS account has been added to `docs/deployments`.
- Increase monthly job quota per user from 250 to 1,000.
- Limited the number of jobs a subscription can send at a time to avoid timing out. Fixes #794.
- Confirm there are no unprocessed granules before disabling subscriptions past their expiration date.
- Jobs are now assigned a `priority` attribute when submitted. `priority` is calculated based on jobs already submitted month-to-date by the same user. Jobs with a higher `priority` value will run before jobs with a lower value.
- `Batch.ServerException` errors encountered by the Step Function are now retried, to address intermittent errors when the Step Functions service calls the Batch SubmitJob API.
- HyP3 can now be deployed into a JPL managed commercial AWS account
- Selectable security environment when rendering CloudFormation templates, which will modify resources/configurations for:
  - `ASF` (default) -- AWS accounts managed by the Alaska Satellite Facility
  - `EDC` -- AWS accounts managed by the NASA Earthdata Cloud
  - `JPL` -- AWS accounts managed by the NASA Jet Propulsion Laboratory
- A `security_environment` Make variable used by the `render` target (and any target that depends on `render`). Use like `make security_environment=ASF build`
- All CloudFormation templates (`*-cf.yml`) are now rendered from jinja2 templates (`*-cf.yml.j2`)
- The `EarthdataCloud` CloudFormation template parameter to `apps/main-cf.yml`
- Use Managed Policies for IAM permissions in support of future deployments using custom CloudFormation IAM resources
- Added build target to Makefile.
- Disabled default encryption for the monitoring SNS topic. Fixes #762.
- Enabled default encryption for the monitoring SNS topic
- Block Public Access settings for the S3 content bucket are now configured based on the EarthdataCloud stack parameter.
- S3 access log bucket is now encrypted using AWS S3 Bucket Keys
- The `scale-cluster` lambda now reduces `desiredVcpus` to match `maxVcpus` when necessary to allow the compute environment to scale down immediately.
- The `DomainName` and `CertificateArn` stack parameters are now optional, allowing HyP3 to be deployed without a custom domain name for the API.
- Support for automatic toggling between two `maxvCpus` values for the Batch compute environment, based on monthly budget vs month-to-date spending
- Api 400 responses now use a consistent JSON schema for the response body. Fixes #625
- Default autoRIFT parameter file was updated to point at the new `its-live-data` AWS S3 bucket instead of `its-live-data.jpl.nasa.gov`, except for the custom `autorift-eu` deployment which uses a copy in `eu-central-1`.
- Job specification YAMLs can now specify a container `image_tag`, which will override the deployment default image tag
- Provided example granule pairs for INSAR_GAMMA and AUTORIFT jobs in the OpenApi schema
- `POST /jobs` no longer allows users to submit a job of one `job_type` with the parameters of another
- `POST /subscriptions` no longer allows users to submit a subscription of one `job_type` with the parameters of another
- `ProcessNewGranules` now converts `decimal.Decimal` objects to `float` or `int` before passing to `asf_search.search`
- Fixed typo in `search_parameters['FlightDirection']` DECENDING -> DESCENDING
- New `AmiId` stack parameter to specify a specific AMI for the AWS Batch compute environment
- `job_spec/*.yml` files are now explicitly selected, allowing per-deployment job customization
- `AutoriftImage`, `AutoriftNamingScheme`, and `AutoriftParameterFile` CloudFormation stack parameters have been removed and are instead captured in custom `job_spec/*.yml` files.
- Optional `DeployLambdasInVpc` stack parameter to deploy all lambda functions into the given `VpcId` and `SubnetIds`
- Job types are now each defined in their own file under the `job_spec` directory
- Api job parameters are now defined in the `job_spec` folder for the given job type
- Optional `PermissionsBoundaryPolicyArn` stack parameter to apply to all created IAM roles
- Resolved an issue where API requests would return HTTP 500 due to space-padded sourceIp value, e.g. ' 123.123.123.50'
- `BannedCidrBlocks` stack parameter to specify CIDR ranges that will receive HTTP 403 Forbidden responses from the API
- All job parameters of type `list` are now converted to space-delimited strings prior to invoking job definitions in Batch.
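As an illustration of that conversion (parameter name and values invented):

```yaml
# submitted to the API as a list parameter:
granules:
  - granuleA
  - granuleB
# passed to the Batch job definition as a single space-delimited string:
# "granuleA granuleB"
```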
- Exposed new `include_displacement_maps` API parameter for INSAR_GAMMA jobs, which will cause both a line-of-sight displacement and a vertical displacement GEOTIFF to be included in the product.
- Reduced default job quota to 250 jobs per user per month
- The `include_los_displacement` API parameter for INSAR_GAMMA jobs has been deprecated in favor of the `include_displacement_maps` parameter, and will be removed in the future.
- The `logs` attribute in `GET /jobs` responses is now only populated for FAILED jobs, and will be an empty list for SUCCEEDED jobs
- `POST /subscriptions` requests may now include a `validate_only` key; when set to `true`, the subscription will not be added to the database but will still be validated.
- In `POST /subscriptions` requests, `search_parameters` and `job_specification` are now included under `subscription`
- `GET /subscriptions` requests now include query parameters:
  - `name` gets only subscriptions with the given name
  - `job_type` gets only subscriptions with the given job type
  - `enabled` gets only subscriptions where `enabled` matches
- Subscriptions now include `creation_date`, which indicates the date and time of subscription creation; responses from `GET /subscriptions` are sorted by `creation_date` descending
- `PATCH /subscriptions` requests may now update a subscription's `enabled` attribute in addition to `end_date`
- `GET /jobs` responses now include a `subscription_id` field for jobs created by subscriptions
- `GET /jobs` requests now may include a `subscription_id` query parameter to limit jobs based on subscription_id
- Subscriptions are now evaluated every 16 minutes, instead of every hour
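Combining the items above, a `POST /subscriptions` request body might look like this sketch (values invented; only the keys named in these entries are confirmed):

```yaml
subscription:
  job_specification:
    job_type: RTC_GAMMA
    name: example-subscription
  search_parameters:
    start: '2021-01-01T00:00:00+00:00'
    intersectsWith: POINT(-135.7 64.2)
validate_only: true   # validate without adding the subscription to the database
```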
- `/subscriptions` endpoint which allows a user to define a subscription with search and processing criteria
  - `POST /subscriptions` to create a subscription
  - `GET /subscriptions` to list all subscriptions for the user
  - `PATCH /subscriptions/<subscription_id>` to update the end date of a subscription
  - `GET /subscriptions/<subscription_id>` to list the information for a specific subscription
- `process_new_granules` app which searches for unprocessed granules related to subscriptions and automatically starts jobs for them as they become available.
- HyP3 content bucket now allows Cross Origin Resource Headers
- Exposed new `apply_water_mask` API parameter for INSAR_GAMMA jobs, which sets pixels over coastal and large inland waterbodies as invalid for phase unwrapping.
- Modified retry strategies for Batch jobs and UploadLog lambda to address scaling errors
- `lib/dynamo` library to allow sharing common code among different apps.
- `POST /jobs` responses no longer include the `job_id`, `request_time`, `status_code`, or `user_id` fields when `validate_only=true`
- Moved DynamoDB functionality from `hyp3_api/dynamo` to `lib/dynamo`
- Moved job creation business logic from `hyp3_api/handlers` to `lib/dynamo`
- `AutoriftNamingScheme` CloudFormation parameter to set the naming scheme for autoRIFT products
- Sentinel-2 autoRIFT jobs now reserve 3/4 less memory, allowing more jobs to be run in parallel
- Increased default job quota to 1,000 jobs per user per month
- Allow the UI to be accessed from `/ui/` as well as `/ui`
- POST `/jobs` now generates an error for Sentinel-1 granules with partial-dual polarizations, fixes #376
- Removed `connexion` library due to inactivity, replaced with `open-api-core`
- Error messages from invalid input according to the `api-spec` are different
- POST `/jobs` no longer throws 500 for `decimal.Inexact` errors, fixes #444
- `start_execution.py` will now submit at most 400 step function executions per run. Resolves an issue where no executions would be started when many PENDING jobs were available for submission.
- Landsat and Sentinel-2 autoRIFT jobs will utilize 1/3 less memory
- Exposed new `include_wrapped_phase` API parameter for INSAR_GAMMA jobs
- Exposed new `include_dem` API parameter for INSAR_GAMMA jobs
- RTC_GAMMA jobs now use the Copernicus DEM by default
- AUTORIFT jobs now accept Landsat 8 scenes with a sensor mode of ORI-only (`LO08`)
- INSAR_GAMMA jobs now expose an `include_inc_map` parameter that allows users to include an incidence angle map.
- Updated API GATEWAY payload format to version 2.0 to support later versions of serverless wsgi
- Granules for INSAR_GAMMA jobs are now validated against Copernicus GLO-30 Public DEM coverage
- Resolved `handlers.get_names_for_user` error when `dynamo.query_jobs` requires paging.
- Resolved HTTP 500 error when quota check requires paging.
- Exposed new `dem_name` API parameter for RTC_GAMMA jobs
  - `dem_name="copernicus"` will use the Copernicus GLO-30 Public DEM
  - `dem_name="legacy"` will use the DEM with the best coverage from ASF's legacy SRTM/NED data sets
- `util.get_job_count_for_month` now uses `Select='COUNT'` for better performance when querying DynamoDB
- Granules for RTC_GAMMA jobs are now validated against the appropriate DEM coverage map based on the value of the `dem_name` job parameter
- `GET /jobs` now pages results. Large queries (that require paging) will contain a `next` key in the root level of the JSON response with a URL to fetch subsequent pages
- `GET /jobs` now accepts a `job_type` query parameter
- `GET /jobs` now provides jobs sorted by `request_time` in descending order
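A sketch of a paged `GET /jobs` response; the values and the query parameter in the URL are invented, while the root-level `next` key is what these entries describe:

```yaml
jobs:
  - job_id: 00000000-0000-0000-0000-000000000000   # hypothetical
    job_type: RTC_GAMMA
    request_time: '2020-09-01T00:00:00+00:00'
next: https://hyp3-api.asf.alaska.edu/jobs?start_token=example-opaque-token
```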
- Exposed new `include_rgb` API parameter for RTC_GAMMA jobs
- `get_files.py` now only includes product files ending in `.zip` or `.nc` in the `files` list returned in `GET /jobs` API responses
- S3 content bucket now allows public `s3:ListBucket` and `s3:GetObjectTagging`
- Jobs now include a `logs` key containing a list of log file download URLs
- Increased max capacity for compute environment to 1600 vCPUs
- Improved response latency when submitting new jobs via `POST /jobs`
- `GET /jobs` responses now include `s3.bucket` and `s3.key` entries for each file to facilitate interacting with products using s3-aware tools.
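For each product file, a response entry might then look like this sketch (bucket, key, and size invented; only the `s3.bucket` and `s3.key` additions are confirmed by this entry):

```yaml
files:
  - filename: example-product.zip
    size: 123456789
    url: https://example-bucket.s3.us-west-2.amazonaws.com/example/example-product.zip
    s3:
      bucket: example-bucket
      key: example/example-product.zip
```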
- AUTORIFT jobs now correctly accept Sentinel-2 granules using Earth Search IDs of 23 characters.
- AutoRIFT jobs now allow submission with Landsat 8 Collection 2 granules
- AutoRIFT jobs now only accept Sentinel-2 L1C granules, rather than any Sentinel-2 granules
- API responses are no longer validated against the OpenAPI schema.
- `GET /jobs` requests for jobs with legacy parameter values (e.g. S2 L2A granules) will no longer return HTTP 500 errors.
- INSAR_GAMMA jobs now use the hyp3-gamma plugin to do processing
- RTC_GAMMA jobs now use the hyp3-gamma plugin to do processing
- Autorift jobs now allow submission with Sentinel 2 granules
- A new `include_scattering_area` parameter has been added for `RTC_GAMMA` jobs, which includes a GeoTIFF of scattering area in the product package. This supports creation of composites of RTC images using Local Resolution Weighting per Small (2012) https://doi.org/10.1109/IGARSS.2012.6350465.
- Cloudwatch request metrics are now enabled for the S3 content bucket
- Api Gateway access logs are now in JSON format for easier parsing by Cloudwatch Insights
- Api Gateway access logs now include `responseLatency` and `userAgent` fields. Unused `caller` and `userId` fields are no longer included.
- `/` now redirects to `/ui`
- Increased compute to allow 200 concurrent instances.
- Refactored dynamodb interactions
  - `dynamo.py` in the api code now manages all dynamodb interactions for the api
  - Added tests for the new dynamo module
  - Added paging for dynamodb query calls
- Added Code of Conduct and Contributing Guidelines
- `MonthlyJobQuotaPerUser` stack parameter no longer has a default, and the value can now be set to zero.
  - Value is now set to `200` for ASF deployments.
  - Value is now set to `0` for the autoRIFT deployment.
- `POST /jobs` requests now allow up to 200 jobs per request, up from 25
- User table which can be used to add custom quotas for users; users not in the table will still have the default.
- `GET /jobs/{job_id}` API endpoint to search for a job by its job_id
- API and processing errors will now post to an SNS topic
- Parameters for `INSAR_GAMMA` jobs have been updated to reflect hyp3-insar-gamma v2.2.0.
- API behavior for different job types is now defined exclusively in `job_types.yml`. Available parameter types must still be defined in `apps/api/src/hyp3_api/api-sec/job_parameters.yml.j2`, and available validation rules must still be defined in `apps/api/src/hyp3_api/validation.py`.
- Added `AmazonS3ReadOnlyAccess` permission to the Batch task role to support container jobs fetching input data from S3.
- RTC_GAMMA jobs now use the user-provided value for the speckle filter parameter. Previously, the user-provided value was ignored and all jobs processed using the default value (false).
- HyP3 now uses jinja templates in defining CloudFormation templates and the StepFunction definition, rendered at build time.
- Job types are now defined only in the API spec and the `job_types.yml` file; no job-specific information needs to be added to AWS resource definitions.
- Static Analysis now requires rendering before run.
- Added a new `AUTORIFT` job type for processing a pair of Sentinel-1 SLC IW scenes using autoRIFT. For details, refer to the hyp3-autorift plugin repository.
- Updated readme deployment instructions.
- Clarified job parameter descriptions in OpenAPI specification.
- Moved step function definition into its own file and added static analysis on the step function definition.
- Split OpenAPI spec into multiple files using File References to resolve; switched static analysis from openapi-spec-validator to prance.
- `GET /user` response now includes a `job_names` list including all distinct job names previously submitted for the current user
- API is now deployed using Api Gateway V2 resources, resulting in lower response latency.
- Added a new `INSAR_GAMMA` job type for producing an interferogram from a pair of Sentinel-1 SLC IW scenes using GAMMA. For details, refer to the hyp3-insar-gamma plugin repository.
- All job types requiring one or more granules now expose a single `granules` job parameter, formatted as a list of granule names:
  - `"granules": ["myGranule"]` for `RTC_GAMMA` jobs
  - `"granules": ["granule1", "granule2"]` for `INSAR_GAMMA` jobs

  Note this is a breaking change for `RTC_GAMMA` jobs.
- Browse and thumbnail URLs for `RTC_GAMMA` jobs will now be sorted with the amplitude image first, followed by the rgb image, in `GET /jobs` responses.
- Resolved HTTP 500 error when submitting jobs with a resolution with decimal precision (e.g. `30.0`)
- The `dem_matching`, `speckle_filter`, `include_dem`, and `include_inc_map` API parameters are now booleans instead of strings.
- The `resolution` API parameter is now a number instead of a string, and the `10.0` option has been removed.
- Implemented 0.15° buffer and 20% threshold in DEM coverage checks when submitting new jobs. As a result slightly more granules will be rejected as having insufficient coverage.
- Removed optional `description` field for jobs
- Unit tests for the `get-files` lambda function
- Resolved HTTP 500 errors for `POST /jobs` requests when the optional `validate_only` parameter was not provided
- Jobs encountering unexpected errors in the `get-files` step will now correctly transition to a `FAILED` status
- `POST /jobs` now accepts a `validate_only` key at root level; set it to true to skip submitting jobs but still run API validation.
- `get-files` get expiration only from product
- `get-files` step functions AMI roles for tags
- `POST /jobs` now accepts custom job parameters when submitting jobs
- `GET /jobs` now shows parameters the job was run with
- `get_files.py` now uses tags to identify file_type instead of path
- Name field to job parameter, set in the same way as description but with a max length of 20
- Administrators can now shut down Hyp3-api by setting SystemAvailable flag to false in CloudFormation Template and deploying
- Retry policies to improve reliability
- POST /jobs now accepts GRDH Files in the IW beam mode.
- Removed scaling rules and moved to MANAGED compute environment to run jobs
- POST /jobs now checks granule intersects DEM Coverage map to provide faster feedback on common error cases
- Resolved bug not finding granules when submitting 10 unique granules in a RTC_GAMMA job
- README.md with instructions for deploying, testing, and running the application
- Descriptions for all parameters in the top level cloudformation template
- Descriptions for all schema objects in the OpenAPI specification
- Reduced monthly job quota per user from 100 to 25
- Reduced maximum number of jobs allowed in a single POST /jobs request from 100 to 25
- Removed user authorization requirement for submitting jobs
- Removed is_authorized field from GET /user response
- New GET /user API endpoint to query job quota and jobs remaining
- browse image key for each job containing a list of URLs for browse images
- browse images expire at the same time as products
- thumbnail image key for each job containing a list of URLs for thumbnail images
- thumbnail images expire at the same time as products
- API checks granule exists in CMR and rejects jobs with granules that are not found