# EOEPCA Reference Implementation - System
## Table of Contents
- About The Project
- Getting Started
- System Documentation
- Technical Domains
- Releases
- Issues
- License
- Contact
- Acknowledgements
## EO Exploitation Platform Common Architecture (EOEPCA)
The goal of the “Common Architecture” is to define and agree upon the technical interfaces for the future exploitation of Earth Observation data in a distributed environment. The Common Architecture will thus provide the interfaces that facilitate the federation of different EO resources into a “Network of EO Resources”. It will be defined using open interfaces that link the different Resource Servers (building blocks), so that a user can efficiently access and consume the disparate services of the “Network of EO Resources”.
This repository represents the system integration of the building blocks that comprise the Reference Implementation of the Common Architecture.
The system is designed for deployment to cloud infrastructure orchestrated by a Kubernetes cluster. We include here the automation required to provision, deploy and test the emerging EOEPCA system.
The EOEPCA system deployment comprises several steps. Instructions are provided both for cloud deployment and for local deployment for development purposes.

For the latest release (v1.3), ensure that you follow the correct version of this README.
The first step is to fork this repository into your GitHub account. Use of fork (rather than clone) is recommended to support our GitOps approach to deployment with Flux Continuous Delivery, which requires write access to your git repository for deployment configurations.
Having forked, clone the repository to your local platform...
```shell
$ git clone git@github.com:<user>/eoepca.git
$ cd eoepca
$ git checkout v1.3
```
NOTE that this clones the specific tag that is well tested. Alternatively, use the `develop` branch for the latest development.
| Step | Cloud (OpenStack) | Local Developer |
|---|---|---|
| Infrastructure | CREODIAS | n/a (local developer platform) |
| Kubernetes Cluster | Rancher Kubernetes Engine | Minikube |
| EOEPCA System Deployment (`flux`) | EOEPCA GitOps | EOEPCA GitOps |
| EOEPCA System Deployment (Deployment Guide) | Deployment Guide | Deployment Guide |
| Acceptance Test | Run Test Suite | Run Test Suite |
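The GitOps route in the table can be bootstrapped against your fork with something like the following sketch. The branch and configuration path shown are assumptions for illustration only; the EOEPCA GitOps documentation gives the authoritative command.

```shell
# Illustrative flux bootstrap against a fork of this repository.
# Assumptions: personal GitHub account; <user> and <path-to-cluster-config>
# are placeholders to substitute for your own fork and configuration.
flux bootstrap github \
  --owner=<user> \
  --repository=eoepca \
  --branch=develop \
  --path=<path-to-cluster-config> \
  --personal
```

Bootstrapping in this way gives Flux write access to the fork, which is why a fork (rather than a plain clone) is recommended above.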
NOTE that, with release v1.3, the number of system components has expanded to the point where a full system deployment in minikube is difficult, due to the required resource demands. Nevertheless, it is possible to make a minikube deployment to a single node with sufficient resources (8 CPUs, 32 GB RAM) - as illustrated by the Deployment Guide.
NOTE also that the Deployment Guide provides a more detailed description of the deployment and configuration of the components, supported by shell scripts that deploy the components directly using `helm` (rather than via `flux` GitOps). The Deployment Guide represents a more informative introduction, and its supporting scripts assume an out-of-the-box `minikube`.
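For a local deployment, the single-node resource figures above can be supplied to minikube as in the following sketch (the values mirror the note above and are illustrative; adjust to your environment):

```shell
# Start a single-node minikube with resources sufficient for a full
# EOEPCA deployment (illustrative values; adjust as needed).
minikube start --cpus 8 --memory 32g

# The cluster IP is used to form the nip.io service names described below.
minikube ip
```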
To ease development/testing, the EOEPCA deployment is configured to use host/service names that embed IP-addresses, which avoids the need to configure public nameservers (as would be necessary for a production deployment). Our services are exposed through Kubernetes Ingress rules that use name-based routing, so bare IP-addresses are insufficient. We therefore exploit the services of nip.io, which provides a dynamic DNS in which the hostname-to-IP-address mapping is embedded in the hostname itself.

Thus, we use host/service names of the form `<service-name>.<public-ip>.nip.io`, where `<public-ip>` is the public-facing IP-address of the deployment (delimited with dashes `-`). For a cloud deployment the public IP is that of the cloud load-balancer; for minikube it is the output of `minikube ip` - for example `workspace.192-168-49-2.nip.io`.
NOTE: `nip.io` supports either dots `.` or dashes `-` to delimit the IP-address in the DNS name. We use dashes, which seem to work better with Let's Encrypt rate limits.
NOTES:
- We also maintain deployments for development and test, under the domains `develop.eoepca.org` and `demo.eoepca.org`.
- Our public endpoint address is baked into our deployment configuration - in particular the Kubernetes Ingress resources. To re-use our deployment configuration, these Ingress values must be updated to suit your deployment environment.
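The embedded `nip.io` hostnames mark the Ingress values that need updating. A minimal sketch for locating them, run from the root of your clone:

```shell
# List the deployment configuration files that embed nip.io hostnames -
# these are the Ingress host values to update for your own environment.
grep -rl "nip.io" . || echo "no embedded nip.io hostnames found"
```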
The following documentation supports the current release...
The following is the original documentation that was produced at the start of the project. It is included here for context...
| Building Block | Repository | Documentation |
|---|---|---|
| Application Deployment & Execution Service (ADES) | https://github.com/EOEPCA/proc-ades | https://github.com/EOEPCA/proc-ades/wiki https://deployment-guide.docs.eoepca.org/v1.3/eoepca/ades/ https://system-description.docs.eoepca.org/current/processing/ades/ |
| Application Hub | https://github.com/EOEPCA/application-hub-chart https://github.com/EOEPCA/application-hub-context | https://deployment-guide.docs.eoepca.org/v1.3/eoepca/application-hub/ https://system-description.docs.eoepca.org/current/processing/application-hub/ |
| Sample Application: s-expression | https://github.com/EOEPCA/app-s-expression | https://github.com/EOEPCA/app-s-expression/blob/main/README.md |
| Sample Application: snuggs | https://github.com/EOEPCA/app-snuggs | https://github.com/EOEPCA/app-snuggs#readme |
| Sample Application: nhi | https://github.com/EOEPCA/app-nhi | https://github.com/EOEPCA/app-nhi/blob/main/README.md |
EOEPCA system releases are made to provide integrated deployments of the developed building blocks. The release history is as follows:
| Date | Release |
|---|---|
| 25/09/2023 | Release 1.3 |
| 20/12/2022 | Release 1.2 |
| 31/05/2022 | Release 1.1 |
| 24/12/2021 | Release 1.0 |
| 23/07/2021 | Release 0.9 |
| 10/03/2021 | Release 0.3 |
| 23/11/2020 | Release 0.2 |
| 13/08/2020 | Release 0.1.2 |
| 06/08/2020 | Release 0.1.1 |
| 22/06/2020 | Release 0.1 |
See the open issues for a list of proposed features (and known issues).
The EOEPCA system is distributed under the Apache 2.0 License. See `LICENSE` for more information.

Building blocks and their sub-components are individually licensed. See their respective source repositories for details.
Project Link: Project Home (https://eoepca.org/)
- README.md is based on this template by Othneil Drew.