GeoNode Cloud is an advanced implementation of the GeoNode platform in the cloud, focused on maximizing the use of cloud-native (or cloud-adapted) technologies. The solution is designed to be deployed on Kubernetes, which facilitates its scalability, management, and resilience. GeoNode Cloud incorporates the GeoServer Cloud project, which provides robust support for publishing, editing, and managing geospatial data, reinforcing its goal of offering a modern and efficient infrastructure for managing geospatial information in the cloud. With GeoNode Cloud, organizations can benefit from greater flexibility, reduced operational costs, and seamless integration with other cloud-native tools and services.
The project structure for deploying GeoNode Cloud and GeoServer Cloud on Kubernetes is organized into key directories containing the manifests required to configure and operate the applications. The following repository contains all the manifests used to perform the deployment.
Main Directories
- gn-cloud
- gs-cloud
- configs
- database
The solution architecture is divided into the following components:
- GeoNode Cloud Core
- GeoNode Cloud MapStore Client
- RabbitMQ
- GeoServer Cloud
- PostgreSQL with PostGIS extension
- Nginx
- Flower
Specifically, GeoNode Cloud Core relies on the following main technological components:
- Django Framework
- Memcached
- GeoNode Import
- pyCSW
- Celery
- GeoServer Django App - ACL capability
The architecture is based on microservices, and the plan is to extract into new microservices functionality that currently lives in the monolithic Django component.
GeoNode Cloud is licensed under the GPLv3.0.
Docker images for all the services are available on DockerHub, under the KAN Territory & IT organization.
You can find production-ready deployment files in the directories listed above.
Please read the contribution guidelines before contributing pull requests to the GeoNode Cloud project.
v1.0.0 was released on top of GeoNode 4.2.4. Read the Release Notes for more information.
GeoNode Cloud's issue tracking is handled in the project's GitHub Issues repository.
The upstream GeoServer project also uses GitHub Issues for issue tracking.
Here you can see the proposed Roadmap.
GeoNode Cloud can be deployed on different Kubernetes platforms; below are the details for deploying it on MicroK8s, which requires:
- MicroK8s.
- Ingress module.
- DNS module.
- Cert-manager module.
- Use snap to install microk8s
sudo snap install microk8s --classic
- Enable the necessary MicroK8s modules
microk8s enable ingress
microk8s enable cert-manager
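The prerequisites above also list the DNS module; if it is not already enabled in your MicroK8s installation, it can be enabled the same way (this extra command is an assumption based on the prerequisites list, not part of the original instructions):
microk8s enable dns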
- Create the cert-manager configuration to enable Let's Encrypt, using your own email
microk8s kubectl apply -f - <<EOF
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: YOUREMAIL@DOMAIN.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            class: public
EOF
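Before continuing, you can confirm the issuer was created (a quick check, assuming cert-manager is already running; letsencrypt is the name defined in the manifest above):
microk8s kubectl get clusterissuer letsencrypt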
- Clone this repository
git clone https://github.com/Kan-T-IT/geonode-cloud.git && cd geonode-cloud
- Enter the microk8s directory
cd microk8s
- Edit all fields in the .env file with the necessary information.
KUBERNETES_SITE_URL=GEONODE_CLOUD_FINAL_URL # e.g.: cloud.mygeonode.com
KUBERNETES_NODE_NAME=YOUR_CLUSTER_NODE_NAME # usually the host machine name
KUBERNETES_VOL_DIR=YOUR_DESIRED_LOCATION # this path should exist
CLUSTER_ISSUER_NAME=YOUR_CLUSTER_ISSUER_NAME # created earlier in this guide
SERVER_PUBLIC_IP=YOU.RPU.BLI.CIP # the public IPv4 address of the server
GEONODE_PASSWORD=admin # password for the geonode admin user
GEOSERVER_PASSWORD=geoserver # password for the geoserver admin user
- Run
./install.sh
and enjoy.
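Once install.sh finishes, a quick way to confirm the deployment (a suggested check, not part of the original instructions) is to watch the pods come up and then browse to the site URL you configured in KUBERNETES_SITE_URL:
microk8s kubectl get pods -A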
For deployment on AWS EKS, ensure that the cluster is up and running and configured with the following (a command-line sketch is shown after this list):
- OIDC Provider and IAM: Configure the OIDC provider for the EKS cluster.
- IAM Service Account for AWS Load Balancer Controller: Create the IAM service account and attach the necessary policies.
- Necessary Addons: Install AWS Load Balancer Controller and EBS CSI Driver.
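A minimal command-line sketch of these prerequisites, assuming eksctl, helm, and the AWS CLI are installed; the cluster name geonode-cloud, the account ID, and the IAM policy ARN are placeholders to replace with your own values:
# Associate the OIDC provider with the cluster
eksctl utils associate-iam-oidc-provider --cluster geonode-cloud --approve

# IAM service account for the AWS Load Balancer Controller
# (the AWSLoadBalancerControllerIAMPolicy policy must already exist in your account)
eksctl create iamserviceaccount \
  --cluster geonode-cloud \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve

# Install the AWS Load Balancer Controller via Helm
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=geonode-cloud \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

# Install the EBS CSI driver as an EKS add-on
eksctl create addon --name aws-ebs-csi-driver --cluster geonode-cloud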
To deploy the necessary resources on EKS, follow this order:
- Cluster and StorageClass
  - cluster.yaml in cluster (Eksctl), if the cluster is not already created.
  - local-storageclass.yaml in configs/storageclass (to set up the StorageClass before creating volumes).
- Database
  - ConfigMap: gndatabase-configmap.yaml in database/configmaps.
  - PVC: dbdata-pvc.yaml in database/volumes.
  - Deployment: gndatabase-deployment.yaml in database/deployments.
  - Service: gndatabase-service.yaml in database/services.
- gn-cloud Components
  - ConfigMaps in gn-cloud/configmaps (to make all configurations available).
  - PVCs: statics-pvc.yaml and tmp-pvc.yaml in gn-cloud/volumes.
  - Deployments: Deploy celery, django, mapstore, memcache, and redis in gn-cloud/deployments.
  - Services for each component in gn-cloud/services.
- gs-cloud Components
  - ConfigMaps in gs-cloud/configmaps (to have all configurations ready).
  - PVCs: geowebcache-data-persistentvolumeclaim.yaml and rabbitmq-data-persistentvolumeclaim.yaml in gs-cloud/volumes.
  - Deployments: Deploy acl, gateway, gwc, rabbitmq, rest, wcs, webui, wfs, and wms in gs-cloud/deployments.
  - Services in gs-cloud/services.
- Ingress
  - Finally, apply geonode-ingress.yaml in configs/ingress to expose services to the outside.
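As an illustration of that order, the applies can be scripted roughly as follows. The paths assume the directory layout named above and that commands run from the repository root; in particular, the location of cluster.yaml is an assumption, so adjust the paths to match your checkout:
# Cluster and StorageClass
eksctl create cluster -f cluster/cluster.yaml   # skip if the cluster already exists; path is an assumption
kubectl apply -f configs/storageclass/local-storageclass.yaml

# Database
kubectl apply -f database/configmaps/gndatabase-configmap.yaml
kubectl apply -f database/volumes/dbdata-pvc.yaml
kubectl apply -f database/deployments/gndatabase-deployment.yaml
kubectl apply -f database/services/gndatabase-service.yaml

# gn-cloud components
kubectl apply -f gn-cloud/configmaps/
kubectl apply -f gn-cloud/volumes/statics-pvc.yaml -f gn-cloud/volumes/tmp-pvc.yaml
kubectl apply -f gn-cloud/deployments/
kubectl apply -f gn-cloud/services/

# gs-cloud components
kubectl apply -f gs-cloud/configmaps/
kubectl apply -f gs-cloud/volumes/geowebcache-data-persistentvolumeclaim.yaml \
              -f gs-cloud/volumes/rabbitmq-data-persistentvolumeclaim.yaml
kubectl apply -f gs-cloud/deployments/
kubectl apply -f gs-cloud/services/

# Ingress
kubectl apply -f configs/ingress/geonode-ingress.yaml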
After following these steps, verify the status of your pods and services.
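For example, with standard kubectl commands (not project-specific):
kubectl get pods -A
kubectl get svc -A
kubectl get ingress -A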