# postgis

Chart for PostGIS.

Homepage: https://postgis.net/

## Maintainers

| Name | Email |
|---|---|
| lucernae | lana.pcfre@gmail.com |

## Requirements

| Repository | Name | Version |
|---|---|---|
| ../../common/v1.0.1 | common | 1.0.1 |
This is Kartoza's PostGIS Rancher chart.

PostGIS is an extension to the PostgreSQL database that adds support for spatial data.

To install with Helm:

```shell
helm install release-name kartoza/postgis
```
This chart bootstraps a PostgreSQL database with PostGIS installed in its main database. It behaves like a plain PostgreSQL database, but with the PostGIS extension enabled.
The default install uses the kartoza/postgis image, which can do the following:

- Generate superuser roles at startup
- Generate a new database at startup if the volume is empty
- Generate one or more databases with PostGIS installed
- Accept locale and collation settings for the database
- Enable TLS by default
- Ship with GDAL drivers installed
- Support out-of-db rasters
- Enable multiple extensions
You can override the image to use your own PostgreSQL image.
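For example, a values override swapping in a different image (the registry and repository below are illustrative placeholders, not chart defaults):

```yaml
# values.yaml -- use a custom image (example values)
image:
  registry: registry.example.com   # hypothetical private registry
  repository: my-org/custom-postgis
  tag: "13-3.1"
  pullPolicy: IfNotPresent
```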
By default, the chart creates a headless service, which can only be accessed from within the cluster itself. The name of the service can be used as the hostname. If you need to expose the database externally, you can change the service type to LoadBalancer or NodePort.
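A minimal values override to expose the service, assuming the chart's `service` values:

```yaml
# values.yaml -- expose the service outside the cluster (sketch; adapt to your environment)
service:
  type: LoadBalancer
  clusterIP: ""    # unset the default "None" so the service is no longer headless
  port: 5432
```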
With cert-manager you can create certificates automatically. First you need an Issuer and a Certificate request object; cert-manager will then create the certificate and store it in a secret. This should happen before you create the PostGIS app.

Use the generated secret by filling out the `tls` config options.

Because Postgres operates at network layers 3/4, the generated CA must be trusted by your OS if you want to connect using a self-signed certificate. If not, you can simply ignore the warning. However, some database clients will check the CA, depending on the connection's `sslmode`, which can be: `disable`, `allow`, `prefer`, `require`, `verify-ca`, or `verify-full`.
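A sketch of the cert-manager flow, assuming a pre-existing self-signed Issuer (all resource names here are illustrative):

```yaml
# Certificate request; cert-manager stores the result in the named secret
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: postgis-tls
spec:
  secretName: postgis-tls              # secret cert-manager will create
  dnsNames:
    - postgis.default.svc.cluster.local
  issuerRef:
    name: selfsigned-issuer            # an Issuer you created beforehand
    kind: Issuer
---
# values.yaml -- point the chart at the generated secret
tls:
  enabled: true
  secretName: postgis-tls
securityContext:
  fsGroup: 101    # postgres group; required so the key file is readable
```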
Sometimes you want to execute certain scripts after the database is ready. Our default image can do that (as can most Postgres images based on the official Postgres image).

The best approach is to create a ConfigMap with your scripts in it, then apply it to your Kubernetes cluster. Reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap

Then mount it into the pod using the `extraVolume` and `extraVolumeMounts` config options. If you mount it at the pod path `/docker-entrypoint-initdb.d/`, it will be scanned by the image. Only files with the extensions `.sql` and `.sh` are executed.

Depending on the Postgres image you use, you can also mount it to whichever directory that image's entrypoint script scans.
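The steps above can be sketched as follows (the ConfigMap name and script contents are illustrative):

```yaml
# ConfigMap holding an init script
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgis-init-scripts
data:
  01-create-extension.sql: |
    CREATE EXTENSION IF NOT EXISTS hstore;
---
# values.yaml -- mount the scripts where the image scans for them
extraVolume:
  - name: init-scripts
    configMap:
      name: postgis-init-scripts
extraVolumeMounts:
  - name: init-scripts
    mountPath: /docker-entrypoint-initdb.d
    readOnly: true
```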
TODO: Describe how replication works with stateful set pods.
## Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `existingSecret` | tpl/string | `""` | Use this if you have a predefined secrets object |
| `extraConfigMap` | tpl/map | `{}` | Define this for extra config maps (e.g. `file_1: "conf content"`) |
| `extraPodEnv` | tpl/list | `[{name: PASSWORD_AUTHENTICATION, value: "md5"}]` | Define this for extra pod environment variables |
| `extraPodSpec` | tpl/map | `{}` | This will be evaluated as the pod spec; set pod attributes here if needed (e.g. `ports`) |
| `extraSecret` | tpl/map | `{}` | Define this for extra secrets to be included (e.g. `key_1: value_1`) |
| `extraVolume` | tpl/list | `[]` | Define this for extra volumes (in pair with `extraVolumeMounts`), e.g. to mount a config map or secret |
| `extraVolumeMounts` | tpl/list | `[]` | Define this for extra volume mounts in the pod |
| `global.storageClass` | string | `nil` | Storage class name used to provision the PV |
| `image` | object/container-image | `{registry: docker.io, repository: kartoza/postgis, tag: "13-3.1", pullPolicy: IfNotPresent}` | Image map |
| `image.pullPolicy` | k8s/containers/image/imagePullPolicy | `"IfNotPresent"` | Image pullPolicy |
| `image.registry` | string | `"docker.io"` | Image registry |
| `image.repository` | string | `"kartoza/postgis"` | Image repository |
| `image.tag` | string | `"13-3.1"` | Image tag |
| `persistence.accessModes` | list | `["ReadWriteOnce"]` | Default access modes |
| `persistence.annotations` | map | `{}` | You can specify extra annotations here |
| `persistence.enabled` | bool | `true` | Enable persistence. If set to false, the data directory uses an ephemeral volume |
| `persistence.existingClaim` | string | `""` | A manually managed Persistent Volume and Claim. If defined, the PVC must be created manually before the volume will be bound. The value is evaluated as a template, so, for example, the name can depend on `.Release` or `.Chart` |
| `persistence.mountPath` | path | `"/opt/kartoza/postgis/data"` | The path the volume will be mounted at; useful when using different PostgreSQL images |
| `persistence.size` | string/size | `"8Gi"` | Size of the PV |
| `persistence.storageClass` | string | `nil` | Storage class name used to provision the PV |
| `persistence.subPath` | string | `"data"` | The subdirectory of the volume to mount to; useful in dev environments and when sharing one PV between multiple services. The default provisioner usually creates a `.lost+found` directory, so you might want to use this so the container sees an empty volume |
| `postgresDataDir` | path | `"/opt/kartoza/postgis/data"` | PostgreSQL data dir: location where you want to store the stateful data |
| `postgresDbName` | string | `"gis"` | Default generated database name. If the image supports it, pass a comma-separated list of database names; it is exposed in the `POSTGRES_DBNAME` environment variable. The first database is used to check the connection in the probe |
| `postgresPassword` | object/common.secret | `{value: nil, valueFrom: {secretKeyRef: {name: nil, key: postgresql-password}}}` | Secret structure for the postgres superuser password. Use this for a prefilled password |
| `postgresPassword.value` | string | `nil` | Specify this password value. If not set, it will be autogenerated every time the chart is upgraded |
| `postgresUser` | string | `"docker"` | postgres superuser |
| `probe` | k8s/containers/probe | `""` | The probe can be overridden. If left empty, the default probe is used |
| `securityContext` | k8s/containers/securityContext | `{}` | Define this if you want more control over the security context of the pods. You have to set `fsGroup` (e.g. `101`, the postgres group) if you use a custom certificate |
| `service.annotations` | tpl/map | `""` | Provide any additional annotations which may be required. Evaluated as a template |
| `service.clusterIP` | k8s/service/clusterIP | `"None"` | Set to `None` for a headless service; otherwise set to `""` to get a default cluster IP |
| `service.labels` | tpl/map | `""` | Provide any additional labels which may be required. Evaluated as a template |
| `service.loadBalancerIP` | k8s/service/loadBalancerIP | `nil` | Set the LoadBalancer service type to internal only |
| `service.nodePort` | k8s/service/nodePort | `nil` | Specify the nodePort value for the LoadBalancer and NodePort service types |
| `service.port` | k8s/service/port | `5432` | Default TCP port |
| `service.type` | k8s/service/type | `"ClusterIP"` | PostgreSQL service type |
| `test.containers` | tpl/array | `nil` | List of container overrides for testing |
| `tls.caSubPath` | string | `"ca.crt"` | Subpath of the secret holding the CA |
| `tls.certSubPath` | string | `"tls.crt"` | Subpath of the secret holding the cert file |
| `tls.enabled` | bool | `false` | Set to true if you can specify where the certificate is located. You must also set `securityContext.fsGroup` if you want to use TLS |
| `tls.keySubPath` | string | `"tls.key"` | Subpath of the secret holding the TLS key |
| `tls.secretName` | string | `nil` | Secret (e.g. created by a cert-manager Certificate) that stores the certificate |
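Putting a few of these values together, a small combined override might look like this (all values are illustrative, not recommendations):

```yaml
# values.yaml -- combined example
postgresUser: docker
postgresDbName: gis,osm    # comma-separated; the first database is used by the probe
postgresPassword:
  value: change-me         # omit to have the chart autogenerate a password
persistence:
  enabled: true
  size: 8Gi
```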