
Terraform for Oracle Linux Cloud Native Environment

Current version: 0.1

The Terraform OCI OLCNE module for Oracle Cloud Infrastructure (OCI) is a reusable and extensible Terraform module that provisions an Oracle Linux Cloud Native Environment on OCI. It is released as a technical preview for developers and simplifies the setup needed to deploy OLCNE quickly on Oracle Cloud compute infrastructure.

This Technical Preview is not intended for production use, and has the following limitations:

  • OLCNE is currently supported on Bare Metal shapes only. You can use this module to install on Virtual Machine shapes, but that is not a supported configuration.

  • Multi-master clusters are not supported.

  • The OLCNE nodes must opt out of OS Management Service to prevent RPM conflicts.

If you are deploying a production Kubernetes cluster on OCI, you should consider using Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE). You can use terraform-oci-oke to provision an OKE cluster.

This module creates the following resources; a sketch of how the submodules fit together follows the list.

(Architecture diagram: infrastructure)
  1. Base module:

    • A VCN with internet, service and NAT gateways, and route tables.

    • A security list, subnet and a bastion host (using Oracle Autonomous Linux).

    • An optional notification topic and subscription.

  2. Network module:

    • Network security groups for operator, master and worker nodes as well as a public load balancer.

    • Separate subnets for operator, master, worker and load balancer.

  3. Operator module:

    • An operator node to perform installation of OLCNE on the master and worker nodes.

    • An ingress controller of type NodePort.

    • An optional Kata container runtime class.

  4. Master module:

    • Single master node. Multi-master is not supported yet.

    • Instance pools to manage the master nodes.

  5. Worker module:

    • A configurable number of worker nodes.

    • Instance pools to manage the worker nodes.

  6. Load balancer module:

    • A public load balancer with automatic backend creation.
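
Conceptually, the root configuration wires these submodules into a chain where each consumes the previous one's outputs. The sketch below is illustrative only: the module paths, variable names, and output names are assumptions, not the repository's actual interface.

module "base" {
  source          = "./modules/base"            # hypothetical path
  compartment_id  = var.compartment_id
  bastion_enabled = true
}

module "network" {
  source = "./modules/network"
  vcn_id = module.base.vcn_id                   # hypothetical output name
}

module "operator" {
  source    = "./modules/operator"
  subnet_id = module.network.operator_subnet_id # hypothetical output name
}

module "master" {
  source    = "./modules/master"
  subnet_id = module.network.master_subnet_id
}

module "worker" {
  source      = "./modules/worker"
  subnet_id   = module.network.worker_subnet_id
  worker_size = var.worker_size                 # number of worker nodes
}

module "load_balancer" {
  source    = "./modules/loadbalancer"          # hypothetical path
  subnet_id = module.network.lb_subnet_id
}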

To use this module to create an OLCNE environment:

Create a vault to store the SSH keys securely.

  1. In the OCI Console, create a vault by navigating to Security > Vault. See Managing Vaults for more details.

  2. Click on the vault and click 'Create Key'. See Managing Keys for more details.

  3. Click on Secrets and click 'Create Secret'.

  4. Select the compartment where you want to create the secret, and enter a name and description.

  5. Select the encryption key you created previously.

  6. Set the secret type template to Plain-Text.

  7. Paste the contents of your private SSH key into the secret contents field.

  8. After the secret is created, click on the secret name and note down its OCID; you will need it later.
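
Alternatively, if you manage your vault with Terraform, the OCI provider offers an oci_vault_secret resource that can create the same secret. This is only a sketch: the variable names are placeholders, and it assumes you already have the vault and key OCIDs.

resource "oci_vault_secret" "ssh_private_key" {
  compartment_id = var.compartment_id
  vault_id       = var.vault_id        # OCID of the vault created above (placeholder variable)
  key_id         = var.key_id          # OCID of the encryption key (placeholder variable)
  secret_name    = "olcne-ssh-private-key"

  secret_content {
    content_type = "BASE64"            # the API expects base64-encoded content
    content      = filebase64(var.ssh_private_key_path)
  }
}

Whichever way you create it, the secret's OCID is the value for the secret_id variable used later.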

The base infrastructure consists of the bastion and the admin server. It reuses the terraform-oci-base module to create a VCN, a bastion host, and an admin host with instance_principal enabled. Only the bastion host is needed here; the admin host is not.

  1. Copy terraform.tfvars.example:

    cp terraform.tfvars.example terraform.tfvars
  2. Edit terraform.tfvars and set the following parameters to the correct values for your environment:

    api_fingerprint = ""
    api_private_key_path = ""
    compartment_id = ""
    tenancy_id = ""
    user_id = ""
    ssh_private_key_path = "/path/to/ssh_private_key"
    ssh_public_key_path = "/path/to/ssh_public_key"
  3. In terraform.tfvars, enable only the bastion host:

    bastion_enabled = true
    admin_enabled = false
    admin_instance_principal = false
  4. Run Terraform and create the base module:

    terraform apply -target=module.base -auto-approve
  5. SSH to the bastion to check whether you can proceed:

    ssh opc@XXX.XXX.XXX.XXX

If you are not able to SSH to the bastion host, you will not be able to proceed any further.

  1. Update terraform.tfvars with the secret_id and the certificate details used to create the private CA certificates:

    secret_id = "ocid1.vaultsecret....."
    org_unit = "my org unit"
    org = "my org"
    city = "Sydney"
    state = "NSW"
    country = "au"
    common_name = "common name"
  2. Run terraform apply again:

    terraform apply -auto-approve

When complete, Terraform will output details of how to connect to the bastion, master and operator, for example:

Outputs:

ssh_to_bastion = ssh -i /path/to/ssh/key opc@123.45.67.209
ssh_to_master = ssh -i /path/to/ssh/key -J opc@123.45.67.209 opc@10.0.3.2
ssh_to_operator = ssh -i /path/to/ssh/key -J opc@123.45.67.209 opc@10.0.0.146
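
For reference, outputs of this shape are typically produced with output blocks along the following lines. This is a sketch only; the attribute and output names are assumptions, not the repository's actual definitions.

output "ssh_to_bastion" {
  value = "ssh -i ${var.ssh_private_key_path} opc@${module.base.bastion_public_ip}"   # hypothetical output name
}

output "ssh_to_operator" {
  value = "ssh -i ${var.ssh_private_key_path} -J opc@${module.base.bastion_public_ip} opc@${module.operator.private_ip}"   # hypothetical
}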

You can SSH to the operator and access the cluster, for example:

[opc@cne-operator ~]$ kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
cne-master          Ready    master   22m   v1.17.4+1.0.1.el7
cne-worker          Ready    <none>   21m   v1.17.4+1.0.1.el7
cne-worker-550781   Ready    <none>   21m   v1.17.4+1.0.1.el7
cne-worker-585063   Ready    <none>   21m   v1.17.4+1.0.1.el7

Only one master node is created.

By default, three worker nodes are created. You can change this by setting the worker_size parameter, as shown below.
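
For example, in terraform.tfvars (only the changed value is shown):

worker_size = 5   # provision five worker nodes instead of the default three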

If you want to use Kata containers, you must:

  1. Select one of the Bare Metal shapes for your worker nodes.

  2. Enable the creation of the Kata runtime class in terraform.tfvars:

    create_kata_runtime = true

By default, the Kata runtime class is named 'kata'. You can configure the name with the kata_runtime_class_name parameter, as shown below.
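
For example, in terraform.tfvars (shown here with the default value; this line is only needed if you want a different name):

kata_runtime_class_name = "kata"   # default; change this to use a custom runtime class name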

When deploying Kata containers, set runtimeClassName in the pod spec accordingly:

apiVersion: v1
kind: Pod
metadata:
  name: kata-nginx
spec:
  runtimeClassName: kata
  containers:
    - name: nginx
      image: nginx
      ports:
      - containerPort: 80
To deploy a sample application:

  1. Print the Terraform output to get the command for accessing the operator:

    terraform output
    ssh_to_operator = ssh -i ~/.ssh/id_rsa -J opc@XXX.XXX.XXX.XXX opc@10.0.0.146
  2. Copy the ssh_to_operator command and run it:

    ssh -i ~/.ssh/id_rsa -J opc@XXX.XXX.XXX.XXX opc@10.0.0.146
  3. Deploy an application:

    git clone https://github.com/hyder/okesamples/
    cd okesamples
    kubectl apply -f ingresscontrollers/acme/
  4. Edit the ingresses in ingresscontrollers/nginx and replace www.acme.com with a domain within your control.

  5. Create the ingresses:

    kubectl apply -f ingresscontrollers/nginx/
  6. Follow the steps towards the end of this article to configure DNS in OCI and use the domain you set in the ingress above.
