terraform-aws-cloud

This repository contains opinionated Terraform modules used to deploy and configure an AWS EKS cluster for the StreamNative Platform. It is currently underpinned by the terraform-aws-eks module.

The working result is a Kubernetes cluster sized to your specifications, bootstrapped with StreamNative's Platform configuration, ready to receive a deployment of Apache Pulsar.

For more information on StreamNative Platform, head on over to our official documentation.

Prerequisites

The Terraform command line tool is required and must be installed. It's what we're using to manage the creation of a Kubernetes cluster and its bootstrap configuration, along with the necessary cloud provider infrastructure.

We use Helm for deploying the StreamNative Platform charts on the cluster, and while not necessary, it's recommended to have it installed for debugging purposes.

Your caller identity must also have the necessary AWS IAM permissions to create and work with EC2, EKS, VPC, and Route53 resources.

Other Recommendations

Networking

EKS has multiple modes of network configuration for how you access the EKS cluster endpoint, as well as how the node groups communicate with the EKS control plane.

This Terraform module supports the following:

  • Public (EKS) / Private (Node Groups): The EKS cluster API server is accessible from the internet, and node groups use a private VPC endpoint to communicate with the cluster's control plane (the default configuration)
  • Public (EKS) / Public (Node Groups): The EKS cluster API server is accessible from the internet, and node groups use a public EKS endpoint to communicate with the cluster's control plane. This mode can be enabled by setting the input enable_node_group_private_networking = false in the module.

Note: Currently we do not support fully private EKS clusters with this module (i.e. all network traffic remains internal to the AWS VPC).
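
As a minimal sketch, opting in to the public/public mode looks like the following (all other required inputs omitted for brevity):

module "sn_cluster" {
  source = "streamnative/cloud/aws"

  # ... other required inputs (cluster_name, subnets, etc.) ...

  # Route node group traffic over the public EKS endpoint
  enable_node_group_private_networking = false
}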

For your VPC configuration, we require sets of public and private subnets (a minimum of two each, one per AWS AZ). Both groups of subnets must have an outbound route to the internet. We also recommend using a separate VPC reserved for the EKS cluster, with a minimum CIDR block per subnet of /24.

A Terraform sub-module is available that manages the VPC configuration to our specifications. It can be used in composition with the root module in this repo (see this example).
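
If you instead manage the VPC yourself, the following sketch shows one layout that meets the requirements above. It uses the community terraform-aws-modules/vpc module purely for illustration; it is not the sub-module from this repo:

module "pulsar_vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name = "pulsar-vpc" # a VPC reserved for the EKS cluster
  cidr = "10.0.0.0/16"

  # Minimum of two AZs, with one public and one private /24 subnet each
  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.0.0/24", "10.0.1.0/24"]
  private_subnets = ["10.0.2.0/24", "10.0.3.0/24"]

  # Both subnet groups need an outbound path to the internet
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true
}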

For more information on how EKS networking can be configured, refer to the AWS EKS networking documentation.

Getting Started

A bare minimum configuration to execute the module:

data "aws_eks_cluster" "cluster" {
  name = module.eks_cluster.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks_cluster.eks_cluster_id
}

provider "aws" {
  region = var.region
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  insecure               = false
}

variable "region" {
  default = "us-east-1"
}

module "sn_cluster" {
  source = "streamnative/cloud/aws"

  cluster_name                   = "sn-cluster-${var.region}"
  cluster_version                = "1.20"
  hosted_zone_id                 = "Z04554535IN8Z31SKDVQ2" # Change this to your hosted zone ID
  node_pool_instance_types       = ["c6i.xlarge"]
  extra_node_pool_instance_types = ["c6i.2xlarge"] # Optional; defaults to [], meaning no extra node pool is created
  node_pool_desired_size         = 2
  node_pool_min_size             = 1
  node_pool_max_size             = 6

  ## Note: EKS requires two subnets, each in their own availability zone
  public_subnet_ids  = ["subnet-abcde012", "subnet-bcde012a"]
  private_subnet_ids = ["subnet-vwxyz123", "subnet-efgh242a"]
  region             = var.region
  vpc_id             = "vpc-1234556abcdef"
}

In the example main.tf above, we create a StreamNative Platform EKS cluster running Kubernetes version 1.20, with two node groups (one per subnet; see Footnote 1), each configured with a desired capacity of two and a maximum of six. This means four c6i.xlarge worker nodes are created initially, and depending on cluster usage the groups can autoscale to twelve nodes in total.

Note: If you are creating more than one EKS cluster in an AWS account, it is necessary to set the input create_iam_policies_for_cluster_addon_services = false. Otherwise Terraform will error stating that resources already exist with the desired name. This is a temporary workaround and will be improved in later versions of the module.
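
For example, on the second (and any subsequent) cluster composition in the same account:

module "sn_cluster_2" {
  source = "streamnative/cloud/aws"

  # Avoid name collisions with IAM policies created by the first cluster
  create_iam_policies_for_cluster_addon_services = false

  # ... remaining inputs as in the example above ...
}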

The Getting Started example creates an EKS cluster to your specifications, along with a set of addons (and their required IAM resources) that are enabled by default.
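
Each addon can be toggled through its corresponding enable_* input (the names below all appear in the Inputs section later in this README); for example:

module "sn_cluster" {
  source = "streamnative/cloud/aws"

  # ... required inputs ...

  enable_cert_manager   = true  # enabled by default
  enable_external_dns   = true  # enabled by default
  enable_metrics_server = true  # enabled by default
  enable_calico         = true  # opt-in; defaults to false
}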

Creating a StreamNative Platform EKS Cluster

When deploying StreamNative Platform, there are additional resources to be created alongside (and inside!) the EKS cluster:

  • StreamNative operators for Pulsar
  • Vault Operator
  • Vault Resources
  • Tiered Storage Resources (optional)

We have made this easy by creating additional Terraform modules that can be included alongside your EKS module composition. Consider adding the following to the example main.tf file above:

#######
### This module creates resources used for tiered storage offloading in Pulsar
#######
module "sn_tiered_storage_resources" {
  source = "streamnative/cloud/aws//modules/tiered-storage-resources"

  cluster_name         = module.sn_cluster.eks_cluster_id
  oidc_issuer          = module.sn_cluster.eks_cluster_oidc_issuer_string
  pulsar_namespace     = "my-pulsar-namespace"
  service_account_name = "pulsar"

  tags = {
    Project     = "StreamNative Platform"
    Environment = var.environment
  }

  depends_on = [
    module.sn_cluster
  ]
}

#######
### This module creates resources used by Vault for storing and retrieving secrets related to the Pulsar cluster
#######
module "sn_tiered_storage_vault_resources" {
  source = "streamnative/cloud/aws//modules/vault-resources"

  cluster_name         = module.sn_cluster.eks_cluster_id
  oidc_issuer          = module.sn_cluster.eks_cluster_oidc_issuer_string
  pulsar_namespace     = "my-pulsar-namespace" # The namespace where you will be installing Pulsar
  service_account_name = "vault"               # The name of the service account used by Vault in the Pulsar namespace

  tags = {
    Project     = "StreamNative Platform"
    Environment = var.environment
  }

  depends_on = [
    module.sn_cluster
  ]
}

#######
### This module installs the necessary operators for StreamNative Platform
### See: https://registry.terraform.io/modules/streamnative/charts/helm/latest
#######
module "sn_bootstrap" {
  source = "streamnative/charts/helm"

  enable_function_mesh_operator = true
  enable_vault_operator         = true
  enable_pulsar_operator        = true

  depends_on = [
    module.sn_cluster,
  ]
}

To apply the configuration, initialize the Terraform module in the directory containing your own version of the main.tf from the examples above:

terraform init

Validate and apply the configuration:

terraform apply

Deploy a StreamNative Platform Workload (an Apache Pulsar Cluster)

We use a Helm chart to deploy StreamNative Platform on the receiving Kubernetes cluster. Refer to our official documentation for more info.

Note: Since this module manages all of the Kubernetes addon dependencies required by StreamNative Platform, it is not necessary to perform all of the steps outlined in the Helm chart's README. Please reach out to your customer representative if you have questions.
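
As a rough sketch only, the deployment can also be expressed in Terraform using the helm provider configured earlier. The chart and repository names here are assumptions based on StreamNative's public Helm charts; consult the official documentation for the actual chart and its required values:

resource "helm_release" "pulsar" {
  name             = "sn-platform"
  repository       = "https://charts.streamnative.io" # assumed repository URL
  chart            = "sn-platform"                    # assumed chart name
  namespace        = "pulsar"
  create_namespace = true

  # Your cluster specification; see the official docs for required values
  values = [file("${path.module}/pulsar-values.yaml")]
}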

Using kubernetes-external-secrets with AWS Secrets Manager

By default, kubernetes-external-secrets is enabled on the EKS cluster, and the corresponding IRSA has access to retrieve all secrets created in the cluster's region. To restrict access, you can specify the ARNs of just the secrets needed by passing a list to the input asm_secret_arns in your composition:

module "sn_cluster" {
  source = "streamnative/cloud/aws"

  asm_secret_arns = [
    "arn:aws:secretsmanager:us-west-2:111122223333:secret:aes128-1a2b3c",
    "arn:aws:secretsmanager:us-west-2:111122223333:secret:aes192-4D5e6F",
    "arn:aws:secretsmanager:us-west-2:111122223333:secret:aes256-7g8H9i",
  ]
}

You can also use secret prefixes and wildcards to scope access more granularly, e.g. "arn:aws:secretsmanager:Region:AccountId:secret:TestEnv/*", and pass that to the module. Refer to the Secrets Manager docs for examples.
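
A minimal sketch of the wildcard form, reusing the region and account ID from the example above:

module "sn_cluster" {
  source = "streamnative/cloud/aws"

  # Grant access only to secrets under the TestEnv/ prefix
  asm_secret_arns = [
    "arn:aws:secretsmanager:us-west-2:111122223333:secret:TestEnv/*",
  ]
}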

To get an ASM secret on the cluster, create an ExternalSecret manifest YAML file:

apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: my-cluster-secret
spec:
  backendType: secretsManager
  data:
    - key: secret-prefix/secret-id
      name: my-cluster-secret

Refer to the official docs for more details.

You can also disable kubernetes-external-secrets by setting the input enable_external_secrets = false in your composition of the terraform-aws-cloud (this) module.

Requirements

Name Version
terraform >=1.0.0
aws >=3.61.0
helm 2.2.0
kubernetes >=2.6.1

Providers

Name Version
aws >=3.61.0
helm 2.2.0
kubernetes >=2.6.1

Modules

Name Source Version
eks terraform-aws-modules/eks/aws 17.24.0
istio github.com/streamnative/terraform-helm-charts//modules/istio-operator v0.8.4
vpc_tags ./modules/eks-vpc-tags n/a

Resources

Name Type
aws_autoscaling_group_tag.asg_group_vendor_tags resource
aws_ec2_tag.cluster_security_group resource
aws_iam_policy.aws_load_balancer_controller resource
aws_iam_policy.cert_manager resource
aws_iam_policy.cluster_autoscaler resource
aws_iam_policy.csi resource
aws_iam_policy.external_dns resource
aws_iam_policy.external_secrets resource
aws_iam_role.aws_load_balancer_controller resource
aws_iam_role.cert_manager resource
aws_iam_role.cluster resource
aws_iam_role.cluster_autoscaler resource
aws_iam_role.csi resource
aws_iam_role.external_dns resource
aws_iam_role.external_secrets resource
aws_iam_role_policy_attachment.aws_load_balancer_controller resource
aws_iam_role_policy_attachment.cert_manager resource
aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy resource
aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy resource
aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy resource
aws_iam_role_policy_attachment.cluster_autoscaler resource
aws_iam_role_policy_attachment.csi resource
aws_iam_role_policy_attachment.csi_managed resource
aws_iam_role_policy_attachment.external_dns resource
aws_iam_role_policy_attachment.external_secrets resource
helm_release.aws_load_balancer_controller resource
helm_release.calico resource
helm_release.cert_issuer resource
helm_release.cert_manager resource
helm_release.cluster_autoscaler resource
helm_release.csi resource
helm_release.external_dns resource
helm_release.external_secrets resource
helm_release.metrics_server resource
helm_release.node_termination_handler resource
kubernetes_namespace.sn_system resource
kubernetes_storage_class.sn_default resource
kubernetes_storage_class.sn_ssd resource
aws_caller_identity.current data source
aws_iam_policy_document.aws_load_balancer_controller data source
aws_iam_policy_document.aws_load_balancer_controller_sts data source
aws_iam_policy_document.cert_manager data source
aws_iam_policy_document.cert_manager_sts data source
aws_iam_policy_document.cluster_assume_role_policy data source
aws_iam_policy_document.cluster_autoscaler data source
aws_iam_policy_document.cluster_autoscaler_sts data source
aws_iam_policy_document.csi data source
aws_iam_policy_document.csi_sts data source
aws_iam_policy_document.external_dns data source
aws_iam_policy_document.external_dns_sts data source
aws_iam_policy_document.external_secrets data source
aws_iam_policy_document.external_secrets_sts data source
aws_kms_key.ebs_default data source
aws_subnet.private_cidrs data source

Inputs

Name Description Type Default Required
add_vpc_tags Adds tags to VPC resources necessary for ingress resources within EKS to perform auto-discovery of subnets. Defaults to "true". Note that this may cause resource cycling (delete and recreate) if you are using Terraform to manage your VPC resources without having a lifecycle { ignore_changes = [ tags ] } block defined within them, since the VPC resources will want to manage the tags themselves and remove the ones added by this module. bool true no
additional_tags Additional tags to be added to the resources created by this module. map(any) {} no
allowed_public_cidrs List of CIDR blocks that are allowed to access the EKS cluster's public endpoint. Defaults to "0.0.0.0/0" (any). list(string) ["0.0.0.0/0"] no
asm_secret_arns A list of ARNs for secrets stored in ASM. This grants the kubernetes-external-secrets controller access to only the specified secrets. If no ARNs are provided via this input, the IAM policy will allow read access to all secrets created in the provided region. list(string) [] no
aws_load_balancer_controller_helm_chart_name The name of the Helm chart to use for the AWS Load Balancer Controller. string "aws-load-balancer-controller" no
aws_load_balancer_controller_helm_chart_repository The repository containing the Helm chart to use for the AWS Load Balancer Controller. string "https://aws.github.io/eks-charts" no
aws_load_balancer_controller_helm_chart_version The version of the Helm chart to use for the AWS Load Balancer Controller. The current version can be found in github: https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/helm/aws-load-balancer-controller/Chart.yaml. string "1.4.2" no
aws_load_balancer_controller_settings Additional settings which will be passed to the Helm chart values for the AWS Load Balancer Controller. See https://github.com/kubernetes-sigs/aws-load-balancer-controller/tree/main/helm/aws-load-balancer-controller for available options. map(string) {} no
aws_partition AWS partition: 'aws', 'aws-cn', or 'aws-us-gov', used when constructing IRSA trust relationship policies. string "aws" no
calico_helm_chart_name The name of the Helm chart in the repository for Calico, which is installed alongside the tigera-operator. string "tigera-operator" no
calico_helm_chart_repository The repository containing the calico helm chart. We are currently using a community provided chart, which is a fork of the official chart published by Tigera. This chart isn't as opinionated about namespaces, and should be used until this issue is resolved projectcalico/calico#4812. string "https://stevehipwell.github.io/helm-charts/" no
calico_helm_chart_version Helm chart version for Calico. Defaults to "1.5.0". See https://github.com/stevehipwell/helm-charts/tree/master/charts/tigera-operator for available version releases. string "1.5.0" no
calico_settings Additional settings which will be passed to the Helm chart values. See https://github.com/stevehipwell/helm-charts/tree/master/charts/tigera-operator for available options. map(any) {} no
cert_issuer_support_email The email address to receive notifications from the cert issuer. string "certs-support@streamnative.io" no
cert_manager_helm_chart_name The name of the Helm chart in the repository for cert-manager. string "cert-manager" no
cert_manager_helm_chart_repository The repository containing the cert-manager helm chart. string "https://charts.bitnami.com/bitnami" no
cert_manager_helm_chart_version Helm chart version for the cert-manager. See https://github.com/bitnami/charts/tree/master/bitnami/cert-manager for version releases. string "0.6.2" no
cert_manager_settings Additional settings which will be passed to the Helm chart values. See https://github.com/bitnami/charts/tree/master/bitnami/cert-manager for available options. map(any) {} no
cluster_autoscaler_helm_chart_name The name of the Helm chart in the repository for cluster-autoscaler. string "cluster-autoscaler" no
cluster_autoscaler_helm_chart_repository The repository containing the cluster-autoscaler helm chart. string "https://kubernetes.github.io/autoscaler" no
cluster_autoscaler_helm_chart_version Helm chart version for the cluster-autoscaler. Defaults to "9.19.2". See https://github.com/kubernetes/autoscaler/tree/master/charts/cluster-autoscaler for more details. string "9.19.2" no
cluster_autoscaler_settings Additional settings which will be passed to the Helm chart values for cluster-autoscaler, see https://github.com/kubernetes/autoscaler/tree/master/charts/cluster-autoscaler for options. map(any) {} no
cluster_enabled_log_types A list of the desired control plane logging to enable. For more information, see the Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html). list(string) ["api", "audit", "authenticator", "controllerManager", "scheduler"] no
cluster_log_kms_key_id If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html). string "" no
cluster_log_retention_in_days Number of days to retain log events. Defaults to 365 days. number 365 no
cluster_name The name of your EKS cluster and associated resources. Must be 16 characters or less. string "" no
cluster_version The version of Kubernetes to be installed. string "1.20" no
csi_helm_chart_name The name of the Helm chart in the repository for CSI. string "aws-ebs-csi-driver" no
csi_helm_chart_repository The repository containing the CSI helm chart string "https://kubernetes-sigs.github.io/aws-ebs-csi-driver/" no
csi_helm_chart_version Helm chart version for CSI string "2.8.0" no
csi_settings Additional settings which will be passed to the Helm chart values, see https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/charts/aws-ebs-csi-driver/values.yaml for available options. map(any) {} no
disk_encryption_kms_key_id The KMS Key ARN to use for disk encryption. string "" no
enable_aws_load_balancer_controller Whether to enable the AWS Load Balancer Controller addon on the cluster. Defaults to "true", and in most situations is required by StreamNative Cloud. bool true no
enable_calico Enables the Calico networking service on the cluster. Defaults to "false". bool false no
enable_cert_manager Enables the Cert-Manager addon service on the cluster. Defaults to "true", and in most situations is required by StreamNative Cloud. bool true no
enable_cluster_autoscaler Enables the Cluster Autoscaler addon service on the cluster. Defaults to "true", and in most situations is recommended for StreamNative Cloud. bool true no
enable_csi Enables the EBS Container Storage Interface (CSI) driver on the cluster, which allows EKS to manage the lifecycle of persistent volumes in EBS. bool true no
enable_external_dns Enables the External DNS addon service on the cluster. Defaults to "true", and in most situations is required by StreamNative Cloud. bool true no
enable_external_secrets Enables kubernetes-external-secrets addon service on the cluster. Defaults to "false" bool false no
enable_func_pool Enable an additional dedicated function pool. bool true no
enable_func_pool_monitoring Enable CloudWatch monitoring for the dedicated function pool(s). bool true no
enable_istio Enables Istio on the cluster. Set to "true" by default. bool true no
enable_metrics_server Enables the Kubernetes Metrics Server addon service on the cluster. Defaults to "true". bool true no
enable_node_group_private_networking Enables private networking for the EKS node groups (not the EKS cluster endpoint, which remains public), meaning Kubernetes API requests that originate within the cluster's VPC use a private VPC endpoint for EKS. Defaults to "true". bool true no
enable_node_pool_monitoring Enable CloudWatch monitoring for the default pool(s). bool true no
external_dns_helm_chart_name The name of the Helm chart in the repository for ExternalDNS. string "external-dns" no
external_dns_helm_chart_repository The repository containing the ExternalDNS helm chart. string "https://charts.bitnami.com/bitnami" no
external_dns_helm_chart_version Helm chart version for ExternalDNS. See https://hub.helm.sh/charts/bitnami/external-dns for updates. string "6.5.6" no
external_dns_settings Additional settings which will be passed to the Helm chart values, see https://hub.helm.sh/charts/bitnami/external-dns. map(any) {} no
external_secrets_helm_chart_name The name of the Helm chart in the repository for kubernetes-external-secrets. string "kubernetes-external-secrets" no
external_secrets_helm_chart_repository The repository containing the kubernetes-external-secrets helm chart. string "https://external-secrets.github.io/kubernetes-external-secrets" no
external_secrets_helm_chart_version Helm chart version for kubernetes-external-secrets. Defaults to "8.3.0". See https://github.com/external-secrets/kubernetes-external-secrets/tree/master/charts/kubernetes-external-secrets for updates. string "8.3.0" no
external_secrets_settings Additional settings which will be passed to the Helm chart values, see https://github.com/external-secrets/kubernetes-external-secrets/tree/master/charts/kubernetes-external-secrets for available options. map(any) {} no
func_pool_ami_id The AMI ID to use for the func pool nodes. Defaults to the latest EKS Optimized AMI provided by AWS string "" no
func_pool_ami_is_eks_optimized Whether the custom AMI is an EKS optimized image; ignored if ami_id is not set. If true, bootstrap.sh is called automatically (max pod logic needs to be manually set); if false, you need to provide all of the node configuration in pre_userdata. bool true no
func_pool_desired_size Desired number of worker nodes number 0 no
func_pool_disk_size Disk size in GiB for function worker nodes. Defaults to 50. Terraform will only perform drift detection if a configuration value is provided. number 50 no
func_pool_disk_type Disk type for function worker nodes. Defaults to gp3. string "gp3" no
func_pool_instance_types Set of instance types associated with the EKS Node Group. Defaults to ["c6i.large"]. Terraform will only perform drift detection if a configuration value is provided. list(string) ["c6i.large"] no
func_pool_labels Labels to apply to the function pool node group. Defaults to {}. map(string) {} no
func_pool_max_size The maximum size of the AutoScaling Group. number 5 no
func_pool_min_size The minimum size of the AutoScaling Group. number 0 no
func_pool_namespace The namespace where functions run. string "pulsar-funcs" no
func_pool_pre_userdata The pre-userdata script to run on the function worker nodes. string "" no
func_pool_sa_name The service account name the functions use. string "default" no
hosted_zone_id The ID of the Route53 hosted zone used by the cluster's External DNS configuration. string n/a yes
iam_path An IAM Path to be used for all IAM resources created by this module. Changing this from the default will cause issues with StreamNative's Vendor access, if applicable. string "/StreamNative/" no
istio_mesh_id The ID used by the Istio mesh. This is also the ID of the StreamNative Cloud Pool used for the workload environments. This is required when "enable_istio_operator" is set to "true". string null no
istio_network The name of network used for the Istio deployment. This is required when "enable_istio_operator" is set to "true". string "default" no
istio_network_loadbancer n/a string "internet_facing" no
istio_profile The path or name for an Istio profile to load. Set to the profile "default" if not specified. string "default" no
istio_revision_tag The revision tag value use for the Istio label "istio.io/rev". string "sn-stable" no
istio_settings Additional settings which will be passed to the Helm chart values map(any) {} no
istio_trust_domain The trust domain used for the Istio deployment, which corresponds to the root of a system. This is required when "enable_istio_operator" is set to "true". string "cluster.local" no
kiali_operator_settings Additional settings which will be passed to the Helm chart values map(any) {} no
map_additional_aws_accounts Additional AWS account numbers to add to config-map-aws-auth ConfigMap. list(string) [] no
map_additional_iam_roles Additional IAM roles to add to config-map-aws-auth ConfigMap. list(object({ rolearn = string, username = string, groups = list(string) })) [] no
map_additional_iam_users Additional IAM users to add to config-map-aws-auth ConfigMap. list(object({ userarn = string, username = string, groups = list(string) })) [] no
metrics_server_helm_chart_name The name of the helm release to install string "metrics-server" no
metrics_server_helm_chart_repository The repository containing the external-metrics helm chart. string "https://kubernetes-sigs.github.io/metrics-server" no
metrics_server_helm_chart_version Helm chart version for Metrics server string "3.8.2" no
metrics_server_settings Additional settings which will be passed to the Helm chart values for the Metrics Server. map(any) {} no
node_pool_ami_id The AMI ID to use for the EKS cluster nodes. Defaults to the latest EKS Optimized AMI provided by AWS string "" no
node_pool_ami_is_eks_optimized Whether the custom AMI is an EKS optimized image; ignored if ami_id is not set. If true, bootstrap.sh is called automatically (max pod logic needs to be manually set); if false, you need to provide all of the node configuration in pre_userdata. bool true no
node_pool_desired_size Desired number of worker nodes in the node pool. number 1 no
node_pool_disk_size Disk size in GiB for worker nodes in the node pool. Defaults to 50. number 50 no
node_pool_disk_type Disk type for worker nodes in the node pool. Defaults to gp3. string "gp3" no
node_pool_instance_types Set of instance types associated with the EKS Node Group. Defaults to ["c6i.large"]. list(string) ["c6i.large"] no
extra_node_pool_instance_types Set of instance types for an extra node pool, which otherwise shares the properties of the default node pool (except its name). Defaults to []. list(string) [] no
node_pool_labels A map of kubernetes labels to add to the node pool. map(string) {} no
node_pool_max_size The maximum size of the node pool Autoscaling group. number n/a yes
node_pool_min_size The minimum size of the node pool AutoScaling group. number 1 no
node_pool_pre_userdata The user data to apply to the worker nodes in the node pool. This is applied before the bootstrap.sh script. string "" no
node_termination_handler_chart_version The version of the Helm chart to use for the AWS Node Termination Handler. string "0.18.5" no
node_termination_handler_helm_chart_name The name of the Helm chart to use for the AWS Node Termination Handler. string "aws-node-termination-handler" no
node_termination_handler_helm_chart_repository The repository containing the Helm chart to use for the AWS Node Termination Handler. string "https://aws.github.io/eks-charts" no
node_termination_handler_settings Additional settings which will be passed to the Helm chart values for the AWS Node Termination Handler. See https://github.com/aws/eks-charts/tree/master/stable/aws-node-termination-handler for available options. map(string) {} no
permissions_boundary_arn If required, provide the ARN of the IAM permissions boundary to use for restricting StreamNative's vendor access. string null no
private_subnet_ids The ids of existing private subnets. list(string) [] no
public_subnet_ids The ids of existing public subnets. list(string) [] no
region The AWS region. string null no
service_domain The DNS domain for external service endpoints. This must be set when enabling Istio or else the deployment will fail. string null no
sncloud_services_iam_policy_arn The IAM policy ARN to be used for all StreamNative Cloud services that need to interact with AWS services external to EKS. This policy is typically created by the "modules/managed-cloud" sub-module in this repository, as a separate customer-driven process for managing StreamNative's vendor access into AWS. If no policy ARN is provided, the module will generate the policies needed by each cluster service we install, and expects that the caller identity has IAM permissions that allow the "iam:CreatePolicy" action; otherwise the module will fail to run properly. Depends upon use_runtime_policy. string "" no
sncloud_services_lb_policy_arn A custom IAM policy ARN for the load balancer controller. If not specified, and use_runtime_policy is enabled, the module generates the policy itself. string "" no
use_runtime_policy Indicates whether to use the runtime policy and attach predefined policies, as opposed to creating roles. Currently defaults to false. bool false no
vpc_id The ID of the AWS VPC to use. string "" no
wait_for_cluster_timeout Time in seconds to wait for the newly provisioned EKS cluster's API/healthcheck endpoint to return healthy, before applying the aws-auth configmap. Defaults to 300 seconds in the parent module "terraform-aws-modules/eks/aws", which is often too short. Increase to at least 900 seconds, if needed. See also terraform-aws-modules/terraform-aws-eks#1420. number 0 no

Outputs

Name Description
cloudwatch_log_group_arn The ARN of the CloudWatch log group created by this module
eks_cluster_arn The ARN for the EKS cluster created by this module
eks_cluster_id The id/name of the EKS cluster created by this module
eks_cluster_identity_oidc_issuer_arn The ARN for the OIDC issuer created by this module
eks_cluster_identity_oidc_issuer_string A formatted string containing the prefix for the OIDC issuer created by this module. Same as "cluster_oidc_issuer_url", but with "https://" stripped from the name. This output is typically used in other StreamNative modules that request the "oidc_issuer" input.
eks_cluster_identity_oidc_issuer_url The URL for the OIDC issuer created by this module
eks_cluster_primary_security_group_id The id of the primary security group created by the EKS service itself, not by this module. This is labeled "Cluster Security Group" in the EKS console.
eks_cluster_secondary_security_group_id The id of the secondary security group created by this module. This is labeled "Additional Security Groups" in the EKS console.
node_groups Outputs from EKS node groups. Map of maps, keyed by var.node_groups keys
worker_https_ingress_security_group_rule Security group rule responsible for allowing pods to communicate with the EKS cluster API.
worker_iam_role_arn The IAM Role ARN used by the Worker configuration
worker_security_group_id Security group ID attached to the EKS node groups

Footnotes

  1. When running Apache Pulsar in Kubernetes, we make use of EBS-backed Kubernetes Persistent Volume Claims (PVCs). EBS volumes are zonal, meaning an EC2 instance can only mount a volume that exists in the same AWS Availability Zone. For this reason we have added node group "zone affinity" functionality to our module, where an EKS node group is created per AWS Availability Zone. This is controlled by the number of subnets you pass to the EKS module, creating one node group per subnet.
