This Terraform module integrates Consul-Terraform-Sync with the Zscaler Private Access (ZPA) Cloud to manage Application Servers, dynamically updating their address field based on the service definitions in the Consul catalog.
Using this Terraform module in conjunction with consul-terraform-sync enables teams to reduce manual ticketing processes and automate Day-2 operations related to application scale up/down in a way that is both declarative and repeatable across the organization and across multiple Application Servers.
This module supports the following:
- Create, update, and delete Application Servers based on the services in the Consul catalog.
If there is a missing feature or a bug, please open an issue.
- consul-terraform-sync runs as a daemon that enables a publisher-subscriber paradigm between Consul and the ZPA Cloud to support Network Infrastructure Automation (NIA).
- consul-terraform-sync subscribes to updates from the Consul catalog and executes one or more automation "tasks" with the appropriate values of service variables based on those updates. It leverages Terraform as the underlying automation tool and uses the Terraform provider ecosystem to drive the relevant changes to the network infrastructure.
- Each task consists of a runbook automation written as a compatible Terraform module using resources and data sources for the underlying network infrastructure provider.
Please refer to the consul-terraform-sync documentation to get started.
## Requirements

| Name | Version |
|------|---------|
| terraform | >= 0.13 |
| zpa | >= 2.3.2 |

## Providers

| Name | Version |
|------|---------|
| zpa | 2.3.2 |

## Modules

No modules.

## Resources

| Name | Type |
|------|------|
| zpa_app_connector_group.this | resource |
| zpa_application_server.this | resource |
| zpa_server_group.this | resource |
| zpa_app_connector_group.this | data source |
| zpa_server_group.this | data source |
## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| app_connector_group_city_country | City and country where the App Connector is located, e.g. "San Jose, US" | string | "San Jose, US" | no |
| app_connector_group_country_code | Code of the country where the App Connector is located, e.g. "US" or "CA" | string | "US" | no |
| app_connector_group_description | Description of the App Connector Group. | string | "AppConnectorGroup" | no |
| app_connector_group_dns_query_type | Whether to enable IPv4, IPv6, or both for DNS resolution of all applications in the App Connector Group. | string | "IPV4_IPV6" | no |
| app_connector_group_enabled | Whether this App Connector Group is enabled or not. | bool | true | no |
| app_connector_group_latitude | Latitude of the App Connector Group. | string | "37.3382082" | no |
| app_connector_group_location | Location of the App Connector Group. | string | "San Jose, CA, USA" | no |
| app_connector_group_longitude | Longitude of the App Connector Group. | string | "-121.8863286" | no |
| app_connector_group_name | Name of the App Connector Group. | string | "AppConnectorGroup" | no |
| app_connector_group_override_version_profile | Whether the default version profile of the App Connector Group is applied or overridden. | bool | true | no |
| app_connector_group_upgrade_day | Day of the week on which App Connectors in this group attempt to update to a newer software version. | string | "SUNDAY" | no |
| app_connector_group_upgrade_time_in_secs | Time of day, in seconds, at which App Connectors in this group attempt to update to a newer software version. | string | "66600" | no |
| app_connector_group_version_profile_id | ID of the version profile. | string | "2" | no |
| application_server_enabled | Whether the Application Server is enabled or disabled. | bool | true | no |
| byo_app_connector_group | Bring your own App Connector Group. | bool | false | no |
| byo_app_connector_group_id | User-provided existing App Connector Group ID. | string | null | no |
| byo_app_connector_group_name | User-provided existing App Connector Group name. | string | null | no |
| byo_server_group | Bring your own Server Group. | bool | false | no |
| byo_server_group_id | User-provided existing Server Group ID. | string | null | no |
| byo_server_group_name | User-provided existing Server Group name. | string | null | no |
| cts_prefix | (Optional) Prefix applied to all objects created via Consul-Terraform-Sync. | string | "cts-" | no |
| server_group_description | Description of the Server Group. | string | "ServerGroup" | no |
| server_group_enabled | Whether the Server Group is enabled or disabled. | bool | true | no |
| server_group_name | Name of the Server Group. | string | "ServerGroup" | no |
| services | Consul services monitored by Consul NIA. | map(object({...})) | n/a | yes |

## Outputs

No outputs.
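The byo_* inputs let you attach the dynamically created Application Servers to an existing App Connector Group or Server Group instead of creating new ones. Below is a minimal sketch of how such a bring-your-own lookup is typically wired in Terraform; the argument names follow the zscaler/zpa provider, but this module's actual internal wiring may differ:

```hcl
# Sketch of the bring-your-own (BYO) pattern: look up an existing
# App Connector Group by name, or create a new one if BYO is disabled.
data "zpa_app_connector_group" "this" {
  count = var.byo_app_connector_group ? 1 : 0
  name  = var.byo_app_connector_group_name
}

resource "zpa_app_connector_group" "this" {
  count     = var.byo_app_connector_group ? 0 : 1
  name      = "${var.cts_prefix}${var.app_connector_group_name}"
  enabled   = var.app_connector_group_enabled
  latitude  = var.app_connector_group_latitude
  longitude = var.app_connector_group_longitude
  location  = var.app_connector_group_location
}

locals {
  # Resolve whichever ID applies: user-supplied, looked up by name, or newly created.
  app_connector_group_id = coalesce(
    var.byo_app_connector_group_id,
    try(data.zpa_app_connector_group.this[0].id, null),
    try(zpa_app_connector_group.this[0].id, null),
  )
}
```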
In order to use this module, you will need to install consul-terraform-sync, create a "task" with this Terraform module as the source within the task, and run consul-terraform-sync.
Using a "task", users subscribe to services in the Consul catalog and define the Terraform module to execute whenever there are updates to the subscribed services.
~> Note: It is recommended to keep the consul-terraform-sync configuration guide handy for reference.
- Download consul-terraform-sync on a node that is highly available (preferably, a node running a Consul client).
- Add consul-terraform-sync to the PATH on that node.
- Check the installation:

```sh
$ consul-terraform-sync --version
0.1.0
Compatible with Terraform ~>0.13.0
```

- Create a config file "tasks.hcl" for consul-terraform-sync. Please note that this is just an example:
```hcl
log_level = <log_level> # eg. "info"

driver "terraform" {
  log = true
  required_providers {
    zpa = {
      source = "zscaler/zpa"
    }
  }
}

consul {
  address = "<consul agent address>" # eg. "1.1.1.1:8500"
}

provider "zpa" {
  zpa_client_id     = "xxxxxxxxx"
  zpa_client_secret = "xxxxxxxxx"
  zpa_customer_id   = "123456789"
}

task {
  name        = <name of the task (has to be unique)> # eg. "Create_Application_Segment"
  description = <description of the task> # eg. "Application Segment based on service definition"
  source      = "zscaler/application-segment/zpa" # to be updated
  providers   = ["zpa"]
  condition "services" {
    names = ["<list of services you want to subscribe to>"] # eg. ["nginx", "web", "api"]
  }
  variable_files = ["<list of files that have user variables for this module (please input full path)>"] # eg. ["/opt/zpa-config/demo.tfvars"]
}
```
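The files listed in variable_files supply the user variables for this module. A hypothetical /opt/zpa-config/demo.tfvars using the inputs documented above (all values are illustrative):

```hcl
# Hypothetical demo.tfvars; all values are illustrative.
cts_prefix = "cts-"

# Create a new App Connector Group for the dynamic Application Servers.
byo_app_connector_group      = false
app_connector_group_name     = "AppConnectorGroup"
app_connector_group_location = "San Jose, CA, USA"

# Reuse an existing Server Group instead of creating one.
byo_server_group      = true
byo_server_group_name = "Example-Server-Group"
```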
- Start consul-terraform-sync:

```sh
$ consul-terraform-sync -config-file=tasks.hcl
```
consul-terraform-sync will create the right set of Application Servers in the ZPA Cloud based on the values in the Consul catalog.
consul-terraform-sync is now subscribed to the Consul catalog. Any updates to the services identified in the task will result in updates to the corresponding Application Servers in the ZPA Cloud.
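For example, registering a new instance of a subscribed service with the local Consul agent will trigger the task. A minimal, illustrative service definition (values are hypothetical):

```hcl
# web.hcl - illustrative Consul service definition.
# Register it with: consul services register web.hcl
service {
  name    = "web"
  port    = 80
  address = "10.0.31.155"
}
```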
~> Note: If you are interested in how consul-terraform-sync works, please refer to the following section.
There are two aspects of consul-terraform-sync:

- Updates from the Consul catalog: In the background, consul-terraform-sync creates a blocking API query session with the Consul agent identified in the config to receive updates from the Consul catalog. consul-terraform-sync receives an update for the services in the Consul catalog whenever any of the following service attributes are created, updated, or deleted (these updates include service creation and deletion as well):
- service id
- service name
- service address
- service port
- service meta
- service tags
- service namespace
- service health status
- node id
- node address
- node datacenter
- node tagged addresses
- node meta
- Managing the entire Terraform workflow: If a task is defined, one or more services are associated with the task, a provider is declared in the task, and a Terraform module is specified using the source field of the task, then the following sequence of events will occur:
- consul-terraform-sync will install the required version of Terraform.
- consul-terraform-sync will install the required version of the Terraform provider defined in the config file and declared in the "task".
- A new directory "nia-tasks" with a sub-directory corresponding to each "task" will be created. This is the reason for having strict guidelines around naming.
- Each sub-directory corresponds to a separate Terraform workspace.
- Within each sub-directory corresponding to a task, consul-terraform-sync will template a main.tf, variables.tf, terraform.tfvars, and terraform.tfvars.tmpl.
  - main.tf:
    - This file contains the declaration of the required Terraform and provider versions based on the task definition.
    - In addition, this file has the module declaration (identified by the 'source' field in the task) with its input variables.
    - Consul K/V is used as the backend state for this Terraform workspace.
Example generated main.tf:

```hcl
# This file is generated by Consul NIA.
#
# The HCL blocks, arguments, variables, and values are derived from the
# operator configuration for Consul NIA. Any manual changes to this file
# may not be preserved and could be clobbered by a subsequent update.

terraform {
  required_version = "~>0.13.0"
  required_providers {
    zpa = {
      source = "zscaler/zpa"
    }
  }
  backend "consul" {
    address = "1.1.1.1:8500"
    gzip    = true
    path    = "consul-nia/terraform"
  }
}

provider "zpa" {
  zpa_client_id     = var.zpa.zpa_client_id
  zpa_client_secret = var.zpa.zpa_client_secret
  zpa_customer_id   = var.zpa.zpa_customer_id
}

# Dynamic Application Segment based on service definition
module "Create_Application_Segment_on_ZPA" {
  source   = "zscaler/application-segment/zpa"
  services = var.services
}
```
- variables.tf:
  - This is the variables.tf file defined in the module.

Example generated variables.tf:

```hcl
variable "services" {
  description = "Consul services monitored by Consul NIA"
  type = map(
    object({
      id                    = string
      name                  = string
      address               = string
      port                  = number
      status                = string
      meta                  = map(string)
      tags                  = list(string)
      namespace             = string
      node                  = string
      node_id               = string
      node_address          = string
      node_datacenter       = string
      node_tagged_addresses = map(string)
      node_meta             = map(string)
    })
  )
}

variable "appsegment_prefix" {
  type        = string
  description = "(Optional) Prefix added to the dynamic application segment created by Consul"
  default     = ""
}

variable "appsegment_suffix" {
  type        = string
  description = "(Optional) Suffix added to the dynamic application segment created by Consul"
  default     = ""
}
```
- terraform.tfvars:
  - This is the most important file generated by consul-terraform-sync.
  - This variables file is generated with the most up-to-date values from the Consul catalog for all the services identified in the task.
  - consul-terraform-sync updates this file with the latest values whenever the corresponding service is updated in the Consul catalog.

Example terraform.tfvars:

```hcl
services = {
  "web.hpc152-nginx.sgio01" = {
    id              = "web"
    name            = "web"
    kind            = ""
    address         = "10.0.31.152"
    port            = 80
    meta            = {}
    tags            = []
    namespace       = ""
    status          = "passing"
    node            = "hpc152-nginx"
    node_id         = "517051df-974b-5765-9941-6399f2679106"
    node_address    = "10.0.31.152"
    node_datacenter = "dc01"
    node_tagged_addresses = {
      lan      = "10.0.31.152"
      lan_ipv4 = "10.0.31.152"
      wan      = "10.0.31.152"
      wan_ipv4 = "10.0.31.152"
    }
    node_meta = {
      consul-network-segment = ""
    }
    cts_user_defined_meta = {}
  },
  "web.hpc153-nginx.sgio01" = {
    id              = "web"
    name            = "web"
    kind            = ""
    address         = "10.0.31.153"
    port            = 80
    meta            = {}
    tags            = []
    namespace       = ""
    status          = "passing"
    node            = "hpc153-nginx"
    node_id         = "2504f4de-287a-0fe1-dd78-6c313ba0cb58"
    node_address    = "10.0.31.153"
    node_datacenter = "dc01"
    node_tagged_addresses = {
      lan      = "10.0.31.153"
      lan_ipv4 = "10.0.31.153"
      wan      = "10.0.31.153"
      wan_ipv4 = "10.0.31.153"
    }
    node_meta = {
      consul-network-segment = ""
    }
    cts_user_defined_meta = {}
  },
  "web.hpc154-nginx.sgio01" = {
    id              = "web"
    name            = "web"
    kind            = ""
    address         = "10.0.31.154"
    port            = 80
    meta            = {}
    tags            = []
    namespace       = ""
    status          = "passing"
    node            = "hpc154-nginx"
    node_id         = "8af55392-5756-2511-77e7-1b6f0627ff5f"
    node_address    = "10.0.31.154"
    node_datacenter = "dc01"
    node_tagged_addresses = {
      lan      = "10.0.31.154"
      lan_ipv4 = "10.0.31.154"
      wan      = "10.0.31.154"
      wan_ipv4 = "10.0.31.154"
    }
    node_meta = {
      consul-network-segment = ""
    }
    cts_user_defined_meta = {}
  },
}
```
* **Network Infrastructure Automation (NIA) compatible modules are built to utilize the above service variables**
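As an illustration, here is a minimal sketch (not this module's exact implementation) of how such a module can fan out over var.services:

```hcl
# Minimal sketch, assuming the zscaler/zpa provider: create one ZPA
# Application Server per entry in var.services. Each key is the unique
# "<service>.<node>.<datacenter>" identifier shown in terraform.tfvars above.
resource "zpa_application_server" "this" {
  for_each = var.services

  name        = "${var.cts_prefix}${each.key}"
  description = "Managed by Consul-Terraform-Sync"
  address     = each.value.address
  enabled     = var.application_server_enabled
}
```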
- consul-terraform-sync manages the entire Terraform workflow of plan, apply, and destroy for all the individual workspaces corresponding to the defined "tasks", based on updates to the services associated with those tasks.
In summary, consul-terraform-sync triggers a Terraform workflow (plan, apply, destroy) based on updates it detects in the Consul catalog.
These modules follow the release tagging recommended by Semantic Versioning. You can find each new release, along with its changelog, on the GitHub Releases page.
Copyright (c) 2022 Zscaler, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.