
A simplified load balancer resource for AzureRM #6429

Closed

Conversation

buzztroll
Contributor

This commit adds a resource for the ARM load balancer. The ARM LB is quite complicated and has many different configuration options. Instead of trying to implement them all in a potentially confusing user experience, this resource exposes the most common uses: with it, a load balancer can be configured on a single public IP or subnet, routing to a single backend pool according to one load balancing rule and one probe.

This could be expanded to support more features of the LB, or live alongside a more sophisticated load balancer implementation as an easy-to-use option.
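For readers coming to this later, here is a minimal sketch of what such a flat schema could look like in the provider. The attribute names are taken from the example configuration posted further down this thread (including the later multi-probe/multi-rule extension); the code itself is an illustrative assumption, not the PR's actual implementation:

package azurerm

import "github.com/hashicorp/terraform/helper/schema"

// Illustrative flat schema for the simple LB resource: one frontend
// (public IP or subnet), one backend pool, and nested probe/rule blocks.
func resourceArmSimpleLb() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"name":                       {Type: schema.TypeString, Required: true, ForceNew: true},
			"location":                   {Type: schema.TypeString, Required: true, ForceNew: true},
			"resource_group_name":        {Type: schema.TypeString, Required: true, ForceNew: true},
			"frontend_public_ip_address": {Type: schema.TypeString, Optional: true, ForceNew: true},
			"frontend_allocation_method": {Type: schema.TypeString, Optional: true},
			"probe": {
				Type:     schema.TypeSet,
				Optional: true,
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"name":             {Type: schema.TypeString, Required: true},
						"protocol":         {Type: schema.TypeString, Required: true},
						"port":             {Type: schema.TypeInt, Required: true},
						"interval":         {Type: schema.TypeInt, Optional: true},
						"number_of_probes": {Type: schema.TypeInt, Optional: true},
					},
				},
			},
			// A "rule" set block (name, protocol, frontend_port,
			// backend_port, load_distribution, probe_name) would follow
			// the same pattern.
		},
	}
}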

@buzztroll
Contributor Author

Here is HCL that will put 2 VMs behind the load balancer and route ssh to them:
https://gist.github.com/buzztroll/5c92d312af5837f50618eef5b4c7e17b
I am closing #6335 in favor of this.
@aznashwan @phinze

@jen20
Contributor

jen20 commented Apr 30, 2016

Hi @buzztroll! I agree that the load balancer is likely to be overly complex for a single resource. In lieu of the nested configurations discussed in a previous ticket, I think this might be a good route toward getting something usable. Let me discuss with @phinze et al on Monday before merging; first impressions of the code are good, however! Thanks for your work here!

@buzztroll
Contributor Author

@andreimc and @iaingblack, you showed interest in a load balancer resource for Azure. Would this simple flat one meet your needs (or at least some of your needs)?

@iaingblack

I'll take it! For my use cases at least, a single subnet is fine. I'll give it a test as soon as I can. Reading the description, it says it can be configured "according to one load balancing rule and 1 probe". Does that mean only one rule can be created, or can there be many? Multiple rules to allow exposing both RDP and WinRM ports on a machine would be nice, but if not, I can open all ports to a single VM from a specific IP. Thanks for the quick implementation, Buzztroll!

@buzztroll
Contributor Author

With this implementation it is 1 rule, 1 probe, 1 frontend, 1 backend, 1 of everything. It wouldn't be hard to extend it to keep 1 of everything else but allow many rules. I have a work in progress for a more complete load balancer, but I ran into some bugs in the Azure API (the result of the Create does not match the result of a Get performed immediately after). If the community finds it better to have this simple implementation but with many rules, I can extend it to do that fairly quickly.

@iaingblack

Cool. If you can do many rules, that would be great. Having one rule to expose specific ports to the public/client and another rule to expose WinRM/RDP to the sysadmin would be the ideal implementation, at least from my perspective. The rest would be nice, but that would meet my requirements at least. As it is, this is already very encouraging. Thanks again for working on it, Buzztroll :)

@buzztroll force-pushed the azure/simple_load_balancer branch from 9f62375 to 51efe53 on May 9, 2016
@buzztroll
Contributor Author

buzztroll commented May 9, 2016

@iaingblack I patched it to take multiple probes and rules. Example HCL can be found here:
https://gist.github.com/buzztroll/7b91efdc70194ea5bdd8fdf6ce270097

I am good with whatever incantation is easiest for HashiCorp to merge. @jen20 do you have any preferences here?

@buzztroll force-pushed the azure/simple_load_balancer branch 2 times, most recently from 5e68896 to 1b3d9d9 on May 9, 2016
@buzztroll force-pushed the azure/simple_load_balancer branch from 1b3d9d9 to 1bab959 on May 16, 2016
@Bowbaq
Contributor

Bowbaq commented May 20, 2016

I've been trying to use this, and I'm running into problems with the VMs failing to create. Here's my configuration:

variable "name"        { default = "dcos" }
variable "location"    { default = "West US" }
variable "environment" { default = "DC/OS" }

resource "azurerm_resource_group" "dcos" {
  name     = "${var.name}"
  location = "${var.location}"

  tags {
    environment = "${var.environment}"
  }
}

resource "azurerm_virtual_network" "dcos" {
  name                = "${var.name}"
  resource_group_name = "${azurerm_resource_group.dcos.name}"
  address_space       = ["10.0.0.0/16"]
  location            = "${var.location}"

  tags {
    environment = "${var.environment}"
  }
}

resource "azurerm_subnet" "test" {
  name                 = "public-slave"
  resource_group_name  = "${azurerm_resource_group.dcos.name}"
  virtual_network_name = "${azurerm_virtual_network.dcos.name}"
  address_prefix       = "10.0.16.0/22"
}

resource "azurerm_availability_set" "test" {
  name                = "test"
  location            = "${var.location}"
  resource_group_name = "${azurerm_resource_group.dcos.name}"

  tags {
    environment = "${var.environment}"
  }
}

resource "azurerm_storage_account" "test" {
  name                = "8d40b744a9ebe520f861c8e0"
  resource_group_name = "${azurerm_resource_group.dcos.name}"
  location            = "${var.location}"
  account_type        = "Standard_LRS"

  tags {
    environment = "${var.environment}"
  }
}

resource "azurerm_storage_container" "test" {
  name                  = "${var.name}-test-${count.index}-vhds"
  resource_group_name   = "${azurerm_resource_group.dcos.name}"
  storage_account_name  = "${azurerm_storage_account.test.name}"
  container_access_type = "private"

  count = 2
}

resource "azurerm_network_interface" "test" {
  name                = "${var.name}-test-${count.index}-nic"
  location            = "${var.location}"
  resource_group_name = "${azurerm_resource_group.dcos.name}"

  ip_configuration {
    name      = "test-${count.index}"
    subnet_id = "${azurerm_subnet.test.id}"

    private_ip_address_allocation = "dynamic"
    load_balancer_backend_address_pools_ids = ["${azurerm_simple_lb.test.backend_pool_id}"]
  }

  count = 2
}


resource "azurerm_virtual_machine" "test" {
  name                  = "${var.name}-test-${count.index}"
  location              = "${var.location}"
  resource_group_name   = "${azurerm_resource_group.dcos.name}"
  network_interface_ids = ["${element(azurerm_network_interface.test.*.id, count.index)}"]
  availability_set_id   = "${azurerm_availability_set.test.id}"
  vm_size               = "Standard_A6"

  storage_image_reference {
    publisher = "openlogic"
    offer     = "CentOS"
    sku       = "7.2"
    version   = "latest"
  }

  storage_os_disk {
    name          = "os-disk"
    vhd_uri       = "${azurerm_storage_account.test.primary_blob_endpoint}${element(azurerm_storage_container.test.*.name, count.index)}/os-disk.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  os_profile {
    computer_name  = "${var.name}-test-${count.index}"
    admin_username = "centos"
    admin_password = "centos"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags {
    environment = "${var.environment}"
  }

  count = 2
}

resource "azurerm_public_ip" "test-lb" {
  name                         = "${var.name}-test-lb-ip"
  location                     = "${var.location}"
  resource_group_name          = "${azurerm_resource_group.dcos.name}"
  public_ip_address_allocation = "static"

  tags {
    environment = "${var.environment}"
  }
}

resource "azurerm_simple_lb" "test" {
  name                = "${var.name}-test"
  location            = "${var.location}"
  type                = "Microsoft.Network/loadBalancers"
  resource_group_name = "${azurerm_resource_group.dcos.name}"

  frontend_public_ip_address = "${azurerm_public_ip.test-lb.id}"
  frontend_allocation_method = "Dynamic"

  probe {
    name = "HTTP"
    protocol = "tcp"
    port = 80
    interval = 5
    number_of_probes = 2
  }
  rule {
    protocol          = "tcp"
    load_distribution = "Default"
    frontend_port     = 80
    backend_port      = 80
    name              = "HTTP"
    probe_name        = "HTTP"
  }

  probe {
    name = "HTTPS"
    protocol = "tcp"
    port = 443
    interval = 5
    number_of_probes = 2
  }
  rule {
    protocol          = "tcp"
    load_distribution = "Default"
    frontend_port     = 443
    backend_port      = 443
    name              = "HTTPS"
    probe_name        = "HTTPS"
  }
}

I'm getting the following error:

* azurerm_virtual_machine.test.1: Error waiting for Virtual Machine (dcos-test-1) to become available: unexpected state 'Failed', wanted target '[Succeeded]'
* azurerm_virtual_machine.test.0: Error waiting for Virtual Machine (dcos-test-0) to become available: unexpected state 'Failed', wanted target '[Succeeded]'

I've noticed that if I create the VMs first, then add the load balancer, things appear to work fine.

@buzztroll
Contributor Author

@Bowbaq Thanks for trying this out. The error messages reported in the azurerm provider are very limited; there is a patch out to help with that (#6445). For now, it would help if you could go into the Azure portal and look at the audit trail for the resource group. There should be errors reported there in some very unfriendly-looking JSON documents.

@buzztroll
Contributor Author

@Bowbaq I attempted to run your tf file and got the same errors. Sadly, there were no illuminating details in the Azure portal. However, when I commented out all of the references to the load balancer I still got the same errors, so I do not believe this is related to the LB.

@buzztroll
Contributor Author

buzztroll commented May 20, 2016

@Bowbaq I debugged your HCL a bit. Change the azurerm_virtual_machine storage_os_disk block like so:

  storage_os_disk {
    name          = "os-disk${count.index}"
    vhd_uri       = "${azurerm_storage_account.test.primary_blob_endpoint}${element(azurerm_storage_container.test.*.name, count.index)}/os-disk${count.index}.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

That worked for me; the vhd_uri needs to be unique across both VMs.

@buzztroll
Contributor Author

@jen20 @phinze there seems to be a bit of community interest in this. Would you guys mind giving it a second look?

@Bowbaq
Contributor

Bowbaq commented May 23, 2016

@buzztroll I'm still pretty new to Azure; wouldn't the separate storage container generate a different URI?

A review comment was left on this diff hunk:

    }
    ruleSet.Add(r)
    }
    d.Set("subnet", ruleSet)

I think this should be d.Set("rule", ruleSet)
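For context, a hedged reconstruction of the surrounding read-path code; apart from the Add and Set calls quoted in the hunk above, every name here is an assumption for illustration (rules are treated as plain strings, while the real code flattens each rule into a map with its own hash function):

package azurerm

import "github.com/hashicorp/terraform/helper/schema"

// Sketch only: build a schema.Set from the flattened rules and store it
// under the attribute that actually holds them.
func setLbRules(d *schema.ResourceData, rules []interface{}) error {
	ruleSet := &schema.Set{F: schema.HashString}
	for _, r := range rules {
		ruleSet.Add(r)
	}
	// The key passed to d.Set must match the schema attribute being
	// populated, hence "rule" rather than "subnet".
	return d.Set("rule", ruleSet)
}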

@Bowbaq
Contributor

Bowbaq commented May 23, 2016

@buzztroll I'm running into some issues updating an existing load balancer. Specifically, changing probes/rules seems to result in a 400 Bad Request every time. I think the problem is in the two-step create-then-update process: if a probe gets removed, then during the first step the existing rules end up pointing to a now-removed probe, and the request fails.

@buzztroll
Contributor Author

@Bowbaq thanks for the review! I fixed the things you caught.

When you remove a probe, do you also remove all of the rules referencing it? I think Azure requires that.

@Bowbaq
Contributor

Bowbaq commented May 23, 2016

So what happened is that I had a 1-to-1 probe/rule setup. Then I modified the rules to use the same probe and removed the now-useless extra probes. That's when I hit the problem above.

@buzztroll
Contributor Author

Ok, I see how that would cause a problem. The way the ARM LB API works, this has to happen in two phases or the initial create will not have the right IDs. However, I should be able to factor out the update. Thanks.
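To make the two-phase flow concrete, here is a small self-contained sketch; every name and type in it is a stand-in for illustration, since the real code drives the ARM SDK rather than these helpers:

package main

import "fmt"

type probe struct{ Name string }
type rule struct{ Name, ProbeName string }

// Phase 1: create the load balancer with only its frontend IP
// configuration and backend pool, so Azure assigns the sub-resource IDs
// that rules and probes must reference.
func createBase(lbName string) (frontendID, backendID string) {
	base := "/loadBalancers/" + lbName
	return base + "/frontendIPConfigurations/default", base + "/backendAddressPools/default"
}

// Phase 2: update the same LB with probes and rules referencing the
// phase-1 IDs. On later updates, any rule pointing at a probe that is
// being removed must be dropped in the same request, or ARM rejects the
// update with a 400 Bad Request.
func attachRulesAndProbes(frontendID, backendID string, probes []probe, rules []rule) {
	for _, r := range rules {
		fmt.Printf("rule %s: frontend=%s backend=%s probe=%s\n",
			r.Name, frontendID, backendID, r.ProbeName)
	}
}

func main() {
	f, b := createBase("simple")
	attachRulesAndProbes(f, b, []probe{{Name: "HTTP"}}, []rule{{Name: "HTTP", ProbeName: "HTTP"}})
}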

@buzztroll force-pushed the azure/simple_load_balancer branch from aa14ddd to 0660a30 on July 12, 2016
@pmcatominey
Contributor

Hi @jen20 @stack72,

I think this PR is in good shape and fixes a rather large hole in load balancer support; could this be picked up for review again?

Cheers
Peter

@pmcatominey
Contributor

We've actually noticed a drawback to this approach compared to #6467, which has the nested resources separated: we're unable to iterate through a set of objects or pass in a set of probes or rules to this resource.

Having separate resources would allow us to cut down on repetition in our code.

All up for discussion of course :).

@buzztroll
Contributor Author

Agreed, but there are other complications with that approach. For example, a load balancer is invalid without at least one frontend, which causes complications with updates and deletes.

@darinhoward

I've been testing this out; it works well for a simple web load balancer. I would like to see it get merged in.

@straubt1

I agree with @darinhoward, I have been using this for several months without issue. It would be great to see it merged into the main line and released.

How can we get movement on this?

@TStraub-rms
Contributor

@buzztroll Are there any plans to get this merged? Or is this PR dead?

On a side note, it is interesting that Azure Network Peering makes it into the codebase while still in preview, yet a simple load balancer is having so much trouble. Are there any steps I may take to assist in getting this moving?

@pmcatominey
Contributor

Virtual Network Peering is a very simple resource, whereas load balancers are highly configurable and can be used in many different ways.

Careful consideration is needed to develop a solution which meets as many needs as possible.


@TStraub-rms
Contributor

TStraub-rms commented Sep 29, 2016

@pmcatominey No doubt on the complexity differences.

Would it not be better to have an LB that satisfies at least some of the basic use cases rather than nothing at all? Currently we require basic LBs in Azure and are forced to run a custom build of Terraform to work with the product. This is obviously not ideal.

@buzztroll
Contributor Author

I would love to see it merged. Please let me know what I need to do to make that happen.


@mitchellh
Contributor

Hi everyone,

This has been a hot topic for us in Slack this week. Among the folks who worked on the Azure provider, there is unanimous agreement that:

  1. Load balancer support is desperately needed.
  2. This specific approach in this PR is not the right approach.

@jen20 in particular is heading up the conversation with a few companies that really want this feature as well as the proper Microsoft connections about some issues and questions we have. The solution is that we're prioritizing writing an Azure LB using individual separated resources rather than one big one.

As for this PR, we're going to close it because it isn't best for Terraform to support something like this long term, since it doesn't map directly to how the API is structured. We agree that the LB is complex right now, and we're going to work to simplify it and document that.

What we recommend is that the creators of this PR (or the community) extract this into a third-party provider. We're going to start building a web page linked directly from terraform.io to third-party providers so they can get some visibility instead of just hopeful Googling. I'd love for this to be a part of that.

I'll follow up with @jen20 to provide any guidance here and pointers to the new work being done. Sorry for not merging this, it always feels really bad to just "reject" work that has been done, but please understand that merging this means we can effectively never un-merge it. The Terraform team would be stuck maintaining this resource for a long, long time, and we don't think this is the approach we want to take with the work we have planned.

Terraform's easy-to-use plugin model was made for exactly these scenarios, and is very stable (we dogfood it ourselves!). Please use it to your advantage here, and stay tuned for more work on this.

@mitchellh closed this Sep 30, 2016
@colemickens

colemickens commented Sep 30, 2016

@mitchellh @jen20 I don't know who you're in contact with, but I've been watching this and wrote the LB integration for Kubernetes/Azure. I know where some of the warts are and I know people to contact about them. Please reach out if I can be of any assistance.

@mitchellh
Contributor

@colemickens Will do, I'm wrangling a bit to make sure we get our communication right for this and to see if I can get any timelines for you. I'll make sure James sees your comment as well.

@jen20
Contributor

jen20 commented Oct 3, 2016

We've started working on this in #9199 - please refer to that PR for future updates on Azure load balancing! Thanks @colemickens and all who have contributed so far.

@buzztroll
Contributor Author

@mitchellh: Thank you for your comments; they make sense. I am happy that the community found this useful, and I was thrilled with how easy it was to make a provider plugin for Terraform. I will watch #9199. I learned some tough lessons trying to tease the Azure LB into separate resources, so hopefully I can be helpful there. If there is further interest in the simple LB, I will break it out into a third-party contribution.

@ghost

ghost commented Apr 21, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited conversation to collaborators Apr 21, 2020