Automatically add/remove master nodes from plan based on deployment size #635

Closed · arjanelolong2022 opened this issue May 3, 2023 · 2 comments
Labels: bug (Something isn't working), story_points:3

Comments

arjanelolong2022 commented May 3, 2023

Resource definition

{    
    version: '8.6.1',
    deploymentTemplateId: 'aws-storage-optimized-v3',
    elasticsearch: {
      autoscale: 'true',
      topologies: [
        {
          id: 'hot_content',
          size: '8g',
          zoneCount: 2,
          autoscaling: {
            maxSize: '15g',
            maxSizeResource: 'memory'
          },
        },
        {
          id: 'warm',
          size: '0g',
          zoneCount: 1
        }
      ],
    }
}

One tie-breaker master node was automatically provisioned.

Expected Behavior

Changes should still be applied even without a master node definition in the plan.

Current Behavior

On apply, the errors below are thrown:

* api error: clusters.cluster_invalid_plan: Cluster must contain at least a master topology element and a data topology element. 'master' node type is missing,'master' node type exists in more than one topology element (resources.elasticsearch[0].cluster_topology)
* api error: clusters.cluster_invalid_plan: Instance configuration [aws.es.master.c5d] does not allow usage of node types [data,ingest]. You must either change instance configuration or use only allowed node types [master]. (resources.elasticsearch[0].cluster_topology[4].instance_configuration_id)

We are also unable to define/add a master node, since we have fewer than 6 data nodes.
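For reference, adding a dedicated master would mean appending a topology element like the one below to the topologies list in the resource definition above. This is only a sketch: the 'master' id, size, and zone count are illustrative assumptions, not values confirmed by our setup.

        {
          // Hypothetical sketch: a dedicated master topology element added to
          // the 'topologies' list above. The id, size, and zoneCount values
          // are assumptions for illustration only.
          id: 'master',
          size: '1g',
          zoneCount: 3
        }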

Context

After provisioning our deployment, we are no longer able to apply any updates to it.

Your Environment

We provision via Pulumi.

  • Version used: v0.5.1

Might be related

#343

pascal-hofmann (Contributor) commented

I tested with v0.7.0 and different node counts, and found the following:

  1. Starting from 6 nodes, you need to specify at least one master via terraform.
  2. Below 6 nodes, you must not specify any masters via terraform (a sketch of both cases follows this list).
  3. When switching from 5 nodes to 6 nodes with a master, terraform apply will fail (a subsequent plan will show no drift related to this change, though):
     When applying changes to module.deployment_1.ec_deployment.this, provider "provider[\"registry.terraform.io/elastic/ec\"]" produced an unexpected new value: .elasticsearch.hot.node_roles: planned set element cty.StringVal("master") does not correlate with any element in actual.
  4. Switching back from 6 nodes and a master to only 5 nodes is not possible at all via terraform (it does work via the UI, though):
     clusters.cluster_invalid_plan: Cluster must contain at least a master topology element and a data topology element. 'master' node type is missing,'master' node type exists in more than one topology element (resources.elasticsearch[0].cluster_topology)
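To make points 1 and 2 concrete, here is a minimal sketch of the two cases in the same configuration shape as the original report. The topology ids, sizes, and zone counts are assumptions for illustration, not values taken from the tests above.

// Sketch only; ids, sizes, and zone counts are illustrative assumptions.

// Fewer than 6 data nodes: do not declare a master topology element;
// the tie-breaker master is provisioned automatically.
elasticsearch: {
  topologies: [
    { id: 'hot_content', size: '8g', zoneCount: 2 }
  ]
}

// 6 data nodes or more: an explicit master topology element is required,
// otherwise the plan is rejected with cluster_invalid_plan.
elasticsearch: {
  topologies: [
    { id: 'hot_content', size: '128g', zoneCount: 3 },  // assumed to amount to 6+ data nodes
    { id: 'master', size: '1g', zoneCount: 3 }          // assumed id for the dedicated masters
  ]
}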

@tobio tobio changed the title Master node type is missing, 'master' node type exists in more than one topology element. Automatically add/remove master nodes from plan based on deployment size Aug 23, 2023
@Kushmaro Kushmaro modified the milestones: 0.9.0, 0.10.0 Sep 5, 2023
dimuon (Contributor) commented Feb 12, 2024

Just from a quick look at the issue: I think we need to change the node_roles plan modifier to fix item 3 from that list.

@Kushmaro Kushmaro modified the milestones: 0.10.0, 0.11.0 Apr 4, 2024
@gigerdo gigerdo closed this as completed Jun 19, 2024
@gigerdo gigerdo self-assigned this Jun 19, 2024