
add an environment variable to limit the number of ENIs #251

Merged: 4 commits merged into aws:master on Jan 22, 2019

Conversation

@pdbogen (Contributor) commented on Nov 30, 2018

Issue #, if available:

Description of changes:

This adds an environment variable to limit the number of ENIs assigned to a node. For the time being we're interested in only using one ENI per node, and this setting makes it possible to ensure that this is the case.
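
For illustration, a minimal sketch of how such a cap might be applied, assuming the variable is exposed as MAX_ENI and only ever tightens the instance-type limit; the names below are illustrative, not the exact code in ipamd:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// getMaxENI returns the effective ENI limit for a node: the instance-type
// limit, optionally lowered by the MAX_ENI environment variable. An unset,
// non-numeric, or non-positive value leaves the instance-type limit alone.
func getMaxENI(instanceLimit int) int {
	override, err := strconv.Atoi(os.Getenv("MAX_ENI"))
	if err != nil || override <= 0 {
		return instanceLimit
	}
	if override < instanceLimit {
		return override // the environment variable only tightens the limit
	}
	return instanceLimit
}

func main() {
	// With MAX_ENI=1 this prints 1; unset or invalid, it prints 3.
	fmt.Println(getMaxENI(3))
}
```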

(I also add a Make target to run tests in the same docker container used for building, but, you know, take it or leave it.)

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@mogren (Contributor) left a comment


Just curious, what is the main reason to only use one ENI per node?

Review comments on ipamd/ipamd.go, ipamd/ipamd_test.go, and Makefile (resolved).
@pdbogen (Contributor, Author) commented on Nov 30, 2018

We're running EKS clusters in VPCs with public IP addresses, but they need to communicate with peered VPCs and other things; basically, networking only works properly when we run on the primary ENI. Otherwise, pods assigned to the secondary ENIs aren't able to communicate properly.

I.e., we're not running EKS in one of the two modes that the CNI seems to support, but it works for primary ENIs with EXTERNALSNAT=true.

However, I'm wondering if #233 might have something to do with my troubles, so before I go farther down this path, I'm going to experiment with that.

@liwenwu-amazon (Contributor) commented:

@pdbogen Have you tried setting WARM_ENI_TARGET=0?

@pdbogen (Contributor, Author) commented on Jan 1, 2019

I think I'd have to set both WARM_ENI_TARGET and WARM_IP_TARGET to 0, otherwise when the CNI has maxed out the first ENI, WARM_IP_TARGET will result in configuring a second ENI. I'd like to avoid that situation, but still be able to keep warm IPs around.
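
As a rough model of that interaction (illustrative only, not the actual ipamd logic):

```go
package main

import "fmt"

// needAnotherENI models the concern above: with WARM_ENI_TARGET=0 but
// WARM_IP_TARGET>0, once the first ENI's addresses are used up, the warm-IP
// shortfall alone calls for attaching another ENI unless an explicit ENI cap
// stops it.
func needAnotherENI(usedIPs, attachedENIs, ipsPerENI, warmIPTarget, maxENIs int) bool {
	capacity := attachedENIs * ipsPerENI
	if usedIPs+warmIPTarget <= capacity {
		return false // enough free addresses to satisfy the warm pool
	}
	return maxENIs <= 0 || attachedENIs < maxENIs
}

func main() {
	// One ENI with 10 addresses, 9 in use, WARM_IP_TARGET=2, no ENI cap:
	fmt.Println(needAnotherENI(9, 1, 10, 2, 0)) // true  -> a second ENI gets attached
	// Same situation with the cap set to 1 ENI:
	fmt.Println(needAnotherENI(9, 1, 10, 2, 1)) // false -> stay on the primary ENI
}
```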

@mattlandis (Contributor) left a comment


This is looking good overall. Just the one comment.

Review comment on README.md (resolved).
@mattlandis (Contributor) left a comment


Sorry for not catching this sooner, but what is the behavior if we set the env on an instance type that isn't in the map of max ENIs? I think the current behavior results in a value of 0. It would be useful to fall back to the configured value when the instance type is not found; that would allow someone to set the ENI limit for a new instance type that hasn't been added to the map yet.

@mogren What do you think?
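
In code, the suggested fallback might look roughly like this; the names and map entries are hypothetical, and this is a sketch of the suggestion rather than what the PR implements:

```go
package main

import "fmt"

// eniLimitByInstanceType stands in for the static per-instance-type ENI map;
// the entries below are just examples.
var eniLimitByInstanceType = map[string]int{
	"m5.large":  3,
	"m5.xlarge": 4,
}

// maxENIsFor sketches the suggested fallback: when the instance type is not
// in the map, trust the operator-supplied limit instead of returning 0.
func maxENIsFor(instanceType string, envMaxENI int) (int, error) {
	limit, known := eniLimitByInstanceType[instanceType]
	switch {
	case known && envMaxENI > 0 && envMaxENI < limit:
		return envMaxENI, nil
	case known:
		return limit, nil
	case envMaxENI > 0:
		return envMaxENI, nil // unknown instance type: use the configured value
	default:
		return 0, fmt.Errorf("no ENI limit known for instance type %q", instanceType)
	}
}

func main() {
	fmt.Println(maxENIsFor("m5.large", 1))       // 1 <nil>
	fmt.Println(maxENIsFor("brand-new.type", 2)) // 2 <nil> -> fallback in action
	fmt.Println(maxENIsFor("brand-new.type", 0)) // 0, error
}
```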

@mogren (Contributor) commented on Jan 22, 2019

Hmm, good idea, but that would require an override of GetENIipLimit as well to get the total max IP limit. This current change seems less invasive.

That said, we definitely need a better way to support new instance types. Another approach we have discussed is adding support for a config file that can override the defaults and add new instance types.

@mattlandis (Contributor) commented:

Good point. I think this is good as is. Using the variable as an override when nothing is in the map can be a separate PR.

@mattlandis mattlandis merged commit 2cb11cb into aws:master Jan 22, 2019
@mogren mogren added this to the v1.4 milestone Mar 5, 2019