This repository has been archived by the owner on Sep 30, 2020. It is now read-only.

The default IamFleetRole changed on AWS-side? #1022

Closed
mumoshu opened this issue Nov 22, 2017 · 6 comments

Comments

@mumoshu
Contributor

mumoshu commented Nov 22, 2017

kube-aws: v0.9.9-rc.3, but any version would be affected

I've just added a spot-fleet based node pool to my cluster and it failed like this:

[screenshot: CloudFormation error from the failed spot-fleet node pool creation]

The value for the IamFleetRole property was:

"IamFleetRole":{"Fn::Join":["",["arn:aws:iam::",{"Ref":"AWS::AccountId"},":role/aws-ec2-spot-fleet-role"]]}

which had certainly worked before.

As far as I remember, aws-ec2-spot-fleet-role was taken from the "Request Spot Instances" on AWS console because it was the default role at that time.
However, interestingly the default shown today is aws-ec2-spot-fleet-tagging-role.
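If the default has indeed changed, the fix being discussed would amount to swapping the role name inside the generated `Fn::Join` (a sketch, assuming the new default role name shown in the console):

```json
"IamFleetRole": {
  "Fn::Join": ["", [
    "arn:aws:iam::",
    { "Ref": "AWS::AccountId" },
    ":role/aws-ec2-spot-fleet-tagging-role"
  ]]
}
```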

[screenshot: "Request Spot Instances" console showing aws-ec2-spot-fleet-tagging-role as the default role]

Can we just change the default to the new role?
Does your AWS account show the new role as the default one once you've browsed to the "Request Spot Instances" pane?

mumoshu added a commit to mumoshu/kube-aws that referenced this issue Nov 22, 2017
@cknowles
Contributor

I checked our accounts and it's still aws-ec2-spot-fleet-role there. Can't find aws-ec2-spot-fleet-tagging-role anywhere. Probably we just need to include our own role with the same permissions.

@mumoshu
Contributor Author

mumoshu commented Nov 24, 2017

@c-knowles Would you mind reviewing your IAM roles after navigating to the second page that follows the "Request Spot Instances" button in your AWS console?
The reason I'm asking is that the new role seems to have the CreateTags permission, which is probably missing from the old role. So whenever you navigate there today, I suppose the new default role should get created.

[screenshot: IAM console showing the new role's policy, including the CreateTags permission]
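For reference, the AWS managed policy attached to the new role (AmazonEC2SpotFleetTaggingRole) is what carries the tagging permission. An abridged sketch of the relevant statement, not the verbatim policy document:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RequestSpotInstances",
        "ec2:TerminateInstances",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}
```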

@mumoshu
Contributor Author

mumoshu commented Nov 24, 2017

Update: docs.aws.amazon.com seems to include the new role in all the examples I've reviewed today.

Also relevant: hashicorp/terraform-provider-aws#1232

I'm not saying we should completely drop support for, or comments about, the old role.
At the least, we should suggest in the cluster.yaml comments that users navigate to the spot fleet request page once more to create the new role (if that is indeed how it gets created).

@cknowles
Contributor

@mumoshu yes, it has aws-ec2-spot-fleet-tagging-role there now that I've clicked that, so this does assume the user has taken the manual step of requesting a spot fleet prior to deploying. Since it's only a managed policy and not a managed role or a service-linked role, it seems like we could just create this role ourselves and remove that potential manual step?

@mumoshu
Contributor Author

mumoshu commented Nov 27, 2017

@c-knowles Thanks for confirming!

it seems like we could just create this role ourselves and remove that potential manual step?

I had initially thought I'd prefer not to create it ourselves, to keep down the number of IAM roles created by kube-aws.

However, I'm now convinced we should do it - manual steps should be eliminated wherever possible :trollface:

I'll open another issue for that.

@cknowles
Contributor

@mumoshu yeah, that's all I mean - if it's not a managed role then we're forced to include it 👍
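Since the thread concludes by having kube-aws create the role itself, a minimal CloudFormation sketch of such a role follows (the resource name is illustrative; the `spotfleet.amazonaws.com` service principal and the AmazonEC2SpotFleetTaggingRole managed policy ARN are the standard ones for spot fleet):

```json
"SpotFleetRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": "spotfleet.amazonaws.com" },
        "Action": "sts:AssumeRole"
      }]
    },
    "ManagedPolicyArns": [
      "arn:aws:iam::aws:policy/service-role/AmazonEC2SpotFleetTaggingRole"
    ]
  }
}
```

The `IamFleetRole` property of the spot fleet request config could then reference this resource with `Fn::GetAtt` on its `Arn`, instead of joining a hardcoded role name.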

camilb added a commit to camilb/kube-aws that referenced this issue Dec 1, 2017
* kubernetes-incubator/master:
  Add rkt container cleanup to journald-cloudwatch-logs service
  Support EC2 instance tags per node role This feature will be handy when e.g. your monitoring tools discovers EC2 instances and then groups resource metrics with EC2 instance tags.
  Fix the default FleetIamRole Closes kubernetes-retired#1022
davidmccormick pushed a commit to HotelsDotCom/kube-aws that referenced this issue Mar 21, 2018
…avour-0.9.9 to hcom-flavour

* commit '0e116d72ead70121c730d3bc4009f8d562e16912': (24 commits)
  RUN-788 Add kubectl run parameters
  Allow toggling Metrics Server installation
  Correct values for the `kubernetes.io/cluster/<Cluster ID>` tags Resolves kubernetes-retired#1025
  Fix dashboard doco links
  Fix install-kube-system when node drainer is enabled Follow-up for kubernetes-retired#1043
  Two fixes to 0.9.9 rc.3 (kubernetes-retired#1043)
  Update the documentation for Kubernetes Dashboard.
  Improve the configuration for Kubernetes Dashboard.
  Fix the creation of all metrics-server resources.
  Use templated image for metrics-server.
  Follow-ups for Kubernetes 1.8
  Metrics Server addon. (kubernetes-retired#973)
  Quick start and high availability guides
  Add rkt container cleanup to journald-cloudwatch-logs service
  Update Tiller image to v2.7.2
  Update kube-dns 1.14.7
  Bump Cluster Autoscaler version to 1.0.3
  Bump Kubernetes and ETCD version.
  Support EC2 instance tags per node role This feature will be handy when e.g. your monitoring tools discovers EC2 instances and then groups resource metrics with EC2 instance tags.
  Fix the default FleetIamRole Closes kubernetes-retired#1022
  ...
kylehodgetts pushed a commit to HotelsDotCom/kube-aws that referenced this issue Mar 27, 2018