
After 1.11.7 to 1.12.10 upgrade, cannot edit CoreDNS configmap #7762

Closed
ChienHuey opened this issue Oct 9, 2019 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@ChienHuey
Contributor

1. What kops version are you running? The command kops version will display this information.

1.12.3

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

1.12.10

3. What cloud provider are you using?
AWS

4. What commands did you run? What is the simplest way to reproduce this issue?
kubectl edit configmap coredns -n kube-system
<edit the configmap: add the log plugin>
kubectl get configmap coredns -n kube-system -o yaml
(to confirm that the log plugin is in the config; do this quickly, because the config will revert in less than a minute)
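
For reference, a minimal sketch of the kind of edit being made. The Corefile below is an assumed, typical CoreDNS default for this Kubernetes version rather than a copy of this cluster's configmap; the only point is the added log line:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log            # <-- the line added by the edit, which keeps getting reverted
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }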

5. What happened after the commands executed?
The configmap reverts to the state it was in before the edit (i.e. the log plugin is removed from the configmap).

6. What did you expect to happen?
The configmap edits to remain in effect (i.e. the log plugin to remain in the configmap).

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2018-03-07T17:53:22Z
  generation: 3
  name: mycluster
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:DescribeAutoScalingInstances",
            "autoscaling:DescribeLaunchConfigurations",
            "autoscaling:DescribeTags",
            "autoscaling:SetDesiredCapacity",
            "autoscaling:TerminateInstanceInAutoScalingGroup"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Kube2IAM",
          "Effect": "Allow",
          "Action": [
            "sts:AssumeRole"
          ],
          "Resource": "*"
        }
      ]
  api:
    dns: {}
  authorization:
    alwaysAllow: {}
  channel: stable
  cloudLabels:
    Environment: PreProduction
    Platform: k8s
    Role: KubernetesCluster
    Team: EngineeringProductivity
    spot-enabled: "true"
  cloudProvider: aws
  configBase: s3://mys3bucket
  dnsZone: mycluster
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-us-east-1c
      name: c
    - instanceGroup: master-us-east-1d
      name: d
    - instanceGroup: master-us-east-1e
      name: e
    name: main
    provider: Manager
  - etcdMembers:
    - instanceGroup: master-us-east-1c
      name: c
    - instanceGroup: master-us-east-1d
      name: d
    - instanceGroup: master-us-east-1e
      name: e
    name: events
    provider: Manager
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    authorizationMode: RBAC
    oidcClientID: myid.apps.googleusercontent.com
    oidcGroupsClaim: groups
    oidcIssuerURL: https://accounts.google.com
    oidcUsernameClaim: email
    runtimeConfig:
      batch/v2alpha1: "true"
  kubeDNS:
    provider: CoreDNS
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
  kubernetesApiAccess:
  - someIPs
  kubernetesVersion: 1.12.10
  masterInternalName: api.mycluster
  masterPublicName: api.mycluster
  networkCIDR: 10.108.0.0/16
  networkID: vpc-myid
  networking:
    flannel:
      backend: vxlan
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - someIPs
  subnets:
  - cidr: 10.108.4.0/23
    id: subnet-my1
    name: us-east-1c
    type: Public
    zone: us-east-1c
  - cidr: 10.108.6.0/23
    id: subnet-my2
    name: us-east-1d
    type: Public
    zone: us-east-1d
  - cidr: 10.108.8.0/23
    id: subnet-my3
    name: us-east-1e
    type: Public
    zone: us-east-1e
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-03-07T17:53:22Z
  generation: 1
  labels:
    kops.k8s.io/cluster: mycluster
  name: master-us-east-1c
spec:
  additionalSecurityGroups:
  - sg-b49cdfc7
  - sg-0670e14f
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: r5.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-1c
  role: Master
  subnets:
  - us-east-1c

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-03-07T17:53:22Z
  generation: 1
  labels:
    kops.k8s.io/cluster: mycluster
  name: master-us-east-1d
spec:
  additionalSecurityGroups:
  - sg-b49cdfc7
  - sg-0670e14f
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: r5.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-1d
  role: Master
  subnets:
  - us-east-1d

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-03-07T17:53:22Z
  generation: 1
  labels:
    kops.k8s.io/cluster: mycluster
  name: master-us-east-1e
spec:
  additionalSecurityGroups:
  - mygroup1
  - mygroup2
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: r4.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-1e
  role: Master
  subnets:
  - us-east-1e

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-03-07T17:53:22Z
  labels:
    kops.k8s.io/cluster: mycluster
    node-role.kubernetes.io/node: workers
  name: nodes
spec:
  additionalSecurityGroups:
  - sg-mine
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: r5.4xlarge
  maxSize: 20
  minSize: 3
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  rootVolumeSize: 200
  rootVolumeType: gp2
  subnets:
  - us-east-1c
  - us-east-1d

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-03-11T17:18:18Z
  labels:
    kops.k8s.io/cluster: mycluster
  name: system-nodes
spec:
  additionalSecurityGroups:
  - sg-mine
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: m5a.large
  maxSize: 10
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
    node-role.kubernetes.io/node: system
  role: Node
  rootVolumeSize: 200
  rootVolumeType: gp2
  subnets:
  - us-east-1c
  - us-east-1d

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

9. Anything else we need to know?
The reverting behavior extends to the CoreDNS deployment and replicasets as well. If I edit or delete the deployment or replicaset, it is recreated, so there is no way to delete CoreDNS and redeploy it myself; the existing deployment always recreates itself.
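
One way to observe the recreation, as a sketch (these commands are illustrative, not taken from the original report):

kubectl -n kube-system delete deployment coredns
# watch the kube-system deployments; the coredns deployment reappears shortly afterwards
kubectl -n kube-system get deployments -w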

@olemarkus
Member

CoreDNS is managed by kops and will therefore always be overwritten by kops during upgrades.
The best way to make things like CoreDNS configurable is to add that functionality to the kops addon itself.
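
For context, kops applies its managed addons from manifests stored alongside the cluster configuration, so manual changes to those live objects can be overwritten when the addon is reapplied. One quick way to confirm an object is addon-managed is to inspect its labels; the label value in the comment below is an assumption about a typical kops CoreDNS addon, not output from this cluster:

kubectl -n kube-system get configmap coredns --show-labels
# a kops-managed addon object typically carries a label such as k8s-addon=coredns.addons.k8s.io,
# which the channels tool uses to identify and reapply the addon's manifest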

@gjtempleton
Member

I think you should be able to achieve what you want using the ExternalCoreFile option provided by #7376, albeit it's not as neat as editing in place, since you'll have to provide the full Corefile contents you want.
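
A minimal sketch of what that could look like under spec in the cluster manifest (the externalCoreFile field name follows the linked PR, and the Corefile contents are assumed rather than taken from this cluster):

  kubeDNS:
    provider: CoreDNS
    externalCoreFile: |
      .:53 {
          errors
          log
          health
          kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
          }
          prometheus :9153
          proxy . /etc/resolv.conf
          cache 30
          loop
          reload
          loadbalance
      }

After editing the spec, the change would be applied with the usual kops update cluster --yes flow, assuming the running kops version actually includes #7376.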

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 13, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 12, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
