After 1.11.7 to 1.12.10 upgrade, cannot edit CoreDNS configmap #7762
Comments
CoreDNS is managed by kops and will therefore always be overwritten by kops during upgrades.
I think you should be able to achieve what you want using the …
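If the suggestion above is about managing the CoreDNS configuration through the kops cluster spec rather than editing the configmap directly, a minimal sketch might look like the following. This assumes the kubeDNS.externalCoreFile field is available in the kops release in use (check the kops docs for your version); the Corefile body shown is purely illustrative.

  # Hypothetical excerpt from the Cluster spec (edited with "kops edit cluster").
  # Assumes kubeDNS.externalCoreFile is supported by this kops release.
  kubeDNS:
    provider: CoreDNS
    externalCoreFile: |
      .:53 {
          errors
          health
          log                      # plugin the reporter wants to enable
          kubernetes cluster.local. in-addr.arpa ip6.arpa {
              pods insecure
              fallthrough in-addr.arpa ip6.arpa
          }
          prometheus :9153
          proxy . /etc/resolv.conf
          cache 30
      }

Because kops owns the CoreDNS addon, a change made this way should survive, since it is applied through kops (kops update cluster) rather than behind its back.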
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
1. What kops version are you running? The command kops version will display this information.
1.12.3
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
1.12.10
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
kubectl edit configmap coredns -n kube-system
<edit configmap - add the log plug-in>
kubectl get configmap coredns -n kube-system -o yaml (to confirm that the config has the log plug-in there; do this fast because the config will revert in less than a minute)
kubectl get configmap coredns -n kube-system -o yaml
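For reference, the edit in the steps above amounts to adding the log directive to the Corefile stored in that configmap. A rough sketch of the edited object is shown below; apart from the added line, the Corefile contents are illustrative and will differ per cluster.

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        log          # line added via kubectl edit; this is what gets reverted
        kubernetes cluster.local. in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }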
5. What happened after the commands executed?
The configmap reverts to the state it was in before the edit (e.g. the log plugin is removed from the configmap).
6. What did you expect to happen?
The configmap edits remain in effect (e.g. the log plugin remains in the configmap).
7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2018-03-07T17:53:22Z
  generation: 3
  name: mycluster
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:DescribeAutoScalingInstances",
            "autoscaling:DescribeLaunchConfigurations",
            "autoscaling:DescribeTags",
            "autoscaling:SetDesiredCapacity",
            "autoscaling:TerminateInstanceInAutoScalingGroup"
          ],
          "Resource": ""
        },
        {
          "Sid": "Kube2IAM",
          "Effect": "Allow",
          "Action": [
            "sts:AssumeRole"
          ],
          "Resource": ""
        }
      ]
  api:
    dns: {}
  authorization:
    alwaysAllow: {}
  channel: stable
  cloudLabels:
    Environment: PreProduction
    Platform: k8s
    Role: KubernetesCluster
    Team: EngineeringProductivity
    spot-enabled: "true"
  cloudProvider: aws
  configBase: s3://mys3bucket
  dnsZone: mycluster
  etcdClusters:
  - etcdMembers:
    - name: c
    - name: d
    - name: e
    name: main
    provider: Manager
  - etcdMembers:
    - name: c
    - name: d
    - name: e
    name: events
    provider: Manager
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    authorizationMode: RBAC
    oidcClientID: myid.apps.googleusercontent.com
    oidcGroupsClaim: groups
    oidcIssuerURL: https://accounts.google.com
    oidcUsernameClaim: email
    runtimeConfig:
      batch/v2alpha1: "true"
  kubeDNS:
    provider: CoreDNS
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
  kubernetesApiAccess:
  kubernetesVersion: 1.12.10
  masterInternalName: api.mycluster
  masterPublicName: api.mycluster
  networkCIDR: 10.108.0.0/16
  networkID: vpc-myid
  networking:
    flannel:
      backend: vxlan
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  subnets:
  - id: subnet-my1
    name: us-east-1c
    type: Public
    zone: us-east-1c
  - id: subnet-my2
    name: us-east-1d
    type: Public
    zone: us-east-1d
  - id: subnet-my3
    name: us-east-1e
    type: Public
    zone: us-east-1e
  topology:
    dns:
      type: Public
    masters: public
    nodes: public
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-03-07T17:53:22Z
  generation: 1
  labels:
    kops.k8s.io/cluster: mycluster
  name: master-us-east-1c
spec:
  additionalSecurityGroups:
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: r5.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-1c
  role: Master
  subnets:
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-03-07T17:53:22Z
  generation: 1
  labels:
    kops.k8s.io/cluster: mycluster
  name: master-us-east-1d
spec:
  additionalSecurityGroups:
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: r5.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-1d
  role: Master
  subnets:
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-03-07T17:53:22Z
  generation: 1
  labels:
    kops.k8s.io/cluster: mycluster
  name: master-us-east-1e
spec:
  additionalSecurityGroups:
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: r4.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-1e
  role: Master
  subnets:
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-03-07T17:53:22Z
  labels:
    kops.k8s.io/cluster: mycluster
    node-role.kubernetes.io/node: workers
  name: nodes
spec:
  additionalSecurityGroups:
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: r5.4xlarge
  maxSize: 20
  minSize: 3
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  rootVolumeSize: 200
  rootVolumeType: gp2
  subnets:
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-03-11T17:18:18Z
  labels:
    kops.k8s.io/cluster: mycluster
  name: system-nodes
spec:
  additionalSecurityGroups:
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: m5a.large
  maxSize: 10
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
    node-role.kubernetes.io/node: system
  role: Node
  rootVolumeSize: 200
  rootVolumeType: gp2
  subnets:
8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else we need to know?
The reverting behavior also extends to the CoreDNS deployment and replicasets. If I try to edit or delete the deployment or replicaset, they will be recreated, so there is no way to even delete CoreDNS and redeploy it myself; the existing deployment will always recreate itself.
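A quick way to confirm that the reverts come from the kops addon machinery rather than something else in the cluster is to inspect the labels on the managed objects. kops-deployed addons carry addon labels along the lines of the sketch below; the exact label key and value are an assumption here and may differ between kops releases.

# Hypothetical metadata excerpt from the kops-managed CoreDNS deployment,
# as seen with: kubectl -n kube-system get deployment coredns -o yaml
# The label key/value shown is an assumption and may vary by kops release.
metadata:
  labels:
    k8s-addon: coredns.addons.k8s.io
  name: coredns
  namespace: kube-system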