v1.4.0-alpha.1

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | 11a199208c5164a291c1767a1b9e64e45fdea747 | 334f349daf9268d8ac091d7fcc8e4626

Changelog since v1.3.0

Experimental Features

  • An alpha implementation of the TLS bootstrap API described in docs/proposals/kubelet-tls-bootstrap.md. (#25562, @gtank)

Action Required

  • [kubelet] Allow opting out of automatic cloud provider detection in kubelet. By default kubelet will auto-detect cloud providers (#28258, @vishh)
  • If you use one of the kube-dns replication controller manifests in cluster/saltbase/salt/kube-dns, i.e. cluster/saltbase/salt/kube-dns/{skydns-rc.yaml.base,skydns-rc.yaml.in}, either substitute one of __PILLAR__FEDERATIONS__DOMAIN__MAP__ or {{ pillar['federations_domain_map'] }} with the corresponding federation-name-to-domain-name value, or remove them if you do not support cluster federation at this time. If you plan to substitute the parameter with its value, here is an example for {{ pillar['federations_domain_map'] }}; a substitution sketch follows this list. (#28132, @madhusudancs)
    • pillar['federations_domain_map'] = "- --federations=myfederation=federation.test"
    • where myfederation is the name of the federation and federation.test is the domain name registered for the federation.
  • Proportionally scale paused and rolling deployments (#20273, @kargakis)
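For illustration, a minimal substitution sketch for the placeholder described above (the manifest path, federation name, and domain name are placeholders; adapt to your setup):

```sh
# Hypothetical example: replace the federations-domain-map placeholder in the
# kube-dns manifest with a concrete --federations flag value.
sed -e 's|__PILLAR__FEDERATIONS__DOMAIN__MAP__|- --federations=myfederation=federation.test|' \
  cluster/saltbase/salt/kube-dns/skydns-rc.yaml.base > skydns-rc.yaml

# If you do not use cluster federation, delete the placeholder line instead of substituting it.
```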

Other notable changes

v1.3.0

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | 88249c443d438666928379aa7fe865b389ed72ea | 9270f001aef8c03ff5db63456ca9eecc

Highlights

  • Authorization:
    • Alpha RBAC authorization API group
  • Federation
    • federation api group is now beta
    • Services from all federated clusters are now registered in Cloud DNS (AWS and GCP).
  • Stateful Apps:
    • alpha PetSets manage stateful apps
    • alpha Init containers provide one-time setup for stateful containers
  • Updating:
    • Retry Pod/RC updates in kubectl rolling-update.
    • Stop 'kubectl drain' deleting pods with local storage.
    • Add kubectl rollout status
  • Security/Auth
    • L7 LB controller and disk attach controllers run on master, so nodes do not need those privileges.
    • Setting TLS1.2 minimum
    • kubectl create secret tls command
    • Webhook Token Authenticator
    • beta PodSecurityPolicy objects limit use of security-sensitive features by pods.
  • Kubectl
    • Display line number on JSON errors
    • Add flag -t as shorthand for --tty
  • Resources
    • Improved node stability by optionally evicting pods upon memory pressure - Design Doc
    • alpha: NVIDIA GPU support (#24836, @therc)
    • Added LoadBalancer and NodePort services to the quota system

Known Issues and Important Steps before Upgrading

The following versions of Docker Engine are supported: v1.10 and v1.11. Although v1.9 is still compatible, we recommend upgrading to one of the supported versions. Earlier versions of Docker are not supported.

ThirdPartyResource

If you use ThirdPartyResource objects, they have moved from being namespace-scoped to being cluster-scoped. Before upgrading to 1.3.0, export and delete any existing ThirdPartyResource objects using a 1.2.x client:

kubectl get thirdpartyresource --all-namespaces -o yaml > tprs.yaml
kubectl delete -f tprs.yaml

After upgrading to 1.3.0, re-register the third party resource objects at the root scope (using a 1.3 server and client):

kubectl create -f tprs.yaml

kubectl

The kubectl --container-port flag is deprecated and will be removed in a future release; use --target-port instead.
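For example, a hedged sketch of exposing a resource with the replacement flag (the resource, ports, and service name here are placeholders):

```sh
# Prefer --target-port over the deprecated --container-port.
kubectl expose rc nginx --port=80 --target-port=8080 --name=nginx-service
```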

kubernetes Core Known Issues

  • Kube Proxy crashes infrequently due to a docker bug (#24000)
    • This issue can be resolved by restarting the Docker daemon
  • CORS works only in insecure mode (#24086)
  • Persistent volume claims get added incorrectly after being deleted under stress. Happens very infrequently. (#26082)

Docker runtime Known Issues

  • Kernel crash with Aufs storage driver on Debian Jessie (#27885)

  • File descriptors are leaked in docker v1.11 (#275)

  • Additional memory overhead per container in docker v1.11 (#21737)

  • List of upstream fixes for docker v1.10 identified by RedHat

Rkt runtime Known Issues

  • A detailed list of known issues can be found here

More Instructions coming soon

Provider-specific Notes

  • AWS
    • Support for ap-northeast-2 region (Seoul)
    • Allow cross-region image pulling with ECR
    • More reliable kube-up/kube-down
    • Enable ICMP Type 3 Code 4 for ELBs
    • ARP caching fix
    • Use /dev/xvdXX names
    • ELB:
      • ELB proxy protocol support
      • mixed plaintext/encrypted ports support in ELBs
      • SSL support for ELB listeners
    • Allow VPC CIDR to be specified (experimental)
    • Fix problems with >2 security groups
  • GCP:
    • Enable using gcr.io as a Docker registry mirror.
    • Make bigger master root disks in GCE for large clusters.
    • Change default clusterCIDRs from /16 to /14 allowing 1000 Node clusters by default.
    • Allow Debian Jessie on GCE.
    • Node problem detector addon pod detects and reports kernel deadlocks.
  • OpenStack
    • Provider added.
  • VSphere:
    • Provider updated.

Previous Releases Included in v1.3.0

v1.3.0-beta.3

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | 9d18964a294f356bfdc841957dcad8ff35ed909c | ee5fcdf86645135ed132663967876dd6

Changelog since v1.3.0-beta.2

Action Required

  • [kubelet] Allow opting out of automatic cloud provider detection in kubelet. By default kubelet will auto-detect cloud providers (#28258, @vishh)
  • If you use one of the kube-dns replication controller manifests in cluster/saltbase/salt/kube-dns, i.e. cluster/saltbase/salt/kube-dns/{skydns-rc.yaml.base,skydns-rc.yaml.in}, either substitute one of __PILLAR__FEDERATIONS__DOMAIN__MAP__ or {{ pillar['federations_domain_map'] }} with the corresponding federation-name-to-domain-name value, or remove them if you do not support cluster federation at this time. If you plan to substitute the parameter with its value, here is an example for {{ pillar['federations_domain_map'] }} (#28132, @madhusudancs)
    • pillar['federations_domain_map'] = "- --federations=myfederation=federation.test"
    • where myfederation is the name of the federation and federation.test is the domain name registered for the federation.
  • federation: Upgrading the groupversion to v1beta1 (#28186, @nikhiljindal)
  • Set Dashboard UI version to v1.1.0 (#27869, @bryk)

Other notable changes

  • Build: Add KUBE_GCS_RELEASE_BUCKET_MIRROR option to push-ci-build.sh (#28172, @zmerlynn)
  • Image GC logic should compensate for reserved blocks (#27996, @ronnielai)
  • Bump minimum API version for docker to 1.21 (#27208, @yujuhong)
  • Adding lock files for kubeconfig updating (#28034, @krousey)
  • federation service controller: fixing the logic to update DNS records (#27999, @quinton-hoole)
  • federation: Updating KubeDNS to try finding a local service first for federation query (#27708, @nikhiljindal)
  • Support journal logs in fluentd-gcp on GCI (#27981, @a-robinson)
  • Copy and display source location prominently on Kubernetes instances (#27985, @maisem)
  • Federation e2e support for AWS (#27791, @colhom)
  • Copy and display source location prominently on Kubernetes instances (#27840, @zmerlynn)
  • AWS/GCE: Spread PetSet volume creation across zones, create GCE volumes in non-master zones (#27553, @justinsb)
  • GCE provider: Create TargetPool with 200 instances, then update with rest (#27829, @zmerlynn)
  • Add sources to server tarballs. (#27830, @david-mcmahon)
  • Retry Pod/RC updates in kubectl rolling-update (#27509, @janetkuo)
  • AWS kube-up: Authorize route53 in the IAM policy (#27794, @justinsb)
  • Allow conformance tests to run on non-GCE providers (#26932, @aaronlevy)
  • AWS kube-up: move to Docker 1.11.2 (#27676, @justinsb)
  • Fixed an issue that Deployment may be scaled down further than allowed by maxUnavailable when minReadySeconds is set. (#27728, @janetkuo)

v1.2.5

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | ddf12d7f37dfef25308798d71ad547761d0785ac | 69d770df8fa4eceb57167e34df3962ca

Changes since v1.2.4

Other notable changes

  • Retry Pod/RC updates in kubectl rolling-update (#27509, @janetkuo)
  • GCE provider: Create TargetPool with 200 instances, then update with rest (#27865, @zmerlynn)
  • GCE provider: Limit Filter calls to regexps rather than large blobs (#27741, @zmerlynn)
  • Fix strategic merge diff list diff bug (#26418, @AdoHe)
  • AWS: Fix long-standing bug in stringSetToPointers (#26331, @therc)
  • AWS kube-up: Increase timeout waiting for docker start (#25405, @justinsb)
  • Fix hyperkube flag parsing (#25512, @colhom)
  • kubectl rolling-update support for same image (#24645, @jlowdermilk)
  • Return "410 Gone" errors via watch stream when using watch cache (#25369, @liggitt)

v1.3.0-beta.2

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | 9c95762970b943d6c6547f0841c1e5471148b0e3 | dc9e8560f24459b2313317b15910bee7

Changes since v1.3.0-beta.1

Experimental Features

  • Init containers enable pod authors to perform tasks before their normal containers start. Each init container is started in order, and failing containers will prevent the application from starting. (#23666, @smarterclayton)

Other notable changes

  • GCE provider: Limit Filter calls to regexps rather than large blobs (#27741, @zmerlynn)
  • Show LASTSEEN, the sorting key, as the first column in kubectl get event output (#27549, @therc)
  • GCI: fix kubectl permission issue #27643 (#27740, @andyzheng0831)
  • Add federation api and cm servers to hyperkube (#27586, @colhom)
  • federation: Creating kubeconfig files to be used for creating secrets for clusters on aws and gke (#27332, @nikhiljindal)
  • AWS: Enable ICMP Type 3 Code 4 for ELBs (#27677, @justinsb)
  • Bumped Heapster to v1.1.0. (#27542, @piosz)
  • Deleting federation-push.sh (#27400, @nikhiljindal)
  • Validate-cluster finishes shortly after at most ALLOWED_NOTREADY_NODE… (#26778, @gmarek)
  • AWS kube-down: Issue warning if VPC not found (#27518, @justinsb)
  • gce/kube-down: Parallelize IGM deletion, batch more (#27302, @zmerlynn)
  • Enable dynamic allocation of heapster/eventer cpu request/limit (#27185, @gmarek)
  • 'kubectl describe pv' now shows events (#27431, @jsafrane)
  • AWS kube-up: set net.ipv4.neigh.default.gc_thresh1=0 to avoid ARP over-caching (#27682, @justinsb)
  • AWS volumes: Use /dev/xvdXX names with EC2 (#27628, @justinsb)
  • Add a test config variable to specify desired Docker version to run on GCI. (#26813, @wonderfly)
  • Check for thin_is binary in path for devicemapper when using ThinPoolWatcher and fix uint64 overflow issue for CPU stats (#27591, @dchen1107)
  • Change default value of deleting-pods-burst to 1 (#27606, @gmarek)
  • MESOS: fix race condition in contrib/mesos/pkg/queue/delay (#24916, @jdef)
  • including federation binaries in the list of images we push during release (#27396, @nikhiljindal)
  • fix updatePod() of RS and RC controllers (#27415, @caesarxuchao)
  • Change default value of deleting-pods-burst to 1 (#27422, @gmarek)
  • A new volume manager was introduced in kubelet that synchronizes volume mount/unmount (and attach/detach, if attach/detach controller is not enabled). (#26801, @saad-ali)
    • This eliminates the race conditions between the pod creation loop and the orphaned volumes loops. It also removes the unmount/detach from the syncPod() path so volume clean up never blocks the syncPod loop.

v1.3.0-beta.1

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | 2b54995ee8f52d78dc31c3d7291e8dfa5c809fe7 | f1022a84c3441cae4ebe1d295470be8f

Changes since v1.3.0-alpha.5

Action Required

  • Fixing logic to generate ExternalHost in genericapiserver (#26796, @nikhiljindal)
  • federation: Updating federation-controller-manager to use secret to get federation-apiserver's kubeconfig (#26819, @nikhiljindal)

Other notable changes

  • federation: fix dns provider initialization issues (#27252, @mfanjie)
  • Updating federation up scripts to work in non e2e setup (#27260, @nikhiljindal)
  • version bump for gci to milestone 53 (#27210, @adityakali)
  • kubectl apply: retry applying a patch if a version conflict error is encountered (#26557, @AdoHe)
  • Revert "Wait for arc.getArchive() to complete before running tests" (#27130, @pwittrock)
  • ResourceQuota BestEffort scope aligned with Pod level QoS (#26969, @derekwaynecarr)
  • The AWS cloudprovider will cache results from DescribeInstances() if the set of nodes hasn't changed (#26900, @therc)
  • GCE provider: Log full contents of long operations (#26962, @zmerlynn)
  • Fix system container detection in kubelet on systemd. (#26586, @derekwaynecarr)
    • This fix prevents environments where CPU and Memory Accounting were not enabled on the unit that launched the kubelet or docker from reporting the root cgroup when monitoring usage stats for those components.
  • New default horizontalpodautoscaler/v1 generator for kubectl autoscale. (#26775, @piosz)
    • Use autoscaling/v1 in kubectl by default.
  • federation: Adding dnsprovider flags to federation-controller-manager (#27158, @nikhiljindal)
  • federation service controller: fixing a bug so that existing services are created in newly registered clusters (#27028, @mfanjie)
  • Rename environment variables (KUBE_)ENABLE_NODE_AUTOSCALER to (KUBE_)ENABLE_CLUSTER_AUTOSCALER. (#27117, @mwielgus)
  • support for mounting local-ssds on GCI (#27143, @adityakali)
  • AWS: support mixed plaintext/encrypted ports in ELBs via service.beta.kubernetes.io/aws-load-balancer-ssl-ports annotation (#26976, @therc)
  • Updating e2e docs with instructions on running federation tests (#27072, @colhom)
  • LBaaS v2 Support for Openstack Cloud Provider Plugin (#25987, @dagnello)
  • GCI: add support for network plugin (#27027, @andyzheng0831)
  • Bump cAdvisor to v0.23.3 (#27065, @timstclair)
  • Stop 'kubectl drain' deleting pods with local storage. (#26667, @mml)
  • Networking e2es: Wait for all nodes to be schedulable in kubeproxy and networking tests (#27008, @zmerlynn)
  • change clientset of service controller to versioned (#26694, @mfanjie)
  • Use gcr.io as a Docker registry mirror when setting up a cluster in GCE. (#25841, @ojarjur)
  • correction on rbd volume object and defaults (#25490, @rootfs)
  • Bump GCE debian image to container-v1-3-v20160604 (#26851, @zmerlynn)
  • Option to enable http2 on client connections. (#25280, @timothysc)
  • kubectl get ingress output remove rules (#26684, @AdoHe)
  • AWS kube-up: Remove SecurityContextDeny admission controller (to mirror GCE) (#25381, @zquestz)
  • Fix third party (#25894, @brendandburns)
  • AWS Route53 dnsprovider (#26049, @quinton-hoole)
  • GCI/Trusty: support the Docker registry mirror (#26745, @andyzheng0831)
  • Kubernetes v1.3 introduces a new Attach/Detach Controller. This controller manages attaching and detaching of volumes on-behalf of nodes. (#26351, @saad-ali)
    • This ensures that attachment and detachment of volumes is independent of any single node's availability. That is, if a node or kubelet becomes unavailable for any reason, the volumes attached to that node will be detached so they are free to be attached to other nodes.
    • Specifically the new controller watches the API server for scheduled pods. It processes each pod and ensures that any volumes that implement the volume Attacher interface are attached to the node their pod is scheduled to.
    • When a pod is deleted, the controller waits for the volume to be safely unmounted by kubelet. It does this by waiting for the volume to no longer be present in the node's Node.Status.VolumesInUse list. If the volume is not safely unmounted by kubelet within a pre-configured duration (3 minutes in Kubernetes v1.3), the controller unilaterally detaches the volume (this prevents volumes from getting stranded on nodes that become unavailable).
    • In order to remain backwards compatible, the new controller only manages attach/detach of volumes that are scheduled to nodes that opt in to controller management. Nodes running v1.3 or higher of Kubernetes opt in to controller management by default by setting the "volumes.kubernetes.io/controller-managed-attach-detach" annotation on the Node object on startup. This behavior is gated by a new kubelet flag, "enable-controller-attach-detach" (default true); a sketch follows this list.
    • In order to safely upgrade an existing Kubernetes cluster without interruption of volume attach/detach logic:
      • First upgrade the master to Kubernetes v1.3.
        • This will start the new attach/detach controller.
        • The new controller will initially ignore volumes for all nodes since they lack the "volumes.kubernetes.io/controller-managed-attach-detach" annotation.
      • Then upgrade nodes to Kubernetes v1.3.
        • As nodes are upgraded, they will automatically, by default, opt-in to attach/detach controller management, which will cause the controller to start managing attaches/detaches for volumes that get scheduled to those nodes.
  • Added DNS Reverse Record logic for service IPs (#26226, @ArtfulCoder)
  • read gluster log to surface glusterfs plugin errors properly in describe events (#24808, @screeley44)
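As noted in the attach/detach controller item above, nodes opt in to controller management by default; a minimal sketch of opting a node out via the kubelet flag (all other kubelet flags omitted):

```sh
# With this flag set to false, the kubelet keeps handling attach/detach itself,
# as in pre-1.3 releases; the default is true (controller-managed).
kubelet --enable-controller-attach-detach=false
```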

v1.3.0-alpha.5

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | 724bf5a4437ca9dc75d9297382f47a179e8dc5a6 | 2a8b4a5297df3007fce69f1e344fd87e

Changes since v1.3.0-alpha.4

Action Required

Other notable changes

  • Fix a bug with pluralization of third party resources (#25374, @brendandburns)
  • Run l7 controller on master (#26048, @bprashanth)
  • AWS: ELB proxy protocol support via annotation service.beta.kubernetes.io/aws-load-balancer-proxy-protocol (#24569, @williamsandrew)
  • kubectl run --restart=Never creates pods (#25253, @soltysh)
  • Add LabelSelector to PersistentVolumeClaimSpec (#25917, @pmorie)
  • Removed metrics api group (#26073, @piosz)
  • Fixed check in kubectl autoscale: cpu consumption can be higher than 100%. (#26162, @jszczepkowski)
  • Add support for 3rd party objects to kubectl label (#24882, @brendandburns)
  • Move shell completion generation into 'kubectl completion' command (#23801, @sttts)
  • Fix strategic merge diff list diff bug (#26418, @AdoHe)
  • Setting TLS1.2 minimum because TLS1.0 and TLS1.1 are vulnerable (#26169, @victorgp)
  • Kubelet: Periodically reporting image pulling progress in log (#26145, @Random-Liu)
  • Federation service controller is one key component of the federation controller manager; it watches federation services, creates/updates services in all registered clusters, and updates DNS records on the global DNS server. (#26034, @mfanjie)
  • Stabilize map order in kubectl describe (#26046, @timoreimann)
  • Google Cloud DNS dnsprovider - replacement for #25389 (#26020, @quinton-hoole)
  • Fix system container detection in kubelet on systemd. (#25982, @derekwaynecarr)
    • This fix prevents environments where CPU and Memory Accounting were not enabled on the unit that launched the kubelet or docker from reporting the root cgroup when monitoring usage stats for those components.
  • Added pods-per-core to kubelet. #25762 (#25813, @rrati)
  • promote sourceRange into service spec (#25826, @freehan)
  • kube-controller-manager: Add configure-cloud-routes option (#25614, @justinsb)
  • kubelet: reading cloudinfo from cadvisor (#21373, @enoodle)
  • Disable cAdvisor event storage by default (#24771, @timstclair)
  • Remove docker-multinode (#26031, @luxas)
  • nodecontroller: Fix log message on successful update (#26207, @zmerlynn)
  • remove deprecated generated typed clients (#26336, @caesarxuchao)
  • Kubenet host-port support through iptables (#25604, @freehan)
  • Add metrics support for a GCE PD, EC2 EBS & Azure File volumes (#25852, @vishh)
  • Bump cAdvisor to v0.23.2 - See changelog for details (#25914, @timstclair)
  • Alpha version of "Role Based Access Control" API. (#25634, @ericchiang)
  • Add Seccomp API (#25324, @jfrazelle)
  • AWS: Fix long-standing bug in stringSetToPointers (#26331, @therc)
  • Add dnsmasq as a DNS cache in kube-dns pod (#26114, @ArtfulCoder)
  • routecontroller: Add wait.NonSlidingUntil, use it (#26301, @zmerlynn)
  • Attempt 2: Bump GCE containerVM to container-v1-3-v20160517 (Docker 1.11.1) again. (#26001, @dchen1107)
  • Downward API implementation for resources limits and requests (#24179, @aveshagarwal)
  • GCE clusters start using GCI as the default OS image for masters (#26197, @wonderfly)
  • Add a 'kubectl clusterinfo dump' option (#20672, @brendandburns)
  • Fixing heapster memory requirements. (#26109, @Q-Lee)
  • Handle federated service name lookups in kube-dns. (#25727, @madhusudancs)
  • Support sort-by timestamp in kubectl get (#25600, @janetkuo)
  • vSphere Volume Plugin Implementation (#24947, @abithap)
  • ResourceQuota controller uses rate limiter to prevent hot-loops in error situations (#25748, @derekwaynecarr)
  • Fix hyperkube flag parsing (#25512, @colhom)
  • Add a kubectl create secret tls command (#24719, @bprashanth)
  • Introduce a new add-on pod NodeProblemDetector. (#25986, @Random-Liu)
    • NodeProblemDetector is a DaemonSet running on each node, monitoring node health and reporting node problems as NodeCondition and Event. Currently it already supports kernel log monitoring, and will support more problem detection in the future. It is enabled by default on GCE now.
  • Handle cAdvisor partial failures (#25933, @timstclair)
  • Use SkyDNS as a library for a more integrated kube DNS (#23930, @ArtfulCoder)
  • Introduce node memory pressure condition to scheduler (#25531, @ingvagabund)
  • Fix detection of docker cgroup on RHEL (#25907, @ncdc)
  • Kubelet evicts pods when available memory falls below configured eviction thresholds (#25772, @derekwaynecarr)
  • Use protobufs by default to communicate with apiserver (still store JSONs in etcd) (#25738, @wojtek-t)
  • Implement NetworkPolicy v1beta1 API object / client support. (#25638, @caseydavenport)
  • Only expose top N images in NodeStatus (#25328, @resouer)
  • Extend secrets volumes with path control (#25285, @ingvagabund)
  • With this PR, kubectl and other RestClient's using the AuthProvider framework can make OIDC authenticated requests, and, if there is a refresh token present, the tokens will be refreshed as needed. (#25270, @bobbyrullo)
  • Make addon-manager cross-platform and use it with hyperkube (#25631, @luxas)
  • kubelet: Optionally, have kubelet exit if lock file contention is observed, using --exit-on-lock-contention flag (#25596, @derekparker)
  • Bump up glbc version to 0.6.2 (#25446, @bprashanth)
  • Add "kubectl set image" for easier updating container images (for pods or resources with pod templates). (#25509, @janetkuo)
  • NodeController doesn't evict Pods if no Nodes are Ready (#25571, @gmarek)
  • Incompatible change of kube-up.sh: when turning on the cluster autoscaler by setting KUBE_ENABLE_NODE_AUTOSCALER=true, KUBE_AUTOSCALER_MIN_NODES and KUBE_AUTOSCALER_MAX_NODES also need to be set; a sketch follows this list. (#25734, @jszczepkowski)
  • systemd node spec proposal (#17688, @derekwaynecarr)
  • Bump GCE ContainerVM to container-v1-3-v20160517 (Docker 1.11.1) (#25843, @zmerlynn)
  • AWS: Move enforcement of attached AWS device limit from kubelet to scheduler (#23254, @jsafrane)
  • Refactor persistent volume controller (#24331, @jsafrane)
  • Add support for running GCI on the GCE cloud provider (#25425, @andyzheng0831)
  • Implement taints and tolerations (#24134, @kevin-wangzefeng)
  • Add init containers to pods (#23567, @smarterclayton)
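A sketch of the kube-up.sh invocation described in the incompatible-change note above (node counts are placeholders; the enable variable was later renamed to KUBE_ENABLE_CLUSTER_AUTOSCALER, see #27117):

```sh
# Enabling the cluster autoscaler now also requires explicit min/max node counts.
KUBE_ENABLE_NODE_AUTOSCALER=true \
KUBE_AUTOSCALER_MIN_NODES=3 \
KUBE_AUTOSCALER_MAX_NODES=10 \
  ./cluster/kube-up.sh
```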

v1.3.0-alpha.4

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | 758e97e7e50153840379ecd9f8fda1869543539f | 4e18ae6a428c99fcc30e2137d7c41854

Changes since v1.3.0-alpha.3

Action Required

Other notable changes

  • Fix hyperkube's layer caching, and remove --make-symlinks at build time (#25693, @luxas)
  • AWS: More support for ap-northeast-2 region (#24464, @matthewrudy)
  • Make bigger master root disks in GCE for large clusters (#25670, @gmarek)
  • AWS kube-down: don't fail if ELB not in VPC - #23784 (#23785, @ajohnstone)
  • Build hyperkube in hack/local-up-cluster instead of separate binaries (#25627, @luxas)
  • enable recursive processing in kubectl rollout (#25110, @metral)
  • Support struct,array,slice types when sorting kubectl output (#25022, @zhouhaibing089)
  • federated api servers: Adding a discovery summarizer server (#20358, @nikhiljindal)
  • AWS: Allow cross-region image pulling with ECR (#24369, @therc)
  • Automatically add node labels beta.kubernetes.io/{os,arch} (#23684, @luxas)
  • kubectl "rm" will suggest using "delete"; "ps" and "list" will suggest "get". (#25181, @janetkuo)
  • Add IPv6 address support for pods - does NOT include services (#23090, @tgraf)
  • Use local disk for ConfigMap volume instead of tmpfs (#25306, @pmorie)
  • Alpha support for scheduling pods on machines with NVIDIA GPUs whose kubelets use the --experimental-nvidia-gpus flag, using the alpha.kubernetes.io/nvidia-gpu resource (#24836, @therc)
  • AWS: SSL support for ELB listeners through annotations (#23495, @therc)
  • Implement kubectl rollout status that can be used to watch a deployment's rollout status (#19946, @janetkuo)
  • Webhook Token Authenticator (#24902, @cjcullen)
  • Update PodSecurityPolicy types and add admission controller that could enforce them (#24600, @pweil-)
  • Introducing ScheduledJobs as described in the proposal as part of batch/v2alpha1 version (experimental feature). (#24970, @soltysh)
  • kubectl now supports validation of nested objects with different ApiGroups (e.g. objects in a List) (#25172, @pwittrock)
  • Change default clusterCIDRs from /16 to /14 in GCE configs allowing 1000 Node clusters by default. (#25350, @gmarek)
  • Add 'kubectl set' (#25444, @janetkuo)
  • vSphere Cloud Provider Implementation (#24703, @dagnello)
  • Added JobTemplate, a preliminary step for ScheduledJob and Workflow (#21675, @soltysh)
  • Openstack provider (#21737, @zreigz)
  • AWS kube-up: Allow VPC CIDR to be specified (experimental) (#23362, @miguelfrde)
  • Return "410 Gone" errors via watch stream when using watch cache (#25369, @liggitt)
  • GKE provider: Add cluster-ipv4-cidr and arbitrary flags (#25437, @zmerlynn)
  • AWS kube-up: Increase timeout waiting for docker start (#25405, @justinsb)
  • Sort resources in quota errors to avoid duplicate events (#25161, @derekwaynecarr)
  • Display line number on JSON errors (#25038, @mfojtik)
  • If the cluster node count exceeds the GCE TargetPool maximum (currently 1000), randomly select which nodes are members of Kubernetes External Load Balancers. (#25178, @zmerlynn)
  • Clarify supported version skew between masters, nodes, and clients (#25087, @ihmccreery)
  • Move godeps to vendor/ (#24242, @thockin)
  • Introduce events flag for describers (#24554, @ingvagabund)
  • run kube-addon-manager in a static pod (#23600, @mikedanese)
  • Reimplement 'pause' in C - smaller footprint all around (#23009, @uluyol)
  • Add subPath to mount a child dir or file of a volumeMount (#22575, @MikaelCluseau)
  • Handle image digests in node status and image GC (#25088, @ncdc)
  • PLEG: reinspect pods that failed prior inspections (#25077, @ncdc)
  • Fix kubectl create secret/configmap to allow = values (#24989, @derekwaynecarr)
  • Upgrade installed packages when building hyperkube to improve the security profile (#25114, @aaronlevy)
  • GCI/Trusty: Support ABAC authorization (#24950, @andyzheng0831)
  • fix cinder volume dir umount issue #24717 (#24718, @chengyli)
  • Inter pod topological affinity and anti-affinity implementation (#22985, @kevin-wangzefeng)
  • start etcd compactor in background (#25010, @hongchaodeng)
  • GCI: Add two GCI specific metadata pairs (#25105, @andyzheng0831)
  • Ensure status is not changed during an update of PV, PVC, HPA objects (#24924, @mqliang)
  • GCE: Prefer preconfigured node tags for firewalls, if available (#25148, @a-robinson)
  • kubectl rolling-update support for same image (#24645, @jlowdermilk)
  • Add an entry to the salt config to allow Debian jessie on GCE. As with the existing Wheezy image on GCE, docker is expected to already be installed in the image. (#25123, @jlewi)
  • Mark kube-push.sh as broken (#25095, @ihmccreery)
  • AWS: Add support for ap-northeast-2 region (Seoul) (#24457, @leokhoa)
  • GCI: Update the command to get the image (#24987, @andyzheng0831)
  • Port-forward: use out and error streams instead of glog (#17030, @csrwng)
  • Promote Pod Hostname & Subdomain to fields (were annotations) (#24362, @ArtfulCoder)
  • Validate deletion timestamp doesn't change on update (#24839, @liggitt)
  • Add flag -t as shorthand for --tty (#24365, @janetkuo)
  • Add support for running clusters on GCI (#24893, @andyzheng0831)
  • Switch to ABAC authorization from AllowAll (#24210, @cjcullen)
  • Fix DeletingLoadBalancer event generation. (#24833, @a-robinson)

v1.2.4

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | f3aea83f8f0e16b2b41998a2edc09eb42fd8d945 | ab0aca3a20e8eba43c8ff9d672793618

Changes since v1.2.3

Other notable changes

v1.3.0-alpha.3

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | 01e0dc68653173614dc99f44875173478f837b38 | ae22c35f3a963743d21daa17683e0288

Changes since v1.3.0-alpha.2

Action Required

  • Updating go-restful to generate "type":"object" instead of "type":"any" in swagger-spec (breaks kubectl 1.1) (#22897, @nikhiljindal)
  • Make watch cache treat resourceVersion consistent with uncached watch (#24008, @liggitt)

Other notable changes

v1.2.3

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | b2ce4e0c72562d09ba06e3c0913f0bd78da0285e | 69e75650de30d5a52d144799e94a168d

Changes since v1.2.2

Action Required

  • Make watch cache treat resourceVersion consistent with uncached watch (#24008, @liggitt)

Other notable changes

v1.3.0-alpha.2

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | 305c8c2af7e99d463dbbe4208ecfe2b50585e796 | aadb8d729d855e69212008f8fda628c0

Changes since v1.3.0-alpha.1

Other notable changes

v1.2.2

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | 8dede5833a1986434adea80749624f81a0db7bb4 | 72a5389f22827fb5133fdc3b7bfb9b3a

Changes since v1.2.1

Other notable changes

v1.2.1

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | 1639807c5788e1c6b1ab51fd30b723fb5debd865 | 235a1da47972c96a560d718d3256ca4f

Changes since v1.2.0

Other notable changes

v1.3.0-alpha.1

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | e0041b08e220a4704ea2ad90a6ec7c8f2120c2d3 | 7bb2df32aea94678f72a8d1f43a12098

Changes since v1.2.0

Action Required

  • Disabling swagger ui by default on apiserver. Adding a flag that can enable it (#23025, @nikhiljindal)
  • restore ability to run against secured etcd (#21535, @AdoHe)

Other notable changes

v1.2.0

Documentation & Examples

Downloads

binary | sha1 hash | md5 hash
------ | --------- | --------
kubernetes.tar.gz | 52dd998e1191f464f581a9b87017d70ce0b058d9 | c0ce9e6150e9d7a19455db82f3318b4c

Changes since v1.1.1

Major Themes

  • Significant scale improvements. Increased cluster scale by 400% to 1000 nodes with 30,000 pods per cluster. Kubelet supports 100 pods per node with 4x reduced system overhead.
  • Simplified application deployment and management.
    • Dynamic Configuration (ConfigMap API in the core API group) enables application configuration to be stored as a Kubernetes API object and pulled dynamically on container startup, as an alternative to baking in command-line flags when a container is built.
    • Turnkey Deployments (Deployment API (Beta) in the Extensions API group) automate deployment and rolling updates of applications, specified declaratively. It handles versioning, multiple simultaneous rollouts, aggregating status across all pods, maintaining application availability, and rollback.
  • Automated cluster management:
    • Kubernetes clusters can now span zones within a cloud provider. Pods from a service will be automatically spread across zones, enabling applications to tolerate zone failure.
    • Simplified way to run a container on every node (DaemonSet API (Beta) in the Extensions API group): Kubernetes can schedule a service (such as a logging agent) that runs one, and only one, pod per node.
    • TLS and L7 support (Ingress API (Beta) in the Extensions API group): Kubernetes is now easier to integrate into custom networking environments by supporting TLS for secure communication and L7 http-based traffic routing.
    • Graceful Node Shutdown (aka drain) - The new “kubectl drain” command gracefully evicts pods from nodes in preparation for disruptive operations like kernel upgrades or maintenance.
    • Custom Metrics for Autoscaling (HorizontalPodAutoscaler API in the Autoscaling API group): The Horizontal Pod Autoscaling feature now supports custom metrics (Alpha), allowing you to specify application-level metrics and thresholds to trigger scaling up and down the number of pods in your application.
  • New GUI (dashboard) allows you to get started quickly and enables the same functionality found in the CLI as a more approachable and discoverable way of interacting with the system. Note: the GUI is enabled by default in 1.2 clusters.

Dashboard UI screenshot showing cards that represent applications that run inside a cluster

Other notable improvements

  • Job was Beta in 1.1 and is GA in 1.2.
    • apiVersion: batch/v1 is now available. You now do not need to specify the .spec.selector field — a unique selector is automatically generated for you.
    • The previous version, apiVersion: extensions/v1beta1, is still supported. Even if you roll back to 1.1, the objects created using the new apiVersion will still be accessible, using the old version. You can continue to use your existing JSON and YAML files until you are ready to switch to batch/v1. We may remove support for Jobs with apiVersion: extensions/v1beta1 in 1.3 or 1.4.
  • HorizontalPodAutoscaler was Beta in 1.1 and is GA in 1.2.
    • apiVersion: autoscaling/v1 is now available. Changes in this version are:
      • Field CPUUtilization which was a nested structure CPUTargetUtilization in HorizontalPodAutoscalerSpec was replaced by TargetCPUUtilizationPercentage which is an integer.
      • ScaleRef of type SubresourceReference in HorizontalPodAutoscalerSpec which referred to scale subresource of the resource being scaled was replaced by ScaleTargetRef which points just to the resource being scaled.
      • In extensions/v1beta1 if CPUUtilization in HorizontalPodAutoscalerSpec was not specified it was set to 80 by default while in autoscaling/v1 HPA object without TargetCPUUtilizationPercentage specified is a valid object. Pod autoscaler controller will apply a default scaling policy in this case which is equivalent to the previous one but may change in the future.
    • The previous version, apiVersion: extensions/v1beta1, is still supported. Even if you roll back to 1.1, the objects created using the new apiVersions will still be accessible, using the old version. You can continue to use your existing JSON and YAML files until you are ready to switch to autoscaling/v1. We may remove support for HorizontalPodAutoscalers with apiVersion: extensions/v1beta1 in 1.3 or 1.4.
  • Kube-Proxy now defaults to an iptables-based proxy. If the --proxy-mode flag is specified while starting kube-proxy (‘userspace’ or ‘iptables’), the flag value will be respected. If the flag value is not specified, the kube-proxy respects the Node object annotation: ‘net.beta.kubernetes.io/proxy-mode’. If the annotation is not specified, then ‘iptables’ mode is the default. If kube-proxy is unable to start in iptables mode because system requirements are not met (kernel or iptables versions are insufficient), the kube-proxy will fall-back to userspace mode. Kube-proxy is much more performant and less resource-intensive in ‘iptables’ mode.
  • Node stability can be improved by reserving resources for the base operating system using --system-reserved and --kube-reserved Kubelet flags
  • Liveness and readiness probes now support more configuration parameters: periodSeconds, successThreshold, failureThreshold
  • The new ReplicaSet API (Beta) in the Extensions API group is similar to ReplicationController, but its selector is more general (supports set-based selector; whereas ReplicationController only supports equality-based selector).
  • Scale subresource support is now expanded to ReplicaSets along with ReplicationControllers and Deployments. Scale now supports two different types of selectors to accommodate both equality-based selectors supported by ReplicationControllers and set-based selectors supported by Deployments and ReplicaSets.
  • “kubectl run” now produces Deployments (instead of ReplicationControllers) and Jobs (instead of Pods) by default; a sketch follows this list.
  • Pods can now consume Secret data in environment variables and inject those environment variables into a container’s command-line args.
  • Stable version of Heapster which scales up to 1000 nodes: more metrics, reduced latency, reduced cpu/memory consumption (~4mb per monitored node).
  • Pods now have a security context which allows users to specify:
    • attributes which apply to the whole pod:
      • User ID
      • Whether all containers should be non-root
      • Supplemental Groups
      • FSGroup - a special supplemental group
      • SELinux options
    • If a pod defines an FSGroup, that Pod’s system (emptyDir, secret, configMap, etc) volumes and block-device volumes will be owned by the FSGroup, and each container in the pod will run with the FSGroup as a supplemental group
  • Volumes that support SELinux labelling are now automatically relabeled with the Pod’s SELinux context, if specified
  • A stable client library release_1_2 is added. The library is here, and detailed doc is here. We will keep the interface of this go client stable.
  • New Azure File Service Volume Plugin enables mounting Microsoft Azure File Volumes (SMB 2.1 and 3.0) into a Pod. See example for details.
  • Log usage and root filesystem usage of a container, volume usage of a pod, and node disk usage are exposed through the Kubelet's new metrics API.
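To illustrate the new “kubectl run” defaults mentioned above (image names and arguments are placeholders):

```sh
# Creates a Deployment named "nginx" (previously this produced a ReplicationController).
kubectl run nginx --image=nginx

# With a restart policy of OnFailure, kubectl run creates a Job rather than a bare Pod.
kubectl run pi --image=perl --restart=OnFailure -- perl -Mbignum=bpi -wle 'print bpi(200)'
```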

Experimental Features

  • Dynamic Provisioning of PersistentVolumes: Kubernetes previously required all volumes to be manually provisioned by a cluster administrator before use. With this feature, volume plugins that support it (GCE PD, AWS EBS, and Cinder) can automatically provision a PersistentVolume to bind to an unfulfilled PersistentVolumeClaim.
  • Run multiple schedulers in parallel, e.g. one or more custom schedulers alongside the default Kubernetes scheduler, using pod annotations to select among the schedulers for each pod. Documentation is here, design doc is here.
  • More expressive node affinity syntax, and support for “soft” node affinity. Node selectors (to constrain pods to schedule on a subset of nodes) now support the operators {In, NotIn, Exists, DoesNotExist, Gt, Lt} instead of just conjunction of exact match on node label values. In addition, we’ve introduced a new “soft” kind of node selector that is just a hint to the scheduler; the scheduler will try to satisfy these requests but it does not guarantee they will be satisfied. Both the “hard” and “soft” variants of node affinity use the new syntax. Documentation is here (see section “Alpha feature in Kubernetes v1.2: Node Affinity“). Design doc is here.
  • A pod can specify its own Hostname and Subdomain via annotations (pod.beta.kubernetes.io/hostname, pod.beta.kubernetes.io/subdomain). If the Subdomain matches the name of a headless service in the same namespace, a DNS A record is also created for the pod’s FQDN. More details can be found in the DNS README. Changes were introduced in PR #20688.
  • New SchedulerExtender enables users to implement custom out-of-(the-scheduler)-process scheduling predicates and priority functions, for example to schedule pods based on resources that are not directly managed by Kubernetes. Changes were introduced in PR #13580. Example configuration and documentation is available here. This is an alpha feature and may not be supported in its current form at beta or GA.
  • New Flex Volume Plugin enables users to use out-of-process volume plugins that are installed to “/usr/libexec/kubernetes/kubelet-plugins/volume/exec/” on every node, instead of being compiled into the Kubernetes binary. See example for details.
  • Flex volumes let vendors mount vendor-specific volumes into a pod. They expect vendor drivers to be installed in the volume plugin path on each kubelet node; a sketch follows this list. This is an alpha feature and may change in the future.
  • Kubelet exposes a new Alpha metrics API - /stats/summary in a user friendly format with reduced system overhead. The measurement is done in PR #22542.
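A hedged sketch of installing an out-of-process Flex Volume driver on a node; the vendor~driver directory naming and the driver name are assumptions for illustration, not taken from this changelog:

```sh
# Flex drivers live under the kubelet volume plugin directory, one subdirectory per
# vendor/driver pair; the kubelet invokes the executable found inside it.
PLUGIN_DIR=/usr/libexec/kubernetes/kubelet-plugins/volume/exec
sudo mkdir -p "${PLUGIN_DIR}/example.com~myflexdriver"
sudo cp ./myflexdriver "${PLUGIN_DIR}/example.com~myflexdriver/myflexdriver"
```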

Action required

  • Docker v1.9.1 is officially recommended. Docker v1.8.3 and Docker v1.10 are supported. If you are using an older release of Docker, please upgrade. Known issues with Docker 1.9.1 can be found below.
  • CPU hardcapping will be enabled by default for containers with CPU limit set, if supported by the kernel. You should either adjust your CPU limit, or set CPU request only, if you want to avoid hardcapping. If the kernel does not support CPU Quota, NodeStatus will contain a warning indicating that CPU Limits cannot be enforced.
  • The following applies only if you use the Go language client (/pkg/client/unversioned) to create Job by defining Go variables of type "k8s.io/kubernetes/pkg/apis/extensions".Job. We think this is not common, so if you are not sure what this means, you probably aren't doing this. If you do this, then, at the time you re-vendor the "k8s.io/kubernetes/" code, you will need to set job.Spec.ManualSelector = true, or else set job.Spec.Selector = nil. Otherwise, the jobs you create may be rejected. See Specifying your own pod selector.
  • Deployment was Alpha in 1.1 (though it had apiVersion extensions/v1beta1) and was disabled by default. Due to some non-backward-compatible API changes, any Deployment objects you created in 1.1 won’t work in the 1.2 release.
    • Before upgrading to 1.2, delete all Deployment alpha-version resources, including the Replication Controllers and Pods the Deployment manages. Then create Deployment Beta resources after upgrading to 1.2. Not deleting the Deployment objects may cause the deployment controller to mistakenly match other pods and delete them, due to the selector API change.
    • Client (kubectl) and server versions must match (both 1.1 or both 1.2) for any Deployment-related operations.
    • Behavior change:
      • Deployment creates ReplicaSets instead of ReplicationControllers.
      • Scale subresource now has a new targetSelector field in its status. This field supports the new set-based selectors supported by Deployments, but in a serialized format.
    • Spec change:
      • Deployment’s selector is now more general (supports set-based selector; it only supported equality-based selector in 1.1).
      • .spec.uniqueLabelKey is removed -- users can’t customize unique label key -- and its default value is changed from “deployment.kubernetes.io/podTemplateHash” to “pod-template-hash”.
      • .spec.strategy.rollingUpdate.minReadySeconds is moved to .spec.minReadySeconds
  • DaemonSet was Alpha in 1.1 (though it had apiVersion extensions/v1beta1) and was disabled by default. Due to some non-backward-compatible API changes, any DaemonSet objects you created in 1.1 won’t work in the 1.2 release.
    • Before upgrading to 1.2, delete all DaemonSet alpha-version resources. If you do not want to disrupt the pods, use kubectl delete daemonset --cascade=false. Then create DaemonSet Beta resources after upgrading to 1.2.
    • Client (kubectl) and server versions must match (both 1.1 or both 1.2) for any DaemonSet-related operations.
    • Behavior change:
      • DaemonSet pods will be created on nodes with .spec.unschedulable=true and will not be evicted from nodes whose Ready condition is false.
      • Updates to the pod template are now permitted. To perform a rolling update of a DaemonSet, update the pod template and then delete its pods one by one; they will be replaced using the updated template.
    • Spec change:
      • DaemonSet’s selector is now more general (supports set-based selector; it only supported equality-based selector in 1.1).
  • Running against a secured etcd requires these flags to be passed to kube-apiserver (instead of --etcd-config); a sketch follows this list:
    • --etcd-certfile, --etcd-keyfile (if using client cert auth)
    • --etcd-cafile (if not using system roots)
  • As part of preparation in 1.2 for adding support for protocol buffers (and the direct YAML support in the API available today), the Content-Type and Accept headers are now properly handled as per the HTTP spec. As a consequence, if you had a client that was sending an invalid Content-Type or Accept header to the API, in 1.2 you will either receive a 415 or 406 error. The only client this is known to affect is curl, which, when you use -d with JSON but don't set a content type, helpfully sends "application/x-www-urlencoded", which is not correct. Other client authors should double check that you are sending proper Accept and Content-Type headers, or set no value (in which case JSON is the default). An example using curl: curl -H "Content-Type: application/json" -XPOST -d '{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"kube-system"}}' "http://127.0.0.1:8080/api/v1/namespaces"
  • The version of InfluxDB is bumped from 0.8 to 0.9 which means storage schema change. More details here.
  • We have renamed “minions” to “nodes”. If you were specifying NUM_MINIONS or MINION_SIZE to kube-up, you should now specify NUM_NODES or NODE_SIZE.
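A minimal sketch of the secured-etcd flags on kube-apiserver (the endpoint and file paths are placeholders; other apiserver flags omitted):

```sh
# --etcd-certfile/--etcd-keyfile are only needed for client cert auth, and
# --etcd-cafile only if the etcd server cert is not signed by a system root.
kube-apiserver \
  --etcd-servers=https://etcd.example.com:2379 \
  --etcd-certfile=/etc/kubernetes/pki/etcd-client.crt \
  --etcd-keyfile=/etc/kubernetes/pki/etcd-client.key \
  --etcd-cafile=/etc/kubernetes/pki/etcd-ca.crt
```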

Known Issues

  • Paused deployments can't be resized and don't clean up old ReplicaSets.
  • Minimum memory limit is 4MB. This is a Docker limitation.
  • Minimum CPU limit is 10m. This is a Linux kernel limitation.
  • “kubectl rollout undo” (i.e. rollback) will hang on paused deployments, because paused deployments can’t be rolled back (this is expected), and the command waits for rollback events to return the result. Users should use “kubectl rollout resume” to resume a deployment before rolling back.
  • “kubectl edit <list of resources>” will open the editor multiple times, once for each resource in the list.
  • If you create HPA object using autoscaling/v1 API without specifying targetCPUUtilizationPercentage and read it using kubectl it will print default value as specified in extensions/v1beta1 (see details in #23196).
  • If a node or kubelet crashes with a volume attached, the volume will remain attached to that node. If that volume can only be attached to one node at a time (GCE PDs attached in RW mode, for example), then the volume must be manually detached before Kubernetes can attach it to other nodes.
  • If a volume is already attached to a node any subsequent attempts to attach it again (due to kubelet restart, for example) will fail. The volume must either be manually detached first or the pods referencing it deleted (which would trigger automatic volume detach).
  • In very large clusters it may happen that a few nodes won’t register in the API server in a given timeframe for whatever reason (networking issue, machine failure, etc.). Normally the kube-up script fails when it encounters even one NotReady node, even though the cluster most likely will be working. We added an environment variable to kube-up, ALLOWED_NOTREADY_NODES, that defines the number of nodes that, if not Ready in time, won’t cause kube-up to fail; a sketch follows this list.
  • “kubectl rolling-update” only supports Replication Controllers (it doesn’t support Replica Sets). It’s recommended to use Deployment 1.2 with “kubectl rollout” commands instead, if you want to rolling update Replica Sets.
  • When live upgrading Kubelet to 1.2 without draining the pods running on the node, the containers will be restarted by Kubelet (see details in #23104).
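A sketch of the kube-up invocation with the new variable mentioned above (the node count is a placeholder):

```sh
# Tolerate up to 3 nodes that do not become Ready in time without failing kube-up.
ALLOWED_NOTREADY_NODES=3 ./cluster/kube-up.sh
```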

Docker Known Issues

1.9.1
  • Listing containers can be slow at times which will affect kubelet performance. More information here
  • Docker daemon restarts can fail. Docker checkpoints have to be deleted between restarts. More information here
  • Pod IP allocation-related issues. Deleting the docker checkpoint prior to restarting the daemon alleviates this issue, but hasn’t been verified to completely eliminate the IP allocation issue. More information here
  • Daemon becomes unresponsive (rarely) due to kernel deadlocks. More information here

Provider-specific Notes

Various

Core changes:

  • Support for load balancers with source ranges

AWS

Core changes:

  • Support for ELBs with complex configurations: better subnet selection with multiple subnets, and internal ELBs
  • Support for VPCs with private dns names
  • Multiple fixes to EBS volume mounting code for robustness, and to support mounting the full number of AWS recommended volumes.
  • Multiple fixes to avoid hitting AWS rate limits, and to throttle if we do
  • Support for the EC2 Container Registry (currently in us-east-1 only)

With kube-up:

  • Automatically install updates on boot & reboot
  • Use optimized image based on Jessie by default
  • Add support for Ubuntu Wily
  • Master is configured with automatic restart-on-failure, via CloudWatch
  • Bootstrap reworked to be more similar to GCE; better supports reboots/restarts
  • Use an elastic IP for the master by default
  • Experimental support for node spot instances (set NODE_SPOT_PRICE=0.05)

GCE

  • Ubuntu Trusty support added

Please see the Releases Page for older releases.
