This morning at the office, while going through the daily news, I found that Kubernetes had been updated to 1.15.0, with quite a few new features. For the details, see the Kubernetes blog: https://kubernetes.io/blog/2019/06/19/kubernetes-1-15-release-announcement/
Highlights worth paying attention to:
- Certificate management with kubeadm becomes more robust in 1.15: kubeadm can now seamlessly rotate all certificates (on upgrade) before they expire. For how to manage certificates, see the kubeadm documentation; a quick sketch also follows after this list
- The kubeadm config file API moves from v1beta1 to v1beta2 in 1.15
- Support for Go modules in Kubernetes core
- Continued preparation for cloud-provider extraction and code reorganization: cloud-provider code has been moved to kubernetes/legacy-cloud-providers, to make later removal and external consumption easier
- kubectl get and describe now work with extensions
- Nodes now support third-party monitoring plugins
- A new scheduling framework for scheduler plugins is available in alpha
- The ExecutionHook API, for triggering hook commands in containers across different use cases, is now in alpha
- Continued deprecation of the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs; they will be removed entirely in 1.16
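As a quick look at the improved certificate management, here is a minimal sketch (assuming kubeadm v1.15 is installed; at the time, these subcommands still lived under kubeadm alpha):
# List the expiration dates of all kubeadm-managed certificates
kubeadm alpha certs check-expiration
# Renew all certificates manually, independently of an upgrade
kubeadm alpha certs renew all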
With the new release out, I couldn't wait to upgrade my own Kubernetes environment.
Check the cluster
Check which versions the cluster can be upgraded to and whether the current cluster can be upgraded:
kubeadm upgrade plan
Note that kubeadm, kubelet, and kubectl themselves need to be upgraded first.
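Before running the plan, it is worth confirming that the kubeadm binary on the master already reports the target version; a minimal check (the -o short output flag is assumed here):
kubeadm version -o short    # should print v1.15.0 once kubeadm itself has been upgraded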
Upgrade kubelet, kubeadm, and kubectl
yum clean all    # if yum cannot find the 1.15.0 packages, clear the local yum cache first
yum install -y kubelet kubeadm kubectl
This needs to be run on the other nodes as well.
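If the repository carries more than one 1.15.x build, the package version can be pinned explicitly; a sketch (the exact -1.15.0 version suffix is an assumption about the repository's package naming):
yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0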
Pull the required images
- kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: "v1.15.0"
...
imageRepository: registry.aliyuncs.com/google_containers
The kubeadm init configuration specifies the version to upgrade to and the image repository to pull from.
- Pull the images
$ kubeadm config images pull --config=kubeadm-config.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.3.10
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.3.1
The images were pulled successfully.
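As an optional sanity check (assuming the Docker container runtime), the pulled images can be listed locally:
docker images | grep registry.aliyuncs.com/google_containers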
Upgrade the cluster components
kubeadm upgrade apply v1.15.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.15.0"
[upgrade/versions] Cluster version: v1.14.2
[upgrade/versions] kubeadm version: v1.15.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.15.0"...
Static pod: kube-apiserver-k8s-11 hash: 2e138075197b77cbc857ed6c45d3e0a3
Static pod: kube-controller-manager-k8s-11 hash: d4e699449cae3b28f9f657d0eabfef0e
Static pod: kube-scheduler-k8s-11 hash: a29556bf1d34f898bf5d0ce3c15a5948
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests653407144"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-21-09-39-05/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-11 hash: 2e138075197b77cbc857ed6c45d3e0a3
Static pod: kube-apiserver-k8s-11 hash: a0b1f68dcbfbbb58b72942275ea6e8c8
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-21-09-39-05/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-11 hash: d4e699449cae3b28f9f657d0eabfef0e
Static pod: kube-controller-manager-k8s-11 hash: e421c8900f2987ad26251124112ccba8
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-21-09-39-05/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-11 hash: a29556bf1d34f898bf5d0ce3c15a5948
Static pod: kube-scheduler-k8s-11 hash: b778c0dffa2d3c4049df6a82b96ea2c4
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
The output shows the cluster was successfully upgraded to v1.15.0.
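Note from the log above that kubeadm backed up the old static Pod manifests under /etc/kubernetes/tmp; they can be inspected if a rollback is ever needed:
ls /etc/kubernetes/tmp/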
- Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
kubelet needs to be restarted on every node after it is upgraded.
- Upgrade the other master nodes, if there are any
kubeadm upgrade node control-plane
- Upgrade the worker nodes (a per-node sketch follows below)
kubeadm upgrade node
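The official upgrade guide additionally recommends draining each worker before upgrading it; here is a per-node sketch (the node name k8s-12 is taken from this cluster, and the drain/uncordon steps are run from a master):
kubectl drain k8s-12 --ignore-daemonsets    # from a master: cordon the node and evict its workloads
# then, on the worker itself:
yum install -y kubelet kubeadm kubectl
kubeadm upgrade node
systemctl daemon-reload && systemctl restart kubelet
kubelet --version                           # confirm the node now runs v1.15.0
kubectl uncordon k8s-12                     # from a master: make the node schedulable again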
Verify the cluster upgrade
$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
k8s-11   Ready    master   27d   v1.15.0
k8s-12   Ready    <none>   27d   v1.15.0
k8s-13   Ready    <none>   27d   v1.15.0
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
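Optionally, confirm that the control-plane pods in kube-system are all running after the upgrade:
kubectl get pods -n kube-system -o wide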
Reference: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/