Overview: Building a Highly Available Kubernetes Cluster with kubeadm, Step by Step (Part 1)
The core of a Kubernetes cluster is its master node. By default, however, there is only one master node: once it fails, the cluster is effectively "paralyzed". Cluster management, Pod scheduling and so on can no longer be carried out, even though some user Pods may keep running. This clearly does not meet the requirements for a Kubernetes cluster running in production; we need a highly available Kubernetes cluster.
That said, official Kubernetes support for building high-availability clusters is still very limited, offering only rough deployment methods for a handful of cloud providers, for example: using the kube-up.sh script on GCE, or using kops on AWS.
A highly available Kubernetes cluster is the inevitable direction of Kubernetes' evolution. The official document "Building High-Availability Clusters" gives a rough outline of how to build an HA cluster today. Kubeadm has also put HA on its milestone plan for upcoming releases, and a draft proposal for deploying an HA cluster with kubeadm has already been published.
Until kubeadm can truly bootstrap an HA Kubernetes cluster automatically, how should we build one ourselves? This article explores, step by step, an approach to building an HA k8s cluster and the concrete steps involved. Note, however, that the HA k8s cluster built here has only been tested OK in a lab environment and has not yet run in production, so the approach may have flaws in some as yet unknown details.
1. Test Environment
High availability of a Kubernetes cluster is mainly about the high availability of its master nodes, so we provisioned three Alibaba Cloud ECS instances in the US West region to serve as three master nodes. Using hostnamectl, we set the static hostnames of the three nodes to shaolin, wudang and emei:
shaolin: 10.27.53.32
wudang:  10.24.138.208
emei:    10.27.52.72
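For reference, a minimal sketch of the hostname change (run the matching command on each host; this is my own illustration, not output from the article's environment):

hostnamectl set-hostname shaolin   # on the first node; use wudang / emei on the other two
hostnamectl status                 # verify the static hostname took effect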
All three hosts run Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-63-generic x86_64), and the root user is used throughout.
The Docker version is as follows:
root@shaolin:~# docker version
Client:
 Version:      17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:14:09 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.1-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:14:09 2017
 OS/Arch:      linux/amd64
 Experimental: false
The steps for installing Docker CE on Ubuntu can be found here. Since my servers are in the US West region, the Great Firewall is not an issue; if your hosts are in mainland China, decide for yourself whether to configure a registry mirror, based on whether errors show up during installation. Also note that the Docker version used here is fairly new; the version most often mentioned on the Kubernetes site, and the one with the best compatibility, is Docker 1.12.x, which you can install instead.
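For readers who want a rough outline, here is a hedged sketch of installing Docker CE on Ubuntu 16.04. The repository setup shown is the commonly documented one and may not match the exact steps in the guide linked above:

apt-get update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update && apt-get install -y docker-ce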
2. The Approach to Master Node High Availability
From our earlier exploration of a single-master node setup, we know that the following Kubernetes components run on the master node:
- kube-apiserver: the core of the cluster, its API endpoint and the hub through which all components communicate; it also handles cluster security controls;
- etcd: the cluster's data store;
- kube-scheduler: the scheduling center for the cluster's Pods;
- kube-controller-manager: the cluster state manager. When the cluster state diverges from the desired state, kcm works to bring it back; for example, when a pod dies, kcm creates a new pod to restore the number of replicas expected by the corresponding ReplicaSet;
- kubelet: the Kubernetes node agent, responsible for talking to the Docker engine on the node;
- kube-proxy: one per node, responsible for forwarding traffic from a service VIP to the endpoint pods, currently implemented mainly via iptables rules.
High availability of a Kubernetes cluster means high availability of the master nodes, which in the end comes down to the high availability of the components listed above. So our approach is to work out how to make each of these components highly available. Based on the official Kubernetes material and a few proposal drafts, building everything from scratch "the hard way" does not seem very sensible ^0^; converting a k8s cluster created by kubeadm into an HA k8s cluster looks more feasible. Here is my plan:
As mentioned above, the idea is to start from a Kubernetes cluster bootstrapped by kubeadm and, by gradually modifying its configuration or replacing components, arrive at the final HA k8s cluster. The figure above shows the end state of the HA cluster, in which:
- kube-apiserver: thanks to the apiserver being stateless, the apiserver on every master node is active and handles the traffic distributed to it by the load balancer;
- etcd: the central store of cluster state. The etcd instances on the master nodes are joined into an etcd cluster, so the apiservers share the cluster state and data;
- kube-controller-manager: kcm has built-in leader election. The kcm instances on the masters form a group, but only the elected leader actually works. The kcm on each master connects to the apiserver on the same node;
- kube-scheduler: the scheduler also has built-in leader election. The scheduler instances on the masters form a group, but only the elected leader actually works. The scheduler on each master connects to the apiserver on the same node;
- kubelet: since the master components all run as containers, the kubelet on a master node that carries no workload mainly manages these master component containers. The kubelet on each master connects to the apiserver on the same node;
- kube-proxy: since the master nodes carry no workload, kube-proxy on a master only serves a few special services such as kube-dns. Because kubeadm does not expose any externally tunable configuration for kube-proxy, kube-proxy has to connect to the apiserver port exposed by the load balancer.
Next, following this plan, we will transform the single-master k8s cluster bootstrapped by kubeadm step by step until it reaches the HA cluster state we want.
3. Step 1: Install a Single-Master k8s Cluster with kubeadm
Some time has passed since I first used kubeadm to install a Kubernetes 1.5.1 cluster, and both kubernetes and kubeadm have changed somewhat. The latest releases of kubernetes and kubeadm are currently both 1.6.2:
root@wudang:~# kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
root@wudang:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/google_containers/kube-proxy-amd64 v1.6.2 7a1b61b8f5d4 3 weeks ago 109 MB
gcr.io/google_containers/kube-controller-manager-amd64 v1.6.2 c7ad09fe3b82 3 weeks ago 133 MB
gcr.io/google_containers/kube-apiserver-amd64 v1.6.2 e14b1d5ee474 3 weeks ago 151 MB
gcr.io/google_containers/kube-scheduler-amd64 v1.6.2 b55f2a2481b9 3 weeks ago 76.8 MB
... ...
Although the kubeadm version has been updated, the installation process has not changed much. Only the key steps are listed here; most of the detailed output is omitted.
We first install the required packages on the shaolin node:
root@shaolin:~# apt-get update && apt-get install -y apt-transport-https
root@shaolin:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK
root@shaolin:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb http://apt.kubernetes.io/ kubernetes-xenial main
> EOF
root@shaolin:~# apt-get update
root@shaolin:~# apt-get install -y kubelet kubeadm kubectl kubernetes-cni
Next, we bootstrap the cluster with kubeadm. Note: since the flannel network plugin has never worked well for me on Aliyun, we use the weave network here instead.
root@shaolin:~/k8s-install# kubeadm init --apiserver-advertise-address 10.27.53.32
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.2
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.03.1-ce. Max validated version: 1.12
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [shaolin kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.27.53.32]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 17.045449 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 5.008588 seconds
[token] Using token: a8dd42.afdb86eda4a8c987
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token abcdefghijklmn 10.27.53.32:6443

root@shaolin:~/k8s-install# pods
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE       IP            NODE
kube-system   etcd-shaolin                      1/1       Running   0          34s       10.27.53.32   shaolin
kube-system   kube-apiserver-shaolin            1/1       Running   0          35s       10.27.53.32   shaolin
kube-system   kube-controller-manager-shaolin   1/1       Running   0          23s       10.27.53.32   shaolin
kube-system   kube-dns-3913472980-tkr91         0/3       Pending   0          1m        <none>
kube-system   kube-proxy-bzvvk                  1/1       Running   0          1m        10.27.53.32   shaolin
kube-system   kube-scheduler-shaolin            1/1       Running   0          46s       10.27.53.32   shaolin
Installing the weave network on k8s 1.6.2 differs slightly from before, because k8s 1.6 enables a more secure mechanism and by default uses RBAC to grant only limited permissions to workloads running on the cluster. The weave network plugin yaml we need is weave-daemonset-k8s-1.6.yaml:
root@shaolin:~/k8s-install# kubectl apply -f https://git.io/weave-kube-1.6
clusterrole "weave-net" created
serviceaccount "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
If your weave pod fails to start and the reason in the log looks like this:
Network 172.30.0.0/16 overlaps with existing route 172.16.0.0/12 on host.
then you need to change the IPALLOC_RANGE of your weave network (I used 172.32.0.0/16 here):
//weave-daemonset-k8s-1.6.yaml
... ...
spec:
template:
metadata:
labels:
name: weave-net
spec:
hostNetwork: true
hostPID: true
containers:
- name: weave
env:
- name: IPALLOC_RANGE
value: 172.32.0.0/16
... ...
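One possible way to apply the modified manifest (the local file name is just an example; adjust IPALLOC_RANGE before applying):

curl -L https://git.io/weave-kube-1.6 -o weave-daemonset-k8s-1.6.yaml
# edit the file and add the IPALLOC_RANGE env var as shown above
kubectl apply -f weave-daemonset-k8s-1.6.yaml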
With the master installed and OK, we add wudang and emei as k8s minion nodes to verify that the cluster has been set up correctly. This step also installs kubelet and kube-proxy on wudang and emei, and these two components can be reused as-is in the later conversion:
Taking the emei node as an example:

root@emei:~# kubeadm join --token abcdefghijklmn 10.27.53.32:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.03.1-ce. Max validated version: 1.12
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "10.27.53.32:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.27.53.32:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://10.27.53.32:6443"
[discovery] Successfully established connection with API Server "10.27.53.32:6443"
[bootstrap] Detected server version: v1.6.2
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
Create an nginx service backed by several pods to check that the cluster network works. I won't go into the details here, but a sketch follows.
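A hedged sketch of such a test (the my-nginx deployment name matches the pods seen later in this article, but the exact commands are my own illustration):

kubectl run my-nginx --image=nginx --replicas=2 --port=80
kubectl expose deployment my-nginx --port=80
kubectl get svc my-nginx          # note the cluster IP
# curl the cluster IP from a pod or another node to confirm cross-node connectivity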
The state of the single-master kubernetes cluster after installation is shown in the figure below:
4. Step 2: Build an etcd Cluster for the HA k8s Cluster
The k8s cluster's state and data are all stored in etcd, so a highly available k8s cluster cannot do without a highly available etcd cluster. We need to provide an HA etcd cluster for the final HA k8s cluster. How do we do that?
In the current k8s cluster, the etcd on the shaolin master node stores all of the cluster's data and state. We need to bring up etcd instances on the wudang and emei nodes as well and, together with the existing etcd, form a highly available cluster that holds the cluster's data and state. We break this down into a few smaller steps:
0. Start the kubelet service on the emei and wudang nodes
The etcd cluster could be built in a completely standalone way, unrelated to any k8s components. Here, however, I take the same approach as on the master: the etcd instances started by the kubelet on wudang and emei will serve as the two new members of the etcd cluster. At this point wudang and emei are still acting as k8s minion nodes, so we first need to clean up the data on these two nodes:
root@shaolin:~/k8s-install# kubectl drain wudang --delete-local-data --force --ignore-daemonsets
node "wudang" cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-proxy-mxwp3, weave-net-03jbh; Deleting pods with local storage: weave-net-03jbh
pod "my-nginx-2267614806-fqzph" evicted
node "wudang" drained

root@wudang:~# kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml", assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

root@shaolin:~/k8s-install# kubectl drain emei --delete-local-data --force --ignore-daemonsets
root@emei:~# kubeadm reset

root@shaolin:~/k8s-install# kubectl delete node/wudang
root@shaolin:~/k8s-install# kubectl delete node/emei
In our plan, the etcd cluster members are started automatically by the kubelet on each node, while the kubelet itself is started by systemd at system init with the following configuration:
root@wudang:~# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_EXTRA_ARGS
We first need to get the kubelet running on the wudang and emei nodes. Taking wudang as an example:
root@wudang:~# systemctl enable kubelet
root@wudang:~# systemctl start kubelet
Check the kubelet service log:
root@wudang:~# journalctl -u kubelet -f
May 10 10:58:41 wudang systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 10 10:58:41 wudang kubelet[27179]: I0510 10:58:41.798507   27179 feature_gate.go:144] feature gates: map[]
May 10 10:58:41 wudang kubelet[27179]: error: failed to run Kubelet: invalid kubeconfig: stat /etc/kubernetes/kubelet.conf: no such file or directory
May 10 10:58:41 wudang systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 10:58:41 wudang systemd[1]: kubelet.service: Unit entered failed state.
May 10 10:58:41 wudang systemd[1]: kubelet.service: Failed with result 'exit-code'.
The kubelet fails to start because the /etc/kubernetes/kubelet.conf configuration file is missing. We turn to the shaolin node for help: copy the file of the same name from shaolin to wudang and emei, along with shaolin's /etc/kubernetes/pki directory (a sketch of the copy commands follows the listings below):
root@wudang:~# kubectl --kubeconfig=/etc/kubernetes/kubelet.conf config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://10.27.53.32:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: system:node:shaolin
name: system:node:shaolin@kubernetes
current-context: system:node:shaolin@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:shaolin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
root@wudang:~# ls /etc/kubernetes/pki
apiserver.crt apiserver-kubelet-client.crt ca.crt ca.srl front-proxy-ca.key front-proxy-client.key sa.pub
apiserver.key apiserver-kubelet-client.key ca.key front-proxy-ca.crt front-proxy-client.crt sa.key
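A sketch of the copy step, run on shaolin (using scp here is my assumption; the paths are the ones referenced above):

scp /etc/kubernetes/kubelet.conf root@10.24.138.208:/etc/kubernetes/
scp -r /etc/kubernetes/pki root@10.24.138.208:/etc/kubernetes/
scp /etc/kubernetes/kubelet.conf root@10.27.52.72:/etc/kubernetes/
scp -r /etc/kubernetes/pki root@10.27.52.72:/etc/kubernetes/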
After systemctl daemon-reload; systemctl restart kubelet, check the kubelet service log again and you will see that the kubelet is up!
Taking the wudang node as an example:

root@wudang:~# journalctl -u kubelet -f
-- Logs begin at Mon 2017-05-08 15:12:01 CST. --
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.213529   26907 factory.go:54] Registering systemd factory
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.213674   26907 factory.go:86] Registering Raw factory
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.213813   26907 manager.go:1106] Started watching for new ooms in manager
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.216383   26907 oomparser.go:185] oomparser using systemd
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.217415   26907 manager.go:288] Starting recovery of all containers
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.285428   26907 manager.go:293] Recovery completed
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.344425   26907 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
May 11 10:37:07 wudang kubelet[26907]: E0511 10:37:07.356188   26907 eviction_manager.go:214] eviction manager: unexpected err: failed GetNode: node 'wudang' not found
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.358402   26907 kubelet_node_status.go:77] Attempting to register node wudang
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.363083   26907 kubelet_node_status.go:80] Successfully registered node wudang
For now, we let the kubelets on wudang and emei stay connected to the apiserver on the shaolin node.
1. Build an etcd cluster on the emei and wudang nodes
Using /etc/kubernetes/manifests/etcd.yaml on the shaolin node as a template, we write the etcd.yaml for wudang and emei; the main changes are in the containers' command section:
/etc/kubernetes/manifests/etcd.yaml on wudang:
spec:
containers:
- command:
- etcd
- --name=etcd-wudang
- --initial-advertise-peer-urls=http://10.24.138.208:2380
- --listen-peer-urls=http://10.24.138.208:2380
- --listen-client-urls=http://10.24.138.208:2379,http://127.0.0.1:2379
- --advertise-client-urls=http://10.24.138.208:2379
- --initial-cluster-token=etcd-cluster
- --initial-cluster=etcd-wudang=http://10.24.138.208:2380,etcd-emei=http://10.27.52.72:2380
- --initial-cluster-state=new
- --data-dir=/var/lib/etcd
image: gcr.io/google_containers/etcd-amd64:3.0.17
/etc/kubernetes/manifests/etcd.yaml on emei:
spec:
containers:
- command:
- etcd
- --name=etcd-emei
- --initial-advertise-peer-urls=http://10.27.52.72:2380
- --listen-peer-urls=http://10.27.52.72:2380
- --listen-client-urls=http://10.27.52.72:2379,http://127.0.0.1:2379
- --advertise-client-urls=http://10.27.52.72:2379
- --initial-cluster-token=etcd-cluster
- --initial-cluster=etcd-emei=http://10.27.52.72:2380,etcd-wudang=http://10.24.138.208:2380
- --initial-cluster-state=new
- --data-dir=/var/lib/etcd
image: gcr.io/google_containers/etcd-amd64:3.0.17
After these two files are placed into each node's /etc/kubernetes/manifests directory, the kubelet on each node automatically starts the corresponding etcd pod!
root@shaolin:~# pods
NAMESPACE     NAME           READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   etcd-emei      1/1       Running   0          11s       10.27.52.72     emei
kube-system   etcd-shaolin   1/1       Running   0          25m       10.27.53.32     shaolin
kube-system   etcd-wudang    1/1       Running   0          24s       10.24.138.208   wudang
Let's check the current status of the etcd cluster:
# etcdctl endpoint status --endpoints=10.27.52.72:2379,10.24.138.208:2379
10.27.52.72:2379, 6e80adf8cd57f826, 3.0.17, 25 kB, false, 17, 660
10.24.138.208:2379, f3805d1ab19c110b, 3.0.17, 25 kB, true, 17, 660

Note: the output columns are, from left to right: endpoint URL, ID, version, database size, leadership status, raft term, and raft index. We can therefore see that the etcd on wudang (10.24.138.208) has been elected cluster leader.
Let's test the etcd cluster by putting a few keys:
On the wudang node (note: export ETCDCTL_API=3):

root@wudang:~# etcdctl put foo bar
OK
root@wudang:~# etcdctl put foo1 bar1
OK
root@wudang:~# etcdctl get foo
foo
bar

On the emei node:

root@emei:~# etcdctl get foo
foo
bar
At this point, the state of the current kubernetes cluster is shown in the diagram below:
2. Sync the data from shaolin's etcd into the etcd cluster
Kubernetes 1.6.2 uses etcd 3.x by default. etcdctl 3.x provides a make-mirror feature for mirroring data between etcd clusters, so we can use etcdctl make-mirror to sync the k8s cluster data from shaolin's etcd into the newly created etcd cluster. Run the following command on the emei node:
root@emei:~# etcdctl make-mirror --no-dest-prefix=true 127.0.0.1:2379 --endpoints=10.27.53.32:2379 --insecure-skip-tls-verify=true
... ...
261
302
341
380
420
459
498
537
577
616
655
... ...
etcdctl make-mirror prints a line of output every 30 seconds, but these numbers do not really show the progress of the sync. Also, etcdctl make-mirror appears to be a streaming sync with no defined end point, so you have to judge for yourself whether all the data has been copied over, for example by fetching a particular key and comparing the two sides:
# etcdctl get --from-key /api/v2/registry/clusterrolebindings/cluster-admin
.. ..
compact_rev_key
122912
Alternatively, compare the database sizes reported by the endpoint status command on the two sides, as sketched below. Once they are roughly equal, you can stop the make-mirror process.
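For example, comparing the database sizes on both sides could look like this (with ETCDCTL_API=3 exported; the endpoints are the ones used earlier in this article):

etcdctl endpoint status --endpoints=10.27.53.32:2379                      # source etcd on shaolin
etcdctl endpoint status --endpoints=10.27.52.72:2379,10.24.138.208:2379   # destination etcd cluster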
3. Point shaolin's apiserver at the etcd cluster, then stop and remove shaolin's etcd
Modify /etc/kubernetes/manifests/kube-apiserver.yaml on the shaolin node so that shaolin's kube-apiserver connects to the etcd on the emei node, by changing the following line:

- --etcd-servers=http://10.27.52.72:2379

After saving the change, the kubelet automatically restarts kube-apiserver, and the restarted kube-apiserver works fine.
Next, we stop and remove the etcd on shaolin (and delete its data directory):
root@shaolin:~# rm /etc/kubernetes/manifests/etcd.yaml
root@shaolin:~# rm -fr /var/lib/etcd
List the current pods of the k8s cluster again and you will see that etcd-shaolin is gone.
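A quick way to confirm this (a sketch):

kubectl get pods -n kube-system -o wide | grep etcd
# only etcd-emei and etcd-wudang should remain Running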
At this point, the state of the k8s cluster is shown in the diagram below:
4. Re-create the etcd on shaolin and join it to the etcd cluster as a member
First, we need to add the etcd-shaolin member to the existing etcd cluster:
root@wudang:~/kubernetes-conf-shaolin/manifests# etcdctl member add etcd-shaolin --peer-urls=http://10.27.53.32:2380
Member 3184cfa57d8ef00c added to cluster 140cec6dd173ab61
Then, on the shaolin node, modify the original etcd.yaml from shaolin as follows:
// /etc/kubernetes/manifests/etcd.yaml
... ...
spec:
containers:
- command:
- etcd
- --name=etcd-shaolin
- --initial-advertise-peer-urls=http://10.27.53.32:2380
- --listen-peer-urls=http://10.27.53.32:2380
- --listen-client-urls=http://10.27.53.32:2379,http://127.0.0.1:2379
- --advertise-client-urls=http://10.27.53.32:2379
- --initial-cluster-token=etcd-cluster
- --initial-cluster=etcd-shaolin=http://10.27.53.32:2380,etcd-wudang=http://10.24.138.208:2380,etcd-emei=http://10.27.52.72:2380
- --initial-cluster-state=existing
- --data-dir=/var/lib/etcd
image: gcr.io/google_containers/etcd-amd64:3.0.17
After saving the change, the kubelet automatically brings etcd-shaolin back up:
root@shaolin:~/k8s-install# pods
NAMESPACE     NAME           READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   etcd-emei      1/1       Running   0          3h        10.27.52.72     emei
kube-system   etcd-shaolin   1/1       Running   0          8s        10.27.53.32     shaolin
kube-system   etcd-wudang    1/1       Running   0          3h        10.24.138.208   wudang
Check the etcd cluster status:
root@shaolin:~# etcdctl endpoint status --endpoints=10.27.52.72:2379,10.24.138.208:2379,10.27.53.32:2379
10.27.52.72:2379, 6e80adf8cd57f826, 3.0.17, 11 MB, false, 17, 34941
10.24.138.208:2379, f3805d1ab19c110b, 3.0.17, 11 MB, true, 17, 34941
10.27.53.32:2379, 3184cfa57d8ef00c, 3.0.17, 11 MB, false, 17, 34941
You can see that the database size and raft state of the three etcd instances are consistent, and the etcd on the wudang node is the leader!
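As an additional check, etcdctl member list should now show all three members (a sketch, assuming ETCDCTL_API=3 is exported):

etcdctl member list --endpoints=127.0.0.1:2379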
5. Point the etcd endpoint of shaolin's apiserver back to etcd-shaolin
// /etc/kubernetes/manifests/kube-apiserver.yaml
... ...
- --etcd-servers=http://127.0.0.1:2379
... ...
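Before relying on the switch, a small sanity check along these lines may help (a sketch; componentstatuses only gives an indicative health view):

etcdctl endpoint health --endpoints=127.0.0.1:2379   # the local etcd-shaolin should answer
kubectl get componentstatuses                        # etcd, scheduler and controller-manager should report Healthy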
After the change takes effect and the apiserver restarts, the current state of the kubernetes cluster is shown in the diagram below:
Part 2 is here.
© 2017, bigwhite. All rights reserved.