Installing a Kubernetes Cluster on Ubuntu 16.04

Preparation

The plan is to install the Kubernetes cluster on two servers, named kube-1 and kube-2.
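
Before installing anything, it helps to make sure each machine uses its planned hostname and that both names resolve on both hosts. A minimal sketch, assuming kube-1's address 192.168.1.9 from the kubeadm output below and a hypothetical 192.168.1.10 for kube-2 (replace with your real IPs):

# Run on each server, using its own name (kube-1 or kube-2)
hostnamectl set-hostname kube-1
# Make both names resolvable on both machines; addresses are examples
cat <<EOF >>/etc/hosts
192.168.1.9  kube-1
192.168.1.10 kube-2
EOF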

Switch Ubuntu to a domestic mirror

Here we use the Aliyun mirror. Change /etc/apt/sources.list to:

deb http://mirrors.aliyun.com/ubuntu/ xenial main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial main
 
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates main
 
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
 
deb http://mirrors.aliyun.com/ubuntu/ xenial-security main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security main
deb http://mirrors.aliyun.com/ubuntu/ xenial-security universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security universe

Add the Kubernetes package repository

Then run the following commands to add the Kubernetes package source and install the tooling (reference: https://opsx.alibaba.com/mirror ):

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
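
The commands above install the latest packages available. Since the init step below targets v1.13.2, you may instead want to pin matching package versions and hold them. This is an optional sketch; the 1.13.2-00 revision string is an assumption, so check what the repository actually offers with apt-cache madison kubeadm first:

# Optional: install a specific version (package revision string assumed)
apt-get install -y kubelet=1.13.2-00 kubeadm=1.13.2-00 kubectl=1.13.2-00
# Keep apt from upgrading these packages unexpectedly
apt-mark hold kubelet kubeadm kubectl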

Install Docker

apt-get install docker.io -y

Then start Docker:

service docker start
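
Optionally, you can also make Docker start on boot and confirm that the daemon is healthy; a small sketch assuming the systemd tooling that ships with Ubuntu 16.04:

# Start Docker automatically on boot
systemctl enable docker
# Confirm the daemon is up and inspect its basic configuration
systemctl status docker
docker info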

If you need to run docker commands as a non-root user, see this article for further setup: https://johng.cn/using-docker-without-root/
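
The usual approach, sketched here (the username youruser is a placeholder), is to add the account to the docker group:

# Allow a non-root user to talk to the Docker socket (replace youruser)
usermod -aG docker youruser
# Log out and back in (or run: newgrp docker) for the group change to take effect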

Install Kubernetes

Install on kube-1

First, on the kube-1 node, we use the kubeadm tool to install Kubernetes. Initialize the Kubernetes node with the following command:

kubeadm init --image-repository loads  --kubernetes-version v1.13.2

Here, loads is a Kubernetes image repository I created personally, and v1.13.2 specifies the Kubernetes version to install; different versions depend on different Docker images.
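
If you want to see which images a given version pulls, or pre-fetch them before init as the preflight output below also suggests, kubeadm has subcommands for that; a sketch:

# List the images required by this Kubernetes version
kubeadm config images list --kubernetes-version v1.13.2
# Pre-pull them from the same repository used for kubeadm init
kubeadm config images pull --image-repository loads --kubernetes-version v1.13.2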

After running it, the output is as follows:

root@kube-1:~# kubeadm init --image-repository loads  --kubernetes-version v1.13.2 
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube-1 localhost] and IPs [192.168.1.9 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube-1 localhost] and IPs [192.168.1.9 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.9]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.005808 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-1" as an annotation
[mark-control-plane] Marking the node kube-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kube-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xvm4rc.qmlh7m5uprqfjt9g
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes master has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
You can now join any number of machines by running the following on each node
as root:
 
  kubeadm join 192.168.1.9:6443 --token xvm4rc.qmlh7m5uprqfjt9g --discovery-token-ca-cert-hash sha256:bab0a640108a524fefd4574ccb9f63273087936fd403f4b51d6217b903cbf400
 
root@kube-1:~#

Following the prompt, after initialization completes we need to run the following commands to set up the kubectl configuration file:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

In addition, take note of the following line and save it, so that other nodes can be added to the cluster later:

kubeadm join 192.168.1.9:6443 --token xvm4rc.qmlh7m5uprqfjt9g --discovery-token-ca-cert-hash sha256:bab0a640108a524fefd4574ccb9f63273087936fd403f4b51d6217b903cbf400
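
Note that the bootstrap token in this command is only valid for a limited time (24 hours by default). If it expires or gets lost, a fresh join command can be generated on the master; a sketch:

# Run on kube-1 to create a new token and print the full join command
kubeadm token create --print-join-command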

Install on kube-2

After completing the preparation steps on the kube-2 node, do not run kubeadm init; instead, run kubeadm join to add the node to the cluster:

root@kube-2:~# kubeadm join 192.168.1.9:6443 --token xvm4rc.qmlh7m5uprqfjt9g --discovery-token-ca-cert-hash sha256:bab0a640108a524fefd4574ccb9f63273087936fd403f4b51d6217b903cbf400
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.1.9:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.9:6443"
[discovery] Requesting info from "https://192.168.1.9:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.9:6443"
[discovery] Successfully established connection with API Server "192.168.1.9:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-2" as an annotation
 
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
 
Run 'kubectl get nodes' on the master to see this node join the cluster.
 
root@kube-2:~#
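
As the output suggests, you can verify on the master that kube-2 has joined. The nodes may report NotReady until a pod network add-on is installed (see the common problems section below); a sketch:

# Run on kube-1 (the master)
kubectl get nodes
# The CoreDNS pods typically stay Pending until a network plugin is deployed
kubectl get pods -n kube-system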

If the node has been initialized before, you can reset its configuration at any time with kubeadm reset, restart the kubelet service with service kubelet restart, and then rejoin the cluster with kubeadm join.
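
Put together, the reset-and-rejoin flow on a node looks roughly like this sketch, reusing the join parameters recorded from kube-1 above:

# Wipe the node's previous kubeadm state
kubeadm reset
# Restart the kubelet service
service kubelet restart
# Join the cluster again with the saved join command
kubeadm join 192.168.1.9:6443 --token xvm4rc.qmlh7m5uprqfjt9g --discovery-token-ca-cert-hash sha256:bab0a640108a524fefd4574ccb9f63273087936fd403f4b51d6217b903cbf400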

Common Kubernetes problems

1. runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Cause:

The kubelet is configured with network-plugin=cni, but no CNI network plugin has been installed yet, so the node status shows NotReady. If you don't want to see this error, or don't need a pod network, you can edit the kubelet configuration file and remove the network-plugin=cni setting.

Solution:

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Remove $KUBELET_NETWORK_ARGS from the last line. On version 1.11.2+ these flags are stored in /var/lib/kubelet/kubeadm-flags.env instead.
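
As a sketch, on 1.11.2+ the edit and restart would look roughly like this (the exact flag spelling inside kubeadm-flags.env is an assumption, so check the file contents first):

# Remove the CNI/network-plugin entry from the kubelet's extra flags
vim /var/lib/kubelet/kubeadm-flags.env
# Reload systemd units and restart the kubelet so the change takes effect
systemctl daemon-reload
systemctl restart kubelet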

