0. Overview
Set up a single-node Kubernetes instance using kubeadm, for learning purposes only. The runtime environment and software are summarized below:
~ | Version | Notes |
---|---|---|
OS | Ubuntu 18.04 | 192.168.132.152 my.servermaster.local / 192.168.132.154 my.worker01.local |
Docker | 18.06.1~ce~3-0~ubuntu | Highest version supported by the latest k8s (1.12.3); must be pinned |
Kubernetes | 1.12.3 | Target software version |
The system and software above are essentially the latest as of 2018. Note that Docker must be installed at a version that k8s actually supports.
1. Installation Steps
- Disable swap (required by kubelet)
swapoff -a
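A caveat the original does not mention: `swapoff -a` only lasts until the next reboot. To keep swap off permanently, the swap entry in /etc/fstab must also be commented out. A sketch of that edit, run here against a sample copy rather than the real /etc/fstab:

```shell
#!/bin/bash
# Work on a sample copy; the real file would be /etc/fstab.
fstab=$(mktemp)
cat <<'EOF' >"$fstab"
UUID=abcd-1234 / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
EOF

# Comment out any non-comment line whose filesystem type column is "swap".
sed -i 's|^\([^#].*[[:space:]]swap[[:space:]].*\)$|#\1|' "$fstab"

grep '^#' "$fstab"   # → #/swapfile none swap sw 0 0
```

The root filesystem line is untouched; only the swap entry gets the leading `#`.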
- Install a container runtime; Docker is the default, so just install Docker
apt-get install docker-ce=18.06.1~ce~3-0~ubuntu
- Install kubeadm. The following commands match the official docs, except that the package source is switched to the Aliyun mirror
```
apt-get update && apt-get install -y apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
```
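An easy thing to get wrong in the block above is the heredoc that writes the apt source list. A small sanity check, using a temp file instead of the real /etc/apt/sources.list.d/kubernetes.list so it can run anywhere:

```shell
#!/bin/bash
list_file=$(mktemp)

# Same heredoc as above, redirected to a scratch path.
cat <<EOF >"$list_file"
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

# The file should contain exactly one "deb ... kubernetes-xenial main" line.
grep -c 'kubernetes-xenial main$' "$list_file"   # → 1
```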
2. Creating a Cluster with kubeadm
2.1 Preparing the Images
Since k8s.gcr.io is not reachable from mainland China, the required images must be downloaded in advance. Here they are pulled from the Aliyun mirror registry and then re-tagged as k8s.gcr.io.
```
# a. List the images that need to be downloaded
kubeadm config images list --kubernetes-version=v1.12.3
k8s.gcr.io/kube-apiserver:v1.12.3
k8s.gcr.io/kube-controller-manager:v1.12.3
k8s.gcr.io/kube-scheduler:v1.12.3
k8s.gcr.io/kube-proxy:v1.12.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

# b. Write a script that pulls each image, re-tags it, and removes the old tag
vim ./load_images.sh

#!/bin/bash

### config the image map
declare -A images
images["k8s.gcr.io/kube-apiserver:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3"
images["k8s.gcr.io/kube-controller-manager:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3"
images["k8s.gcr.io/kube-scheduler:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3"
images["k8s.gcr.io/kube-proxy:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.3"
images["k8s.gcr.io/pause:3.1"]="registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1"
images["k8s.gcr.io/etcd:3.2.24"]="registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24"
images["k8s.gcr.io/coredns:1.2.2"]="registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2"

### pull, re-tag, then drop the mirror tag for each entry
for key in "${!images[@]}"
do
    docker pull ${images[$key]}
    docker tag ${images[$key]} $key
    docker rmi ${images[$key]}
done

### check the result
docker images

# c. Run the script to prepare the images
sudo chmod +x load_images.sh
./load_images.sh
```
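The script above relies on a fixed naming convention: each k8s.gcr.io image has a mirror under registry.cn-hangzhou.aliyuncs.com/google_containers with the same name and tag. That mapping is just a string rewrite, shown here as a dry run that needs no docker daemon:

```shell
#!/bin/bash
mirror_prefix="registry.cn-hangzhou.aliyuncs.com/google_containers"

# Rewrite a mirror image reference back to its k8s.gcr.io name.
to_gcr() {
    echo "${1/$mirror_prefix/k8s.gcr.io}"
}

to_gcr "$mirror_prefix/kube-apiserver:v1.12.3"   # → k8s.gcr.io/kube-apiserver:v1.12.3
to_gcr "$mirror_prefix/pause:3.1"                # → k8s.gcr.io/pause:3.1
```

This is exactly the relationship `docker tag ${images[$key]} $key` encodes in the loop.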
2.2 Initializing the Cluster (master)
Initialization needs at least two parameters:
- kubernetes-version: keeps kubeadm from going online to look up the version
- pod-network-cidr: required by the flannel network plugin configuration
```
### Run the init command
sudo kubeadm init --kubernetes-version=v1.12.3 --pod-network-cidr=10.244.0.0/16

### The end of the output looks like this
... ...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9
```
2.3 Configuring kubectl for a Non-root Account (from the success message above)
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Check the nodes with the non-root account:
```
kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
servermaster   NotReady   master   28m   v1.12.3
```
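When scripting around this step, the NotReady state can be detected by parsing the same output. A sketch using a captured sample of `kubectl get nodes`, so it runs without a live cluster:

```shell
#!/bin/bash
# Sample output captured from the cluster above.
nodes_out='NAME           STATUS     ROLES    AGE   VERSION
servermaster   NotReady   master   28m   v1.12.3'

# Count nodes whose STATUS column is NotReady (skip the header row).
not_ready=$(echo "$nodes_out" | awk 'NR>1 && $2=="NotReady"' | wc -l)
echo "$not_ready"   # → 1
```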
There is now one master node, but its status is NotReady. A decision has to be made here:
If you want a single-machine cluster, remove the master taint so pods can schedule on it:
kubectl taint nodes --all node-role.kubernetes.io/master-
If you want to keep building a multi-node cluster, continue with the next steps; the master's status can be ignored for now.
2.4 Applying the Network Plugin
View the contents of the kube-flannel.yml file and copy them into a local file, in case the terminal cannot fetch it remotely.
```
kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
```
2.5 Adding a Worker Node
To create a worker node, repeat [1. Installation Steps] on another server. The worker does not need sections 2.1–2.3 or anything after them; the basic installation is enough. Once it is done, log in to the new worker node and run the join command obtained at the end of the previous step:
```
kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9
... ...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
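Worth knowing, though not covered in the original text: the bootstrap token in this join command expires (after 24 hours by default), and on the master `kubeadm token create --print-join-command` prints a fresh one. For scripting, the API server endpoint can be pulled out of the join command string with plain text processing, no cluster needed:

```shell
#!/bin/bash
join_cmd="kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9"

# The endpoint is the third whitespace-separated field.
endpoint=$(echo "$join_cmd" | awk '{print $3}')
echo "$endpoint"   # → 192.168.132.152:6443
```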
2.6 Checking the Cluster (1 master, 1 worker)
```
kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
servermaster   Ready    master   94m   v1.12.3
worker01       Ready    <none>   54m   v1.12.3
```
2.7 Creating the Dashboard
Copy the contents of kubernetes-dashboard.yaml into a local file.
```
kubectl create -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
```
In a browser, open the worker node's IP and port over https, e.g. https://my.worker01.local:30000/#!/login, to verify that the dashboard is installed.
3. Problems Encountered
- The master is up and the worker has joined, but `kubectl get nodes` still shows NotReady.
  Cause: hard to pin down precisely; it remains an open k8s issue, but the discussion there makes it fairly clear this is a CNI (Container Network Interface) problem, which the flannel overlay fixes.
  Fix: install the flannel plugin (kubectl apply -f kube-flannel.yml).
- Misconfigured the cluster and want to start over.
  Fix: kubeadm reset.
- Cannot access the dashboard.
  Cause: Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0".
  Fixes:
  - Change k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0 in the kubernetes-dashboard-ce.yaml file to registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0.
  - Or download the image in advance and re-tag it; note it must be downloaded on the worker node. Use kubectl describe pod kubernetes-dashboard-85477d54d7-wzt7 -n kube-system to see more detail.
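The mirror workaround in the last fix is the same rewrite used in section 2.1, just in the opposite direction. A dry-run sketch that turns the failing image name from the "Back-off pulling image" event into the corresponding mirror commands (string manipulation only; no docker daemon required):

```shell
#!/bin/bash
failed="k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0"

# Swap the unreachable registry prefix for the Aliyun mirror used earlier.
mirror="${failed/k8s.gcr.io/registry.cn-hangzhou.aliyuncs.com/google_containers}"

echo "docker pull $mirror"
echo "docker tag  $mirror $failed"
echo "docker rmi  $mirror"
```

Running the three printed commands on the worker node pre-stages the image under the name the kubelet expects.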
4. References
Installing kubeadm:
Creating the cluster: