1: Server information and node overview
First time using CoreDNS, Ingress, and Calico.
OS: CentOS 7
| Hostname | IP | Notes |
| --- | --- | --- |
| master1 | 192.168.161.161 | master and etcd |
| master2 | 192.168.161.162 | master and etcd |
| master3 | 192.168.161.163 | etcd |
| node1 | 192.168.161.77 | node1 |
| node2 | 192.168.161.78 | node2 |
I mounted the data disk under the /opt directory.
I. Environment initialization
1: Set the hostname on each host
```bash
hostnamectl set-hostname master1    # on 192.168.161.161
hostnamectl set-hostname master2    # on 192.168.161.162
hostnamectl set-hostname master3    # on 192.168.161.163
hostnamectl set-hostname node1      # on 192.168.161.77
hostnamectl set-hostname node2      # on 192.168.161.78
```
2: Configure host name mappings
```bash
cat <<EOF > /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.161.161 master1
192.168.161.162 master2
192.168.161.163 master3
192.168.161.77  node1
192.168.161.78  node2
EOF
```
3: Configure passwordless SSH login on master1, so it can reach the other hosts
```bash
# generate a key pair first if one does not already exist
ssh-keygen -t rsa

# repeat for every host in the cluster
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.161.XXX
```
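If you prefer, the key can be pushed to every host in one pass; a minimal sketch, assuming root password authentication is still enabled and using the host list from the table above:

```bash
# push the public key to every cluster machine in one loop
for ip in 192.168.161.161 192.168.161.162 192.168.161.163 192.168.161.77 192.168.161.78; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "root@${ip}"
done
```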
4: On all hosts: stop the firewall, disable swap, disable SELinux, set kernel parameters, configure the yum repos, install dependency packages, and set up ntp (a reboot is recommended after this step)
```bash
systemctl stop firewalld
systemctl disable firewalld

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config

modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge

yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl

systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
```
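A quick sanity check of the above is worth running on each host; a minimal sketch (exact output wording may differ between CentOS minor releases):

```bash
free -m | grep -i swap                        # swap totals should all be 0
getenforce                                    # Permissive now, Disabled after a reboot
sysctl net.bridge.bridge-nf-call-iptables     # should print "... = 1"
crontab -l | grep ntpdate                     # the ntpdate cron entry should be listed
```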
II. Environment description
kube-apiserver, kube-controller-manager, and kube-scheduler are deployed locally from binary files. This setup uses 2 Masters and 2 Nodes: master1 (192.168.161.161) and master2 (192.168.161.162) run Master + etcd, master3 runs etcd only, and node1 / node2 are plain worker Nodes.
Creating the certificates
Here we use CloudFlare's PKI toolkit, cfssl, to generate the Certificate Authority (CA) certificate and key files.
Install cfssl
```bash
mkdir -p /opt/local/cfssl
cd /opt/local/cfssl

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
mv cfssl_linux-amd64 cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssljson_linux-amd64 cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 cfssl-certinfo

chmod +x *
```
Create the CA certificate configuration
```bash
mkdir /opt/ssl
cd /opt/ssl
```
The config.json file
```bash
vi config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
```
The csr.json file
```bash
vi csr.json

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
```
Generate the CA certificate and private key
```bash
cd /opt/ssl/

/opt/local/cfssl/cfssl gencert -initca csr.json | /opt/local/cfssl/cfssljson -bare ca

[root@master1 ssl]# ls -lt
total 20
-rw-r--r-- 1 root root 1005 Sep  1 13:36 ca.csr
-rw------- 1 root root 1679 Sep  1 13:36 ca-key.pem
-rw-r--r-- 1 root root 1363 Sep  1 13:36 ca.pem
-rw-r--r-- 1 root root  210 Sep  1 13:35 csr.json
-rw-r--r-- 1 root root  292 Sep  1 13:35 config.json
```
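Optionally, inspect the new CA before distributing it; a quick check using the cfssl-certinfo binary installed earlier:

```bash
# prints the CA subject, validity period and usages as JSON
/opt/local/cfssl/cfssl-certinfo -cert /opt/ssl/ca.pem
```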
Distribute the certificates
Create the certificate directory
```bash
mkdir -p /etc/kubernetes/ssl
```
Copy all the files into the directory
```bash
cp *.pem /etc/kubernetes/ssl
cp ca.csr /etc/kubernetes/ssl
```
The files need to be copied to all of the Kubernetes machines:
```bash
scp *.pem *.csr 192.168.161.162:/etc/kubernetes/ssl/
scp *.pem *.csr 192.168.161.163:/etc/kubernetes/ssl/
scp *.pem *.csr 192.168.161.77:/etc/kubernetes/ssl/
scp *.pem *.csr 192.168.161.78:/etc/kubernetes/ssl/
```
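Note that /etc/kubernetes/ssl must already exist on the target machines before scp can copy into it; a sketch that creates the directory and copies the files in one loop (host list from the table above):

```bash
for ip in 192.168.161.162 192.168.161.163 192.168.161.77 192.168.161.78; do
    ssh "root@${ip}" "mkdir -p /etc/kubernetes/ssl"
    scp *.pem *.csr "root@${ip}:/etc/kubernetes/ssl/"
done
```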
III. Install Docker
Install docker-ce on all servers beforehand. The official notes for Kubernetes 1.9 state that the highest supported Docker versions are 1.11.2, 1.12.6, 1.13.1, and 17.03.1.
```bash
# Import the yum repo
# Install yum-config-manager
yum -y install yum-utils

# Add the Docker repo
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

# Refresh the repo cache
yum makecache

# List the available versions
yum list docker-ce.x86_64 --showduplicates | sort -r

# Install the specified version. docker-ce 17.03 depends on docker-ce-selinux,
# which cannot be installed directly with yum, so install the rpm first.
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
rpm -ivh docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum -y install docker-ce-17.03.2.ce

docker version
```
Modify the Docker configuration
```bash
# Add the configuration
vi /etc/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target docker-storage-setup.service
Wants=docker-storage-setup.service

[Service]
Type=notify
Environment=GOTRACEBACK=crash
ExecReload=/bin/kill -s HUP $MAINPID
Delegate=yes
KillMode=process
ExecStart=/usr/bin/dockerd \
    $DOCKER_OPTS \
    $DOCKER_STORAGE_OPTIONS \
    $DOCKER_NETWORK_OPTIONS \
    $DOCKER_DNS_OPTIONS \
    $INSECURE_REGISTRY
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=1min
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
```
Modify other configurations
```bash
# On older kernels (3.10.x), configure overlay2
vi /etc/docker/daemon.json

{
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ]
}

mkdir -p /etc/systemd/system/docker.service.d/

vi /etc/systemd/system/docker.service.d/docker-options.conf

# Add the following (note: the Environment value must stay on one logical line;
# if it gets broken it will not load):
# For Docker 17.03.2 and earlier, use --graph=/opt/docker
# For Docker 17.04.x and later, use --data-root=/opt/docker
[Service]
Environment="DOCKER_OPTS=--insecure-registry=10.254.0.0/16 \
    --graph=/opt/docker --log-opt max-size=50m --log-opt max-file=5"

vi /etc/systemd/system/docker.service.d/docker-dns.conf

# Add the following:
[Service]
Environment="DOCKER_DNS_OPTIONS=\
    --dns 10.254.0.2 --dns 114.114.114.114 \
    --dns-search default.svc.cluster.local --dns-search svc.cluster.local \
    --dns-opt ndots:2 --dns-opt timeout:2 --dns-opt attempts:2"
```
Reload the configuration and start Docker
```bash
systemctl daemon-reload
systemctl start docker
systemctl enable docker
```
If it fails to start, use `systemctl status docker -l` or `journalctl -u docker` to locate the problem.
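Once Docker is running, it is worth confirming that the drop-in options were actually picked up; a small check, assuming the storage driver and data root configured above:

```bash
# the storage driver should be overlay2 and the data root should be /opt/docker
docker info 2>/dev/null | grep -iE 'storage driver|docker root dir'
# the two drop-in files should appear under "Drop-In:" in the unit status
systemctl status docker -l | grep -A 2 -i 'drop-in'
```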
The etcd cluster
etcd is the most critical component of a Kubernetes cluster: if etcd goes down, the whole cluster goes down. For Kubernetes 1.11.2, the latest supported etcd version is v3.2.18.
Install etcd
Official releases: https://github.com/coreos/etcd/releases
```bash
# Download the binaries (needed on all three master machines)
wget https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz
tar zxvf etcd-v3.2.18-linux-amd64.tar.gz
cd etcd-v3.2.18-linux-amd64
mv etcd etcdctl /usr/bin/
```
Create the etcd certificates
The etcd certificate below is configured for three nodes by default. If you may add more etcd nodes later, reserve a few extra IPs in the hosts field now, so that new nodes can pass certificate verification without re-issuing the certificate.
```bash
cd /opt/ssl/

vi etcd-csr.json

{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.161.161",
    "192.168.161.162",
    "192.168.161.163"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
```
Generate the etcd keys
```bash
/opt/local/cfssl/cfssl gencert -ca=/opt/ssl/ca.pem \
    -ca-key=/opt/ssl/ca-key.pem \
    -config=/opt/ssl/config.json \
    -profile=kubernetes etcd-csr.json | /opt/local/cfssl/cfssljson -bare etcd
```
```bash
# Check the generated files
[root@master1 ssl]# ls etcd*
etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem

# Inspect the certificate
[root@master1 ssl]# /opt/local/cfssl/cfssl-certinfo -cert etcd.pem

# Copy to the etcd servers
# etcd-1
cp etcd*.pem /etc/kubernetes/ssl/
# etcd-2
scp etcd*.pem 192.168.161.162:/etc/kubernetes/ssl/
# etcd-3
scp etcd*.pem 192.168.161.163:/etc/kubernetes/ssl/

# If etcd runs as a non-root user, it cannot read the key without this
chmod 644 /etc/kubernetes/ssl/etcd-key.pem
```
Modify the etcd configuration
Since etcd is the most critical component, point --data-dir at a separate path (here, under /opt on the data disk).
Create the etcd data directory and set its ownership
```bash
useradd etcd
mkdir -p /opt/etcd
chown -R etcd:etcd /opt/etcd
```
etcd-1
```bash
vi /etc/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/opt/etcd/
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
    --name=etcd1 \
    --cert-file=/etc/kubernetes/ssl/etcd.pem \
    --key-file=/etc/kubernetes/ssl/etcd-key.pem \
    --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
    --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
    --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
    --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
    --initial-advertise-peer-urls=https://192.168.161.161:2380 \
    --listen-peer-urls=https://192.168.161.161:2380 \
    --listen-client-urls=https://192.168.161.161:2379,http://127.0.0.1:2379 \
    --advertise-client-urls=https://192.168.161.161:2379 \
    --initial-cluster-token=k8s-etcd-cluster \
    --initial-cluster=etcd1=https://192.168.161.161:2380,etcd2=https://192.168.161.162:2380,etcd3=https://192.168.161.163:2380 \
    --initial-cluster-state=new \
    --data-dir=/opt/etcd/
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
etcd-2
```bash
vi /etc/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/opt/etcd/
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
    --name=etcd2 \
    --cert-file=/etc/kubernetes/ssl/etcd.pem \
    --key-file=/etc/kubernetes/ssl/etcd-key.pem \
    --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
    --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
    --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
    --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
    --initial-advertise-peer-urls=https://192.168.161.162:2380 \
    --listen-peer-urls=https://192.168.161.162:2380 \
    --listen-client-urls=https://192.168.161.162:2379,http://127.0.0.1:2379 \
    --advertise-client-urls=https://192.168.161.162:2379 \
    --initial-cluster-token=k8s-etcd-cluster \
    --initial-cluster=etcd1=https://192.168.161.161:2380,etcd2=https://192.168.161.162:2380,etcd3=https://192.168.161.163:2380 \
    --initial-cluster-state=new \
    --data-dir=/opt/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
etcd-3
```bash
vi /etc/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/opt/etcd/
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
    --name=etcd3 \
    --cert-file=/etc/kubernetes/ssl/etcd.pem \
    --key-file=/etc/kubernetes/ssl/etcd-key.pem \
    --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
    --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
    --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
    --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
    --initial-advertise-peer-urls=https://192.168.161.163:2380 \
    --listen-peer-urls=https://192.168.161.163:2380 \
    --listen-client-urls=https://192.168.161.163:2379,http://127.0.0.1:2379 \
    --advertise-client-urls=https://192.168.161.163:2379 \
    --initial-cluster-token=k8s-etcd-cluster \
    --initial-cluster=etcd1=https://192.168.161.161:2380,etcd2=https://192.168.161.162:2380,etcd3=https://192.168.161.163:2380 \
    --initial-cluster-state=new \
    --data-dir=/opt/etcd/
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
Start etcd
Start the etcd service on each node
```bash
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

journalctl -u etcd -f    # use this to follow the etcd logs in real time
```
Verify the etcd cluster status
```bash
etcdctl --endpoints=https://192.168.161.161:2379,https://192.168.161.162:2379,https://192.168.161.163:2379 \
    --cert-file=/etc/kubernetes/ssl/etcd.pem \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --key-file=/etc/kubernetes/ssl/etcd-key.pem \
    cluster-health

member 60ce394098258c3 is healthy: got healthy result from https://192.168.161.163:2379
member afe2d07db38fa5e2 is healthy: got healthy result from https://192.168.161.162:2379
member ba8a716d98dac47b is healthy: got healthy result from https://192.168.161.161:2379
cluster is healthy
```
View the etcd cluster members:
```bash
etcdctl --endpoints=https://192.168.161.161:2379,https://192.168.161.162:2379,https://192.168.161.163:2379 \
    --cert-file=/etc/kubernetes/ssl/etcd.pem \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --key-file=/etc/kubernetes/ssl/etcd-key.pem \
    member list

60ce394098258c3: name=etcd3 peerURLs=https://192.168.161.163:2380 clientURLs=https://192.168.161.163:2379 isLeader=false
afe2d07db38fa5e2: name=etcd2 peerURLs=https://192.168.161.162:2380 clientURLs=https://192.168.161.162:2379 isLeader=false
ba8a716d98dac47b: name=etcd1 peerURLs=https://192.168.161.161:2380 clientURLs=https://192.168.161.161:2379 isLeader=true
```
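Because every etcdctl call needs the same endpoint and TLS flags, a small wrapper saves typing; purely a convenience sketch (the function name etcdctl-k8s is made up here):

```bash
# add to /root/.bashrc on the master nodes if desired
etcdctl-k8s() {
    etcdctl \
        --endpoints=https://192.168.161.161:2379,https://192.168.161.162:2379,https://192.168.161.163:2379 \
        --cert-file=/etc/kubernetes/ssl/etcd.pem \
        --ca-file=/etc/kubernetes/ssl/ca.pem \
        --key-file=/etc/kubernetes/ssl/etcd-key.pem \
        "$@"
}

# usage
etcdctl-k8s cluster-health
etcdctl-k8s member list
```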
Configure the Kubernetes cluster
Install kubectl on every machine from which you need to operate the cluster.
Master and Node
The Master needs to run three components: kube-apiserver, kube-scheduler, and kube-controller-manager. kube-scheduler decides which node each pod is assigned to; in short, it does resource scheduling. kube-controller-manager runs the control loops (deployment controller, replication controller, endpoints controller, namespace controller, serviceaccounts controller, and so on) and interacts with kube-apiserver.
Install the components
```bash
# Download the release from GitHub (run on the master nodes)
cd /usr/local/src
wget https://dl.k8s.io/v1.11.2/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes

cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kubelet,kubeadm} /usr/local/bin/

scp server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet,kubeadm} 192.168.161.162:/usr/local/bin/
scp server/bin/{kube-proxy,kubelet} 192.168.161.77:/usr/local/bin/
scp server/bin/{kube-proxy,kubelet} 192.168.161.78:/usr/local/bin/
```
Create the admin certificate
kubectl talks to kube-apiserver over its secure port, so a TLS certificate and key are required for this secure communication.
```bash
cd /opt/ssl/

vi admin-csr.json

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
```
```bash
# Generate the admin certificate and private key
cd /opt/ssl/

/opt/local/cfssl/cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
    -ca-key=/etc/kubernetes/ssl/ca-key.pem \
    -config=/opt/ssl/config.json \
    -profile=kubernetes admin-csr.json | /opt/local/cfssl/cfssljson -bare admin

# Check the generated files
[root@master1 ssl]# ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem

cp admin*.pem /etc/kubernetes/ssl/
scp admin*.pem 192.168.161.162:/etc/kubernetes/ssl/
```
Generate the kubernetes configuration file
The generated certificate-related configuration is stored in the /root/.kube directory.
```bash
# Configure the kubernetes cluster entry
kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/ssl/ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443

# Configure the client credentials
kubectl config set-credentials admin \
    --client-certificate=/etc/kubernetes/ssl/admin.pem \
    --embed-certs=true \
    --client-key=/etc/kubernetes/ssl/admin-key.pem

kubectl config set-context kubernetes \
    --cluster=kubernetes \
    --user=admin

kubectl config use-context kubernetes
```
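The result can be checked even before the apiserver is running; a quick look at the kubeconfig that was just written to /root/.kube/config:

```bash
kubectl config view               # certificate data is shown as REDACTED
kubectl config current-context    # should print: kubernetes
```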
Create the kubernetes certificate
```bash
cd /opt/ssl

vi kubernetes-csr.json

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.161.161",
    "192.168.161.162",
    "192.168.161.163",
    "192.168.161.77",
    "192.168.161.78",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
```

In the hosts field above, 127.0.0.1 is the local host, and 192.168.161.161 / 192.168.161.162 are the Master IPs (add one entry per Master if you have more). 10.254.0.1 is the kubernetes service VIP, normally the first IP of the service network (for example 10.254.0.1); after startup you can see it with kubectl get svc.
Generate the kubernetes certificate and private key
```bash
/opt/local/cfssl/cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
    -ca-key=/etc/kubernetes/ssl/ca-key.pem \
    -config=/opt/ssl/config.json \
    -profile=kubernetes kubernetes-csr.json | /opt/local/cfssl/cfssljson -bare kubernetes

# Check the generated files
[root@master1 ssl]# ls -lt kubernetes*
-rw-r--r-- 1 root root 1277 Sep  1 15:31 kubernetes.csr
-rw------- 1 root root 1679 Sep  1 15:31 kubernetes-key.pem
-rw-r--r-- 1 root root 1651 Sep  1 15:31 kubernetes.pem
-rw-r--r-- 1 root root  531 Sep  1 15:31 kubernetes-csr.json

# Copy to the certificate directory
cp kubernetes*.pem /etc/kubernetes/ssl/
scp kubernetes*.pem 192.168.161.162:/etc/kubernetes/ssl/
```
Configure kube-apiserver
When kubelet starts for the first time, it sends a TLS bootstrapping request to kube-apiserver. kube-apiserver checks whether the token in the request matches its own configured token; if it does, a certificate and key are generated automatically for the kubelet.
```bash
# Generate a token
[root@kubernetes-64 ssl]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
97606de41d5ee3c3392aae432eb3143d

# Create the encryption-config.yaml configuration
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: 97606de41d5ee3c3392aae432eb3143d
      - identity: {}
EOF

# Copy it into place
cp encryption-config.yaml /etc/kubernetes/
scp encryption-config.yaml 192.168.161.162:/etc/kubernetes/
```
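One caveat worth flagging: the upstream encryption-at-rest documentation expects the aescbc secret to be a base64-encoded 16-, 24- or 32-byte key rather than a raw hex token, so generating it as below may be safer (a sketch, not the method used above):

```bash
# generate a 32-byte random key and base64-encode it for use as the aescbc secret
head -c 32 /dev/urandom | base64
```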
```bash
# Generate the audit policy file
# Official docs: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
# The following is a minimal audit policy
cd /etc/kubernetes

cat >> audit-policy.yaml <<EOF
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF

# Copy it to the other master
scp audit-policy.yaml 192.168.161.162:/etc/kubernetes/
```
Create the kube-apiserver.service file
```bash
# Custom systemd service files normally live under /etc/systemd/system/
# Set the addresses below to each master's own local IP
vi /etc/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
User=root
ExecStart=/usr/local/bin/kube-apiserver \
    --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
    --anonymous-auth=false \
    --experimental-encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
    --advertise-address=192.168.161.161 \
    --allow-privileged=true \
    --apiserver-count=3 \
    --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
    --audit-log-maxage=30 \
    --audit-log-maxbackup=3 \
    --audit-log-maxsize=100 \
    --audit-log-path=/var/log/kubernetes/audit.log \
    --authorization-mode=Node,RBAC \
    --bind-address=0.0.0.0 \
    --secure-port=6443 \
    --client-ca-file=/etc/kubernetes/ssl/ca.pem \
    --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem \
    --kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem \
    --enable-swagger-ui=true \
    --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
    --etcd-certfile=/etc/kubernetes/ssl/etcd.pem \
    --etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem \
    --etcd-servers=https://192.168.161.161:2379,https://192.168.161.162:2379,https://192.168.161.163:2379 \
    --event-ttl=1h \
    --kubelet-https=true \
    --insecure-bind-address=127.0.0.1 \
    --insecure-port=8080 \
    --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
    --service-cluster-ip-range=10.254.0.0/18 \
    --service-node-port-range=30000-32000 \
    --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    --enable-bootstrap-token-auth \
    --v=1
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# --experimental-encryption-provider-config replaces the token.csv file used in earlier guides.
# Note --service-node-port-range=30000-32000: this is the port range used when mapping
# services to external ports. Randomly assigned NodePorts fall within this range, and
# explicitly specified NodePorts must also be within it.
# Remember to change the IP addresses in this file on the other master.
```
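To illustrate --service-node-port-range: once the cluster is fully up, any NodePort service must use a port inside 30000-32000. A hypothetical example (the service name and ports are made up for illustration):

```bash
# hypothetical NodePort service; nodePort must fall within 30000-32000
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-web
spec:
  type: NodePort
  selector:
    app: demo-web
  ports:
  - port: 80          # ClusterIP port
    targetPort: 8080  # container port
    nodePort: 30080   # explicitly chosen, inside the allowed range
EOF
```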
Start kube-apiserver
```bash
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
```
Check the listening ports
```bash
[root@master1 kubernetes]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 192.168.161.161:2379    0.0.0.0:*               LISTEN      3605/etcd
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      3605/etcd
tcp        0      0 192.168.161.161:2380    0.0.0.0:*               LISTEN      3605/etcd
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      3844/kube-apiserver
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1715/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2066/master
tcp6       0      0 :::6443                 :::*                    LISTEN      3844/kube-apiserver
tcp6       0      0 :::22                   :::*                    LISTEN      1715/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      2066/master
```
Configure kube-controller-manager
Both masters need to be configured:
A few new flags are added for automatic certificate rotation: --feature-gates=RotateKubeletServerCertificate=true and --experimental-cluster-signing-duration=86700h0m0s
```bash
# Create the kube-controller-manager.service file
vi /etc/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
    --address=0.0.0.0 \
    --master=http://127.0.0.1:8080 \
    --allocate-node-cidrs=true \
    --service-cluster-ip-range=10.254.0.0/18 \
    --cluster-cidr=10.254.64.0/18 \
    --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
    --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
    --feature-gates=RotateKubeletServerCertificate=true \
    --controllers=*,tokencleaner,bootstrapsigner \
    --experimental-cluster-signing-duration=86700h0m0s \
    --cluster-name=kubernetes \
    --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
    --root-ca-file=/etc/kubernetes/ssl/ca.pem \
    --leader-elect=true \
    --node-monitor-grace-period=40s \
    --node-monitor-period=5s \
    --pod-eviction-timeout=5m0s \
    --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```
Start kube-controller-manager
```bash
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
```
Check the listening ports
```bash
[root@master1 kubernetes]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 192.168.161.161:2379    0.0.0.0:*               LISTEN      3605/etcd
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      3605/etcd
tcp        0      0 192.168.161.161:2380    0.0.0.0:*               LISTEN      3605/etcd
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      3844/kube-apiserver
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1715/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2066/master
tcp6       0      0 :::6443                 :::*                    LISTEN      3844/kube-apiserver
tcp6       0      0 :::10252                :::*                    LISTEN      3970/kube-controlle
tcp6       0      0 :::22                   :::*                    LISTEN      1715/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      2066/master
```
Configure kube-scheduler
```bash
# Create the kube-scheduler.service file
vi /etc/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
    --address=0.0.0.0 \
    --master=http://127.0.0.1:8080 \
    --leader-elect=true \
    --v=1
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```
Start kube-scheduler
```bash
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
```
Check the listening ports
```bash
[root@master1 kubernetes]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 192.168.161.161:2379    0.0.0.0:*               LISTEN      3605/etcd
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      3605/etcd
tcp        0      0 192.168.161.161:2380    0.0.0.0:*               LISTEN      3605/etcd
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      3844/kube-apiserver
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1715/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2066/master
tcp6       0      0 :::10251                :::*                    LISTEN      4023/kube-scheduler
tcp6       0      0 :::6443                 :::*                    LISTEN      3844/kube-apiserver
tcp6       0      0 :::10252                :::*                    LISTEN      3970/kube-controlle
tcp6       0      0 :::22                   :::*                    LISTEN      1715/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      2066/master
```
Verify the Master nodes
```bash
[root@master1 kubernetes]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}

[root@master2 bin]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
```
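Since the control plane now answers kubectl, the kubernetes service VIP mentioned earlier can also be confirmed; a quick check using the kubeconfig created above:

```bash
# the built-in "kubernetes" service should show the first IP of the service range
kubectl get svc kubernetes    # expected CLUSTER-IP: 10.254.0.1, PORT: 443/TCP
```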