K8s and TiDB are both active open-source projects, and TiDB Operator is a project for orchestrating and managing TiDB clusters on K8s. This article records in detail the process of deploying K8s and installing TiDB Operator, in the hope that it helps readers who are just getting started.
1. Environment
Ubuntu 16.04
K8s 1.14.1
2. Installing K8s with Kubespray
Configure passwordless SSH login
yum -y install expect    # on a CentOS control host; on Ubuntu use: apt-get install -y expect
vi /tmp/autocopy.exp
#!/usr/bin/expect

set timeout
set user_hostname [lindex $argv 0]
set password [lindex $argv 1]
spawn ssh-copy-id $user_hostname
expect {
    "(yes/no)?"
    {
        send "yes\n"
        expect "*assword:" { send "$password\n" }
    }
    "*assword:"
    {
        send "$password\n"
    }
}
expect eof
ssh-keygen -t rsa -P ''

for i in 10.0.0.{31,32,33,40,10,20,50}; do ssh-keyscan $i >> ~/.ssh/known_hosts ; done

# general form: /tmp/autocopy.exp root@<ip> <password>
/tmp/autocopy.exp root@10.0.0.31
/tmp/autocopy.exp root@10.0.0.32
/tmp/autocopy.exp root@10.0.0.33
/tmp/autocopy.exp root@10.0.0.40
/tmp/autocopy.exp root@10.0.0.10
/tmp/autocopy.exp root@10.0.0.20
/tmp/autocopy.exp root@10.0.0.50
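With the keys distributed, a quick loop confirms that passwordless login works on every node (a sanity check of my own, not part of the original steps):

for i in 10.0.0.{31,32,33,40,10,20,50}; do ssh root@$i hostname; done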
Configure Kubespray
# from the Kubespray source directory
pip install -r requirements.txt
cp -rfp inventory/sample inventory/mycluster
Edit inventory/mycluster/inventory.ini:
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
# node1 ansible_host=95.54.0.12 # ip=10.3.0.1 etcd_member_name=etcd1
# node2 ansible_host=95.54.0.13 # ip=10.3.0.2 etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14 # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15 # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16 # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17 # ip=10.3.0.6 etcd_member_name=etcd6
etcd1 ansible_host=10.0.0.31 etcd_member_name=etcd1
etcd2 ansible_host=10.0.0.32 etcd_member_name=etcd2
etcd3 ansible_host=10.0.0.33 etcd_member_name=etcd3
master1 ansible_host=10.0.0.40
node1 ansible_host=10.0.0.10
node2 ansible_host=10.0.0.20
node3 ansible_host=10.0.0.50

# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube-master]
# node1
# node2
master1

[etcd]
# node1
# node2
# node3
etcd1
etcd2
etcd3

[kube-node]
# node2
# node3
# node4
# node5
# node6
node1
node2
node3

[k8s-cluster:children]
kube-master
kube-node
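Before launching the playbook, it is worth checking that Ansible can reach every host in the inventory (a standard connectivity check, not in the original notes):

ansible -i inventory/mycluster/inventory.ini all -m ping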
Files and images required by the nodes
Some of the required images cannot be pulled from within China, so you first have to download them through a proxy and push them to a private registry or Docker Hub, then adjust the configuration files accordingly. A few components are hosted at https://storage.googleapis.com, which requires setting up an Nginx server to distribute the files.
Set up the Nginx server

The steps, each detailed below, are:

- Install Docker and Docker Compose
- Create ~/distribution/docker-compose.yml
- Create the file directory and the Nginx configuration directory, including ~/distribution/conf.d/open_distribute.conf
- Start the container
- Download and upload the required files; for the exact version numbers, see the kubeadm_version, kube_version, and image_arch parameters in roles/download/defaults/main.yml
Install Docker and Docker Compose:
apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"

apt-get update

apt-get install docker-ce docker-ce-cli containerd.io

sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
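A quick check that both tools are installed:

docker version
docker-compose --version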
Create the Nginx docker-compose.yml:
mkdir ~/distribution
vi ~/distribution/docker-compose.yml
# distribute
version: '2'
services:
  distribute:
    image: nginx:1.15.12
    volumes:
      - ./conf.d:/etc/nginx/conf.d
      - ./distributedfiles:/usr/share/nginx/html
    network_mode: "host"
    container_name: nginx_distribute
Create the file directory and the Nginx configuration directory:

mkdir ~/distribution/distributedfiles
mkdir ~/distribution/conf.d
vi ~/distribution/conf.d/open_distribute.conf
#open_distribute.conf

server {
    #server_name distribute.search.leju.com;
    listen 8888;

    root /usr/share/nginx/html;

    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Headers X-Requested-With;
    add_header Access-Control-Allow-Methods GET,POST,OPTIONS;

    location / {
        # index index.html;
        autoindex on;
    }

    expires off;

    location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|eot|ttf|woff|woff2|svg)$ {
        expires -1;
    }

    location ~ .*\.(js|css)?$ {
        expires -1;
    }
} # end of public static files domain : [ distribute.search.leju.com ]
Start the container:

docker-compose up -d
Download the required files and upload them to the Nginx server:

wget -P /tmp https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubeadm
scp /tmp/kubeadm 10.0.0.60:/root/distribution/distributedfiles

wget -P /tmp https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/hyperkube
scp /tmp/hyperkube 10.0.0.60:/root/distribution/distributedfiles
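Assuming the container from the previous step is serving on port 8888, verify that both files are reachable before pointing Kubespray at them:

curl -I http://10.0.0.60:8888/kubeadm
curl -I http://10.0.0.60:8888/hyperkube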
Images that need to be downloaded and pushed to the private registry:
docker pull k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.4.0
docker tag k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.4.0 jiashiwen/cluster-proportional-autoscaler-amd64:1.4.0
docker push jiashiwen/cluster-proportional-autoscaler-amd64:1.4.0

docker pull k8s.gcr.io/k8s-dns-node-cache:1.15.1
docker tag k8s.gcr.io/k8s-dns-node-cache:1.15.1 jiashiwen/k8s-dns-node-cache:1.15.1
docker push jiashiwen/k8s-dns-node-cache:1.15.1

docker pull gcr.io/google_containers/pause-amd64:3.1
docker tag gcr.io/google_containers/pause-amd64:3.1 jiashiwen/pause-amd64:3.1
docker push jiashiwen/pause-amd64:3.1

docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker tag gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1 jiashiwen/kubernetes-dashboard-amd64:v1.10.1
docker push jiashiwen/kubernetes-dashboard-amd64:v1.10.1

docker pull gcr.io/google_containers/kube-apiserver:v1.14.1
docker tag gcr.io/google_containers/kube-apiserver:v1.14.1 jiashiwen/kube-apiserver:v1.14.1
docker push jiashiwen/kube-apiserver:v1.14.1

docker pull gcr.io/google_containers/kube-controller-manager:v1.14.1
docker tag gcr.io/google_containers/kube-controller-manager:v1.14.1 jiashiwen/kube-controller-manager:v1.14.1
docker push jiashiwen/kube-controller-manager:v1.14.1

docker pull gcr.io/google_containers/kube-scheduler:v1.14.1
docker tag gcr.io/google_containers/kube-scheduler:v1.14.1 jiashiwen/kube-scheduler:v1.14.1
docker push jiashiwen/kube-scheduler:v1.14.1

docker pull gcr.io/google_containers/kube-proxy:v1.14.1
docker tag gcr.io/google_containers/kube-proxy:v1.14.1 jiashiwen/kube-proxy:v1.14.1
docker push jiashiwen/kube-proxy:v1.14.1

docker pull gcr.io/google_containers/pause:3.1
docker tag gcr.io/google_containers/pause:3.1 jiashiwen/pause:3.1
docker push jiashiwen/pause:3.1

docker pull gcr.io/google_containers/coredns:1.3.1
docker tag gcr.io/google_containers/coredns:1.3.1 jiashiwen/coredns:1.3.1
docker push jiashiwen/coredns:1.3.1
A script that pulls, tags, and pushes all of the images:
#!/bin/bash

privaterepo=jiashiwen

k8sgcrimages=(
cluster-proportional-autoscaler-amd64:1.4.0
k8s-dns-node-cache:1.15.1
)

gcrimages=(
pause-amd64:3.1
kubernetes-dashboard-amd64:v1.10.1
kube-apiserver:v1.14.1
kube-controller-manager:v1.14.1
kube-scheduler:v1.14.1
kube-proxy:v1.14.1
pause:3.1
coredns:1.3.1
)

for k8sgcrimageName in ${k8sgcrimages[@]} ; do
    echo $k8sgcrimageName
    docker pull k8s.gcr.io/$k8sgcrimageName
    docker tag k8s.gcr.io/$k8sgcrimageName $privaterepo/$k8sgcrimageName
    docker push $privaterepo/$k8sgcrimageName
done

for gcrimageName in ${gcrimages[@]} ; do
    echo $gcrimageName
    docker pull gcr.io/google_containers/$gcrimageName
    docker tag gcr.io/google_containers/$gcrimageName $privaterepo/$gcrimageName
    docker push $privaterepo/$gcrimageName
done
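Assuming the script is saved as sync_images.sh (the filename is my choice; the original does not name it), run it on a machine that can reach gcr.io and is logged in to the target registry:

chmod +x sync_images.sh
./sync_images.sh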
Edit inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml to point the K8s image repository at your own:
# kube_image_repo: "gcr.io/google-containers"
kube_image_repo: "jiashiwen"
Edit roles/download/defaults/main.yml:
#dnsautoscaler_image_repo: "k8s.gcr.io/cluster-proportional-autoscaler-{{ image_arch }}"
dnsautoscaler_image_repo: "jiashiwen/cluster-proportional-autoscaler-{{ image_arch }}"

#kube_image_repo: "gcr.io/google-containers"
kube_image_repo: "jiashiwen"

#pod_infra_image_repo: "gcr.io/google_containers/pause-{{ image_arch }}"
pod_infra_image_repo: "jiashiwen/pause-{{ image_arch }}"

#dashboard_image_repo: "gcr.io/google_containers/kubernetes-dashboard-{{ image_arch }}"
dashboard_image_repo: "jiashiwen/kubernetes-dashboard-{{ image_arch }}"

#nodelocaldns_image_repo: "k8s.gcr.io/k8s-dns-node-cache"
nodelocaldns_image_repo: "jiashiwen/k8s-dns-node-cache"

#kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
kubeadm_download_url: "http://10.0.0.60:8888/kubeadm"

#hyperkube_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/{{ image_arch }}/hyperkube"
hyperkube_download_url: "http://10.0.0.60:8888/hyperkube"
3. Run the installation
Install command:
ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml
Reset command:
ansible-playbook -i inventory/mycluster/inventory.ini reset.yml
4. Verify the K8s cluster
Install kubectl
Open https://storage.googleapis.com/kubernetes-release/release/stable.txt in a local browser to get the latest version, v1.14.1.
Substitute that version, v1.14.1, for $(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt) in the download address to get the actual URL: https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubectl
Upload the downloaded kubectl:
scp /tmp/kubectl root@xxx:/root
Make it executable and move it into the PATH:
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
Alternatively, on Ubuntu:
sudo snap install kubectl --classic
On CentOS:
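On CentOS, the usual route is the Kubernetes yum repository (this block is a reconstruction based on the official kubectl install docs, not the original steps):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl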
Copy ~/.kube/config from the master node to whichever client machine needs to access the cluster:
scp 10.0.0.40:/root/.kube/config ~/.kube/config
Run a couple of commands to verify the cluster:
kubectl get nodes
kubectl cluster-info
5. Deploy TiDB Operator
Install Helm

Reference: https://blog.csdn.net/bbwangj/article/details/81087911
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
Check the Helm version:
helm version
Initialize Tiller, using Alibaba Cloud mirrors for the Tiller image and the stable chart repository:
helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
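To confirm that Tiller came up and that the client can reach it (a sanity check of my own, not part of the original steps):

kubectl get pods -n kube-system | grep tiller
helm version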
Provide local volumes for K8s
Reference: https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md
When tidb-operator starts the cluster it binds PVs for PD and TiKV, so multiple directories have to be created under the discovery directory.
Format and mount the disk:
mkfs.ext4 /dev/vdb
DISK_UUID=$(blkid -s UUID -o value /dev/vdb)
mkdir /mnt/$DISK_UUID
mount -t ext4 /dev/vdb /mnt/$DISK_UUID
Persist the mount in /etc/fstab:
echo UUID=`sudo blkid -s UUID -o value /dev/vdb` /mnt/$DISK_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab
Create multiple directories and bind-mount them into the discovery directory:
for i in $(seq 1 10); do
    sudo mkdir -p /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i}
    sudo mount --bind /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i}
done
Persist the bind mounts in /etc/fstab:
for i in $(seq 1 10); do
    echo /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i} none bind 0 0 | sudo tee -a /etc/fstab
done
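A quick count confirms that all ten bind mounts took effect (my own check, not from the upstream docs):

mount | grep /mnt/disks | wc -l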
Create the local-volume-provisioner for tidb-operator:
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml
kubectl get po -n kube-system -l app=local-volume-provisioner
kubectl get pv | grep local-storage
6. Install TiDB Operator
The project uses gcr.io/google-containers/hyperkube, which is not reachable from within China. The simple workaround is to re-push the image to Docker Hub and then point charts/tidb-operator/values.yaml at your copy, as sketched below.
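The re-push itself mirrors the earlier image script (yourrepo is a placeholder for your registry account, and the tag is assumed to match the cluster version):

docker pull gcr.io/google-containers/hyperkube:v1.14.1
docker tag gcr.io/google-containers/hyperkube:v1.14.1 yourrepo/hyperkube:v1.14.1
docker push yourrepo/hyperkube:v1.14.1

Then in charts/tidb-operator/values.yaml: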
scheduler:
  # With rbac.create=false, the user is responsible for creating this account
  # With rbac.create=true, this service account will be created
  # Also see rbac.create and clusterScoped
  serviceAccount: tidb-scheduler
  logLevel: 2
  replicas: 1
  schedulerName: tidb-scheduler
  resources:
    limits:
      cpu: 250m
      memory: 150Mi
    requests:
      cpu: 80m
      memory: 50Mi
  # kubeSchedulerImageName: gcr.io/google-containers/hyperkube
  kubeSchedulerImageName: yourrepo/hyperkube
  # This will default to matching your kubernetes version
  # kubeSchedulerImageTag: latest
TiDB Operator extends Kubernetes with CRDs, so the first step is to create the TidbCluster custom resource definition:
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml
kubectl get crd tidbclusters.pingcap.com
Install TiDB Operator:
git clone https://github.com/pingcap/tidb-operator.git
cd tidb-operator
helm install charts/tidb-operator --name=tidb-operator --namespace=tidb-admin
kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator
7. Deploy TiDB
helm install charts/tidb-cluster --name=demo --namespace=tidb
watch kubectl get pods --namespace tidb -l app.kubernetes.io/instance=demo -o wide
8. Verify
Install the MySQL client
Reference: https://dev.mysql.com/doc/refman/8.0/en/linux-installation.html
Install on CentOS:
wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
yum localinstall mysql80-community-release-el7-3.noarch.rpm -y
yum repolist all | grep mysql
yum-config-manager --disable mysql80-community
yum-config-manager --enable mysql57-community
yum install mysql-community-client
Install on Ubuntu:
wget https://dev.mysql.com/get/mysql-apt-config_0.8.13-1_all.deb
dpkg -i mysql-apt-config_0.8.13-1_all.deb
apt update

# select the MySQL version
dpkg-reconfigure mysql-apt-config
apt install mysql-client -y
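TiDB currently only works with the 5.7 client (see the pitfall notes below), so confirm which version you ended up with:

mysql --version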
9. Map the TiDB port
Check the TiDB service:
kubectl get svc --all-namespaces
Map the TiDB port:
# local access only
kubectl port-forward svc/demo-tidb 4000:4000 --namespace=tidb

# access from other hosts
kubectl port-forward --address 0.0.0.0 svc/demo-tidb 4000:4000 --namespace=tidb
Log in to MySQL for the first time:
mysql -h 127.0.0.1 -P 4000 -u root -D test
Change the TiDB root password:
SET PASSWORD FOR 'root'@'%' = 'wD3cLpyO5M'; FLUSH PRIVILEGES;
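Reconnect with the new password to confirm the change took effect (using the example password from above):

mysql -h 127.0.0.1 -P 4000 -u root -p'wD3cLpyO5M' -D test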
Pitfall notes
1. Installing K8s inside China
Most K8s images are hosted on gcr.io, which is unreachable from within China. The basic approach is to import the images into Docker Hub or a private registry; the K8s deployment section above covers the full process, so it is not repeated here.
2. TiDB Operator local storage configuration
When the Operator starts a cluster, PD and TiKV need to bind local storage. If there are not enough mount points, the pods cannot find a PV to bind during startup and stay stuck in the Pending or ContainerCreating state. For the detailed configuration, see the "Sharing a disk filesystem by multiple filesystem PVs" section of https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md: bind multiple mount directories on the same disk to give the Operator enough PVs to bind.
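Before deploying, a quick count of the available local PVs shows whether there are enough (my own check):

kubectl get pv | grep local-storage | grep Available | wc -l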
3. MySQL client version
TiDB currently supports only the MySQL 5.7 client; an 8.0 client fails with ERROR 1105 (HY000): Unknown charset id 255.
That is all for this article; I hope it helps with your study or work.