GlusterFS Distributed Storage High Availability Design
1. Build a GlusterFS replicated volume (using at least 2 storage nodes)
2. Configure Keepalived to manage the GlusterFS master and backup storage nodes
3. Configure a Keepalived floating IP (VIP) to provide the storage service externally
4. Achieve storage high availability (two GlusterFS servers provide a two-node replicated volume with fast failover, so the storage stays continuously available)
5. Applicable to business scenarios that store critical data
I. Environment Preparation
IP Address    | Hostname     | Storage Device | OS      | Notes
--------------|--------------|----------------|---------|------------------------------------------------
172.16.10.10  | data-node-01 | /dev/sdb1      | CentOS7 | GlusterFS storage node
172.16.10.11  | data-node-02 | /dev/sdb1      | CentOS7 | GlusterFS storage node
172.16.10.12  | web-node-12  | -              | -       | Client that mounts the volume
172.16.10.220 | (VIP)        | -              | -       | Floating IP used to provide the storage service externally
/etc/hosts configuration (on each node):
172.16.10.10 data-node-01
172.16.10.11 data-node-02
Create the GlusterFS storage mount point (on both storage nodes):
mkdir -p /glusterfs/storage1
echo "/dev/sdb1 /glusterfs/storage1 xfs defaults 0 0" >> /etc/fstab
mount -a
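Note that the fstab entry above assumes /dev/sdb1 already carries an XFS filesystem. If the partition is still blank, it has to be formatted first; a minimal sketch, not part of the original steps:

mkfs.xfs /dev/sdb1    # create the XFS filesystem for the brick (wipes any existing data on /dev/sdb1)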
II. Install the GlusterFS Server Software
1. Install the Gluster repository, then install glusterfs and its related packages
yum install centos-release-gluster -y
yum install glusterfs glusterfs-server glusterfs-cli glusterfs-geo-replication glusterfs-rdma -y
2. Install the GlusterFS client software on the client
yum install glusterfs-fuse
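The client host is assumed to have the same Gluster repository enabled and a mount point prepared; a small sketch of those prerequisites (an addition to the original steps):

yum install centos-release-gluster -y    # same repository as on the storage nodes
yum install glusterfs glusterfs-fuse -y  # FUSE client packages
mkdir -p /data                           # mount point used in the examples below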
3. Start the glusterd service
systemctl start glusterd
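Since the failover test later involves rebooting a node, it is reasonable to also enable the service at boot; this step is an addition not shown in the original:

systemctl enable glusterd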
4. Add trusted peers on any one of the nodes
gluster peer probe data-node-02
gluster peer probe data-node-01
gluster peer status
5. Create the replicated volume on any one node
mkdir /glusterfs/storage1/rep_vol1
gluster volume create rep_vol1 replica 2 data-node-01:/glusterfs/storage1/rep_vol1 data-node-02:/glusterfs/storage1/rep_vol1
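Recent GlusterFS releases warn that two-way replica volumes are prone to split-brain and ask for confirmation before creating them; accepting the prompt is fine for this lab setup. In production the usual mitigation is an arbiter brick on a third host. A hypothetical sketch, assuming an extra host data-node-03 that is not part of this environment:

gluster volume create rep_vol1 replica 3 arbiter 1 \
    data-node-01:/glusterfs/storage1/rep_vol1 \
    data-node-02:/glusterfs/storage1/rep_vol1 \
    data-node-03:/glusterfs/storage1/rep_vol1    # arbiter brick: stores metadata only, keeps quorum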
6. Start the replicated volume
gluster volume start rep_vol1
7. Check the replicated volume status
gluster volume status
gluster volume info
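For a replicated volume it is also worth checking the self-heal status, especially after a node has been down for a while; this check is an addition to the original steps:

gluster volume heal rep_vol1 info    # list files pending self-heal on each brick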
8. Mount the replicated volume on the client for testing
mount -t glusterfs data-node-01:rep_vol1 /data/
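If the mount should survive client reboots, an /etc/fstab entry is the usual approach. The backup-volfile-servers mount option lets the client fetch the volume file from the other node when the first one is down; this entry is an addition to the original steps:

echo "data-node-01:/rep_vol1 /data glusterfs defaults,_netdev,backup-volfile-servers=data-node-02 0 0" >> /etc/fstab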
9. Test writing data to the replicated volume from the client
for i in `seq -w 1 3`;do cp -rp /var/log/messages /data/test-$i;done
[root@localhost ~]# ls /data/
111  1.txt  2.txt  anaconda-ks.cfg  test-1  test-2  test-3
III. Install and Configure Keepalived
1. Install Keepalived
yum -y install keepalived
2. Start the keepalived service
systemctl start keepalived
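As with glusterd, enabling keepalived at boot is a reasonable addition so that a rebooted node rejoins the VRRP group automatically (in the original steps it is started manually after a reboot):

systemctl enable keepalived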
3. Keepalived configuration on the master node (data-node-01): /etc/keepalived/keepalived.conf
! Configuration File for keepalived (master node)
global_defs {
    notification_email {
        mail@huangming.org
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id GFS_HA_MASTER
    vrrp_skip_check_adv_addr
}

vrrp_sync_group GFS_HA_GROUP {
    group {
        GFS_HA_1
    }
}

! health check: adds "weight 20" to the priority while the GlusterFS services are healthy
vrrp_script monitor_glusterfs_status {
    script "/etc/keepalived/scripts/monitor_glusterfs_status.sh"
    interval 5
    fall 3
    rise 1
    weight 20
}

vrrp_instance GFS_HA_1 {
    ! both nodes start as BACKUP; the healthy node with the highest effective priority becomes MASTER
    state BACKUP
    interface ens34
    virtual_router_id 107
    ! effective priority while glusterd is healthy: 100 + 20 (script weight) = 120
    priority 100
    advert_int 2
    ! a recovered node stays in BACKUP instead of taking the VIP back
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11112222
    }
    virtual_ipaddress {
        172.16.10.220/24 dev ens34
    }
    track_script {
        monitor_glusterfs_status
    }
    track_interface {
        ens34
    }
    notify_master "/etc/keepalived/scripts/keepalived_notify.sh master"
    notify_backup "/etc/keepalived/scripts/keepalived_notify.sh backup"
    notify_fault "/etc/keepalived/scripts/keepalived_notify.sh fault"
    notify_stop "/etc/keepalived/scripts/keepalived_notify.sh stop"
}
4. Keepalived configuration on the backup node (data-node-02): /etc/keepalived/keepalived.conf
! Configuration File for keepalived (backup node)
global_defs {
    notification_email {
        mail@huangming.org
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id GFS_HA_MASTER
    vrrp_skip_check_adv_addr
}

vrrp_sync_group GFS_HA_GROUP {
    group {
        GFS_HA_1
    }
}

vrrp_script monitor_glusterfs_status {
    script "/etc/keepalived/scripts/monitor_glusterfs_status.sh"
    interval 5
    fall 3
    rise 1
    weight 20
}

vrrp_instance GFS_HA_1 {
    state BACKUP
    interface ens34
    virtual_router_id 107
    ! lower base priority than the master node; no nopreempt here
    priority 90
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 11112222
    }
    virtual_ipaddress {
        172.16.10.220/24 dev ens34
    }
    track_script {
        monitor_glusterfs_status
    }
    track_interface {
        ens34
    }
    notify_master "/etc/keepalived/scripts/keepalived_notify.sh master"
    notify_backup "/etc/keepalived/scripts/keepalived_notify.sh backup"
    notify_fault "/etc/keepalived/scripts/keepalived_notify.sh fault"
    notify_stop "/etc/keepalived/scripts/keepalived_notify.sh stop"
}
5. Keepalived VRRP monitoring script: /etc/keepalived/scripts/monitor_glusterfs_status.sh
#!/bin/bash
# check the glusterd and glusterfsd services
systemctl status glusterd &>/dev/null
if [ $? -eq 0 ];then
    systemctl status glusterfsd &>/dev/null
    if [ $? -eq 0 ];then
        # both glusterd and glusterfsd are running
        exit 0
    else
        exit 2
    fi
else
    # glusterd is down: try to restart it, then stop keepalived so the VIP moves away
    systemctl start glusterd &>/dev/null
    systemctl stop keepalived &>/dev/null && exit 1
fi
6. Keepalived notify script (manages the glusterd service): /etc/keepalived/scripts/keepalived_notify.sh
#!/bin/bash
# keepalived notify script for glusterd

master() {
    # on becoming MASTER: make sure glusterd is running
    systemctl status glusterd
    if [ $? -ne 0 ];then
        systemctl start glusterd
    else
        systemctl restart glusterd
    fi
}

backup() {
    # on becoming BACKUP (also used for FAULT and STOP): start glusterd if it is not running
    systemctl status glusterd
    if [ $? -ne 0 ];then
        systemctl start glusterd
    fi
}

case $1 in
    master)
        master
        ;;
    backup)
        backup
        ;;
    fault)
        backup
        ;;
    stop)
        backup
        systemctl restart keepalived
        ;;
    *)
        echo $"Usage: $0 {master|backup|fault|stop}"
esac
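Both scripts must be executable, otherwise the vrrp_script check and the notify hooks cannot run. The keepalived logs shown later also warn that the default script user keepalived_script does not exist and that script_security is not enabled; creating the user (and, optionally, adding enable_script_security to global_defs) silences those warnings. These commands are an addition to the original steps:

chmod +x /etc/keepalived/scripts/monitor_glusterfs_status.sh \
         /etc/keepalived/scripts/keepalived_notify.sh
useradd -r -s /sbin/nologin keepalived_script    # optional: user keepalived expects for running scripts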
IV. Test Keepalived's Automatic Takeover of the GlusterFS Service and Storage Availability
1. Restart the keepalived service
systemctl restart keepalived.service
2. Check which node holds the VIP
## On node 1
[root@data-node-01 ~]# ip a show dev ens34
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:b2:b5:2a brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global ens34
       valid_lft forever preferred_lft forever
    inet 172.16.10.220/24 scope global secondary ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::ce9a:ee2e:7b6c:a6bb/64 scope link
       valid_lft forever preferred_lft forever

## On node 2
[root@data-node-02 ~]# ip a show dev ens34
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ba:42:cf brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.11/24 brd 172.16.10.255 scope global ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::e23:ce0:65c3:ffbf/64 scope link
       valid_lft forever preferred_lft forever
3. On the client, mount the replicated volume through the VIP and verify that it works
mount -t glusterfs 172.16.10.220:rep_vol1 /data/
[root@localhost ~]# ls /data/
111  1.txt  2.txt  anaconda-ks.cfg  test  test-1  test-2  test-3
[root@localhost ~]# mkdir /data/test
[root@localhost ~]# echo 1111 >/data/test/1.txt
[root@localhost ~]# ls /data/test
1.txt
[root@localhost ~]# cat /data/test/1.txt
1111
Check the contents of the replicated volume on a GlusterFS node
[root@data-node-02 ~]# ls /glusterfs/storage1/rep_vol1/
111  1.txt  2.txt  anaconda-ks.cfg  test  test-1  test-2  test-3
4. Test GlusterFS service failover
Shut down or reboot the master node (node 1) and check whether the GlusterFS service and the VIP fail over to node 2.
[root@data-node-01 ~]# reboot

[root@data-node-02 ~]# ip a show dev ens34
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ba:42:cf brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.11/24 brd 172.16.10.255 scope global ens34
       valid_lft forever preferred_lft forever
    inet 172.16.10.220/24 scope global secondary ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::e23:ce0:65c3:ffbf/64 scope link
       valid_lft forever preferred_lft forever

[root@data-node-02 ~]# tail -f /var/log/messages
Aug 27 22:56:19 data-node-02 Keepalived_vrrp[2563]: SECURITY VIOLATION - scripts are being executed but script_security not enabled.
Aug 27 22:56:19 data-node-02 Keepalived_vrrp[2563]: Sync group GFS_HA_GROUP has only 1 virtual router(s) - removing
Aug 27 22:56:19 data-node-02 Keepalived_vrrp[2563]: VRRP_Instance(GFS_HA_1) removing protocol VIPs.
Aug 27 22:56:19 data-node-02 Keepalived_vrrp[2563]: Using LinkWatch kernel netlink reflector...
Aug 27 22:56:19 data-node-02 Keepalived_vrrp[2563]: VRRP_Instance(GFS_HA_1) Entering BACKUP STATE
Aug 27 22:56:19 data-node-02 Keepalived_vrrp[2563]: VRRP sockpool: [ifindex(3), proto(112), unicast(0), fd(10,11)]
Aug 27 22:56:19 data-node-02 Keepalived_vrrp[2563]: VRRP_Script(monitor_glusterfs_status) succeeded
Aug 27 22:56:19 data-node-02 kernel: nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
Aug 27 22:56:19 data-node-02 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
Aug 27 22:56:19 data-node-02 kernel: IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
Aug 27 22:56:19 data-node-02 kernel: IPVS: Creating netns size=2040 id=0
Aug 27 22:56:19 data-node-02 kernel: IPVS: ipvs loaded.
Aug 27 22:56:19 data-node-02 Keepalived_healthcheckers[2562]: Opening file '/etc/keepalived/keepalived.conf'.
Aug 27 22:56:21 data-node-02 Keepalived_vrrp[2563]: VRRP_Instance(GFS_HA_1) Changing effective priority from 90 to 110
Aug 27 23:01:01 data-node-02 systemd: Started Session 3 of user root.
Aug 27 23:01:01 data-node-02 systemd: Starting Session 3 of user root.
Aug 27 23:03:09 data-node-02 Keepalived_vrrp[2563]: VRRP_Instance(GFS_HA_1) Transition to MASTER STATE
Aug 27 23:03:11 data-node-02 Keepalived_vrrp[2563]: VRRP_Instance(GFS_HA_1) Entering MASTER STATE
Aug 27 23:03:11 data-node-02 Keepalived_vrrp[2563]: VRRP_Instance(GFS_HA_1) setting protocol VIPs.
Aug 27 23:03:11 data-node-02 Keepalived_vrrp[2563]: Sending gratuitous ARP on ens34 for 172.16.10.220
Aug 27 23:03:11 data-node-02 Keepalived_vrrp[2563]: VRRP_Instance(GFS_HA_1) Sending/queueing gratuitous ARPs on ens34 for 172.16.10.220
Aug 27 23:03:11 data-node-02 Keepalived_vrrp[2563]: Sending gratuitous ARP on ens34 for 172.16.10.220
Aug 27 23:03:11 data-node-02 Keepalived_vrrp[2563]: Sending gratuitous ARP on ens34 for 172.16.10.220
Aug 27 23:03:11 data-node-02 Keepalived_vrrp[2563]: Sending gratuitous ARP on ens34 for 172.16.10.220
Aug 27 23:03:11 data-node-02 Keepalived_vrrp[2563]: Sending gratuitous ARP on ens34 for 172.16.10.220
Aug 27 23:03:11 data-node-02 systemd: Stopping GlusterFS, a clustered file-system server...
Aug 27 23:03:11 data-node-02 systemd: Starting GlusterFS, a clustered file-system server...
Aug 27 23:03:12 data-node-02 systemd: Started GlusterFS, a clustered file-system server.
Aug 27 23:03:16 data-node-02 Keepalived_vrrp[2563]: Sending gratuitous ARP on ens34 for 172.16.10.220
Aug 27 23:03:16 data-node-02 Keepalived_vrrp[2563]: VRRP_Instance(GFS_HA_1) Sending/queueing gratuitous ARPs on ens34 for 172.16.10.220
Aug 27 23:03:16 data-node-02 Keepalived_vrrp[2563]: Sending gratuitous ARP on ens34 for 172.16.10.220
Aug 27 23:03:16 data-node-02 Keepalived_vrrp[2563]: Sending gratuitous ARP on ens34 for 172.16.10.220
Aug 27 23:03:16 data-node-02 Keepalived_vrrp[2563]: Sending gratuitous ARP on ens34 for 172.16.10.220
Aug 27 23:03:16 data-node-02 Keepalived_vrrp[2563]: Sending gratuitous ARP on ens34 for 172.16.10.220
Verify on the client that the storage is still available
[root@localhost ~]# df -Th
Filesystem             Type            Size  Used Avail Use% Mounted on
/dev/mapper/cl-root    xfs              40G  1.2G   39G   3% /
devtmpfs               devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs                  tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs                  tmpfs           1.9G  8.6M  1.9G   1% /run
tmpfs                  tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1              xfs            1014M  139M  876M  14% /boot
tmpfs                  tmpfs           378M     0  378M   0% /run/user/0
172.16.10.220:rep_vol1 fuse.glusterfs   10G  136M  9.9G   2% /data
[root@localhost ~]# ls /data/
111  1.txt  2.txt  anaconda-ks.cfg  test  test-1  test-2  test-3
[root@localhost ~]# touch /data/test.log
[root@localhost ~]# ls -l /data/
total 964
drwxr-xr-x 3 root root   4096 Aug 27 21:58 111
-rw-r--r-- 1 root root     10 Aug 27 21:23 1.txt
-rw-r--r-- 1 root root      6 Aug 27 21:36 2.txt
-rw------- 1 root root   2135 Aug 27 21:44 anaconda-ks.cfg
drwxr-xr-x 2 root root   4096 Aug 27 22:59 test
-rw------- 1 root root 324951 Aug 27 21:23 test-1
-rw------- 1 root root 324951 Aug 27 21:23 test-2
-rw------- 1 root root 324951 Aug 27 21:23 test-3
-rw-r--r-- 1 root root      0 Aug 27 23:05 test.log
Check the status of node 1 after it comes back up
[root@data-node-01 ~]# ip a show dev ens34
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:b2:b5:2a brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::ce9a:ee2e:7b6c:a6bb/64 scope link
       valid_lft forever preferred_lft forever
Start the keepalived service on node 1 again
[root@data-node-01 ~]# systemctl start keepalived.service
Check the keepalived log (master/backup state)
Aug 27 23:07:42 data-node-01 systemd: Starting LVS and VRRP High Availability Monitor...
Aug 27 23:07:43 data-node-01 Keepalived[2914]: Starting Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
Aug 27 23:07:43 data-node-01 Keepalived[2914]: Opening file '/etc/keepalived/keepalived.conf'.
Aug 27 23:07:43 data-node-01 Keepalived[2915]: Starting Healthcheck child process, pid=2916
Aug 27 23:07:43 data-node-01 systemd: Started LVS and VRRP High Availability Monitor.
Aug 27 23:07:43 data-node-01 Keepalived[2915]: Starting VRRP child process, pid=2917
Aug 27 23:07:43 data-node-01 Keepalived_vrrp[2917]: Registering Kernel netlink reflector
Aug 27 23:07:43 data-node-01 Keepalived_vrrp[2917]: Registering Kernel netlink command channel
Aug 27 23:07:43 data-node-01 Keepalived_vrrp[2917]: Registering gratuitous ARP shared channel
Aug 27 23:07:43 data-node-01 Keepalived_vrrp[2917]: Opening file '/etc/keepalived/keepalived.conf'.
Aug 27 23:07:43 data-node-01 Keepalived_vrrp[2917]: WARNING - default user 'keepalived_script' for script execution does not exist - please create.
Aug 27 23:07:43 data-node-01 Keepalived_vrrp[2917]: SECURITY VIOLATION - scripts are being executed but script_security not enabled.
Aug 27 23:07:43 data-node-01 Keepalived_vrrp[2917]: Sync group GFS_HA_GROUP has only 1 virtual router(s) - removing
Aug 27 23:07:43 data-node-01 Keepalived_vrrp[2917]: VRRP_Instance(GFS_HA_1) removing protocol VIPs.
Aug 27 23:07:43 data-node-01 Keepalived_vrrp[2917]: Using LinkWatch kernel netlink reflector...
Aug 27 23:07:43 data-node-01 Keepalived_vrrp[2917]: VRRP_Instance(GFS_HA_1) Entering BACKUP STATE
Aug 27 23:07:43 data-node-01 Keepalived_vrrp[2917]: VRRP sockpool: [ifindex(3), proto(112), unicast(0), fd(10,11)]
Aug 27 23:07:43 data-node-01 Keepalived_vrrp[2917]: VRRP_Script(monitor_glusterfs_status) succeeded
Aug 27 23:07:43 data-node-01 kernel: nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
Aug 27 23:07:43 data-node-01 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
Aug 27 23:07:43 data-node-01 kernel: IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
Aug 27 23:07:43 data-node-01 kernel: IPVS: Creating netns size=2040 id=0
Aug 27 23:07:43 data-node-01 Keepalived_healthcheckers[2916]: Opening file '/etc/keepalived/keepalived.conf'.
Aug 27 23:07:43 data-node-01 kernel: IPVS: ipvs loaded.
Aug 27 23:07:45 data-node-01 Keepalived_vrrp[2917]: VRRP_Instance(GFS_HA_1) Changing effective priority from 100 to 120
As the logs show, after node 1 recovers from the failure, keepalived on it comes back up in the BACKUP state (because of nopreempt it does not take the VIP back) while continuing to supervise the GlusterFS service. If node 2 then fails, the service, the storage, and the VIP switch over to node 1, which keeps providing the storage service externally. This achieves high availability for the storage.