This article covers a standalone installation of OpenStack Swift: one node runs the proxy service as the proxy node, and the other nodes act as storage nodes. The commands below target Ubuntu; CentOS equivalents are given where they differ.

Install the dependencies

Both the proxy node and the storage nodes need these packages.

On Ubuntu:
apt-get update; apt-get upgrade
sudo apt-get install curl gcc memcached rsync sqlite3 xfsprogs git-core \
    libffi-dev xinetd liberasurecode-dev \
    python-setuptools \
    python-coverage python-dev python-nose \
    python-xattr python-eventlet \
    python-greenlet python-pastedeploy \
    python-netifaces python-pip python-dnspython \
    python-mock \
    libxml2-dev libxslt1-dev zlib1g-dev \
    autoconf automake libtool
On CentOS:
yum update; yum upgrade
sudo yum install curl gcc memcached rsync sqlite xfsprogs git-core \
    libffi-devel xinetd liberasurecode-devel \
    python-setuptools \
    python-coverage python-devel python-nose \
    pyxattr python-eventlet \
    python-greenlet python-paste-deploy \
    python-netifaces python-pip python-dns \
    python-mock \
    libxml2-devel libxslt-devel zlib-devel \
    autoconf automake libtool
Install liberasurecode

git clone https://github.com/openstack/liberasurecode.git
cd liberasurecode
./autogen.sh
./configure
make
make test
sudo make install

Add the following line to /etc/ld.so.conf:

/usr/local/lib

Then run:

sudo ldconfig
Install the Swift client (you can skip this on nodes that will not run client programs)

cd ~
git clone https://github.com/openstack/python-swiftclient.git
cd ./python-swiftclient
sudo pip install -r requirements.txt
sudo python setup.py develop
Install Swift

cd ~
git clone https://github.com/openstack/swift.git
cd ./swift
sudo pip install -r requirements.txt
sudo python setup.py develop
Some required directories must be recreated on every boot. Edit /etc/rc.local and add the following:

sudo mkdir -p /var/run/swift
sudo chown user:user /var/run/swift
sudo mkdir -p /var/cache/swift
sudo chown user:user /var/cache/swift
# replace user with the actual username
Create the Swift configuration directory

mkdir /etc/swift
chown -R user:user /etc/swift  # replace user with the actual username
Set up the proxy node

Download the sample proxy configuration:

curl -o /etc/swift/proxy-server.conf \
    https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample
Strip comments and blank lines:

cp proxy-server.conf proxy-server.conf.bak
cat proxy-server.conf.bak | grep -v '^#' | grep -v '^$' > proxy-server.conf
Edit proxy-server.conf:

[DEFAULT]
bind_port = 8080
user = user    # the actual username

[pipeline:main]
pipeline = healthcheck cache tempauth proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user_test5_tester5 = testing5 service
reseller_prefix = AUTH
token_life = 86400

[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:proxy-logging]
use = egg:swift#proxy_logging
Note that tempauth appears in the pipeline (pipeline = ... tempauth ...). Each tempauth user entry in the configuration file has the format:

user_<account>_<username> = <key> [group] [group] [...] [storage_url]

key: the user's password.

There are two special groups, .admin and .reseller_admin:

.reseller_admin grants permission to operate on any account.
.admin grants permission to operate on the user's own account.

A user with neither group can only access those containers that an .admin or .reseller_admin user has granted access to.
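As a sketch of how to read these user lines, here is a small hypothetical parser (not part of Swift itself; tempauth does its own parsing internally):

```python
def parse_tempauth_user(key, value):
    # key looks like "user_<account>_<username>"; value looks like
    # "<key> [group] [group] [...] [storage_url]"
    # (simplified: assumes the account name contains no underscore)
    _, account, username = key.split('_', 2)
    parts = value.split()
    password, rest = parts[0], parts[1:]
    groups = [p for p in rest if not p.startswith('http')]
    urls = [p for p in rest if p.startswith('http')]
    return {'account': account, 'user': username, 'key': password,
            'groups': groups, 'storage_url': urls[0] if urls else None}

info = parse_tempauth_user('user_admin_admin', 'admin .admin .reseller_admin')
# the admin user's password is "admin" and it is in both special groups
```

Running this on the first user line above shows the account is admin, the key is admin, and the groups are ['.admin', '.reseller_admin'].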
Set up the storage nodes

The following steps must be repeated on every storage node.
Format the device

List the available devices:

fdisk -l

Find the new device and partition it (partitioning is optional):

fdisk /dev/sdb

Format it as an XFS filesystem:

mkfs.xfs /dev/sdb1
Create the mount point:

mkdir -p /srv/node/sdb1
sudo chown -R user:user /srv/node  # replace user with the actual username
To mount it automatically at boot, add the following line to /etc/fstab:

/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
Mount it:

mount /srv/node/sdb1
Configure the rsyncd service

Edit /etc/rsyncd.conf:

uid = user    # the actual username
gid = user    # the actual group
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 0.0.0.0

[account]
max connections = 25
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 25
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 25
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
Edit /etc/default/rsync and set RSYNC_ENABLE to true:

RSYNC_ENABLE=true
Start rsync and memcached, and enable them at boot

On Ubuntu:

apt-get install sysv-rc-conf
sysv-rc-conf rsync on
service rsync start
service memcached start
sysv-rc-conf memcached on
On CentOS:

systemctl enable rsyncd.service
systemctl start rsyncd.service
systemctl enable memcached.service
systemctl start memcached.service
systemctl status rsyncd.service     # check status
systemctl status memcached.service  # check status
Test it:

rsync rsync://pub@127.0.0.1

You can also run this test from another machine.
Configure the account, container, and object services

Download the sample configuration files:

curl -o /etc/swift/account-server.conf \
    https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample
curl -o /etc/swift/container-server.conf \
    https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample
curl -o /etc/swift/object-server.conf \
    https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample
Strip comments and blank lines:

cp account-server.conf account-server.conf.bak
cat account-server.conf.bak | grep -v '^#' | grep -v '^$' > account-server.conf
cp container-server.conf container-server.conf.bak
cat container-server.conf.bak | grep -v '^#' | grep -v '^$' > container-server.conf
cp object-server.conf object-server.conf.bak
cat object-server.conf.bak | grep -v '^#' | grep -v '^$' > object-server.conf
Configure account-server.conf:

[DEFAULT]
bind_port = 6202
devices = /srv/node
bind_ip = 0.0.0.0
user = user    # the actual username
swift_dir = /etc/swift
mount_check = true

[pipeline:main]
pipeline = healthcheck recon account-server

[app:account-server]
use = egg:swift#account

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

[account-replicator]
[account-auditor]
[account-reaper]

[filter:xprofile]
use = egg:swift#xprofile
Configure container-server.conf:

[DEFAULT]
bind_port = 6201
devices = /srv/node
bind_ip = 0.0.0.0
user = user    # the actual username
swift_dir = /etc/swift
mount_check = true

[pipeline:main]
pipeline = healthcheck recon container-server

[app:container-server]
use = egg:swift#container

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

[container-replicator]
[container-updater]
[container-auditor]
[container-sync]

[filter:xprofile]
use = egg:swift#xprofile

[container-sharder]
Configure object-server.conf:

[DEFAULT]
bind_port = 6200
devices = /srv/node
bind_ip = 0.0.0.0
user = user    # the actual username
swift_dir = /etc/swift
mount_check = true

[pipeline:main]
pipeline = healthcheck recon object-server

[app:object-server]
use = egg:swift#object

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

[object-replicator]
[object-reconstructor]
[object-updater]
[object-auditor]

[filter:xprofile]
use = egg:swift#xprofile
Configure swift.conf

Do this on the proxy node.

Download the sample file:

curl -o /etc/swift/swift.conf \
    https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample

In the [swift-hash] section, set the suffix and prefix. Any values will do, but once set they must never be changed:

[swift-hash]
swift_hash_path_suffix = b1b8198f
swift_hash_path_prefix = c167cd22
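To see why the suffix and prefix must never change, here is a rough sketch of how Swift derives an object's hash from them (simplified from Swift's hash_path helper; not the exact implementation):

```python
import hashlib

def hash_path(account, container=None, obj=None,
              prefix='c167cd22', suffix='b1b8198f'):
    # md5 over prefix + "/account[/container[/object]]" + suffix
    path = '/' + '/'.join(p for p in (account, container, obj) if p)
    return hashlib.md5((prefix + path + suffix).encode()).hexdigest()

h1 = hash_path('AUTH_test', 'photos', 'cat.jpg')
h2 = hash_path('AUTH_test', 'photos', 'cat.jpg', suffix='other')
# A different suffix yields a different hash, so every object would map to
# a different partition and all existing data would become unreachable.
```

The hash determines where each object lives on disk and in the ring, which is why changing either value effectively loses the cluster's data.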
Then edit the default storage policy in the [storage-policy:0] section:

[storage-policy:0]
name = Policy-0
default = yes

Copy the finished configuration files to every storage node, and to any additional proxy nodes as well.
Create the rings

Create the account ring, container ring, and object ring:

swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1

Here 18 means the number of partitions is 2^18, 3 is the number of replicas, and 1 is the minimum number of hours before a given partition can be moved again.
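As a sketch of what the part power of 18 means (simplified: Swift's ring selects a partition from the top bits of the md5 hash of the object's hash path, though the exact byte handling here is illustrative):

```python
import hashlib

PART_POWER = 18
PART_SHIFT = 32 - PART_POWER        # md5 bits discarded below the partition
NUM_PARTITIONS = 2 ** PART_POWER    # 262144 partitions

def get_partition(path, prefix=b'c167cd22', suffix=b'b1b8198f'):
    # the top PART_POWER bits of md5(prefix + path + suffix) pick the partition
    digest = hashlib.md5(prefix + path + suffix).digest()
    return int.from_bytes(digest[:4], 'big') >> PART_SHIFT

part = get_partition(b'/AUTH_test/photos/cat.jpg')
# part is always in the range [0, 262144)
```

Because the hash is stable, the same object always lands on the same partition, and the ring only decides which devices hold each partition.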
Add devices to the rings

swift-ring-builder account.builder add r1z1-192.168.2.129:6202/sdb1 100
swift-ring-builder container.builder add r1z1-192.168.2.129:6201/sdb1 100
swift-ring-builder object.builder add r1z1-192.168.2.129:6200/sdb1 100
swift-ring-builder account.builder add r1z1-192.168.2.130:6202/sdb1 100
swift-ring-builder container.builder add r1z1-192.168.2.130:6201/sdb1 100
swift-ring-builder object.builder add r1z1-192.168.2.130:6200/sdb1 100
swift-ring-builder account.builder add r1z1-192.168.2.131:6202/sdb1 100
swift-ring-builder container.builder add r1z1-192.168.2.131:6201/sdb1 100
swift-ring-builder object.builder add r1z1-192.168.2.131:6200/sdb1 100

The ports must match the bind_port values configured above: 6202 for account, 6201 for container, and 6200 for object.
r1 means region 1 and z1 means zone 1. A zone is a logical grouping: you assign each device to a zone, and the ring builder takes zones into account when assigning partitions, trying to place replicas in different zones.

sdb1 is the storage device used by Swift.

100 is the device's weight, which is also considered when partitions are assigned.
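With three equal-weight devices, each device ends up holding roughly an equal share of the partition replicas. A back-of-the-envelope calculation (not using the real ring builder) for the setup above:

```python
# device weights as passed to swift-ring-builder add
weights = {
    '192.168.2.129/sdb1': 100,
    '192.168.2.130/sdb1': 100,
    '192.168.2.131/sdb1': 100,
}
total_weight = sum(weights.values())
partitions = 2 ** 18   # part power 18
replicas = 3

# each device's expected share of partition replicas, proportional to weight
shares = {dev: partitions * replicas * w // total_weight
          for dev, w in weights.items()}
# each of the three devices holds about 262144 partition replicas
```

Doubling one device's weight would roughly double its share, which is how you steer more data onto larger disks.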
Inspect the rings to verify:

swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder
Rebalance the rings:

swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
Distribute the rings

Copy the generated container.ring.gz, object.ring.gz, and account.ring.gz from /etc/swift to the /etc/swift directory on every storage node.

If any other nodes run the Swift proxy service, copy the files there as well.
Start the services

On the proxy node, start the proxy service:

sudo swift-init proxy start

On each storage node, start the storage services:

sudo swift-init account-server start
sudo swift-init account-replicator start
sudo swift-init account-auditor start
sudo swift-init container-server start
sudo swift-init container-replicator start
sudo swift-init container-updater start
sudo swift-init container-auditor start
sudo swift-init object-server start
sudo swift-init object-replicator start
sudo swift-init object-updater start
sudo swift-init object-auditor start

To stop the services, replace start with stop. You can also put these commands in a script and run them all at once.
Test the installation

This requires the Swift client installed earlier.
import swiftclient

authurl = 'http://192.168.2.189:8080/auth/v1.0'
username = 'test:tester'
password = 'testing'
conn = swiftclient.Connection(authurl=authurl, user=username, key=password)

# list the account: headers, containers = conn.get_account()

# create a container
conn.put_container("container_demo")

# upload a file
with open('hello.txt', 'r') as localfile:
    conn.put_object("container_demo", "hello_file", contents=localfile.read())

# upload a string
conn.put_object("container_demo", "hello_string", contents="hello world!")

# list the objects in a container
for data in conn.get_container("container_demo")[1]:
    print '{0}\t{1}\t{2}'.format(data['name'], data['bytes'], data['last_modified'])

# download an object
obj_tuple = conn.get_object("container_demo", "hello_file")
with open('down_hello_file', 'wb') as localfile:
    localfile.write(obj_tuple[1])

# delete an object
conn.delete_object("container_demo", "hello_file")
Closing notes

To scale the cluster out, add the new machines' information and regenerate the ring files, then copy the new ring files into /etc/swift on every server. The Swift components re-read the ring files periodically.

Server clocks must be synchronized, or problems will follow. In testing, one server ran two hours behind the others; uploading a file through that proxy reported success, but the upload did not actually take effect. Likely because of the clock skew, the other machines treated the file as already deleted.