Deploying a MongoDB Sharded Cluster on docker-swarm


  • This article describes how to set up a MongoDB sharded cluster in a docker-swarm environment.
  • The cluster is created in authorization mode, but if you start with the authorization-enabled stack file right away you will not be able to create users. Create the users first while running without authorization, then restart in authorization mode. (The two modes use different stack files but mount the same data directories.)

Architecture

  • Three nodes in total: breakpad (the manager/primary server), bpcluster, bogon

Prerequisites

  • Install Docker
  • Initialize the swarm cluster (see the join sketch below)
    • docker swarm init
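
Running docker swarm init on the manager prints a docker swarm join command; running that command on the other two nodes adds them to the cluster. A minimal sketch (the token and manager address below are placeholders, use the ones printed by your own init):

# on bpcluster and bogon: paste the join command printed by the manager
docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377

# back on the manager: the HOSTNAME column should list breakpad, bpcluster and bogon,
# matching the placement constraints used in the stack files below
docker node ls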

Deployment steps

Once the first three steps are done the cluster is ready to use; if you do not need authenticated access, you can skip the remaining four steps.

  1. Create directories
  2. Deploy the services (no authorization)
  3. Configure sharding
  4. Generate the keyfile and fix its permissions
  5. Copy the keyfile to the other nodes
  6. Add the user
  7. Restart the services (authorization mode)

1. Create directories

Run before-deploy.sh on all servers

#!/bin/bash

DIR=/data/fates
DATA_PATH="${DIR}/mongo"
# sudo password of the current user -- do not name this variable PWD,
# because bash resets PWD on every cd and the sudo pipes below would break
SUDO_PASS='1qaz2wsx!@#'

DATA_DIR_LIST=('config' 'shard1' 'shard2' 'shard3' 'script')

function check_directory() {
  if [ ! -d "${DATA_PATH}" ]; then
    echo "create directory: ${DATA_PATH}"
    echo "${SUDO_PASS}" | sudo -S mkdir -p "${DATA_PATH}"
  else
    echo "directory ${DATA_PATH} already exists."
  fi

  cd "${DATA_PATH}"

  for SUB_DIR in "${DATA_DIR_LIST[@]}"
  do
    if [ ! -d "${DATA_PATH}/${SUB_DIR}" ]; then
      echo "create directory: ${DATA_PATH}/${SUB_DIR}"
      echo "${SUDO_PASS}" | sudo -S mkdir -p "${DATA_PATH}/${SUB_DIR}"
    else
      echo "directory: ${DATA_PATH}/${SUB_DIR} already exists."
    fi
  done

  echo "${SUDO_PASS}" | sudo -S chown -R $USER:$USER "${DATA_PATH}"
}

check_directory

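If you do not want to log in to each server by hand, one possible way to run the script everywhere (a sketch; username is a placeholder, bpcluster and bogon are the hostnames used in this article) is to push it from the manager:

# run before-deploy.sh on the two other nodes from the manager
for HOST in bpcluster bogon; do
  scp before-deploy.sh username@${HOST}:/tmp/
  ssh username@${HOST} 'bash /tmp/before-deploy.sh'
done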

2. Start the mongo cluster without authorization

  • At this point authorization is not enabled yet, so no login is required; this is the window in which users can be created.

On the manager server, create fates-mongo.yaml and deploy it with the following command (adjust the constraints values to match your own hostnames)

docker stack deploy -c fates-mongo.yaml fates-mongo
version: '3.4'
services:
  shard1-server1:
    image: mongo:4.0.5
    # --shardsvr: this flag only changes the default port from 27017 to 27018; it can be omitted if --port is given explicitly
    # --directoryperdb: store each database in its own directory
    command: mongod --shardsvr --directoryperdb --replSet shard1
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard2-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard3-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard1-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard2-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard3-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard1-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard2-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard3-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  config1:
    image: mongo:4.0.5
    # --configsvr: this flag only changes the default port from 27017 to 27019; it can be omitted if --port is given explicitly
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  config2:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  config3:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  mongos:
    image: mongo:4.0.5
    # since mongo 3.6 the default bind IP is 127.0.0.1; binding 0.0.0.0 lets other containers and hosts connect
    command: mongos --configdb fates-mongo-config/config1:27019,config2:27019,config3:27019 --bind_ip 0.0.0.0 --port 27017
    networks:
      - mongo
    ports:
      - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - config1
      - config2
      - config3
    deploy:
      restart_policy:
        condition: on-failure
      mode: global

networks:
  mongo:
    driver: overlay
    # uncomment the next line if the overlay network was created outside this stack
    # external: true

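Before configuring sharding it is worth checking that every service actually started (a quick sanity check; service names are prefixed with the stack name fates-mongo):

docker stack ps fates-mongo   # one task per service, CURRENT STATE should be Running
docker service ls             # REPLICAS should read 1/1 (mongos is a global service)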

3. Configure sharding

# initialize the config server replica set
docker exec -it $(docker ps | grep "config" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: \"fates-mongo-config\",configsvr: true, members: [{ _id : 0, host : \"config1:27019\" },{ _id : 1, host : \"config2:27019\" }, { _id : 2, host : \"config3:27019\" }]})' | mongo --port 27019"

# initialize the shard replica sets
docker exec -it $(docker ps | grep "shard1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard1\", members: [{ _id : 0, host : \"shard1-server1:27018\" },{ _id : 1, host : \"shard1-server2:27018\" },{ _id : 2, host : \"shard1-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"
docker exec -it $(docker ps | grep "shard2" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard2\", members: [{ _id : 0, host : \"shard2-server1:27018\" },{ _id : 1, host : \"shard2-server2:27018\" },{ _id : 2, host : \"shard2-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"
docker exec -it $(docker ps | grep "shard3" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard3\", members: [{ _id : 0, host : \"shard3-server1:27018\" },{ _id : 1, host : \"shard3-server2:27018\" },{ _id : 2, host : \"shard3-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"

# register the shards with mongos
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard1/shard1-server1:27018,shard1-server2:27018,shard1-server3:27018\")' | mongo"
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard2/shard2-server1:27018,shard2-server2:27018,shard2-server3:27018\")' | mongo"
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard3/shard3-server1:27018,shard3-server2:27018,shard3-server3:27018\")' | mongo"
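To confirm the shards were registered, and to see how a collection would actually be sharded, you can run something like the following through mongos (testdb and testdb.users are example names for illustration only, not part of the original setup):

# show the sharding status: all three shards should be listed
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.status()' | mongo"

# example only: enable sharding for a database and hash-shard one of its collections
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.enableSharding(\"testdb\"); sh.shardCollection(\"testdb.users\", {_id: \"hashed\"})' | mongo"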

4. Generate the keyfile

After the first three steps the sharded cluster is already up and ready to use; if you do not need authorization, you can skip the remaining steps.

On the manager server, run generate-keyfile.sh

#!/bin/bash

DATA_PATH=/data/fates/mongo
# sudo password -- again, do not call the variable PWD (cd would overwrite it)
SUDO_PASS='1qaz2wsx!@#'

function check_directory() {
  if [ ! -d "${DATA_PATH}" ]; then
    echo "directory: ${DATA_PATH} does not exist, please run before-deploy.sh first."
    exit 1
  fi
}

function generate_keyfile() {
  cd "${DATA_PATH}/script"
  if [ ! -f "${DATA_PATH}/script/mongo-keyfile" ]; then
    echo 'create mongo-keyfile.'
    openssl rand -base64 756 -out mongo-keyfile
    # mode 600, owned by uid 999: readable only by the mongodb user inside the official image
    echo "${SUDO_PASS}" | sudo -S chmod 600 mongo-keyfile
    echo "${SUDO_PASS}" | sudo -S chown 999 mongo-keyfile
  else
    echo 'mongo-keyfile already exists.'
  fi
}

check_directory
generate_keyfile

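As an optional sanity check, the generated file should be mode 600 and owned by uid 999 (the mongodb user inside the official mongo image):

ls -ln /data/fates/mongo/script/mongo-keyfile
# expected output along the lines of: -rw------- 1 999 ... mongo-keyfile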

5. Copy the keyfile into the script directory on the other servers

Run the copy from the server where the keyfile was just generated (note the -p flag, which preserves the permissions set above)

sudo scp -p /data/fates/mongo/script/mongo-keyfile username@server2:/data/fates/mongo/script
sudo scp -p /data/fates/mongo/script/mongo-keyfile username@server3:/data/fates/mongo/script

6. Add the user

On the manager server, run add-user.sh

The script creates a user named root with password root and the root role; adjust these to your own needs.

docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo -e 'use admin\n db.createUser({user:\"root\",pwd:\"root\",roles:[{role:\"root\",db:\"admin\"}]})' | mongo"
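Before restarting in authorization mode, you may want to confirm the user really exists (a quick check; it should print one user named root):

docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo -e 'use admin\n db.getUsers()' | mongo"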

7. Create the stack yaml file for startup with authorization

  • From this step on authorization is enforced: you must log in with the user created in the previous step before you can do anything.

On the manager server, create fates-mongo-key.yaml and redeploy the stack in authorization mode (the stack file is different, but it mounts the same directories as before)

docker stack deploy -c fates-mongo-key.yaml fates-mongo
version: '3.4'
services:
  shard1-server1:
    image: mongo:4.0.5
    # --shardsvr: this flag only changes the default port from 27017 to 27018; it can be omitted if --port is given explicitly
    # --directoryperdb: store each database in its own directory
    command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard2-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard3-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard1-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard2-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard3-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard1-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard2-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard3-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  config1:
    image: mongo:4.0.5
    # --configsvr: this flag only changes the default port from 27017 to 27019; it can be omitted if --port is given explicitly
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  config2:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  config3:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  mongos:
    image: mongo:4.0.5
    # since mongo 3.6 the default bind IP is 127.0.0.1; binding 0.0.0.0 lets other containers and hosts connect
    command: mongos --configdb fates-mongo-config/config1:27019,config2:27019,config3:27019 --bind_ip 0.0.0.0 --port 27017  --keyFile /data/mongo-keyfile
    networks:
      - mongo
    ports:
      - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    depends_on:
      - config1
      - config2
      - config3
    deploy:
      restart_policy:
        condition: on-failure
      mode: global

networks:
  mongo:
    driver: overlay
    # uncomment the next line if the overlay network was created outside this stack
    # external: true
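Once the stack has restarted with --keyFile, unauthenticated clients are rejected, so verify by logging in with the user created in step 6 (root/root here; substitute your own credentials if you changed them):

docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') mongo -u root -p root --authenticationDatabase admin --eval "sh.status()"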

Problems encountered

Startup failure

Checking the logs with docker service logs <service-name> showed that the configuration file could not be found, because it had not been mounted into the container.
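
For example (the service name below is just an illustration; inspect whichever service failed to start):

docker service ls                                  # find the failing service
docker service logs fates-mongo_shard1-server1     # inspect its logs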

