Deploying a MongoDB sharded cluster with docker-swarm
Date: 2022-07-26
Overview
- This article walks through setting up a MongoDB sharded cluster in a docker-swarm environment.
- The cluster ultimately runs with authentication enabled, but if you start directly from the authenticated stack file you will not be able to create any users. Create the users first in no-auth mode, then restart in auth mode. (The two modes use different stack files, but they mount the same data directories.)
Architecture
- Three nodes in total: breakpad (the manager), bpcluster, bogon
Prerequisites
- Install Docker
- Initialize the swarm cluster:
- docker swarm init
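The other two nodes must then join the swarm before services can be placed on them. A sketch of the usual flow (the token and address below are placeholders printed by `docker swarm init`):

```shell
# On the manager (breakpad): print the join command for worker nodes
docker swarm join-token worker

# On each of the other nodes (bpcluster, bogon), run the printed command:
# docker swarm join --token <worker-token> <manager-ip>:2377

# Back on the manager: all three nodes should now be listed
docker node ls
```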
Deployment steps
Once the first three steps are done the cluster is usable; if you do not need authenticated access, you can skip the last four steps.
- Create the directories
- Deploy the services (no-auth mode)
- Configure the shards
- Generate the keyfile and fix its permissions
- Copy the keyfile to the other nodes
- Add the users
- Restart the services (auth mode)
1. Create the directories
On every server, run before-deploy.sh:
#!/bin/bash
# Creates the data directories used by the mongo stack.
DIR=/data/fates
DATA_PATH="${DIR}/mongo"
# Sudo password for non-interactive use. Do not name this variable PWD:
# the shell resets $PWD on every cd, which would silently replace the password.
SUDO_PASS='1qaz2wsx!@#'
DATA_DIR_LIST=('config' 'shard1' 'shard2' 'shard3' 'script')

function check_directory() {
  if [ ! -d "${DATA_PATH}" ]; then
    echo "create directory: ${DATA_PATH}"
    echo "${SUDO_PASS}" | sudo -S mkdir -p "${DATA_PATH}"
  else
    echo "directory ${DATA_PATH} already exists."
  fi

  for SUB_DIR in "${DATA_DIR_LIST[@]}"
  do
    if [ ! -d "${DATA_PATH}/${SUB_DIR}" ]; then
      echo "create directory: ${DATA_PATH}/${SUB_DIR}"
      echo "${SUDO_PASS}" | sudo -S mkdir -p "${DATA_PATH}/${SUB_DIR}"
    else
      echo "directory: ${DATA_PATH}/${SUB_DIR} already exists."
    fi
  done

  echo "${SUDO_PASS}" | sudo -S chown -R "$USER":"$USER" "${DATA_PATH}"
}

check_directory
2. Start the mongo cluster in no-auth mode
- At this step there is no authentication yet, so anyone can connect without logging in; this is when the users get created.
On the manager, create fates-mongo.yaml (adjust the constraints entries to your own hostnames) and deploy it:
docker stack deploy -c fates-mongo.yaml fates-mongo
version: '3.4'
services:
shard1-server1:
image: mongo:4.0.5
    # --shardsvr: only changes the default port from 27017 to 27018; irrelevant if --port is given
    # --directoryperdb: store each database in its own directory
command: mongod --shardsvr --directoryperdb --replSet shard1
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard1:/data/db
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bpcluster
shard2-server1:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard2
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard2:/data/db
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bpcluster
shard3-server1:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard3
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard3:/data/db
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bpcluster
shard1-server2:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard1
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard1:/data/db
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bogon
shard2-server2:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard2
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard2:/data/db
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bogon
shard3-server2:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard3
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard3:/data/db
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bogon
shard1-server3:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard1
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard1:/data/db
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==breakpad
shard2-server3:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard2
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard2:/data/db
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==breakpad
shard3-server3:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard3
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard3:/data/db
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==breakpad
config1:
image: mongo:4.0.5
    # --configsvr: only changes the default port from 27017 to 27019; irrelevant if --port is given
command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/config:/data/configdb
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bpcluster
config2:
image: mongo:4.0.5
command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/config:/data/configdb
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bogon
config3:
image: mongo:4.0.5
command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/config:/data/configdb
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==breakpad
mongos:
image: mongo:4.0.5
    # mongo 3.6+ binds to 127.0.0.1 by default; bind 0.0.0.0 so other containers and hosts can reach mongos
command: mongos --configdb fates-mongo-config/config1:27019,config2:27019,config3:27019 --bind_ip 0.0.0.0 --port 27017
networks:
- mongo
ports:
- 27017:27017
volumes:
- /etc/localtime:/etc/localtime
depends_on:
- config1
- config2
- config3
deploy:
restart_policy:
condition: on-failure
mode: global
networks:
mongo:
driver: overlay
    # uncomment if the overlay network was created outside this stack
    # external: true
3. Configure the shards
# Initiate the config server replica set
docker exec -it $(docker ps | grep "config" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: \"fates-mongo-config\", configsvr: true, members: [{ _id: 0, host: \"config1:27019\" }, { _id: 1, host: \"config2:27019\" }, { _id: 2, host: \"config3:27019\" }]})' | mongo --port 27019"
# Initiate the three shard replica sets (the third member of each is an arbiter)
docker exec -it $(docker ps | grep "shard1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: \"shard1\", members: [{ _id: 0, host: \"shard1-server1:27018\" }, { _id: 1, host: \"shard1-server2:27018\" }, { _id: 2, host: \"shard1-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"
docker exec -it $(docker ps | grep "shard2" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: \"shard2\", members: [{ _id: 0, host: \"shard2-server1:27018\" }, { _id: 1, host: \"shard2-server2:27018\" }, { _id: 2, host: \"shard2-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"
docker exec -it $(docker ps | grep "shard3" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: \"shard3\", members: [{ _id: 0, host: \"shard3-server1:27018\" }, { _id: 1, host: \"shard3-server2:27018\" }, { _id: 2, host: \"shard3-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"
# Register the shards with mongos
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard1/shard1-server1:27018,shard1-server2:27018,shard1-server3:27018\")' | mongo"
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard2/shard2-server1:27018,shard2-server2:27018,shard2-server3:27018\")' | mongo"
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard3/shard3-server1:27018,shard3-server2:27018,shard3-server3:27018\")' | mongo"
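After registering the shards, it is worth confirming that mongos sees all three of them. A quick check using the same docker exec pattern as above (the exact output layout varies by version):

```shell
# Print the sharding status through mongos; the "shards" section
# should list shard1, shard2 and shard3
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') \
  bash -c "echo 'sh.status()' | mongo"
```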
4. Generate the keyfile
With the first three steps done, the sharded cluster is up and usable; if you do not need authentication, you can stop here.
On the manager, run generate-keyfile.sh:
#!/bin/bash
DATA_PATH=/data/fates/mongo
# Sudo password; as in before-deploy.sh, do not name this variable PWD
SUDO_PASS='1qaz2wsx!@#'

function check_directory() {
  if [ ! -d "${DATA_PATH}" ]; then
    echo "directory: ${DATA_PATH} does not exist, please run before-deploy.sh first."
    exit 1
  fi
}

function generate_keyfile() {
  cd "${DATA_PATH}/script"
  if [ ! -f "${DATA_PATH}/script/mongo-keyfile" ]; then
    echo 'create mongo-keyfile.'
    openssl rand -base64 756 -out mongo-keyfile
    # mongod rejects keyfiles that are group- or world-readable
    echo "${SUDO_PASS}" | sudo -S chmod 600 mongo-keyfile
    # uid 999 is the mongodb user inside the official image
    echo "${SUDO_PASS}" | sudo -S chown 999 mongo-keyfile
  else
    echo 'mongo-keyfile already exists.'
  fi
}

check_directory
generate_keyfile
5. Copy the keyfile into the script directory on the other servers
On the server where the keyfile was generated, copy it out (note the -p flag, which preserves the permissions set above):
sudo scp -p /data/fates/mongo/script/mongo-keyfile username@server2:/data/fates/mongo/script
sudo scp -p /data/fates/mongo/script/mongo-keyfile username@server3:/data/fates/mongo/script
6. Add the users
On the manager, run add-user.sh.
The command below creates a user named root with password root and the root role; change these to suit.
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo -e 'use admin\n db.createUser({user: \"root\", pwd: \"root\", roles: [{role: \"root\", db: \"admin\"}]})' | mongo"
7. Create the stack file for authenticated startup
- From this step on, authentication is enforced: operations require logging in with the user created above.
On the manager, create fates-mongo-key.yaml and redeploy in auth mode (a different stack file, but the same mounted data paths as before):
docker stack deploy -c fates-mongo-key.yaml fates-mongo
version: '3.4'
services:
shard1-server1:
image: mongo:4.0.5
    # --shardsvr: only changes the default port from 27017 to 27018; irrelevant if --port is given
    # --directoryperdb: store each database in its own directory
command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard1:/data/db
- /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bpcluster
shard2-server1:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard2:/data/db
- /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bpcluster
shard3-server1:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard3:/data/db
- /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bpcluster
shard1-server2:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard1:/data/db
- /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bogon
shard2-server2:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard2:/data/db
- /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bogon
shard3-server2:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard3:/data/db
- /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bogon
shard1-server3:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard1:/data/db
- /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==breakpad
shard2-server3:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard2:/data/db
- /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==breakpad
shard3-server3:
image: mongo:4.0.5
command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/shard3:/data/db
- /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==breakpad
config1:
image: mongo:4.0.5
    # --configsvr: only changes the default port from 27017 to 27019; irrelevant if --port is given
command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/config:/data/configdb
- /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bpcluster
config2:
image: mongo:4.0.5
command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/config:/data/configdb
- /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==bogon
config3:
image: mongo:4.0.5
command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
networks:
- mongo
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/config:/data/configdb
- /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
deploy:
restart_policy:
condition: on-failure
replicas: 1
placement:
constraints:
- node.hostname==breakpad
mongos:
image: mongo:4.0.5
    # mongo 3.6+ binds to 127.0.0.1 by default; bind 0.0.0.0 so other containers and hosts can reach mongos
command: mongos --configdb fates-mongo-config/config1:27019,config2:27019,config3:27019 --bind_ip 0.0.0.0 --port 27017 --keyFile /data/mongo-keyfile
networks:
- mongo
ports:
- 27017:27017
volumes:
- /etc/localtime:/etc/localtime
- /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
depends_on:
- config1
- config2
- config3
deploy:
restart_policy:
condition: on-failure
mode: global
networks:
mongo:
driver: overlay
    # uncomment if the overlay network was created outside this stack
    # external: true
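Once the authenticated stack is running, a quick sanity check is that anonymous access is rejected while the user created in step 6 works (this assumes the root/root credentials from that step):

```shell
# Without credentials this should fail with an authorization error
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') \
  bash -c "echo 'db.adminCommand({listDatabases: 1})' | mongo"

# With the root user created in step 6 it should succeed
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') \
  bash -c "echo 'db.adminCommand({listDatabases: 1})' | mongo -u root -p root --authenticationDatabase admin"
```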
Problems encountered
Startup failure
docker service logs <service-name> showed that the config file could not be found: it had not been mounted into the container.
config3 failed to start
The mount path in the stack file was wrong.
Containers started, but connections were refused
Only the deploy script had been run; the shard configuration (step 3) had never been applied.
Keyfile permission error: error opening file: /data/mongo-keyfile: Permission denied
- the mongo-keyfile must be owned by uid 999 with mode 600
addShard failed
- it can only be run once mongos has finished starting
- make sure the constraints entries in the stack file match your actual server hostnames
After sharding was fully set up, all data ended up on a single shard:
A chunk has a default size (64 MB in MongoDB 4.0); with a small dataset a single chunk suffices, so nothing ever migrates. Lower the chunk-size setting to observe balancing.
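To watch small test data actually split across the shards, the chunk size can be lowered through mongos. One way to do it in the 4.0 shell is writing the settings collection of the config database (value in MB):

```shell
# Shrink the chunk size to 1 MB so even small collections split
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') \
  bash -c "echo -e 'use config\n db.settings.save({_id: \"chunksize\", value: 1})' | mongo"
```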