Sharding is horizontal scaling: data is split across multiple machines so that the cluster can store more data and handle a heavier load. You can choose to partition the data by a specified document key (the shard key).
Configuration
Overall deployment plan:
Start three groups of mongod services: two as shards for data storage and one as the config server. Then configure each shard's replica set, create an administrative user, shut down the mongod processes, enable the keyfile, start mongos, and configure sharding.
Roles
A sharded cluster generally consists of three components:
Shard Server: mongod instances, two or more, which store the actual data shards. In production, each Shard Server should be a Replica Set made up of several servers rather than a single machine, to avoid a single point of failure;
Routing Process: mongos instances, one or more, which manage the shards. Clients connect through this front-end router, which makes the whole cluster look like a single database, so applications can use it transparently. The Routing Process stores no data itself; its metadata comes from the Config Server;
Config Server: mongod instances, one or more, which store the configuration of the whole cluster, i.e. the mapping between data (chunks) and shards.
Because test resources are limited, the deployment uses three virtual machines. The sharding architecture is laid out as follows:
All configuration on 192.168.100.101:
############config-1############
configsvr = true
replSet = config
port = 30001
dbpath = /opt/mongo/data/config-1
logpath = /opt/mongo/logs/config-1.log
logappend = true
fork = true
profile = 1
slowms = 500
keyFile = /opt/mongo/config/keyfile
maxConns=20000
storageEngine = wiredTiger

############route############
configdb = config/192.168.100.101:30001,192.168.100.102:30002,192.168.100.103:30003
port = 20000
logpath = /opt/mongo/logs/route.log
logappend = true
fork = true
#chunkSize = 256
keyFile = /opt/mongo/config/keyfile
maxConns=20000

############rs1-1############
port = 10001
fork = true
dbpath = /opt/mongo/data/rs1-1
logpath = /opt/mongo/logs/rs1-1.log
replSet = test1
logappend = true
profile = 1
slowms = 500
directoryperdb = true
keyFile = /opt/mongo/config/keyfile
maxConns=20000
storageEngine = wiredTiger

############rs2-a############
port = 20003
fork = true
dbpath = /opt/mongo/data/rs2-a
logpath = /opt/mongo/logs/rs2-a.log
replSet = test2
logappend = true
profile = 1
slowms = 500
directoryperdb = true
keyFile = /opt/mongo/config/keyfile
maxConns=20000
storageEngine = wiredTiger
All configuration on 192.168.100.102:
############config-2############
configsvr = true
replSet = config
port = 30002
dbpath = /opt/mongo/data/config-2
logpath = /opt/mongo/logs/config-2.log
logappend = true
fork = true
profile = 1
slowms = 500
keyFile = /opt/mongo/config/keyfile
maxConns=20000
storageEngine = wiredTiger

############route############
configdb = config/192.168.100.101:30001,192.168.100.102:30002,192.168.100.103:30003
port = 20000
logpath = /opt/mongo/logs/route.log
logappend = true
fork = true
#chunkSize = 256
keyFile = /opt/mongo/config/keyfile
maxConns=20000

############rs1-2############
port = 10002
fork = true
dbpath = /opt/mongo/data/rs1-2
logpath = /opt/mongo/logs/rs1-2.log
replSet = test1
logappend = true
profile = 1
slowms = 500
directoryperdb = true
keyFile = /opt/mongo/config/keyfile
maxConns=20000
storageEngine = wiredTiger

############rs2-2############
port = 20002
fork = true
dbpath = /opt/mongo/data/rs2-2
logpath = /opt/mongo/logs/rs2-2.log
replSet = test2
logappend = true
profile = 1
slowms = 500
directoryperdb = true
keyFile = /opt/mongo/config/keyfile
maxConns=20000
storageEngine = wiredTiger
All configuration on 192.168.100.103:
############config-3############
configsvr = true
replSet = config
port = 30003
dbpath = /opt/mongo/data/config-3
logpath = /opt/mongo/logs/config-3.log
logappend = true
fork = true
profile = 1
slowms = 500
keyFile = /opt/mongo/config/keyfile
maxConns=20000
storageEngine = wiredTiger

############route############
configdb = config/192.168.100.101:30001,192.168.100.102:30002,192.168.100.103:30003
port = 20000
logpath = /opt/mongo/logs/route.log
logappend = true
fork = true
#chunkSize = 256
keyFile = /opt/mongo/config/keyfile
maxConns=20000

############rs1-a############
port = 10003
fork = true
dbpath = /opt/mongo/data/rs1-a
logpath = /opt/mongo/logs/rs1-a.log
replSet = test1
logappend = true
profile = 1
slowms = 500
directoryperdb = true
keyFile = /opt/mongo/config/keyfile
maxConns=20000
storageEngine = wiredTiger

############rs2-1############
port = 20001
fork = true
dbpath = /opt/mongo/data/rs2-1
logpath = /opt/mongo/logs/rs2-1.log
replSet = test2
logappend = true
profile = 1
slowms = 500
directoryperdb = true
keyFile = /opt/mongo/config/keyfile
maxConns=20000
storageEngine = wiredTiger
The keyfile can be generated with the following commands:
openssl rand -base64 500 > keyfile
chmod 400 keyfile
Make sure the keyfile is identical on every machine.
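The keyfile only needs to be generated once and then copied to the other hosts. A minimal sketch, assuming SSH access as root and that /opt/mongo/config already exists on every machine (adjust the user and paths to your environment):

scp keyfile root@192.168.100.102:/opt/mongo/config/keyfile
scp keyfile root@192.168.100.103:/opt/mongo/config/keyfile
# the file must keep restrictive permissions (400) and be readable by the user that runs mongod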
Create the configuration files and the data and log directories on each machine according to the configurations above.
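For example, on 192.168.100.101 the directories named in the configuration above can be created in one go (the other two hosts follow the same pattern with their own directory names):

mkdir -p /opt/mongo/config /opt/mongo/logs
mkdir -p /opt/mongo/data/config-1 /opt/mongo/data/rs1-1 /opt/mongo/data/rs2-a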
Once everything is created, start all of the mongod nodes in turn (see the start-command sketch below).
Note: for this first start, comment out the keyFile option (run the following in the directory that holds the configuration files); otherwise the nodes will require authentication before any user or role has been created, and you will not be able to log in to perform the setup:
ls |grep -v keyfile| xargs sed -i "s/^keyFile/#keyFile/g"
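The configuration blocks above are not given file names, so the names below are assumptions; saving each block as its own .conf file under /opt/mongo/config, the nodes on 192.168.100.101 can then be started with mongod -f (the other hosts are analogous):

mongod -f /opt/mongo/config/config-1.conf   # config server member
mongod -f /opt/mongo/config/rs1-1.conf      # member of replica set test1
mongod -f /opt/mongo/config/rs2-a.conf      # member of replica set test2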
Once the nodes are up, connect to one member of each replica set in turn and run the following (shown here for test1; the equivalents for test2 and the config replica set are sketched after this block):
>config = { _id: "test1", members: [{ _id: 0, host: "192.168.100.101:10001" }, { _id: 1, host: "192.168.100.102:10002" }, { _id: 2, host: "192.168.100.103:10003", arbiterOnly: true }] }
>rs.initiate(config)
>rs.status()
>db.isMaster()
>use admin;
>db.createRole({role:"superman", privileges:[{resource:{anyResource: true}, actions:["anyAction"]}], roles:["root"]})
>db.createUser({user:"test",pwd:"test",roles:[{role:"superman", db:"admin"}]})
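The block above initializes test1 and creates the administrative role and user there; the same procedure is repeated for test2 and for the config replica set. A sketch of the two remaining rs.initiate calls, with hosts and ports taken from the configuration files above (the arbiter assignment for test2 follows the rs2-a naming, and the config replica set additionally needs configsvr: true):

>config = { _id: "test2", members: [{ _id: 0, host: "192.168.100.103:20001" }, { _id: 1, host: "192.168.100.102:20002" }, { _id: 2, host: "192.168.100.101:20003", arbiterOnly: true }] }
>rs.initiate(config)

>config = { _id: "config", configsvr: true, members: [{ _id: 0, host: "192.168.100.101:30001" }, { _id: 1, host: "192.168.100.102:30002" }, { _id: 2, host: "192.168.100.103:30003" }] }
>rs.initiate(config)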
After all of the above has completed successfully, stop all of the mongod nodes:
for i in `seq 10`;do killall mongod ;done
Re-enable the keyFile option:
ls |grep -v keyfile| xargs sed -i "s/^#keyFile/keyFile/g"
Then start the mongod nodes again in turn, start the mongos nodes, and connect to one of them (a sketch of these commands follows):
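A sketch of starting the router and connecting to it, assuming the route block above was saved as /opt/mongo/config/route.conf (the file name is an assumption):

mongos -f /opt/mongo/config/route.conf     # mongos is a separate binary from mongod
mongo 192.168.100.101:20000/admin          # connect to the router on its configured port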
mongos>use admin;
mongos>db.auth("test","test")
mongos>sh.addShard("test1/192.168.100.101:10001")
mongos>sh.addShard("test2/192.168.100.103:20001")
mongos>sh.status()
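Adding the shards registers them with the cluster, but no data is partitioned until sharding is enabled for a database and a shard key is chosen for a collection. A minimal sketch using a hypothetical database testdb and collection users, sharded on a hashed _id:

mongos>sh.enableSharding("testdb")
mongos>sh.shardCollection("testdb.users", { _id: "hashed" })
mongos>sh.status()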
This completes the sharding configuration.