Hands-On: A MongoDB 3.0.6 Replica Sets + Sharding Cluster


MongoDB's sharding mechanism solves the problems of massive data storage and dynamic scaling, but on its own it falls short of the high reliability and high availability required in production: for example, it cannot survive a Shard Server single point of failure. Hence the "Replica Sets + Sharding" solution. This article walks through a real company deployment of a high-availability MongoDB setup combining replica sets and sharding, based on MongoDB 3.0.6.

MongoDB 3.0 and later claims a 7x to 10x improvement in write performance, adds up to 80% data compression, and can cut operations cost by as much as 95%.

The new features in MongoDB 3.0 include: a pluggable storage engine API, support for the WiredTiger storage engine, MMAPv1 improvements, comprehensive replica set enhancements, clustering improvements, and stronger security.

Given these substantial improvements, this walkthrough uses the then-latest 3.0.6 release, together with the current YAML configuration file format documented on the official website.

1. Replica Sets + Sharding Architecture

The Replica Sets + Sharding solution consists of:

Shard servers: Replica Sets give every data node redundancy, automatic failover, and automatic recovery.

Config servers: three config servers ensure the integrity of the cluster metadata.

Routing processes: three mongos routers provide load balancing and improve client connection performance.

The completed Replica Sets + Sharding environment is shown in the figure below.

   

(Figure: topology of the completed Replica Sets + Sharding cluster)

2. Building the High-Availability Architecture

The Replica Sets + Sharding architecture avoids the Shard Server single point of failure of a standalone sharding setup; the combination solves the high-availability problem of a sharded architecture.

The services and listening ports opened on each server are shown in the table below.

 

 

 

Host        IP                Service and port
mongodb01   172.16.202.201    mongod shard1_1   11731
                              mongod shard2_1   11732
                              mongod shard3_1   11733
                              mongod config     30000
                              mongos 1          60000
mongodb02   172.16.202.202    mongod shard1_2   11731
                              mongod shard2_2   11732
                              mongod shard3_2   11733
                              mongod config     30000
                              mongos 2          60000
mongodb03   172.16.202.203    mongod shard1_3   11731
                              mongod shard2_3   11732
                              mongod shard3_3   11733
                              mongod config     30000
                              mongos 3          60000

 

2.1. Create the mongo User

Create a mongo user on each of the three servers, as shown below:

[root@mongodb01 ~]# useradd mongo
[root@mongodb01 ~]# passwd mongo
[root@mongodb01 ~]# su - mongo
[mongo@mongodb01 ~]$

2.2. Create the Data Directories

First, as the mongo user, create the data directories for the shard servers and the Config Server, a logs directory for log files, and a config directory for configuration files.

On mongodb01, create the shard server and Config Server data directories, the logs directory, and the config directory:

[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/shard1_1

[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/shard2_1

[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/shard3_1

[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/config

[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/logs

[mongo@mongodb01 ~]$ mkdir -p /home/mongo/config 

As shown above, /home/mongo/data/shard1_1 is used by the shard1 primary, /home/mongo/data/shard2_1 by the shard2 arbiter, /home/mongo/data/shard3_1 by the shard3 secondary, /home/mongo/data/config by one of the cluster's Config Servers, /home/mongo/data/logs by the logs, and /home/mongo/config by the configuration files.
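On bash, the same layout can be created with a single brace-expansion command (a shorthand sketch, equivalent to the mkdir calls above):

[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/{shard1_1,shard2_1,shard3_1,config,logs} /home/mongo/config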

On mongodb02, create the shard server and Config Server data directories, the logs directory, and the config directory:

[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/shard1_2

[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/shard2_2

[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/shard3_2

[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/config

[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/logs

[mongo@mongodb02 ~]$ mkdir -p /home/mongo/config 

As shown above, /home/mongo/data/shard1_2 is used by the shard1 secondary, /home/mongo/data/shard2_2 by the shard2 primary, /home/mongo/data/shard3_2 by the shard3 arbiter, /home/mongo/data/config by one of the cluster's Config Servers, /home/mongo/data/logs by the logs, and /home/mongo/config by the configuration files.

On mongodb03, create the shard server and Config Server data directories, the logs directory, and the config directory:

[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/shard1_3

[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/shard2_3

[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/shard3_3

[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/config

[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/logs

[mongo@mongodb03 ~]$ mkdir -p /home/mongo/config 

As shown above, /home/mongo/data/shard1_3 is used by the shard1 arbiter, /home/mongo/data/shard2_3 by the shard2 secondary, /home/mongo/data/shard3_3 by the shard3 primary, /home/mongo/data/config by one of the cluster's Config Servers, /home/mongo/data/logs by the logs, and /home/mongo/config by the configuration files.

2.3. Configure the Replica Sets

Extract mongodb-linux-x86_64-3.0.6.tgz on each of the three servers:

[mongo@mongodb01 ~]$ tar zxvf mongodb-linux-x86_64-3.0.6.tgz
[mongo@mongodb01 ~]$ mv mongodb-linux-x86_64-3.0.6 mongodb

2.3.1. Configure Replica Set 1 for shard1

# Note: watch the indentation in the configuration files (YAML is indentation-sensitive).

On mongodb01, run the following:

[mongo@mongodb01 ~]$ cd config/

[mongo@mongodb01 config]$ cat shard1_1.conf

systemLog:

 destination: file

 ##Log

 path: /home/mongo/data/logs/shard1_1.log

 logAppend: true

storage:

 journal:

  enabled: true

 dbPath: /home/mongo/data/shard1_1

 directoryPerDB: true

 engine: wiredTiger

 wiredTiger:

  engineConfig:

   cacheSizeGB: 1

   directoryForIndexes: true

  collectionConfig:

   blockCompressor: snappy

processManagement:

  fork: true 

net:

 bindIp: 172.16.202.201

 port: 11731

replication:

 oplogSizeMB: 500

 replSetName: shard1

sharding:

 clusterRole: shardsvr

#security:

 #authorization: enabled

 #keyFile: /home/mongo/key/security

 

[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard1_1.conf

As shown above, this starts one member of Replica Set 1 on mongodb01; the replica set is named shard1 and listens on port 11731.
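To confirm the daemon actually forked and is running, a quick check (a sketch; the log path is the one set in the configuration above) is:

[mongo@mongodb01 ~]$ ps -ef | grep shard1_1 | grep -v grep
[mongo@mongodb01 ~]$ tail -n 3 /home/mongo/data/logs/shard1_1.log

The log should end with a "waiting for connections on port 11731" message.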

On mongodb02, run the following:

[mongo@mongodb02 ~]$ cd config/

[mongo@mongodb02 config]$ cat shard1_2.conf

systemLog:

 destination: file

 ##Log

 path: /home/mongo/data/logs/shard1_2.log

 logAppend: true

storage:

 journal:

  enabled: true

 dbPath: /home/mongo/data/shard1_2

 directoryPerDB: true

 engine: wiredTiger

 wiredTiger:

  engineConfig:

   cacheSizeGB: 1

   directoryForIndexes: true

  collectionConfig:

   blockCompressor: snappy

processManagement:

  fork: true 

net:

 bindIp: 172.16.202.202

 port: 11731

replication:

 oplogSizeMB: 500

 replSetName: shard1

sharding:

 clusterRole: shardsvr

#security:

 #authorization: enabled

 #keyFile: /home/mongo/key/security

 

[mongo@mongodb02 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard1_2.conf

As shown above, this starts one member of Replica Set 1 on mongodb02; the replica set is named shard1 and listens on port 11731.

On mongodb03, run the following:

[mongo@mongodb03 ~]$ cd config/

[mongo@mongodb03 config]$ cat shard1_3.conf

systemLog:

 destination: file

 ##Log

 path: /home/mongo/data/logs/shard1_3.log

 logAppend: true

storage:

 journal:

  enabled: true

 dbPath: /home/mongo/data/shard1_3

 directoryPerDB: true

 engine: wiredTiger

 wiredTiger:

  engineConfig:

   cacheSizeGB: 1

   directoryForIndexes: true

  collectionConfig:

   blockCompressor: snappy

processManagement:

  fork: true 

net:

 bindIp: 172.16.202.203

 port: 11731

replication:

 oplogSizeMB: 500

 replSetName: shard1

sharding:

 clusterRole: shardsvr

#security:

 #authorization: enabled

 #keyFile: /home/mongo/key/security

 

[mongo@mongodb03 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard1_3.conf

As shown above, this starts one member of Replica Set 1 on mongodb03; the replica set is named shard1 and listens on port 11731.

Connect to the mongod listening on port 11731 on mongodb01 and initialize Replica Set 1, as shown below:

 

[mongo@mongodb01 ~]$ /home/mongo/mongodb/bin/mongo 172.16.202.201:11731

MongoDB shell version: 3.0.6

connecting to: 172.16.202.201:11731/test

config={_id:'shard1',members:[{_id:0,host:'172.16.202.201:11731',priority:2},{_id:1,host:'172.16.202.202:11731'},{_id:2,host:'172.16.202.203:11731',arbiterOnly:true}]}

{

         "_id": "shard1",

         "members": [

                   {

                            "_id": 0,

                            "host": "172.16.202.201:11731",

                            "priority": 2

                   },

                   {

                            "_id": 1,

                            "host": "172.16.202.202:11731"

                   },

                   {

                            "_id": 2,

                            "host": "172.16.202.203:11731",

                            "arbiterOnly": true

                   }

         ]

}

> rs.initiate(config)

{ "ok" : 1 }

 

The rs.initiate(config) command above initializes Replica Set 1, the replica set for shard1.
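The primary election may take a few seconds after rs.initiate(); the standard shell helpers can confirm it finished (a quick sketch):

> rs.status().myState    // 1 means this node is PRIMARY
> db.isMaster().ismaster // true once this node has been elected primary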

 

2.3.2. Configure Replica Set 2 for shard2

On mongodb01, run the following:

[mongo@mongodb01 ~]$ cd config/

[mongo@mongodb01 config]$ cat shard2_1.conf

systemLog:

 destination: file

 ##Log

 path: /home/mongo/data/logs/shard2_1.log

 logAppend: true

storage:

 journal:

  enabled: true

 dbPath: /home/mongo/data/shard2_1

 directoryPerDB: true

 engine: wiredTiger

 wiredTiger:

  engineConfig:

   cacheSizeGB: 1

   directoryForIndexes: true

  collectionConfig:

   blockCompressor: snappy

processManagement:

  fork: true 

net:

 bindIp: 172.16.202.201

 port: 11732

replication:

 oplogSizeMB: 500

 replSetName: shard2

sharding:

 clusterRole: shardsvr

#security:

 #authorization: enabled

 #keyFile: /home/mongo/key/security

 

[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard2_1.conf

As shown above, this starts one member of Replica Set 2 on mongodb01; the replica set is named shard2 and listens on port 11732.

On mongodb02, run the following:

[mongo@mongodb02 ~]$ cd config/

[mongo@mongodb02 config]$ cat shard2_2.conf

systemLog:

 destination: file

 ##Log

 path: /home/mongo/data/logs/shard2_2.log

 logAppend: true

storage:

 journal:

  enabled: true

 dbPath: /home/mongo/data/shard2_2

 directoryPerDB: true

 engine: wiredTiger

 wiredTiger:

  engineConfig:

   cacheSizeGB: 1

   directoryForIndexes: true

  collectionConfig:

   blockCompressor: snappy

processManagement:

  fork: true 

net:

 bindIp: 172.16.202.202

 port: 11732

replication:

 oplogSizeMB: 500

 replSetName: shard2

sharding:

 clusterRole: shardsvr

#security:

 #authorization: enabled

 #keyFile: /home/mongo/key/security

 

[mongo@mongodb02 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard2_2.conf

As shown above, this starts one member of Replica Set 2 on mongodb02; the replica set is named shard2 and listens on port 11732.

On mongodb03, run the following:

[mongo@mongodb03 ~]$ cd config/

[mongo@mongodb03 config]$ cat shard2_3.conf

systemLog:

 destination: file

 ##Log

 path: /home/mongo/data/logs/shard2_3.log

 logAppend: true

storage:

 journal:

  enabled: true

 dbPath: /home/mongo/data/shard2_3

 directoryPerDB: true

 engine: wiredTiger

 wiredTiger:

  engineConfig:

   cacheSizeGB: 1

   directoryForIndexes: true

  collectionConfig:

   blockCompressor: snappy

processManagement:

  fork: true 

net:

 bindIp: 172.16.202.203

 port: 11732

replication:

 oplogSizeMB: 500

 replSetName: shard2

sharding:

 clusterRole: shardsvr

#security:

 #authorization: enabled

 #keyFile: /home/mongo/key/security

 

[mongo@mongodb03 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard2_3.conf

As shown above, this starts one member of Replica Set 2 on mongodb03; the replica set is named shard2 and listens on port 11732.

Connect to the mongod listening on port 11732 on mongodb02 and initialize Replica Set 2, as shown below:

 

[mongo@mongodb02 config]$ /home/mongo/mongodb/bin/mongo 172.16.202.202:11732

MongoDB shell version: 3.0.6

connecting to: 172.16.202.202:11732/test

config={_id:'shard2',members:[{_id:0,host:'172.16.202.201:11732',arbiterOnly:true},{_id:1,host:'172.16.202.202:11732',priority:2},{_id:2,host:'172.16.202.203:11732'}]}

{

         "_id": "shard2",

         "members": [

                   {

                            "_id": 0,

                            "host": "172.16.202.201:11732",

                            "arbiterOnly": true

                   },

                   {

                            "_id": 1,

                            "host": "172.16.202.202:11732",

                            "priority": 2

                   },

                   {

                            "_id": 2,

                            "host": "172.16.202.203:11732"

                   }

         ]

}

rs.initiate(config)

{ "ok" : 1 }

 

 

The rs.initiate(config) command above initializes Replica Set 2, the replica set for shard2.

 

 

2.3.3. Configure Replica Set 3 for shard3

On mongodb01, run the following:

[mongo@mongodb01 ~]$ cd config/

[mongo@mongodb01 config]$ cat shard3_1.conf

systemLog:

 destination: file

 ##Log

 path: /home/mongo/data/logs/shard3_1.log

 logAppend: true

storage:

 journal:

  enabled: true

 dbPath: /home/mongo/data/shard3_1

 directoryPerDB: true

 engine: wiredTiger

 wiredTiger:

  engineConfig:

   cacheSizeGB: 1

   directoryForIndexes: true

  collectionConfig:

   blockCompressor: snappy

processManagement:

  fork: true 

net:

 bindIp: 172.16.202.201

 port: 11733

replication:

 oplogSizeMB: 500

 replSetName: shard3

sharding:

 clusterRole: shardsvr

#security:

 #authorization: enabled

 #keyFile: /home/mongo/key/security

 

[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard3_1.conf

As shown above, this starts one member of Replica Set 3 on mongodb01; the replica set is named shard3 and listens on port 11733.

On mongodb02, run the following:

[mongo@mongodb02 ~]$ cd config/

[mongo@mongodb02 config]$ cat shard3_2.conf

systemLog:

 destination: file

 ##Log

 path: /home/mongo/data/logs/shard3_2.log

 logAppend: true

storage:

 journal:

  enabled: true

 dbPath: /home/mongo/data/shard3_2

 directoryPerDB: true

 engine: wiredTiger

 wiredTiger:

  engineConfig:

   cacheSizeGB: 1

   directoryForIndexes: true

  collectionConfig:

   blockCompressor: snappy

processManagement:

  fork: true 

net:

 bindIp: 172.16.202.202

 port: 11733

replication:

 oplogSizeMB: 500

 replSetName: shard3

sharding:

 clusterRole: shardsvr

#security:

 #authorization: enabled

 #keyFile: /home/mongo/key/security

 

[mongo@mongodb02 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard3_2.conf

As shown above, this starts one member of Replica Set 3 on mongodb02; the replica set is named shard3 and listens on port 11733.

On mongodb03, run the following:

[mongo@mongodb03 ~]$ cd config/

[mongo@mongodb03 config]$ cat shard3_3.conf

systemLog:

 destination: file

 ##Log

 path: /home/mongo/data/logs/shard3_3.log

 logAppend: true

storage:

 journal:

  enabled: true

 dbPath: /home/mongo/data/shard3_3

 directoryPerDB: true

 engine: wiredTiger

 wiredTiger:

  engineConfig:

   cacheSizeGB: 1

   directoryForIndexes: true

  collectionConfig:

   blockCompressor: snappy

processManagement:

  fork: true 

net:

 bindIp: 172.16.202.203

 port: 11733

replication:

 oplogSizeMB: 500

 replSetName: shard3

sharding:

 clusterRole: shardsvr

#security:

 #authorization: enabled

 #keyFile: /home/mongo/key/security

 

[mongo@mongodb03 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard3_3.conf

As shown above, this starts one member of Replica Set 3 on mongodb03; the replica set is named shard3 and listens on port 11733.

Connect to the mongod listening on port 11733 on mongodb03 and initialize Replica Set 3, as shown below:

[mongo@mongodb03 config]$ /home/mongo/mongodb/bin/mongo 172.16.202.203:11733

MongoDB shell version: 3.0.6

connecting to: 172.16.202.203:11733/test

config={_id:'shard3',members:[{_id:0,host:'172.16.202.201:11733'},{_id:1,host:'172.16.202.202:11733',arbiterOnly:true},{_id:2,host:'172.16.202.203:11733',priority:2}]}

{

         "_id": "shard3",

         "members": [

                   {

                            "_id": 0,

                            "host": "172.16.202.201:11733"

                   },

                   {

                            "_id": 1,

                            "host": "172.16.202.202:11733",

                            "arbiterOnly": true

                   },

                   {

                            "_id": 2,

                            "host": "172.16.202.203:11733",

                            "priority": 2

                   }

         ]

}

rs.initiate(config)

{ "ok" : 1 }

 

 

 

 

The rs.initiate(config) command above initializes Replica Set 3, the replica set for shard3.

2.3.4. Check the Replica Set Status

shard1:PRIMARY> rs.status()

{

         "set": "shard1",

         "date": ISODate("2015-11-25T10:53:06.091Z"),

         "myState": 1,

         "members": [

                   {

                            "_id": 0,

                            "name": "172.16.202.201:11731",

                            "health": 1,

                            "state": 1,

                            "stateStr": "PRIMARY", #主库

                            "uptime": 3009,

                            "optime": Timestamp(1448448493, 1),

                            "optimeDate": ISODate("2015-11-25T10:48:13Z"),

                            "electionTime": Timestamp(1448448497, 1),

                            "electionDate": ISODate("2015-11-25T10:48:17Z"),

                            "configVersion": 1,

                            "self": true

                   },

                   {

                            "_id": 1,

                            "name": "172.16.202.202:11731",

                            "health": 1,

                            "state": 2,

                            "stateStr": "SECONDARY", #复本

                            "uptime": 292,

                            "optime": Timestamp(1448448493, 1),

                            "optimeDate": ISODate("2015-11-25T10:48:13Z"),

                            "lastHeartbeat": ISODate("2015-11-25T10:53:05.389Z"),

                            "lastHeartbeatRecv": ISODate("2015-11-25T10:53:05.391Z"),

                            "pingMs": 0,

                            "lastHeartbeatMessage": "could not find member to sync from",

                            "configVersion": 1

                   },

                   {

                            "_id": 2,

                            "name": "172.16.202.203:11731",

                            "health": 1,

                            "state": 7,

                            "stateStr": "ARBITER",  #仲裁

                            "uptime" : 292,

                            "lastHeartbeat": ISODate("2015-11-25T10:53:05.391Z"),

                            "lastHeartbeatRecv": ISODate("2015-11-25T10:53:05.390Z"),

                            "pingMs": 0,

                            "configVersion": 1

                   }

         ],

         "ok": 1

}

 

 

2.4. Configure the Three Config Servers

Run the following on all three servers:

[mongo@mongodb01 config]$ cat config.conf

systemLog:

 destination: file

 ##Log

 path: /home/mongo/data/logs/config.log

 logAppend: true

storage:

 journal:

  enabled: true

 dbPath: /home/mongo/data/config

 directoryPerDB: true

processManagement:

 fork: true 

net:

 #bindIp: 172.16.202.201  # uncomment to bind a specific IP if desired

 port: 30000

sharding:

 clusterRole: configsvr

 

[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/config.conf

As shown above, a Config Server mongod is started on each of the three servers, listening on port 30000.
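To verify a Config Server is reachable, you can connect to port 30000 directly (a quick sketch; once the cluster is assembled, the sharding metadata lives in its config database):

[mongo@mongodb01 ~]$ /home/mongo/mongodb/bin/mongo 172.16.202.201:30000/config
> show collections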

2.5. Configure the Three Routing Processes

Run the following on all three servers:

[mongo@mongodb01 config]$ cat mongos.conf

systemLog:

 destination: file

 ##Log

 path: /home/mongo/data/logs/mongo.log

 logAppend: true

 

processManagement:

  fork: true 

net:

 #bindIp: 172.16.202.201

 port: 60000

sharding:

 configDB: 172.16.202.201:30000,172.16.202.202:30000,172.16.202.203:30000

 

[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongos -f /home/mongo/config/mongos.conf

 

As shown above, a routing process is started on each of the three servers, listening on port 60000 and pointing at the IPs and ports of the three Config Servers.
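Once a mongos is up, connecting to it and running the standard sh.status() helper prints a cluster overview; at this point the shards list is still empty because no shards have been added yet:

[mongo@mongodb01 ~]$ /home/mongo/mongodb/bin/mongo 172.16.202.201:60000
mongos> sh.status()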

2.6. Configure the Sharded Cluster

Connect to the mongos process on port 60000 on any one of the machines, and switch to the admin database to begin configuring the sharding environment, as shown below:

 

[mongo@mongodb01 logs]$/home/mongo/mongodb/bin/mongo 172.16.202.201:60000

MongoDB shell version: 3.0.6

connecting to: 172.16.202.201:60000/test

mongos> use admin

switched to db admin

mongos>db.runCommand({addshard:"shard1/172.16.202.201:11731,172.16.202.202:11731,172.16.202.203:11731",name:"shard1"});

{ "shardAdded" :"shard1", "ok" : 1 }

mongos>db.runCommand({addshard:"shard2/172.16.202.201:11732,172.16.202.202:11732,172.16.202.203:11732",name:"shard2"});

{ "shardAdded" :"shard2", "ok" : 1 }

mongos>db.runCommand({addshard:"shard3/172.16.202.201:11733,172.16.202.202:11733,172.16.202.203:11733",name:"shard3"});

{ "shardAdded" :"shard3", "ok" : 1 }

In the code above, the command:

db.runCommand({addshard:"shard1/172.16.202.201:11731,172.16.202.202:11731,172.16.202.203:11731",name:"shard1"});

adds the three members of Replica Set 1 to the sharding environment as Shard Server 1.

The command:

db.runCommand({addshard:"shard2/172.16.202.201:11732,172.16.202.202:11732,172.16.202.203:11732",name:"shard2"});

adds the three members of Replica Set 2 to the sharding environment as Shard Server 2.

The command:

db.runCommand({addshard:"shard3/172.16.202.201:11733,172.16.202.202:11733,172.16.202.203:11733",name:"shard3"});

adds the three members of Replica Set 3 to the sharding environment as Shard Server 3.
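To confirm all three shards registered, the listShards command can be run against the admin database (a quick check):

mongos> use admin
mongos> db.runCommand({listShards: 1})  // should report shard1, shard2, and shard3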

Next, enable sharding with a hashed shard key, as shown below:

[mongo@mongodb01 logs]$/home/mongo/mongodb/bin/mongo 172.16.202.201:60000

MongoDB shell version: 3.0.6

connecting to: 172.16.202.201:60000/test

mongos> use admin

switched to db admin

mongos> db.runCommand({enablesharding:"logs"})

{"ok" : 1 }

mongos>db.runCommand({shardcollection:"logs.users",key:{id:"hashed"}})

{ "collectionsharded" :"logs.users", "ok" : 1 }

As shown above, db.runCommand({enablesharding:"logs"}) is executed first to enable sharding on the logs database; then

db.runCommand({shardcollection:"logs.users",key:{id:"hashed"}})

is executed to enable sharding on the users collection with a hashed shard key on id.
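A quick way to watch hashed sharding spread writes across the shards (a sketch; the id field matches the shard key chosen above) is to insert some test documents through mongos and inspect the distribution:

mongos> use logs
mongos> for (var i = 0; i < 10000; i++) { db.users.insert({id: i, name: "user" + i}) }
mongos> db.users.getShardDistribution()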




This article is reproduced from jxzhfei's 51CTO blog. Original link: http://blog.51cto.com/jxzhfei/1722243
