MongoDB Sharding

Published: 2020-08-09 12:56:15  Source: ITPUB Blog  Author: stonebox1122
1. Environment
OS information:
IP              OS            MongoDB
10.163.97.15    RHEL6.5_x64   mongodb-linux-x86_64-rhel62-3.4.7.tgz
10.163.97.16    RHEL6.5_x64   mongodb-linux-x86_64-rhel62-3.4.7.tgz
10.163.97.17    RHEL6.5_x64   mongodb-linux-x86_64-rhel62-3.4.7.tgz


Server layout:
10.163.97.15               10.163.97.16               10.163.97.17               Port
mongos                     mongos                     mongos                     20000
config server              config server              config server              21000
shard server1 primary      shard server1 secondary    shard server1 arbiter      27001
shard server2 arbiter      shard server2 primary      shard server2 secondary    27002
shard server3 secondary    shard server3 arbiter      shard server3 primary      27003


The table above shows four components: mongos, config server, shard, and replica set.
mongos: the entry point for all requests to the cluster. Every request is coordinated through mongos, so the application does not need its own routing layer; mongos acts as the request dispatcher and forwards each operation to the appropriate shard server. In production there are usually several mongos instances acting as entry points, so that losing one of them does not make the whole cluster unreachable.
config server: as the name suggests, the configuration server stores all of the cluster's metadata (routing and shard information). mongos itself does not persist this metadata; it only caches it in memory, while the config servers hold the authoritative copy. When a mongos starts (or is restarted) it loads the configuration from the config servers, and whenever the configuration changes the config servers notify every mongos to refresh its state, so routing stays accurate. In production there are usually multiple config servers (the deployment must use either one or three), because they hold the sharding metadata and that data must not be lost.
shard: sharding is the process of splitting a database and spreading its data across different machines. By distributing data over multiple machines you can store more data and handle more load without needing a single very powerful server. The basic idea is to cut the collection into chunks and spread those chunks over several shards, so each shard is responsible for only part of the total data; a balancer then keeps the shards even by migrating chunks between them.
replica set: a replica set is essentially the backup of a shard, protecting against data loss if a shard node goes down. Replication provides redundancy by keeping copies of the data on multiple servers, which improves both availability and data safety.
Arbiter: an arbiter is a MongoDB instance in a replica set that holds no data and uses minimal resources. It should not run on the same node as a data-bearing member of the set; it can be placed on an application server, a monitoring server, or a separate VM. Its purpose is to keep the number of voting members (including the primary) odd, so that if the primary fails a new primary can be elected automatically.
In short: the application sends its CRUD operations to mongos; the config servers store the cluster metadata and keep mongos in sync; the data ultimately lives on the shards, with a copy kept in the replica set to guard against data loss; and the arbiter takes part in the elections that decide which replica set member acts as primary.
The plan is 3 mongos, 3 config servers, and data split across 3 shard servers, where each shard additionally has one secondary and one arbiter (3 x 2 = 6 more members), for 15 instances in total. These could run on separate machines, but test resources are limited here, so only 3 machines are used; multiple instances can share a machine as long as their ports differ.
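As a side note (a sketch, not part of the original setup): because any mongos can serve any request, applications typically list all three routers in their connection string so that one mongos going down does not break access. With the hosts planned above, a driver connection string could look like:

mongodb://10.163.97.15:20000,10.163.97.16:20000,10.163.97.17:20000/testdb

The driver then picks an available mongos from the list and fails over to another one if it becomes unreachable.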


2. Install MongoDB
Install MongoDB on each of the three machines:
[root@D2-POMS15 ~]# tar -xvzf mongodb-linux-x86_64-rhel62-3.4.7.tgz -C /usr/local/
[root@D2-POMS15 ~]# mv /usr/local/mongodb-linux-x86_64-rhel62-3.4.7/ /usr/local/mongodb

Configure the environment variables:
[root@D2-POMS15 ~]# vim .bash_profile
export PATH=$PATH:/usr/local/mongodb/bin/
[root@D2-POMS15 ~]# source .bash_profile


On each machine create six directories: conf, mongos, config, shard1, shard2, and shard3. Because mongos does not store data, it only needs a log directory.
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/conf
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/mongos/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/config/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/config/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard1/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard1/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard2/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard2/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard3/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard3/log
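If you prefer, the same directories can be created in one command per machine (an equivalent shortcut, assuming a bash shell):

[root@D2-POMS15 ~]# for d in conf mongos/log config/data config/log shard1/data shard1/log shard2/data shard2/log shard3/data shard3/log; do mkdir -p /usr/local/mongodb/$d; done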


3. Config servers
Starting with MongoDB 3.4, the config servers must themselves form a replica set, otherwise the cluster cannot be built.
Add the configuration file on all three servers:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/config.conf
## configuration file contents
pidfilepath = /usr/local/mongodb/config/log/configsrv.pid
dbpath = /usr/local/mongodb/config/data
logpath = /usr/local/mongodb/config/log/configsrv.log
logappend = true

bind_ip = 0.0.0.0
port = 21000
fork = true

#declare this is a config db of a cluster;
configsvr = true

# replica set name
replSet=configs

# maximum number of connections
maxConns=20000

Start the config server on each of the three machines:
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15368
child process started successfully, parent exiting

Log in to any one of the config servers and initialize the config server replica set:
[root@D2-POMS15 ~]# mongo --port 21000
> config = {
...    _id : "configs",
...     members : [
...         {_id : 0, host : "10.163.97.15:21000" },
...         {_id : 1, host : "10.163.97.16:21000" },
...         {_id : 2, host : "10.163.97.17:21000" }
...     ]
... }
{
        "_id" : "configs",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.163.97.15:21000"
                },
                {
                        "_id" : 1,
                        "host" : "10.163.97.16:21000"
                },
                {
                        "_id" : 2,
                        "host" : "10.163.97.17:21000"
                }
        ]
}
> rs.initiate(config)
{ "ok" : 1 }

Here, "_id" : "configs" must match the replSet name in the configuration file, and each "host" in "members" is the ip:port of one of the three nodes.
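To verify that the replica set actually came up, you can check the member states on any config server (a quick sanity check; the exact output will vary):

> rs.status().members.forEach(function(m) { print(m.name, m.stateStr) })

One member should report PRIMARY and the other two SECONDARY.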

4. Configure the shard replica sets (all three machines)
Set up the first shard replica set.
Add the configuration file:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard1.conf
# configuration file contents
pidfilepath = /usr/local/mongodb/shard1/log/shard1.pid
dbpath = /usr/local/mongodb/shard1/data
logpath = /usr/local/mongodb/shard1/log/shard1.log
logappend = true

bind_ip = 0.0.0.0
port = 27001
fork = true

# enable the built-in web/REST monitoring interface
httpinterface=true
rest=true

# replica set name
replSet=shard1

#declare this is a shard db of a cluster;
shardsvr = true

# maximum number of connections
maxConns=20000

Start the shard1 server on all three machines:
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15497
child process started successfully, parent exiting

Log in to one of the servers (not the arbiter node) and initialize the replica set:
[root@D2-POMS15 ~]# mongo --port 27001
# switch to the admin database
> use admin
switched to db admin
# Define the replica set configuration; "arbiterOnly": true on the third node marks it as the arbiter.
> config = {
...    _id : "shard1",
...     members : [
...         {_id : 0, host : "10.163.97.15:27001" },
...         {_id : 1, host : "10.163.97.16:27001" },
...         {_id : 2, host : "10.163.97.17:27001" , arbiterOnly: true }
...     ]
... }
{
        "_id" : "shard1",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.163.97.15:27001"
                },
                {
                        "_id" : 1,
                        "host" : "10.163.97.16:27001"
                },
                {
                        "_id" : 2,
                        "host" : "10.163.97.17:27001",
                        "arbiterOnly" : true
                }
        ]
}
# initialize the replica set with this configuration
> rs.initiate(config);
{ "ok" : 1 }


Set up the second shard replica set.
Add the configuration file:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard2.conf
# configuration file contents
pidfilepath = /usr/local/mongodb/shard2/log/shard2.pid
dbpath = /usr/local/mongodb/shard2/data
logpath = /usr/local/mongodb/shard2/log/shard2.log
logappend = true

bind_ip = 0.0.0.0
port = 27002
fork = true

# enable the built-in web/REST monitoring interface
httpinterface=true
rest=true

# replica set name
replSet=shard2

#declare this is a shard db of a cluster;
shardsvr = true

# maximum number of connections
maxConns=20000

Start the shard2 server on all three machines:
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15622
child process started successfully, parent exiting

Log in to one of the servers (not the arbiter node) and initialize the replica set:
[root@D2-POMS15 ~]# mongo --port 27002
> use admin
switched to db admin
> config = {
...    _id : "shard2",
...     members : [
...         {_id : 0, host : "10.163.97.15:27002"  , arbiterOnly: true },
...         {_id : 1, host : "10.163.97.16:27002" },
...         {_id : 2, host : "10.163.97.17:27002" }
...     ]
... }
{
        "_id" : "shard2",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.163.97.15:27002",
                        "arbiterOnly" : true
                },
                {
                        "_id" : 1,
                        "host" : "10.163.97.16:27002"
                },
                {
                        "_id" : 2,
                        "host" : "10.163.97.17:27002"
                }
        ]
}
> rs.initiate(config);
{ "ok" : 1 }

Set up the third shard replica set.
Add the configuration file:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard3.conf
# configuration file contents
pidfilepath = /usr/local/mongodb/shard3/log/shard3.pid
dbpath = /usr/local/mongodb/shard3/data
logpath = /usr/local/mongodb/shard3/log/shard3.log
logappend = true

bind_ip = 0.0.0.0
port = 27003
fork = true

# enable the built-in web/REST monitoring interface
httpinterface=true
rest=true

# replica set name
replSet=shard3

#declare this is a shard db of a cluster;
shardsvr = true

# maximum number of connections
maxConns=20000

Start the shard3 server on all three machines:
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15742
child process started successfully, parent exiting

Log in to one of the servers (not the arbiter node) and initialize the replica set:
[root@D2-POMS15 ~]# mongo --port 27003
> use admin
switched to db admin
> config = {
...    _id : "shard3",
...     members : [
...         {_id : 0, host : "10.163.97.15:27003" },
...         {_id : 1, host : "10.163.97.16:27003" , arbiterOnly: true},
...         {_id : 2, host : "10.163.97.17:27003" }
...     ]
... }
{
        "_id" : "shard3",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.163.97.15:27003"
                },
                {
                        "_id" : 1,
                        "host" : "10.163.97.16:27003",
                        "arbiterOnly" : true
                },
                {
                        "_id" : 2,
                        "host" : "10.163.97.17:27003"
                }
        ]
}
> rs.initiate(config);
{ "ok" : 1 }

At this point the config server and all shard servers are up and running:
[root@D2-POMS15 ~]# ps -ef | grep mongo | grep -v grep
root     15368     1  0 15:52 ?        00:00:07 mongod -f /usr/local/mongodb/conf/config.conf
root     15497     1  0 16:00 ?        00:00:04 mongod -f /usr/local/mongodb/conf/shard1.conf
root     15622     1  0 16:06 ?        00:00:02 mongod -f /usr/local/mongodb/conf/shard2.conf
root     15742     1  0 16:21 ?        00:00:00 mongod -f /usr/local/mongodb/conf/shard3.conf


5. Configure the mongos routers
Add the configuration file on all three servers:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/mongos.conf
# configuration file contents
pidfilepath = /usr/local/mongodb/mongos/log/mongos.pid
logpath = /usr/local/mongodb/mongos/log/mongos.log
logappend = true

bind_ip = 0.0.0.0
port = 20000
fork = true

# the config servers to connect to (there must be 1 or 3); "configs" is the config server replica set name
configdb = configs/10.163.97.15:21000,10.163.97.16:21000,10.163.97.17:21000

# maximum number of connections
maxConns=20000

Start the mongos server on all three machines:
[root@D2-POMS15 ~]# mongos -f /usr/local/mongodb/conf/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 20563
child process started successfully, parent exiting

[root@D2-POMS15 ~]# mongo --port 20000
mongos> db.stats()
{
        "raw" : {
                "shard1/10.163.97.15:27001,10.163.97.16:27001" : {
                        "db" : "admin",
                        "collections" : 1,
                        "views" : 0,
                        "objects" : 3,
                        "avgObjSize" : 146.66666666666666,
                        "dataSize" : 440,
                        "storageSize" : 36864,
                        "numExtents" : 0,
                        "indexes" : 2,
                        "indexSize" : 65536,
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : Timestamp(0, 0),
                                "electionId" : ObjectId("7fffffff0000000000000001")
                        }
                },
                "shard2/10.163.97.16:27002,10.163.97.17:27002" : {
                        "db" : "admin",
                        "collections" : 1,
                        "views" : 0,
                        "objects" : 2,
                        "avgObjSize" : 114,
                        "dataSize" : 228,
                        "storageSize" : 16384,
                        "numExtents" : 0,
                        "indexes" : 2,
                        "indexSize" : 32768,
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : Timestamp(0, 0),
                                "electionId" : ObjectId("7fffffff0000000000000001")
                        }
                },
                "shard3/10.163.97.15:27003,10.163.97.17:27003" : {
                        "db" : "admin",
                        "collections" : 1,
                        "views" : 0,
                        "objects" : 2,
                        "avgObjSize" : 114,
                        "dataSize" : 228,
                        "storageSize" : 16384,
                        "numExtents" : 0,
                        "indexes" : 2,
                        "indexSize" : 32768,
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : Timestamp(0, 0),
                                "electionId" : ObjectId("7fffffff0000000000000002")
                        }
                }
        },
        "objects" : 7,
        "avgObjSize" : 127.71428571428571,
        "dataSize" : 896,
        "storageSize" : 69632,
        "numExtents" : 0,
        "indexes" : 6,
        "indexSize" : 131072,
        "fileSize" : 0,
        "extentFreeList" : {
                "num" : 0,
                "totalSize" : 0
        },
        "ok" : 1
}


6. Enable sharding
The config servers, routers, and shard servers are all built now, but an application that connects to a mongos still cannot use sharding: the shards first have to be registered with the router so that sharding takes effect.
Log in to any mongos:
[root@D2-POMS15 ~]# mongo --port 20000
# switch to the admin database
mongos> use admin
switched to db admin
# register each shard replica set with the mongos router
mongos> sh.addShard("shard1/10.163.97.15:27001,10.163.97.16:27001,10.163.97.17:27001")
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> sh.addShard("shard2/10.163.97.15:27002,10.163.97.16:27002,10.163.97.17:27002")
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> sh.addShard("shard3/10.163.97.15:27003,10.163.97.16:27003,10.163.97.17:27003")
{ "shardAdded" : "shard3", "ok" : 1 }
# check the cluster status
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
  shards:
        {  "_id" : "shard1",  "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003",  "state" : 1 }
  active mongoses:
        "3.4.7" : 1
 autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
                Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:


7. Testing
The config servers, routers, shards, and replica sets are now all wired together, but the goal is for inserted data to be sharded automatically. Connect to a mongos and enable sharding for a specific database and collection.
[root@D2-POMS15 ~]# mongo --port 20000
mongos> use admin
switched to db admin
# enable sharding for the testdb database
mongos> db.runCommand({enablesharding :"testdb"});
{ "ok" : 1 }
# specify the collection to shard and its shard key
mongos> db.runCommand( { shardcollection : "testdb.table1",key : {id: 1} } )
{ "collectionsharded" : "testdb.table1", "ok" : 1 }
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
  shards:
        {  "_id" : "shard1",  "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003",  "state" : 1 }
  active mongoses:
        "3.4.7" : 1
 autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
                Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                4 : Success
  databases:
        {  "_id" : "testdb",  "primary" : "shard1",  "partitioned" : true }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)

This declares that the table1 collection in testdb should be sharded, distributing its data across shard1, shard2, and shard3 automatically by id. The step has to be done explicitly because not every MongoDB database and collection needs to be sharded.
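The same thing can also be done with the sh helper methods, which wrap the commands above (equivalent calls, shown here just for reference):

mongos> sh.enableSharding("testdb")
mongos> sh.shardCollection("testdb.table1", { id: 1 })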

Test the sharding:
# connect to a mongos server
[root@D2-POMS15 ~]# mongo --port 20000
# switch to testdb
mongos> use testdb
switched to db testdb
# insert test data
mongos> for(var i=1;i<=100000;i++){db.table1.insert({id:i,"test1":"testval1"})}
WriteResult({ "nInserted" : 1 })
# check the sharding status
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
  shards:
        {  "_id" : "shard1",  "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003",  "state" : 1 }
  active mongoses:
        "3.4.7" : 1
 autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
                Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                6 : Success
  databases:
        {  "_id" : "testdb",  "primary" : "shard1",  "partitioned" : true }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                                shard2  1
                                shard3  1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : 2 } on : shard2 Timestamp(2, 0)
                        { "id" : 2 } -->> { "id" : 20 } on : shard3 Timestamp(3, 0)
                        { "id" : 20 } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(3, 1)

As you can see, the chunks are spread very unevenly. The reason is that the default chunkSize is 64 MB and this test data set never reaches 64 MB, so very few splits happen. To make testing easier, reduce the chunkSize:
[root@D2-POMS15 ~]# mongo --port 20000
mongos> use config
switched to db config
mongos> db.settings.save( { _id:"chunksize", value: 1 } )
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : "chunksize" })
mongos> db.settings.find();
{ "_id" : "balancer", "stopped" : false, "mode" : "full" }
{ "_id" : "chunksize", "value" : 1 }

Re-run the test after the change:
mongos> use testdb
switched to db testdb
mongos> db.table1.drop();
true
mongos> use admin
switched to db admin
mongos> db.runCommand( { shardcollection : "testdb.table1",key : {id: 1} } )
{ "collectionsharded" : "testdb.table1", "ok" : 1 }
mongos> use testdb
switched to db testdb
mongos> for(var i=1;i<=100000;i++){db.table1.insert({id:i,"test1":"testval1"})}
WriteResult({ "nInserted" : 1 })
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
  shards:
        {  "_id" : "shard1",  "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003",  "state" : 1 }
  active mongoses:
        "3.4.7" : 1
 autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
                Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                14 : Success
  databases:
        {  "_id" : "testdb",  "primary" : "shard1",  "partitioned" : true }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  4
                                shard2  4
                                shard3  3
                        { "id" : { "$minKey" : 1 } } -->> { "id" : 2 } on : shard2 Timestamp(5, 1)
                        { "id" : 2 } -->> { "id" : 20 } on : shard3 Timestamp(6, 1)
                        { "id" : 20 } -->> { "id" : 9729 } on : shard1 Timestamp(7, 1)
                        { "id" : 9729 } -->> { "id" : 21643 } on : shard1 Timestamp(3, 3)
                        { "id" : 21643 } -->> { "id" : 31352 } on : shard2 Timestamp(4, 2)
                        { "id" : 31352 } -->> { "id" : 43021 } on : shard2 Timestamp(4, 3)
                        { "id" : 43021 } -->> { "id" : 52730 } on : shard3 Timestamp(5, 2)
                        { "id" : 52730 } -->> { "id" : 64695 } on : shard3 Timestamp(5, 3)
                        { "id" : 64695 } -->> { "id" : 74404 } on : shard1 Timestamp(6, 2)
                        { "id" : 74404 } -->> { "id" : 87088 } on : shard1 Timestamp(6, 3)
                        { "id" : 87088 } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(7, 0)

mongos> db.table1.stats()
{
        "sharded" : true,
        "capped" : false,
        "ns" : "testdb.table1",
        "count" : 100000,
        "size" : 5400000,
        "storageSize" : 1736704,
        "totalIndexSize" : 2191360,
        "indexSizes" : {
                "_id_" : 946176,
                "id_1" : 1245184
        },
        "avgObjSize" : 54,
        "nindexes" : 2,
        "nchunks" : 11,
        "shards" : {
                "shard1" : {
                        "ns" : "testdb.table1",
                        "size" : 2376864,
                        "count" : 44016,
                        "avgObjSize" : 54,
                        "storageSize" : 753664,
                        "capped" : false,
                        "nindexes" : 2,
                        "totalIndexSize" : 933888,
                        "indexSizes" : {
                                "_id_" : 405504,
                                "id_1" : 528384
                        },
                        "ok" : 1
                },
                "shard2" : {
                        "ns" : "testdb.table1",
                        "size" : 1851768,
                        "count" : 34292,
                        "avgObjSize" : 54,
                        "storageSize" : 606208,
                        "capped" : false,
                        "nindexes" : 2,
                        "totalIndexSize" : 774144,
                        "indexSizes" : {
                                "_id_" : 335872,
                                "id_1" : 438272
                        },
                        "ok" : 1
                },
                "shard3" : {
                        "ns" : "testdb.table1",
                        "size" : 1171368,
                        "count" : 21692,
                        "avgObjSize" : 54,
                        "storageSize" : 376832,
                        "capped" : false,
                        "nindexes" : 2,
                        "totalIndexSize" : 483328,
                        "indexSizes" : {
                                "_id_" : 204800,
                                "id_1" : 278528
                        },
                        "ok" : 1
                }
        },
        "ok" : 1
}

The data is now distributed much more evenly.
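For a quick per-shard summary you can also run getShardDistribution on the collection (another way to check the balance):

mongos> db.table1.getShardDistribution()

It prints the data size, document count, and chunk count for each shard, plus each shard's share of the total data and documents.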


8. Ongoing operations
Startup and shutdown
The startup order is: config servers first, then the shard servers, and finally mongos.
mongod -f /usr/local/mongodb/conf/config.conf
mongod -f /usr/local/mongodb/conf/shard1.conf
mongod -f /usr/local/mongodb/conf/shard2.conf
mongod -f /usr/local/mongodb/conf/shard3.conf
mongos -f /usr/local/mongodb/conf/mongos.conf
To stop everything, simply kill all the processes with killall:
killall mongod
killall mongos
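killall works, but for a cleaner stop you can ask each process to shut itself down (a sketch of one alternative; typically mongos first, then the shards, then the config servers):

[root@D2-POMS15 ~]# mongo --port 20000 admin --eval "db.shutdownServer()"
[root@D2-POMS15 ~]# mongod --shutdown --dbpath /usr/local/mongodb/shard1/data

db.shutdownServer() works against both mongos and mongod; the --shutdown option is mongod-only and uses the dbpath to locate the running process.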

References:
https://docs.mongodb.com/manual/sharding/
http://www.lanceyan.com/tech/arch/mongodb_shard1.html
http://www.cnblogs.com/ityouknow/p/7344005.html