
MongoDB 3.2.7 for RHEL 6.4 Replica Set + Sharded Cluster Deployment

2016-11-15 15:22
Today a colleague reported a problem he hit while deploying a MongoDB replica-set sharded cluster: initializing the shards appeared to require hostnames (that is, something equivalent to DNS resolution), which would make the DNS server a single point of failure for the cluster. To verify this, I deployed a MongoDB 3.2.7 replica-set sharded cluster on RHEL 6.4 using IP addresses only. The result: if the replica sets are initialized with IP addresses, the shards must be initialized with IP addresses as well; if /etc/hosts or DNS resolution is used, then both the replica sets and the shards are initialized with hostnames or domain names. Either way, the MongoDB 3.2.7 for RHEL 6.4 replica-set sharded cluster deploys successfully. My personal recommendation is still to use DNS or /etc/hosts name resolution, because a hostname can stay unchanged while a host's IP address is quite likely to change.
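For the name-resolution variant, the usual approach is an identical /etc/hosts file on every node. A minimal sketch using this experiment's addresses and hostnames (an assumption about your environment; adjust as needed):

#/etc/hosts entries on all three nodes (sketch)
192.168.144.111 arbiter
192.168.144.120 mongo1
192.168.144.130 mongo2

With these entries in place, every IP:port in the commands below could be written as hostname:port instead.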

The deployment procedure for the MongoDB 3.2.7 for RHEL 6.4 replica-set sharded cluster follows.

First, make sure the RHEL 6.4 environment can support a 3.2.7 installation. For the single-instance MongoDB 3.2.7 installation procedure and the problems you may run into, see:

MongoDB 3.2 for RHEL6.4 installation (http://blog.itpub.net/29357786/viewspace-2119891/)

This experiment involves three servers:

Role: replica-set arbiter / sharded-cluster config server (192.168.144.111)

[root@arbiter ~]# hostname
arbiter
[root@arbiter ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.4 (Santiago)
[root@arbiter ~]# 

Role: firstset replica-set primary / cluster shard 1 (192.168.144.130)

[root@mongo2 ~]# hostname
mongo2
[root@mongo2 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.4 (Santiago)
[root@mongo2 ~]# 

Role: secondset replica-set primary / cluster shard 2 (192.168.144.120)

[root@mongo1 ~]# hostname
mongo1
[root@mongo1 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.4 (Santiago)
[root@mongo1 ~]# 

Directories to create on the replica-set arbiter / config server node (192.168.144.111):

#data directories

/opt/mongo/data/dns_arbiter1
/opt/mongo/data/dns_arbiter2
/opt/mongo/data/dns_sdconfig1
/opt/mongo/data/dns_sdconfig2

#log directories

/opt/mongo/logs/dns_aribter1
/opt/mongo/logs/dns_aribter2
/opt/mongo/logs/dns_config1
/opt/mongo/logs/dns_config2

Directories to create on the firstset primary / shard 1 node (192.168.144.130):

#data directories

/opt/mongo/data/dns_repset1
/opt/mongo/data/dns_repset2
/opt/mongo/data/dns_shard2

#log directories

/opt/mongo/logs/dns_sd2
/opt/mongo/logs/firstset
/opt/mongo/logs/secondset

Directories to create on the secondset primary / shard 2 node (192.168.144.120):

#data directories

/opt/mongo/data/dns_repset1
/opt/mongo/data/dns_repset2
/opt/mongo/data/dns_shard1

#log directories

/opt/mongo/logs/dns_sd1
/opt/mongo/logs/firstset
/opt/mongo/logs/secondset
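All of these can be created in one pass per node with mkdir -p; for example, on the arbiter / config server node (run the analogous commands with each node's own paths on mongo1 and mongo2):

#create the data and log directories on the arbiter node
mkdir -p /opt/mongo/data/{dns_arbiter1,dns_arbiter2,dns_sdconfig1,dns_sdconfig2}
mkdir -p /opt/mongo/logs/{dns_aribter1,dns_aribter2,dns_config1,dns_config2}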

Step 1: initialize replica set 1 (firstset)

#Start the mongod instances for replica set 1; commands per node

On arbiter:

mongod --dbpath /opt/mongo/data/dns_arbiter1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter1/aribter1.log --logappend --nojournal --directoryperdb

On mongo1:

mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb

On mongo2:

mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb

#Execution logs on the three nodes

[mongo@arbiter logs]$ mongod --dbpath /opt/mongo/data/dns_arbiter1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter1/aribter1.log --logappend --nojournal --directoryperdb
2016-11-14T18:53:14.653-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T18:53:14.653-0800 I CONTROL  [main] **          enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 6566
child process started successfully, parent exiting
[mongo@arbiter logs]$

[mongo@mongo1 logs]$ mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb
2016-11-14T18:53:26.838-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T18:53:26.838-0800 I CONTROL  [main] **          enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 10478
child process started successfully, parent exiting
[mongo@mongo1 logs]$ 

[mongo@mongo2 logs]$ mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb
2016-11-14T18:53:43.808-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T18:53:43.808-0800 I CONTROL  [main] **          enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 6374
child process started successfully, parent exiting
[mongo@mongo2 logs]$

#Initialize replica set 1 (firstset)

#Commands to run on mongo2

mongo --port 10001

config={_id:"firstset",members:[]}
config.members.push({_id:0,host:"192.168.144.120:10001"})
config.members.push({_id:1,host:"192.168.144.130:10001"})
config.members.push({_id:2,host:"192.168.144.111:10001",arbiterOnly:true})
rs.initiate(config);

#Execution log on mongo2

[mongo@mongo2 logs]$ mongo --port 10001
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:10001/test
Server has startup warnings: 
2016-11-14T18:53:43.808-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T18:53:43.808-0800 I CONTROL  [main] **          enabling http interface
2016-11-14T18:53:43.921-0800 I CONTROL  [initandlisten] 
2016-11-14T18:53:43.921-0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-11-14T18:53:43.921-0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-11-14T18:53:43.921-0800 I CONTROL  [initandlisten] 
2016-11-14T18:53:43.921-0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-11-14T18:53:43.921-0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-11-14T18:53:43.921-0800 I CONTROL  [initandlisten] 
> config={_id:"firstset",members:[]}
{ "_id" : "firstset", "members" : [ ] }
> config.members.push({_id:0,host:"192.168.144.120:10001"})
1
> config.members.push({_id:1,host:"192.168.144.130:10001"})
2
> config.members.push({_id:2,host:"192.168.144.111:10001",arbiterOnly:true})
3
> rs.initiate(config);
{ "ok" : 1 }
firstset:OTHER> use dns_testdb
switched to db dns_testdb

firstset:PRIMARY> rs.isMaster()
{
	"hosts" : [
		"192.168.144.120:10001",
		"192.168.144.130:10001"
	],
	"arbiters" : [
		"192.168.144.111:10001"
	],
	"setName" : "firstset",
	"setVersion" : 1,
	"ismaster" : true,
	"secondary" : false,
	"primary" : "192.168.144.130:10001",
	"me" : "192.168.144.130:10001",
	"electionId" : ObjectId("7fffffff0000000000000001"),
	"maxBsonObjectSize" : 16777216,
	"maxMessageSizeBytes" : 48000000,
	"maxWriteBatchSize" : 1000,
	"localTime" : ISODate("2016-11-15T03:07:06.392Z"),
	"maxWireVersion" : 4,
	"minWireVersion" : 0,
	"ok" : 1
}
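Besides rs.isMaster(), per-member state can be confirmed with rs.status(); a quick check from the same session (output not captured in this run):

#print each member's name and state; expect one PRIMARY, one SECONDARY, one ARBITER
rs.status().members.forEach(function(m){ print(m.name + "  " + m.stateStr); })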

#Insert 100000 documents into the firstset replica set via mongo2


firstset:PRIMARY> animal = ["dog", "tiger", "cat", "lion", "elephant", "bird", "horse", "pig", "rabbit", "cow", "dragon", "snake"];
[
	"dog",
	"tiger",
	"cat",
	"lion",
	"elephant",
	"bird",
	"horse",
	"pig",
	"rabbit",
	"cow",
	"dragon",
	"snake"
]

firstset:PRIMARY> for(var i=0; i<100000; i++){
...   name = animal[Math.floor(Math.random()*animal.length)];
...   user_id = i;
...   boolean = [true, false][Math.floor(Math.random()*2)];
...   added_at = new Date();
...   number = Math.floor(Math.random()*10001);
...   db.test_collection.save({"name":name, "user_id":user_id, "boolean": boolean, "added_at":added_at, "number":number });
... }
WriteResult({ "nInserted" : 1 })

firstset:PRIMARY> show collections

test_collection

firstset:PRIMARY> db.test_collection.findOne();
{
	"_id" : ObjectId("582a7c095490e553bc98919e"),
	"name" : "snake",
	"user_id" : 0,
	"boolean" : false,
	"added_at" : ISODate("2016-11-15T03:07:53.561Z"),
	"number" : 746
}

firstset:PRIMARY> db.test_collection.count();
100000
firstset:PRIMARY> show dbs
dns_testdb  0.004GB
local       0.006GB
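An aside: the save() loop above issues 100000 individual writes. The 3.2 shell's Bulk API would load the same data in far fewer round trips; a sketch with the same fields as above:

var bulk = db.test_collection.initializeUnorderedBulkOp();
for (var i = 0; i < 100000; i++) {
    bulk.insert({"name": animal[Math.floor(Math.random()*animal.length)], "user_id": i, "boolean": [true, false][Math.floor(Math.random()*2)], "added_at": new Date(), "number": Math.floor(Math.random()*10001)});
}
bulk.execute(); //the shell groups the inserts into batches instead of one write per document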

Step 2: initialize sharding

#Start the config server processes; commands per node

On arbiter:

mongod --configsvr --dbpath /opt/mongo/data/dns_sdconfig1 --port 20001 --fork --logpath /opt/mongo/logs/dns_config1/config1.log --logappend

On mongo1:

mongod --configsvr --dbpath /opt/mongo/data/dns_shard1 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd1/sd1_mymongo1.log --logappend

On mongo2:

mongod --configsvr --dbpath /opt/mongo/data/dns_shard2 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd2/sd1_mymongo2.log --logappend

#Execution logs on the three nodes

[mongo@arbiter dns_config1]$ mongod --configsvr --dbpath /opt/mongo/data/dns_sdconfig1 --port 20001 --fork --logpath /opt/mongo/logs/dns_config1/config1.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 7038
child process started successfully, parent exiting
[mongo@arbiter dns_config1]$

[mongo@mongo1 dns_shard1]$ mongod --configsvr --dbpath /opt/mongo/data/dns_shard1 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd1/sd1_mymongo1.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 11566
child process started successfully, parent exiting
[mongo@mongo1 dns_shard1]$

[mongo@mongo2 logs]$ mongod --configsvr --dbpath /opt/mongo/data/dns_shard2 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd2/sd1_mymongo2.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 6670
child process started successfully, parent exiting
[mongo@mongo2 logs]$

#Start the mongos processes on mongo1 and mongo2; the command for both nodes

mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend

#Execution logs on the two nodes

[mongo@mongo1 logs]$ mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 14689
child process started successfully, parent exiting
[mongo@mongo1 logs]$

[mongo@mongo2 logs]$ mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 7093
child process started successfully, parent exiting
[mongo@mongo2 logs]$

#Configure sharding from mongo1: add firstset as a shard

#Commands to run on mongo1

mongo --port 27017

use admin

db.runCommand( { addShard : "firstset/192.168.144.120:10001,192.168.144.130:10001,192.168.144.111:10001" } )

#Execution log on mongo1

[mongo@mongo1 logs]$ mongo --port 27017
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:27017/test
mongos> use admin
switched to db admin
mongos> db.runCommand( { addShard : "firstset/192.168.144.120:10001,192.168.144.130:10001,192.168.144.111:10001" } )
{ "shardAdded" : "firstset", "ok" : 1 }
mongos> 

Step 3: initialize replica set 2 (secondset)

#Start the mongod instances for replica set 2; commands per node

On arbiter:

mongod --dbpath /opt/mongo/data/dns_arbiter2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter2/aribter2.log --logappend --nojournal --directoryperdb

On mongo1:

mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb

On mongo2:

mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb

#Execution logs on the three nodes

[mongo@arbiter dns_aribter2]$ mongod --dbpath /opt/mongo/data/dns_arbiter2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter2/aribter2.log --logappend --nojournal --directoryperdb
2016-11-14T20:32:29.478-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T20:32:29.478-0800 I CONTROL  [main] **          enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 10192
child process started successfully, parent exiting
[mongo@arbiter dns_aribter2]$ 

[mongo@mongo1 secondset]$ mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb
2016-11-14T20:32:38.786-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T20:32:38.786-0800 I CONTROL  [main] **          enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 15770
child process started successfully, parent exiting
[mongo@mongo1 secondset]$

[mongo@mongo2 secondset]$ mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb
2016-11-14T20:32:53.327-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T20:32:53.327-0800 I CONTROL  [main] **          enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 7344
child process started successfully, parent exiting
[mongo@mongo2 secondset]$

#Initialize replica set 2 (secondset)

#Commands to run on mongo1

mongo 192.168.144.120:30001/admin

config={_id:"secondset",members:[]}
config.members.push({_id:0,host:"192.168.144.120:30001"})
config.members.push({_id:1,host:"192.168.144.130:30001"})
config.members.push({_id:2,host:"192.168.144.111:30001",arbiterOnly:true})
rs.initiate(config);

#Execution log on mongo1 (the two ReferenceError lines below look baffling at first; they are a paste artifact, a "firstset:PRIMARY>" prompt sent to the shell as input, and they did not affect the procedure)

[mongo@mongo1 secondset]$ mongo 192.168.144.120:30001/admin
MongoDB shell version: 3.2.7
connecting to: 192.168.144.120:30001/admin
Server has startup warnings: 
2016-11-14T20:32:38.786-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T20:32:38.786-0800 I CONTROL  [main] **          enabling http interface
2016-11-14T20:32:38.858-0800 I CONTROL  [initandlisten] 
2016-11-14T20:32:38.859-0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-11-14T20:32:38.859-0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-11-14T20:32:38.859-0800 I CONTROL  [initandlisten] 
2016-11-14T20:32:38.859-0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-11-14T20:32:38.859-0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-11-14T20:32:38.859-0800 I CONTROL  [initandlisten] 
> config={_id:"secondset",members:[]}
{ "_id" : "secondset", "members" : [ ] }
> config.members.push({_id:0,host:"192.168.144.120:30001"})
1
> config.members.push({_id:1,host:"192.168.144.130:30001"})
2
> config.members.push({_id:2,host:"192.168.144.111:30001",arbiterOnly:true})
3
> rs.initiate(config);
{ "ok" : 1 }

secondset:OTHER> firstset:PRIMARY> rs.isMaster()
2016-11-14T20:33:32.215-0800 E QUERY    [thread1] ReferenceError: PRIMARY is not defined :
@(shell):1:10
secondset:SECONDARY> firstset:PRIMARY> rs.isMaster();
2016-11-14T20:33:40.892-0800 E QUERY    [thread1] ReferenceError: PRIMARY is not defined :
@(shell):1:10
secondset:PRIMARY> rs.isMaster();

{
	"hosts" : [
		"192.168.144.120:30001",
		"192.168.144.130:30001"
	],
	"arbiters" : [
		"192.168.144.111:30001"
	],
	"setName" : "secondset",
	"setVersion" : 1,
	"ismaster" : true,
	"secondary" : false,
	"primary" : "192.168.144.120:30001",
	"me" : "192.168.144.120:30001",
	"electionId" : ObjectId("7fffffff0000000000000001"),
	"maxBsonObjectSize" : 16777216,
	"maxMessageSizeBytes" : 48000000,
	"maxWriteBatchSize" : 1000,
	"localTime" : ISODate("2016-11-15T04:34:35.210Z"),
	"maxWireVersion" : 4,
	"minWireVersion" : 0,
	"ok" : 1
}

secondset:PRIMARY> 

Step 4: add secondset to the sharded cluster

#Commands to run on mongo1

mongo --port 27017

use admin

db.runCommand( { addShard : "secondset/192.168.144.120:30001,192.168.144.130:30001,192.168.144.111:30001" } )

#Execution log on mongo1

[mongo@mongo1 logs]$ mongo --port 27017
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:27017/test
mongos> use admin
mongos> db.runCommand( { addShard : "secondset/192.168.144.120:30001,192.168.144.130:30001,192.168.144.111:30001" } )
{ "shardAdded" : "secondset", "ok" : 1 }
mongos> db.runCommand({listShards:1})
{
	"shards" : [
		{
			"_id" : "firstset",
			"host" : "firstset/192.168.144.120:10001,192.168.144.130:10001"
		},
		{
			"_id" : "secondset",
			"host" : "secondset/192.168.144.120:30001,192.168.144.130:30001"
		}
	],
	"ok" : 1
}
mongos> 

(Note that although the arbiter was included in each addShard seed string, mongos records only the data-bearing members in the shard's host string, as the listShards output shows.)

Step 5: enable sharding on the test database dns_testdb and shard the collection

#Commands to run on mongo1

mongo --port 27017

use admin

sh.enableSharding("dns_testdb");

db.runCommand({"shardcollection":"dns_testdb.test_collection","key":{"_id":1}});

#Execution log on mongo1

[mongo@mongo1 logs]$ mongo --port 27017
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:27017/test
mongos> use admin
switched to db admin
mongos> sh.enableSharding("dns_testdb");
{ "ok" : 1 }
mongos> db.runCommand({"shardcollection":"dns_testdb.test_collection","key":{"_id":1}});
{ "collectionsharded" : "dns_testdb.test_collection", "ok" : 1 }
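A side note beyond the original test: with a ranged shard key on an ever-increasing ObjectId _id, new inserts all land in the top chunk on one shard and must be migrated away by the balancer afterwards. MongoDB 3.2 also supports hashed shard keys, which spread inserts across shards from the start; since a shard key cannot be changed later, that choice is made at sharding time, e.g.:

sh.shardCollection("dns_testdb.test_collection", { "_id" : "hashed" })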

#Inspect the replica-set sharded cluster

mongos> use config
switched to db config
mongos>  db.shards.find();
{ "_id" : "firstset", "host" : "firstset/192.168.144.120:10001,192.168.144.130:10001" }
{ "_id" : "secondset", "host" : "secondset/192.168.144.120:30001,192.168.144.130:30001" }
mongos>

mongos> use config
switched to db config
mongos> db.printShardingStatus()
--- Sharding Status --- 
  sharding version: {
	"_id" : 1,
	"minCompatibleVersion" : 5,
	"currentVersion" : 6,
	"clusterId" : ObjectId("582a888bd617c6f7926f8843")
}
  shards:
	{  "_id" : "firstset",  "host" : "firstset/192.168.144.120:10001,192.168.144.130:10001" }
	{  "_id" : "secondset",  "host" : "secondset/192.168.144.120:30001,192.168.144.130:30001" }
  active mongoses:
	"3.2.7" : 2
  balancer:
	Currently enabled:  yes
	Currently running:  yes
		Balancer lock taken at Mon Nov 14 2016 21:34:45 GMT-0800 (PST) by mongo2:27017:1479182487:368039610:Balancer:970925433
	Collections with active migrations: 
		dns_testdb.test_collection started at Mon Nov 14 2016 21:34:45 GMT-0800 (PST)
	Failed balancer rounds in last 5 attempts:  0
	Migration Results for the last 24 hours: 
		6 : Success
  databases:
	{  "_id" : "dns_testdb",  "primary" : "firstset",  "partitioned" : true }
		dns_testdb.test_collection
			shard key: { "_id" : 1 }
			unique: false
			balancing: true
			chunks:
				firstset	13
				secondset	6
			{ "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("582a7c0c5490e553bc98a683") } on : secondset Timestamp(2, 0) 
			{ "_id" : ObjectId("582a7c0c5490e553bc98a683") } -->> { "_id" : ObjectId("582a7c0e5490e553bc98bb69") } on : secondset Timestamp(3, 0) 
			{ "_id" : ObjectId("582a7c0e5490e553bc98bb69") } -->> { "_id" : ObjectId("582a7c115490e553bc98d04f") } on : secondset Timestamp(4, 0) 
			{ "_id" : ObjectId("582a7c115490e553bc98d04f") } -->> { "_id" : ObjectId("582a7c135490e553bc98e535") } on : secondset Timestamp(5, 0) 
			{ "_id" : ObjectId("582a7c135490e553bc98e535") } -->> { "_id" : ObjectId("582a7c155490e553bc98fa1b") } on : secondset Timestamp(6, 0) 
			{ "_id" : ObjectId("582a7c155490e553bc98fa1b") } -->> { "_id" : ObjectId("582a7c175490e553bc990f01") } on : secondset Timestamp(7, 0) 
			{ "_id" : ObjectId("582a7c175490e553bc990f01") } -->> { "_id" : ObjectId("582a7c1a5490e553bc9923e7") } on : firstset Timestamp(7, 1) 
			{ "_id" : ObjectId("582a7c1a5490e553bc9923e7") } -->> { "_id" : ObjectId("582a7c1c5490e553bc9938cd") } on : firstset Timestamp(1, 7) 
			{ "_id" : ObjectId("582a7c1c5490e553bc9938cd") } -->> { "_id" : ObjectId("582a7c1e5490e553bc994db3") } on : firstset Timestamp(1, 8) 
			{ "_id" : ObjectId("582a7c1e5490e553bc994db3") } -->> { "_id" : ObjectId("582a7c215490e553bc996299") } on : firstset Timestamp(1, 9) 
			{ "_id" : ObjectId("582a7c215490e553bc996299") } -->> { "_id" : ObjectId("582a7c235490e553bc99777f") } on : firstset Timestamp(1, 10) 
			{ "_id" : ObjectId("582a7c235490e553bc99777f") } -->> { "_id" : ObjectId("582a7c255490e553bc998c65") } on : firstset Timestamp(1, 11) 
			{ "_id" : ObjectId("582a7c255490e553bc998c65") } -->> { "_id" : ObjectId("582a7c275490e553bc99a14b") } on : firstset Timestamp(1, 12) 
			{ "_id" : ObjectId("582a7c275490e553bc99a14b") } -->> { "_id" : ObjectId("582a7c2a5490e553bc99b631") } on : firstset Timestamp(1, 13) 
			{ "_id" : ObjectId("582a7c2a5490e553bc99b631") } -->> { "_id" : ObjectId("582a7c2c5490e553bc99cb17") } on : firstset Timestamp(1, 14) 
			{ "_id" : ObjectId("582a7c2c5490e553bc99cb17") } -->> { "_id" : ObjectId("582a7c2e5490e553bc99dffd") } on : firstset Timestamp(1, 15) 
			{ "_id" : ObjectId("582a7c2e5490e553bc99dffd") } -->> { "_id" : ObjectId("582a7c305490e553bc99f4e3") } on : firstset Timestamp(1, 16) 
			{ "_id" : ObjectId("582a7c305490e553bc99f4e3") } -->> { "_id" : ObjectId("582a7c335490e553bc9a09c9") } on : firstset Timestamp(1, 17) 
			{ "_id" : ObjectId("582a7c335490e553bc9a09c9") } -->> { "_id" : { "$maxKey" : 1 } } on : firstset Timestamp(1, 18) 
mongos> 

mongos> db.printShardingStatus()
--- Sharding Status --- 
  sharding version: {
	"_id" : 1,
	"minCompatibleVersion" : 5,
	"currentVersion" : 6,
	"clusterId" : ObjectId("582a888bd617c6f7926f8843")
}
  shards:
	{  "_id" : "firstset",  "host" : "firstset/192.168.144.120:10001,192.168.144.130:10001" }
	{  "_id" : "secondset",  "host" : "secondset/192.168.144.120:30001,192.168.144.130:30001" }
  active mongoses:
	"3.2.7" : 2
  balancer:
	Currently enabled:  yes
	Currently running:  no
	Failed balancer rounds in last 5 attempts:  0
	Migration Results for the last 24 hours: 
		9 : Success
  databases:
	{  "_id" : "dns_testdb",  "primary" : "firstset",  "partitioned" : true }
		dns_testdb.test_collection
			shard key: { "_id" : 1 }
			unique: false
			balancing: true
			chunks:
				firstset	10
				secondset	9
			{ "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("582a7c0c5490e553bc98a683") } on : secondset Timestamp(2, 0) 
			{ "_id" : ObjectId("582a7c0c5490e553bc98a683") } -->> { "_id" : ObjectId("582a7c0e5490e553bc98bb69") } on : secondset Timestamp(3, 0) 
			{ "_id" : ObjectId("582a7c0e5490e553bc98bb69") } -->> { "_id" : ObjectId("582a7c115490e553bc98d04f") } on : secondset Timestamp(4, 0) 
			{ "_id" : ObjectId("582a7c115490e553bc98d04f") } -->> { "_id" : ObjectId("582a7c135490e553bc98e535") } on : secondset Timestamp(5, 0) 
			{ "_id" : ObjectId("582a7c135490e553bc98e535") } -->> { "_id" : ObjectId("582a7c155490e553bc98fa1b") } on : secondset Timestamp(6, 0) 
			{ "_id" : ObjectId("582a7c155490e553bc98fa1b") } -->> { "_id" : ObjectId("582a7c175490e553bc990f01") } on : secondset Timestamp(7, 0) 
			{ "_id" : ObjectId("582a7c175490e553bc990f01") } -->> { "_id" : ObjectId("582a7c1a5490e553bc9923e7") } on : secondset Timestamp(8, 0) 
			{ "_id" : ObjectId("582a7c1a5490e553bc9923e7") } -->> { "_id" : ObjectId("582a7c1c5490e553bc9938cd") } on : secondset Timestamp(9, 0) 
			{ "_id" : ObjectId("582a7c1c5490e553bc9938cd") } -->> { "_id" : ObjectId("582a7c1e5490e553bc994db3") } on : secondset Timestamp(10, 0) 
			{ "_id" : ObjectId("582a7c1e5490e553bc994db3") } -->> { "_id" : ObjectId("582a7c215490e553bc996299") } on : firstset Timestamp(10, 1) 
			{ "_id" : ObjectId("582a7c215490e553bc996299") } -->> { "_id" : ObjectId("582a7c235490e553bc99777f") } on : firstset Timestamp(1, 10) 
			{ "_id" : ObjectId("582a7c235490e553bc99777f") } -->> { "_id" : ObjectId("582a7c255490e553bc998c65") } on : firstset Timestamp(1, 11) 
			{ "_id" : ObjectId("582a7c255490e553bc998c65") } -->> { "_id" : ObjectId("582a7c275490e553bc99a14b") } on : firstset Timestamp(1, 12) 
			{ "_id" : ObjectId("582a7c275490e553bc99a14b") } -->> { "_id" : ObjectId("582a7c2a5490e553bc99b631") } on : firstset Timestamp(1, 13) 
			{ "_id" : ObjectId("582a7c2a5490e553bc99b631") } -->> { "_id" : ObjectId("582a7c2c5490e553bc99cb17") } on : firstset Timestamp(1, 14) 
			{ "_id" : ObjectId("582a7c2c5490e553bc99cb17") } -->> { "_id" : ObjectId("582a7c2e5490e553bc99dffd") } on : firstset Timestamp(1, 15) 
			{ "_id" : ObjectId("582a7c2e5490e553bc99dffd") } -->> { "_id" : ObjectId("582a7c305490e553bc99f4e3") } on : firstset Timestamp(1, 16) 
			{ "_id" : ObjectId("582a7c305490e553bc99f4e3") } -->> { "_id" : ObjectId("582a7c335490e553bc9a09c9") } on : firstset Timestamp(1, 17) 
			{ "_id" : ObjectId("582a7c335490e553bc9a09c9") } -->> { "_id" : { "$maxKey" : 1 } } on : firstset Timestamp(1, 18) 
mongos> 
This completes the verification. Summary: a replica-set sharded cluster can be deployed using only IP addresses, but note that if you use IPs, you must use them consistently in both the replica-set and the shard configuration.
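For the recommended name-resolution variant, the same initialization simply uses hostnames throughout; a sketch for firstset, assuming the /etc/hosts entries shown at the top of this post:

config={_id:"firstset",members:[]}
config.members.push({_id:0,host:"mongo1:10001"})
config.members.push({_id:1,host:"mongo2:10001"})
config.members.push({_id:2,host:"arbiter:10001",arbiterOnly:true})
rs.initiate(config);
#and correspondingly when adding the shard:
#db.runCommand( { addShard : "firstset/mongo1:10001,mongo2:10001,arbiter:10001" } )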

Appendix: the mongod/mongos processes on the three nodes after the replica-set sharded cluster deployment is complete:

[mongo@arbiter ~]$ ps -ef|grep mongo
root      6125  5922  0 18:17 pts/5    00:00:00 su - mongo
mongo     6126  6125  0 18:17 pts/5    00:00:00 -bash
mongo     6566     1  0 18:53 ?        00:01:55 mongod --dbpath /opt/mongo/data/dns_arbiter1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter1/aribter1.log --logappend --nojournal --directoryperdb
mongo     7038     1  0 19:22 ?        00:01:47 mongod --configsvr --dbpath /opt/mongo/data/dns_sdconfig1 --port 20001 --fork --logpath /opt/mongo/logs/dns_config1/config1.log --logappend
mongo    10192     1  0 20:32 ?        00:00:56 mongod --dbpath /opt/mongo/data/dns_arbiter2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter2/aribter2.log --logappend --nojournal --directoryperdb
mongo    12503  6126  7 23:03 pts/5    00:00:00 ps -ef
mongo    12504  6126  0 23:03 pts/5    00:00:00 grep mongo
[mongo@arbiter ~]$

[root@mongo1 ~]# ps -ef|grep mongo
root      9467  9040  0 18:20 pts/4    00:00:00 su - mongo
mongo     9468  9467  0 18:20 pts/4    00:00:00 -bash
mongo    10478     1  1 18:53 ?        00:04:20 mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb
mongo    11566     1  0 19:24 ?        00:01:29 mongod --configsvr --dbpath /opt/mongo/data/dns_shard1 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd1/sd1_mymongo1.log --logappend
mongo    14689     1  0 20:01 ?        00:00:35 mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend
mongo    14793  9468  0 20:02 pts/4    00:00:00 mongo --port 27017
root     15383 15365  0 20:21 pts/0    00:00:00 su - mongo
mongo    15384 15383  0 20:21 pts/0    00:00:00 -bash
mongo    15770     1  1 20:32 ?        00:01:55 mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb
mongo    15796 15384  0 20:32 pts/0    00:00:00 mongo 192.168.144.120:30001/admin
root     20392 18350  0 23:03 pts/1    00:00:00 grep mongo
[root@mongo1 ~]# 

[root@mongo2 ~]# ps -ef|grep mongo
root      6187  3834  0 18:18 pts/1    00:00:00 su - mongo
mongo     6188  6187  0 18:18 pts/1    00:00:00 -bash
mongo     6374     1  1 18:53 ?        00:04:26 mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb
mongo     6401  6188  0 18:55 pts/1    00:00:28 mongo --port 10001
root      6589  6534  0 19:16 pts/2    00:00:00 su - mongo
mongo     6590  6589  0 19:16 pts/2    00:00:00 -bash
mongo     6670     1  0 19:25 ?        00:01:26 mongod --configsvr --dbpath /opt/mongo/data/dns_shard2 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd2/sd1_mymongo2.log --logappend
mongo     7093     1  0 20:01 ?        00:00:34 mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend
mongo     7344     1  1 20:32 ?        00:01:53 mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb
mongo     7666  6590  0 21:11 pts/2    00:00:00 mongo --port 27017
root      8253  7909  0 23:04 pts/0    00:00:00 grep mongo
[root@mongo2 ~]# 