
MongoDB管理与开发精要 (by 红丸) 22.6 Sharding: Administration and Maintenance

2012-06-23 18:29



22.6 Administering and Maintaining Sharding

22.6.1 Listing All Shard Servers

> db.runCommand({ listshards: 1 }) --list all shard servers

{

"shards" : [

{

"_id" : "shard0000",

"host" : "localhost:20000"

},

{

"_id" : "shard0001",

"host" : "localhost:20001"

}

],

"ok" : 1

}
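
The listshards command has to be run against the admin database, and what it returns is simply the cluster metadata kept on the config servers. Assuming you are connected through mongos, the same list can also be read straight from the config database, for example:

> db.getSisterDB("config").shards.find() --the same shard list, read from the config metadata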

22.6.2 Viewing Sharding Information

> printShardingStatus() --view the sharding status

--- Sharding Status ---

sharding version: { "_id" : 1, "version" : 3 }

shards:

{ "_id" : "shard0000", "host" : "localhost:20000" }

{ "_id" : "shard0001", "host" : "localhost:20001" }

databases:

{ "_id" : "admin", "partitioned" : false, "primary" : "config" }

{ "_id" : "test", "partitioned" : true, "primary" : "shard0000" }

test.users chunks:

shard0000 1

{ "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : shard0000 { "t" : 1000, "i" : 0 }

>
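
Once a collection has many chunks, this helper truncates the chunk list (the output in 22.6.5 below even says "use verbose if you want to force print"). Assuming the helper accepts a verbose flag as its second argument, as the shell versions of this era do, you can force the full listing, or read the chunk metadata directly from the config database; a minimal sketch:

> printShardingStatus(db.getSisterDB("config"), true) --second argument forces the full chunk listing
> db.getSisterDB("config").chunks.find({ ns : "test.users" }).sort({ min : 1 }) --raw chunk metadata for one collection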

22.6.3 Checking Whether This Is a Sharded Cluster

> db.runCommand({ isdbgrid:1 })

{ "isdbgrid" : 1, "hostname" : "localhost", "ok" : 1 }

>
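
isdbgrid returns { isdbgrid : 1 } only when the command is sent to a mongos; a plain mongod does not implement it and the command fails. That makes it handy for scripts that should refuse to run against the wrong process. A minimal sketch (the variable name is only illustrative):

var res = db.getSisterDB("admin").runCommand({ isdbgrid : 1 });
if (res.ok == 1 && res.isdbgrid == 1) {
    print("connected to a mongos, sharding admin commands are available");
} else {
    print("not connected to a mongos");
}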

22.6.4 Sharding an Existing Collection

We have just sharded the collection test.users; next we will shard test.users_2, an existing, not-yet-sharded collection in the same database.

The collection's initial state is shown below; you can see that it has not been sharded:

> db.users_2.stats()

{

"ns" : "test.users_2",

"sharded" : false,

"primary" : "shard0000",

"ns" : "test.users_2",

"count" : 500000,

"size" : 48000016,

"avgObjSize" : 96.000032,

"storageSize" : 61875968,

"numExtents" : 11,

"nindexes" : 1,

"lastExtentSize" : 15001856,

"paddingFactor" : 1,

"flags" : 1,

"totalIndexSize" : 20807680,

"indexSizes" : {

"_id_" : 20807680

},

"ok" : 1

}

Now shard it:

> use admin

switched to db admin

> db.runCommand({ shardcollection: "test.users_2", key: { _id:1 }})

{ "collectionsharded" : "test.users_2", "ok" : 1 }

Checking the collection's stats again after sharding, we can see that it has now been sharded:

> use test

switched to db test

> db.users_2.stats()

{

"sharded" : true,

"ns" : "test.users_2",

"count" : 505462,

……

"shards" : {

"shard0000" : {

"ns" : "test.users_2",

……

"ok" : 1

},

"shard0001" : {

"ns" : "test.users_2",

……

"ok" : 1

}

},

"ok" : 1

}

>
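
The per-shard blocks in the stats output above give document counts for each shard; if you also want to see how the chunks themselves are spread, the config metadata answers that directly, for example:

> db.getSisterDB("config").chunks.find({ ns : "test.users_2", shard : "shard0000" }).count() --chunks of test.users_2 on shard0000
> db.getSisterDB("config").chunks.find({ ns : "test.users_2", shard : "shard0001" }).count() --chunks of test.users_2 on shard0001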

22.6.5 Adding a New Shard Server

We have just demonstrated sharding an existing collection; next we show how to add a new shard server.

Start a new shard server process:

[root@localhost ~]# mkdir /data/shard/s2

[root@localhost ~]# /Apps/mongo/bin/mongod --shardsvr --port 20002 --dbpath /data/shard/s2 --fork --logpath /data/shard/log/s2.log --directoryperdb

all output going to: /data/shard/log/s2.log

forked process: 6772
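
Before registering the new shard it is worth confirming that the mongod on port 20002 actually came up (check /data/shard/log/s2.log if it did not). From any mongo shell you can open a direct connection and ping it; a minimal sketch:

var conn = new Mongo("localhost:20002");                  // direct connection to the new mongod, not through mongos
printjson(conn.getDB("admin").runCommand({ ping : 1 }));  // { "ok" : 1 } means the server is up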

Register the new shard server:

[root@localhost ~]# /Apps/mongo/bin/mongo admin --port 40000

MongoDB shell version: 1.8.1

connecting to: 127.0.0.1:40000/admin

> db.runCommand({ addshard:"localhost:20002" })

{ "shardAdded" : "shard0002", "ok" : 1 }

> printShardingStatus()

--- Sharding Status ---

sharding version: { "_id" : 1, "version" : 3 }

shards:

{ "_id" : "shard0000", "host" : "localhost:20000" }

{ "_id" : "shard0001", "host" : "localhost:20001" }

{ "_id" : "shard0002", "host" : "localhost:20002" } --新增Shard Server

databases:

{ "_id" : "admin", "partitioned" : false, "primary" : "config" }

{ "_id" : "test", "partitioned" : true, "primary" : "shard0000" }

test.users chunks:

shard0002 2

shard0000 21

shard0001 21

too many chunksn to print, use verbose if you want to force print

test.users_2 chunks:

shard0001 46

shard0002 1

shard0000 45

too many chunksn to print, use verbose if you want to force print

Check the sharded collection's stats to verify the new shard server:

> use test

switched to db test

> db.users_2.stats()

{

"sharded" : true,

"ns" : "test.users_2",

……

"shard0002" : { --新的Shard Server已有数据

"ns" : "test.users_2",

"count" : 21848,

"size" : 2097408,

"avgObjSize" : 96,

"storageSize" : 2793472,

"numExtents" : 5,

"nindexes" : 1,

"lastExtentSize" : 2097152,

"paddingFactor" : 1,

"flags" : 1,

"totalIndexSize" : 1277952,

"indexSizes" : {

"_id_" : 1277952

},

"ok" : 1

}

},

"ok" : 1

}

>

As we can see, after the new shard server was added the data was automatically redistributed onto the new shard; MongoDB handles this internally on its own.
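
The component doing this in the background is the balancer, which in this version runs inside mongos and takes a distributed lock in the config database while it migrates chunks. Its state can be inspected from the config metadata; a minimal sketch:

> db.getSisterDB("config").settings.find({ _id : "balancer" }) --no document, or { "stopped" : false }, means balancing is enabled
> db.getSisterDB("config").locks.find({ _id : "balancer" }) --a non-zero "state" means the balancer lock is currently taken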

22.6.6 Removing a Shard Server

Sometimes, because hardware resources are limited, we have to reclaim capacity, so next we will retire the shard server we just brought online. The system first migrates the data on the shard being removed evenly onto the other shard servers, and only then takes the shard offline. We need to call db.runCommand({"removeshard": "localhost:20002"}); repeatedly to see how far the removal has progressed:

> use admin

switched to db admin

> db.runCommand({"removeshard" : "localhost:20002"});

{

"msg" : "draining started successfully",

"state" : "started",

"shard" : "shard0002",

"ok" : 1

}

> db.runCommand({"removeshard" : "localhost:20002"});

{

"msg" : "draining ongoing",

"state" : "ongoing",

"remaining" : {

"chunks" : NumberLong(44),

"dbs" : NumberLong(0)

},

"ok" : 1

}

……

> db.runCommand({"removeshard" : "localhost:20002"});

{

"msg" : "draining ongoing",

"state" : "ongoing",

"remaining" : {

"chunks" : NumberLong(1),

"dbs" : NumberLong(0)

},

"ok" : 1

}

> db.runCommand({"removeshard" : "localhost:20002"});

{

"msg" : "removeshard completed successfully",

"state" : "completed",

"shard" : "shard0002",

"ok" : 1

}

> db.runCommand({"removeshard" : "localhost:20002"});

{

"assertion" : "can't find shard for: localhost:20002",

"assertionCode" : 13129,

"errmsg" : "db assertion failure",

"ok" : 0

}

Once the removal is complete, calling db.runCommand({"removeshard": "localhost:20002"}); again makes the system return an error, telling us that there is no longer a shard server on port 20002, because it has already been removed.
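
Polling removeshard by hand works, but the calls are easy to wrap in a small loop that waits until draining finishes. A minimal sketch in the mongo shell (the 10-second sleep is arbitrary):

var admin = db.getSisterDB("admin");
var host = "localhost:20002";
var res = admin.runCommand({ removeshard : host });   // starts, or re-reports, the drain
while (res.ok == 1 && res.state != "completed") {
    printjson(res.remaining);                         // chunks and dbs still waiting to migrate
    sleep(10000);                                     // wait 10 seconds between checks
    res = admin.runCommand({ removeshard : host });
}
printjson(res);                                       // the final "removeshard completed successfully" document

Note that if the shard being removed is the primary shard for any database, the drain will not finish until that database is moved to another shard with the moveprimary command; in this example the "dbs" counter is already 0, so that is not an issue.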

Next, let's look at how the collection's data is now distributed:

> use test

switched to db test

> db.users_2.stats()

{

"sharded" : true,

"ns" : "test.users_2",

"count" : 500000,

"size" : 48000000,

"avgObjSize" : 96,

"storageSize" : 95203584,

"nindexes" : 1,

"nchunks" : 92,

"shards" : {

"shard0000" : {

"ns" : "test.users_2",

"count" : 248749,

"size" : 23879904,

"avgObjSize" : 96,

"storageSize" : 61875968,

"numExtents" : 11,

"nindexes" : 1,

"lastExtentSize" : 15001856,

"paddingFactor" : 1,

"flags" : 1,

"totalIndexSize" : 13033472,

"indexSizes" : {

"_id_" : 13033472

},

"ok" : 1

},

"shard0001" : {

"ns" : "test.users_2",

"count" : 251251,

"size" : 24120096,

"avgObjSize" : 96,

"storageSize" : 33327616,

"numExtents" : 8,

"nindexes" : 1,

"lastExtentSize" : 12079360,

"paddingFactor" : 1,

"flags" : 1,

"totalIndexSize" : 10469376,

"indexSizes" : {

"_id_" : 10469376

},

"ok" : 1

}

},

"ok" : 1

}

As you can see, the data has been evenly redistributed across the other two shard servers, with no major impact on the application.
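
Once the drain has completed and the shard no longer appears in the shard list, the mongod on port 20002 is no longer part of the cluster, so (assuming you really are retiring it) you can connect to it directly and shut it down; its data files under /data/shard/s2 can then be archived or deleted. A minimal sketch:

var conn = new Mongo("localhost:20002");   // connect to the drained mongod directly, not through mongos
conn.getDB("admin").shutdownServer();      // cleanly stops the process started in 22.6.5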

-------------------------------------------------------------------

Author of 《MongoDB管理与开发精要》 and 《Redis实战》

ChinaUnix.net expert http://cdhongwan.blog.chinaunix.net

@CD红丸 http://weibo.com/u/2446082491