
Root-cause analysis: a RegionServer session timeout and offlining caused by a long GC pause

2017-10-30 14:17
I checked the RegionServer log:

2017-10-30 03:18:02,718 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Connection from 172.16.4.36 port: 38016 with unknown version info
2017-10-30 03:20:32,334 WARN org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception  for block BP-1954737973-172.16.4.81-1498561284109:blk_1237675060_163934648
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2274)
at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:235)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:1076)
2017-10-30 03:20:32,335 INFO org.apache.hadoop.hbase.ScheduledChore: Chore: MovedRegionsCleaner for region my-hbase-srv3,60020,1506017784241 missed its start time
2017-10-30 03:20:32,335 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-1954737973-172.16.4.81-1498561284109:blk_1237675060_163934648 in pipeline
DatanodeInfoWithStorage[172.16.4.94:50010,DS-153f2b54-c56c-461c-8983-e5b79cf7a248,DISK],
DatanodeInfoWithStorage[172.16.4.69:50010,DS-d07aff02-28cf-4467-b089-5d1209614520,DISK],
DatanodeInfoWithStorage[172.16.4.78:50010,DS-8a1665ae-7475-4a76-ae86-b61f5a765e98,DISK]:
bad datanode DatanodeInfoWithStorage[172.16.4.94:50010,DS-153f2b54-c56c-461c-8983-e5b79cf7a248,DISK]
2017-10-30 03:20:32,335 INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have not heard from server in 168668ms for sessionid 0x25e189c7145bef4, closing socket connection and attempting reconnect
2017-10-30 03:20:32,335 WARN org.apache.hadoop.hbase.util.Sleeper: We slept 150703ms instead of 3000ms, this is likely due to a long garbage collecting pause and it's usually bad, see http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
2017-10-30 03:20:32,334 INFO org.apache.hadoop.hbase.ScheduledChore: Chore: my-hbase-srv3,60020,1506017784241-MemstoreFlusherChore missed its start time
2017-10-30 03:20:32,334 INFO org.apache.hadoop.hbase.ScheduledChore: Chore: CompactionChecker missed its start time
2017-10-30 03:20:32,334 INFO org.apache.hadoop.hbase.regionserver.HRegion: Flushing 1/1 column families, memstore=5.97 KB
2017-10-30 03:20:32,334 INFO org.apache.hadoop.hbase.regionserver.HRegion: Flushing 1/1 column families, memstore=1.70 MB
2017-10-30 03:20:32,334 INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have not heard from server in 153223ms for sessionid 0x35e189c9b1ec599, closing socket connection and attempting reconnect
2017-10-30 03:20:32,345 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: my-hbase-srv3,60020,1506017784241-MemstoreFlusherChore requesting flush of regionInfoKey,1469435548521.6b7e508e911c49cfbd64cf1baa4cadc3. because cfInfo has an old edit so flush to free WALs after random delay 125058ms
2017-10-30 03:20:32,345 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: my-hbase-srv3,60020,1506017784241-MemstoreFlusherChore requesting flush of regionInfoKey,1502792046466.9fcb0c6de90135cf5d8791722410e6af. because cfInfo has an old edit so flush to free WALs after random delay 188675ms
2017-10-30 03:20:32,345 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: my-hbase-srv3,60020,1506017784241-MemstoreFlusherChore requesting flush of regionInfoKey,1478593853887.0c94226b4cd94058b58daab02b7e184e. because Info has an old edit so flush to free WALs after random delay 20466ms
2017-10-30 03:20:32,345 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: my-hbase-srv3,60020,1506017784241-MemstoreFlusherChore requesting flush of regionInfoKey,1499557430610.4ac8c97fadd652f676a803688d2ff07e. because cfInfo has an old edit so flush to free WALs after random delay 241441ms
2017-10-30 03:20:32,344 INFO org.apache.hadoop.hbase.ScheduledChore: Chore: my-hbase-srv3,60020,1506017784241-HeapMemoryTunerChore missed its start time
2017-10-30 03:20:32,344 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Auth successful for root (auth:SIMPLE)
2017-10-30 03:20:32,346 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Connection from 172.16.4.114 port: 59124 with version info: version: "0.98.17-hadoop2" url: "git://aspire/home/apurtell/src/hbase" revision: "d5f8300c082a75ce8edbbe08b66f077e7d663a4a" user: "apurtell" date: "Fri Jan 15 22:49:31 PST 2016" src_checksum: "6e40b5bc9a3782b583c36af66806049d"
2017-10-30 03:20:32,344 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Auth successful for root (auth:SIMPLE)
2017-10-30 03:20:32,344 WARN org.apache.hadoop.hbase.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 148967ms
GC pool 'ParNew' had collection(s): count=1 time=99ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=2 time=149192ms


What follows in the log is a series of related errors caused by the ones above.

Here we can see that the GC pause was too long (153 s > 60 s), which made the RegionServer's ZooKeeper session time out, so the RegionServer was considered offline.
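
For context, the 60 s mentioned above is the HBase-side ZooKeeper session timeout, controlled by zookeeper.session.timeout. Below is a minimal sketch of where that knob lives, assuming it was set to 60000 ms on this cluster (the stock HBase default is larger, and the effective value is also bounded by the ZooKeeper server's tickTime limits):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SessionTimeoutSketch {
    public static void main(String[] args) {
        // Loads hbase-site.xml / hbase-default.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        // Equivalent to setting the property in hbase-site.xml (60 s assumed here).
        conf.setInt("zookeeper.session.timeout", 60000);
        System.out.println("zookeeper.session.timeout = "
                + conf.getInt("zookeeper.session.timeout", -1) + " ms");
        // A GC pause longer than this value (here ~153 s) stops the RegionServer's
        // heartbeats, its ephemeral znode expires, and the master marks it dead.
    }
}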

The most likely cause is that a job was writing a large amount of data at the time: lots of data poured into the MemStores, memory usage grew until it reached a threshold, the data was flushed to HFiles, and GC then ran to reclaim the memory.

Below are the conditions that trigger a MemStore flush to HFile:

1. When a MemStore reaches hbase.hregion.memstore.flush.size (default 128 MB), a flush of that MemStore is triggered. This parameter is the size threshold for a single MemStore. Writes are not blocked in this case. (The configuration knobs for items 1-5 are sketched in code after this list.)

2. When the total MemStore size of a region reaches hbase.hregion.memstore.block.multiplier * hbase.hregion.memstore.flush.size (default 2 * 128 MB = 256 MB), writes to that region are blocked and its MemStores are force-flushed to HFiles. This flush is only triggered when a huge record is put just as a MemStore is about to fill its 128 MB; writes are blocked and can only resume once the forced flush succeeds.

3. A RegionServer hosts many regions and therefore many MemStores, so even if no single region exceeds its threshold, the RegionServer as a whole may already be using a lot of memory. Two more parameters control flushing in that case. hbase.regionserver.global.memstore.upperLimit (default 0.4): when all MemStores on the RegionServer together exceed 40% of the heap (the heap size is set via HBASE_HEAPSIZE in hbase-env.sh, default 1 GB, 4 GB in our setup), all writes are blocked and every MemStore is flushed to HFile. hbase.regionserver.global.memstore.lowerLimit (default 0.35): when all MemStores together exceed 35% of the heap, some of the larger MemStores are selected, their writes are blocked and they are flushed, which reduces the impact of blocking all writes at once.

4. When the HLog reaches its maximum size (hbase.regionserver.maxlogs * hbase.regionserver.hlog.blocksize, default 32 * 64 MB = 2 GB), MemStore flushes are also triggered so that the edits are persisted to HFiles and recovery does not take too long if the RegionServer crashes.

5. MemStores are also flushed periodically: hbase.regionserver.optionalcacheflushinterval (default 3600000 ms, i.e. one hour) ensures MemStore data is not left unpersisted to HFile for too long. To avoid all MemStores flushing at the same moment, the periodic flush gets a random delay of roughly 20000 ms.

6. A flush can be triggered manually by calling flush in the hbase shell, against a specific table or a specific region:

hbase(main):010:0> help 'flush'
Flush all regions in passed table or pass a region row to
flush an individual region.  For example:

  hbase> flush 'TABLENAME'
  hbase> flush 'REGIONNAME'

7. Manual flush via the API: the only related method I found is setMemStoreFlushSize, which specifies the size at which a memstore is flushed to a file on HDFS (default 64 MB); I am not sure whether it can directly trigger a MemStore flush. (See the sketch below.)
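
To make the parameters from items 1-5 concrete, here is a minimal sketch that sets them programmatically with the default values quoted above (illustration only, not tuning advice; in practice they normally live in hbase-site.xml):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemStoreFlushConfigSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // 1. Per-MemStore flush threshold (128 MB); writes are not blocked.
        conf.setLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
        // 2. Per-region blocking threshold = multiplier * flush.size (2 * 128 MB).
        conf.setInt("hbase.hregion.memstore.block.multiplier", 2);
        // 3. Global limits as a fraction of the RegionServer heap.
        conf.setFloat("hbase.regionserver.global.memstore.upperLimit", 0.4f);
        conf.setFloat("hbase.regionserver.global.memstore.lowerLimit", 0.35f);
        // 4. WAL-driven flush: max number of HLog files and their block size.
        conf.setInt("hbase.regionserver.maxlogs", 32);
        conf.setLong("hbase.regionserver.hlog.blocksize", 64L * 1024 * 1024);
        // 5. Periodic flush interval (1 hour).
        conf.setLong("hbase.regionserver.optionalcacheflushinterval", 3600000L);
    }
}

Note that raising the per-MemStore or global limits means fewer flushes but larger heap spikes, which is exactly the kind of GC pressure seen in the incident above.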
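
On item 7: as far as I can tell, setMemStoreFlushSize only records a per-table flush threshold in the table descriptor and does not flush anything by itself; the 0.98 client does, however, expose a direct flush call on HBaseAdmin. A minimal sketch, with 'my_table' as a made-up table name:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class ManualFlushSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            // Programmatic equivalent of `flush 'TABLENAME'` in the shell;
            // a region name can be passed instead of a table name.
            admin.flush("my_table");

            // setMemStoreFlushSize, by contrast, only stores a per-table threshold
            // in the descriptor; applying it requires a table modify, and it never
            // triggers a flush on its own.
            HTableDescriptor desc = admin.getTableDescriptor(TableName.valueOf("my_table"));
            desc.setMemStoreFlushSize(128L * 1024 * 1024);
        } finally {
            admin.close();
        }
    }
}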

 

Conclusion:

From the CM monitoring charts for this RegionServer, a large burst of data loading (ultimately caused by uneven region distribution) made the HBase heap usage spike within a short time.

[RegionServer heap usage chart: unfortunately the image is lost.]

Judging from this, the total MemStore usage most likely exceeded hbase.regionserver.global.memstore.upperLimit (default 0.4) times the RegionServer heap size, i.e. roughly 0.4 * 4 GB = 1.6 GB in our setup; flushing and garbage collection then kicked in, and the long GC pause finally caused the ZooKeeper session to time out.

Recommendations

1. Pre-split regions when creating tables so they are spread across the nodes as evenly as possible (see the sketch after this list). In my case the uneven region distribution was caused by scaling the cluster out after a migration; I still need to look into how to migrate regions onto the other nodes.

2. Finally, cluster monitoring should be added so that we get an alert, or an automatic restart, when a RegionServer goes offline.
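
A minimal sketch of recommendation 1: pre-splitting a table at creation time so its regions are spread over the RegionServers from the start (the table name, column family, and split keys below are made-up examples):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitTableSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("my_table"));
            desc.addFamily(new HColumnDescriptor("cfInfo"));
            // Made-up split points: the table starts with 4 regions instead of 1,
            // so the master can assign them to different RegionServers from day one.
            byte[][] splitKeys = {
                Bytes.toBytes("4"),
                Bytes.toBytes("8"),
                Bytes.toBytes("c"),
            };
            admin.createTable(desc, splitKeys);
            // For an already unbalanced table, regions can later be moved by hand
            // or the balancer can be run (shell: `balancer`).
        } finally {
            admin.close();
        }
    }
}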