HBase concurrency problem: "too many open files"
2018-03-16 00:00
2016-06-01 16:05:22,776 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Too many open files
2016-06-01 16:05:22,776 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_3790131629645188816_18192
2016-06-01 16:13:01,966 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_-299035636445663861_7843 file=/hbase/SendReport/83908b7af3d5e3529e61b870a16f02dc/data/17703aa901934b39bd3b2e2d18c671b4.9a84770c805c78d2ff19ceff6fecb972
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
at java.io.DataInputStream.readBoolean(DataInputStream.java:242)
at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:116)
at org.apache.hadoop.hbase.io.Reference.read(Reference.java:149)
at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:216)
at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:282)
at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2510)
at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:449)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3228)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3176)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:331)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:107)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Cause and fix: on Linux, the default per-process limit on open files is usually 1024. Running `ulimit -n 65535` raises the limit immediately, but the change is lost after a reboot. To make it persistent, use one of the following three approaches:
1. Add the line `ulimit -SHn 65535` to /etc/rc.local
2. Add the line `ulimit -SHn 65535` to /etc/profile
3. Append the following two lines to /etc/security/limits.conf:
* soft nofile 65535
* hard nofile 65535
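A minimal shell sketch of diagnosing and applying the session-level fix described above. The `pgrep -f HRegionServer` lookup is an assumption about how the RegionServer process is named on your host; adjust it to match your deployment.

```shell
# Check the current soft and hard open-file limits for this shell session.
ulimit -Sn   # soft limit (often 1024 by default)
ulimit -Hn   # hard limit

# Raise both limits for the current session only. This takes effect
# immediately but does not survive a reboot; it may also fail for a
# non-root user if the hard limit is below 65535.
ulimit -SHn 65535 2>/dev/null || echo "could not raise limit (need root, or use limits.conf)"

# Count the file descriptors actually held by the RegionServer, to
# confirm the process is approaching its limit. The process-name
# pattern below is an assumption.
RS_PID=$(pgrep -f HRegionServer | head -n 1)
[ -n "$RS_PID" ] && ls /proc/"$RS_PID"/fd | wc -l
```

Note that limits set in /etc/security/limits.conf only apply to new login sessions, so the HBase daemons must be restarted (from a fresh session) before the new `nofile` values take effect.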