HDFS file read/write bug: java.io.IOException: Filesystem closed
2015-09-21 11:48
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:707)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1448)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1390)
at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:394)
at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:390)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:390)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:334)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:829)
at com.run.storage.hdfs.RunHdfsImpl.uploadToHdfs(Unknown Source)
FileSystem.get(getConf()) may return a cached instance rather than creating a new one on every call. This means that if each thread independently gets a FileSystem, uses it, and then closes it, the threads end up sharing (and closing) the same underlying object, and the other threads subsequently fail with the exception above.
/**
 * Returns the FileSystem for this URI's scheme and authority. The scheme
 * of the URI determines a configuration property name,
 * <tt>fs.<i>scheme</i>.class</tt> whose value names the FileSystem class.
 * The entire URI is passed to the FileSystem instance's initialize method.
 */
public static FileSystem get(URI uri, Configuration conf) throws IOException {
  String scheme = uri.getScheme();
  String authority = uri.getAuthority();

  if (scheme == null && authority == null) {     // use default FS
    return get(conf);
  }

  if (scheme != null && authority == null) {     // no authority
    URI defaultUri = getDefaultUri(conf);
    if (scheme.equals(defaultUri.getScheme())    // if scheme matches default
        && defaultUri.getAuthority() != null) {  // and default has authority
      return get(defaultUri, conf);              // return default
    }
  }

  String disableCacheName = String.format("fs.%s.impl.disable.cache", scheme);
  if (conf.getBoolean(disableCacheName, false)) {
    return createFileSystem(uri, conf);
  }

  return CACHE.get(uri, conf);
}
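The failure mode can be reproduced without Hadoop at all. The sketch below is a minimal stand-in for `FileSystem`'s `CACHE` (the class names `FakeFileSystem` and `CacheDemo` are illustrative, not Hadoop's): two callers that believe they each "got" a file system actually hold the same cached object, so when one closes it the other's next operation fails exactly like `DFSClient.checkOpen` does.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for a cached FileSystem handle.
class FakeFileSystem {
    private boolean open = true;

    void checkOpen() throws IOException {
        // Mirrors the check DFSClient.checkOpen performs before each operation.
        if (!open) throw new IOException("Filesystem closed");
    }

    void create(String path) throws IOException {
        checkOpen();
        // ... a real implementation would write to `path` here
    }

    void close() { open = false; }
}

public class CacheDemo {
    // Like FileSystem.CACHE: one shared instance per URI key.
    private static final Map<String, FakeFileSystem> CACHE = new HashMap<>();

    static synchronized FakeFileSystem get(String uri) {
        return CACHE.computeIfAbsent(uri, k -> new FakeFileSystem());
    }

    public static void main(String[] args) {
        FakeFileSystem fsA = get("hdfs://nn:8020"); // "thread A"
        FakeFileSystem fsB = get("hdfs://nn:8020"); // "thread B": SAME object
        fsA.close();                                // A finishes and closes "its" fs
        try {
            fsB.create("/tmp/x");                   // B now fails
        } catch (IOException e) {
            System.out.println(e.getMessage());     // prints: Filesystem closed
        }
    }
}
```

Because `get` keys the cache only by URI (in real Hadoop, by scheme, authority, and user), every caller with the same key shares one instance, and `close()` on any of them invalidates it for all.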
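One common remedy follows directly from the `fs.%s.impl.disable.cache` check visible in the source of `get` itself: disable the cache for the `hdfs` scheme, so that every `FileSystem.get` returns a fresh instance that is safe for that caller to close. A hedged sketch of the setting (in `core-site.xml`, or set programmatically on the `Configuration` before calling `get`):

```xml
<!-- core-site.xml: make FileSystem.get return a new, independently
     closeable instance for hdfs:// URIs instead of a cached one. -->
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>
```

Alternatives: have threads that must manage their own handle call `FileSystem.newInstance(uri, conf)`, which bypasses the cache, or simply never call `close()` on the shared cached instance and let Hadoop's shutdown hook clean it up when the JVM exits.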