
Troubleshooting notes: packaging Hadoop jobs as a jar and running them on the server

2014-10-29 16:50
Sometimes you need to run a MapReduce job locally on a given server. You can force local mode like this:

conf.set("fs.default.name", "local");
conf.set("mapred.job.tracker", "local");
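In context, the two settings above amount to a driver configuration like the following (a sketch; the class name is a placeholder, and it assumes hadoop-common on the classpath — on Hadoop 2.x the equivalent keys are fs.defaultFS=file:/// and mapreduce.framework.name=local):

```java
import org.apache.hadoop.conf.Configuration;

public class LocalModeDriver {
    public static Configuration localConf() {
        Configuration conf = new Configuration();
        // Hadoop 1.x-style keys, still honored via deprecation mapping:
        conf.set("fs.default.name", "local");     // read/write the local filesystem
        conf.set("mapred.job.tracker", "local");  // run in-process with the LocalJobRunner
        return conf;
    }
}
```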


###########################################################################################

Running the packaged jar on the server, I hit:

Exception in thread "main" java.io.IOException: No FileSystem for scheme: file
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2385)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2392)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
at com.yunsuan.gaoyuan.recom.CreateTfidfJob.getFileIndex(CreateTfidfJob.java:143)
at com.yunsuan.gaoyuan.recom.CreateTfidfJob.run(CreateTfidfJob.java:74)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at com.yunsuan.gaoyuan.recom.CreateTfidfJob.main(CreateTfidfJob.java:42)


Check the META-INF/services/org.apache.hadoop.fs.FileSystem file inside the generated jar.

Missing: org.apache.hadoop.fs.LocalFileSystem  # the class that handles the "file" scheme

Cause: when Maven builds the fat jar, the service files of the same name from different Hadoop artifacts overwrite each other, so only one FileSystem implementation stays registered.

Workaround: register the implementations explicitly in code:

config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
config.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());


Reference: http://blog.newitfarmer.com/big_data/big-data-platform/hadoop/13953/repost-no-filesystem-for-scheme-hdfsno-filesystem-for-scheme-file
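If the jar is built with the maven-shade-plugin, an often cleaner fix is to merge the META-INF/services files instead of letting one overwrite the other, via the ServicesResourceTransformer (a pom.xml fragment; version and the rest of the build section omitted):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <transformers>
      <!-- Concatenates META-INF/services files from all dependencies,
           so DistributedFileSystem and LocalFileSystem both stay registered. -->
      <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
    </transformers>
  </configuration>
</plugin>
```

With this, the fs.hdfs.impl / fs.file.impl overrides in code are no longer needed.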

########################################################################################################################

Running the jar on the server:
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Stopwatch.elapsedMillis()J
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:278)
at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:217)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at com.yunsuan.gaoyuan.recom.MySequenceFilesFromDirectoryJob.runMapReduce(MySequenceFilesFromDirectoryJob.java:245)
at com.yunsuan.gaoyuan.recom.MySequenceFilesFromDirectoryJob.run(MySequenceFilesFromDirectoryJob.java:115)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at com.yunsuan.gaoyuan.recom.CreateTfidfJob.run(CreateTfidfJob.java:85)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at com.yunsuan.gaoyuan.recom.CreateTfidfJob.main(CreateTfidfJob.java:38)


Cause: the wrong Guava version on the classpath — Stopwatch.elapsedMillis() was removed in later Guava releases, while this Hadoop version still calls it.

Fix: switch the dependency to guava-14.0.jar.
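In a Maven build this means pinning Guava explicitly so a newer transitive version does not win (version as per the note above):

```xml
<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>14.0</version>
</dependency>
```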

########################################################################################################################

Running the jar on the server:

java.sql.SQLException: Access denied for user 'root'@'host08' (using password: YES)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1084)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4232)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4164)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:926)
at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:4732)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1340)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2506)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2539)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2321)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:832)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:46)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
......


Connecting to the local MySQL with -h <hostname> (rather than localhost) and then a username/password fails, because MySQL matches grants by the exact 'user'@'host' pair.

Fix: create a user entry that allows password access from that host:

grant all on *.* to 'root'@'host08' identified by '123456';
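After granting, it can help to reload the grant tables and confirm the entry exists (a sketch; host and user names as in the error above):

```sql
FLUSH PRIVILEGES;

-- The pair ('root', 'host08') must appear for the connection to be accepted:
SELECT user, host FROM mysql.user WHERE user = 'root';
```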

########################################################################################################################

When writing results into MySQL from the MapReduce job:
java.io.IOException: wrong key class: class com.yunsuan.gaoyuan.comm.TblsWritable is not class org.apache.hadoop.io.IntWritable
at org.apache.hadoop.mapred.IFile$Writer.append(IFile.java:196)
at org.apache.hadoop.mapred.Task$CombineOutputCollector.collect(Task.java:1307)
at org.apache.hadoop.mapred.Task$NewCombinerRunner$OutputConverter.write(Task.java:1624)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
at com.yunsuan.gaoyuan.recom.ComputeSimilarityJob$ChangeToSimilarityMatrixReducer.reduce(ComputeSimilarityJob.java:302)
at com.yunsuan.gaoyuan.recom.ComputeSimilarityJob$ChangeToSimilarityMatrixReducer.reduce(ComputeSimilarityJob.java:1)
.....

Fix: drop the setCombinerClass call, i.e. remove this line:

ChangeToSimilarityMatrixJob.setCombinerClass(ChangeToSimilarityMatrixReducer.class);
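The underlying rule: a combiner runs between map and reduce, so its output key/value classes must match the map output classes (here IntWritable), not the job's final output (TblsWritable). A reducer that emits a different key type than it consumes therefore cannot double as a combiner. A sketch of the valid wiring (class names taken from the trace above; signatures illustrative):

```java
// Map output and final output use different key classes, so no combiner applies.
job.setMapOutputKeyClass(IntWritable.class);      // what the shuffle carries
job.setOutputKeyClass(TblsWritable.class);        // what the reducer emits
job.setReducerClass(ChangeToSimilarityMatrixReducer.class);
// job.setCombinerClass(...) is omitted: a combiner here would have to be
// (IntWritable, V) -> (IntWritable, V), which this reducer is not.
```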