
No log output in Eclipse when remotely connecting to a Hadoop cluster

2015-06-25 13:36
Also covered: the Eclipse plugin's "Run on Hadoop" not actually using the Hadoop cluster nodes
References (source: 炼数成金):
http://f.dataguru.cn/thread-250980-1-1.html
http://f.dataguru.cn/thread-249738-1-1.html

Three problems (problem 2 is one I added myself):

1. No run-time log output in the Eclipse console.

2. Running jobs on the remote Hadoop cluster from Eclipse. The job kept falling back to running locally, and it took me two days to get it working: make sure passwordless SSH login works between the local machine and the cluster's Master, and of course configure /etc/hostname correctly first.

Also pay attention to how the connection code is written. When the job really talks to the remote cluster, the console output looks like mine below (the remote master's IP is 192.168.2.35, my local machine is 192.168.2.51). The log shows the input being read from HDFS on the master at .35, which proves the connection succeeded. To verify further, temporarily stop Hadoop on the remote master (stop-all.sh); you will then get the error below:

15/06/27 12:27:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/27 12:27:16 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/06/27 12:27:16 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
Exception in thread "main" java.net.ConnectException: Call From One/192.168.2.51 to Master:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
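A quick way to test the HDFS connection by itself, before submitting any job, is a few lines of client code. This is my own minimal sketch (the class name HdfsPing and the listed path are assumptions, not from the cluster); it expects the Hadoop 2.x client jars on the Eclipse build path:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsPing {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.2.35:9000/");
        // Open the remote file system; this fails fast with the same
        // ConnectException as in the log above if the NameNode is down.
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.2.35:9000/"), conf);
        for (FileStatus s : fs.listStatus(new Path("/user/hadoop/input"))) {
            System.out.println(s.getPath());
        }
        fs.close();
    }
}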


If you restart Hadoop, delete the generated output directory on HDFS, and run from Eclipse again, you get the result below, showing the job really is working against the remote cluster's HDFS:

15/06/27 12:34:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/27 12:34:18 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/06/27 12:34:18 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/06/27 12:34:18 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
15/06/27 12:34:19 INFO input.FileInputFormat: Total input paths to process : 2
15/06/27 12:34:19 INFO mapreduce.JobSubmitter: number of splits:15
15/06/27 12:34:19 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
15/06/27 12:34:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1331725738_0001
15/06/27 12:34:19 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/06/27 12:34:19 INFO mapreduce.Job: Running job: job_local1331725738_0001
15/06/27 12:34:19 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/06/27 12:34:19 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15/06/27 12:34:19 INFO mapred.LocalJobRunner: Waiting for map tasks
15/06/27 12:34:19 INFO mapred.LocalJobRunner: Starting task: attempt_local1331725738_0001_m_000000_0
15/06/27 12:34:19 INFO mapred.Task: Using ResourceCalculatorProcessTree : []
15/06/27 12:34:19 INFO mapred.MapTask: Processing split: hdfs://192.168.2.35:9000/user/hadoop/input/1.txt:0+134217728
15/06/27 12:34:19 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/06/27 12:34:19 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/06/27 12:34:19 INFO mapred.MapTask: soft limit at 83886080
15/06/27 12:34:19 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/06/27 12:34:19 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/06/27 12:34:19 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/06/27 12:34:20 INFO mapreduce.Job: Job job_local1331725738_0001 running in uber mode : false
15/06/27 12:34:20 INFO mapreduce.Job:  map 0% reduce 0%
15/06/27 12:34:21 INFO mapred.MapTask: Spilling map output
15/06/27 12:34:21 INFO mapred.MapTask: bufstart = 0; bufend = 32177692; bufvoid = 104857600
15/06/27 12:34:21 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 13287304(53149216); length = 12927093/6553600
15/06/27 12:34:21 INFO mapred.MapTask: (EQUATOR) 42663446 kvi 10665856(42663424)
15/06/27 12:34:25 INFO mapred.LocalJobRunner: map > map
15/06/27 12:34:26 INFO mapred.MapTask: Finished spill 0
15/06/27 12:34:26 INFO mapred.MapTask: (RESET) equator 42663446 kv 10665856(42663424) kvi 8044428(32177712)
15/06/27 12:34:26 INFO mapreduce.Job:  map 1% reduce 0%
15/06/27 12:34:27 INFO mapred.MapTask: Spilling map output
15/06/27 12:34:27 INFO mapred.MapTask: bufstart = 42663446; bufend = 74841169; bufvoid = 104857600
15/06/27 12:34:27 INFO mapred.MapTask: kvstart = 10665856(42663424); kvend = 23953172(95812688); length = 12927085/6553600
15/06/27 12:34:27 INFO mapred.MapTask: (EQUATOR) 85326920 kvi 21331724(85326896)
15/06/27 12:34:28 INFO mapred.LocalJobRunner: map > map
15/06/27 12:34:31 INFO mapred.MapTask: Finished spill 1
15/06/27 12:34:31 INFO mapred.MapTask: (RESET) equator 85326920 kv 21331724(85326896) kvi 18710300(74841200)
15/06/27 12:34:31 INFO mapred.LocalJobRunner: map > map


WordCount source code; note how the remote connection is set up:


package test;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Prefix relative paths such as "input" with hdfs://192.168.2.35:9000/.
        conf.set("fs.defaultFS", "hdfs://192.168.2.35:9000/");
        // Other settings tried along the way (the JobTracker ip:port;
        // "Master" can be mapped to an IP in /etc/hosts):
        // conf.set("mapred.job.tracker", "192.168.2.35:9001");
        // conf.set("hadoop.job.user", "hadoop");

        String[] ars = new String[] { "input", "out" };
        String[] otherArgs = new GenericOptionsParser(conf, ars).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "wordcount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
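One convenience worth adding to the driver: instead of deleting the HDFS output directory by hand before every re-run, remove it programmatically. A minimal sketch (my own addition, not part of the original listing; it needs an extra import of org.apache.hadoop.fs.FileSystem), placed before FileOutputFormat.setOutputPath(...):

// Delete the old output directory on HDFS, if any, so the job can re-run.
FileSystem fs = FileSystem.get(conf);
Path out = new Path(otherArgs[1]);
if (fs.exists(out)) {
    fs.delete(out, true); // true = delete recursively
}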

3. When running a cluster job from Eclipse (connected remotely to the server's Hadoop through the HDFS plugin), jps on the Hadoop nodes shows no job processes while the program executes.
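This symptom matches the log above: the job id job_local1331725738_0001 and the mapred.LocalJobRunner lines mean the MapReduce computation ran inside the local JVM, with only the HDFS reads and writes going to the cluster, so jps on the nodes shows nothing new. To actually submit the job to a Hadoop 2.x cluster, configuration along the following lines is commonly needed in the driver's main(). This is a hedged sketch: the property names are standard Hadoop 2.x keys, but the hostname, port, and jar path are assumptions about this particular cluster.

conf.set("fs.defaultFS", "hdfs://192.168.2.35:9000/");
conf.set("mapreduce.framework.name", "yarn");      // submit to YARN instead of the LocalJobRunner
conf.set("yarn.resourcemanager.hostname", "192.168.2.35");
// The client must also ship the job jar (note the "No job jar file set"
// warning in the log above), e.g.:
// job.setJar("/home/hadoop/workspace/WordCount/wordcount.jar");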

Solution to the first problem. When running under Eclipse, log4j warns:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Just copy log4j.properties into the project's bin directory, and check the permissions of the copy:
cp /usr/hadoop2.5/etc/hadoop/log4j.properties /home/hadoop/workspace/WordCount/bin/
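If you prefer not to copy the file, log4j can also be configured programmatically. A minimal sketch of my own, assuming log4j 1.2 (the version Hadoop 2.x bundles) is on the classpath; call it at the very top of main(), before any Hadoop class creates a logger:

import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// ... first lines of main():
BasicConfigurator.configure();               // console appender with a default layout
Logger.getRootLogger().setLevel(Level.INFO); // show the INFO-level job output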

The second problem: the fix is described in item 2 above (passwordless SSH to the Master plus the fs.defaultFS setting in the driver code).