
Setting Up an Eclipse Development Environment for Hadoop 1.2.1



Installing the hadoop-eclipse plugin

The hadoop-eclipse-1.2.1 plugin normally has to be compiled by hand, so to save the trouble I simply downloaded a prebuilt jar from the web; a copy is available in the attached resources if you need it. After downloading the jar, drop it into the eclipse/plugins directory and restart Eclipse. If you would rather compile the plugin yourself, see the references.

If all goes well, a blue elephant logo will appear in the upper-right corner of Eclipse; click it. A Map/Reduce Locations tab then appears in the pane at the bottom. Open that tab, right-click inside it, and choose New Hadoop Location.

A dialog box should pop up, asking you to fill in the following fields:

Location name

Map/Reduce Master

DFS Master

User name

Location name is just the name of this connection and can be anything you like. Map/Reduce Master is the host that executes the MapReduce jobs, given as a host address and port. DFS Master is the host of the Distributed File System, i.e. the NameNode, also given as a host address and port. User name is the user to connect to Hadoop as.
Following the design from the previous article, hadoop-1.2.1 cluster setup, the configuration here reuses that article's settings, so our values are as follows:
Parameter            Value                    Notes
Location name        hadoop
Map/Reduce Master    Host: 192.168.132.82     IP address of the NameNode (master host)
Map/Reduce Master    Port: 9001               MapReduce port; see your own mapred-site.xml
DFS Master           Port: 9000               DFS port; see your own core-site.xml
User name            hadoop
Next, switch to the Advanced parameters tab; the parameters you need to change are:

Parameter            Value                         Notes
fs.default.name      hdfs://192.168.132.82:9000    see core-site.xml
hadoop.tmp.dir       /home/hadoop/hadoop/tmp       see core-site.xml
mapred.job.tracker   hdfs://192.168.132.82:9001    see mapred-site.xml
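Before moving on, these values can also be checked from plain Java. The following is only an illustrative sketch of mine (the class name ConnectionCheck is made up, not from this article): it sets the same parameters entered in the dialog and lists the HDFS root, which should succeed if the host, port, and user name are correct.
[code]import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConnectionCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Mirror the plugin's Advanced parameters tab.
        conf.set("fs.default.name", "hdfs://192.168.132.82:9000");
        conf.set("mapred.job.tracker", "hdfs://192.168.132.82:9001");

        // Connect as the "hadoop" user, matching the User name field.
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.132.82:9000"),
                                       conf, "hadoop");
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}

[/code]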
Confirm, and the HDFS directory tree appears on the left side of Eclipse. For now you can only browse it; to add or modify files you also need to log in to the HDFS master by hand and relax the permissions (making the entire tree world-writable is fine on a test cluster, but never do this in production):
[code]./bin/hadoop fs -chmod -R 777 /

[/code]
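To confirm that the permission change took effect, here is a small, hypothetical write test (the file name /Data/perm_test.txt is my own choice); it should now succeed instead of failing with an AccessControlException:
[code]import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PermissionTest {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.132.82:9000"),
                                       new Configuration(), "hadoop");
        // Create a small marker file; this requires write permission.
        FSDataOutputStream out = fs.create(new Path("/Data/perm_test.txt"));
        out.writeBytes("hello hdfs\n");
        out.close();
        fs.close();
    }
}

[/code]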
With these steps done, only the development environment itself remains to be configured.


Configuring the development environment

Let's try building a Hadoop program or two. Choose File -> New -> Project -> Map/Reduce Project, or create the project directly through the Project Wizard, and name it Hadoop Test.
Our first program is wordcount; its source ships with Hadoop, under src/examples/org/apache/hadoop/examples in the installation directory.
[code]
package org.apache.hadoop.examples;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    // Tokenize each input line and emit (word, 1) for every token.
    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    // Sum all counts for a word and emit (word, total).
    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

[/code]

The full code is pasted above for convenience. Once it is ready, click Run to build and launch it. Before execution a small dialog pops up; select Run on Hadoop and confirm.

After a short wait, the program compiles and runs, and then prints a usage message:
[code]Usage: wordcount <in> <out>

[/code]

The WordCount example needs an input file and a directory for its output, so we still have to supply arguments to the program. Under the Run menu, choose Run Configurations; on the Arguments tab, enter the input and output paths in the Program arguments field.
The text we want to count is stored on HDFS at /Data/words:
[code]Mary had a little lamb
its fleece very white as snow
and everywhere that Mary went
the lamb was sure to go

[/code]
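If the file is not on HDFS yet, bin/hadoop fs -put will upload it from the shell; as a sketch, the same can be done from Java (the local path /tmp/words below is an assumption of mine):
[code]import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadWords {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.132.82:9000"),
                                       new Configuration(), "hadoop");
        // Copy the local sample file to /Data/words on HDFS.
        fs.copyFromLocalFile(new Path("/tmp/words"), new Path("/Data/words"));
        fs.close();
    }
}

[/code]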
So the arguments we set are:
[code]hdfs://192.168.132.82:9000/Data/words hdfs://192.168.132.82:9000/out

[/code]
Apply the arguments and run.


Running the job

Run the WordCount example again, and this time Hadoop starts up properly. (Note the job_local ID in the log below: the Eclipse plugin runs the job through the LocalJobRunner inside the Eclipse JVM, while reading from and writing to the remote HDFS.)
[code]14/05/29 15:13:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/05/29 15:13:59 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/05/29 15:13:59 INFO input.FileInputFormat: Total input paths to process : 1
14/05/29 15:13:59 WARN snappy.LoadSnappy: Snappy native library not loaded
14/05/29 15:13:59 INFO mapred.JobClient: Running job: job_local889277352_0001
14/05/29 15:13:59 INFO mapred.LocalJobRunner: Waiting for map tasks
14/05/29 15:13:59 INFO mapred.LocalJobRunner: Starting task: attempt_local889277352_0001_m_000000_0
14/05/29 15:13:59 INFO mapred.Task:  Using ResourceCalculatorPlugin : null
14/05/29 15:13:59 INFO mapred.MapTask: Processing split: hdfs://192.168.145.100:8020/Data/words:0+109
14/05/29 15:13:59 INFO mapred.MapTask: io.sort.mb = 100
14/05/29 15:13:59 INFO mapred.MapTask: data buffer = 79691776/99614720
14/05/29 15:13:59 INFO mapred.MapTask: record buffer = 262144/327680
14/05/29 15:13:59 INFO mapred.MapTask: Starting flush of map output
14/05/29 15:13:59 INFO mapred.MapTask: Finished spill 0
14/05/29 15:13:59 INFO mapred.Task: Task:attempt_local889277352_0001_m_000000_0 is done. And is in the process of commiting
14/05/29 15:13:59 INFO mapred.LocalJobRunner:
14/05/29 15:13:59 INFO mapred.Task: Task 'attempt_local889277352_0001_m_000000_0' done.
14/05/29 15:13:59 INFO mapred.LocalJobRunner: Finishing task: attempt_local889277352_0001_m_000000_0
14/05/29 15:13:59 INFO mapred.LocalJobRunner: Map task executor complete.
14/05/29 15:13:59 INFO mapred.Task:  Using ResourceCalculatorPlugin : null
14/05/29 15:13:59 INFO mapred.LocalJobRunner:
14/05/29 15:13:59 INFO mapred.Merger: Merging 1 sorted segments
14/05/29 15:13:59 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 219 bytes
14/05/29 15:13:59 INFO mapred.LocalJobRunner:
14/05/29 15:14:00 INFO mapred.Task: Task:attempt_local889277352_0001_r_000000_0 is done. And is in the process of commiting
14/05/29 15:14:00 INFO mapred.LocalJobRunner:
14/05/29 15:14:00 INFO mapred.Task: Task attempt_local889277352_0001_r_000000_0 is allowed to commit now
14/05/29 15:14:00 INFO output.FileOutputCommitter: Saved output of task 'attempt_local889277352_0001_r_000000_0' to hdfs://192.168.145.100:8020/out
14/05/29 15:14:00 INFO mapred.LocalJobRunner: reduce > reduce
14/05/29 15:14:00 INFO mapred.Task: Task 'attempt_local889277352_0001_r_000000_0' done.
14/05/29 15:14:00 INFO mapred.JobClient:  map 100% reduce 100%
14/05/29 15:14:00 INFO mapred.JobClient: Job complete: job_local889277352_0001
14/05/29 15:14:00 INFO mapred.JobClient: Counters: 19
14/05/29 15:14:00 INFO mapred.JobClient:   Map-Reduce Framework
14/05/29 15:14:00 INFO mapred.JobClient:     Spilled Records=40
14/05/29 15:14:00 INFO mapred.JobClient:     Map output materialized bytes=223
14/05/29 15:14:00 INFO mapred.JobClient:     Reduce input records=20
14/05/29 15:14:00 INFO mapred.JobClient:     Map input records=4
14/05/29 15:14:00 INFO mapred.JobClient:     SPLIT_RAW_BYTES=103
14/05/29 15:14:00 INFO mapred.JobClient:     Map output bytes=195
14/05/29 15:14:00 INFO mapred.JobClient:     Reduce shuffle bytes=0
14/05/29 15:14:00 INFO mapred.JobClient:     Reduce input groups=20
14/05/29 15:14:00 INFO mapred.JobClient:     Combine output records=20
14/05/29 15:14:00 INFO mapred.JobClient:     Reduce output records=20
14/05/29 15:14:00 INFO mapred.JobClient:     Map output records=22
14/05/29 15:14:00 INFO mapred.JobClient:     Combine input records=22
14/05/29 15:14:00 INFO mapred.JobClient:     Total committed heap usage (bytes)=290455552
14/05/29 15:14:00 INFO mapred.JobClient:   File Input Format Counters
14/05/29 15:14:00 INFO mapred.JobClient:     Bytes Read=109
14/05/29 15:14:00 INFO mapred.JobClient:   FileSystemCounters
14/05/29 15:14:00 INFO mapred.JobClient:     HDFS_BYTES_READ=218
14/05/29 15:14:00 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=137726
14/05/29 15:14:00 INFO mapred.JobClient:     FILE_BYTES_READ=557
14/05/29 15:14:00 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=137
14/05/29 15:14:00 INFO mapred.JobClient:   File Output Format Counters
14/05/29 15:14:00 INFO mapred.JobClient:     Bytes Written=137

[/code]

Looking at the newly created out directory on HDFS, you will find the generated part-r-00000, whose contents begin:
[code]Mary    2
a    1
and    1
as    1
everywhere    1

[/code]
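The result can also be read back without the Eclipse HDFS browser; a minimal sketch, assuming the /out output directory configured above:
[code]import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadResult {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.132.82:9000"),
                                       new Configuration(), "hadoop");
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(new Path("/out/part-r-00000"))));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line); // each line is "word<TAB>count"
        }
        reader.close();
        fs.close();
    }
}

[/code]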