
HADOOP Learning Notes (Part 5): Hadoop Sample Code Analysis

2013-09-21 19:22
In the previous installment we already got a Hadoop program to run successfully. In this installment I want to write things down and also analyze the code. My analysis simply follows "Hadoop: The Definitive Guide", plus some observations of my own.

Sample code:


import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    /**
     * extends Mapper<Object, Text, Text, IntWritable>
     * The four type parameters are k1, v1, k2, v2:
     * the key/value types the map receives as input,
     * and the key/value types the map emits as output.
     */
    public static class TokenizerMapper extends
            Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    /**
     * extends Reducer<Text, IntWritable, Text, IntWritable>
     * The four type parameters are k2, v2, k3, v3:
     * the key/value types the reduce receives as input,
     * and the key/value types the reduce emits as output.
     * The k2, v2 here must be the same as the map's k2, v2.
     */
    public static class IntSumReducer extends
            Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // point the client at the JobTracker on the cluster
        conf.set("mapred.job.tracker", "192.168.0.151:9001");
        // hard-coded arguments so the program can be launched directly from the IDE
        String[] ars = new String[] { "input", "output3" };
        String[] otherArgs = new GenericOptionsParser(conf, ars)
                .getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        // register a job named "wordcount"
        Job job = new Job(conf, "wordcount");
        // the class used to locate the job's jar
        job.setJarByClass(WordCount.class);
        // the Mapper implementation
        job.setMapperClass(TokenizerMapper.class);
        // the combiner implementation
        job.setCombinerClass(IntSumReducer.class);
        // the Reducer implementation
        job.setReducerClass(IntSumReducer.class);
        // the output key type (k3)
        job.setOutputKeyClass(Text.class);
        // the output value type (v3)
        job.setOutputValueClass(IntWritable.class);
        // input path for the data
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        // output path for the results
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The points I think need some extra explanation here are:

1. On the data types the job configures for input and output: there are several type-setting methods, and some of the types they set must stay consistent with each other. setOutputKeyClass()/setOutputValueClass() declare the final (reduce) output types, k3 and v3. The map output types, k2 and v2, default to those same classes; if the map emits different types, setMapOutputKeyClass()/setMapOutputValueClass() must be called as well, and k2/v2 must match the input types of both the combiner and the reducer.
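A minimal sketch of how these setters relate to each other, not from the original post (the class and method names here are my own), assuming the map output types are set explicitly:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

// Hypothetical sketch: the two pairs of type setters and how they must agree
// with the generic parameters of the Mapper/Reducer classes.
public class TypeConfigSketch {
    public static void configure(Job job) {
        // k2/v2 -- what the map emits; must match the input types of the
        // combiner and the reducer. They default to the output types below,
        // which is why the WordCount example can omit these two calls.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // k3/v3 -- what the reduce finally writes out.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
    }
}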
2. On the combiner function:

The combiner is a function that merges the output of each map task locally after the map finishes; its output has exactly the same form as the reduce input. It is an optimization, effectively a pre-pass of the reduce, so no matter how many times the combiner is applied, the final reduce result must not change. So which scenarios suit a combiner? Take finding a maximum value as an example.

The output of the first map task is:

2000 50

2000 70

2000 80

The output of the second map task is:

2000 90

2000 99

Without a combiner, the reduce function is called with {2000, [50, 70, 80, 90, 99]}; 99 is the largest, so the final output is (2000, 99).

With a combiner that takes the maximum of each map task's output, the data passed into the reduce function becomes {2000, [80, 99]}, and the final output is still (2000, 99).

The end result is the same as before; only the flow is slightly different.

That is, max(50, 70, 80, 90, 99) = max(max(50, 70, 80), max(90, 99)). But not every scenario can use a combiner this way; computing a mean, for example, cannot: mean(50, 70, 80, 90, 99) = 77.8, while mean(mean(50, 70, 80), mean(90, 99)) = mean(66.7, 94.5) ≈ 80.6. A small sketch of wiring a max-value reducer in as the combiner follows.
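A minimal sketch, assuming a MaxValueReducer class of my own naming (not from the post), of a reducer that is safe to reuse as a combiner:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical sketch: a max-value reducer that can double as a combiner,
// because max(max(a, b, c), max(d, e)) == max(a, b, c, d, e) no matter how
// the values are grouped across map tasks.
public class MaxValueReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int max = Integer.MIN_VALUE;
        for (IntWritable val : values) {
            max = Math.max(max, val.get());
        }
        result.set(max);
        context.write(key, result);
    }
}

Registering it for both roles is then just job.setCombinerClass(MaxValueReducer.class) and job.setReducerClass(MaxValueReducer.class). A mean, by contrast, would need the sum and the count carried separately before a combiner becomes safe.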