
WordCount Example --- MapReduce Study Notes (1)

2017-01-29 12:04 · 387 views
It has been a long time since I first encountered MapReduce. I remember that at the beginning, faced with a program containing so much code, I had no idea where to start, and I took many detours. Later, learning step by step, I found that it is not some insurmountable pit after all.

Practice more and summarize more, and you will discover the secret behind it.

The classic WordCount

The idea:

1. Map phase: read the input file line by line, split each line into words, and emit output pairs in the form <word, 1>.
2. Reduce phase: after the shuffle groups the pairs by word, sum the 1s in each group to get that word's count.
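The pairs emitted in the map step can be sketched in plain Java, outside Hadoop (a minimal illustration, assuming whitespace-separated words; the class and method names here are mine, not part of the post's code):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map.Entry;

public class MapPhaseSketch {
    // Simulates the map step: one input line -> a list of (word, 1) pairs.
    static List<Entry<String, Integer>> map(String line) {
        List<Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : line.split(" ")) {
            pairs.add(new SimpleEntry<>(word, 1));
        }
        return pairs;
    }

    public static void main(String[] args) {
        for (Entry<String, Integer> e : map("hello world hello")) {
            System.out.println(e.getKey() + "\t" + e.getValue());
        }
    }
}
```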

package day1;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class wordCountDemo {

    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("input err!");
            System.exit(-1);
        }
        // Job.getInstance replaces the deprecated new Job(Configuration, String)
        Job job = Job.getInstance(new Configuration(), "WordCount");
        job.setJarByClass(wordCountDemo.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(wMap.class);
        job.setReducerClass(wReduce.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    // map phase: one input line -> one (word, 1) pair per word
    public static class wMap extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value,
                Mapper<LongWritable, Text, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            String[] lines = value.toString().split(" ");
            for (String words : lines) {
                context.write(new Text(words), new IntWritable(1));
            }
        }
    }

    // map phase emits:   <key, value>
    // shuffle groups to: <key, {value, value, ...}>
    // reduce phase sums the grouped values
    public static class wReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text k1, Iterable<IntWritable> v1,
                Reducer<Text, IntWritable, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable count : v1) {
                sum += count.get();
            }
            context.write(k1, new IntWritable(sum));
        }
    }

}
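The whole map → shuffle → reduce flow can be simulated in plain Java without a Hadoop cluster (my own sketch for review purposes; a TreeMap stands in for the shuffle's grouping and sorting):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class LocalWordCount {
    // map: line -> (word, 1); shuffle: group values by key; reduce: sum each group.
    static Map<String, Integer> wordCount(List<String> lines) {
        // The TreeMap plays the shuffle phase: it groups values and sorts by key.
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (String line : lines) {
            for (String word : line.split(" ")) {   // map emits (word, 1)
                grouped.computeIfAbsent(word, k -> new ArrayList<>()).add(1);
            }
        }
        Map<String, Integer> result = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;                            // reduce sums each group
            for (int v : e.getValue()) {
                sum += v;
            }
            result.put(e.getKey(), sum);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("a b a", "b c"))); // {a=2, b=2, c=1}
    }
}
```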


WordCount really isn't hard. It may be the entry-level program, but it is remarkably useful: many jobs follow exactly the same pattern. While reviewing, I wrote another one myself.

package day1;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Sum each student's total score.
 * Record format:
 *   name:subject:score
 *   Zhang San:Chinese:12
 *   Li Si:Math:23
 *   Zhang San:English:1000
 *   ...
 * @author YXY
 */
public class SumDemo {
    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("input err!");
            System.exit(-1);
        }
        Job job = Job.getInstance(new Configuration(), "Sumdemo");
        // note: this must point at this class, not wordCountDemo
        job.setJarByClass(SumDemo.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(SumMap.class);
        job.setReducerClass(SumReduce.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    // map phase: "name:subject:score" -> (name, score)
    public static class SumMap extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value,
                Mapper<LongWritable, Text, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            String[] lines = value.toString().split(":");
            String name = lines[0].trim();
            int score = Integer.parseInt(lines[2].trim());
            context.write(new Text(name), new IntWritable(score));
        }
    }

    // reduce phase: sum all scores for one student
    public static class SumReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
                Reducer<Text, IntWritable, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable score : values) {
                sum += score.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}
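The mapper's parsing step is the only part that differs from WordCount, and it can be checked on its own in plain Java (a small sketch of mine using the record format above; the class and method names are illustrative):

```java
public class ParseSketch {
    // Mirrors SumMap's parsing: split "name:subject:score" and keep (name, score).
    static String name(String record) {
        return record.split(":")[0].trim();
    }

    static int score(String record) {
        return Integer.parseInt(record.split(":")[2].trim());
    }

    public static void main(String[] args) {
        String record = "Zhang San:Chinese:12";
        System.out.println(name(record) + " -> " + score(record));
    }
}
```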


Points to note

1. When using the IDE's auto-completion, make sure you import the correct package, because you will often see several classes with exactly the same name.
2. Understand this way of thinking, and review how MapReduce works internally.
3. Ctrl+Alt+L: the IDE shortcut for reformatting code.
4. Don't be lazy. At first, when the code doesn't come fluently, you may be tempted to get past it by copy-pasting; that is really the most irresponsible thing you can do to yourself.
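One refinement worth knowing while reviewing (my addition, not from the original post): because summing is associative and commutative, the WordCount Reducer can also serve as a Combiner, pre-aggregating counts on each map task and reducing shuffle traffic. In the driver, right after setReducerClass:

```java
// Local pre-aggregation on the map side; safe here because summing
// partial counts gives the same result as summing all the 1s at once.
job.setCombinerClass(wReduce.class);
```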