4. My First MapReduce Program
2015-11-23 22:10
Step 1: the map class, TokenizerMapper:
package com.Kevin.hadoop;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

    // Reusable Writable objects: the constant count 1, and the current word.
    private final IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split the input line into whitespace-delimited tokens.
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one); // emit (word, 1)
        }
    }
}
TokenizerMapper extends Mapper<Object, Text, Text, IntWritable>: Object is the type of the input key (here, the byte offset of the line within the file), the first Text is the input value (the line itself), the second Text is the output key (a word), and IntWritable is the output value (its count).
IntWritable one = new IntWritable(1); gives every emitted word the constant value 1.
Text word = new Text(); is the reusable output key object.
StringTokenizer itr = new StringTokenizer(value.toString()); uses StringTokenizer, the standard Java class for splitting strings, to break the line into whitespace-delimited tokens.
context.write(word, one); hands each (word, 1) pair to the context, where it is held for the reduce step.
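To make the map step concrete, here is a minimal standalone sketch (not part of the original program; the input line is made up) that runs the same StringTokenizer logic and prints the (word, 1) pairs the mapper would emit:

import java.util.StringTokenizer;

public class MapStepDemo {
    public static void main(String[] args) {
        String line = "hello world hello hadoop"; // hypothetical input line
        StringTokenizer itr = new StringTokenizer(line);
        while (itr.hasMoreTokens()) {
            // the real mapper calls context.write(word, one) here
            System.out.println("(" + itr.nextToken() + ", 1)");
        }
    }
}

Running it prints (hello, 1), (world, 1), (hello, 1), (hadoop, 1): one pair per token, with the duplicates left for the reduce step to combine.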
Step 2: the reduce class, IntSumReducer:
package com.Kevin.hadoop;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum all the 1s emitted by the mappers for this word.
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result); // emit (word, total count)
    }
}
for (IntWritable val : values) {
    sum += val.get();
}
This loop reads every value that arrives with the key and accumulates it; the final sum is then written to the context.
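Between the two steps the framework sorts and groups the mapper output by key, so each reduce call sees one word together with all of its 1s. A rough simulation of that summing step, using a plain List where the framework supplies an Iterable (the sample data continues the hypothetical line above):

import java.util.Arrays;
import java.util.List;

public class ReduceStepDemo {
    public static void main(String[] args) {
        // After the shuffle, the key "hello" arrives with both of its 1s grouped.
        List<Integer> values = Arrays.asList(1, 1);
        int sum = 0;
        for (int val : values) {
            sum += val; // same accumulation as IntSumReducer
        }
        System.out.println("(hello, " + sum + ")"); // prints (hello, 2)
    }
}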
Step 3: the driver class, WordCount:
package com.Kevin.hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // GenericOptionsParser consumes Hadoop's own flags (-D, -files, ...)
        // and returns what is left: the input and output paths.
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "wordcount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
The Configuration class reads, writes, and holds the job's various configuration resources.
job.setMapperClass(TokenizerMapper.class); registers the map class.
job.setReducerClass(IntSumReducer.class); registers the reduce class.
job.setOutputKeyClass(Text.class); declares the output key type.
job.setOutputValueClass(IntWritable.class); declares the output value type.
FileInputFormat.addInputPath(job, new Path(otherArgs[0])); sets the input path.
FileOutputFormat.setOutputPath(job, new Path(otherArgs[1])); sets the output path; the directory must not already exist, or the job will fail.
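Assuming the three classes are packaged into a jar (the jar name and HDFS paths here are examples, not from the original post), the job would typically be submitted with hadoop jar wordcount.jar com.Kevin.hadoop.WordCount /user/kevin/input /user/kevin/output. On success the word counts land in part-r-00000 (and further part-r-* files, one per reducer) under the output directory.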