
Hadoop 1.x MapReduce Minimal Driver Configuration

2016-04-18 20:35
The minimal driver configuration in MapReduce is one that specifies no Mapper and no Reducer. See the following code:

package org.dragon.hadoop.mr;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Minimal MapReduce job: no Mapper or Reducer is specified.
 * @author Administrator
 */
public class MinimalMapReduce {

    // Mapper: not set, the default is used

    // Reducer: not set, the default is used

    public static void main(String[] args) throws Exception {

        // input and output paths (hard-coded for this example)
        args = new String[]{
            "hdfs://hadoop-master.dragon.org:9000/opt/data/test/input/simple_file.txt",
            "hdfs://hadoop-master.dragon.org:9000/opt/data/test/output7/"
        };

        // configuration
        Configuration conf = new Configuration();

        // create the job
        Job job = new Job(conf, MinimalMapReduce.class.getSimpleName());

        // set the jar via the driver class
        job.setJarByClass(MinimalMapReduce.class);

        // set input/output paths
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // submit the job and wait for completion
        boolean isSuccess = job.waitForCompletion(true);

        // exit with the job status
        System.exit(isSuccess ? 0 : 1);
    }
}
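To run the job, one option (a hedged sketch; the jar name here is an assumption, not from the original post) is to package the class into a jar and launch it with the hadoop command. Since the input and output paths are hard-coded in main, no command-line arguments are needed:

hadoop jar minimal-mr.jar org.dragon.hadoop.mr.MinimalMapReduce

Note that FileOutputFormat requires the output directory (output7 here) not to exist yet; if it already does, the job fails at submission time.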


The behavior of the above MapReduce program can be analyzed as follows:

* A minimally configured MapReduce job reads the contents of the input file and writes them to an output file in the specified directory, where:
* 		key: the byte offset at which each line starts in the original input file
* 		value: the original content of that line from the input file
* So each line of the output file has the form: key + \t + value (a sample is shown below)
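For example (the sample file content here is hypothetical, not from the original post), if simple_file.txt contained the two lines:

hadoop mapreduce
hadoop hdfs

then the output file would contain the following, with the byte offsets 0 and 17 as keys (the first line plus its trailing newline is 17 bytes long):

0	hadoop mapreduce
17	hadoop hdfs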


By reading the source code (mainly the JobContext class), you can find the default Mapper, Reducer, and formats. The minimal job above is therefore equivalent to setting:

// defaults obtained by reading the source code
// default input format
job.setInputFormatClass(TextInputFormat.class);
// default mapper
job.setMapperClass(Mapper.class);
job.setMapOutputKeyClass(LongWritable.class);
job.setMapOutputValueClass(Text.class);
// default reducer
job.setReducerClass(Reducer.class);
job.setOutputKeyClass(LongWritable.class);
job.setOutputValueClass(Text.class);
// default output format
job.setOutputFormatClass(TextOutputFormat.class);
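These defaults can also be confirmed at runtime by asking a freshly created Job for its effective classes. The following is a small verification sketch (the class name DefaultJobSettings is made up for illustration); the getters are inherited from JobContext, which Job extends in Hadoop 1.x:

package org.dragon.hadoop.mr;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

/**
 * Prints the classes a newly created Job falls back to
 * when nothing is configured explicitly.
 */
public class DefaultJobSettings {

    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "defaults");

        // defaults resolved by JobContext when nothing is set
        System.out.println(job.getInputFormatClass());    // TextInputFormat
        System.out.println(job.getMapperClass());         // Mapper
        System.out.println(job.getMapOutputKeyClass());   // LongWritable
        System.out.println(job.getMapOutputValueClass()); // Text
        System.out.println(job.getReducerClass());        // Reducer
        System.out.println(job.getOutputKeyClass());      // LongWritable
        System.out.println(job.getOutputValueClass());    // Text
        System.out.println(job.getOutputFormatClass());   // TextOutputFormat
    }
}

The default Mapper and Reducer base classes are identity implementations: map() and reduce() simply forward each key/value pair to the context, which is why the minimal job above copies its input through unchanged, apart from prefixing each line with its byte-offset key.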