
Learning Hadoop 2.7.1 (Part 3): Developing the First Hadoop Application

2016-01-21 17:24
Operating system: ubuntu-14.04.3-desktop-amd64 (running under Windows 7)

Hadoop version: hadoop-2.7.1

JDK version: jdk-7u79-linux-x64.tar.gz

Eclipse version: eclipse-jee-mars-R-win32

Maven version: apache-maven-3.3.9

1. Overview of the Hadoop Development Environment

As shown in the figure above, we can develop either on Windows or on Linux, starting Hadoop locally or calling a remote Hadoop cluster; in either case the standard tools are Maven and Eclipse.

2. Building the Hadoop Environment with Maven

1) Create your own workspace directory

2) In the workspace directory, use Maven to create a standard Java project

mvn archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DgroupId=com.myhadoop -DartifactId=myHadoop -DpackageName=org.myhadoop -Dversion=1.0-SNAPSHOT -DinteractiveMode=false

3) Go into the myHadoop project directory and run the following mvn commands to generate the Eclipse project

mvn clean install

mvn eclipse:eclipse

4) Open Eclipse and configure Maven

Configure Installations

Configure User Settings
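Under Installations, register the unpacked apache-maven-3.3.9 directory; under User Settings, point Eclipse at your settings.xml. A minimal settings.xml might look like the following sketch (the local repository path is just a placeholder, adjust it for your machine):

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <!-- Location of the local artifact cache; placeholder path -->
  <localRepository>/home/tongwei/.m2/repository</localRepository>
</settings>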

5) Import the Hadoop project into Eclipse

6) Add the Hadoop dependencies to pom.xml
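A minimal set of dependencies for Hadoop 2.7.1 could look like the following sketch (adjust the artifacts to the APIs you actually use; the single hadoop-client artifact is an alternative that pulls all of them in):

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.7.1</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>2.7.1</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-core</artifactId>
  <version>2.7.1</version>
</dependency>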

Note:

Many frameworks depend on tools.jar from the JDK, but it is not available in the Maven repositories.

When writing MapReduce code with Eclipse + Maven, for example, you will see the error "Missing artifact jdk.tools:jdk.tools:jar:1.6".

To fix this, add the following configuration to the project's pom.xml, which tells Maven to pick up tools.jar from the local JDK:

<dependency>
  <groupId>jdk.tools</groupId>
  <artifactId>jdk.tools</artifactId>
  <version>1.8</version>
  <scope>system</scope>
  <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
</dependency>

3. Developing the Applications

1) Develop an HDFS test program

package com.myhadoop;

import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HDFSTest {

    public static void main(String[] args) throws Exception {
        String uri = "hdfs://192.168.248.128:9000/";
        Configuration config = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), config);

        // List all files and directories under the given HDFS directory
        FileStatus[] statuses = fs.listStatus(new Path("dfs/data/test"));
        for (FileStatus status : statuses) {
            System.out.println(status);
        }

        // Create a file in the HDFS directory and write one line of text
        FSDataOutputStream os = fs.create(new Path("dfs/data/test/test.log"));
        os.write("HelloWorld!".getBytes());
        os.flush();
        os.close();

        // Print the contents of the file we just wrote
        InputStream is = fs.open(new Path("dfs/data/test/test.log"));
        IOUtils.copyBytes(is, System.out, 1024, true);
    }
}

Note: if you get a connect-failure error, change the host in core-site.xml to the externally visible IP:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.248.128:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/tongwei/Work/Dev/Hadoop/hadoop-2.7.1/tmp</value>
    <description>A base for other temporary directories</description>
  </property>
</configuration>

2) Develop a MapReduce test program

package com.myhadoop;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class EventCount {

    public static class MyMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text event = new Text();

        // Extract the event name (the text before the first space) and emit (event, 1)
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            int idx = value.toString().indexOf(" ");
            if (idx > 0) {
                String e = value.toString().substring(0, idx);
                event.set(e);
                context.write(event, one);
            }
        }
    }

    public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        // Sum the counts emitted for each event name
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: EventCount <in> <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "event count");
        job.setJarByClass(EventCount.class);
        job.setMapperClass(MyMapper.class);
        // The reducer can double as the combiner because the sum is commutative and associative
        job.setCombinerClass(MyReducer.class);
        job.setReducerClass(MyReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Run "mvn package" to build the jar myHadoop-1.0-SNAPSHOT.jar, and copy the jar file to the Hadoop installation directory. Suppose we want to analyze the event information in several log files and count how many times each kind of event occurs, so create the following directories and files.
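For example (a sketch: with Maven's default layout the jar lands under target/, and the Hadoop path below is the installation directory that appears in core-site.xml above):

$ mvn package

$ cp target/myHadoop-1.0-SNAPSHOT.jar /home/tongwei/Work/Dev/Hadoop/hadoop-2.7.1/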

Create the input folder,

$ mkdir input

In the input folder create the log file we want to analyze,

$ sudo vim ./input/event.log.1

with the following content:

JOB_NEW ...

JOB_NEW ...

JOB_FINISH ...

JOB_NEW ...

JOB_FINISH ...

Create the event.log.2 and event.log.3 log files in the same way,

$ cp ./input/event.log.1 ./input/event.log.2

$ cp ./input/event.log.1 ./input/event.log.3

Create an input folder under dfs/data on HDFS,

$ bin/hadoop fs -mkdir -p ./dfs/data/input

Then copy these files to HDFS,

$ bin/hdfs dfs -put ./input ./dfs/data

or

$ bin/hadoop fs -put ./input ./dfs/data

Run the MapReduce job,

$ bin/hadoop jar myHadoop-1.0-SNAPSHOT.jar com.myhadoop.EventCount dfs/data/input dfs/data/output

Check the result,

$ bin/hadoop fs -cat dfs/data/output/part-r-00000
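Assuming the three log files each contain exactly the five lines shown above (three JOB_NEW and two JOB_FINISH), the output should look roughly like this (key and count are tab-separated):

JOB_FINISH	6
JOB_NEW	9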

Appendix:

In Eclipse, set input and output as the arguments passed to the program:

String input = "hdfs://192.168.248.128:9000/user/tongwei/dfs/data/input";

String output = "hdfs://192.168.248.128:9000/user/tongwei/dfs/data/output";

When running the program you may hit the following error,

ERROR [org.apache.hadoop.util.Shell] Failed to locate the winutils binary in the hadoop binary path

java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

The cause is that the program uses HADOOP_HOME to locate winutils.exe; since that environment variable is not configured on the Windows machine, it ends up looking for null\bin\winutils.exe.

Solutions:

1. Download the Windows build of winutils for Hadoop 2.7.1

http://download.csdn.net/detail/faq_tong/9413293

2. Use a 64-bit JDK as the compiler in Eclipse to resolve the NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)S error

3. Add the HADOOP_HOME environment variable, or set the system property in code:

System.setProperty("hadoop.home.dir","C:/tongwei/works/myprojects/HadoopEx/tools/hadoop-common-bin-2.7.1");

4. Copy hadoop.dll into windows/system32 to resolve the org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z error

With all of that in place, everything is OK and the program runs successfully!