
HBase Learning, Part 4: Storing MapReduce Output in HBase, and Fixing java.lang.NoClassDefFoundError

2016-07-07 22:25
Source code for a MapReduce job that writes its results into HBase (adapted from material found online; tested OK):

Mapper class:

package hbase;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class HBaseMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Each input line has the form "rowkey,name", e.g. "78827,jiangxiaozhi".
        String[] item = value.toString().split(",");
        if (item.length < 2) {
            return; // skip blank or malformed lines
        }
        String k = item[0]; // becomes the HBase rowkey
        String v = item[1]; // becomes the value for info:name
        context.write(new Text(k), new Text(v));
    }
}

Reducer class:
package hbase;

import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;

public class HBaseReducer extends TableReducer<Text, Text, ImmutableBytesWritable> {
    @Override
    public void reduce(Text key, Iterable<Text> value, Context context)
            throws IOException, InterruptedException {
        String k = key.toString();
        String v = value.iterator().next().toString();
        Put putrow = new Put(Bytes.toBytes(k));
        // Write the value into column family "info", qualifier "name".
        putrow.add(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes(v));
        // Note the key and value types: a TableReducer emits
        // (ImmutableBytesWritable, Put) pairs.
        context.write(new ImmutableBytesWritable(Bytes.toBytes(k)), putrow);
    }
}
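A side note on API versions: Put.add(byte[], byte[], byte[]) is the HBase 0.94/0.98-era call used above; on HBase 1.x and later it was deprecated and eventually removed in favor of addColumn. If you are on a newer client, a drop-in replacement for that one line would be:

putrow.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes(v));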

Driver class:
package hbase;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.util.Tool;

public class HBaseDriver extends Configured implements Tool {

    @Override
    public int run(String[] arg0) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // ZooKeeper quorum and znode parent of the target HBase cluster.
        conf.set("hbase.zookeeper.quorum",
                "hadoop001.icccuat.com,hadoop002.icccuat.com,hadoop003.icccuat.com");
        conf.set("zookeeper.znode.parent", "/hbase-unsecure");
        @SuppressWarnings("deprecation")
        Job job = new Job(conf, "Txt-to-Hbase");
        job.setJarByClass(TxHBase.class);
        Path in = new Path("/home/hbase/"); // input path on HDFS
        FileInputFormat.addInputPath(job, in);
        job.setMapperClass(HBaseMapper.class);
        job.setReducerClass(HBaseReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        // "emp" is the target HBase table; this also sets up TableOutputFormat.
        TableMapReduceUtil.initTableReducerJob("emp", HBaseReducer.class, job);
        job.waitForCompletion(true);
        return 0;
    }
}
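Worth understanding before the error that follows: in the HBase versions of this era, the simple initTableReducerJob overload used above also calls TableMapReduceUtil.addDependencyJars(job) by default, shipping the HBase jars to the map/reduce tasks through the distributed cache. So the NoClassDefFoundError below is not a task-side problem; it is thrown in the driver JVM, which needs the HBase jars on its own classpath just to load HBaseConfiguration. If you ever use an overload that skips this step, the jars can be attached explicitly with one line before waitForCompletion (a sketch):

// Ship HBase's jars (and their transitive dependencies) with the job so
// the map/reduce tasks can load them from the distributed cache.
TableMapReduceUtil.addDependencyJars(job);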

Entry point:
package hbase;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;

public class TxHBase {
    public static void main(String[] args) throws Exception {
        int mr = ToolRunner.run(new Configuration(), new HBaseDriver(), args);
        System.exit(mr);
    }
}
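One subtlety worth flagging: ToolRunner parses generic options (-D key=value, -libjars, and so on) into the Configuration it hands to the Tool, but run() above builds a fresh HBaseConfiguration and never consults getConf(), so any generic options passed on the command line are silently dropped. This example does not need them, but if you want them honored, a small change in the driver would be:

// In HBaseDriver.run(): seed the HBase configuration from the one that
// ToolRunner populated, instead of discarding it.
Configuration conf = HBaseConfiguration.create(getConf());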


The input file contents:

[hdfs@hadoop002 lib]$ hadoop fs -cat /home/hbase/a.txt
78827,jiangxiaozhi
666777,zhangsan
77877,hecheng
123322,liusi

Note: the part before each comma becomes the rowkey. Use rowkeys that do not already exist in the table; a Put with an existing rowkey would simply overwrite that row (adding a new cell version) rather than create a new one, which makes it hard to verify the insert.
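If in doubt, a candidate rowkey can be checked from the Java client before loading. A minimal sketch against the 0.98-era API used elsewhere in this post (the table name and rowkey are just the ones from this example; HTable and Get come from org.apache.hadoop.hbase.client):

// Check whether a candidate rowkey is already taken in 'emp'.
Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "emp");
boolean taken = table.exists(new Get(Bytes.toBytes("78827")));
table.close();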

Package the program as a jar (in Eclipse):

export -> jar file -> next -> choose a save path -> next -> next -> specify the main class (here, TxHBase) -> finish

Copy the jar to the Hadoop cluster and run it with hadoop jar mapreducehbase.jar hbase.TxHBase (note that the main class is given by its fully qualified name).

This fails with the following error:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
at hbase.HBaseDriver.run(HBaseDriver.java:18)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at hbase.TxHBase.main(TxHBase.java:8)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 9 more

Running a MapReduce program with no HBase code in it works without any problem; as soon as HBase code is added, it fails with one HBase-related NoClassDefFoundError or another. After digging through a lot of material online, the cause boils down to this: the MR program cannot find the cluster's HBase jars. The fix is to add the HBase jars to the Hadoop classpath. The following approach was tested and works:

In hadoop-env.sh under the conf directory of the Hadoop installation, find HADOOP_CLASSPATH:

export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}${JAVA_JDBC_LIBS}:${MAPREDUCE_LIBS}

After appending the HBase lib directory it becomes:

HBASE_LIBS=/usr/hdp/2.2.6.0-2800/hbase/lib/*   # location of the HBase jars

export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}${JAVA_JDBC_LIBS}:${MAPREDUCE_LIBS}:${HBASE_LIBS}   # append ${HBASE_LIBS}
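As an aside, on installations where the hbase launcher script is available, running export HADOOP_CLASSPATH=$(hbase classpath) in the shell before submitting achieves the same effect for that session only, without editing hadoop-env.sh.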

No restart is needed. Rerun hadoop jar mapreducehbase.jar hbase.TxHBase and the job now succeeds. Next, check that the rows were actually inserted by querying from the HBase shell:

hbase(main):001:0> scan 'emp'
ROW      COLUMN+CELL
 1001    column=info:age, timestamp=1467103276147, value=20
 1001    column=info:name, timestamp=1467103276137, value=zhangsan
 1002    column=info:age, timestamp=1467103276151, value=21
 1002    column=info:name, timestamp=1467103276149, value=lisi
 1003    column=info:age, timestamp=1467103276154, value=22
 1003    column=info:name, timestamp=1467103276152, value=wangwu
 1004    column=info:age, timestamp=1467103276157, value=22
 1004    column=info:name, timestamp=1467103276156, value=xiaoming
 1005    column=info:age, timestamp=1467103276160, value=17
 1005    column=info:name, timestamp=1467103276159, value=hanmeimei
 1006    column=info:age, timestamp=1467103276165, value=28
 1006    column=info:name, timestamp=1467103276162, value=xiaohong
 1007    column=info:age, timestamp=1467103276168, value=45
 1007    column=info:name, timestamp=1467103276167, value=haimingwei
 1008    column=info:age, timestamp=1467103276172, value=16
 1008    column=info:name, timestamp=1467103276170, value=xiaoqi
 123322  column=info:name, timestamp=1467809673640, value=liusi          ---- newly inserted row
 2001    column=info:age, timestamp=1467103276175, value=23
 2001    column=info:name, timestamp=1467103276173, value=zhaoliu
 3002    column=info:age, timestamp=1467103276178, value=24
 3002    column=info:name, timestamp=1467103276177, value=liqi
 666777  column=info:name, timestamp=1467809673640, value=zhangsan       ---- newly inserted row
 77877   column=info:name, timestamp=1467809673640, value=hecheng        ---- newly inserted row
 78827   column=info:name, timestamp=1467809673640, value=jiangxiaozhi   ---- newly inserted row
14 row(s) in 0.2780 seconds

One more note: when doing paged HBase queries with compound filter conditions, the page size configured on PageFilter did not take effect; the query only behaved correctly after switching to rs.next(pageSize) on the client. The usual explanation is that the filter is shipped to and executed on each region server independently, so when the per-region results are merged at the client there is no guarantee the total matches the configured pageSize. The filter's exact execution logic remains to be verified.
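For reference, a minimal sketch of that workaround against the same 0.98-era client API (the table name and page size are illustrative assumptions, not from the original query):

package hbase;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PageFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PageScan {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "emp"); // illustrative table name
        int pageSize = 10;                      // illustrative page size
        Scan scan = new Scan();
        // PageFilter runs on each region server independently, so it only
        // caps rows per region; the merged result can exceed pageSize.
        scan.setFilter(new PageFilter(pageSize));
        ResultScanner rs = table.getScanner(scan);
        // Enforce the exact page size on the client side.
        Result[] page = rs.next(pageSize);
        for (Result r : page) {
            System.out.println(Bytes.toString(r.getRow()));
        }
        rs.close();
        table.close();
    }
}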