
HBase Pseudo-Distributed Installation

2012-03-05 14:24
Environment: CentOS 6.0 + JDK 1.6.0_29 + Hadoop 1.0.0 + HBase 0.90.4

This guide assumes CentOS 6.0, JDK 1.6.0_29, and Hadoop 1.0.0 are already installed and working.

1. Download hbase-0.90.4.tar.gz from the official site and extract it to a suitable directory (e.g. /opt):

cd /opt
tar zxvf hbase-0.90.4.tar.gz
chown -R hadoop:hadoop /opt/hbase-0.90.4


2. Set environment variables:

vim ~/.bashrc
export HBASE_HOME=/opt/hbase-0.90.4    # set according to your HBase install directory
export PATH=$PATH:$HBASE_HOME/bin
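After editing ~/.bashrc, reload it and confirm the variables took effect. A quick check, assuming the /opt/hbase-0.90.4 path used above (the variables are set inline here for illustration; in a real session you would just run `source ~/.bashrc`):

```shell
# Apply the new environment (equivalent to: source ~/.bashrc)
export HBASE_HOME=/opt/hbase-0.90.4
export PATH=$PATH:$HBASE_HOME/bin

# Verify both variables
echo "$HBASE_HOME"                                        # prints /opt/hbase-0.90.4
echo "$PATH" | grep -q "$HBASE_HOME/bin" && echo "PATH ok"
```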
3. Configure HBase:

In $HBASE_HOME/conf, set JAVA_HOME in hbase-env.sh according to your JDK installation, as shown below:

# The java implementation to use.  Java 1.6 required.
export JAVA_HOME=/usr/local/jdk/jdk1.6.0_29


Also in $HBASE_HOME/conf, make sure the host and port in the hbase.rootdir value of hbase-site.xml match the host and port of fs.default.name in $HADOOP_HOME/conf/core-site.xml, then add the following:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>localhost:60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
</configuration>
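The host:port agreement between the two files is easy to get wrong, and a mismatch means HBase cannot reach HDFS. A small sketch that extracts and compares the authority part of the two URIs (the `hdfs_authority` helper is hypothetical; the example values are the ones used in this guide):

```shell
# Extract the host:port ("authority") portion of an HDFS URI,
# e.g. hdfs://localhost:9000/hbase -> localhost:9000
hdfs_authority() {
  echo "$1" | sed -e 's|^hdfs://||' -e 's|/.*$||'
}

fs_default_name="hdfs://localhost:9000"        # value of fs.default.name in core-site.xml
hbase_rootdir="hdfs://localhost:9000/hbase"    # value of hbase.rootdir in hbase-site.xml

if [ "$(hdfs_authority "$fs_default_name")" = "$(hdfs_authority "$hbase_rootdir")" ]; then
  echo "host:port match"
else
  echo "MISMATCH: HBase will not be able to reach HDFS"
fi
```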


4. Start Hadoop first, then HBase:

$start-all.sh        # start Hadoop
$jps                 # check Hadoop's status; confirm NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker are all running
31557 DataNode
31432 NameNode
31902 TaskTracker
31777 JobTracker
689 Jps
31683 SecondaryNameNode
$start-hbase.sh       # start HBase only once Hadoop is fully up
$jps                  # check HBase's status; confirm HQuorumPeer, HMaster, and HRegionServer are all running
31557 DataNode
806 HQuorumPeer
31432 NameNode
853 HMaster
31902 TaskTracker
950 HRegionServer
1110 Jps
31777 JobTracker
31683 SecondaryNameNode
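The jps listing above can also be checked programmatically. A minimal sketch (the `check_daemons` helper is hypothetical, not part of Hadoop or HBase; in a live session you would pass `"$(jps)"` instead of the canned listing):

```shell
# Report which expected daemons are missing from a jps listing.
# $1 = jps output, $2 = space-separated daemon names; prints "ok" if none are missing.
check_daemons() {
  missing=""
  for d in $2; do
    echo "$1" | grep -qw "$d" || missing="$missing $d"
  done
  echo "${missing:-ok}"
}

# Canned listing for illustration; use "$(jps)" on a real cluster.
listing="31432 NameNode
31557 DataNode
853 HMaster"

check_daemons "$listing" "NameNode DataNode HMaster"   # prints: ok
check_daemons "$listing" "NameNode HRegionServer"      # reports HRegionServer as missing
```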
$ hbase               # list the available hbase commands
Usage: hbase <command>
where <command> is one of:
shell            run the HBase shell
zkcli            run the ZooKeeper shell
master           run an HBase HMaster node
regionserver     run an HBase HRegionServer node
zookeeper        run a Zookeeper server
rest             run an HBase REST server
thrift           run an HBase Thrift server
avro             run an HBase Avro server
migrate          upgrade an hbase.rootdir
hbck             run the hbase 'fsck' tool
classpath        dump hbase CLASSPATH
or
CLASSNAME        run the class named CLASSNAME
Most commands print help when invoked w/o parameters.

$hbase shell                    # start the HBase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.90.4, r1150278, Sun Jul 24 15:53:29 PDT 2011

hbase(main):001:0>





HBase may fail to start at this point (I ran into this with HBase 0.90.4 on Hadoop 0.20.203.0; I did not test Hadoop 1.0.0 without the fix and simply applied the following steps directly). The fix is to copy hadoop-core-1.0.0.jar from $HADOOP_HOME and commons-configuration-1.6.jar from $HADOOP_HOME/lib into $HBASE_HOME/lib, and to delete the bundled hadoop-core-0.20-append-r1056497.jar from $HBASE_HOME/lib, so that the Hadoop client classes HBase loads match the running cluster and version conflicts are avoided.
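The jar swap described above can be sketched as follows (the default paths are assumptions based on the install locations used in this guide; adjust HADOOP_HOME and HBASE_HOME to your own layout):

```shell
# Assumed install locations; adjust to your layout.
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop-1.0.0}
HBASE_HOME=${HBASE_HOME:-/opt/hbase-0.90.4}

# Copy the Hadoop client jars that match the running cluster into HBase's lib/
for jar in "$HADOOP_HOME/hadoop-core-1.0.0.jar" \
           "$HADOOP_HOME/lib/commons-configuration-1.6.jar"; do
  if [ -f "$jar" ]; then
    cp "$jar" "$HBASE_HOME/lib/"
  else
    echo "not found: $jar (adjust HADOOP_HOME)"
  fi
done

# Remove the Hadoop jar bundled with HBase to avoid the version conflict
rm -f "$HBASE_HOME/lib/hadoop-core-0.20-append-r1056497.jar"
```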

5. Practice with the HBase shell

hbase(main):001:0> create 'test','data'   # create a table named 'test' with a single column family named 'data'
0 row(s) in 2.0960 seconds

hbase(main):002:0> list              # list all user-space tables, to verify the table was created
TABLE
test
1 row(s) in 0.0220 seconds
# insert three values into different rows and columns of the 'data' column family
hbase(main):003:0> put 'test','row1','data:1','value1'
0 row(s) in 0.2970 seconds

hbase(main):004:0> put 'test','row2','data:2','value2'
0 row(s) in 0.0120 seconds

hbase(main):005:0> put 'test','row3','data:3','value3'
0 row(s) in 0.0180 seconds

hbase(main):006:0> scan 'test'    # view the inserted data
ROW                   COLUMN+CELL
row1                 column=data:1, timestamp=1330923873719, value=value1
row2                 column=data:2, timestamp=1330923891483, value=value2
row3                 column=data:3, timestamp=1330923902702, value=value3
3 row(s) in 0.0590 seconds

hbase(main):007:0> disable 'test'    # disable table test (required before dropping it)
0 row(s) in 2.0610 seconds

hbase(main):008:0> drop 'test'      # drop table test
0 row(s) in 1.2120 seconds

hbase(main):009:0> list             # confirm table test was dropped
TABLE
0 row(s) in 0.0180 seconds

hbase(main):010:0> quit            # exit the HBase shell
6. Stop the HBase instance (stop HBase before stopping Hadoop, since HBase stores its data in HDFS):

$stop-hbase.sh
stopping hbase......
localhost: stopping zookeeper.
7. Inspect HDFS; you will find a new hbase directory under the root:

$ hadoop fs -ls /
Found 4 items
drwxr-xr-x   - hadoop supergroup          0 2012-03-05 13:05 /hbase   # directory created by HBase
drwxr-xr-x   - hadoop supergroup          0 2012-02-24 17:55 /home
drwxr-xr-x   - hadoop supergroup          0 2012-03-04 20:44 /tmp
drwxr-xr-x   - hadoop supergroup          0 2012-03-04 20:47 /user