Installing Hypertable on Hadoop
2009-03-26 16:27
Step 1
Install and start Hadoop.
Step 2
Make sure you have extended attributes enabled on the partition that
holds the root of the Hyperspace directory tree. This will be the same
disk volume that holds the installation (e.g. ~/hypertable), unless you
configured Hyperspace explicitly to point to somewhere else. This is
not necessary on Mac OS X, but generally is on Linux systems. To enable
extended attributes on Linux, you need to add the user_xattr property
to the relevant file systems in your /etc/fstab file. For example:
/dev/hda3 /home ext3 defaults,user_xattr 1 2
You can then remount the affected partitions as follows:
$ mount -o remount /home
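To confirm that extended attributes actually work on the remounted volume, you can try setting one and reading it back. This is just a sanity check; the setfattr/getfattr utilities come from the attr package, which may need to be installed separately on some distributions, and the path below assumes your home directory lives on the remounted partition:

```
# Set a user xattr on a scratch file and read it back.
$ touch ~/xattr-test
$ setfattr -n user.test -v hello ~/xattr-test
$ getfattr -n user.test ~/xattr-test
$ rm ~/xattr-test
```

If getfattr echoes the attribute back, user_xattr is in effect; an "Operation not supported" error means the mount option did not take.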
Step 3
Create the directory /hypertable in HDFS and make it writeable by all.
$ hadoop fs -mkdir /hypertable
$ hadoop fs -chmod 777 /hypertable
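You can then verify that the directory exists with the expected permissions. The listing below is illustrative; the user and group columns will reflect your own HDFS account:

```
$ hadoop fs -ls /
drwxrwxrwx   - user supergroup   0  /hypertable
```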
Step 4
Edit the config file conf/hypertable.cfg. Change the following property
to point to the Hadoop filesystem that you got up and running in Step 1
(assuming it is hdfs://motherlode001:9000):
HdfsBroker.fs.default.name=hdfs://motherlode001:9000
Change the following two properties to point to the machine that will
run the Hypertable Master and Hyperspace (assuming motherlode001):
Hyperspace.Master.Host=motherlode001
Hypertable.Master.Host=motherlode001
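Putting the changes together, the relevant portion of conf/hypertable.cfg would look something like this (the file contains many other properties, which are left at their defaults and omitted here):

```
# conf/hypertable.cfg (excerpt) -- only the properties changed in Step 4
HdfsBroker.fs.default.name=hdfs://motherlode001:9000
Hyperspace.Master.Host=motherlode001
Hypertable.Master.Host=motherlode001
```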
Step 5
Start the Master and Hyperspace:
$ ssh motherlode001 ~/hypertable/bin/start-master.sh hadoop
Step 6
Start the Range Servers. Add all of the names of the machines that will
be running range servers to the file conf/slaves relative to the
installation directory. Here’s what an example one looks like:
$ cat conf/slaves
motherlode001
motherlode002
motherlode003
motherlode004
motherlode005
motherlode006
motherlode007
motherlode008
This slaves file, along with the entire installation, should be pushed
out to all of the machines using something like rsync. Once you’ve done
this, you can launch all of the range servers with the following command
(assuming the installation directory is ~/hypertable on all of the
machines):
$ ./bin/slaves.sh ~/hypertable/bin/start-range-server.sh hadoop
Now you should be able to run the ~/hypertable/bin/hypertable HQL
command interpreter and start playing around.
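Once the interpreter is up, a short HQL session makes a good smoke test. The table and column names below are made up for illustration; the statements follow Hypertable's HQL syntax:

```
$ ~/hypertable/bin/hypertable
hypertable> CREATE TABLE Test ( col );
hypertable> INSERT INTO Test VALUES ("row1", "col", "hello");
hypertable> SELECT * FROM Test;
hypertable> quit;
```

If the SELECT returns the cell you just inserted, the Master, Hyperspace, and at least one Range Server are all talking to each other.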
Stopping the System
Execute the following commands to stop all of the servers:
$ ssh motherlode001 ~/hypertable/bin/kill-servers.sh
$ ./bin/slaves.sh ~/hypertable/bin/kill-servers.sh