
Adding a Node to a Hadoop Distributed Cluster

1. Install jdk1.6.0_26 (the Java environment) on the new node

scp /etc/profile newnodeip:/etc        # copy the environment variables to the new node (newnodeip = the new node's IP)
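
To confirm that the Java environment actually works on the new node, a quick check (assuming the JDK has already been unpacked there and /etc/profile exports JAVA_HOME and PATH) might be:

ssh newnodeip "source /etc/profile && java -version"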

2. Install Hadoop, HBase and ZooKeeper

yum install hadoop-0.20

yum install hadoop-0.20-tasktracker

yum install hadoop-0.20-namenode

yum install hadoop-0.20-datanode

yum install hadoop-hbase
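
A rough way to confirm the packages landed on the new node (these are CDH3-style RPM names, so rpm can list them); if ZooKeeper itself is also needed there, the matching CDH package should be hadoop-zookeeper:

rpm -qa | grep -E 'hadoop|hbase'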

3. Copy the configuration files from the master node to the new node

scp -r /etc/hadoop-0.20/conf newnodeip:/etc/hadoop-0.20

scp -r /etc/hbase/conf newnodeip:/etc/hbase

scp -r /etc/zookeeper/conf newnodeip:/etc/zookeeper
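
Optionally, verify on the new node that the copied configuration directories are in place, for example:

ssh newnodeip "ls /etc/hadoop-0.20/conf /etc/hbase/conf /etc/zookeeper/conf"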

4. Raise the maximum number of open files for the hdfs and hbase users:

vim /etc/security/limits.conf

hdfs            -       nofile  32768

hbase           -       nofile  32768
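
The new limits only take effect for fresh login sessions; one way to check them for the hdfs user (assuming pam_limits is active, as it is by default on RHEL/CentOS) is:

su - hdfs -c 'ulimit -n'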

5. Add DNS records for the new node on the master host

vim /var/named/hdfs.zone

$TTL    86400
@               IN  SOA hdfs. root.hdfs. (
                        200101111       ; serial
                        14400           ; refresh
                        3600            ; retry
                        604800          ; expire
                        86400 )         ; minimum TTL
master-hadoop   IN  A   192.168.5.249
slave1-hadoop   IN  A   192.168.5.201
hostname        IN  A   newip           ; the new node's hostname and IP
master-hbase    IN  A   192.168.5.249
slave1-hbase    IN  A   192.168.5.201
hostname        IN  A   newip           ; the new node's hostname and IP
@               IN  NS  ns.hdfs.

/etc/rc.d/init.d/named restart
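
Before relying on the new record it is worth validating the zone file and the lookup; named-checkzone ships with BIND and host with bind-utils, and hostname/newip below stand for the values added above:

named-checkzone hdfs /var/named/hdfs.zone

host hostname.hdfs 192.168.5.249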

6. Configure DNS resolution on the local machine

vi /etc/resolv.conf

search hdfs

domain hdfs

nameserver 192.168.5.249

nameserver 202.106.0.20
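
A quick resolution test from the node itself, using a name defined in the zone above, could be:

ping -c 1 master-hadoop.hdfs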


7. Add the new slave node to the master's configuration:

vim /etc/hadoop-0.20/conf/slaves

slave1-hadoop.hdfs

slave2-hadoop.hdfs

slave4-hadoop.hdfs

slave3-hadoop.hdfs

hostname.hdfs        (the new node's hostname)

vim /etc/hbase/conf/regionservers

slave1-hadoop.hdfs

slave2-hadoop.hdfs

slave4-hadoop.hdfs

slave3-hadoop.hdfs

hostname.hdfs        (the new node's hostname)

8. Set up passwordless SSH authentication

scp /home/hdfs/.ssh/* newip:/home/hdfs/.ssh
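
Copying the whole .ssh directory carries the hdfs user's key pair and authorized_keys, so a login from the master should now work without a password prompt; a simple test (newip again being the new node) is:

su - hdfs -c "ssh newip hostname"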

9. Start the services on the new node:

/usr/lib/hadoop-0.20/bin/hadoop-daemon.sh start datanode

/usr/lib/hadoop-0.20/bin/hadoop-daemon.sh start tasktracker
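
To confirm the node has actually joined, jps on the new node should show the DataNode and TaskTracker processes, and the HDFS report run on the master should list it (paths assume the same CDH layout as above):

jps

/usr/lib/hadoop-0.20/bin/hadoop dfsadmin -report
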
This article originally appeared on the blog "mary的博客"; please keep this attribution: http://marysee.blog.51cto.com/1000292/629421