Adding a Node to a Hadoop Distributed Cluster
2011-08-02 15:36
1. Install jdk1.6.0_26 (the Java environment), then push the updated /etc/profile to the new node (newnodeip is a placeholder for the new node's IP):
scp /etc/profile newnodeip:/etc
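The /etc/profile being copied presumably carries the Java settings for the cluster; a minimal sketch of what those lines might look like (the install prefix /usr/java/jdk1.6.0_26 is an assumption inferred from the JDK version above, not taken from the original):

```shell
# Hypothetical Java environment lines appended to /etc/profile.
# The install prefix /usr/java/jdk1.6.0_26 is an assumption.
export JAVA_HOME=/usr/java/jdk1.6.0_26
export PATH=$JAVA_HOME/bin:$PATH
echo "$JAVA_HOME"
```

After copying the file, run `source /etc/profile` on the new node so the settings take effect in the current session.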
2. Install Hadoop, HBase, and ZooKeeper:
yum install hadoop-0.20
yum install hadoop-0.20-tasktracker
yum install hadoop-0.20-namenode
yum install hadoop-0.20-datanode
yum install hadoop-hbase
3. Copy the configuration files from the master to the new node:
scp -r /etc/hadoop-0.20/conf newnodeip:/etc/hadoop-0.20
scp -r /etc/hbase/conf newnodeip:/etc/hbase
scp -r /etc/zookeeper/conf newnodeip:/etc/zookeeper
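The three copies above follow one pattern, so they can be scripted; a dry-run sketch that only prints each command (replace the echo with the real scp once the new node's IP is substituted for the newnodeip placeholder):

```shell
# Dry-run: print the scp command for each service's conf directory.
# NEWNODE is a placeholder for the new node's IP.
NEWNODE=newnodeip
for d in hadoop-0.20 hbase zookeeper; do
  echo scp -r "/etc/$d/conf" "$NEWNODE:/etc/$d"
done
```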
4. Raise the maximum number of open files for the service users:
vim /etc/security/limits.conf
hdfs - nofile 32768
hbase - nofile 32768
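A quick sanity check that both nofile lines made it into the file; the sketch below greps a sample copy so it is self-contained, but on the real host you would point grep at /etc/security/limits.conf:

```shell
# Write the two expected lines to a sample file, then count matches.
cat > /tmp/limits.sample <<'EOF'
hdfs - nofile 32768
hbase - nofile 32768
EOF
grep -c 'nofile 32768' /tmp/limits.sample   # prints 2
```

Note that limits.conf changes only apply to new login sessions, so restart the hdfs/hbase sessions (or the daemons) afterwards.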
5. Add DNS records for the new node on the master host:
vim /var/named/hdfs.zone
$TTL 86400
@ IN SOA hdfs. root. (
        200101111
        14400
        3600
        604800
        86400 )
master-hadoop IN A 192.168.5.249
slave1-hadoop IN A 192.168.5.201
hostname IN A newip ; placeholder: the new node's hostname and IP
master-hbase IN A 192.168.5.249
slave1-hbase IN A 192.168.5.201
hostname IN A newip ; placeholder: the new node's hostname and IP
@ IN NS ns.hdfs.
/etc/rc.d/init.d/named restart
6. Configure DNS resolution on the local machine (note that search and domain are mutually exclusive in resolv.conf; whichever appears last takes effect):
vi /etc/resolv.conf
search hdfs
domain hdfs
nameserver 192.168.5.249
nameserver 202.106.0.20
7. On the master host, add the new slave to the node lists:
vim /etc/hadoop-0.20/conf/slaves
slave1-hadoop.hdfs
slave2-hadoop.hdfs
slave4-hadoop.hdfs
slave3-hadoop.hdfs
hostname (placeholder: append the new node's hostname here)
vim /etc/hbase/conf/regionservers
slave1-hadoop.hdfs
slave2-hadoop.hdfs
slave4-hadoop.hdfs
slave3-hadoop.hdfs
hostname (placeholder: append the new node's hostname here)
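The same hostname has to land in both membership files; a small sketch that appends a hypothetical new node name to scratch copies and confirms it appears once in each (on the master, the real files are the slaves and regionservers files edited above):

```shell
# Append a placeholder hostname to scratch copies of the two membership
# files and verify it appears exactly once in each.
NEWHOST=slave5-hadoop.hdfs   # hypothetical name for the new node
for f in /tmp/slaves /tmp/regionservers; do
  : > "$f"                   # start from an empty scratch file
  echo "$NEWHOST" >> "$f"
  grep -c "$NEWHOST" "$f"    # prints 1 for each file
done
```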
8. Set up passwordless SSH from the master to the new node (newip is a placeholder for the new node's IP):
scp /home/hdfs/.ssh/* newip:/home/hdfs/.ssh
9. Start the daemons on the new node:
/usr/lib/hadoop-0.20/bin/hadoop-daemon.sh start datanode
/usr/lib/hadoop-0.20/bin/hadoop-daemon.sh start tasktracker

This article is from "mary's blog"; please retain this attribution: http://marysee.blog.51cto.com/1000292/629420
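To confirm the two daemons came up, run `jps` on the new node; the sketch below shows what success looks like using a canned sample of jps output (the PIDs are made up for illustration):

```shell
# Sample jps output from a healthy new node (PIDs are illustrative).
JPS_OUTPUT='12345 DataNode
12390 TaskTracker
12044 Jps'
# Both new daemons present -> count is 2.
echo "$JPS_OUTPUT" | grep -c -E 'DataNode|TaskTracker'   # prints 2
```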