Big Data IMF Legendary Action: How to Build an 8-Node Hadoop Distributed Cluster
2016-02-07 14:28
Hardware: Huawei RH2285 servers
2 CPUs, 8 cores / 16 threads
48 GB RAM
380 GB disk
1. Configure Hadoop's global environment variables
Run # vi /etc/profile to open the profile file, press i to enter insert mode, append HADOOP_HOME and extend PATH at the end of the file, then type :wq! to save and exit.
export HADOOP_HOME=/usr/local/hadoop-2.6.0
export PATH=.:$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$SCALA_HOME/bin
2. On the command line, run source /etc/profile so that the HADOOP_HOME and PATH changes take effect.
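To confirm the variables are now visible in the current shell, a quick check (not part of the original steps) is:
source /etc/profile
echo $HADOOP_HOME        # should print /usr/local/hadoop-2.6.0
hadoop version           # should report Hadoop 2.6.0 once $HADOOP_HOME/bin is on PATH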
3. Edit the hadoop-env.sh configuration file
export JAVA_HOME=/usr/local/jdk1.8.0_60
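A minimal sketch for making this edit non-interactively, assuming the stock hadoop-env.sh shipped in the 2.6.0 tarball (which already contains an export JAVA_HOME= line):
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/local/jdk1.8.0_60|' /usr/local/hadoop-2.6.0/etc/hadoop/hadoop-env.sh
grep JAVA_HOME /usr/local/hadoop-2.6.0/etc/hadoop/hadoop-env.sh   # verify the new value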
4. Edit the core-site.xml configuration file
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop-2.6.0/tmp</value>
<description>hadoop.tmp.dir</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
5. Edit the hdfs-site.xml configuration file
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/hadoop-2.6.0/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/hadoop-2.6.0/tmp/dfs/data</value>
</property>
</configuration>
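The name and data directories configured above can be created ahead of time on the master (they are then included when the hadoop-2.6.0 directory is copied to the workers in step 7); an optional sketch:
mkdir -p /usr/local/hadoop-2.6.0/tmp/dfs/name
mkdir -p /usr/local/hadoop-2.6.0/tmp/dfs/data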
6. Edit the slaves file
root@master:/usr/local/hadoop-2.6.0/etc/hadoop# cat slaves
worker1
worker2
worker3
worker4
worker5
worker6
worker7
worker8
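The hostnames in slaves (and master) must be resolvable on every node. Based on the 192.168.189.x addresses that appear later in this post, the /etc/hosts mapping on each machine would look roughly like the following (the exact mapping is an assumption; verify it against your own network):
192.168.189.1 master
192.168.189.2 worker1
192.168.189.3 worker2
192.168.189.4 worker3
192.168.189.5 worker4
192.168.189.6 worker5
192.168.189.7 worker6
192.168.189.8 worker7
192.168.189.9 worker8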
7. Distribute Hadoop to the worker nodes
root@master:/usr/local# cd setup_scripts
root@master:/usr/local/setup_scripts# ls
host_scp.sh ssh_config.sh ssh_scp.sh
root@master:/usr/local/setup_scripts#
root@master:/usr/local/setup_scripts# cat hadoop_scp.sh
#!/bin/sh
for i in 2 3 4 5 6 7 8 9
do
scp -rq /etc/profile root@192.168.189.$i:/etc/profile
ssh root@192.168.189.$i source /etc/profile
scp -rq /usr/local/hadoop-2.6.0 root@192.168.189.$i:/usr/local/hadoop-2.6.0
done
root@master:/usr/local/setup_scripts#
8. Run the distribution script and wait for it to finish.
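Note that the ssh root@192.168.189.$i source /etc/profile line only affects that single remote session; the copied /etc/profile takes effect on the workers at their next login. Once the loop has finished, a quick sanity check (not in the original script) confirms every worker received the files:
for i in 2 3 4 5 6 7 8 9
do
ssh root@192.168.189.$i "ls -d /usr/local/hadoop-2.6.0 && grep HADOOP_HOME /etc/profile"
done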
9. Format HDFS with hdfs namenode -format
root@master:/usr/local/setup_scripts# cd /usr/local/hadoop-2.6.0/bin
root@master:/usr/local/hadoop-2.6.0/bin# hdfs namenode -format
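If the format succeeds, the dfs.namenode.name.dir configured earlier is populated with a current/ directory; a quick check (a sketch based on the path in hdfs-site.xml above):
ls /usr/local/hadoop-2.6.0/tmp/dfs/name/current/
cat /usr/local/hadoop-2.6.0/tmp/dfs/name/current/VERSION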
10. Start the cluster
root@master:/usr/local/hadoop-2.6.0/sbin# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: Warning: Permanently added 'master,192.168.189.1' (ECDSA) to the list of known hosts.
master: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-namenode-master.out
worker6: Warning: Permanently added 'worker6' (ECDSA) to the list of known hosts.
worker7: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker7.out
worker6: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker6.out
worker5: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker5.out
worker4: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker4.out
worker3: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker3.out
worker8: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker8.out
worker2: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker2.out
worker1: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-resourcemanager-master.out
worker1: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker1.out
worker2: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker2.out
worker5: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker5.out
worker7: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker7.out
worker6: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker6.out
worker4: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker4.out
worker3: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker3.out
worker8: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-worker8.out
root@master:/usr/local/hadoop-2.6.0/sbin#
root@master:/usr/local/hadoop-2.6.0/sbin# jps
5378 NameNode
5608 SecondaryNameNode
6009 Jps
5742 ResourceManager
root@worker1:/usr/local# jps
3866 DataNode
4077 Jps
3950 NodeManager
root@worker1:/usr/local#
root@worker7:/usr/local# jps
3750 NodeManager
3656 DataNode
3865 Jps
root@worker7:/usr/local#
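Instead of logging into each worker by hand, the same jps check can be driven from the master in one loop; a sketch that reuses the IP range from the distribution script and the JDK path set in hadoop-env.sh (non-interactive SSH sessions do not source /etc/profile, hence the full path to jps):
for i in 2 3 4 5 6 7 8 9
do
echo "=== 192.168.189.$i ==="
ssh root@192.168.189.$i "/usr/local/jdk1.8.0_60/bin/jps | grep -E 'DataNode|NodeManager'"
done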
11. Check the HDFS web UI
Open http://192.168.189.1:50070 in a browser; the Datanode tab is at:
http://192.168.189.1:50070/dfshealth.html#tab-datanode
Datanode Information
In operation

Node                          | Last contact | Admin State | Capacity | Used  | Non DFS Used | Remaining | Blocks | Block pool used | Failed Volumes | Version
worker6 (192.168.189.7:50010) | 2            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.26 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
worker7 (192.168.189.8:50010) | 1            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.26 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
worker8 (192.168.189.9:50010) | 1            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.26 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
worker1 (192.168.189.2:50010) | 2            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.26 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
worker2 (192.168.189.3:50010) | 1            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.26 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
worker3 (192.168.189.4:50010) | 2            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.26 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
worker4 (192.168.189.5:50010) | 0            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.26 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0
worker5 (192.168.189.6:50010) | 1            | In Service  | 17.45 GB | 24 KB | 6.19 GB      | 11.25 GB  | 0      | 24 KB (0%)      | 0              | 2.6.0

Decommissioning
Overview 'master:9000' (active)
Started: Sun Feb 07 14:17:41 CST 2016
Version: 2.6.0, re3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled: 2014-11-13T21:10Z by jenkins from (detached from e349649)
Cluster ID: CID-f4efbd54-7685-450e-b119-5932052252ff
Block Pool ID: BP-367257699-192.168.189.1-1454825792055
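The same datanode summary can also be pulled from the command line with the HDFS admin tool; a minimal sketch:
hdfs dfsadmin -report | grep -E 'Live datanodes|Name:|Hostname:'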