
Installing and Configuring Hadoop 2.2.0

2014-02-17 21:57
I set up a Hadoop 2.2.0 environment following articles found online. The details are recorded below for my own future reference.

Environment:

I used two laptops, each running a Fedora 10 system installed under VMware.

VM 1: IP 192.168.1.105, hostname: cloud001, user: root

VM 2: IP 192.168.1.106, hostname: cloud002, user: root

Preparation:

1. Edit /etc/hosts on both machines and add the following two lines:

192.168.1.105 cloud001

192.168.1.106 cloud002
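
A quick sanity check that name resolution works (run from cloud001; the reverse check from cloud002 is symmetric):

ping -c 1 cloud002   # should resolve to 192.168.1.106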

2. Stop the iptables service: service iptables stop
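
Note that service iptables stop only lasts until the next reboot. On Fedora you can also disable the service permanently:

chkconfig iptables off   # keep the firewall disabled across reboots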

3. Install the JDK (jdk-6u45-linux-i586.bin) by running the self-extracting binary in /opt.

Edit /etc/profile and add:

export JAVA_HOME=/opt/jdk1.6.0_45

export CLASSPATH=.:$JAVA_HOME/lib/tools.jar

export PATH=$JAVA_HOME/bin:$PATH

Then run source /etc/profile.

You can use the env command to verify that the environment variables are set correctly.
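
A more direct check is to query the JDK itself:

java -version    # should print java version "1.6.0_45"
javac -version   # confirms the compiler is on the PATH too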

4. Set up passwordless SSH login. This step is important.

1) On both machines, run ssh-keygen -t rsa to generate a key pair: a private key (id_rsa) and a public key (id_rsa.pub).

2) Copy the public key on 192.168.1.105 (from within /root/.ssh) into /root/.ssh/ on 192.168.1.106:

scp ./id_rsa.pub root@192.168.1.106:/root/.ssh/authorized_keys

3) Copy the public key on 192.168.1.106 into /root/.ssh/ on 192.168.1.105:

scp ./id_rsa.pub root@192.168.1.105:/root/.ssh/authorized_keys

4) On both machines, cd into /root/.ssh and run cat id_rsa.pub >> authorized_keys, so that each machine's authorized_keys ends up containing both public keys.

5) Once this is done, ssh cloud001 and ssh cloud002 should both log in without a password from either machine.
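
If SSH still prompts for a password after this, the usual culprit is file permissions: sshd silently ignores keys when .ssh or authorized_keys is too open. Tightening them on both machines generally fixes it:

chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys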

Configuring Hadoop 2.2.0

1. Download hadoop-2.2.0.tar.gz (I used the 32-bit build) and extract it into /opt on both machines.
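
Assuming the tarball was downloaded to /opt, extraction is just:

cd /opt
tar -xzf hadoop-2.2.0.tar.gz   # produces /opt/hadoop-2.2.0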

2. Go to /opt/hadoop-2.2.0/etc/hadoop and edit hadoop-env.sh. It is safest to do this (and step 3) on both machines, since the env scripts are not included in the copy script in step 9:

export JAVA_HOME=/opt/jdk1.6.0_45

3. On 192.168.1.105, edit yarn-env.sh:

export JAVA_HOME=/opt/jdk1.6.0_45

4. On 192.168.1.105, edit core-site.xml. Note that every XML file must begin with the <?xml version="1.0"?> header, and that any directory referenced in the configuration must actually exist; create it beforehand if it doesn't (the mkdir sketch after the XML below covers this).

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://cloud001:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-2.2.0/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.groups</name>
    <value>*</value>
  </property>
</configuration>
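
Per the hadoop.tmp.dir value above, the directory has to exist before the NameNode is formatted; the same applies on cloud002 once the configuration is copied over in step 9. Creating the DFS subdirectories explicitly as well does no harm:

mkdir -p /opt/hadoop-2.2.0/tmp/dfs/name /opt/hadoop-2.2.0/tmp/dfs/data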


5. On 192.168.1.105, edit mapred-site.xml.
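
In the 2.2.0 distribution this file ships only as a template, so create it first:

cd /opt/hadoop-2.2.0/etc/hadoop
cp mapred-site.xml.template mapred-site.xml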

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>cloud001:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>cloud001:19888</value>
  </property>
</configuration>

6. On 192.168.1.105, edit hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>cloud001:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/hadoop-2.2.0/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/hadoop-2.2.0/tmp/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>

7. On 192.168.1.105, edit yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>cloud001:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>cloud001:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>cloud001:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>cloud001:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>cloud001:8088</value>
  </property>
</configuration>

8. On 192.168.1.105, edit the slaves file and add one line:

cloud002

9. On 192.168.1.105, create a script Sc2slave.sh (the name is arbitrary; vim Sc2slave.sh) to copy the configuration files over to 192.168.1.106:

scp /opt/hadoop-2.2.0/etc/hadoop/slaves root@cloud002:/opt/hadoop-2.2.0/etc/hadoop/slaves
scp /opt/hadoop-2.2.0/etc/hadoop/core-site.xml root@cloud002:/opt/hadoop-2.2.0/etc/hadoop/core-site.xml
scp /opt/hadoop-2.2.0/etc/hadoop/hdfs-site.xml root@cloud002:/opt/hadoop-2.2.0/etc/hadoop/hdfs-site.xml
scp /opt/hadoop-2.2.0/etc/hadoop/mapred-site.xml root@cloud002:/opt/hadoop-2.2.0/etc/hadoop/mapred-site.xml
scp /opt/hadoop-2.2.0/etc/hadoop/yarn-site.xml root@cloud002:/opt/hadoop-2.2.0/etc/hadoop/yarn-site.xml
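
Equivalently, the five copies can be written as a loop; this is just a more compact sketch of the same script:

for f in slaves core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
    scp /opt/hadoop-2.2.0/etc/hadoop/$f root@cloud002:/opt/hadoop-2.2.0/etc/hadoop/$f
done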

10. On 192.168.1.105, run Sc2slave.sh (e.g. sh Sc2slave.sh) to copy the configuration files to 192.168.1.106.

11. On 192.168.1.105, edit /etc/profile so that hadoop and related commands can be run directly from the command line:

export HADOOP_HOME=/opt/hadoop-2.2.0

export PATH=.:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$PATH
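
As with the JDK setup, re-source the profile and confirm the binaries are now on the PATH (hadoop version is a standard subcommand):

source /etc/profile
hadoop version   # should report Hadoop 2.2.0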

12. Run hadoop namenode -format. (In 2.x this form is deprecated; hdfs namenode -format is the preferred equivalent.)

13. Run the start-all.sh script in /opt/hadoop-2.2.0/sbin.
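
start-all.sh also prints a deprecation notice in 2.x; the equivalent two-step form is to start HDFS and YARN separately:

/opt/hadoop-2.2.0/sbin/start-dfs.sh    # NameNode, SecondaryNameNode, DataNodes
/opt/hadoop-2.2.0/sbin/start-yarn.sh   # ResourceManager, NodeManagers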

14. Once startup completes, jps on 192.168.1.105 shows:

10531 Jps

9444 SecondaryNameNode

9579 ResourceManager

9282 NameNode

And jps on 192.168.1.106 shows:

4463 DataNode

4941 Jps

4535 NodeManager

15. On 192.168.1.105, run hdfs dfsadmin -report, which displays:

Configured Capacity: 13460701184 (12.54 GB)

Present Capacity: 5762686976 (5.37 GB)

DFS Remaining: 5762662400 (5.37 GB)

DFS Used: 24576 (24 KB)

DFS Used%: 0.00%

Under replicated blocks: 0

Blocks with corrupt replicas: 0

Missing blocks: 0

-------------------------------------------------

Datanodes available: 1 (1 total, 0 dead)

Live datanodes:

Name: 192.168.1.106:50010 (cloud002)

Hostname: localhost

Decommission Status : Normal

Configured Capacity: 13460701184 (12.54 GB)

DFS Used: 24576 (24 KB)

Non DFS Used: 7698014208 (7.17 GB)

DFS Remaining: 5762662400 (5.37 GB)

DFS Used%: 0.00%

DFS Remaining%: 42.81%

Last contact: Mon Feb 17 05:52:18 PST 2014

The whole setup went fairly smoothly. The main things to watch are that the XML configuration files contain no mistakes, and that if something goes wrong you can check the logs to see where the problem is; with the steps above, Hadoop's logs end up under /opt/hadoop-2.2.0/logs.
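
As an end-to-end smoke test, the examples jar bundled with the 2.2.0 tarball can run a small MapReduce job:

cd /opt/hadoop-2.2.0
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 10   # estimate pi with 2 maps, 10 samples each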

Hadoop appears to be configured successfully, so it's on to the next stage of learning!