
Installing and Configuring a Hadoop 2.4.1 Cluster on CentOS (Simplified Version)


Installation and Configuration

1. Software Downloads

JDK download: jdk-7u65-linux-i586.tar.gz

http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

Hadoop download: hadoop-2.4.1.tar.gz

http://www.apache.org/dyn/closer.cgi/hadoop/common/
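
Both archives can be unpacked into the locations assumed by the /etc/profile settings in step 3 (/usr/java for the JDK, /usr/hadoop for Hadoop). A minimal sketch, run as root on each node:

# extract the JDK to /usr/java (the archive unpacks to jdk1.7.0_65)
mkdir -p /usr/java
tar -zxf jdk-7u65-linux-i586.tar.gz -C /usr/java
# extract Hadoop and rename it to the path used as HADOOP_HOME below
tar -zxf hadoop-2.4.1.tar.gz -C /usr
mv /usr/hadoop-2.4.1 /usr/hadoop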

2. /etc/hosts Configuration


127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.2 Master.Hadoop

192.168.1.3 Slave1.Hadoop
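
The same host entries must exist on every node in the cluster. A quick sketch for pushing the file to the slave and checking name resolution (assumes root SSH access to Slave1.Hadoop):

scp /etc/hosts root@Slave1.Hadoop:/etc/hosts
ping -c 3 Slave1.Hadoop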

3. /etc/profile Configuration


export JAVA_HOME=/usr/java/jdk1.7.0_65

export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin

export HADOOP_HOME=/usr/hadoop

export HADOOP_HOME_WARN_SUPPRESS=1

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
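
After editing /etc/profile, reload it and confirm that both the JDK and Hadoop are visible on the PATH:

source /etc/profile
java -version      # should print the installed JDK version
hadoop version     # should print Hadoop 2.4.1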

4. $HADOOP_HOME/etc/hadoop/core-site.xml Configuration


<configuration>

<property>

<name>fs.defaultFS</name>

<value>hdfs://Master.Hadoop:9000</value>

<description>

Where to find the Hadoop filesystem over the network.

Note that 9000 is not the default NameNode RPC port (8020 is).

(In Hadoop 2.x this property is named fs.defaultFS; earlier releases used fs.default.name.)

</description>

</property>

<property>

<name>hadoop.tmp.dir</name>

<value>/usr/hadoop/tmp</value>

</property>

</configuration>
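
hadoop.tmp.dir points at /usr/hadoop/tmp; creating the directory up front on each node (a common convention, not strictly required) avoids surprises when formatting HDFS:

mkdir -p /usr/hadoop/tmp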

5. $HADOOP_HOME/etc/hadoop/mapred-site.xml Configuration


<configuration>

<property>

<name>mapreduce.framework.name</name>

<value>yarn</value>

</property>

</configuration>
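
The 2.4.1 distribution ships only a template for this file, so mapred-site.xml may need to be created first; a sketch assuming HADOOP_HOME=/usr/hadoop:

cd /usr/hadoop/etc/hadoop
cp mapred-site.xml.template mapred-site.xml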

6. $HADOOP_HOME/etc/hadoop/yarn-site.xml Configuration


<configuration>

<property>

<name>yarn.resourcemanager.scheduler.address</name>

<value>Master.Hadoop:8030</value>

</property>

<property>

<name>yarn.resourcemanager.resource-tracker.address</name>

<value>Master.Hadoop:8031</value>

</property>

<property>

<name>yarn.resourcemanager.address</name>

<value>Master.Hadoop:8032</value>

</property>

<property>

<name>yarn.resourcemanager.admin.address</name>

<value>Master.Hadoop:8033</value>

</property>

<property>

<name>yarn.resourcemanager.webapp.address</name>

<value>Master.Hadoop:8088</value>

</property>

<property>

<name>yarn.resourcemanager.webapp.https.address</name>

<value>Master.Hadoop:8090</value>

</property>

<property>

<name>yarn.nodemanager.local-dirs</name>

<value>${hadoop.tmp.dir}/nodemanager/local</value>

<description>the local directories used by the nodemanager</description>

</property>

<property>

<name>yarn.nodemanager.remote-app-log-dir</name>

<value>${hadoop.tmp.dir}/nodemanager/remote</value>

<description>directory on hdfs where the application logs are moved to </description>

</property>

<property>

<name>yarn.nodemanager.log-dirs</name>

<value>${hadoop.tmp.dir}/nodemanager/logs</value>

<description>the directories used by Nodemanagers as log directories</description>

</property>

<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce_shuffle</value>

<description>shuffle service that needs to be set for Map Reduce to run </description>

</property>

<property>

<name>mapreduce.jobhistory.address</name>

<value>Master.Hadoop:10020</value>

</property>

<property>

<name>mapreduce.jobhistory.webapp.address</name>

<value>Master.Hadoop:19888</value>

</property>

</configuration>
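
The last two properties above (mapreduce.jobhistory.*) belong to the MapReduce JobHistory server and are more commonly placed in mapred-site.xml. The history server is not started by the regular start scripts; if job history is needed, it can be launched separately on the master:

mr-jobhistory-daemon.sh start historyserver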

7. $HADOOP_HOME/etc/hadoop/hdfs-site.xml Configuration


<configuration>

<property>

<name>dfs.permissions.superusergroup</name>

<value>root</value>

</property>

<property>

<name>dfs.replication</name>

<value>1</value>

</property>

</configuration>
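
The master also needs to know which hosts run the DataNode and NodeManager daemons, and the finished configuration must be identical on all nodes. A sketch of those remaining steps, using the paths assumed above:

echo "Slave1.Hadoop" > /usr/hadoop/etc/hadoop/slaves
scp /usr/hadoop/etc/hadoop/* root@Slave1.Hadoop:/usr/hadoop/etc/hadoop/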

Startup and Verification

1. Format the HDFS Filesystem

hadoop namenode -format
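
hadoop namenode -format still works in 2.4.1 but prints a deprecation warning; the 2.x equivalent is:

hdfs namenode -format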

2. Start Hadoop

Before starting, stop the firewall on every machine in the cluster:

service iptables stop

Startup command:

start-all.sh
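
start-all.sh is likewise deprecated in Hadoop 2.x; starting HDFS and YARN separately is the recommended equivalent and makes failures easier to localize:

start-dfs.sh
start-yarn.sh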

3. Verify Hadoop

Method 1: jps

Method 2: hadoop dfsadmin -report
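
With this configuration, jps on the master normally shows NameNode, SecondaryNameNode and ResourceManager, and on the slave DataNode and NodeManager; hadoop dfsadmin -report (deprecated in 2.x in favor of hdfs dfsadmin -report) should list one live datanode. The web UIs are a third quick check:

http://Master.Hadoop:50070    # NameNode status (default HDFS web port)
http://Master.Hadoop:8088     # ResourceManager (yarn.resourcemanager.webapp.address above)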