
Hadoop 2.6 Pseudo-Distributed Configuration

2016-03-31 09:49
Software installation directory:

/opt/modules/

Installation:

0) Notes

Passwordless SSH login to the local machine

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
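
A quick way to confirm that passwordless login now works (a minimal check; the chmod lines are only needed if sshd rejects the key because of loose permissions):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh localhost    # should log in without asking for a password
exit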

1. OS: CentOS 6.4 (64-bit)

2. Disable the firewall and SELinux

service iptables status

service iptables stop

chkconfig iptables off

vi /etc/sysconfig/selinux

Set SELINUX=disabled
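
To verify that both are really off (setenforce 0 takes effect immediately, while SELINUX=disabled only applies after a reboot):

service iptables status    # the firewall should be reported as not running
chkconfig --list iptables  # every runlevel should show "off"
setenforce 0               # switch SELinux to permissive right away
getenforce                 # Permissive (or Disabled after the next reboot)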

3. Configure a static IP address

vi /etc/sysconfig/network-scripts/ifcfg-eth0
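
A sketch of a static configuration for this host, reusing the 192.168.48.128 address from the hosts entry below; the netmask, gateway and DNS values are assumptions for a 192.168.48.0/24 network and must match your own environment:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.48.128
NETMASK=255.255.255.0
GATEWAY=192.168.48.2
DNS1=192.168.48.2

Then restart the network service to apply it:

service network restart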

4. Change the hostname

hostname hadoop-yarn.dragon.org

vi /etc/sysconfig/network
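
The hostname command only renames the host for the current session; /etc/sysconfig/network makes the name permanent across reboots. The file should contain:

NETWORKING=yes
HOSTNAME=hadoop-yarn.dragon.org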

5. Bind the IP address to the hostname

vi /etc/hosts

Add the entry:

192.168.48.128 hadoop-yarn.dragon.org hadoop-yarn
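
To confirm the binding resolves correctly:

ping -c 1 hadoop-yarn.dragon.org   # should get replies from 192.168.48.128
hostname                           # should print hadoop-yarn.dragon.org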

6. Install the JDK

Installation directory: /opt/modules/jdk1.6.0_45

Package: jdk-6u45-linux-x64.bin

Commands:

chmod u+x jdk-6u45-linux-x64.bin

./jdk-6u45-linux-x64.bin

Set the environment variables:

vi /etc/profile

Add the following:

export JAVA_HOME=/opt/modules/jdk1.6.0_45

export PATH=$PATH:$JAVA_HOME/bin

Apply the change:

# source /etc/profile

Hadoop environment variables (/etc/profile)

Note: the remainder of this setup uses a JDK installed at /opt/modules/jdk1.7; point JAVA_HOME at the JDK you actually installed. Append to /etc/profile and source it again:

##JAVA

export JAVA_HOME=/opt/modules/jdk1.7

export PATH=$PATH:$JAVA_HOME/bin

##HADOOP

export HADOOP_HOME=/opt/modules/hadoop-2.6.0

export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
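
A quick check that both tools are now on the PATH (assuming the Hadoop 2.6.0 distribution has already been unpacked to /opt/modules/hadoop-2.6.0):

source /etc/profile
java -version      # prints the JDK version pointed to by JAVA_HOME
hadoop version     # prints Hadoop 2.6.0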

Hadoop configuration files ($HADOOP_HOME/etc/hadoop)

slaves

hadoop-master.dragon.org

Note: the hostname used in slaves and in the *-site.xml files below must resolve on this machine. If the host was named hadoop-yarn.dragon.org in step 4 above, either use that name here or add hadoop-master.dragon.org to /etc/hosts as well.

1. hadoop-env.sh

export JAVA_HOME=/opt/modules/jdk1.7

2. yarn-env.sh

export JAVA_HOME=/opt/modules/jdk1.7

3. mapred-env.sh

export JAVA_HOME=/opt/modules/jdk1.7

core-site.xml

<configuration>

<property>

<name>hadoop.tmp.dir</name>

<value>/opt/hadoopdata/tmp</value>

</property>

<property>

<name>fs.defaultFS</name>

<value>hdfs://hadoop-master.dragon.org:9000</value>

</property>

</configuration>

hdfs-site.xml

<configuration>

<property>

<name>dfs.replication</name>

<value>1</value>

</property>

<property>

<name>dfs.permissions</name>

<value>false</value>

</property>

<property>

<name>dfs.namenode.name.dir</name>

<value>/opt/hadoopdata/dfs/name</value>

</property>

<property>

<name>dfs.datanode.data.dir</name>

<value>/opt/hadoopdata/dfs/data</value>

</property>

</configuration>
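
The directories referenced in core-site.xml and hdfs-site.xml above do not exist yet; creating them up front (as the user that will run Hadoop) avoids permission and startup problems:

mkdir -p /opt/hadoopdata/tmp
mkdir -p /opt/hadoopdata/dfs/name
mkdir -p /opt/hadoopdata/dfs/data
# if Hadoop runs as a dedicated user (e.g. a hypothetical "hadoop" account):
# chown -R hadoop:hadoop /opt/hadoopdata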

yarn-site.xml

<configuration>

<property>

<name>yarn.resourcemanager.hostname</name>

<value>hadoop-master.dragon.org</value>

</property>

<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce_shuffle</value>

</property>

<!-- Site specific YARN configuration properties -->

</configuration>

mapred-site.xml (if the file does not exist yet, create it from the bundled template as sketched after this block)

<configuration>

<property>

<name>mapreduce.framework.name</name>

<value>yarn</value>

</property>

</configuration>
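
A sketch of creating mapred-site.xml from the template shipped with Hadoop 2.6.0, before editing it as above (run from $HADOOP_HOME):

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml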

4) Startup: start-dfs.sh, start-yarn.sh, mr-jobhistory-daemon.sh start historyserver

1. Start HDFS

NameNode、DataNode、SecondaryNameNode

* Format the NameNode

bin/hdfs namenode -format

-- generates a Cluster ID

Extras:

* Specify a Cluster ID

bin/hdfs namenode -format -clusterid yarn-cluster

* Block Pool ID (the ID of the data block pool)

* NameNode Federation
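
Both IDs can be inspected after formatting; with the dfs.namenode.name.dir configured above they are recorded in the VERSION file:

cat /opt/hadoopdata/dfs/name/current/VERSION
# contains lines such as clusterID=... and blockpoolID=...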

* Start the NameNode

sbin/hadoop-daemon.sh start namenode

* Start the DataNode

sbin/hadoop-daemon.sh start datanode

* Start the SecondaryNameNode

sbin/hadoop-daemon.sh start secondarynamenode
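
With the three daemons up, jps should list them, and a small round trip against HDFS confirms the filesystem works (run from $HADOOP_HOME like the other commands in this section; the path /user/test is just an example):

jps                                      # NameNode, DataNode, SecondaryNameNode
bin/hdfs dfs -mkdir -p /user/test
bin/hdfs dfs -put etc/hadoop/core-site.xml /user/test
bin/hdfs dfs -ls /user/test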

2. Start YARN

ResourceManager、NodeManager

* Start the ResourceManager

sbin/yarn-daemon.sh start resourcemanager

* Start the NodeManager

sbin/yarn-daemon.sh start nodemanager
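
Once YARN is running (and the JobHistory Server has been started with the command listed under step 4) above), a bundled example job gives a simple end-to-end check; the web UIs below use Hadoop 2.6's default ports:

sbin/mr-jobhistory-daemon.sh start historyserver
jps    # ResourceManager, NodeManager, JobHistoryServer, plus the HDFS daemons
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10
# Default web UIs:
#   NameNode          http://hadoop-master.dragon.org:50070
#   ResourceManager   http://hadoop-master.dragon.org:8088
#   JobHistory Server http://hadoop-master.dragon.org:19888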