
Deploying Hadoop 2.0 in a Single-Node Environment

2014-01-22 22:17
Step 1: Place the installation package hadoop-2.2.0.tar.gz in a directory of your choice and extract it.
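For example (a sketch, not the author's exact commands; the tarball location and install directory are assumptions, so adjust them to your machine — the existence guard just keeps the sketch harmless where the tarball is absent):

```shell
# Extract the Hadoop tarball and export the usual environment variables.
TARBALL=hadoop-2.2.0.tar.gz          # assumed to be in the current directory
INSTALL_DIR="$HOME"                  # any writable directory works
[ -f "$TARBALL" ] && tar -xzf "$TARBALL" -C "$INSTALL_DIR"
export HADOOP_HOME="$INSTALL_DIR/hadoop-2.2.0"
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
echo "HADOOP_HOME=$HADOOP_HOME"
```

All later commands in this article are run from inside the extracted directory, so exporting HADOOP_HOME is optional but convenient.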

Step 2: Edit the XML configuration files under etc/hadoop in the extracted directory (create any file that does not exist).

hadoop-env.sh — set the JDK path:

export JAVA_HOME=/usr/java/jdk1.6.0_12

slaves — change it or leave it as-is; if you do change it, add the corresponding entry to /etc/hosts (this article does):

YARN001

If this is the local machine, the default is localhost and no change is needed.

For the XML files below, add the configuration items between the <configuration> and </configuration> tags.

mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

core-site.xml (YARN001 is a host name set in /etc/hosts; if you have not set one, use localhost instead)

<property>
  <name>fs.default.name</name>
  <value>hdfs://YARN001:8020</value>
</property>

yarn-site.xml

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/usr/local/hadoop2/name</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/usr/local/hadoop2/data</value>
</property>
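The dfs.namenode.name.dir and dfs.datanode.data.dir paths must exist and be writable by the user running the daemons. A minimal sketch (using $HOME here so it runs without root; substitute /usr/local/hadoop2 to match the config above):

```shell
# Create the NameNode and DataNode storage dirs referenced in hdfs-site.xml.
# PREFIX is $HOME here so the sketch needs no root; the config above uses
# /usr/local/hadoop2 instead.
PREFIX="$HOME/hadoop2"
mkdir -p "$PREFIX/name" "$PREFIX/data"
chmod 755 "$PREFIX/name" "$PREFIX/data"
ls -d "$PREFIX/name" "$PREFIX/data"
```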

Step 3: Run the following commands one by one.

bin/hadoop namenode -format

sbin/hadoop-daemon.sh start namenode

sbin/hadoop-daemon.sh start datanode

jps    # check that NameNode and DataNode are running

Check the NameNode web UI: http://yarn001:50070/

sbin/yarn-daemon.sh start resourcemanager

sbin/yarn-daemon.sh start nodemanager

or, in one step:

sbin/start-yarn.sh    # start YARN

Check the ResourceManager web UI: http://yarn001:8088/

sbin/stop-yarn.sh    # stop YARN

Run a test job:

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 100

Starting with start-yarn.sh and start-dfs.sh

[user@fakeDistnode hadoop-2.2.0]$ sbin/start-yarn.sh

starting yarn daemons

starting resourcemanager, logging to /usr/local/hadoop-2.2.0/logs/yarn-user-resourcemanager-fakeDistnode.out

localhost: starting nodemanager, logging to /usr/local/hadoop-2.2.0/logs/yarn-user-nodemanager-fakeDistnode.out

[user@fakeDistnode hadoop-2.2.0]$ jps

3474 NameNode

3865 NodeManager

3761 ResourceManager

4059 Jps

3591 DataNode

[user@fakeDistnode hadoop-2.2.0]$ sbin/stop-yarn.sh

stopping yarn daemons

stopping resourcemanager

localhost: stopping nodemanager

no proxyserver to stop

[user@fakeDistnode hadoop-2.2.0]$ sbin/stop-dfs.sh

Stopping namenodes on [YARN001]

The authenticity of host 'yarn001 (127.0.0.1)' can't be established.

RSA key fingerprint is 86:8e:c7:42:6b:df:ea:76:e9:42:34:a6:b5:0f:96:96.

Are you sure you want to continue connecting (yes/no)? yes

YARN001: Warning: Permanently added 'yarn001' (RSA) to the list of known hosts.

YARN001: stopping namenode

localhost: stopping datanode

Stopping secondary namenodes [0.0.0.0]

The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.

RSA key fingerprint is 86:8e:c7:42:6b:df:ea:76:e9:42:34:a6:b5:0f:96:96.

Are you sure you want to continue connecting (yes/no)? yes

0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.

0.0.0.0: no secondarynamenode to stop


If a particular daemon fails to start, you can start it individually like this; for the namenode, just change datanode below to namenode:

[user@fakeDistnode hadoop-2.2.0]$ sbin/hadoop-daemon.sh start datanode

starting datanode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-user-datanode-fakeDistnode.out

[user@fakeDistnode hadoop-2.2.0]$ jps

3474 NameNode

3591 DataNode

3632 Jps
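The same per-daemon pattern covers all four daemons. A dry-run sketch that prints the commands instead of executing them (drop the echo and run from the Hadoop directory to actually start them):

```shell
# Print the individual start commands for each HDFS and YARN daemon.
start_cmds() {
  for d in namenode datanode; do
    echo "sbin/hadoop-daemon.sh start $d"
  done
  for d in resourcemanager nodemanager; do
    echo "sbin/yarn-daemon.sh start $d"
  done
}
start_cmds
```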

So when using start-yarn.sh and start-dfs.sh, when might a daemon fail to start? I ran into a case where the namenode would not start:

[user@centos64 hadoop-2.2.0]$ sbin/start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh

Starting namenodes on [localhost]

localhost: ssh: localhost: Temporary failure in name resolution

centos64: starting datanode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-user-datanode-centos64.out

Starting secondary namenodes [0.0.0.0]

0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-user-secondarynamenode-centos64.out

starting yarn daemons

starting resourcemanager, logging to /usr/local/hadoop-2.2.0/logs/yarn-user-resourcemanager-centos64.out

centos64: starting nodemanager, logging to /usr/local/hadoop-2.2.0/logs/yarn-user-nodemanager-centos64.out

The cause was core-site.xml: I had written fs.default.name as hdfs://localhost:8020, and localhost failed to resolve on that machine.
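The accompanying ssh error ("localhost: Temporary failure in name resolution") means the name used in the config did not resolve. Making sure /etc/hosts maps both names avoids this (the IP below is a placeholder; use your machine's actual address):

```
127.0.0.1     localhost
192.168.1.100 YARN001
```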

Without passwordless SSH set up via ssh-keygen, you have to type the account password for each machine interactively (see http://blog.csdn.net/lzlchangqi/article/details/13397415 for passwordless SSH login):
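A minimal passwordless-SSH setup looks like this (default key paths; run it as the user that starts the daemons):

```shell
# Create an RSA key pair without a passphrase (if absent) and authorize it
# for logins to this machine.
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa" -q
cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```

Afterwards, `ssh localhost` (and `ssh YARN001`) should log in without a password prompt.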

[hadoop@localhost hadoop-2.6.0]$ sbin/start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh

15/03/11 04:47:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

15/03/11 04:47:53 WARN conf.Configuration: bad conf file: element not <property>

15/03/11 04:47:53 WARN conf.Configuration: bad conf file: element not <property>

15/03/11 04:47:54 WARN conf.Configuration: bad conf file: element not <property>

15/03/11 04:47:54 WARN conf.Configuration: bad conf file: element not <property>

Starting namenodes on [YARN001]

The authenticity of host 'yarn001 (127.0.0.1)' can't be established.

RSA key fingerprint is e5:5f:ff:0e:13:a8:8f:20:dc:93:8b:40:ff:3e:77:40.

Are you sure you want to continue connecting (yes/no)? yes

YARN001: Warning: Permanently added 'yarn001' (RSA) to the list of known hosts.

hadoop@yarn001's password:

YARN001: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-namenode-localhost.localdomain.out

hadoop@yarn001's password:

YARN001: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-datanode-localhost.localdomain.out

Starting secondary namenodes [0.0.0.0]

hadoop@0.0.0.0's password:

0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-secondarynamenode-localhost.localdomain.out

15/03/11 04:50:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

(For this warning, see http://blog.csdn.net/iloveyin/article/details/28909771 or http://blog.itpub.net/20777547/viewspace-1147174 — you can download pre-built native libraries there, or download the source and build them yourself.)

15/03/11 04:50:42 WARN conf.Configuration: bad conf file: element not <property>

15/03/11 04:50:42 WARN conf.Configuration: bad conf file: element not <property>

15/03/11 04:50:43 WARN conf.Configuration: bad conf file: element not <property>

15/03/11 04:50:43 WARN conf.Configuration: bad conf file: element not <property>

starting yarn daemons

resourcemanager running as process 25218. Stop it first.

hadoop@yarn001's password:

hadoop@yarn001's password: YARN001: Permission denied, please try again.

YARN001: starting nodemanager, logging to /usr/local/hadoop-2.2.0/logs/yarn-hadoop-nodemanager-localhost.localdomain.out

[hadoop@localhost hadoop-2.2.0]$ jps

26222 NodeManager

25688 NameNode

26002 SecondaryNameNode

25796 DataNode

25218 ResourceManager

26254 Jps
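The repeated "bad conf file: element not &lt;property&gt;" warnings above mean the parser found an element other than &lt;property&gt; directly under &lt;configuration&gt; in one of the site files. A quick structural check (a sketch using a throwaway file and python3, which is assumed available; point it at your real etc/hadoop/*.xml files instead):

```shell
# Write a sample site file, then list any children of <configuration>
# that are not <property> -- those are what trigger the warning.
TMP=$(mktemp -d)
cat > "$TMP/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://YARN001:8020</value>
  </property>
</configuration>
EOF
python3 - "$TMP/core-site.xml" <<'PY'
import sys
import xml.etree.ElementTree as ET
root = ET.parse(sys.argv[1]).getroot()
bad = [child.tag for child in root if child.tag != "property"]
print("non-property elements:", bad if bad else "none")
PY
```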

[user@centos64 hadoop-2.2.0]$ bin/hdfs dfs -ls /

Found 2 items

drwx------ - user supergroup 0 2015-05-30 22:53 /tmp

drwxr-xr-x - user supergroup 0 2015-05-30 22:53 /user

[user@centos64 hadoop-2.2.0]$ bin/hdfs dfs -ls -R /

drwx------ - user supergroup 0 2015-05-30 23:14 /tmp

drwxr-xr-x - user supergroup 0 2015-05-30 22:53 /user

drwxr-xr-x - user supergroup 0 2015-05-30 22:55 /user/user

[user@centos64 hadoop-2.2.0]$ bin/hdfs dfs -mkdir /user/user/pigtest

[user@centos64 hadoop-2.2.0]$ bin/hdfs dfs -ls -R /

drwx------ - user supergroup 0 2015-05-30 23:14 /tmp

drwxr-xr-x - user supergroup 0 2015-05-30 22:53 /user

drwxr-xr-x - user supergroup 0 2015-05-30 23:16 /user/user

drwxr-xr-x - user supergroup 0 2015-05-30 23:16 /user/user/pigtest

[user@centos64 Desktop]$ hdfs dfs -put /home/user/Desktop/student.txt /user/user/pigtest

[user@centos64 Desktop]$ hdfs dfs -put /home/user/Desktop/teacher.txt /user/user/pigtest

[user@centos64 Desktop]$ hdfs dfs -ls /user/user/pigtest/

Found 2 items

-rw-r--r-- 1 user supergroup 124 2015-05-30 23:46 /user/user/pigtest/student.txt

-rw-r--r-- 1 user supergroup 38 2015-05-30 23:48 /user/user/pigtest/teacher.txt

[user@centos64 Desktop]$