
[Hadoop] Hadoop Installation

2016-12-04 15:19
1. SSH

See the companion post: [Hadoop] SSH passwordless login and troubleshooting (http://blog.csdn.net/sunnyyoona/article/details/51689041#t1)

2. Download

(1) Download directly from the official site: http://hadoop.apache.org/releases.html

(2) Or download from the command line:
xiaosi@yoona:~$ wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
--2016-06-16 08:40:07--  http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz

Resolving mirrors.hust.edu.cn (mirrors.hust.edu.cn)... 202.114.18.160

Connecting to mirrors.hust.edu.cn (mirrors.hust.edu.cn)|202.114.18.160|:80... connected.

HTTP request sent, awaiting response... 200 OK

Length: 196015975 (187M) [application/octet-stream]

Saving to: 'hadoop-2.7.3.tar.gz'
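Mirrors occasionally serve truncated or corrupted downloads, so it is worth verifying the tarball against a published checksum before extracting it (the exact checksum file name your mirror provides is an assumption; Apache publishes signature and checksum files next to each release). A minimal, self-contained sketch of the verification step, using a stand-in file so the commands run as-is:

```shell
# Verify a download against a SHA-256 checksum file.
# For the real tarball you would fetch the checksum published by Apache
# and run:  sha256sum -c hadoop-2.7.3.tar.gz.sha256   (file name assumed)
# A stand-in file is used here so the steps are reproducible anywhere.
cd "$(mktemp -d)"
printf 'pretend-this-is-the-tarball' > hadoop-2.7.3.tar.gz
sha256sum hadoop-2.7.3.tar.gz > hadoop-2.7.3.tar.gz.sha256   # publisher side
sha256sum -c hadoop-2.7.3.tar.gz.sha256                      # downloader side
```

`sha256sum -c` prints `hadoop-2.7.3.tar.gz: OK` on success and exits non-zero on a mismatch, which makes it easy to use in scripts.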

3. Extract the Hadoop archive

Extract hadoop-2.7.3.tar.gz, downloaded to the home directory, into the ~/opt folder:
xiaosi@yoona:~$ tar -zxvf hadoop-2.7.3.tar.gz -C opt/


4. Configuration

All configuration files live in the etc/hadoop folder under the installation directory:
xiaosi@yoona:~/opt/hadoop-2.7.3/etc/hadoop$ ls

capacity-scheduler.xml  hadoop-metrics.properties   httpfs-signature.secret  log4j.properties            slaves
configuration.xsl       hadoop-metrics2.properties  httpfs-site.xml          mapred-env.cmd              ssl-client.xml.example
container-executor.cfg  hadoop-policy.xml           kms-acls.xml             mapred-env.sh               ssl-server.xml.example
core-site.xml           hdfs-site.xml               kms-env.sh               mapred-queues.xml.template  yarn-env.cmd
hadoop-env.cmd          httpfs-env.sh               kms-log4j.properties     mapred-site.xml             yarn-env.sh
hadoop-env.sh           httpfs-log4j.properties     kms-site.xml             mapred-site.xml.template    yarn-site.xml

Each Hadoop component is configured through its own XML file: core-site.xml holds the Common properties, hdfs-site.xml holds the HDFS properties, and mapred-site.xml holds the MapReduce properties.

Note: early Hadoop versions used a single configuration file, hadoop-site.xml, for the Common, HDFS, and MapReduce components. Since release 0.20.0 that file has been split into three, one per component.

4.1 Configure core-site.xml

core-site.xml is configured as follows:
<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

 Licensed under the Apache License, Version 2.0 (the "License");

 you may not use this file except in compliance with the License.

 You may obtain a copy of the License at


   http://www.apache.org/licenses/LICENSE-2.0


 Unless required by applicable law or agreed to in writing, software

 distributed under the License is distributed on an "AS IS" BASIS,

 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

 See the License for the specific language governing permissions and

 limitations under the License. See accompanying LICENSE file.

-->


<!-- Put site-specific property overrides in this file. -->


<configuration>


  <property>

   <name>hadoop.tmp.dir</name>

   <value>/home/${user.name}/tmp/hadoop</value>

   <description>A base for other temporary directories.</description>

  </property>


  <property>

   <name>fs.defaultFS</name>

   <value>hdfs://localhost:9000</value>

  </property>

 

  <property> 

   <name>hadoop.proxyuser.xiaosi.hosts</name> 

   <value>*</value> 

   <description>Allow the superuser xiaosi to connect from any host when impersonating a user.</description>

  </property> 

  <property> 

   <name>hadoop.proxyuser.xiaosi.groups</name> 

   <value>*</value> 

   <description>Allow the superuser xiaosi to impersonate members of any group.</description>

  </property>


</configuration>
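A quick way to sanity-check the file is to pull a value back out of it. The sed pipeline below extracts fs.defaultFS from a core-site.xml-shaped file (a heredoc stands in for the real file so the snippet runs anywhere); on a live installation you could instead ask Hadoop for the effective value with ./bin/hdfs getconf -confKey fs.defaultFS.

```shell
# Extract the fs.defaultFS value from a core-site.xml-style file.
# A heredoc stands in for the real file here; point the sed command
# at etc/hadoop/core-site.xml on an actual installation.
cat > /tmp/core-site-demo.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
# On the line after the matching <name>, strip the <value> tags.
sed -n '/<name>fs.defaultFS<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}' /tmp/core-site-demo.xml
```

The printed value should match what you wrote into the file (hdfs://localhost:9000 above); a typo in the property name would print nothing.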


4.2 Configure hdfs-site.xml

hdfs-site.xml is configured as follows:
<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

 Licensed under the Apache License, Version 2.0 (the "License");

 you may not use this file except in compliance with the License.

 You may obtain a copy of the License at


   http://www.apache.org/licenses/LICENSE-2.0


 Unless required by applicable law or agreed to in writing, software

 distributed under the License is distributed on an "AS IS" BASIS,

 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

 See the License for the specific language governing permissions and

 limitations under the License. See accompanying LICENSE file.

-->


<!-- Put site-specific property overrides in this file. -->


<configuration>


  <property>

    <name>dfs.replication</name>

    <value>1</value>

  </property>


  <property>

    <name>dfs.namenode.name.dir</name>

    <value>file:/home/xiaosi/tmp/hadoop/dfs/name</value>

  </property>


  <property>

    <name>dfs.datanode.data.dir</name>

    <value>file:/home/xiaosi/tmp/hadoop/dfs/data</value>

  </property>


</configuration>
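Both dfs.namenode.name.dir and dfs.datanode.data.dir must point at directories the Hadoop user can write. Creating them up front avoids a common startup failure; note too that by default the DataNode expects its storage directory permissions to be 700. A sketch, assuming the paths used in the hdfs-site.xml above:

```shell
# Create the NameNode and DataNode storage directories ahead of time.
# Paths match the hdfs-site.xml above; adjust to your own values.
BASE="$HOME/tmp/hadoop/dfs"
mkdir -p "$BASE/name" "$BASE/data"
chmod 700 "$BASE/data"   # DataNode's default expected permission is 700
ls -ld "$BASE/name" "$BASE/data"
```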


4.3 Configure mapred-site.xml

A fresh 2.7.3 distribution ships only mapred-site.xml.template; copy it first (cp mapred-site.xml.template mapred-site.xml). mapred-site.xml is configured as follows:
<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

 Licensed under the Apache License, Version 2.0 (the "License");

 you may not use this file except in compliance with the License.

 You may obtain a copy of the License at


   http://www.apache.org/licenses/LICENSE-2.0


 Unless required by applicable law or agreed to in writing, software

 distributed under the License is distributed on an "AS IS" BASIS,

 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

 See the License for the specific language governing permissions and

 limitations under the License. See accompanying LICENSE file.

-->


<!-- Put site-specific property overrides in this file. -->


<configuration>


  <property>

    <name>mapred.job.tracker</name>

    <value>localhost:9001</value>

  </property>


</configuration>
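Note that mapred.job.tracker is the old MRv1 (JobTracker) property. On Hadoop 2.x, MapReduce jobs normally run on YARN instead; if you want that mode rather than the setup above, the usual minimal setting (shown here as a sketch, not part of the original configuration) is:

```xml
<!-- mapred-site.xml: run MapReduce on YARN (Hadoop 2.x style) -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```

Running on YARN also requires starting the YARN daemons (sbin/start-yarn.sh), which this walkthrough does not cover.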


When running Hadoop, the JDK may not be found. In that case edit the hadoop-env.sh script; the only environment variable that must be set there is JAVA_HOME, everything else is optional:
export JAVA_HOME=/home/xiaosi/opt/jdk-1.8.0
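If you are not sure where the JDK lives, a JAVA_HOME candidate can usually be derived from the java binary on the PATH; the sed step simply strips the trailing bin/java (or jre/bin/java) components from the resolved path. The fallback path in the sketch is a sample, used only so the pipeline runs even where no JDK is installed:

```shell
# Derive a JAVA_HOME candidate from the java binary on the PATH.
# readlink -f resolves symlink chains (e.g. /etc/alternatives/java).
java_bin=$(command -v java || true)
java_path=$(readlink -f "$java_bin" 2>/dev/null \
  || echo /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java)   # sample fallback
echo "$java_path" | sed 's:/jre/bin/java$::; s:/bin/java$::'
```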


5. Running

5.1 Initialize the HDFS filesystem

After configuration is complete, and before running Hadoop for the first time, the HDFS filesystem must be formatted. From the installation directory, run:
xiaosi@yoona:~/opt/hadoop-2.7.3$ ./bin/hdfs namenode -format

5.2 Start the daemons

Start the NameNode and DataNode daemons:
xiaosi@yoona:~/opt/hadoop-2.7.3$ ./sbin/start-dfs.sh

Starting namenodes on [localhost]

localhost: starting namenode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-namenode-yoona.out

localhost: starting datanode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-datanode-yoona.out

Starting secondary namenodes [0.0.0.0]

0.0.0.0: starting secondarynamenode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-secondarynamenode-yoona.out

Use the jps command to check whether the NameNode and DataNode processes are running:
xiaosi@yoona:~/opt/hadoop-2.7.3$ jps

13400 SecondaryNameNode

13035 NameNode

13197 DataNode

13535 Jps
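The jps check can also be scripted. The snippet below scans a jps-style process list for the three HDFS daemons expected after start-dfs.sh; a captured sample stands in for live jps output so the logic is reproducible as-is:

```shell
# Check that NameNode, DataNode and SecondaryNameNode all appear in
# jps output. On a live cluster, replace the heredoc with: jps_out=$(jps)
jps_out=$(cat <<'EOF'
13400 SecondaryNameNode
13035 NameNode
13197 DataNode
13535 Jps
EOF
)
for d in NameNode DataNode SecondaryNameNode; do
  # grep -w matches whole words, so "NameNode" does not also
  # match inside "SecondaryNameNode".
  echo "$jps_out" | grep -qw "$d" && echo "$d: running" || echo "$d: MISSING"
done
```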

As the startup output shows, the logs are written under hadoop-2.7.3/logs/. If anything goes wrong during startup, check the log files there to find the cause.