Notes on a Distributed Hadoop 2.6.0 Install on CentOS 7.0 (reposted)
2016-04-06 10:51
Three virtual machines; their IP addresses are assigned by static DHCP leases on the router (so no hosts-file setup is needed).
The three machines are:
1. hadoop-a: 192.168.0.20 # master
2. hadoop-b: 192.168.0.21 # slave
3. hadoop-c: 192.168.0.22 # slave
CentOS 7.0, Hadoop 2.6.0
1. Set up passwordless SSH login (omitted) # see the course slides or http://my.oschina.net/u/1169607/blog/175899
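Step 1 can be sketched roughly as below. This is a dry run that only prints the commands (the `run` wrapper echoes instead of executing); the hostnames match the machine list above, and `ssh-copy-id` is the standard OpenSSH helper. Drop the `echo` to run the commands for real on the master.

```shell
# Dry-run sketch of passwordless SSH setup from the master.
run() { echo "$@"; }   # print each command instead of executing it

run ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # generate a key pair once
for slave in hadoop-b hadoop-c; do
  run ssh-copy-id "$slave"                     # push the public key to each slave
done
run ssh hadoop-b true                          # should succeed with no password prompt
```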
2. Install the JDK (omitted) # CentOS 7.0 ships OpenJDK 1.7, which is sufficient; no separate install needed.
# export JAVA_HOME=/usr/lib/jvm/java
3. Install the build dependencies (on all three machines): yum install maven svn ncurses-devel gcc* lzo-devel zlib-devel autoconf automake libtool cmake openssl-devel
// Not needed if you installed the prebuilt binary distribution.
4. Disable the firewall (on all three machines)
# systemctl status firewalld.service   -- check the firewall status
# systemctl stop firewalld.service     -- stop the firewall
# systemctl disable firewalld.service  -- keep it disabled across reboots
-------- The following steps are performed on the master machine ---------
5. Download Hadoop 2.6.0 and unpack it under your home directory: http://apache.fayea.com/hadoop/c ... hadoop-2.6.0.tar.gz
6. Change into the just-unpacked Hadoop directory, create the working directories, then edit the slaves file:
$ mkdir -p dfs/name
$ mkdir -p dfs/data
$ mkdir -p tmp
$ cd etc/hadoop
$ vim slaves
hadoop-b
hadoop-c
7. Edit hadoop-env.sh and yarn-env.sh
$ vim hadoop-env.sh   # and likewise yarn-env.sh
export JAVA_HOME=/usr/lib/jvm/java
8. Edit core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop-a:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131702</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/oliver/hadoop-2.6.0/tmp</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
</configuration>
9. Edit hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/oliver/hadoop-2.6.0/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/oliver/hadoop-2.6.0/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop-a:9001</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
10. Edit mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop-a:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop-a:19888</value>
</property>
</configuration>
11. Edit yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop-a:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop-a:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop-a:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop-a:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop-a:8088</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>768</value>
</property>
</configuration>
12. Copy the configured Hadoop directory from the master to the slave machines.
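Step 12 might look like the sketch below. It is a dry run that only prints the scp commands (drop the `echo` to actually copy), and it assumes the same /home/oliver/hadoop-2.6.0 path used in the config files above.

```shell
# Print the copy command for each slave; remove "echo" to execute for real.
for slave in hadoop-b hadoop-c; do
  echo scp -r /home/oliver/hadoop-2.6.0 "$slave":/home/oliver/
done
```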
13. Format the namenode (on the master):
$ ./bin/hdfs namenode -format
14. Start HDFS and YARN (on the master):
$ ./sbin/start-dfs.sh
$ ./sbin/start-yarn.sh
15. Verify the startup via the web UIs:
http://192.168.0.20:8088 (YARN ResourceManager)
http://192.168.0.20:9001 (SecondaryNameNode, per dfs.namenode.secondary.http-address)
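Beyond the web UIs, the daemons themselves can be checked from the Hadoop directory on the master. The sketch below only prints the usual verification commands (a dry run; remove the `echo` wrapper to run them):

```shell
# Dry-run sketch of post-start checks on the master.
check() { echo "$@"; }              # print instead of execute

check jps                           # master should list NameNode, SecondaryNameNode, ResourceManager
check ./bin/hdfs dfsadmin -report   # should report two live datanodes (hadoop-b, hadoop-c)
```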