2018-07-08 issue: Hadoop Single-Node Pseudo-Distributed Cluster Configuration
2018-07-08 12:15
一、Installation media

Download: http://archive.apache.org/dist/hadoop/core/
Version: hadoop-2.4.1.tar.gz

二、Installation steps

1. Extract hadoop-2.4.1.tar.gz

[root@hadoop-server01 hadoop-2.4.1]# tar -xvf hadoop-2.4.1.tar.gz -C /usr/local/apps/
[root@hadoop-server01 hadoop-2.4.1]# pwd
/usr/local/apps/hadoop-2.4.1
[root@hadoop-server01 hadoop-2.4.1]# ll
total 52
drwxr-xr-x. 2 67974 users  4096 Jun 20  2014 bin
drwxr-xr-x. 3 67974 users  4096 Jun 20  2014 etc
drwxr-xr-x. 2 67974 users  4096 Jun 20  2014 include
drwxr-xr-x. 3 67974 users  4096 Jun 20  2014 lib
drwxr-xr-x. 2 67974 users  4096 Jun 20  2014 libexec
-rw-r--r--. 1 67974 users 15458 Jun 20  2014 LICENSE.txt
-rw-r--r--. 1 67974 users   101 Jun 20  2014 NOTICE.txt
-rw-r--r--. 1 67974 users  1366 Jun 20  2014 README.txt
drwxr-xr-x. 2 67974 users  4096 Jun 20  2014 sbin
drwxr-xr-x. 4 67974 users  4096 Jun 20  2014 share
[root@hadoop-server01 hadoop-2.4.1]#

2. Edit the configuration files

[root@hadoop-server01 etc]# cd /usr/local/apps/hadoop-2.4.1/etc/hadoop/

-- Edit hadoop-env.sh

[root@hadoop-server01 hadoop]# vi hadoop-env.sh

# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
export JAVA_HOME=/usr/local/apps/jdk1.7.0_80/

# The jsvc implementation to use. Jsvc is required to run secure datanodes.
#export JSVC_HOME=${JSVC_HOME}

-- Edit core-site.xml

[root@hadoop-server01 hadoop]# vi core-site.xml

<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-server01:9000/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/apps/hadoop-2.4.1/tmp/</value>
  </property>
</configuration>

-- Edit hdfs-site.xml

[root@hadoop-server01 hadoop]# vi hdfs-site.xml

<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

-- Edit mapred-site.xml

[root@hadoop-server01 hadoop]# mv mapred-site.xml.template mapred-site.xml
[root@hadoop-server01 hadoop]# vi mapred-site.xml

<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

-- Edit yarn-site.xml

[root@hadoop-server01 hadoop]# vi yarn-site.xml

<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-server01</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

Note: the aux-services value must be mapreduce_shuffle with an underscore; writing mapreduce-shuffle is a common mistake and the NodeManager will fail to start.

-- Edit slaves

[root@hadoop-server01 hadoop]# vi slaves

hadoop-server01
hadoop-server02
hadoop-server03
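The five file edits above can also be generated in one pass. The following is a minimal sketch, not part of the Hadoop distribution: it writes the demo config into ./hadoop-conf-demo by default so it can be tried safely; point HADOOP_CONF_DIR at /usr/local/apps/hadoop-2.4.1/etc/hadoop to produce the real files.

```shell
#!/bin/sh
# Sketch: generate the site files from step 2 in one pass.
# Writes to ./hadoop-conf-demo unless HADOOP_CONF_DIR is set.
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-./hadoop-conf-demo}
mkdir -p "$HADOOP_CONF_DIR"

# Helper: emit one <property> block for a name/value pair.
prop() { printf '  <property>\n    <name>%s</name>\n    <value>%s</value>\n  </property>\n' "$1" "$2"; }

{ echo '<configuration>'
  prop fs.defaultFS 'hdfs://hadoop-server01:9000/'
  prop hadoop.tmp.dir '/usr/local/apps/hadoop-2.4.1/tmp/'
  echo '</configuration>'
} > "$HADOOP_CONF_DIR/core-site.xml"

{ echo '<configuration>'
  prop dfs.replication 1
  echo '</configuration>'
} > "$HADOOP_CONF_DIR/hdfs-site.xml"

{ echo '<configuration>'
  prop mapreduce.framework.name yarn
  echo '</configuration>'
} > "$HADOOP_CONF_DIR/mapred-site.xml"

{ echo '<configuration>'
  prop yarn.resourcemanager.hostname hadoop-server01
  # underscore, not hyphen -- the NodeManager rejects mapreduce-shuffle
  prop yarn.nodemanager.aux-services mapreduce_shuffle
  echo '</configuration>'
} > "$HADOOP_CONF_DIR/yarn-site.xml"

printf '%s\n' hadoop-server01 hadoop-server02 hadoop-server03 > "$HADOOP_CONF_DIR/slaves"
```

Because every value lives in one script, a hostname or port change only needs to be made in one place before regenerating.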
3. Start the services

-- Format the NameNode

[root@hadoop-server01 hadoop]# cd /usr/local/apps/hadoop-2.4.1/bin/
[root@hadoop-server01 bin]# ./hadoop namenode -format
18/06/15 00:44:09 INFO util.GSet: capacity      = 2^15 = 32768 entries
18/06/15 00:44:09 INFO namenode.AclConfigFlag: ACLs enabled? false
18/06/15 00:44:09 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1300855425-192.168.1.201-1529048649163
18/06/15 00:44:09 INFO common.Storage: Storage directory /usr/local/apps/hadoop-2.4.1/tmp/dfs/name has been successfully formatted.
18/06/15 00:44:09 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/06/15 00:44:09 INFO util.ExitUtil: Exiting with status 0
18/06/15 00:44:09 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-server01/192.168.1.201
************************************************************/

3.1 Manual start-up

(1) Start HDFS

[root@hadoop-server01 sbin]# ./hadoop-daemon.sh start namenode
[root@hadoop-server01 sbin]# ./hadoop-daemon.sh start datanode
[root@hadoop-server01 sbin]# ./hadoop-daemon.sh start secondarynamenode
[root@hadoop-server01 sbin]# jps
28993 Jps
28925 SecondaryNameNode
4295 DataNode
4203 NameNode

-- Web UI: http://192.168.1.201:50070/

(2) Start YARN

[root@hadoop-server01 sbin]# ./yarn-daemon.sh start resourcemanager
[root@hadoop-server01 sbin]# ./yarn-daemon.sh start nodemanager
[root@hadoop-server01 sbin]# jps
29965 NodeManager
28925 SecondaryNameNode
29062 ResourceManager
4295 DataNode
4203 NameNode

3.2 Script-driven start-up

Prerequisite: passwordless SSH login must be configured first.

[root@hadoop-server01 sbin]# ssh-keygen
[root@hadoop-server01 sbin]# ssh-copy-id hadoop-server01
[root@hadoop-server01 sbin]# ssh hadoop-server01

(1) Start HDFS

[root@hadoop-server01 sbin]# ./start-dfs.sh
[root@hadoop-server01 sbin]# jps
31538 Jps
31423 SecondaryNameNode
31271 DataNode
31152 NameNode

(2) Start YARN

[root@hadoop-server01 sbin]# ./start-yarn.sh
[root@hadoop-server01 sbin]# jps
32009 Jps
31423 SecondaryNameNode
31271 DataNode
31697 NodeManager
31593 ResourceManager
31152 NameNode
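Rather than eyeballing the jps listing after every start-up, the check can be scripted. A sketch (this helper is not part of Hadoop; the daemon list matches this single-node setup): it diffs jps-style output against the five expected daemons and reports any that are missing.

```shell
# check_daemons: compare jps-style output against the daemons this
# pseudo-distributed node should be running. Prints OK, or the
# missing daemons and a non-zero exit status.
check_daemons() {
  _out=$1
  _missing=""
  for _d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    # match "<pid> <Daemon>" at end of line, so NameNode does not
    # falsely match the SecondaryNameNode line
    echo "$_out" | grep -q " $_d\$" || _missing="$_missing $_d"
  done
  if [ -z "$_missing" ]; then
    echo "OK"
  else
    echo "MISSING:$_missing"
    return 1
  fi
}
```

Typical use after start-dfs.sh and start-yarn.sh: `check_daemons "$(jps)"`.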
Note: every setting in this document refers to machines by hostname, so the hosts file must be configured first. On Linux/Unix edit /etc/hosts; on Windows edit C:\Windows\System32\drivers\etc\hosts. Entry format: IP hostname
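The hosts entries for this setup could be appended as follows. A sketch: only 192.168.1.201 appears in the logs above, so the .202/.203 addresses are hypothetical placeholders; the script writes to ./hosts.demo by default so it can be tried safely — set HOSTS_FILE=/etc/hosts (as root) to apply for real.

```shell
#!/bin/sh
# Sketch: append "IP hostname" entries for the three cluster nodes.
# NOTE: only 192.168.1.201 comes from this document; the .202/.203
# addresses are made-up placeholders for illustration.
HOSTS_FILE=${HOSTS_FILE:-./hosts.demo}
cat >> "$HOSTS_FILE" <<'EOF'
192.168.1.201 hadoop-server01
192.168.1.202 hadoop-server02
192.168.1.203 hadoop-server03
EOF
```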