
2018-07-08 Issue: Configuring a Hadoop Single-Node Pseudo-Distributed Cluster

2018-07-08 12:15
I. Installation media
Download: http://archive.apache.org/dist/hadoop/core/
Version: hadoop-2.4.1.tar.gz

II. Installation steps
1. Extract hadoop-2.4.1.tar.gz
[root@hadoop-server01 hadoop-2.4.1]# tar -xvf hadoop-2.4.1.tar.gz -C /usr/local/apps/
[root@hadoop-server01 hadoop-2.4.1]# pwd
/usr/local/apps/hadoop-2.4.1
[root@hadoop-server01 hadoop-2.4.1]# ll
total 52
drwxr-xr-x. 2 67974 users  4096 Jun 20  2014 bin
drwxr-xr-x. 3 67974 users  4096 Jun 20  2014 etc
drwxr-xr-x. 2 67974 users  4096 Jun 20  2014 include
drwxr-xr-x. 3 67974 users  4096 Jun 20  2014 lib
drwxr-xr-x. 2 67974 users  4096 Jun 20  2014 libexec
-rw-r--r--. 1 67974 users 15458 Jun 20  2014 LICENSE.txt
-rw-r--r--. 1 67974 users   101 Jun 20  2014 NOTICE.txt
-rw-r--r--. 1 67974 users  1366 Jun 20  2014 README.txt
drwxr-xr-x. 2 67974 users  4096 Jun 20  2014 sbin
drwxr-xr-x. 4 67974 users  4096 Jun 20  2014 share

2. Edit the configuration files
[root@hadoop-server01 etc]# cd /usr/local/apps/hadoop-2.4.1/etc/hadoop/

-- Edit hadoop-env.sh
[root@hadoop-server01 hadoop]# vi hadoop-env.sh
# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.
# The java implementation to use.
export JAVA_HOME=/usr/local/apps/jdk1.7.0_80/
# The jsvc implementation to use. Jsvc is required to run secure datanodes.
#export JSVC_HOME=${JSVC_HOME}

-- Edit core-site.xml
[root@hadoop-server01 hadoop]# vi core-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-server01:9000/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/apps/hadoop-2.4.1/tmp/</value>
  </property>
</configuration>

-- Edit hdfs-site.xml
[root@hadoop-server01 hadoop]# vi hdfs-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

-- Edit mapred-site.xml (note: the value element must be properly opened, i.e. <value>yarn</value>)
[root@hadoop-server01 hadoop]# mv mapred-site.xml.template mapred-site.xml
[root@hadoop-server01 hadoop]# vi mapred-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

-- Edit yarn-site.xml (note: the aux-services value is mapreduce_shuffle, with an underscore)
[root@hadoop-server01 hadoop]# vi yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-server01</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

-- Edit slaves
[root@hadoop-server01 hadoop]# vi slaves
hadoop-server01
hadoop-server02
hadoop-server03
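If you set up several nodes, editing each XML file by hand in vi gets tedious. The steps above can also be scripted: the sketch below writes the same core-site.xml from a here-document and sanity-checks the result. The target directory `HADOOP_CONF` defaults to a temporary demo path here (an assumption for illustration); on a real node point it at /usr/local/apps/hadoop-2.4.1/etc/hadoop.

```shell
# Sketch: generate core-site.xml from a script instead of editing it in vi.
# HADOOP_CONF is a demo path (assumption); set it to your real conf dir.
HADOOP_CONF=${HADOOP_CONF:-/tmp/hadoop-conf-demo}
mkdir -p "$HADOOP_CONF"

# Same properties as configured above: NameNode URI and Hadoop temp dir.
cat > "$HADOOP_CONF/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-server01:9000/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/apps/hadoop-2.4.1/tmp/</value>
  </property>
</configuration>
EOF

# Sanity check: the NameNode URI should appear exactly once.
grep -c 'hdfs://hadoop-server01:9000/' "$HADOOP_CONF/core-site.xml"
```

The quoted here-document delimiter (`'EOF'`) prevents the shell from expanding anything inside the XML, so `${...}`-style Hadoop property references would also survive verbatim.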

3. Start the services
-- Format the NameNode
[root@hadoop-server01 hadoop]# cd /usr/local/apps/hadoop-2.4.1/bin/
[root@hadoop-server01 bin]# ./hadoop namenode -format
18/06/15 00:44:09 INFO util.GSet: capacity = 2^15 = 32768 entries
18/06/15 00:44:09 INFO namenode.AclConfigFlag: ACLs enabled? false
18/06/15 00:44:09 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1300855425-192.168.1.201-1529048649163
18/06/15 00:44:09 INFO common.Storage: Storage directory /usr/local/apps/hadoop-2.4.1/tmp/dfs/name has been successfully formatted.
18/06/15 00:44:09 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/06/15 00:44:09 INFO util.ExitUtil: Exiting with status 0
18/06/15 00:44:09 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-server01/192.168.1.201
************************************************************/

3.1 Manual startup
(1) Start HDFS
[root@hadoop-server01 sbin]# ./hadoop-daemon.sh start namenode
[root@hadoop-server01 sbin]# ./hadoop-daemon.sh start datanode
[root@hadoop-server01 sbin]# ./hadoop-daemon.sh start secondarynamenode
[root@hadoop-server01 sbin]# jps
28993 Jps
28925 SecondaryNameNode
4295 DataNode
4203 NameNode
-- Web UI: http://192.168.1.201:50070/
(2) Start YARN
[root@hadoop-server01 sbin]# ./yarn-daemon.sh start resourcemanager
[root@hadoop-server01 sbin]# ./yarn-daemon.sh start nodemanager
[root@hadoop-server01 sbin]# jps
29965 NodeManager
28925 SecondaryNameNode
29062 ResourceManager
4295 DataNode
4203 NameNode

3.2 Script-based startup
-- Prerequisite: passwordless SSH login must be configured first
[root@hadoop-server01 sbin]# ssh-keygen
[root@hadoop-server01 sbin]# ssh-copy-id hadoop-server01
[root@hadoop-server01 sbin]# ssh hadoop-server01
(1) Start HDFS
[root@hadoop-server01 sbin]# ./start-dfs.sh
[root@hadoop-server01 sbin]# jps
31538 Jps
31423 SecondaryNameNode
31271 DataNode
31152 NameNode
(2) Start YARN
[root@hadoop-server01 sbin]# ./start-yarn.sh
[root@hadoop-server01 sbin]# jps
32009 Jps
31423 SecondaryNameNode
31271 DataNode
31697 NodeManager
31593 ResourceManager
31152 NameNode
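After startup, the tutorial verifies each step by eyeballing jps output. That check can be automated: the sketch below scans a jps listing for the five daemons a healthy pseudo-distributed node should run. Here it is fed a captured sample (taken from the output above) so it runs anywhere; on a live node you would pass it `"$(jps)"` instead. The function name `check_daemons` is our own, not a Hadoop utility.

```shell
# Sketch: confirm all expected Hadoop daemons appear in a jps listing.
# check_daemons is a hypothetical helper, not part of Hadoop.
check_daemons() {
  # Note: grep for "NameNode" also matches "SecondaryNameNode",
  # which is harmless here since both are expected.
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    printf '%s\n' "$1" | grep -q "$d" || { echo "missing: $d"; return 1; }
  done
  echo "all daemons running"
}

# Sample jps output captured from the session above; on a live node,
# call: check_daemons "$(jps)"
sample='32009 Jps
31423 SecondaryNameNode
31271 DataNode
31697 NodeManager
31593 ResourceManager
31152 NameNode'

check_daemons "$sample"
```

Because the function returns non-zero as soon as a daemon is missing, it can be dropped straight into a cron job or startup script and used to trigger an alert.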
Note: every setting in this document uses hostnames rather than IP addresses, so the hosts file must be configured first. On Linux, edit /etc/hosts; on Windows, edit C:\Windows\System32\drivers\etc\hosts. Each entry has the format: IP hostname.