Hadoop-2.6.0-cdh5.7.0 Installation Guide
2020-02-17 12:11
Download Hadoop and the JDK
- Hadoop download: http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.7.0.tar.gz
- JDK download: register an Oracle account to download; JDK 1.7 is recommended
Install the JDK
- Extract the JDK archive
tar -zxvf /home/hadoop/software/jdk-7u80-linux-x64.tar.gz -C /usr/java/
- Configure the JDK environment variables
hadoop:root:/usr/java:>vi /etc/profile
# /etc/profile
# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc
# It's NOT a good idea to change this file unless you know what you
# are doing. It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.

# add path
export JAVA_HOME=/usr/java/jdk1.7.0_80
export PATH=$JAVA_HOME/bin:$PATH

# show path
hadoop:root:/usr/java:>source /etc/profile
hadoop:root:/usr/java:>java -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
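Note that the two `export` lines must each be on their own line. As `/etc/profile` itself suggests, a cleaner alternative is a fragment under `/etc/profile.d/`. A minimal sketch, writing to a scratch file here for illustration (in practice the target would be something like `/etc/profile.d/java.sh`, a hypothetical filename):

```shell
# Sketch: put the JDK variables in a profile fragment instead of editing
# /etc/profile directly. Paths are the ones used in this article.
PROFILE_FRAG="${TMPDIR:-/tmp}/java_env.sh"   # in practice: /etc/profile.d/java.sh

cat > "$PROFILE_FRAG" <<'EOF'
export JAVA_HOME=/usr/java/jdk1.7.0_80
export PATH=$JAVA_HOME/bin:$PATH
EOF

# Source it and confirm the variable took effect
. "$PROFILE_FRAG"
echo "JAVA_HOME=$JAVA_HOME"
```

After a fresh login (or `source`), `java -version` should report 1.7.0_80 as shown above.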
Configure SSH
hadoop:hadoop:/home/hadoop:>ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
ca:20:e2:68:64:46:e0:f2:62:63:b9:60:71:a5:75:4a hadoop@hadoop
The key's randomart image is:
+--[ RSA 2048]----+
|. E .            |
|o = o            |
|.+ o .           |
|o.+              |
|+Xo . S          |
|@oo. o .         |
|.+ o             |
|.                |
|                 |
+-----------------+
hadoop:hadoop:/home/hadoop:>cp .ssh/id_rsa.pub ~/.ssh/authorized_keys
hadoop:hadoop:/home/hadoop:>cd .ssh/
hadoop:hadoop:/home/hadoop/.ssh:>ll
total 12
-rw-r--r-- 1 hadoop hadoop  395 Jan  2 02:16 authorized_keys
-rw------- 1 hadoop hadoop 1675 Jan  2 02:16 id_rsa
-rw-r--r-- 1 hadoop hadoop  395 Jan  2 02:16 id_rsa.pub
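Two caveats worth noting: `cp` overwrites any keys already in `authorized_keys`, so appending with `cat >>` is safer, and `sshd` (with the default `StrictModes yes`) refuses keys if `~/.ssh` is not mode 700. A minimal sketch, run against a scratch directory here so nothing in your real `~/.ssh` is touched:

```shell
# Sketch: append the public key rather than overwriting authorized_keys,
# and tighten permissions so sshd's StrictModes accepts them.
# Demonstrated on a scratch directory; in practice use ~/.ssh.
SSH_DIR="${TMPDIR:-/tmp}/demo_ssh"
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"                       # required by sshd StrictModes

# Stand-in for id_rsa.pub (in practice produced by ssh-keygen -t rsa)
echo "ssh-rsa AAAAB3...demo hadoop@hadoop" > "$SSH_DIR/id_rsa.pub"

cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"   # append, don't overwrite
chmod 600 "$SSH_DIR/authorized_keys"
```

With the real `~/.ssh` set up this way, `ssh localhost` should log in without a password prompt, which the Hadoop start scripts rely on.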
Install Hadoop
- Extract Hadoop
hadoop:hadoop:/home/hadoop/app:>tar -zxvf /home/hadoop/software/hadoop-2.6.0-cdh5.7.0.tar.gz -C /home/hadoop/app/
- Configure the environment
hadoop:hadoop:/home/hadoop:>vi .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

hadoop:hadoop:/home/hadoop:>source .bash_profile
hadoop:hadoop:/home/hadoop:>echo $HADOOP_HOME
/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
Modify the configuration files
- hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_80
- core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/app/tmp</value>
    </property>
</configuration>
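Here `hadoop` is this article's hostname and `8020` is the NameNode RPC port; substitute your own host. If you script the setup, the file can be generated from the shell. A minimal sketch, writing to a scratch path (the real file lives under `$HADOOP_HOME/etc/hadoop/core-site.xml`):

```shell
# Sketch: generate core-site.xml with a heredoc. Hostname and tmp dir
# are the values used in this article; adjust to your environment.
CONF="${TMPDIR:-/tmp}/core-site.xml"

cat > "$CONF" <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/app/tmp</value>
    </property>
</configuration>
EOF

grep -c '<property>' "$CONF"
```

Setting `hadoop.tmp.dir` explicitly matters: the default is under `/tmp`, which many distributions clear on reboot, losing HDFS metadata.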
- hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
- slaves
hadoop
- mapred-site.xml (this file does not exist by default; create it by copying mapred-site.xml.template)
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
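The copy-then-edit step can be scripted as well. A minimal sketch, simulated in a scratch directory standing in for `$HADOOP_HOME/etc/hadoop` (the stub template here is hypothetical; the real tarball ships a fuller one):

```shell
# Sketch: create mapred-site.xml from the template, then set the
# framework to yarn. GNU sed is assumed (for -i and \n in replacement).
CONF_DIR="${TMPDIR:-/tmp}/demo_conf"
mkdir -p "$CONF_DIR"

# Stand-in for the template shipped with the tarball
printf '<configuration>\n</configuration>\n' > "$CONF_DIR/mapred-site.xml.template"

cp "$CONF_DIR/mapred-site.xml.template" "$CONF_DIR/mapred-site.xml"

# Insert the property before the closing tag
sed -i 's|</configuration>|    <property>\n        <name>mapreduce.framework.name</name>\n        <value>yarn</value>\n    </property>\n</configuration>|' "$CONF_DIR/mapred-site.xml"
```

Without this property MapReduce jobs run in local mode instead of on YARN, which is easy to miss because simple jobs still succeed.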
- yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
Format the NameNode
hdfs namenode -format
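Run this exactly once. Re-formatting an already-formatted NameNode wipes its metadata and desynchronizes it from existing DataNodes. A small guard sketch, using this article's `hadoop.tmp.dir` (adjust the path to your own configuration):

```shell
# Sketch: refuse to format if NameNode metadata already exists under
# hadoop.tmp.dir. The path below is the one configured in this article.
TMP_DIR="/home/hadoop/app/tmp"

if [ -d "$TMP_DIR/dfs/name/current" ]; then
    echo "NameNode already formatted; refusing to format again"
else
    echo "no existing metadata; safe to run: hdfs namenode -format"
fi
```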
Start Hadoop
hadoop:hadoop:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop:>start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
18/01/02 02:49:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop]
hadoop: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop.out
hadoop: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop.out
18/01/02 02:50:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop.out
hadoop: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop.out
hadoop:hadoop:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop:>jps
8345 NodeManager
8066 SecondaryNameNode
7820 NameNode
7914 DataNode
8249 ResourceManager
8613 Jps
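A healthy single-node deployment shows all five daemons in `jps`. A small sketch that checks for each by name; the `jps` output is simulated here with the listing above, while on a live node you would use `JPS_OUT="$(jps)"` instead:

```shell
# Sketch: verify the five expected Hadoop daemons appear in jps output.
# Simulated output taken from this article's session.
JPS_OUT='8345 NodeManager
8066 SecondaryNameNode
7820 NameNode
7914 DataNode
8249 ResourceManager
8613 Jps'

MISSING=""
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    echo "$JPS_OUT" | grep -q "$d" || MISSING="$MISSING $d"
done

if [ -z "$MISSING" ]; then
    echo "all daemons running"
else
    echo "missing:$MISSING"
fi
```

If a daemon is absent, its `.out`/`.log` file under `$HADOOP_HOME/logs` (paths shown in the startup output above) is the first place to look. The web UIs are at port 50070 (HDFS) and 8088 (YARN).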
Source: "ITPUB Blog", link: http://blog.itpub.net/31537832/viewspace-2154777/. Please credit the source when reposting.