Setting up Hadoop 2.7.3 + Hive 2.1.1 + MySQL (Configuring Hadoop) (Part 1)
2017-11-13 20:10
I. Preparation
Operating system: Linux (CentOS 7.0)
Java: jdk-8u111-linux-x64.rpm
Hive 2.1.1: apache-hive-2.1.1-bin.tar.gz
Hadoop 2.7.3: hadoop-2.7.3.tar.gz
Download the JDK from the official site; the latest Hadoop and Hive releases can be downloaded from their official download pages.
Put the downloaded files in a folder named Hadoop on the CentOS desktop.
II. Installation and configuration
1. Install Java (JDK)
[root@localhost Hadoop]# yum install -y jdk-8u111-linux-x64.rpm
Check the installed version:
[root@localhost Hadoop]# java -version
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
2. Unpack the Hadoop and Hive archives
[root@localhost Hadoop]# tar -xzf hadoop-2.7.3.tar.gz
[root@localhost Hadoop]# tar -xzf apache-hive-2.1.1-bin.tar.gz
List the extracted contents:
[root@localhost Hadoop]# ls
apache-hive-2.1.1-bin hadoop-2.7.3 jdk-8u111-linux-x64.rpm
apache-hive-2.1.1-bin.tar.gz hadoop-2.7.3.tar.gz
3. Move the extracted directories and rename them (hive/hadoop)
[root@localhost Hadoop]# mv hadoop-2.7.3 /usr/hadoop
[root@localhost Hadoop]# mv apache-hive-2.1.1-bin /usr/hive
4. Configure environment variables (HADOOP)
[root@localhost hadoop]# vim ~/.bashrc
Add:
# set hadoop/hive/jdk(java) path
export HADOOP_HOME=/usr/hadoop
export HIVE_HOME=/usr/hive
export JAVA_HOME=/usr/java/jdk1.8.0_111
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$JAVA_HOME/bin"
Apply the changes:
[root@localhost hadoop]# source ~/.bashrc
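A quick way to confirm the new variables took effect in the current shell is to look for the Hadoop bin directory in PATH. A minimal sketch, using the same paths as this guide:

```shell
# Re-create the .bashrc additions (paths as used in this guide) and verify
# that the Hadoop bin directory is now one of the PATH entries.
export HADOOP_HOME=/usr/hadoop HIVE_HOME=/usr/hive JAVA_HOME=/usr/java/jdk1.8.0_111
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$JAVA_HOME/bin"
# Split PATH on ':' and look for an exact match.
echo "$PATH" | tr ':' '\n' | grep -x "/usr/hadoop/bin"
```

If the grep prints nothing, the export lines were not sourced into the current shell.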
5. Create the Hadoop working directories
[root@localhost hadoop]# cd /usr/hadoop
[root@localhost hadoop]# mkdir tmp
[root@localhost hadoop]# mkdir hdfs
[root@localhost hadoop]# mkdir hdfs/data
[root@localhost hadoop]# mkdir hdfs/name
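The four mkdir calls above can also be collapsed into a single idempotent command with -p, which creates parent directories as needed. Sketched here against a scratch directory so it can be tried anywhere; substitute /usr/hadoop for the real run:

```shell
# Stand-in for /usr/hadoop so this sketch is safe to run anywhere.
base=$(mktemp -d)
# -p creates tmp, hdfs, hdfs/data and hdfs/name in one shot, and does not
# fail if any of them already exist.
mkdir -p "$base/tmp" "$base/hdfs/data" "$base/hdfs/name"
ls "$base/hdfs"
```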
6. Edit the configuration files
6.0 Change to the directory that holds them
[root@localhost hadoop]# cd /usr/hadoop/etc/hadoop
6.1 Configure hadoop-env.sh
[root@localhost hadoop]# vim hadoop-env.sh
Add:
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/java/jdk1.8.0_111
6.2 Configure yarn-env.sh
[root@localhost hadoop]# vim yarn-env.sh
Add:
# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
export JAVA_HOME=/usr/java/jdk1.8.0_111
6.3 Configure core-site.xml
[root@localhost hadoop]# vim core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
<description>The HDFS URL: filesystem scheme, NameNode host, and port</description>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/hadoop/tmp</value>
<description>Local Hadoop temporary directory</description>
</property>
</configuration>
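A malformed configuration file is a common reason the daemons fail to start later, so it is worth checking that the edited file is well-formed XML. A minimal sketch, written against a copy in /tmp so it is safe to try anywhere, and using python3's stdlib parser since xmllint may not be installed:

```shell
# Write a copy of the core-site.xml content to /tmp for the check.
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
# Parse it; a parse error here means the real file needs fixing too.
python3 - <<'EOF'
import xml.etree.ElementTree as ET
root = ET.parse('/tmp/core-site.xml').getroot()
print(root.find('./property/name').text, '=', root.find('./property/value').text)
EOF
```

The same check works for hdfs-site.xml, yarn-site.xml, and mapred-site.xml below.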
6.4 Configure hdfs-site.xml
[root@localhost hadoop]# vim hdfs-site.xml
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/usr/hadoop/hdfs/name</value>
<description>Where the NameNode stores the HDFS namespace metadata</description>
</property>
<property>
<name>dfs.data.dir</name>
<value>/usr/hadoop/hdfs/data</value>
<description>Physical storage location of data blocks on the DataNode</description>
</property>
<!-- Number of HDFS replicas -->
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Replica count; the default is 3, and it should not exceed the number of DataNodes</description>
</property>
</configuration>
6.5 Configure yarn-site.xml
[root@localhost hadoop]# vim yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>localhost:8099</value>
</property>
</configuration>
6.6 Configure mapred-site.xml (not used when starting the services; optional)
[root@localhost hadoop]# mv mapred-site.xml.template mapred-site.xml
[root@localhost hadoop]# vim mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<!-- Jobs are submitted through YARN -->
</property>
</configuration>
7. Set up passwordless SSH (1. generate an RSA key pair; 2. share the public key across the cluster) so the Hadoop services start without password prompts when running start-dfs/start-yarn. Note: if the home directory is not shared via NFS, copy the key to the other hosts by another means, e.g. ssh-copy-id (usage: ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.2).
[root@localhost hive]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[root@localhost hive]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
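The two commands above can be tried safely in a scratch directory before touching ~/.ssh. A minimal sketch (note the key-generation flag is a capital -P with a space, for an empty passphrase; authorized_keys must not be group- or world-writable or sshd will ignore it):

```shell
# Scratch directory standing in for ~/.ssh, so this is safe to run anywhere.
tmp=$(mktemp -d)
# Generate an RSA key pair with an empty passphrase (-P ''), quietly.
ssh-keygen -t rsa -P '' -f "$tmp/id_rsa" -q
# Append the public key to authorized_keys, as done above for localhost.
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys"
# sshd refuses authorized_keys with loose permissions.
chmod 600 "$tmp/authorized_keys"
wc -l < "$tmp/authorized_keys"
```

After the real run, `ssh localhost true` should succeed without a password prompt.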
8. Start Hadoop
8.1 Format the NameNode
[root@localhost hadoop]# bin/hdfs namenode -format
8.2 Start HDFS (answer yes and enter the account password at the prompts; the password prompts are skipped if passwordless SSH is configured as above)
[root@localhost hadoop]# sbin/start-dfs.sh
8.3 Start YARN (same prompts as above; skipped when passwordless SSH is configured)
[root@localhost hadoop]# sbin/start-yarn.sh
8.4 Check the running processes:
[root@localhost hadoop]# jps
26161 DataNode
26021 NameNode
26344 SecondaryNameNode
26890 Jps
26492 ResourceManager
26767 NodeManager
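A single-node deployment like this one should show five daemons besides Jps itself. A sketch of checking the listing mechanically; the sample output above is inlined here, while in practice you would capture it with `jps_out=$(jps)`:

```shell
# Sample jps output from this guide, inlined so the sketch runs anywhere.
jps_out='26161 DataNode
26021 NameNode
26344 SecondaryNameNode
26890 Jps
26492 ResourceManager
26767 NodeManager'
# Report any expected daemon that is not in the listing.
for d in DataNode NameNode SecondaryNameNode ResourceManager NodeManager; do
  echo "$jps_out" | grep -qw "$d" || echo "missing: $d"
done
echo "check done"
```

If any daemon is reported missing, its log under $HADOOP_HOME/logs is the place to look.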