Installing and Configuring Hive 2.1.1 on Hadoop 2.7.3
2017-03-26 17:49
Environment: Ubuntu 16.04; JDK 1.8.0_111; Apache Hadoop 2.7.3; Apache Hive 2.1.1.
This post only covers the Hive installation itself.
First, download the release you need from the official site (I used apache-hive-2.1.1-bin.tar.gz) and put it in your home directory.
(1) Extract the archive:
$tar -zxvf apache-hive-2.1.1-bin.tar.gz
(2) Change into the conf directory:
$cd apache-hive-2.1.1-bin/conf
$ls
You should see the following files:
beeline-log4j2.properties.template hive-exec-log4j2.properties.template llap-cli-log4j2.properties.template
hive-default.xml.template hive-log4j2.properties.template llap-daemon-log4j2.properties.template
hive-env.sh.template ivysettings.xml parquet-logging.properties
Then, still under conf, run the following commands:
$cp hive-default.xml.template hive-default.xml
$cp hive-env.sh.template hive-env.sh
$cp hive-default.xml hive-site.xml
(3) Add the MySQL JDBC driver:
Download mysql-connector-java-x.y.z-bin.jar (x.y.z being whichever version you downloaded) and put it in apache-hive-2.1.1-bin/lib.
(4) Set the install path and environment variables:
$sudo mv apache-hive-2.1.1-bin /usr/local/
$sudo vim /etc/profile
Add HIVE_HOME (and extend PATH accordingly), then reload the profile:
$source /etc/profile
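The exact lines added to /etc/profile are not shown above; a minimal version, assuming the move to /usr/local from the previous step, could be:

```shell
# Make the hive and schematool commands available on PATH.
# Path assumes apache-hive-2.1.1-bin was moved to /usr/local as above.
export HIVE_HOME=/usr/local/apache-hive-2.1.1-bin
export PATH=$PATH:$HIVE_HOME/bin
```

After sourcing the profile, `hive --version` should resolve without typing the full path.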
(5) Edit hive-site.xml and hive-env.sh
Change the contents of hive-site.xml to the following:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>
</configuration>
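Hand-editing hive-site.xml makes XML typos easy to introduce, and Hive's startup errors for them are cryptic. A quick well-formedness check can catch them early; this is a sketch that writes a trimmed demo file to /tmp, but in practice you would point it at your real conf path:

```shell
# Parse the config with python3's stdlib XML parser; any unclosed tag
# or stray character fails loudly before Hive ever sees the file.
CONF=/tmp/hive-site.xml   # in practice: /usr/local/apache-hive-2.1.1-bin/conf/hive-site.xml
cat > "$CONF" <<'EOF'
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
</configuration>
EOF
python3 -c "import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1]); print('well-formed')" "$CONF"
```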
Change hive-env.sh as follows:
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Set Hive and Hadoop environment variables here. These variables can be used
# to control the execution of Hive. It should be used by admins to configure
# the Hive installation (so that users do not have to set environment variables
# or set command line parameters to get correct behavior).
#
# The hive service being invoked (CLI/HWI etc.) is available via the environment
# variable SERVICE

# Hive Client memory usage can be an issue if a large number of clients
# are running at the same time. The flags below have been useful in
# reducing memory usage:
#
# if [ "$SERVICE" = "cli" ]; then
#   if [ -z "$DEBUG" ]; then
#     export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:+UseParNewGC -XX:-UseGCOverheadLimit"
#   else
#     export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:-UseGCOverheadLimit"
#   fi
# fi

# The heap size of the jvm started by the hive shell script can be controlled via:
#
# export HADOOP_HEAPSIZE=1024
export HADOOP_HEAPSIZE=1024
#
# Larger heap size may be required when running queries over large number of files or partitions.
# By default hive shell scripts use a heap size of 256 (MB). Larger heap size would also be
# appropriate for hive server (hwi etc).

# Set HADOOP_HOME to point to a specific hadoop install directory
# HADOOP_HOME=${bin}/../../hadoop
HADOOP_HOME=/usr/local/hadoop   # set this to your own Hadoop path

# Hive Configuration Directory can be controlled by:
# export HIVE_CONF_DIR=
export HIVE_CONF_DIR=/usr/local/apache-hive-2.1.1-bin/conf

# Folder containing extra libraries required for hive compilation/execution can be controlled by:
# export HIVE_AUX_JARS_PATH=
export HIVE_AUX_JARS_PATH=/usr/local/apache-hive-2.1.1-bin/lib
(6) Create a hive user in MySQL and grant it sufficient privileges
$mysql -u root -p
mysql> create user 'hive' identified by '123456';
Query OK, 0 rows affected (0.00 sec)
mysql> grant all privileges on *.* to 'hive' with grant option;
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
(7) Initialize the metastore database:
$schematool -initSchema -dbType mysql
If the Hive metastore database is local, the installation is complete at this point.
If the metastore database is remote: after finishing step (5) on the server host, copy the apache-hive-2.1.1-bin directory to the client machine, and in the server's hive-site.xml change localhost in the JDBC URL to the server's IP address.
Then change the client's hive-site.xml to the following:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://SERVER_IP:9083</value>
  </property>
</configuration>
At this point the metastore service needs to be started on the server side (the host whose hive-site.xml holds the MySQL connection settings):
$hive --service metastore &
If the command appears to hang, press Enter to get the prompt back; the jobs command shows whether the service started successfully.
Once it is running, you can execute the hive command on the client.
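Instead of pressing Enter and eyeballing jobs, you can poll the metastore's default Thrift port (9083, matching the hive.metastore.uris value above). A sketch of such a helper, using bash's /dev/tcp feature (bash-specific; on other shells the check simply reports the port as closed):

```shell
# Poll host:port until it accepts a TCP connection or the attempts run out.
wait_for_port() {
  host=$1; port=$2; tries=${3:-15}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # bash opens a TCP socket through the special /dev/tcp path
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      echo "open"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "closed"
  return 1
}

# After: hive --service metastore &
# wait_for_port localhost 9083 && echo "metastore is up"
```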