Installing hive-1.2.1 (on a hadoop-2.6.2 HA cluster)
2016-02-17 00:00
Abstract: I ran into a few problems while configuring hive-1.2.1, so this post records the whole process.
Environment
| IP | Hostname | Deployed components |
| --- | --- | --- |
| 192.168.2.10 | bi10 | hadoop-2.6.2, hive-1.2.1, hive metastore |
| 192.168.2.12 | bi12 | hadoop-2.6.2, hive-1.2.1, hive metastore |
| 192.168.2.13 | bi13 | hadoop-2.6.2, hive-1.2.1 |
MySQL setup
Create a user named hive with password hive, then grant it privileges. The database host is 192.168.2.11, port 3306:

```sql
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
CREATE DATABASE hive;
ALTER DATABASE hive CHARACTER SET latin1;
```
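To apply these statements non-interactively, one option is to collect them in a script file and feed it to the mysql client as an administrative user. A small sketch (the `init_hive_metastore.sql` filename is just an illustration; the host and port are this cluster's values):

```shell
# Write the metastore bootstrap statements to a script file (filename is illustrative)
cat > init_hive_metastore.sql <<'EOF'
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
CREATE DATABASE hive;
ALTER DATABASE hive CHARACTER SET latin1;
EOF
# Then run it against the metastore database host as a privileged user:
#   mysql -h 192.168.2.11 -P 3306 -u root -p < init_hive_metastore.sql
```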
Hive configuration
Unpack hive-1.2.1 into /home/hadoop/work/hive-1.2.1, then edit the configuration. Go into Hive's conf directory and create hive-site.xml:
```shell
[hadoop@bi13 conf]$ cp hive-default.xml.template hive-site.xml
[hadoop@bi13 conf]$ vim hive-site.xml
```
Notes on the hive-site.xml parameters:
| Parameter | Notes |
| --- | --- |
| hive.metastore.warehouse.dir | Location of the Hive warehouse in HDFS. Because the Hadoop cluster runs in HA mode, this uses the nameservice URI hdfs://masters/user/hive/warehouse rather than a specific namenode host and port. |
| hive.metastore.uris | Thrift URIs the metastore clients connect to; we use the default port 9083. |
| hive.exec.scratchdir | Likewise, because of the HA setup we use hdfs://masters/user/hive/tmp. |
| javax.jdo.option.ConnectionPassword | MySQL password |
| javax.jdo.option.ConnectionDriverName | MySQL JDBC driver class |
| javax.jdo.option.ConnectionURL | MySQL JDBC URL |
| javax.jdo.option.ConnectionUserName | MySQL username |
| hive.querylog.location, hive.server2.logging.operation.log.location, hive.exec.local.scratchdir, hive.downloaded.resources.dir | The values of these must be written as concrete local paths; otherwise Hive fails to start. |
```xml
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>hdfs://masters/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://bi10:9083,thrift://bi12:9083</value>
  <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>hdfs://masters/user/hive/tmp</value>
  <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.2.11:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/home/hadoop/work/hive-1.2.1/tmp/iotmp</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/home/hadoop/work/hive-1.2.1/tmp/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/home/hadoop/work/hive-1.2.1/tmp/${system:user.name}</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/home/hadoop/work/hive-1.2.1/tmp/${hive.session.id}_resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
```
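Before syncing the file to the other nodes, it can help to confirm that every required property actually made it into hive-site.xml. A minimal grep-based sketch (it creates a small sample file for illustration; point HIVE_SITE at the real conf/hive-site.xml in practice):

```shell
# Minimal check: every required property name should appear in hive-site.xml.
# HIVE_SITE points at a generated sample here; use the real conf/hive-site.xml in practice.
HIVE_SITE=$(mktemp)
cat > "$HIVE_SITE" <<'EOF'
<configuration>
  <property><name>hive.metastore.uris</name><value>thrift://bi10:9083,thrift://bi12:9083</value></property>
  <property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://192.168.2.11:3306/hive</value></property>
  <property><name>hive.metastore.warehouse.dir</name><value>hdfs://masters/user/hive/warehouse</value></property>
</configuration>
EOF
missing=0
for key in hive.metastore.uris javax.jdo.option.ConnectionURL hive.metastore.warehouse.dir; do
  if grep -q "<name>$key</name>" "$HIVE_SITE"; then
    echo "$key: present"
  else
    echo "$key: MISSING"
    missing=1
  fi
done
rm -f "$HIVE_SITE"
```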
Replace Hadoop's jline-0.9.94.jar with Hive's jline-2.12.jar
```shell
mv hadoop-2.6.2/share/hadoop/yarn/lib/jline-0.9.94.jar hadoop-2.6.2/share/hadoop/yarn/lib/jline-0.9.94.jar.bak
cp hive-1.2.1/lib/jline-2.12.jar hadoop-2.6.2/share/hadoop/yarn/lib/
```

Note the `cp` (not `mv`) for the second step: Hive's own lib directory still needs jline-2.12.jar.
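Having two jline versions on the YARN classpath is exactly what triggers the well-known `Found class jline.Terminal, but interface was expected` error, so it is worth confirming that only one jline jar remains active after the swap. A self-contained sketch (it builds a temporary stand-in directory; point YARN_LIB at hadoop-2.6.2/share/hadoop/yarn/lib in practice):

```shell
# Count active jline jars; after the swap exactly one (jline-2.12.jar) should remain.
# YARN_LIB is a temporary stand-in here; use hadoop-2.6.2/share/hadoop/yarn/lib in practice.
YARN_LIB=$(mktemp -d)
touch "$YARN_LIB/jline-2.12.jar" "$YARN_LIB/jline-0.9.94.jar.bak"
count=$(ls "$YARN_LIB" | grep -c '^jline-.*\.jar$')
echo "active jline jars: $count"
```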
Repeat all of the steps above on every machine where Hive is deployed.
Starting the metastore service
Start the metastore service on both bi10 and bi12:

```shell
nohup hive --service metastore > /dev/null 2>&1 &
```
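One pitfall with this kind of background launch: writing `> null` instead of `> /dev/null` silently creates a regular file named `null` in the working directory instead of discarding the output. A quick self-contained demonstration:

```shell
cd "$(mktemp -d)"
echo "metastore log line" > null 2>&1       # typo: creates a regular file named "null"
echo "metastore log line" > /dev/null 2>&1  # correct: output is discarded
ls -l null   # the stray "null" file is sitting in the working directory
```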
Testing
Launch the hive CLI and check whether anything is wrong. If there is a problem, start it with `hive --hiveconf hive.root.logger=DEBUG,console` to see the detailed logs.

```
[hadoop@bi13 work]$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/work/hadoop-2.6.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/work/spark-1.5.1/lib/spark-assembly-1.5.1-hadoop2.4.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/work/hadoop-2.6.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/work/spark-1.5.1/lib/spark-assembly-1.5.1-hadoop2.4.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Logging initialized using configuration in jar:file:/home/hadoop/work/hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive>
```

The SLF4J multiple-bindings warning is harmless here; it comes from both the Hadoop and Spark assemblies shipping a log4j binding.