
MySQL and Hive 2.1.1 Installation and Configuration

2017-10-10 18:20
1. Install MySQL

This installation is simple: it is an online install, and you only need to run the following commands in order.

(1)sudo apt-get install mysql-server

(2)sudo apt-get install mysql-client

(3)sudo apt-get install libmysqlclient-dev

(4)sudo apt-get install libmysql-java

(5) Copy /usr/share/java/mysql-connector-java-5.1.28.jar into Hive's lib directory:

cp mysql-connector-java-5.1.28.jar /home/xqshi/Downloads/hadoop/apache-hive-2.1.1-bin/lib

During the installation you will be prompted for a password for the MySQL root user; do not skip it. Then check that the installation succeeded with the following commands:

root@ubuntu:/usr/local# sudo netstat -tap | grep mysql
root@ubuntu:/usr/local# sudo /etc/init.d/mysql restart

Log in to verify:

root@ubuntu:/usr/local# mysql -hlocalhost -uroot -pxqshi

2. Install Hadoop correctly

3. Download the Hive archive apache-hive-2.1.1-bin.tar.gz and extract it
4. Configure the environment variables: vi ~/.bashrc

export JAVA_HOME=/home/xqshi/Downloads/hadoop/jdk1.8.0_91
export JRE_HOME=/home/xqshi/Downloads/hadoop/jdk1.8.0_91/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin:$PATH
export HADOOP_HOME=/home/xqshi/Downloads/hadoop/hadoop-2.8.0
export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH
export IDEA_JDK=/home/xqshi/Downloads/hadoop/jdk1.8.0_91
export HIVE_HOME=/home/xqshi/Downloads/hadoop/apache-hive-2.1.1-bin
export PATH=${HIVE_HOME}/bin:$PATH
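The exports above only take effect in new shells unless the file is re-sourced. A minimal sketch of the reload step, demonstrated here on a throwaway profile file so it is self-contained (in practice you would simply run `source ~/.bashrc`; the paths mirror those above, substitute your own layout):

```shell
# Simulate the ~/.bashrc edit on a temporary file, then source it so the
# variables become visible in the current shell
profile=$(mktemp)
cat >> "$profile" <<'EOF'
export HIVE_HOME=/home/xqshi/Downloads/hadoop/apache-hive-2.1.1-bin
export PATH=${HIVE_HOME}/bin:$PATH
EOF

. "$profile"
echo "$HIVE_HOME"   # the Hive install directory should print here
rm -f "$profile"
```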


5. Edit the Hive configuration script hive-config.sh

vi $HIVE_HOME/bin/hive-config.sh
export JAVA_HOME=/home/xqshi/Downloads/hadoop/jdk1.8.0_91
export HIVE_HOME=/home/xqshi/Downloads/hadoop/apache-hive-2.1.1-bin
export HADOOP_HOME=/home/xqshi/Downloads/hadoop/hadoop-2.8.0


6. Create hive-env.sh from its template

cp hive-env.sh.template hive-env.sh
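The copied template consists mostly of commented-out examples. A minimal sketch of the lines typically set in hive-env.sh (the paths match those exported in step 4; adjust them to your own layout):

```shell
# hive-env.sh is sourced by the Hive launcher scripts
export HADOOP_HOME=/home/xqshi/Downloads/hadoop/hadoop-2.8.0
export HIVE_CONF_DIR=/home/xqshi/Downloads/hadoop/apache-hive-2.1.1-bin/conf
```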


7. Edit hive-site.xml

cp hive-default.xml.template hive-site.xml
vi hive-site.xml
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>your database username</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>your database password</value>
<description>password to use against metastore database</description>
</property>
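One optional refinement, not in the original post: the MySQL Connector/J connection URL accepts a createDatabaseIfNotExist parameter, so the driver can create the hive metastore database on first connect instead of you creating it by hand in step 10. A variant of the connection property (note that & must be escaped as &amp;amp; inside XML):

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <!-- createDatabaseIfNotExist=true lets the driver create the "hive"
       database if it does not exist; useSSL=false silences SSL warnings
       on MySQL 5.x with Connector/J 5.1 -->
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
</property>
```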

# If the section below is not configured, you will run into error 1.
<property>
<name>hive.exec.local.scratchdir</name>
<value>your custom directory</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>your custom directory</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<name>hive.querylog.location</name>
<value>your custom directory</value>
<description>Location of Hive run time structured log file</description>
</property>
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>your custom directory/operation_logs</value>
<description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
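In the template that hive-site.xml is copied from, these four values contain ${system:java.io.tmpdir} and ${system:user.name} placeholders, which Hive 2.x does not resolve at runtime and which are a common cause of startup failures. Rather than editing each property by hand, the placeholders can be rewritten in one pass with sed; a sketch, demonstrated on a throwaway file (the scratch path is hypothetical; point the sed command at your real conf/hive-site.xml):

```shell
# Replace every ${system:java.io.tmpdir} placeholder with a concrete
# directory, and ${system:user.name} with the current user, in one pass
scratch=/home/xqshi/hive/tmp            # hypothetical scratch directory
sample=$(mktemp)
echo '<value>${system:java.io.tmpdir}/${system:user.name}</value>' > "$sample"

sed -i "s|\${system:java.io.tmpdir}|$scratch|g; s|\${system:user.name}|$USER|g" "$sample"
cat "$sample"
```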


8. Edit hive-log4j.properties

cp hive-log4j.properties.template hive-log4j.properties
vim hive-log4j.properties
hive.log.dir=your custom directory/log/


9. Create the /tmp and /user/hive/warehouse directories on HDFS and grant them group write permission.

$HADOOP_HOME/bin/hadoop fs -mkdir -p   /tmp
$HADOOP_HOME/bin/hadoop fs -mkdir -p   /user/hive/warehouse
$HADOOP_HOME/bin/hadoop fs -chmod g+w  /tmp
$HADOOP_HOME/bin/hadoop fs -chmod g+w  /user/hive/warehouse


10. Configure MySQL

# Create the database
mysql> create database hive;
# Grant access privileges
mysql> grant all privileges on hive.* to root@localhost identified by 'your password' with grant option;
mysql> flush privileges;
# Copy the JDBC driver into Hive's lib directory so Java programs can connect to MySQL
cp mysql-connector-java-5.1.28.jar /home/xqshi/Downloads/hadoop/apache-hive-2.1.1-bin/lib



11. Initialize Hive. In Hive 2.0 and later, the initialization command is:

schematool -dbType mysql -initSchema

12. Once initialization succeeds, you can run hive and check that it works normally.
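As a quick smoke test (this needs the live cluster set up in the steps above, so the output depends on your environment), the Hive CLI can run a trivial query non-interactively:

```shell
# List databases through the Hive CLI; a fresh install should show "default"
hive -e 'show databases;'
```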

Reference: http://blog.csdn.net/jdplus/article/details/4649355

Reference: http://www.cnblogs.com/K-artorias/p/7141479.html
Tags: hive mysql