Installing and configuring MySQL and Hive 2.1.1
2017-10-10 18:20
1. Install MySQL
This installation is straightforward and done online; just run the following commands in order:
(1) sudo apt-get install mysql-server
(2) sudo apt-get install mysql-client
(3) sudo apt-get install libmysqlclient-dev
(4) sudo apt-get install libmysql-java
(5) Copy /usr/share/java/mysql-connector-java-5.1.28.jar into Hive's lib directory:
cp /usr/share/java/mysql-connector-java-5.1.28.jar /home/xqshi/Downloads/hadoop/apache-hive-2.1.1-bin/lib
During installation you will be prompted to set a password for the database root user; do not skip this. Then check that the installation succeeded:
root@ubuntu:/usr/local# sudo netstat -tap | grep mysql
root@ubuntu:/usr/local# sudo /etc/init.d/mysql restart
Log in to verify (here xqshi is the root password chosen during installation):
root@ubuntu:/usr/local# mysql -hlocalhost -uroot -pxqshi
2. Install Hadoop correctly.
3. Download the Hive archive apache-hive-2.1.1-bin.tar.gz and extract it.
4. Configure environment variables: vi ~/.bashrc
export JAVA_HOME=/home/xqshi/Downloads/hadoop/jdk1.8.0_91
export JRE_HOME=/home/xqshi/Downloads/hadoop/jdk1.8.0_91/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin:$PATH
export HADOOP_HOME=/home/xqshi/Downloads/hadoop/hadoop-2.8.0
export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH
export IDEA_JDK=/home/xqshi/Downloads/hadoop/jdk1.8.0_91
export HIVE_HOME=/home/xqshi/Downloads/hadoop/apache-hive-2.1.1-bin
export PATH=${HIVE_HOME}/bin:$PATH
5. Edit the Hive configuration script hive-config.sh
vi $HIVE_HOME/bin/hive-config.sh
export JAVA_HOME=/home/xqshi/Downloads/hadoop/jdk1.8.0_91
export HIVE_HOME=/home/xqshi/Downloads/hadoop/apache-hive-2.1.1-bin
export HADOOP_HOME=/home/xqshi/Downloads/hadoop/hadoop-2.8.0
6. Create hive-env.sh from its template
cp hive-env.sh.template hive-env.sh
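After copying the template, hive-env.sh typically only needs the Hadoop path and the Hive config directory set. A minimal sketch, assuming the same directory layout as in step 4 (adjust the paths to your installation):

```shell
# hive-env.sh — minimal settings; paths follow this guide's example layout, not defaults
export HADOOP_HOME=/home/xqshi/Downloads/hadoop/hadoop-2.8.0
export HIVE_CONF_DIR=/home/xqshi/Downloads/hadoop/apache-hive-2.1.1-bin/conf
```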
7. Edit hive-site.xml
cp hive-default.xml.template hive-site.xml
vi hive-site.xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>your database username</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>your database password</value>
  <description>password to use against metastore database</description>
</property>
# Without the following settings, error 1 occurs:
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>a custom directory</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>a custom directory</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>a custom directory</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>a custom directory/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
8. Edit hive-log4j.properties
cp hive-log4j.properties.template hive-log4j.properties
vim hive-log4j.properties
hive.log.dir=a custom directory/log/
9. Create the /tmp and /user/hive/warehouse directories on HDFS and grant group write permission:
$HADOOP_HOME/bin/hadoop fs -mkdir -p /tmp
$HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
$HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse
10. Configure MySQL
# Create the metastore database
mysql> create database hive;
# Grant access privileges
mysql> grant all privileges on hive.* to root@localhost identified by 'your password' with grant option;
mysql> flush privileges;
# Copy the JDBC driver into Hive's lib directory so Java programs can connect to MySQL (if not already done in step 1):
cp /usr/share/java/mysql-connector-java-5.1.28.jar /home/xqshi/Downloads/hadoop/apache-hive-2.1.1-bin/lib
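The grant above reuses the MySQL root account, which keeps the walkthrough short. A common alternative is a dedicated metastore user; a sketch, where the user name hive and the password are illustrative placeholders, not part of the original guide:

```shell
# Create a dedicated metastore user instead of reusing root
# (the user name 'hive' and password here are placeholders)
mysql -uroot -p <<'SQL'
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive-password';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost';
FLUSH PRIVILEGES;
SQL
```

If you take this route, set javax.jdo.option.ConnectionUserName and javax.jdo.option.ConnectionPassword in hive-site.xml to match.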
11. Initialize Hive. For Hive 2.0 and later, the initialization command is:
schematool -dbType mysql -initSchema
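After initialization, schematool can also report the schema version it recorded in MySQL, which is a quick way to confirm the metastore tables were created:

```shell
# Print the metastore schema version stored in the MySQL 'hive' database
schematool -dbType mysql -info
```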
12. Once initialization succeeds, you can run Hive and check that it works correctly.
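A quick smoke test can be run non-interactively from the shell; the table name test_tbl below is just an example:

```shell
# Each command should exit 0 if Hive and the metastore are wired up correctly
hive -e 'show databases;'
hive -e 'create table test_tbl (id int); show tables; drop table test_tbl;'
```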
Reference: http://blog.csdn.net/jdplus/article/details/4649355
Reference: http://www.cnblogs.com/K-artorias/p/7141479.html