Setting Up Hadoop 2.7.3 + Hive 2.1.1 with MySQL (Configuring Hive + MySQL + Connector) (Part 3)
2017-11-13 20:12
Preparation: download the latest connector from
https://dev.mysql.com/downloads/connector/j/. This example uses mysql-connector-java-5.1.41.tar.gz
1. Extract the connector archive
1.1. Extract it
[root@localhost Software]# tar xzf mysql-connector-java-5.1.41.tar.gz
[root@localhost Software]# cd mysql-connector-java-5.1.41/
1.2. List the directory contents
[root@localhost mysql-connector-java-5.1.41]# ll
1.3. Copy the driver JAR into hive/lib
[root@localhost Software]# cp mysql-connector-java-5.1.41/mysql-connector-java-5.1.41-bin.jar /usr/hive/lib/mysql-connector-java-5.1.41-bin.jar
2. Log in to MySQL and create the database hive_db (note that the hive-site.xml configured below refers to this name)
2.1. Username: root, password: password. Open another terminal, log in to MySQL, and create the hive_db database
[root@localhost hive]# mysql -u root -ppassword
mysql> create database hive_db;
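If you would rather not let Hive connect as MySQL root, a minimal sketch of a dedicated metastore account follows (the hive username and hivepassword are placeholders, not part of the original setup); remember to put the same credentials into javax.jdo.option.ConnectionUserName and javax.jdo.option.ConnectionPassword below:
mysql> CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hivepassword';
mysql> GRANT ALL PRIVILEGES ON hive_db.* TO 'hive'@'localhost';
mysql> FLUSH PRIVILEGES;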
3. Edit the configuration file hive-site.xml
Only the modified properties are listed below; everything else keeps its default value.
<configuration>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/usr/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>hive.metastore.local</name>
<value>true</value>
<description>Use false if a production metastore server is used</description>
</property>
<property>
<name>hive.exec.scratchdir</name>
<value>/tmp/hive</value>
<description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive_db?createDatabaseIfNotExist=true</value>
<description>User-defined (Roy) JDBC connect string for a JDBC metastore.
To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>User-Defined(Roy) Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>User-defined (Roy) Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>password</value>
<description>User-defined (Roy) password to use against metastore database</description>
</property>
</configuration>
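If hive-site.xml does not exist yet, one common approach (a sketch, again assuming Hive is installed under /usr/hive) is to start from the template that ships with Hive and then apply the changes above:
[root@localhost ~]# cd /usr/hive/conf
[root@localhost conf]# cp hive-default.xml.template hive-site.xml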
4. Initialize the metastore schema with schematool
[root@localhost hive]# schematool -dbType mysql -initSchema
--On success it prints
schemaTool completed
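To double-check that the schema landed in hive_db, schematool can also report the connection URL, user, and schema version it sees (this assumes the hive-site.xml above is on Hive's classpath):
[root@localhost hive]# schematool -dbType mysql -info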
5. Start the Hive server processes
5.1. Start the Hive metastore service
[root@localhost hive]# hive --service metastore &
--When the screen output stops updating, press Ctrl+C to get the shell prompt back
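Started this way, the metastore dies with the terminal. A sketch of a more durable variant, using nohup and an arbitrary log path of your choosing:
[root@localhost hive]# nohup hive --service metastore > /tmp/hive-metastore.log 2>&1 &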
5.2. Check the process list
[root@localhost hive]# jps
--The output now includes an extra RunJar process:
51280 Jps
5985 SecondaryNameNode
6226 ResourceManager
45766 DataNode
5753 NameNode
51194 RunJar
6348 NodeManager
5.3. If needed, start the Hive remote service, HiveServer2 (port 10000); note that Hive 2.x only ships HiveServer2, the old hiveserver service is gone
[root@localhost hive]# hive --service hiveserver2 &
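Once HiveServer2 is up, you can connect to it directly with beeline's -u and -n options (a sketch, assuming the default port and no authentication configured):
[root@localhost hive]# beeline -u jdbc:hive2://localhost:10000 -n root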
6. Test that the environment is configured correctly
6.1. Prepare a text file to import, /root/桌面/Test/wc-in/a.txt, in this format:
1,h
2,i
3,v
4,e
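One quick way to create this file (the directory path is just the example location used above):
[root@localhost ~]# mkdir -p /root/桌面/Test/wc-in
[root@localhost ~]# printf '1,h\n2,i\n3,v\n4,e\n' > /root/桌面/Test/wc-in/a.txt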
6.2. After logging in to Hive, test creating a table
[root@localhost hadoop]# hive
6.2.1. Create a table, specifying comma (,) as the field delimiter
hive> create table a(id int,name string)
> row format delimited fields terminated by ',';
--Output:
OK
Time taken: 0.288 seconds
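To confirm the delimiter was stored with the table definition, show create table prints the full DDL, including the field.delim SerDe property:
hive> show create table a;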
6.2.2. Load the file a.txt
hive> load data local inpath '/root/桌面/Test/wc-in/a.txt' into table a;
--Output:
Loading data to table default.a
OK
Time taken: 0.763 seconds
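Note that with the LOCAL keyword, Hive copies the file from the local filesystem into the table's warehouse directory (/usr/hive/warehouse/a here); without LOCAL, the path is read as an HDFS location and the file is moved instead.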
6.2.3. Check the result
hive> select * from a;
--Output:
OK
1 h
2 i
3 v
4 e
Time taken: 0.309 seconds, Fetched: 4 row(s)
6.3. Use the dfs command inside Hive
6.3.1. Check table a's storage path in HDFS
hive> dfs -ls /usr/hive/warehouse/a;
--Output:
Found 1 items
-rw-r--r--   1 root supergroup         16 2017-03-08 17:46 /usr/hive/warehouse/a/a.txt
6.3.2. View the file contents
hive> dfs -cat /usr/hive/warehouse/a/*;
--Output:
1,h
2,i
3,v
4,e
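The dfs command runs inside the Hive CLI's own JVM, so it avoids the startup cost of spawning a separate hdfs dfs process for each call; the results are the same as in step 8 below.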
7. Log in to MySQL to inspect the table metadata
[root@localhost conf]# mysql -u root -ppassword
mysql> use hive_db;
mysql> select TBL_ID, CREATE_TIME, DB_ID, OWNER, TBL_NAME, TBL_TYPE from TBLS;
--Output:
+--------+-------------+-------+-------+----------+---------------+
| TBL_ID | CREATE_TIME | DB_ID | OWNER | TBL_NAME | TBL_TYPE      |
+--------+-------------+-------+-------+----------+---------------+
|     37 |  1488966386 |     1 | root  | a        | MANAGED_TABLE |
+--------+-------------+-------+-------+----------+---------------+
1 row in set (0.03 sec)
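The DB_ID column refers to the metastore's DBS table; to see which Hive database it points at, you can also run:
mysql> select DB_ID, NAME, DB_LOCATION_URI from DBS;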
8. View the generated file in HDFS (same as step 6.3 above)
8.1. Check table a's storage path
[root@localhost hadoop]# hdfs dfs -ls /usr/hive/warehouse/a
--Output:
Found 1 items
-rw-r--r--   1 root supergroup         16 2017-03-08 17:46 /usr/hive/warehouse/a/a.txt
8.2. View the contents
[root@localhost hadoop]# hdfs dfs -cat /usr/hive/warehouse/a/*
--Output:
1,h
2,i
3,v
4,e
Troubleshooting common problems:
1. Error when starting Hive
[root@localhost hive]# hive
--Error message:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/root/24f1d91f-f32b-47e1-824d-ba26b02bd13e. Name node is in safe mode.
Cause: HDFS is in safe mode.
--Fix: take the NameNode out of safe mode
[root@localhost hadoop]# hadoop dfsadmin -safemode leave
--Output:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Safe mode is OFF
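As that warning says, the current form goes through the hdfs command instead; -safemode get reports the state without changing it:
[root@localhost hadoop]# hdfs dfsadmin -safemode get
[root@localhost hadoop]# hdfs dfsadmin -safemode leave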
2. Error when loading data
hive> load data local inpath '/root/桌面/Test/wc-in/a.txt' into table a;
--Error message:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /usr/hive/warehouse/a/a_copy_2.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
Cause: the Hadoop DataNode is not running.
Fix: restart HDFS and check the processes
[root@localhost hive]# start-dfs.sh
[root@localhost hive]# jps
--Output:
51152 Jps
5985 SecondaryNameNode
6226 ResourceManager
45766 DataNode
5753 NameNode
6348 NodeManager
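jps only shows that the DataNode process is alive; to confirm it has actually registered with the NameNode, hdfs dfsadmin -report lists the live datanodes and their capacity:
[root@localhost hive]# hdfs dfsadmin -report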
An example added at a reader's request:
--Using the HiveServer2 service with the beeline client
--Start the service; when the output stops updating, press Ctrl+C
[root@localhost bin]# hiveserver2
[root@localhost bin]# beeline
The output looks like this:
which: no hbase in (/usr/lib64/qt-3.3/bin:/root/perl5/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/hadoop/bin:/usr/hadoop/bin:/usr/hadoop/sbin:/usr/hive/bin:/usr/java/jdk1.8.0_111/bin:/root/bin:/usr/hadoop/bin:/usr/hadoop/sbin:/usr/hive/bin:/usr/java/jdk1.8.0_111/bin)
Beeline version 2.1.1 by Apache Hive
beeline>
Connect (here via !connect jdbc:mysql://localhost:3306/hive_db at the beeline prompt) and enter the username and password:
Connecting to jdbc:mysql://localhost:3306/hive_db
Enter username for jdbc:mysql://localhost:3306/hive_db: root
Enter password for jdbc:mysql://localhost:3306/hive_db: ********
--Test creating a table:
0: jdbc:mysql://localhost:3306/hive_db> create table Test_beeline(id int);
Output:
No rows affected (0.044 seconds)
--View the created table
0: jdbc:mysql://localhost:3306/hive_db> show tables;
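Note that the session above points beeline straight at the MySQL metastore database, so the table is created in MySQL, not in Hive. To issue HiveQL through the HiveServer2 service started earlier, connect with the hive2 JDBC URL instead (a sketch, assuming the default port and authentication settings):
beeline> !connect jdbc:hive2://localhost:10000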