Setting Up Hadoop 2.7.3 + Hive 2.1.1 with MySQL (Configuring Hive + MySQL + Connector) (Part 3)
2017-03-08 23:35
Continued from the previous post: Setting Up Hadoop 2.7.3 + Hive 2.1.1 with MySQL (Configuring Hive + Hadoop) (Part 2)
Preparation: download the latest MySQL Connector/J from
https://dev.mysql.com/downloads/connector/j/
Example: download mysql-connector-java-5.1.41.tar.gz

1. Extract the connector archive
1.1. Extract it
[root@localhost Software]# tar xzf mysql-connector-java-5.1.41.tar.gz
[root@localhost Software]# cd mysql-connector-java-5.1.41/
1.2. List the directory contents
[root@localhost mysql-connector-java-5.1.41]# ll
1.3. Copy the driver jar into hive/lib
[root@localhost Software]# cp mysql-connector-java-5.1.41/mysql-connector-java-5.1.41-bin.jar /usr/hive/lib/mysql-connector-java-5.1.41-bin.jar

2. Log in to MySQL and create the database hive_db (note: this name is referenced in hive-site.xml)
2.1. Username: root, password: password. Open another terminal, log in to MySQL, and create the database hive_db:
[root@localhost hive]# mysql -u root -ppassword
mysql> create database hive_db;

3. Edit hive-site.xml
Only the modified properties are listed below; everything else keeps its default value. (Note: the connection-URL parameter is createDatabaseIfNotExist, not createDatabaseIfNoExist.)
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/usr/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>
  <property>
    <name>hive.metastore.local</name>
    <value>true</value>
    <description>Use false if a production metastore server is used</description>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/tmp/hive</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive_db?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore. To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL. For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>password</value>
    <description>Password to use against metastore database</description>
  </property>
</configuration>
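The property list in hive-site.xml is repetitive, so it can help to generate it programmatically. A minimal sketch, assuming the same property names and example values used above (replace the credentials with your own):

```python
import xml.etree.ElementTree as ET

# Properties configured above; host, database, and password are the
# example values from this walkthrough, not defaults.
PROPS = {
    "hive.metastore.warehouse.dir": "/usr/hive/warehouse",
    "hive.metastore.local": "true",
    "hive.exec.scratchdir": "/tmp/hive",
    "javax.jdo.option.ConnectionURL":
        "jdbc:mysql://localhost:3306/hive_db?createDatabaseIfNotExist=true",
    "javax.jdo.option.ConnectionDriverName": "com.mysql.jdbc.Driver",
    "javax.jdo.option.ConnectionUserName": "root",
    "javax.jdo.option.ConnectionPassword": "password",
}

def build_hive_site(props):
    """Build the <configuration> document for hive-site.xml from a dict."""
    root = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    print(build_hive_site(PROPS))
```

Using ElementTree also guarantees the output is well-formed XML, which hand-editing does not.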
4. Initialize the metastore schema with schematool
[root@localhost hive]# schematool -dbType mysql -initSchema
-- On success it prints:
schemaTool completed

5. Start the Hive service
5.1. Start the Hive metastore service
[root@localhost hive]# hive --service metastore &
-- When the screen stops printing new messages, press Ctrl+C to get the shell back
5.2. Check the process list
[root@localhost hive]# jps
-- The output now shows one extra process (RunJar):
51280 Jps
5985 SecondaryNameNode
6226 ResourceManager
45766 DataNode
5753 NameNode
51194 RunJar
6348 NodeManager
5.3. If needed, start the Hive remote service on port 10000 (in Hive 2.x this is HiveServer2; the old hiveserver service was removed):
[root@localhost hive]# hive --service hiveserver2 &

6. Test whether the environment is configured correctly
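Before running the tests below, it helps to confirm that the daemons listed in step 5.2 are all up. A small, hypothetical helper that scans `jps` output for the required daemon names (the names are the ones shown in step 5.2):

```python
# Daemons expected after steps 5.1-5.2; RunJar is the Hive metastore.
REQUIRED = {"NameNode", "DataNode", "SecondaryNameNode",
            "ResourceManager", "NodeManager", "RunJar"}

def missing_daemons(jps_output, required=REQUIRED):
    """Return the set of required daemons absent from `jps` output.

    Each jps line looks like '<pid> <MainClass>'.
    """
    running = {parts[1] for line in jps_output.splitlines()
               if len(parts := line.split()) == 2}
    return required - running

if __name__ == "__main__":
    # In practice, feed this the real output of `jps`; here a sample
    # with the metastore (RunJar) not yet started:
    sample = "5985 SecondaryNameNode\n45766 DataNode\n5753 NameNode"
    print(missing_daemons(sample))
```

A non-empty result tells you which start script (start-dfs.sh, start-yarn.sh, or `hive --service metastore`) still needs to run.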
6.1. Prepare a text file to import, /root/桌面/Test/wc-in/a.txt, with one comma-separated record per line:
1,h
2,i
3,v
4,e
6.2. Log in to Hive and test creating a table
[root@localhost hadoop]# hive
6.2.1. Create a table, specifying comma (,) as the field delimiter
hive> create table a(id int, name string)
    > row format delimited fields terminated by ',';
-- Output:
OK
Time taken: 0.288 seconds
6.2.2. Load the file a.txt
hive> load data local inpath '/root/桌面/Test/wc-in/a.txt' into table a;
-- Output:
Loading data to table default.a
OK
Time taken: 0.763 seconds
6.2.3. Check the result
hive> select * from a;
-- Output:
OK
1	h
2	i
3	v
4	e
Time taken: 0.309 seconds, Fetched: 4 row(s)
6.3. Using dfs commands inside Hive
6.3.1. Look up table a's storage path in HDFS
hive> dfs -ls /usr/hive/warehouse/a;
-- Output:
Found 1 items
-rw-r--r--   1 root supergroup   16 2017-03-08 17:46 /usr/hive/warehouse/a/a.txt
6.3.2. View the file contents
hive> dfs -cat /usr/hive/warehouse/a/*;
-- Output:
1,h
2,i
3,v
4,e
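The `row format delimited fields terminated by ','` clause makes Hive split each line of a.txt on the comma, mapping the first field to `id` (int) and the second to `name` (string). That parsing can be mimicked outside Hive in a few lines; a sketch using Python's csv module rather than Hive itself:

```python
import csv
import io

A_TXT = "1,h\n2,i\n3,v\n4,e\n"  # contents of a.txt from step 6.1

def parse_delimited(text, delimiter=","):
    """Split lines the way the delimited table 'a' does:
    first field -> int id, second field -> string name."""
    rows = []
    for rec in csv.reader(io.StringIO(text), delimiter=delimiter):
        if rec:  # skip blank lines
            rows.append((int(rec[0]), rec[1]))
    return rows

if __name__ == "__main__":
    for id_, name in parse_delimited(A_TXT):
        print(id_, name)
```

This is also a quick way to sanity-check an input file for malformed rows before loading it into the table.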
7. Log in to MySQL and inspect the created table's metadata
[root@localhost conf]# mysql -u root -ppassword
mysql> use hive_db;
mysql> select TBL_ID, CREATE_TIME, DB_ID, OWNER, TBL_NAME, TBL_TYPE from TBLS;
-- Output:
+--------+-------------+-------+-------+----------+---------------+
| TBL_ID | CREATE_TIME | DB_ID | OWNER | TBL_NAME | TBL_TYPE      |
+--------+-------------+-------+-------+----------+---------------+
|     37 |  1488966386 |     1 | root  | a        | MANAGED_TABLE |
+--------+-------------+-------+-------+----------+---------------+
1 row in set (0.03 sec)
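CREATE_TIME in TBLS is stored as seconds since the Unix epoch (UTC). Converting the value above confirms it matches the 2017-03-08 17:46 file timestamp seen in step 6.3, assuming the machine runs on UTC+8 local time:

```python
from datetime import datetime, timedelta, timezone

CREATE_TIME = 1488966386  # value from the TBLS row above

# The metastore records seconds since the Unix epoch, in UTC.
utc = datetime.fromtimestamp(CREATE_TIME, tz=timezone.utc)
# Assumption: the host in this walkthrough uses UTC+8 local time.
local = utc.astimezone(timezone(timedelta(hours=8)))

print(utc.strftime("%Y-%m-%d %H:%M:%S UTC"))      # 2017-03-08 09:46:26 UTC
print(local.strftime("%Y-%m-%d %H:%M:%S UTC+8"))  # 2017-03-08 17:46:26 UTC+8
```

The local time agrees with the `dfs -ls` listing, which is a handy cross-check that the metastore row really corresponds to table `a`.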
8. Inspect the generated files in HDFS (same as step 6.3 above)
8.1. Look up table a's storage path
[root@localhost hadoop]# hdfs dfs -ls /usr/hive/warehouse/a
-- Output:
Found 1 items
-rw-r--r--   1 root supergroup   16 2017-03-08 17:46 /usr/hive/warehouse/a/a.txt
8.2. View the contents
[root@localhost hadoop]# hdfs dfs -cat /usr/hive/warehouse/a/*
-- Output:
1,h
2,i
3,v
4,e

Troubleshooting common problems:
1. Error when starting Hive
[root@localhost hive]# hive
-- Error message:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/root/24f1d91f-f32b-47e1-824d-ba26b02bd13e. Name node is in safe mode.
Cause: Hadoop is in safe mode.
Fix: leave safe mode
[root@localhost hadoop]# hadoop dfsadmin -safemode leave
-- Output:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Safe mode is OFF
2. Error when loading data
hive> load data local inpath '/root/桌面/Test/wc-in/a.txt' into table a;
-- Error message:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /usr/hive/warehouse/a/a_copy_2.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
Cause: no DataNode is running.
Fix:
[root@localhost hive]# start-dfs.sh
[root@localhost hive]# jps
-- Output:
51152 Jps
5985 SecondaryNameNode
6226 ResourceManager
45766 DataNode
5753 NameNode
6348 NodeManager

A reader-requested example: using the HiveServer2 client and the beeline command.
-- Start the service; when the screen stops printing new messages, press Ctrl+C to get the shell back
[root@localhost bin]# hiveserver2
[root@localhost bin]# beeline
-- Output:
which: no hbase in (/usr/lib64/qt-3.3/bin:/root/perl5/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/hadoop/bin:/usr/hadoop/bin:/usr/hadoop/sbin:/usr/hive/bin:/usr/java/jdk1.8.0_111/bin:/root/bin:/usr/hadoop/bin:/usr/hadoop/sbin:/usr/hive/bin:/usr/java/jdk1.8.0_111/bin)
Beeline version 2.1.1 by Apache Hive
beeline>
Connect, then enter the username and password:
beeline> !connect jdbc:mysql://localhost:3306/hive_db
Connecting to jdbc:mysql://localhost:3306/hive_db
Enter username for jdbc:mysql://localhost:3306/hive_db: root
Enter password for jdbc:mysql://localhost:3306/hive_db: ********
(Note: this URL connects beeline straight to the MySQL metastore database. To query Hive itself through HiveServer2, connect to jdbc:hive2://localhost:10000 instead.)
-- Test creating a table:
0: jdbc:mysql://localhost:3306/hive_db> create table Test_beeline(id int);
-- Output:
No rows affected (0.044 seconds)
-- List the tables:
0: jdbc:mysql://localhost:3306/hive_db> show tables;
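Two kinds of JDBC URLs appear in this series: the MySQL metastore URL set in hive-site.xml, and the hive2 URL that HiveServer2 listens on. They differ only in scheme, port, and parameters; a small, hypothetical helper makes the distinction explicit (function names and defaults are this sketch's, not part of Hive):

```python
def mysql_metastore_url(host="localhost", port=3306, db="hive_db",
                        create_if_missing=True):
    """JDBC URL for the MySQL metastore, as set in hive-site.xml."""
    url = f"jdbc:mysql://{host}:{port}/{db}"
    if create_if_missing:
        url += "?createDatabaseIfNotExist=true"
    return url

def hiveserver2_url(host="localhost", port=10000, db="default"):
    """JDBC URL beeline uses to reach HiveServer2 itself."""
    return f"jdbc:hive2://{host}:{port}/{db}"

if __name__ == "__main__":
    print(mysql_metastore_url())  # what hive-site.xml points at
    print(hiveserver2_url())      # what `!connect` should use for Hive queries
```

Keeping the two apart avoids the pitfall above: connecting beeline to the jdbc:mysql URL creates tables in the metastore database directly, bypassing Hive.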