CentOS 6.5 64-bit: Installing Hadoop 2.7.0, MapReduce Log Analysis, Hive 2.1.0, Querying Hive via JDBC (4)
2016-09-13 10:50
Part 4: Querying Hive via JDBC
To query Hive over JDBC, a few things need to be in place, as mentioned in the earlier parts:
1. Add the following to Hadoop's core-site.xml, so that the root user may proxy for members of any group connecting from any host:

    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
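Hadoop has to pick up the new proxy-user settings before they take effect. Restarting the cluster works; on a running Hadoop 2.7 cluster the proxy-user configuration can also be refreshed in place with the standard admin commands (sketch below, assuming the daemons are up and you have admin rights):

```shell
# Refresh the superuser/proxy-user group mappings on the NameNode
hdfs dfsadmin -refreshSuperUserGroupsConfiguration

# Refresh the same mappings on the ResourceManager
yarn rmadmin -refreshSuperUserGroupsConfiguration
```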
2. Enable custom authentication in Hive; see:
http://blog.csdn.net/system1024/article/details/51955936
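The linked post covers plugging a custom password authenticator into HiveServer2. The hive-site.xml side of it looks roughly like this; the class name below is a placeholder for your own implementation, which must implement org.apache.hive.service.auth.PasswdAuthenticationProvider and be on HiveServer2's classpath:

```xml
<property>
  <name>hive.server2.authentication</name>
  <value>CUSTOM</value>
</property>
<property>
  <!-- placeholder: replace with your own authenticator class -->
  <name>hive.server2.custom.authentication.class</name>
  <value>hive.server2.auth.MyPasswdAuthenticator</value>
</property>
```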
3. Write a test program:
    package hive.server2.query;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class ApiQueryTest {

        private static final String driverName = "org.apache.hive.jdbc.HiveDriver";
        private static final Logger logger = LogManager.getLogger(ApiQueryTest.class);

        public static void main(String[] args) {
            // Top 10 URL/status combinations by total request count,
            // from the api_logs table built in the earlier parts.
            String sql = "select sum(num) total, url, status from apis.api_logs "
                    + "group by status, url order by total desc limit 10";
            try {
                // Register the Hive JDBC driver.
                Class.forName(driverName);
                // Connect to HiveServer2; the user/password pair is checked by
                // the custom authentication provider configured in step 2.
                // try-with-resources closes the connection and statement even on error.
                try (Connection con = DriverManager.getConnection(
                            "jdbc:hive2://10.68.128.215:10000", "root", "kangyun9413");
                        Statement stmt = con.createStatement()) {
                    System.out.println("Running: " + sql);
                    try (ResultSet res = stmt.executeQuery(sql)) {
                        while (res.next()) {
                            System.out.println(res.getString(1) + "\t"
                                    + res.getString(2) + "\t" + res.getString(3));
                        }
                    }
                }
            } catch (Exception e) {
                logger.error("Hive query failed", e);
            }
        }
    }
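To compile and run the program, the Hive JDBC driver and Log4j 2 must be on the classpath. With Maven that is roughly the following (a sketch; the versions are assumptions matched to the Hive 2.1.0 cluster from the earlier parts):

```xml
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-jdbc</artifactId>
  <version>2.1.0</version>
</dependency>
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <version>2.6.2</version>
</dependency>
```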
Run: