
Lesson 73 (Detailed Insider Notes): Spark SQL Thrift Server in Practice. Live lectures every evening at 20:00 on YY channel 68917580.

Lesson 73: Spark SQL Thrift Server in Practice

/* Lectured by Wang Jialin, http://weibo.com/ilovepains */


1. Start Hadoop

root@master:/usr/local/hadoop-2.6.0/sbin# start-dfs.sh

Starting namenodes on [master]

master: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-namenode-master.out

worker1: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker1.out

worker2: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker2.out

worker4: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker4.out

worker6: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker6.out

worker7: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker7.out

worker3: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker3.out

worker5: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker5.out

worker8: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker8.out

Starting secondary namenodes [0.0.0.0]

0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-master.out

2. Start Spark

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# start-all.sh

org.apache.spark.deploy.master.Master running as process 10457. Stop it first.

worker7: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker7.out

worker1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker1.out

worker2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker2.out

worker4: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker4.out

worker8: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker8.out

worker3: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker3.out

worker6: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker6.out

worker5: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker5.out

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# jps

18341 Jps

10551 HistoryServer

18152 SecondaryNameNode

10457 Master

17934 NameNode

3. Start the Thrift Server

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# ./start-thriftserver.sh

starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# cat /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out

An error occurred:

16/04/04 16:42:04 WARN hive.metastore: Failed to connect to the MetaStore Server...

16/04/04 16:42:04 INFO hive.metastore: Waiting 1 seconds before next connection attempt.

16/04/04 16:42:05 WARN metadata.Hive: Failed to access metastore. This class should not accessed in runtime.

org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1236)

at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:174)

at org.apache.hadoop.hive.ql.metadata.Hive.<clinit>(Hive.java:166)

at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)

at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:194)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)

at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

at java.lang.reflect.Constructor.newInstance(Constructor.java:422)

at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:249)

at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:327)

at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:237)

at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:441)
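What this stack trace is saying is that the Thrift Server could not reach a Hive metastore service. In a setup like this, hive-site.xml on Spark's classpath typically points hive.metastore.uris at a standalone metastore (the default metastore port is 9083), so that service has to be running before the Thrift Server starts. A quick way to confirm what the Thrift Server expects, assuming hive-site.xml sits in Spark's conf directory:

# check which metastore URI the Thrift Server will use (conf location is an assumption)
grep -A 1 'hive.metastore.uris' /usr/local/spark-1.6.0-bin-hadoop2.6/conf/hive-site.xml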

4. Start the Hive metastore service (hive --service metastore)

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# hive --service metastore

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

Starting Hive Metastore Server

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

5. Start the Thrift Server again

^C
root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# ./start-thriftserver.sh

starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# cat /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out

Spark Command: /usr/local/jdk1.8.0_60/bin/java -cp /usr/local/spark-1.6.0-bin-hadoop2.6/conf/:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/usr/local/hadoop-2.6.0/etc/hadoop/ -Xms1g -Xmx1g org.apache.spark.deploy.SparkSubmit --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 spark-internal

========================================

16/04/04 16:57:04 INFO thriftserver.HiveThriftServer2: Starting SparkContext

16/04/04 16:57:04 INFO spark.SparkContext: Running Spark version 1.6.0

16/04/04 16:57:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

16/04/04 16:57:05 INFO spark.SecurityManager: Changing view acls to: root

16/04/04 16:57:05 INFO spark.SecurityManager: Changing modify acls to: root

16/04/04 16:57:05 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)

16/04/04 16:57:06 INFO util.Utils: Successfully started service 'sparkDriver' on port 56403.

16/04/04 16:57:07 INFO slf4j.Slf4jLogger: Slf4jLogger started

16/04/04 16:57:07 INFO Remoting: Starting remoting

16/04/04 16:57:08 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.189.1:60284]

16/04/04 16:57:08 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 60284.

16/04/04 16:57:08 INFO spark.SparkEnv: Registering MapOutputTracker

16/04/04 16:57:08 INFO spark.SparkEnv: Registering BlockManagerMaster

16/04/04 16:57:08 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-f2ed166c-3cff-43b5-aa10-a21f9dd4ef16

16/04/04 16:57:08 INFO storage.MemoryStore: MemoryStore started with capacity 517.4 MB

16/04/04 16:57:08 INFO spark.SparkEnv: Registering OutputCommitCoordinator

16/04/04 16:57:08 INFO server.Server: jetty-8.y.z-SNAPSHOT

16/04/04 16:57:08 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040

16/04/04 16:57:08 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.

16/04/04 16:57:08 INFO ui.SparkUI: Started SparkUI at http://192.168.189.1:4040
16/04/04 16:57:08 INFO executor.Executor: Starting executor ID driver on host localhost

16/04/04 16:57:09 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 56354.

16/04/04 16:57:09 INFO netty.NettyBlockTransferService: Server created on 56354

16/04/04 16:57:09 INFO storage.BlockManagerMaster: Trying to register BlockManager

16/04/04 16:57:09 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:56354 with 517.4 MB RAM, BlockManagerId(driver, localhost, 56354)

16/04/04 16:57:09 INFO storage.BlockManagerMaster: Registered BlockManager

16/04/04 16:57:11 INFO scheduler.EventLoggingListener: Logging events to hdfs://master:9000/historyserverforSpark/local-1459760228899

16/04/04 16:57:13 INFO hive.HiveContext: Initializing execution hive, version 1.2.1

16/04/04 16:57:13 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0

16/04/04 16:57:13 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0

16/04/04 16:57:14 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore

16/04/04 16:57:14 INFO metastore.ObjectStore: ObjectStore, initialize called

16/04/04 16:57:14 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored

16/04/04 16:57:14 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored

16/04/04 16:57:14 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin#

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# jps

10551 HistoryServer

18968 RunJar

18152 SecondaryNameNode

10457 Master

17934 NameNode

19343 Jps

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin#

6. Start the Thrift Server against the cluster

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# ./start-thriftserver.sh --master spark://192.168.189.1:7077

starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# cat /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out

Spark Command: /usr/local/jdk1.8.0_60/bin/java -cp /usr/local/spark-1.6.0-bin-hadoop2.6/conf/:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/usr/local/hadoop-2.6.0/etc/hadoop/ -Xms1g -Xmx1g org.apache.spark.deploy.SparkSubmit --master spark://192.168.189.1:7077 --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 spark-internal

========================================

16/04/04 17:04:20 INFO thriftserver.HiveThriftServer2: Starting SparkContext

16/04/04 17:04:20 INFO spark.SparkContext: Running Spark version 1.6.0

16/04/04 17:04:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

16/04/04 17:04:21 INFO spark.SecurityManager: Changing view acls to: root

16/04/04 17:04:21 INFO spark.SecurityManager: Changing modify acls to: root

16/04/04 17:04:21 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)

16/04/04 17:04:22 INFO util.Utils: Successfully started service 'sparkDriver' on port 35661.

16/04/04 17:04:23 INFO slf4j.Slf4jLogger: Slf4jLogger started

16/04/04 17:04:23 INFO Remoting: Starting remoting

16/04/04 17:04:24 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.189.1:35140]

16/04/04 17:04:24 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 35140.

16/04/04 17:04:24 INFO spark.SparkEnv: Registering MapOutputTracker

16/04/04 17:04:24 INFO spark.SparkEnv: Registering BlockManagerMaster

16/04/04 17:04:24 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-ebe5b0cb-3664-45bf-b239-bf09dba33bb6

16/04/04 17:04:24 INFO storage.MemoryStore: MemoryStore started with capacity 517.4 MB

16/04/04 17:04:24 INFO spark.SparkEnv: Registering OutputCommitCoordinator

16/04/04 17:04:24 INFO server.Server: jetty-8.y.z-SNAPSHOT

16/04/04 17:04:24 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040

16/04/04 17:04:24 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.

16/04/04 17:04:24 INFO ui.SparkUI: Started SparkUI at http://192.168.189.1:4040
16/04/04 17:04:25 INFO client.AppClient$ClientEndpoint: Connecting to master spark://192.168.189.1:7077...

16/04/04 17:04:25 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160404170425-0001

16/04/04 17:04:25 INFO client.AppClient$ClientEndpoint: Executor added: app-20160404170425-0001/0 on worker-20160404163554-192.168.189.3-53359 (192.168.189.3:53359) with 1 cores

16/04/04 17:04:25 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160404170425-0001/0 on hostPort 192.168.189.3:53359 with 1 cores, 1024.0 MB RAM

16/04/04 17:04:25 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 43988.

16/04/04 17:04:25 INFO netty.NettyBlockTransferService: Server created on 43988

16/04/04 17:04:25 INFO storage.BlockManagerMaster: Trying to register BlockManager

16/04/04 17:04:25 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.189.1:43988 with 517.4 MB RAM, BlockManagerId(driver, 192.168.189.1, 43988)

16/04/04 17:04:25 INFO storage.BlockManagerMaster: Registered BlockManager

16/04/04 17:04:25 INFO client.AppClient$ClientEndpoint: Executor added: app-20160404170425-0001/1 on worker-20160404163540-192.168.189.5-56184 (192.168.189.5:56184) with 1 cores

16/04/04 17:04:25 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160404170425-0001/1 on hostPort 192.168.189.5:56184 with 1 cores, 1024.0 MB RAM

16/04/04 17:04:25 INFO client.AppClient$ClientEndpoint: Executor added: app-20160404170425-0001/2 on worker-20160404163538-192.168.189.8-58085 (192.168.189.8:58085) with 1 cores

16/04/04 17:04:25 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160404170425-0001/2 on hostPort 192.168.189.8:58085 with 1 cores, 1024.0 MB RAM

16/04/04 17:04:25 INFO client.AppClient$ClientEndpoint: Executor added: app-20160404170425-0001/3 on worker-20160404163540-192.168.189.9-52324 (192.168.189.9:52324) with 1 cores

16/04/04 17:04:25 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160404170425-0001/3 on hostPort 192.168.189.9:52324 with 1 cores, 1024.0 MB RAM

16/04/04 17:04:25 INFO client.AppClient$ClientEndpoint: Executor added: app-20160404170425-0001/4 on worker-20160404163545-192.168.189.4-53732 (192.168.189.4:53732) with 1 cores

16/04/04 17:04:25 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160404170425-0001/4 on hostPort 192.168.189.4:53732 with 1 cores, 1024.0 MB RAM

16/04/04 17:04:25 INFO client.AppClient$ClientEndpoint: Executor added: app-20160404170425-0001/5 on worker-20160404163553-192.168.189.2-48280 (192.168.189.2:48280) with 1 cores

16/04/04 17:04:25 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160404170425-0001/5 on hostPort 192.168.189.2:48280 with 1 cores, 1024.0 MB RAM

16/04/04 17:04:25 INFO client.AppClient$ClientEndpoint: Executor added: app-20160404170425-0001/6 on worker-20160404163544-192.168.189.7-59892 (192.168.189.7:59892) with 1 cores

16/04/04 17:04:25 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160404170425-0001/6 on hostPort 192.168.189.7:59892 with 1 cores, 1024.0 MB RAM

16/04/04 17:04:25 INFO client.AppClient$ClientEndpoint: Executor added: app-20160404170425-0001/7 on worker-20160404163554-192.168.189.6-37912 (192.168.189.6:37912) with 1 cores

16/04/04 17:04:25 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160404170425-0001/7 on hostPort 192.168.189.6:37912 with 1 cores, 1024.0 MB RAM

16/04/04 17:04:26 INFO client.AppClient$ClientEndpoint: Executor updated: app-20160404170425-0001/1 is now RUNNING

16/04/04 17:04:26 INFO client.AppClient$ClientEndpoint: Executor updated: app-20160404170425-0001/6 is now RUNNING

16/04/04 17:04:26 INFO client.AppClient$ClientEndpoint: Executor updated: app-20160404170425-0001/2 is now RUNNING

16/04/04 17:04:26 INFO client.AppClient$ClientEndpoint: Executor updated: app-20160404170425-0001/4 is now RUNNING

16/04/04 17:04:26 INFO client.AppClient$ClientEndpoint: Executor updated: app-20160404170425-0001/7 is now RUNNING

16/04/04 17:04:26 INFO client.AppClient$ClientEndpoint: Executor updated: app-20160404170425-0001/3 is now RUNNING

16/04/04 17:04:27 INFO client.AppClient$ClientEndpoint: Executor updated: app-20160404170425-0001/5 is now RUNNING

16/04/04 17:04:27 INFO client.AppClient$ClientEndpoint: Executor updated: app-20160404170425-0001/0 is now RUNNING

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin#
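start-thriftserver.sh forwards ordinary spark-submit options, so when running against the standalone cluster the executor resources can be requested explicitly instead of taking the defaults; the numbers below are purely illustrative:

# request specific executor resources from the standalone master (illustrative values)
./start-thriftserver.sh --master spark://192.168.189.1:7077 --executor-memory 1g --total-executor-cores 4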

7. Start Beeline

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/bin# ./beeline

Beeline version 1.6.0 by Apache Hive

beeline>

An error occurred when connecting:

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/bin# ./beeline

Beeline version 1.6.0 by Apache Hive

beeline> !connect jdbc:hive2://master:10000

Connecting to jdbc:hive2://master:10000

Enter username for jdbc:hive2://master:10000: root

Enter password for jdbc:hive2://master:10000:

16/04/04 17:11:11 INFO jdbc.Utils: Supplied authorities: master:10000

16/04/04 17:11:11 INFO jdbc.Utils: Resolved authority: master:10000

16/04/04 17:11:11 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://master:10000

16/04/04 17:11:11 INFO jdbc.HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://master:10000

16/04/04 17:11:11 INFO jdbc.HiveConnection: Transport Used for JDBC connection: null

Error: Could not open client transport with JDBC Uri: jdbc:hive2://master:10000: java.net.ConnectException: Connection refused (state=08S01,code=0)

8. Check the port: nothing is listening on 10000

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/bin# netstat -lanp | grep 10000

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/bin# netstat | grep 10000

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/bin# netstat | grep 8080

unix 3 [ ] STREAM CONNECTED 18080

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/bin# netstat | grep 18080

unix 3 [ ] STREAM CONNECTED 18080

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/bin#

9. The Thrift Server is not running

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# stop-thriftserver.sh

no org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 to stop

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin#
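Since there was nothing to stop, the earlier start attempt must already have exited; the launcher log named in its start message records why. A quick look (same log path as printed above):

# inspect the tail of the Thrift Server launcher log for the exit reason
tail -n 50 /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out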

10. Start the Thrift Server again and investigate

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# start-thriftserver.sh

starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# netstat -lanp | grep 10000

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin#

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# hive --service metastore

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

Starting Hive Metastore Server

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

Root cause: hive --service metastore had not been run in the background, so it got terminated.

Now it is fine:

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# netstat -lanp | grep 10000

tcp6 0 0 :::10000 :::* LISTEN 20315/java

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin#

Do it all again, this time running the metastore in the background:

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# stop-thriftserver.sh

stopping org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin#

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# hive --service metastore &

[1] 20493

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

Starting Hive Metastore Server

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
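Backgrounding with & only ties the metastore to this shell, so closing the terminal may kill it again. A more durable variant would be something like the following (the log path is an arbitrary choice):

# keep the metastore running after logout and capture its output in a log file
nohup hive --service metastore > /tmp/hive-metastore.log 2>&1 &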

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# start-thriftserver.sh

starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# cat /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out

Spark Command: /usr/local/jdk1.8.0_60/bin/java -cp /usr/local/spark-1.6.0-bin-hadoop2.6/conf/:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/usr/local/hadoop-2.6.0/etc/hadoop/ -Xms1g -Xmx1g org.apache.spark.deploy.SparkSubmit --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 spark-internal

========================================

16/04/04 17:39:08 INFO thriftserver.HiveThriftServer2: Starting SparkContext

16/04/04 17:39:08 INFO spark.SparkContext: Running Spark version 1.6.0

16/04/04 17:39:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

16/04/04 17:39:09 INFO spark.SecurityManager: Changing view acls to: root

16/04/04 17:39:09 INFO spark.SecurityManager: Changing modify acls to: root

16/04/04 17:39:09 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)

16/04/04 17:39:10 INFO util.Utils: Successfully started service 'sparkDriver' on port 59520.

16/04/04 17:39:11 INFO slf4j.Slf4jLogger: Slf4jLogger started

16/04/04 17:39:11 INFO Remoting: Starting remoting

16/04/04 17:39:12 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.189.1:57751]

16/04/04 17:39:12 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 57751.

16/04/04 17:39:12 INFO spark.SparkEnv: Registering MapOutputTracker

16/04/04 17:39:12 INFO spark.SparkEnv: Registering BlockManagerMaster

16/04/04 17:39:12 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-b4f044ee-0fab-4732-ba73-f5bba48a039d

16/04/04 17:39:12 INFO storage.MemoryStore: MemoryStore started with capacity 517.4 MB

16/04/04 17:39:12 INFO spark.SparkEnv: Registering OutputCommitCoordinator

16/04/04 17:39:12 INFO server.Server: jetty-8.y.z-SNAPSHOT

16/04/04 17:39:12 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040

16/04/04 17:39:12 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.

16/04/04 17:39:12 INFO ui.SparkUI: Started SparkUI at http://192.168.189.1:4040
16/04/04 17:39:13 INFO executor.Executor: Starting executor ID driver on host localhost

16/04/04 17:39:13 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 55206.

16/04/04 17:39:13 INFO netty.NettyBlockTransferService: Server created on 55206

16/04/04 17:39:13 INFO storage.BlockManagerMaster: Trying to register BlockManager

16/04/04 17:39:13 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:55206 with 517.4 MB RAM, BlockManagerId(driver, localhost, 55206)

16/04/04 17:39:13 INFO storage.BlockManagerMaster: Registered BlockManager

16/04/04 17:39:15 INFO scheduler.EventLoggingListener: Logging events to hdfs://master:9000/historyserverforSpark/local-1459762753036

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# netstat -lanp | grep 10000

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# netstat -lanp | grep 10000

tcp6 0 0 :::10000 :::* LISTEN 20588/java


root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# netstat -lanp | grep 10000

tcp6 0 0 :::10000 :::* LISTEN 20588/java

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin#

11. Back to Beeline; this time it works

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/bin# beeline

Beeline version 1.6.0 by Apache Hive

beeline> !connect jdbc:hive2://master:10000

Connecting to jdbc:hive2://master:10000

Enter username for jdbc:hive2://master:10000: root

Enter password for jdbc:hive2://master:10000:

16/04/04 17:41:51 INFO jdbc.Utils: Supplied authorities: master:10000

16/04/04 17:41:51 INFO jdbc.Utils: Resolved authority: master:10000

16/04/04 17:41:52 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://master:10000

Connected to: Spark SQL (version 1.6.0)

Driver: Spark Project Core (version 1.6.0)

Transaction isolation: TRANSACTION_REPEATABLE_READ

0: jdbc:hive2://master:10000>

0: jdbc:hive2://master:10000> show databases;

+----------+--+

| result |

+----------+--+

| default |

| hive |

+----------+--+

2 rows selected (6.743 seconds)

0: jdbc:hive2://master:10000>

0: jdbc:hive2://master:10000> use hive;

+---------+--+

| result |

+---------+--+

+---------+--+

No rows selected (0.087 seconds)

0: jdbc:hive2://master:10000> show tables;

+------------+--------------+--+

| tableName | isTemporary |

+------------+--------------+--+

| a1 | false |

| a2 | false |

| sogouq2 | false |

+------------+--------------+--+

3 rows selected (0.066 seconds)

0: jdbc:hive2://master:10000>

0: jdbc:hive2://master:10000> desc sogouq2;

+-------------+------------+----------+--+

| col_name | data_type | comment |

+-------------+------------+----------+--+

| id | string | NULL |

| websession | string | NULL |

| word | string | NULL |

| s_seq | int | NULL |

| c_seq | int | NULL |

| website | string | NULL |

+-------------+------------+----------+--+

6 rows selected (0.698 seconds)

0: jdbc:hive2://master:10000>

0: jdbc:hive2://master:10000> select count (*) from sogouq2;

+----------+--+

| _c0 |

+----------+--+

| 1000000 |

+----------+--+

1 row selected (7.415 seconds)

0: jdbc:hive2://master:10000>
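The same connection can also be made non-interactively, which is handy for scripting; a sketch using standard Beeline flags (the query is just an example):

# connect once, run a single statement, and exit
beeline -u jdbc:hive2://master:10000 -n root -e 'select count(*) from hive.sogouq2;'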

==============================================================================

Thrift Server JDBC client code in practice

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# stop-thriftserver.sh

stopping org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# ./start-thriftserver.sh --master spark://192.168.189.1:7077 --hiveconf hive.server2.transport.mode=http --hiveconf hive.server2.thrift.http.path=cliservice

starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# cat /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out

Spark Command: /usr/local/jdk1.8.0_60/bin/java -cp /usr/local/spark-1.6.0-bin-hadoop2.6/conf/:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/usr/local/hadoop-2.6.0/etc/hadoop/ -Xms1g -Xmx1g org.apache.spark.deploy.SparkSubmit --master spark://192.168.189.1:7077 --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 spark-internal --hiveconf hive.server2.transport.mode=http --hiveconf hive.server2.thrift.http.path=cliservice

========================================

16/04/04 18:41:49 INFO thriftserver.HiveThriftServer2: Starting SparkContext

16/04/04 18:41:49 INFO spark.SparkContext: Running Spark version 1.6.0

16/04/04 18:41:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

16/04/04 18:41:51 INFO spark.SecurityManager: Changing view acls to: root

16/04/04 18:41:51 INFO spark.SecurityManager: Changing modify acls to: root

16/04/04 18:41:51 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)

16/04/04 18:41:52 INFO util.Utils: Successfully started service 'sparkDriver' on port 54815.

16/04/04 18:41:53 INFO slf4j.Slf4jLogger: Slf4jLogger started

16/04/04 18:41:54 INFO Remoting: Starting remoting

16/04/04 18:41:54 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.189.1:53723]

16/04/04 18:41:54 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 53723.

16/04/04 18:41:54 INFO spark.SparkEnv: Registering MapOutputTracker

16/04/04 18:41:54 INFO spark.SparkEnv: Registering BlockManagerMaster

16/04/04 18:41:54 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-b7cf4d71-c13f-410a-bc9c-816a44577186

16/04/04 18:41:54 INFO storage.MemoryStore: MemoryStore started with capacity 517.4 MB

16/04/04 18:41:54 INFO spark.SparkEnv: Registering OutputCommitCoordinator

16/04/04 18:41:55 INFO server.Server: jetty-8.y.z-SNAPSHOT

16/04/04 18:41:55 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040

16/04/04 18:41:55 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.

16/04/04 18:41:55 INFO ui.SparkUI: Started SparkUI at http://192.168.189.1:4040
16/04/04 18:41:56 INFO client.AppClient$ClientEndpoint: Connecting to master spark://192.168.189.1:7077...

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin#
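Note that in HTTP transport mode the server no longer listens on 10000; it binds hive.server2.thrift.http.port, which defaults to 10001 (and 10001 is exactly the port the JDBC client resolves further down). So the port check from step 8 becomes, for example:

# in HTTP mode the Thrift Server listens on the HTTP port (default 10001), not 10000
netstat -lanp | grep 10001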

Create a new table in Hive:

CREATE TABLE person(name STRING,age int) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

Contents of person.txt, to be loaded into Hive:

Michael,29

Andy,30    (this line contains a stray extra space)

Justin,19

hive> show tables;

OK

a1

a2

person

sogouq2

Time taken: 0.045 seconds, Fetched: 4 row(s)

hive> load data local inpath '/usr/local/IMF_testdata/person.txt' into table person;

Loading data to table hive.person

Table hive.person stats: [numFiles=1, totalSize=30]

OK

Time taken: 0.64 seconds

hive> select * from person;

OK

Michael 29

Andy NULL

Justin 19

Time taken: 0.275 seconds, Fetched: 3 row(s)

hive>
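Andy's age comes back NULL because of the stray space noted above: a value like " 30" (or "30 ") does not parse as an int under the comma-delimited row format. One way to clean the file before reloading, assuming the stray space sits next to the comma or at the end of the line:

# strip spaces after commas and trailing spaces, in place
sed -i 's/,[[:space:]]*/,/g; s/[[:space:]]*$//' /usr/local/IMF_testdata/person.txt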

To rule out any other problems, the virtual machine was rebooted, and Hadoop and Spark were restarted.

Run the client program; it fails because it queries the wrong table (people, which does not exist yet):

root@master:/usr/local/IMF_testdata# /usr/local/spark-1.6.0-bin-hadoop2.6/bin/spark-submit --class com.dt.spark.IMFSparkAppsSQL.SparkSQLJDBC2ThriftServer --master spark://192.168.189.1:7077 /usr/local/IMF_testdata/SparkSQLJDBC2ThriftServer.jar

16/04/04 20:06:03 INFO jdbc.Utils: Supplied authorities: Master:10001

16/04/04 20:06:03 WARN jdbc.Utils: ***** JDBC param deprecation *****

16/04/04 20:06:03 WARN jdbc.Utils: The use of hive.server2.transport.mode is deprecated.

16/04/04 20:06:03 WARN jdbc.Utils: Please use transportMode like so: jdbc:hive2://<host>:<port>/dbName;transportMode=<transport_mode_value>

16/04/04 20:06:03 WARN jdbc.Utils: ***** JDBC param deprecation *****

16/04/04 20:06:03 WARN jdbc.Utils: The use of hive.server2.thrift.http.path is deprecated.

16/04/04 20:06:03 WARN jdbc.Utils: Please use httpPath like so: jdbc:hive2://<host>:<port>/dbName;httpPath=<http_path_value>

16/04/04 20:06:03 INFO jdbc.Utils: Resolved authority: Master:10001

java.sql.SQLException: org.apache.spark.sql.AnalysisException: Table not found: people; line 1 pos 17

at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:296)

at org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:392)

at org.apache.hive.jdbc.HivePreparedStatement.executeQuery(HivePreparedStatement.java:109)

at com.dt.spark.IMFSparkAppsSQL.SparkSQLJDBC2ThriftServer.main(SparkSQLJDBC2ThriftServer.java:31)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:497)

at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)

at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)

at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)

at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)

at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Exception in thread "main" java.lang.NullPointerException

at com.dt.spark.IMFSparkAppsSQL.SparkSQLJDBC2ThriftServer.main(SparkSQLJDBC2ThriftServer.java:43)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:497)

at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)

at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)

at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)

at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)

at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

root@master:/usr/local/IMF_testdata#

So just create that table:

hive> use hive;

OK

Time taken: 1.337 seconds

hive> select name from people where age ='29';

FAILED: SemanticException [Error 10001]: Line 1:17 Table not found 'people'

hive> show tables;

OK

a1

a2

person

sogouq2

Time taken: 0.399 seconds, Fetched: 4 row(s)

hive> CREATE TABLE people(name STRING,age int) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

OK

Time taken: 0.501 seconds

hive> load data local inpath '/usr/local/IMF_testdata/person.txt' into table people;

Loading data to table hive.people

Table hive.people stats: [numFiles=1, totalSize=30]

OK

Time taken: 1.089 seconds

hive>

Restart the Thrift Server:

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# stop-thriftserver.sh

stopping org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# ./start-thriftserver.sh --master spark://192.168.189.1:7077 --hiveconf hive.server2.transport.mode=http --hiveconf hive.server2.thrift.http.path=cliservice

starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# cat /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out

The VM was rebooted yet again.

In fact the VM had nothing to do with it: the JDBC session was connected to the default database instead of the hive database, so the table could not be found.
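Naming the database in the JDBC URL (or issuing USE hive from the client before querying) avoids this; the program's own output below even notes that conn.setCatalog had no effect. With Beeline the equivalent connection would look something like the following, with the transportMode/httpPath parameters written the way the deprecation hints in the log suggest:

# connect over HTTP transport directly to the hive database (port 10001 taken from the client log)
beeline -u "jdbc:hive2://192.168.189.1:10001/hive;transportMode=http;httpPath=cliservice" -n root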

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# /usr/local/spark-1.6.0-bin-hadoop2.6/bin/spark-submit --class com.dt.spark.IMFSparkAppsSQL.SparkSQLJDBC2ThriftServer --master spark://192.168.189.1:7077 /usr/local/IMF_testdata/SparkSQLJDBC2ThriftServer100.jar

16/04/04 20:50:35 INFO jdbc.Utils: Supplied authorities: 192.168.189.1:10001

16/04/04 20:50:35 WARN jdbc.Utils: ***** JDBC param deprecation *****

16/04/04 20:50:35 WARN jdbc.Utils: The use of hive.server2.transport.mode is deprecated.

16/04/04 20:50:35 WARN jdbc.Utils: Please use transportMode like so: jdbc:hive2://<host>:<port>/dbName;transportMode=<transport_mode_value>

16/04/04 20:50:35 WARN jdbc.Utils: ***** JDBC param deprecation *****

16/04/04 20:50:35 WARN jdbc.Utils: The use of hive.server2.thrift.http.path is deprecated.

16/04/04 20:50:35 WARN jdbc.Utils: Please use httpPath like so: jdbc:hive2://<host>:<port>/dbName;httpPath=<http_path_value>

16/04/04 20:50:35 INFO jdbc.Utils: Resolved authority: 192.168.189.1:10001

=============================================

=============================================

=========the conn is default =============default

=============================================

=============================================

=============================================

=============================================

=========conn.setCatalog hive ,but not use =============default

=============================================

=============================================

Michael

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin#







