Pitfalls encountered with Zeppelin + Spark
2017-11-27 16:41
### 1. Error: java.lang.NoSuchFieldError: HIVE_STATS_JDBC_TIMEOUT
After a full day of troubleshooting, the cause turned out to be a broken Spark client installation; reinstalling the Spark client fixed the problem.
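This particular NoSuchFieldError usually means mismatched Hive jars on the classpath: HIVE_STATS_JDBC_TIMEOUT exists in the Hive 1.2.x fork that Spark 2.0 bundles but was removed in later Hive releases, which is consistent with a half-installed or mixed-version client. A minimal sanity check after reinstalling, assuming the Spark install path used later in this post:

```bash
# Make sure Zeppelin picks up the reinstalled client: set SPARK_HOME in
# conf/zeppelin-env.sh (path taken from the jar location used in fix 3 below;
# adjust to your own layout).
export SPARK_HOME=/home/hadoop/spark-2.0.0-bin-hadoop2.6

# Verify the client starts on its own before retrying from Zeppelin.
$SPARK_HOME/bin/spark-submit --version
```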
### 2. Error: Found both spark.driver.extraClassPath and SPARK_CLASSPATH
    org.apache.spark.SparkException: Found both spark.driver.extraClassPath and SPARK_CLASSPATH. Use only the former.
        at org.apache.spark.SparkConf$$anonfun$validateSettings$7$$anonfun$apply$8.apply(SparkConf.scala:543)
        at org.apache.spark.SparkConf$$anonfun$validateSettings$7$$anonfun$apply$8.apply(SparkConf.scala:541)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.SparkConf$$anonfun$validateSettings$7.apply(SparkConf.scala:541)
        at org.apache.spark.SparkConf$$anonfun$validateSettings$7.apply(SparkConf.scala:529)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.SparkConf.validateSettings(SparkConf.scala:529)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:368)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2256)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:831)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:823)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:823)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
        at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)
        at org.apache.zeppelin.spark.SparkInterpreter.createSparkSession(SparkInterpreter.java:368)
        at org.apache.zeppelin.spark.SparkInterpreter.getSparkSession(SparkInterpreter.java:233)
        at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:841)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491)
        at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
        at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
    INFO [2017-11-27 10:36:49,880] ({pool-2-thread-4} Logging.scala[logInfo]:54) - Successfully stopped SparkContext
Fix: edit bin/interpreter.sh and remove the option --driver-class-path "${ZEPPELIN_CLASSPATH_OVERRIDES}:${CLASSPATH}" from the spark-submit invocation, as sketched below.
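The exact spark-submit line in bin/interpreter.sh differs across Zeppelin releases, so the following is only a sketch assuming the variable names shown above; if the pattern does not match your copy, delete the option by hand instead:

```bash
# Back up interpreter.sh, then strip the --driver-class-path option from the
# spark-submit invocation. The option text is an assumption based on the
# 0.7-era script; verify it matches your file before running.
sed -i.bak \
  's| --driver-class-path "${ZEPPELIN_CLASSPATH_OVERRIDES}:${CLASSPATH}"||' \
  bin/interpreter.sh
```

Spark only refuses to start when both classpath mechanisms are set at once; with the option gone, the deprecated SPARK_CLASSPATH alone is accepted (with a warning), which clears the conflict.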
### 3. Error
        at org.apache.spark.network.client.TransportResponseHandler.handle(TransportResponseHandler.java:223)
        at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:121)
        at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at java.lang.Thread.run(Thread.java:745)
Fix: edit conf/zeppelin-env.sh and add:
export SPARK_SUBMIT_OPTIONS="--jars /home/hadoop/spark-2.0.0-bin-hadoop2.6/jars/mysql-connector-java-5.1.11-bin.jar"
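After saving zeppelin-env.sh, the interpreter process has to be relaunched for spark-submit to pick up the new --jars value. A quick sanity check, assuming the same jar path as above:

```bash
# Check that the connector jar really exists where zeppelin-env.sh points.
ls -l /home/hadoop/spark-2.0.0-bin-hadoop2.6/jars/mysql-connector-java-5.1.11-bin.jar

# Restart Zeppelin so the Spark interpreter is relaunched with the new option.
bin/zeppelin-daemon.sh restart
```

If more than one driver jar is needed, --jars accepts a comma-separated list.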