
Problems encountered using Spark with Hadoop 2.2.0 HA

2014-07-20 09:42
In spark-shell, rdd1 had been created from a file on HDFS addressed through the HA nameservice mycluster (hdfs://mycluster/...). Calling toDebugString on it failed:

scala> rdd1.toDebugString

14/07/20 09:42:05 INFO Client: Retrying connect to server: mycluster/202.106.199.34:8020. Already tried 0 time(s); maxRetries=45

14/07/20 09:42:25 WARN Client: Address change detected. Old: mycluster/202.106.199.34:8020 New: mycluster:8020

14/07/20 09:42:25 INFO Client: Retrying connect to server: mycluster:8020. Already tried 0 time(s); maxRetries=45

java.io.IOException: Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "master/192.168.1.202"; destination host is: "mycluster":8020;

    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)

    at org.apache.hadoop.ipc.Client.call(Client.java:1351)

    at org.apache.hadoop.ipc.Client.call(Client.java:1300)

    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)

    at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

    at java.lang.reflect.Method.invoke(Method.java:606)

    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)

    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)

    at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)

    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)

    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1679)

    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)

    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)

    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)

    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)

    at org.apache.hadoop.fs.FileSystem.globStatusInternal(FileSystem.java:1701)

    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1647)

    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:222)

    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)

    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:172)

    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)

    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)

    at scala.Option.getOrElse(Option.scala:120)

    at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)

    at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)

    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)

    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)

    at scala.Option.getOrElse(Option.scala:120)

    at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)

    at org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$debugString$1(RDD.scala:1194)

    at org.apache.spark.rdd.RDD.toDebugString(RDD.scala:1197)

    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:15)

    at $iwC$$iwC$$iwC.<init>(<console>:20)

    at $iwC$$iwC.<init>(<console>:22)

    at $iwC.<init>(<console>:24)

    at <init>(<console>:26)

    at .<init>(<console>:30)

    at .<clinit>(<console>)

    at .<init>(<console>:7)

    at .<clinit>(<console>)

    at $print(<console>)

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

    at java.lang.reflect.Method.invoke(Method.java:606)

    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:788)

    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1056)

    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:614)

    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:645)

    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:609)

    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:796)

    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:841)

    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:753)

    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:601)

    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:608)

    at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:611)

    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:936)

    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:884)

    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:884)

    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)

    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:884)

    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:982)

    at org.apache.spark.repl.Main$.main(Main.scala:31)

    at org.apache.spark.repl.Main.main(Main.scala)

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

    at java.lang.reflect.Method.invoke(Method.java:606)

    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:292)

    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)

    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Caused by: java.io.IOException: Couldn't set up IO streams

    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:711)

    at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)

    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)

    at org.apache.hadoop.ipc.Client.call(Client.java:1318)

    ... 72 more

Caused by: java.nio.channels.UnresolvedAddressException

    at sun.nio.ch.Net.checkAddress(Net.java:127)

    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:640)

    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)

    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)

    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)

    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547)

    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)

    ... 75 more

scala>

Checking the official Spark documentation, I found the following:

Inheriting Cluster Configuration

If you plan to read and write from HDFS using Spark, there are two Hadoop configuration files that should be included on Spark’s classpath:

hdfs-site.xml, which provides default behaviors for the HDFS client.

core-site.xml, which sets the default filesystem name.

The location of these configuration files varies across CDH and HDP versions, but a common location is inside of /etc/hadoop/conf. Some tools, such as Cloudera Manager, create configurations on-the-fly, but offer a mechanism to download copies of them.

To make these files visible to Spark, set HADOOP_CONF_DIR in $SPARK_HOME/spark-env.sh to a location containing the configuration files.

So I added the following to spark-env.sh:

export HADOOP_CONF_DIR=/hadoop2.2.0/etc/hadoop

Running the code above again, it completed normally.
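For reference, a minimal spark-shell session that exercises the HA nameservice might look like this (the HDFS path is a made-up example; only the hdfs://mycluster prefix matters):

scala> // "mycluster" is resolved from the Hadoop config files under HADOOP_CONF_DIR, not via DNS
scala> val rdd1 = sc.textFile("hdfs://mycluster/user/test/README.md")
scala> rdd1.toDebugString   // with HADOOP_CONF_DIR set, no more UnresolvedAddressException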

This shows that with Hadoop HA, the setting dfs.nameservices=mycluster introduces a logical name that stands for the cluster’s NameNodes. If Spark cannot find the Hadoop configuration files, the HDFS client cannot resolve the name mycluster and instead treats it as a plain hostname, which fails with the UnresolvedAddressException seen above.
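For context, the HA nameservice is typically declared in hdfs-site.xml roughly as follows (nn1/nn2 and their host names are placeholders; the exact values depend on the cluster):

<!-- Logical name for the HA cluster; clients address it as hdfs://mycluster/... -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<!-- The NameNodes that back the nameservice -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn1-host:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn2-host:8020</value>
</property>
<!-- Client-side proxy that fails over between nn1 and nn2 -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>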

In a non-HA setup this HADOOP_CONF_DIR variable is not needed for this particular problem, because there is only a single NameNode.
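In that case the NameNode’s address can simply appear in the file URI, for example (host name hypothetical):

scala> val rdd = sc.textFile("hdfs://namenode-host:8020/user/test/README.md")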