
Spark master fails to start: port 8080 already in use (SelectChannelConnector@0.0.0.0:8080: java.net.BindException)

2014-07-10 14:52
Starting the Spark master produced the following error (アドレスは既に使用中です is Japanese for "Address already in use"; the JVM was apparently running under a Japanese locale):

14/07/10 15:48:14 WARN AbstractLifeCycle: FAILED SelectChannelConnector@0.0.0.0:8080: java.net.BindException: アドレスは既に使用中です
java.net.BindException: アドレスは既に使用中です
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.eclipse.jetty.server.Server.doStart(Server.java:293)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.apache.spark.ui.JettyUtils$$anonfun$1.apply$mcV$sp(JettyUtils.scala:192)
at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
at scala.util.Try$.apply(Try.scala:161)
at org.apache.spark.ui.JettyUtils$.connect$1(JettyUtils.scala:191)
at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:205)
at org.apache.spark.ui.WebUI.bind(WebUI.scala:99)
at org.apache.spark.deploy.master.Master.preStart(Master.scala:124)
at akka.actor.ActorCell.create(ActorCell.scala:562)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
at akka.dispatch.Mailbox.run(Mailbox.scala:218)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/07/10 15:48:14 WARN AbstractLifeCycle: FAILED org.eclipse.jetty.server.Server@1a33bbf0: java.net.BindException: アドレスは既に使用中です
java.net.BindException: アドレスは既に使用中です
    ... (same stack trace as above)
14/07/10 15:48:14 INFO JettyUtils: Failed to create UI at port, 8080. Trying again.
14/07/10 15:48:14 INFO JettyUtils: Error was: Failure(java.net.BindException: アドレスは既に使用中です)
14/07/10 15:48:24 WARN AbstractLifeCycle: FAILED SelectChannelConnector@0.0.0.0:8081: java.net.BindException: アドレスは既に使用中です
java.net.BindException: アドレスは既に使用中です
    ... (same stack trace as above)
14/07/10 15:48:24 WARN AbstractLifeCycle: FAILED org.eclipse.jetty.server.Server@506d41d8: java.net.BindException: アドレスは既に使用中です
java.net.BindException: アドレスは既に使用中です
    ... (same stack trace as above)
14/07/10 15:48:24 INFO JettyUtils: Failed to create UI at port, 8081. Trying again.
14/07/10 15:48:24 INFO JettyUtils: Error was: Failure(java.net.BindException: アドレスは既に使用中です)
Exception in thread "main" java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:107)
at org.apache.spark.deploy.master.Master$.startSystemAndActor(Master.scala:791)
at org.apache.spark.deploy.master.Master$.main(Master.scala:765)
at org.apache.spark.deploy.master.Master.main(Master.scala)

As root, check which process is using the port with `netstat -apn | grep 8080`:

[root@hadoop186 hadoop]# netstat -apn | grep 8080
tcp        0      0 :::8080        :::*        LISTEN        3985/java

The output shows that a Java process with PID 3985 is listening on the port.
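Before (re)starting the master, you can also probe from the shell whether the web UI port is free. A minimal sketch, with two caveats: it relies on bash's `/dev/tcp` virtual device (a bashism, not POSIX), and it only detects listeners reachable on 127.0.0.1:

```shell
#!/usr/bin/env bash
# port_in_use PORT -- succeeds (exit 0) if something accepts TCP
# connections on 127.0.0.1:PORT, fails (non-zero) otherwise.
# The subshell opens fd 3 on bash's /dev/tcp pseudo-device; the fd
# is closed automatically when the subshell exits.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

# Example: probe the default Spark master web UI port
if port_in_use 8080; then
  echo "port 8080 is in use -- pick another SPARK_MASTER_WEBUI_PORT"
else
  echo "port 8080 is free"
fi
```

This only tells you *whether* the port is taken; to find *who* holds it, you still need `netstat -apn | grep 8080` (or `lsof -i :8080`) as above.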
So I pointed a browser directly at that port:

(screenshot omitted: the page that loaded was a Hadoop web UI)

From the browser window it was clear that Hadoop was occupying port 8080.

There are clearly two possible fixes:

The first is to find the setting in the Hadoop configuration that uses port 8080 and change it there, which is the more troublesome option.

So we take the second approach instead: find the 8080 setting in Spark's configuration and make the Spark web UI use a different port. The default lives in sbin/start-master.sh:

[hadoop@hadoop186 sbin]$ cat start-master.sh
#!/usr/bin/env bash

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# Starts the master on the machine this script is executed on.

sbin=`dirname "$0"`
sbin=`cd "$sbin"; pwd`

START_TACHYON=false

while (( "$#" )); do
  case $1 in
    --with-tachyon)
      if [ ! -e "$sbin"/../tachyon/bin/tachyon ]; then
        echo "Error: --with-tachyon specified, but tachyon not found."
        exit -1
      fi
      START_TACHYON=true
      ;;
  esac
  shift
done

. "$sbin/spark-config.sh"

. "$SPARK_PREFIX/bin/load-spark-env.sh"

if [ "$SPARK_MASTER_PORT" = "" ]; then
  SPARK_MASTER_PORT=7077
fi

if [ "$SPARK_MASTER_IP" = "" ]; then
  SPARK_MASTER_IP=`hostname`
fi

if [ "$SPARK_MASTER_WEBUI_PORT" = "" ]; then
  SPARK_MASTER_WEBUI_PORT=8080
fi

"$sbin"/spark-daemon.sh start org.apache.spark.deploy.master.Master 1 --ip $SPARK_MASTER_IP --port $SPARK_MASTER_PORT --webui-port $SPARK_MASTER_WEBUI_PORT

if [ "$START_TACHYON" == "true" ]; then
  "$sbin"/../tachyon/bin/tachyon bootstrap-conf $SPARK_MASTER_IP
  "$sbin"/../tachyon/bin/tachyon format -s
  "$sbin"/../tachyon/bin/tachyon-start.sh master
fi

Just change SPARK_MASTER_WEBUI_PORT here to any port that is not in use and the master starts without problems.
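Editing start-master.sh directly works, but the same default can be overridden without touching the script: the `if` block above only assigns SPARK_MASTER_WEBUI_PORT when it is empty, and load-spark-env.sh (sourced earlier in the script) reads conf/spark-env.sh. A config-fragment sketch (the port 8090 is just an example; any free port works):

```shell
# conf/spark-env.sh -- sourced by load-spark-env.sh, so this value is
# already set by the time start-master.sh checks its defaults
export SPARK_MASTER_WEBUI_PORT=8090
```

This keeps the change out of the stock scripts, so it survives a Spark upgrade.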
Startup log after the change:

[hadoop@hadoop186 spark]$ cd logs/
[hadoop@hadoop186 logs]$ ls
spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop186.out spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop186.out
[hadoop@hadoop186 logs]$ cat spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop186.out
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Spark Command: /usr/java/jdk1.7.0_45/bin/java -cp ::/home/hadoop/spark-1.0.0-bin-cdh4/conf:/home/hadoop/spark-1.0.0-bin-cdh4/lib/spark-assembly-1.0.0-hadoop2.0.0-mr1-cdh4.2.0.jar:/home/hadoop/spark-1.0.0-bin-cdh4/lib/datanucleus-core-3.2.2.jar:/home/hadoop/spark-1.0.0-bin-cdh4/lib/datanucleus-rdbms-3.2.1.jar:/home/hadoop/spark-1.0.0-bin-cdh4/lib/datanucleus-api-jdo-3.2.1.jar:/home/hadoop/hadoop/etc/hadoop:/home/hadoop/hadoop/etc/hadoop -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m org.apache.spark.deploy.master.Master --ip hadoop186 --port 7077 --webui-port 8080
========================================

14/07/10 15:23:40 INFO SecurityManager: Changing view acls to: hadoop
14/07/10 15:23:40 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop)
14/07/10 15:23:42 INFO Slf4jLogger: Slf4jLogger started
14/07/10 15:23:43 INFO Remoting: Starting remoting
14/07/10 15:23:43 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@hadoop186:7077]
14/07/10 15:23:44 INFO Master: Starting Spark master at spark://hadoop186:7077
14/07/10 15:23:55 INFO MasterWebUI: Started MasterWebUI at http://hadoop186:8080
14/07/10 15:23:55 INFO Master: I have been elected leader! New state: ALIVE
14/07/10 15:23:56 INFO Master: Registering worker hadoop186:47966 with 1 cores, 846.0 MB RAM
Check the running processes with jps:
[hadoop@hadoop186 logs]$ jps
2675 QuorumPeerMain
32031 Worker
2764 JournalNode
32558 Jps
3163 DFSZKFailoverController
3985 NodeManager
2847 NameNode
31900 Master
3872 ResourceManager
2927 DataNode
[hadoop@hadoop186 logs]$

Both the Master and Worker processes are now running.