Launching Applications with spark-submit [submitting jobs to a cluster with the script, under the different deploy modes]
2017-03-03 18:28
Once a user application is bundled, it can be launched using the bin/spark-submit script. This script takes care of setting up the classpath with Spark and its dependencies, and can support the different cluster managers and deploy modes that Spark supports:
./bin/spark-submit \
  --class <main-class> \
  --master <master-url> \
  --deploy-mode <deploy-mode> \
  --conf <key>=<value> \
  ... # other options
  <application-jar> \
  [application-arguments]
Some of the commonly used options are:
- --class: The entry point for your application (e.g. org.apache.spark.examples.SparkPi)
- --master: The master URL for the cluster (e.g. spark://23.195.26.187:7077)
- --deploy-mode: Whether to deploy your driver on the worker nodes (cluster) or locally as an external client (client) (default: client) †
- --conf: Arbitrary Spark configuration property in key=value format. For values that contain spaces, wrap "key=value" in quotes.
- application-jar: Path to a bundled jar including your application and all dependencies. The URL must be globally visible inside your cluster, for instance an hdfs:// path or a file:// path that is present on all nodes.
- application-arguments: Arguments passed to the main method of your main class, if any.
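As an illustrative fragment (not part of the original page), a --conf value that contains spaces must be quoted so the shell passes it to spark-submit as a single argument; spark.executor.extraJavaOptions is a standard Spark property used here only as an example:

```
./bin/spark-submit \
  --class <main-class> \
  --master <master-url> \
  --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
  <application-jar>
```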
† A common deployment strategy is to submit your application from a gateway machine that is physically co-located with your worker machines (e.g. the master node in a standalone EC2 cluster). In this setup, client mode is appropriate. In client mode, the driver is launched directly within the spark-submit process, which acts as a client to the cluster. The input and output of the application are attached to the console. Thus, this mode is especially suitable for applications that involve the REPL (e.g. the Spark shell).

Alternatively, if your application is submitted from a machine far from the worker machines (e.g. locally on your laptop), it is common to use cluster mode to minimize network latency between the drivers and the executors. Currently, standalone mode does not support cluster mode for Python applications.
For Python applications, simply pass a .py file in the place of <application-jar> instead of a JAR, and add Python .zip, .egg or .py files to the search path with --py-files.
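To make the mapping between these options and the resulting command line concrete, here is a small sketch. The helper name build_submit_cmd is hypothetical (it is not part of Spark); it simply assembles the argument list you would pass to the shell or to subprocess:

```python
# Hypothetical helper, for illustration only: build a spark-submit
# invocation for a Python application. Not part of Spark's API.
def build_submit_cmd(app, master, py_files=(), app_args=()):
    cmd = ["./bin/spark-submit", "--master", master]
    if py_files:
        # --py-files takes a comma-separated list of .zip/.egg/.py files
        cmd += ["--py-files", ",".join(py_files)]
    cmd.append(app)        # the .py file takes the place of <application-jar>
    cmd += list(app_args)  # arguments passed through to the application
    return cmd

print(build_submit_cmd(
    "examples/src/main/python/pi.py",
    "spark://207.184.161.138:7077",
    py_files=["deps.zip", "helper.py"],
    app_args=["1000"],
))
```

The returned list could be passed directly to subprocess.run, which avoids shell-quoting issues for values that contain spaces.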
There are a few options available that are specific to the cluster manager that is being used. For example, with a Spark standalone cluster in cluster deploy mode, you can also specify --supervise to make sure that the driver is automatically restarted if it fails with a non-zero exit code (a configuration option for high availability). To enumerate all such options available to spark-submit, run it with --help. Here are a few examples of common options:
# Run application locally on 8 cores
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[8] \
  /path/to/examples.jar \
  100

# Run on a Spark standalone cluster in client deploy mode
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000

# Run on a Spark standalone cluster in cluster deploy mode with supervise
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --deploy-mode cluster \
  --supervise \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000

# Run on a YARN cluster
export HADOOP_CONF_DIR=XXX
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \  # can be client for client mode
  --executor-memory 20G \
  --num-executors 50 \
  /path/to/examples.jar \
  1000

# Run a Python application on a Spark standalone cluster
./bin/spark-submit \
  --master spark://207.184.161.138:7077 \
  examples/src/main/python/pi.py \
  1000

# Run on a Mesos cluster in cluster deploy mode with supervise
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master mesos://207.184.161.138:7077 \
  --deploy-mode cluster \
  --supervise \
  --executor-memory 20G \
  --total-executor-cores 100 \
  http://path/to/examples.jar \
  1000