Computing Pi with Spark --- Spark Study Notes 2
2014-03-28 22:53
Picking up from Spark Study Notes 1 -- Building from Source: Spark on YARN has been built successfully.
Execution screenshots:
![](https://img-blog.csdn.net/20140328225030609?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvb29wc29vbQ==/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center)
![](https://img-blog.csdn.net/20140328225106000?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvb29wc29vbQ==/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center)
![](https://img-blog.csdn.net/20140328225113171?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvb29wc29vbQ==/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center)
![](https://img-blog.csdn.net/20140328225208765?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvb29wc29vbQ==/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center)
Starting YARN
```
victor@victor-ubuntu:~/software/hadoop-2.2.0/sbin$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/victor/software/hadoop-2.2.0/logs/hadoop-victor-namenode-victor-ubuntu.out
localhost: starting datanode, logging to /home/victor/software/hadoop-2.2.0/logs/hadoop-victor-datanode-victor-ubuntu.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/victor/software/hadoop-2.2.0/logs/hadoop-victor-secondarynamenode-victor-ubuntu.out
starting yarn daemons
starting resourcemanager, logging to /home/victor/software/hadoop-2.2.0/logs/yarn-victor-resourcemanager-victor-ubuntu.out
localhost: starting nodemanager, logging to /home/victor/software/hadoop-2.2.0/logs/yarn-victor-nodemanager-victor-ubuntu.out
victor@victor-ubuntu:~/software/hadoop-2.2.0/sbin$ jps
22896 SecondaryNameNode
23383 Jps
22317 NameNode
22578 DataNode
23078 ResourceManager
23342 NodeManager
```
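The `jps` output above shows all five HDFS/YARN daemons up. As a quick sanity check before submitting a job, `jps` can be piped into a small helper like the one below (a sketch; `check_daemons` is a hypothetical name of my own, and the daemon list is taken from the output above):

```shell
# check_daemons: read jps-style output ("<pid> <Name>" per line) on stdin
# and report whether every expected Hadoop/YARN daemon is present.
# Hypothetical helper, not part of Hadoop or Spark.
check_daemons() {
  required="NameNode DataNode SecondaryNameNode ResourceManager NodeManager"
  input=$(cat)
  missing=""
  for d in $required; do
    # -w matches whole words, so "NameNode" does not match "SecondaryNameNode"
    echo "$input" | grep -qw "$d" || missing="$missing $d"
  done
  if [ -z "$missing" ]; then
    echo "all daemons running"
  else
    echo "missing:$missing"
  fi
}
```

Running `jps | check_daemons` then prints either `all daemons running` or the names of whatever failed to start.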
Next, run the script for Spark's Pi-computation example. Two things need to be in place first: 1. a Spark client directory whose folders hold the compiled jars; 2. the shell script that submits the job.
```
victor@victor-ubuntu:~/software/incubator-spark-0.8.1-incubating$ pwd
/home/victor/software/incubator-spark-0.8.1-incubating
victor@victor-ubuntu:~/software/incubator-spark-0.8.1-incubating/spark_compiled_client$ pwd
/home/victor/software/incubator-spark-0.8.1-incubating/spark_compiled_client
victor@victor-ubuntu:~/software/incubator-spark-0.8.1-incubating/spark_compiled_client$ ll
total 28
drwxrwxr-x  5 victor victor 4096 3月 28 21:38 ./
drwxrwxr-x 24 victor victor 4096 3月 28 22:47 ../
drwxrwxr-x  3 victor victor 4096 3月 28 02:41 assembly/
drwxrwxr-x  2 victor victor 4096 12月 11 06:35 conf/
drwxrwxr-x  3 victor victor 4096 3月 28 02:42 examples/
-rwxr-xr-x  1 victor victor 4802 12月 11 06:35 spark-class*
```
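A misassembled client directory is a common cause of failed submissions, so it can be worth verifying the layout before running anything. A sketch of such a check (`check_client_dir` is a hypothetical name of my own; the entry list mirrors the `ll` output above):

```shell
# check_client_dir <dir>: verify the compiled Spark client directory has the
# entries shown in the listing above. Hypothetical helper for illustration.
check_client_dir() {
  for p in assembly conf examples spark-class; do
    if [ ! -e "$1/$p" ]; then
      echo "missing: $p"
      return 1
    fi
  done
  echo "client dir ok"
}
```

For example, `check_client_dir ~/software/incubator-spark-0.8.1-incubating/spark_compiled_client` should print `client dir ok` for the directory above.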
```
#!/bin/sh
export YARN_CONF_DIR=/home/victor/software/hadoop-2.2.0/etc/hadoop

SPARK_JAR=./assembly/target/scala-2.9.3/spark-assembly-0.8.1-incubating-hadoop2.2.0.jar \
./spark-class org.apache.spark.deploy.yarn.Client \
  --jar ./examples/target/scala-2.9.3/spark-examples-assembly-0.8.1-incubating.jar \
  --class org.apache.spark.examples.JavaSparkPi \
  --args yarn-standalone \
  --num-workers 2 \
  --master-memory 400m \
  --worker-memory 512m \
  --worker-cores 1
```
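If either jar path in the script is wrong, the failure only surfaces on the YARN side, which is hard to debug. A small pre-flight check can fail fast instead (a sketch; `require_jar` is a hypothetical name of my own, and the paths it would be called with are the ones the script above uses):

```shell
# require_jar <path>: confirm a jar exists before submitting to YARN.
# Hypothetical helper for illustration; prints what it found or is missing.
require_jar() {
  if [ -f "$1" ]; then
    echo "found: $1"
  else
    echo "missing: $1" >&2
    return 1
  fi
}
```

Calling `require_jar` on both the assembly jar and the examples jar at the top of the submit script gives a readable local error instead of a remote container failure.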
My machine is fairly underpowered, so the worker and master settings here are conservative... it still lags noticeably while running.
<Original post; when reposting, please credit http://blog.csdn.net/oopsoom/article/details/22419597>