
Big Data IMF Legendary Action: Fixing the Scala IDE Out-of-Memory Problem

2016-01-14 12:05
1. Running the program in Scala IDE reports the error: Please use a larger heap size

2. Increase the JDK heap memory

In Eclipse, open Window -> Preferences -> Java -> Installed JREs, select the JRE and click the Edit button on the right, then enter the following value in the "Default VM Arguments" field of the edit dialog.

-Xms128m -Xmx512m
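This is not from the original post, but a quick way to confirm that the new limit actually applies to the launched JVM is to print the maximum heap size at the start of the program:

// Optional sanity check: print the JVM's maximum heap size to confirm -Xmx took effect.
println(s"Max heap: ${Runtime.getRuntime.maxMemory / 1024 / 1024} MB")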

3. Set the path for reading the local Windows file

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf() // create the SparkConf object
conf.setAppName("Wow,My First Spark App!") // set the application name shown in the monitoring UI while the program runs
conf.setMaster("local") // run locally; no Spark cluster installation is needed
val sc = new SparkContext(conf) // create the SparkContext used below
val lines = sc.textFile("G://IMFBigDataSpark2016//Bigdata_Software//spark-1.6.0-bin-hadoop2.6//spark-1.6.0-bin-hadoop2.6//spark-1.6.0-bin-hadoop2.6//README.md", 1) // read the local file as a single partition
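For context, here is a minimal word-count sketch (my reconstruction, not the original post's full listing) that continues from the lines RDD above and would produce the per-word counts shown in step 4:

// Split each line into words, map each word to (word, 1), and sum the counts per word.
val counts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
// Print each word with its count, matching the "word : count" output below.
counts.collect().foreach { case (word, count) => println(s"$word : $count") }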

4. Run it, and it works!

16/01/14 11:58:36 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, partition 0,NODE_LOCAL, 1894 bytes)

16/01/14 11:58:36 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)

16/01/14 11:58:36 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks

16/01/14 11:58:36 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 17 ms

package : 1

For : 2

Programs : 1

processing. : 1

Because : 1

The : 1

cluster. : 1

its : 1

[run : 1

APIs : 1

have : 1

Try : 1

computation : 1

through : 1

5. The "Hadoop not found" problem is solved by adding the winutils.exe binary

16/01/14 11:58:26 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

16/01/14 11:58:26 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path

java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
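A common workaround on Windows, sketched here as an assumption rather than the original post's exact fix, is to place winutils.exe in a bin subfolder of some directory and point hadoop.home.dir at that directory before creating the SparkContext:

// Hypothetical path; adjust to where winutils.exe actually lives (it must sit in a "bin" subfolder of this directory).
System.setProperty("hadoop.home.dir", "G://IMFBigDataSpark2016//hadoop")
// ... then create the SparkConf and SparkContext as in step 3.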