Big Data Environment Deployment 6: Spark Environment Deployment

2015-10-22 21:04


1. Download Scala 2.11.4. The download page is http://www.scala-lang.org/download/2.11.4.html, or fetch the tarball directly with wget http://downloads.typesafe.com/scala/2.11.4/scala-2.11.4.tgz?_ga=1.248348352.61371242.1418807768
2. Extract and install.

Extract: [spark@LOCALHOST scala]$ tar -xvf scala-2.11.4.tgz

Install: [spark@LOCALHOST scala]$ mv scala-2.11.4 ~/opt/

3. Edit ~/.bash_profile and add the SCALA_HOME environment variable:

export JAVA_HOME=/usr/java/jdk1.7.0_79

export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar

export SCALA_HOME=/home/spark/opt/scala-2.11.4

export HADOOP_HOME=/home/spark/opt/hadoop-2.6.0

PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:${SCALA_HOME}/bin

Apply the changes immediately: [spark@LOCALHOST scala]$ source ~/.bash_profile
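A quick sanity check that the new variables are visible in the current shell (assuming the paths above):

[spark@LOCALHOST scala]$ echo $SCALA_HOME

/home/spark/opt/scala-2.11.4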

4. Verify Scala:

[spark@LOCALHOST scala]$ scala -version

Scala code runner version 2.11.4 -- Copyright 2002-2013, LAMP/EPFL

[spark@LOCALHOST scala]$ scala

Welcome to Scala version 2.11.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_37).

Type in expressions to have them evaluated.

Type :help for more information.

scala> var str = "SB is" + "SB"

str: String = SB isSB

scala>

5. Copy the profile to the slave machines:

[spark@LOCALHOST scala]$ scp ~/.bash_profile spark@172.16.107.8:~/.bash_profile

[spark@LOCALHOST scala]$ scp ~/.bash_profile spark@172.16.107.7:~/.bash_profile
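The profile alone is not enough: the Scala installation itself must also exist at the same path on each slave. A minimal sketch, assuming the same ~/opt layout on every node:

[spark@LOCALHOST scala]$ scp -r ~/opt/scala-2.11.4 spark@172.16.107.8:~/opt/

[spark@LOCALHOST scala]$ scp -r ~/opt/scala-2.11.4 spark@172.16.107.7:~/opt/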

6. Download Spark: wget http://d3kbcqa49mib13.cloudfront.net/spark-1.2.0-bin-hadoop2.4.tgz

7. Configure Spark on the master host:

Extract the downloaded spark-1.2.0-bin-hadoop2.4.tgz into ~/opt/, giving ~/opt/spark-1.2.0-bin-hadoop2.4, then configure the SPARK_HOME environment variable.
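The extraction step might look like this (a sketch, assuming the tarball sits in the current directory):

[spark@LOCALHOST ~]$ tar -xvf spark-1.2.0-bin-hadoop2.4.tgz

[spark@LOCALHOST ~]$ mv spark-1.2.0-bin-hadoop2.4 ~/opt/

Then extend ~/.bash_profile so it reads: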

# set java env

export JAVA_HOME=/usr/java/jdk1.7.0_79

export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar

export SCALA_HOME=/home/spark/opt/scala-2.11.4

export HADOOP_HOME=/home/spark/opt/hadoop-2.6.0

export SPARK_HOME=/home/spark/opt/spark-1.2.0-bin-hadoop2.4

PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${HADOOP_HOME}/bin

After editing, apply the configuration with the source command:
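[spark@LOCALHOST opt]$ source ~/.bash_profile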

Go into the Spark conf directory:

[spark@LOCALHOST opt]$ cd spark-1.2.0-bin-hadoop2.4/

[spark@LOCALHOST spark-1.2.0-bin-hadoop2.4]$ ls

bin conf data ec2 examples lib LICENSE logs NOTICE python README.md RELEASE sbin work

[spark@LOCALHOST spark-1.2.0-bin-hadoop2.4]$ cd conf/

[spark@LOCALHOST conf]$ ls

fairscheduler.xml.template metrics.properties.template slaves.template spark-env.sh

log4j.properties.template slaves spark-defaults.conf.template spark-env.sh.template

First, edit the slaves file and add the three slave nodes 172.16.107.9, 172.16.107.8, and 172.16.107.7:

[spark@LOCALHOST conf]$ vi slaves

172.16.107.9

172.16.107.8

172.16.107.7

Second, configure spark-env.sh.

Start by copying spark-env.sh.template to spark-env.sh:
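[spark@LOCALHOST conf]$ cp spark-env.sh.template spark-env.sh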

Then open spark-env.sh with vi and append the following at the end:

export JAVA_HOME=/usr/java/jdk1.7.0_79

export SCALA_HOME=/home/spark/opt/scala-2.11.4

export SPARK_MASTER_IP=172.16.107.9

export SPARK_WORKER_MEMORY=2g

export HADOOP_CONF_DIR=/home/spark/opt/hadoop-2.6.0/etc/hadoop

Here HADOOP_CONF_DIR is the Hadoop configuration directory, SPARK_MASTER_IP is the master host's IP address, and SPARK_WORKER_MEMORY is the maximum amount of memory a worker may use.

After the configuration is complete, copy the Spark directory to the slave machines:

scp -r ~/opt/spark-1.2.0-bin-hadoop2.4 spark@172.16.107.8:~/opt/

scp -r ~/opt/spark-1.2.0-bin-hadoop2.4 spark@172.16.107.7:~/opt/

8. Start the Spark cluster and check its status.

[spark@LOCALHOST sbin]$ ./start-all.sh
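Note that this is Spark's own start-all.sh under $SPARK_HOME/sbin; Hadoop ships a script with the same name, so when in doubt invoke it with an explicit path:

[spark@LOCALHOST ~]$ ~/opt/spark-1.2.0-bin-hadoop2.4/sbin/start-all.sh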

Check the running processes:

[spark@LOCALHOST sbin]$ jps

31233 ResourceManager

27201 Jps

30498 NameNode

30733 SecondaryNameNode

5648 Worker

5399 Master

15888 JobHistoryServer

If HDFS is not running, start it first.
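For example, via the standard Hadoop scripts (assuming the HADOOP_HOME configured above):

[spark@LOCALHOST ~]$ $HADOOP_HOME/sbin/start-dfs.sh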

Check a slave node:

[spark@localhost scala]$ jps

20352 Bootstrap

30737 NodeManager

7219 Jps

30482 DataNode

29500 Bootstrap

757 Worker

9. Check the cluster status in the web UI:

Open the Spark cluster's web management page at http://172.16.107.9:8080/.

Then go into Spark's bin directory and start the spark-shell console:

[spark@localhost bin]$ ./spark-shell

Visit http://172.16.107.9:4040/ to see the Spark web UI for the running shell.
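To confirm that jobs actually run, a minimal smoke test inside spark-shell (the shell already provides the SparkContext as sc; the RDD name here is just for illustration):

scala> val data = sc.parallelize(1 to 1000)

scala> data.map(_ * 2).reduce(_ + _)

res0: Int = 1001000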

At this point, the Spark cluster environment has been set up successfully.
