
Upgrading a Spark cluster from 1.6.0 to 1.6.1: a detailed walkthrough on 1 master and 8 workers

2016-04-30 19:10
1. Download the new release spark-1.6.1-bin-hadoop2.6.tgz

root@master:/usr/local/setup_tools# tar -zxvf spark-1.6.1-bin-hadoop2.6.tgz
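The tarball is unpacked inside /usr/local/setup_tools, but every later path assumes the installation lives under /usr/local. The original session does not show that step, so a minimal sketch of the missing move would be:

mv /usr/local/setup_tools/spark-1.6.1-bin-hadoop2.6 /usr/local/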

2. Edit /etc/profile

vi /etc/profile

export SPARK_HOME=/usr/local/spark-1.6.0-bin-hadoop2.6

Change it to

export SPARK_HOME=/usr/local/spark-1.6.1-bin-hadoop2.6

3. Make the change take effect

root@master:/usr/local# source /etc/profile
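A quick sanity check that the new SPARK_HOME is in effect for this shell (assuming, as is typical for such setups, that the existing profile already puts $SPARK_HOME/bin on PATH):

echo $SPARK_HOME        # expected: /usr/local/spark-1.6.1-bin-hadoop2.6
which spark-submit      # should resolve under the 1.6.1 bin directory if $SPARK_HOME/bin is on PATH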

4. Back up the stock Spark 1.6.1 configuration files

cp -R /usr/local/spark-1.6.1-bin-hadoop2.6/conf /usr/local/spark-1.6.1-bin-hadoop2.6/conf.161.bak

5. Copy the existing 1.6.0 configuration files into the new installation

cp -R /usr/local/spark-1.6.0-bin-hadoop2.6/conf/. /usr/local/spark-1.6.1-bin-hadoop2.6/conf
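It can be worth diffing the carried-over 1.6.0 files against the 1.6.1 defaults backed up in step 4, to spot any templates that changed between the two releases; a minimal sketch:

diff -ru /usr/local/spark-1.6.1-bin-hadoop2.6/conf.161.bak /usr/local/spark-1.6.1-bin-hadoop2.6/conf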

6. Review the 1.6.1 configuration

vi /usr/local/spark-1.6.1-bin-hadoop2.6/conf/spark-env.sh

export SCALA_HOME=/usr/local/scala-2.10.4

export JAVA_HOME=/usr/local/jdk1.8.0_60

export SPARK_MASTER_IP=192.168.189.1

export SPARK_WORKER_MEMORY=2g

export HADOOP_CONF_DIR=/usr/local/hadoop-2.6.0/etc/hadoop
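The history server started in step 12 reads events from hdfs://master:9000/historyserverforSpark (see step 14), so the carried-over conf directory presumably also contains a spark-defaults.conf along these lines. The original post does not show this file, so treat the following as a sketch rather than the actual contents:

spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://master:9000/historyserverforSpark
spark.history.fs.logDirectory    hdfs://master:9000/historyserverforSpark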

7. Check the worker list (cat slaves); a passwordless-SSH check sketch follows the list below

worker1

worker2

worker3

worker4

worker5

worker6

worker7

worker8
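Before distributing anything, a quick check that passwordless SSH to every worker still works saves debugging later; a minimal sketch:

for h in worker1 worker2 worker3 worker4 worker5 worker6 worker7 worker8
do
  ssh $h hostname   # should print the worker's hostname without prompting for a password
done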

8. Cluster distribution script

root@master:/usr/local/setup_scripts# chmod u+x spark1.6.1scp.sh

root@master:/usr/local/setup_scripts# cat spark1.6.1scp.sh

#!/bin/sh
# Push the updated /etc/profile and the new Spark 1.6.1 installation to every worker (192.168.189.2 - 192.168.189.9).
for i in 2 3 4 5 6 7 8 9
do
  scp -rq /etc/profile root@192.168.189.$i:/etc/profile
  # Note: sourcing /etc/profile over ssh only affects that single remote session;
  # new login shells on the workers pick up the copied file automatically.
  ssh root@192.168.189.$i source /etc/profile
  scp -rq /usr/local/spark-1.6.1-bin-hadoop2.6 root@192.168.189.$i:/usr/local/spark-1.6.1-bin-hadoop2.6
done

root@master:/usr/local/setup_scripts#

9. Run the script

root@master:/usr/local/setup_scripts# ./spark1.6.1scp.sh

root@master:/usr/local/setup_scripts#
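To confirm the distribution actually placed Spark 1.6.1 on every worker, a quick loop like the following can be run (a sketch using the same IP scheme as the script):

for i in 2 3 4 5 6 7 8 9
do
  ssh root@192.168.189.$i ls -d /usr/local/spark-1.6.1-bin-hadoop2.6
done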

10. Start Hadoop (HDFS)

root@master:/usr/local/hadoop-2.6.0/sbin# start-dfs.sh

Starting namenodes on [master]

master: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-namenode-master.out

worker6: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker6.out

worker4: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker4.out

worker8: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker8.out

worker3: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker3.out

worker5: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker5.out

worker7: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker7.out

worker2: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker2.out

worker1: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-worker1.out

Starting secondary namenodes [0.0.0.0]

0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-master.out

root@master:/usr/local/hadoop-2.6.0/sbin# jps

3250 Jps

2932 NameNode

3147 SecondaryNameNode

root@master:/usr/local/hadoop-2.6.0/sbin#
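The jps output on the master only shows NameNode and SecondaryNameNode; to confirm that all eight DataNodes registered with the NameNode, the HDFS report can be checked (assuming the hdfs client is on PATH):

hdfs dfsadmin -report | grep -i datanodes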

11. Start and verify Spark 1.6.1 (run ./start-all.sh from Spark's sbin directory so that Spark's script runs rather than Hadoop's)

root@master:/usr/local/spark-1.6.1-bin-hadoop2.6/sbin# ./start-all.sh

starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark-1.6.1-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.master.Master-1-master.out

worker2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.1-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker2.out

worker4: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.1-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker4.out

worker8: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.1-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker8.out

worker7: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.1-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker7.out

worker6: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.1-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker6.out

worker3: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.1-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker3.out

worker1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.1-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker1.out

worker5: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.1-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker5.out

root@master:/usr/local/spark-1.6.1-bin-hadoop2.6/sbin# jps

2932 NameNode

3383 Jps

3306 Master

3147 SecondaryNameNode

root@master:/usr/local/spark-1.6.1-bin-hadoop2.6/sbin#
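Similarly, jps on the master only lists the Master process; that all eight Workers registered can be confirmed from the master log (the log path is taken from the start-all.sh output above):

grep "Registering worker" /usr/local/spark-1.6.1-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.master.Master-1-master.out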

12. Start the history server

root@master:/usr/local/spark-1.6.1-bin-hadoop2.6/sbin# start-history-server.sh

starting org.apache.spark.deploy.history.HistoryServer, logging to /usr/local/spark-1.6.1-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-master.out

root@master:/usr/local/spark-1.6.1-bin-hadoop2.6/sbin# jps

2932 NameNode

3306 Master

3147 SecondaryNameNode

3403 HistoryServer

3436 Jps

root@master:/usr/local/spark-1.6.1-bin-hadoop2.6/sbin#
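The history server web UI listens on port 18080 by default; a quick reachability check from the master (the hostname master is assumed to resolve, as in the HDFS URLs above):

curl -s -o /dev/null -w "%{http_code}\n" http://master:18080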

13. Result: Spark master web UI (port 8080 by default)

1.6.1 Spark Master at spark://192.168.189.1:7077

URL: spark://192.168.189.1:7077

REST URL: spark://192.168.189.1:6066 (cluster mode)

Alive Workers: 8

Cores in use: 8 Total, 0 Used

Memory in use: 16.0 GB Total, 0.0 B Used

Applications: 0 Running, 0 Completed

Drivers: 0 Running, 0 Completed

Status: ALIVE

14. History Server

1.6.1 History Server

Event log directory: hdfs://master:9000/historyserverforSpark

Showing 1-20 of 50
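As a final smoke test of the upgraded cluster, the bundled SparkPi example can be submitted against the standalone master; the examples jar path below matches the layout of the prebuilt 1.6.1 package, but adjust the glob if the file name differs:

$SPARK_HOME/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://192.168.189.1:7077 \
  $SPARK_HOME/lib/spark-examples*.jar 100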


