
Hadoop 2.2 Cluster Installation and Configuration - Spark Cluster Installation and Deployment

This article walks through installing and configuring Hadoop 2.2.0 and then deploying Spark 1.0 on top of it.

I. Environment Description

This setup runs VMware on a 64-bit Windows 7 host, with two virtual machines created inside VMware:

Master: spark1 (192.168.232.147), RHEL 6.2 64-bit, user root

Slave: spark2 (192.168.232.152), RHEL 6.2 64-bit, user root

II. Environment Preparation

1. Disable the firewall, set the SSH service to start on boot, and disable SELinux

2. Edit the hosts file

3. Set up passwordless SSH login

4. Prepare the installation packages

5. Install and configure JDK 1.7

These steps are straightforward and are not covered in detail here; a minimal sketch of steps 1-3 follows.
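The sketch below assumes the IP addresses listed in the environment description; run it on spark1 and repeat the SSH key exchange in the other direction on spark2:

# step 1: disable the firewall and SELinux (RHEL 6)
service iptables stop && chkconfig iptables off
setenforce 0        # also set SELINUX=disabled in /etc/selinux/config

# step 2: host name resolution, on both nodes
cat >> /etc/hosts <<EOF
192.168.232.147 spark1
192.168.232.152 spark2
EOF

# step 3: passwordless SSH from spark1 to both nodes
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
ssh-copy-id root@spark1
ssh-copy-id root@spark2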

III. Hadoop 2.2 Cluster Installation and Configuration

1. Create the installation directories (repeat on spark2)

mkdir -p /root/install/hadoop
mkdir -p /root/install/hadoop/hdfs
mkdir -p /root/install/hadoop/tmp
mkdir -p /root/install/hadoop/mapred
mkdir -p /root/install/hadoop/hdfs/name
mkdir -p /root/install/hadoop/hdfs/data
mkdir -p /root/install/hadoop/mapred/local
mkdir -p /root/install/hadoop/mapred/system


2. Upload hadoop-2.2.0.x86_64.tar.gz to /root/install and extract it there

3. Configure the Hadoop environment variables

export HADOOP_HOME=/root/install/hadoop-2.2.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
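Assuming these lines are appended to /root/.bash_profile (one possible location) on both nodes, reload the environment and verify:

source /root/.bash_profile
hadoop version        # should report Hadoop 2.2.0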


4. Configure Hadoop

(1) Add to hadoop-env.sh:

export JAVA_HOME=/root/install/jdk1.7.0_21

(2) Add to yarn-env.sh:

export JAVA_HOME=/root/install/jdk1.7.0_21

(3) Configure core-site.xml

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://spark1:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/root/install/hadoop/tmp</value>
        </property>
</configuration>


(4) Configure hdfs-site.xml

<configuration>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/root/install/hadoop/hdfs/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/root/install/hadoop/hdfs/data</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
</configuration>


(5) Configure mapred-site.xml (Hadoop 2.2 ships only mapred-site.xml.template; copy it to mapred-site.xml first)

<configuration>
        <property>
                <name>mapreduce.cluster.local.dir</name>
                <value>/root/install/hadoop/mapred/local</value>
        </property>
        <property>
                <name>mapreduce.cluster.system.dir</name>
                <value>/root/install/hadoop/mapred/system</value>
        </property>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>spark1:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>spark1:19888</value>
        </property>

        <property>
                 <name>mapred.child.java.opts</name>
                 <value>-Djava.awt.headless=true</value>
        </property>
        <!-- add headless to default -Xmx1024m -->
        <property>
                 <name>yarn.app.mapreduce.am.command-opts</name>
                 <value>-Djava.awt.headless=true -Xmx1024m</value>
        </property>
        <property>
                 <name>yarn.app.mapreduce.am.admin-command-opts</name>
                 <value>-Djava.awt.headless=true</value>
         </property>
</configuration>


(6) Configure masters

Change localhost to spark1

(7) Configure slaves

Change localhost to spark1 and spark2, one host per line (see the sketch below)
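For reference, the two files under $HADOOP_CONF_DIR then look like this:

[root@spark1 hadoop]# cat masters
spark1
[root@spark1 hadoop]# cat slaves
spark1
spark2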

(8) After configuration, copy the entire installation directory to /root/install on spark2 (for example with scp, as sketched below)
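One way to do this is with scp (a sketch; the data directories on spark2 were already created in step 1):

scp -r /root/install/hadoop-2.2.0 root@spark2:/root/install/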

(9) Write a script so that configuration changes can easily be synced to the other machine

[root@spark1 install]# cat dispatchcfg.sh
#!/bin/bash
for target in spark2
do
    scp -r $HADOOP_CONF_DIR $target:/root/install/hadoop-2.2.0/etc
done
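The script relies on the $HADOOP_CONF_DIR variable set earlier; make it executable and run it after every configuration change:

chmod +x /root/install/dispatchcfg.sh
/root/install/dispatchcfg.sh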


(10) Format the Hadoop NameNode: hadoop namenode -format (in Hadoop 2.x the preferred form is hdfs namenode -format)

5. Start the Hadoop cluster

(1) Run start-all.sh

(2) Check the running processes with jps (expected output sketched below)
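Roughly the following daemons should appear (PIDs omitted; spark1 also runs DataNode and NodeManager because it is listed in slaves):

[root@spark1 ~]# jps
NameNode
SecondaryNameNode
ResourceManager
DataNode
NodeManager
[root@spark2 ~]# jps
DataNode
NodeManager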

6. Test Hadoop

(1) Create a directory /input and upload a data file into it

hadoop fs -mkdir /input

hadoop fs -put /etc/group /input

(2) Run the wordcount example

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output
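To inspect the result once the job finishes (part-r-00000 is the default output name for a single reducer):

hadoop fs -ls /output
hadoop fs -cat /output/part-r-00000 | head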



IV. Installing and Deploying Spark 1.0

(1) Extract spark-1.0.0-bin-2.2.0.tgz

(2) Add the following to conf/spark-env.sh

export JAVA_HOME=/root/install/jdk1.7.0_21
export SPARK_MASTER_IP=spark1
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=1g


(3) Start the Spark cluster with sbin/start-all.sh and check the processes (sketched below)
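Note that Spark's sbin/start-all.sh starts workers on the hosts listed in its own conf/slaves file, so the extracted Spark directory (including conf/spark-env.sh) must also be present on spark2. Assuming conf/slaves lists spark1 and spark2, jps should additionally show:

[root@spark1 ~]# jps        # Hadoop daemons omitted
Master
Worker
[root@spark2 ~]# jps
Worker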

(4) Check the running status in the Spark master web UI (http://spark1:8080 by default)
(5) Run bin/spark-shell --executor-memory 1g --driver-memory 1g --master spark://spark1:7077
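A quick smoke test at the scala> prompt, reusing the /etc/group file uploaded to /input during the Hadoop test (the relative path resolves against fs.defaultFS):

scala> val lines = sc.textFile("/input/group")
scala> lines.count()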
