
Hadoop + Spark Cluster Installation: Step-by-Step Guide

2018-02-28 11:12
I. Environment

Operating system: SUSE Linux Enterprise Server 11 (x86_64) SP3

Hosts:
192.168.0.10    node1
192.168.0.11    node2
192.168.0.12    node3
192.168.0.13    node4

Installer directory: /data/install
Hadoop cluster directory: /data
JAVA_HOME: /usr/jdk1.8.0_66

Versions:
Component     Version                          Notes
JRE           jdk-8u66-linux-x64.tar.gz
ZooKeeper     zookeeper-3.4.6.tar.gz
Hadoop        hadoop-2.7.3.tar.gz              main package
Spark         spark-2.0.2-bin-hadoop2.7.tgz
HBase         hbase-1.2.5-bin.tar.gz
II. Common Commands

1. Check the system version:
linux-n4ga:~ # uname -a                      # kernel version
Linux node1 3.0.76-0.11-default #1 SMP Fri Jun 14 08:21:43 UTC 2013 (ccab990) x86_64 x86_64 x86_64 GNU/Linux
linux-n4ga:~ # lsb_release                   # distribution release
LSB Version:    core-2.0-noarch:core-3.2-noarch:core-4.0-noarch:core-2.0-x86_64:core-3.2-x86_64:core-4.0-x86_64:desktop-4.0-amd64:desktop-4.0-noarch:graphics-2.0-amd64:graphics-2.0-noarch:graphics-3.2-amd64:graphics-3.2-noarch:graphics-4.0-amd64:graphics-4.0-noarch
linux-n4ga:~ # cat /etc/SuSE-release         # patch level
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 3
node1:~ # cat /etc/issue
Welcome to SUSE Linux Enterprise Server 11 SP3  (x86_64) - Kernel \r (\l).
node1:~ #

2. Start the cluster
start-dfs.sh
start-yarn.sh

3. Stop the cluster
stop-yarn.sh
stop-dfs.sh

4. Monitor the cluster
hdfs dfsadmin -report

5. Start/stop a single daemon
hadoop-daemon.sh start|stop namenode|datanode|journalnode
yarn-daemon.sh start|stop resourcemanager|nodemanager
Reference: http://blog.chinaunix.net/uid-25723371-id-4943894.html

III. Environment Preparation (all servers)

6. Stop the firewall and disable it at boot
linux-n4ga:~ # rcSuSEfirewall2 stop
Shutting down the Firewall                                done
linux-n4ga:~ # chkconfig SuSEfirewall2_setup off
linux-n4ga:~ # chkconfig SuSEfirewall2_init off
linux-n4ga:~ # chkconfig --list | grep fire
SuSEfirewall2_init        0:off  1:off  2:off  3:off  4:off  5:off  6:off
SuSEfirewall2_setup       0:off  1:off  2:off  3:off  4:off  5:off  6:off

7. Set the hostname (the other nodes are similar)
linux-n4ga:~ # hostname node1
linux-n4ga:~ # vim /etc/HOSTNAME
node1.site

8. Passwordless SSH login
node1:~ # ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
node1:~ # cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
node1:~ # ll -d .ssh/
drwx------ 2 root root 4096 Jun  5 08:50 .ssh/
node1:~ # ll .ssh/
total 12
-rw-r--r-- 1 root root 599 Jun  5 08:50 authorized_keys
-rw------- 1 root root 672 Jun  5 08:50 id_dsa
-rw-r--r-- 1 root root 599 Jun  5 08:50 id_dsa.pub
Append the contents of ~/.ssh/id_dsa.pub from each of the other servers to ~/.ssh/authorized_keys on node1 as well, then distribute the merged file (see the loop sketched after step 11):
scp -rp ~/.ssh/authorized_keys root@192.168.0.11:~/.ssh/
scp -rp ~/.ssh/authorized_keys root@192.168.0.12:~/.ssh/
scp -rp ~/.ssh/authorized_keys root@192.168.0.13:~/.ssh/

9. Edit the hosts file
node1:~ # vim /etc/hosts
… …
ff02::2         ipv6-allrouters
ff02::3         ipv6-allhosts
192.168.0.10    node1
192.168.0.11    node2
192.168.0.12    node3
192.168.0.13    node4
Distribute it:
scp -rp /etc/hosts root@192.168.0.11:/etc/
scp -rp /etc/hosts root@192.168.0.12:/etc/
scp -rp /etc/hosts root@192.168.0.13:/etc/

10. Raise the file-handle limits
node1:~ # vim /etc/security/limits.conf
*           soft   nofile       24000
*           hard   nofile       65535
*           soft   nproc        24000
*           hard   nproc        65535
limits.conf is not a shell script, so it cannot be sourced; log out and back in (or reboot) for the new limits to apply, then verify:
node1:~ # ulimit -n
24000

11. Time synchronization test (example)
node1:~ # /usr/sbin/ntpdate 192.168.0.10
13 Jun 13:49:41 ntpdate[8370]: adjust time server 192.168.0.10 offset -0.007294 sec
Add a cron job:
node1:~ # crontab -e
*/10 * * * * /usr/sbin/ntpdate 192.168.0.10 > /dev/null 2>&1;/sbin/hwclock -w
node1:~ # service cron restart
Shutting down CRON daemon                                done
Starting CRON daemon                                     done
node1:~ # date
Tue Jun 13 05:32:49 CST 2017
node1:~ #
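Steps 8 and 9 copy the same two files to every other node; a small loop keeps that repeatable if the node list grows. A minimal sketch, assuming the three worker IPs above and that root SSH from node1 already works (the nodes variable is introduced here only for illustration):

# Push the merged SSH key file and the hosts file from node1 to the other nodes.
nodes="192.168.0.11 192.168.0.12 192.168.0.13"
for ip in $nodes; do
    scp -rp ~/.ssh/authorized_keys root@${ip}:~/.ssh/
    scp -rp /etc/hosts             root@${ip}:/etc/
done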
12. Upload the installation packages to node1
node1:~ # mkdir -pv /data/install
node1:~ # cd /data/install
node1:/data/install # pwd
/data/install
Upload the packages into /data/install:
node1:/data/install # ll
total 671968
-rw-r--r-- 1 root root 214092195 Jun  5 05:40 hadoop-2.7.3.tar.gz
-rw-r--r-- 1 root root 104584366 Jun  5 05:40 hbase-1.2.5-bin.tar.gz
-rw-r--r-- 1 root root 181287376 Jun  5 05:47 jdk-8u66-linux-x64.tar.gz
-rw-r--r-- 1 root root 187426587 Jun  5 05:40 spark-2.0.2-bin-hadoop2.7.tgz
-rw-r--r-- 1 root root 187426587 Jun  5 05:40 zookeeper-3.4.6.tar.gz

13. Install the JDK
node1:~ # cd /data/install
node1:/data/install # tar -zxvf jdk-8u66-linux-x64.tar.gz -C /usr/
Configure the environment variables:
node1:/data/install # vim /etc/profile
export JAVA_HOME=/usr/jdk1.8.0_66
export HADOOP_HOME=/data/hadoop-2.7.3
export HBASE_HOME=/data/hbase-1.2.5
export SPARK_HOME=/data/spark-2.0.2
export ZOOKEEPER_HOME=/data/zookeeper-3.4.6
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$ZOOKEEPER_HOME/bin:$PATH
export PATH=$HBASE_HOME/bin:$PATH
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export PATH=$SPARK_HOME/bin:$PATH

node1:/opt # source /etc/profile
node1:~ # java -version               # verify
java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
node1:~ # echo $JAVA_HOME
/usr/jdk1.8.0_66

IV. Install ZooKeeper

14. Extract ZooKeeper
node1:~ # cd /data/install
node1:/data/install # tar -zxvf zookeeper-3.4.6.tar.gz -C /data/

15. Configure zoo.cfg
node1:/data/install # cd /data/zookeeper-3.4.6/conf/            # enter the conf directory
node1:/data/zookeeper-3.4.6/conf # cp zoo_sample.cfg zoo.cfg    # copy the template
node1:/data/zookeeper-3.4.6/conf # vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/zookeeper-3.4.6/data
dataLogDir=/data/zookeeper-3.4.6/dataLog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888

16. Add the myid file, then distribute (run an odd number of ZooKeeper servers)
Create the dataDir and dataLogDir directories, then add a myid file under dataDir. myid holds the id of the local ZooKeeper server; node1 gets 1 because the config says server.1=node1:2888:3888.
node1:/data/zookeeper-3.4.6/conf # mkdir -pv /data/zookeeper-3.4.6/{data,dataLog}
node1:/data/zookeeper-3.4.6/conf # echo 1 > /data/zookeeper-3.4.6/data/myid

17. Distribute:
node1:/data/zookeeper-3.4.6/conf # scp -rp /data/zookeeper-3.4.6 root@192.168.0.11:/data
node1:/data/zookeeper-3.4.6/conf # scp -rp /data/zookeeper-3.4.6 root@192.168.0.12:/data
On the other machines, set myid to match the server.N entries: node2 gets 2, node3 gets 3.
node2:/data/zookeeper-3.4.6/conf # echo 2 > /data/zookeeper-3.4.6/data/myid
node3:/data/zookeeper-3.4.6/conf # echo 3 > /data/zookeeper-3.4.6/data/myid
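Once the myid files are in place and the ensemble has been started (step 40 below), running zkServer.sh status on each member is a quick way to confirm the quorum formed: one server should report "leader" and the other two "follower". A minimal check, assuming the install path above and passwordless root SSH between the nodes:

# Ask each ZooKeeper server for its role (run from node1 after step 40).
for host in node1 node2 node3; do
    echo "== $host =="
    ssh root@$host /data/zookeeper-3.4.6/bin/zkServer.sh status
done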
V. Install Hadoop

18. Extract Hadoop
node1:~ # cd /data/install
node1:/data/install # tar -zxvf hadoop-2.7.3.tar.gz -C /data/

19. Configure hadoop-env.sh
node1:~ # vim /data/hadoop-2.7.3/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/jdk1.8.0_66

20. Configure core-site.xml
node1:~ # vim /data/hadoop-2.7.3/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- YARN needs fs.defaultFS to point at the NameNode URI -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop-2.7.3/data/tmp</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node1:2181,node2:2181,node3:2181</value>
        <description>ZooKeeper client connection string</description>
    </property>
    <property>
        <name>ha.zookeeper.session-timeout.ms</name>
        <value>10000</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
        <description>Trash retention in minutes; files kept in the trash longer than this are deleted. 0 disables the trash feature.</description>
    </property>
    <property>
        <name>fs.trash.checkpoint.interval</name>
        <value>1440</value>
        <description>Trash checkpoint interval in minutes.</description>
    </property>
</configuration>

21. Configure yarn-site.xml
node1:~ # vim /data/hadoop-2.7.3/etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<configuration>
    <property>
        <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
        <value>5000</value>
        <description>How long the ApplicationMaster waits to reconnect after losing contact with the scheduler</description>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
        <description>Auxiliary service run on the NodeManager; must be mapreduce_shuffle for MapReduce jobs to run</description>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
        <description>Enable ResourceManager HA (default: false)</description>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>cluster1</value>
        <description>Cluster id; the elector uses it to make sure an RM never becomes active for another cluster</description>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
        <description>Comma-separated list of logical RM ids, e.g. rm1,rm2</description>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>node3</value>
        <description>Hostname of rm1</description>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>${yarn.resourcemanager.hostname.rm1}:8030</value>
        <description>Address the RM exposes to ApplicationMasters for requesting and releasing resources</description>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>${yarn.resourcemanager.hostname.rm1}:8031</value>
        <description>Address the RM exposes to NodeManagers for heartbeats and task assignment</description>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>${yarn.resourcemanager.hostname.rm1}:8032</value>
        <description>Address the RM exposes to clients for submitting applications</description>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>${yarn.resourcemanager.hostname.rm1}:8033</value>
        <description>Address the RM exposes to administrators for management commands</description>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>${yarn.resourcemanager.hostname.rm1}:8088</value>
        <description>Web UI address of the RM; cluster information can be viewed in a browser at this address</description>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>node4</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>${yarn.resourcemanager.hostname.rm2}:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>${yarn.resourcemanager.hostname.rm2}:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>${yarn.resourcemanager.hostname.rm2}:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>${yarn.resourcemanager.hostname.rm2}:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>${yarn.resourcemanager.hostname.rm2}:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
        <description>Defaults to false, meaning applications running when the RM fails are not restarted after the RM recovers</description>
    </property>
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
        <description>State-store implementation</description>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>240000</value>
        <description>Total physical memory (MB) the NodeManager may use on this node</description>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>24</value>
        <description>Number of virtual CPU cores the NodeManager may use on this node</description>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
        <description>Minimum physical memory (MB) a single task may request</description>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>240000</value>
        <description>Maximum physical memory (MB) a single task may request</description>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
        <description>Minimum number of virtual CPU cores a single task may request</description>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>24</value>
        <description>Maximum number of virtual CPU cores a single task may request</description>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>4</value>
        <description>Virtual memory a task may use per MB of physical memory; the default is 2.1</description>
    </property>
</configuration>

22. Configure mapred-site.xml
node1:~ # cp /data/hadoop-2.7.3/etc/hadoop/mapred-site.xml{.template,}
node1:~ # vim /data/hadoop-2.7.3/etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
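The *-site.xml files in this section are all edited by hand, and a single unclosed tag will stop the daemons from starting with a fairly cryptic error. After finishing the remaining files below (hdfs-site.xml and capacity-scheduler.xml) and before distributing in step 28, a quick well-formedness check is cheap insurance. A minimal sketch using xmllint, which is assumed to be installed (it ships with libxml2 on most distributions):

# Check every Hadoop XML config file for well-formedness.
# xmllint prints nothing on success and an error with a line number on failure.
for f in /data/hadoop-2.7.3/etc/hadoop/*-site.xml; do
    xmllint --noout "$f" && echo "OK: $f"
done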
23. Configure hdfs-site.xml
node1:~ # vim /data/hadoop-2.7.3/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
        <description>Number of block replicas to keep</description>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>node1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>node2:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>node1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>node2:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node1:8485;node2:8485;node3:8485/mycluster</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_dsa</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/hadoop-2.7.3/data/journal</value>
    </property>
    <property>
        <name>dfs.permissions.superusergroup</name>
        <value>root</value>
        <description>Name of the superuser group</description>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
        <description>Enable automatic failover</description>
    </property>
</configuration>
Create the corresponding directories:
node1:~ # mkdir -pv /data/hadoop-2.7.3/data/{journal,tmp}

24. Configure capacity-scheduler.xml
node1:~ # vim /data/hadoop-2.7.3/etc/hadoop/capacity-scheduler.xml
<configuration>
  <property>
    <name>yarn.scheduler.capacity.maximum-applications</name>
    <value>10000</value>
    <description>
      Maximum number of applications that can be pending and running.
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.1</value>
    <description>
      Maximum percent of resources in the cluster which can be used to run
      application masters i.e. controls number of concurrent running
      applications.
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    <description>
      The ResourceCalculator implementation to be used to compare
      Resources in the scheduler.
      The default i.e. DefaultResourceCalculator only uses Memory while
      DominantResourceCalculator uses dominant-resource to compare
      multi-dimensional resources such as Memory, CPU etc.
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default</value>
    <description>
      The queues at the this level (root is the root queue).
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>100</value>
    <description>Default queue target capacity.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
    <value>1</value>
    <description>
      Default queue user limit a percentage from 0.0 to 1.0.
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>100</value>
    <description>
      The maximum capacity of the default queue.
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.state</name>
    <value>RUNNING</value>
    <description>
      The state of the default queue. State can be one of RUNNING or STOPPED.
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
    <value>*</value>
    <description>
      The ACL of who can submit jobs to the default queue.
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
    <value>*</value>
    <description>
      The ACL of who can administer jobs on the default queue.
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.node-locality-delay</name>
    <value>40</value>
    <description>
      Number of missed scheduling opportunities after which the CapacityScheduler
      attempts to schedule rack-local containers.
      Typically this should be set to number of nodes in the cluster, By default is setting
      approximately number of nodes in one rack which is 40.
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.queue-mappings</name>
    <value></value>
    <description>
      A list of mappings that will be used to assign jobs to queues
      The syntax for this list is [u|g]:[name]:[queue_name][,next mapping]*
      Typically this list will be used to map users to queues,
      for example, u:%user:%user maps all users to queues with the same name
      as the user.
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.queue-mappings-override.enable</name>
    <value>false</value>
    <description>
      If a queue mapping is present, will it override the value specified
      by the user? This can be used by administrators to place jobs in queues
      that are different than the one specified by the user.
      The default is false.
    </description>
  </property>
</configuration>

25. Configure slaves
node1:~ # vim /data/hadoop-2.7.3/etc/hadoop/slaves
node1
node2
node3
node4

26. Edit $HADOOP_HOME/sbin/hadoop-daemon.sh
node1:/data/hadoop-2.7.3 # cd /data/hadoop-2.7.3/sbin/
Add to hadoop-daemon.sh:
HADOOP_PID_DIR=/data/hdfs/pids

27. Edit $HADOOP_HOME/sbin/yarn-daemon.sh
Add to yarn-daemon.sh:
HADOOP_PID_DIR=/data/hdfs/pids
(Note: in Hadoop 2.7 the pid directory used by yarn-daemon.sh is controlled by YARN_PID_DIR, so setting YARN_PID_DIR=/data/hdfs/pids as well is advisable.)

28. Distribute
node1:/data/hadoop-2.7.3/etc/hadoop # scp -rp /data/hadoop-2.7.3 root@192.168.0.11:/data
node1:/data/hadoop-2.7.3/etc/hadoop # scp -rp /data/hadoop-2.7.3 root@192.168.0.12:/data
node1:/data/hadoop-2.7.3/etc/hadoop # scp -rp /data/hadoop-2.7.3 root@192.168.0.13:/data
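Steps 26 and 27 (and the HBase setting below) point the pid files at /data/hdfs/pids, but that directory is never created in these steps; if it is missing, the daemon scripts fall back to writing pid files under /tmp. A minimal sketch that creates it on every node, assuming root SSH access as used elsewhere in this guide (add -p 36928 if your cluster already runs sshd on the non-standard port configured for HBase/Spark):

# Create the pid directory on all four nodes.
for host in node1 node2 node3 node4; do
    ssh root@$host "mkdir -pv /data/hdfs/pids"
done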
VI. Install HBase

29. Extract HBase
node1:/data # cd /data/install
node1:/data/install # tar -zxvf hbase-1.2.5-bin.tar.gz -C /data

30. Edit $HBASE_HOME/conf/hbase-env.sh and add:
node1:/data # cd /data/hbase-1.2.5/conf
node1:/data/hbase-1.2.5/conf # vim hbase-env.sh
export HBASE_HOME=/data/hbase-1.2.5
export JAVA_HOME=/usr/jdk1.8.0_66
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/
export HBASE_LIBRARY_PATH=$HBASE_LIBRARY_PATH:$HBASE_HOME/lib/native/
# Pointing HBASE_CLASSPATH at Hadoop's etc/hadoop directory lets HBase find the Hadoop configuration,
# i.e. it ties HBase to the HDFS cluster [required - without it the HMaster will not start]
export HBASE_CLASSPATH=$HADOOP_HOME/etc/hadoop
export HBASE_MANAGES_ZK=false               # do not use HBase's bundled ZooKeeper
export HBASE_PID_DIR=/data/hdfs/pids
export HBASE_SSH_OPTS="-o ConnectTimeout=1 -p 36928"                # SSH port

31. Edit the regionservers file
node1:/data/hbase-1.2.5/conf # vim regionservers
node1
node2
node3
node4
node1:/data/hbase-1.2.5/conf #

32. Edit hbase-site.xml
node1:/data/hbase-1.2.5/conf # vim hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://mycluster/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node1,node2,node3</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>

33. Distribute
node1:/data/hbase-1.2.5/conf # scp -rp /data/hbase-1.2.5 root@192.168.0.11:/data
node1:/data/hbase-1.2.5/conf # scp -rp /data/hbase-1.2.5 root@192.168.0.12:/data
node1:/data/hbase-1.2.5/conf # scp -rp /data/hbase-1.2.5 root@192.168.0.13:/data
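After HBase has been started (step 48 below), a short hbase shell session is a convenient smoke test that the HMaster, the RegionServers and the ZooKeeper quorum are wired together. A minimal sketch; the table name smoke_test and column family cf are made up for this example:

node1:~ # hbase shell
hbase(main):001:0> status                        # expect 1 active master and 4 region servers
hbase(main):002:0> create 'smoke_test', 'cf'     # throwaway table
hbase(main):003:0> put 'smoke_test', 'row1', 'cf:msg', 'hello'
hbase(main):004:0> scan 'smoke_test'             # should print the row just written
hbase(main):005:0> disable 'smoke_test'
hbase(main):006:0> drop 'smoke_test'             # clean up
hbase(main):007:0> exit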
VII. Install Spark

34. Extract Spark
node1:/data # cd /data/install
node1:/data/install # tar -zxvf spark-2.0.2-bin-hadoop2.7.tgz -C /data

35. Rename the directory to spark-2.0.2
node1:/data # mv spark-2.0.2-bin-hadoop2.7 spark-2.0.2

36. Configure spark-env.sh
node1:/data # cd /data/spark-2.0.2/conf/
node1:/data/spark-2.0.2/conf # cp spark-env.sh.template spark-env.sh
node1:/data/spark-2.0.2/conf # vim spark-env.sh                     # add:
export JAVA_HOME=/usr/jdk1.8.0_66
export SPARK_PID_DIR=/data/spark-2.0.2/conf/pids
# memory per worker
export SPARK_WORKER_MEMORY=240g
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_INSTANCES=1
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18080 -Dspark.history.retainedApplications=3 -Dspark.history.fs.logDirectory=hdfs://mycluster/directory"
# cap the number of cores an application may claim by default
export SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=12"
export SPARK_SSH_OPTS="-p 36928 -o StrictHostKeyChecking=no $SPARK_SSH_OPTS"
# if the JVM heap is below 32 GB, compressed oops/strings can be enabled:
export SPARK_JAVA_OPTS="-XX:+UseCompressedOops -XX:+UseCompressedStrings $SPARK_JAVA_OPTS"

37. Configure spark-defaults.conf
node1:/data # cd /data/spark-2.0.2/conf/
node1:/data/spark-2.0.2/conf # cp spark-defaults.conf.template spark-defaults.conf
node1:/data/spark-2.0.2/conf # vi spark-defaults.conf
# add:
spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://mycluster/directory
spark.local.dir                  /data/spark-2.0.2/sparktmp

38. Configure slaves
node1:/data # cd /data/spark-2.0.2/conf/
node1:/data/spark-2.0.2/conf # mv slaves.template slaves
node1:/data/spark-2.0.2/conf # vim slaves
node1
node2
node3
node4
node1:/data/spark-2.0.2/conf #

39. Distribute
node1:/data/spark-2.0.2/conf # scp -rp /data/spark-2.0.2 root@192.168.0.11:/data
node1:/data/spark-2.0.2/conf # scp -rp /data/spark-2.0.2 root@192.168.0.12:/data
node1:/data/spark-2.0.2/conf # scp -rp /data/spark-2.0.2 root@192.168.0.13:/data

VIII. Startup

40. Start all ZooKeeper nodes
node1:/data # cd /data/zookeeper-3.4.6/bin
node1:/data/zookeeper-3.4.6/bin # zkServer.sh start
node2:/data/zookeeper-3.4.6/bin # zkServer.sh start
node3:/data/zookeeper-3.4.6/bin # zkServer.sh start

41. Start all JournalNodes
node1:/data # cd /data/hadoop-2.7.3
node1:/data/hadoop-2.7.3 # sbin/hadoop-daemon.sh start journalnode
node2:/data/hadoop-2.7.3 # sbin/hadoop-daemon.sh start journalnode
node3:/data/hadoop-2.7.3 # sbin/hadoop-daemon.sh start journalnode

42. Format the NameNode metadata directory (primary node, node1)
node1:/data # cd /data/hadoop-2.7.3
node1:/data/hadoop-2.7.3 # ./bin/hdfs namenode -format

43. Start the freshly formatted NameNode (primary node, node1)
node1:/data/hadoop-2.7.3 # ./sbin/hadoop-daemon.sh start namenode

44. On the NameNode that was NOT formatted, run the sync command (standby node, node2)
node2:/data/hadoop-2.7.3 # ./bin/hdfs namenode -bootstrapStandby

45. Start HDFS
node1:/data/hadoop-2.7.3 # ./sbin/hadoop-daemon.sh start namenode
node1:/data/hadoop-2.7.3 # ./sbin/start-dfs.sh

46. Start YARN:
node1:~ # $HADOOP_HOME/sbin/start-yarn.sh
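Before starting the ResourceManagers in the next step, it is worth confirming that HDFS HA actually came up: one NameNode should report "active" and the other "standby", and all four DataNodes should appear in the report. Note that with dfs.ha.automatic-failover.enabled=true, the HA state in ZooKeeper normally has to be initialized once with hdfs zkfc -formatZK (run on one NameNode before the ZKFC daemons start); that command is not shown in the steps above. A minimal check using the nn1/nn2 ids from hdfs-site.xml:

# Which NameNode is active, which is standby?
node1:~ # hdfs haadmin -getServiceState nn1
node1:~ # hdfs haadmin -getServiceState nn2
# DataNode count, capacity and under-replicated blocks at a glance.
node1:~ # hdfs dfsadmin -report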
47. Start a ResourceManager on each of the two ResourceManager hosts
node3:~ # $HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager
node4:~ # $HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager
The HDFS and YARN web consoles listen on ports 50070 and 8088 by default; open them in a browser to check the cluster state.
Stop commands:
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/stop-yarn.sh
If everything is running normally, jps shows the Hadoop services; on my machine the output is:
7312 Jps
1793 NameNode
2163 JournalNode
357 NodeManager
2696 QuorumPeerMain
14428 DFSZKFailoverController
1917 DataNode

48. Start HBase
node1:/data/hadoop-2.7.3 # cd /data/hbase-1.2.5/bin
node1:/data/hbase-1.2.5/bin # ./start-hbase.sh
node1:/data/hbase-1.2.5/bin # jps
7312 Jps
8463 HMaster
1793 NameNode
2163 JournalNode
357 NodeManager
14632 HRegionServer
2696 QuorumPeerMain
14428 DFSZKFailoverController
1917 DataNode

HBase web UI: http://node1:16010

49. Start Spark
node1:/data/hbase-1.2.5/bin # cd /data/spark-2.0.2/sbin
node1:/data/spark-2.0.2/sbin # ./start-all.sh
node1:/data/spark-2.0.2/sbin # ./start-history-server.sh
node1:/data/spark-2.0.2/sbin # jps
7312 Jps
8463 HMaster
1793 NameNode
2163 JournalNode
4901 Worker
357 NodeManager
14632 HRegionServer
2696 QuorumPeerMain
14428 DFSZKFailoverController
1917 DataNode
1722 Master
node1:/data/spark-2.0.2/sbin #

Spark Master web UI: http://node1:8080
Spark application history UI: http://node1:18080
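As a final end-to-end test, submitting the bundled SparkPi example exercises the standalone master, the workers and the event-log directory on HDFS in one go. A minimal sketch, assuming the master URL spark://node1:7077 implied by SPARK_MASTER_PORT=7077 above and the examples jar name used by the Spark 2.0.2 distribution:

# Run SparkPi on the standalone cluster; the driver output should contain "Pi is roughly 3.14...".
node1:~ # spark-submit \
    --master spark://node1:7077 \
    --class org.apache.spark.examples.SparkPi \
    /data/spark-2.0.2/examples/jars/spark-examples_2.11-2.0.2.jar 100
The finished application should then also appear in the history server at http://node1:18080.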