Building a Big Data Platform: Setting Up a Hadoop/HBase Cluster
2017-09-06 14:30
Version requirements
Java
Version: 1.8.* (1.8.0_60). Download: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
ZooKeeper
Version: 3.4.* (zookeeper-3.4.8). Download: http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.8/
Hadoop
Version: 2.7.* (hadoop-2.7.3). Download: http://apache.fayea.com/hadoop/common/hadoop-2.7.3/
HBase
Version: 1.2.* (hbase-1.2.4). Download: http://archive.apache.org/dist/hbase/1.2.4/
Hadoop installation
Prerequisites
Passwordless SSH login
See http://www.cnblogs.com/molyeo/p/7007917.html
Java installation
See http://www.cnblogs.com/molyeo/p/7007917.html
ZooKeeper installation
See http://www.cnblogs.com/molyeo/p/7048867.html
Download
http://apache.fayea.com/hadoop/common/hadoop-2.7.3/
Extract and install
cd ~
tar -zxvf hadoop-2.7.3.tar.gz
mv hadoop-2.7.3 hadoop
Configure environment variables
vi ~/.bash_profile
export JAVA_HOME=/wls/oracle/jdk
export SCALA_HOME=/wls/oracle/scala
export ZOOKEEPER_HOME=/wls/oracle/zookeeper
export HADOOP_HOME=/wls/oracle/hadoop
export HBASE_HOME=/wls/oracle/hbase
export SPARK_HOME=/wls/oracle/spark
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$SCALA_HOME/bin:$SPARK_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH CLASSPATH JAVA_HOME SCALA_HOME ZOOKEEPER_HOME HADOOP_HOME HBASE_HOME SPARK_HOME
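After sourcing the profile, it is worth confirming that every *_HOME variable really points at a directory before continuing. A minimal sketch (the variable names match the exports above; `check_homes` is a hypothetical helper, not part of Hadoop):

```shell
# Sanity-check that each *_HOME environment variable points at an existing
# directory. Prints a warning and returns non-zero for anything missing.
check_homes() {
  missing=0
  for v in JAVA_HOME SCALA_HOME ZOOKEEPER_HOME HADOOP_HOME HBASE_HOME SPARK_HOME; do
    eval "d=\$$v"                       # indirect lookup of the variable named in $v
    if [ ! -d "$d" ]; then
      echo "WARN: $v='$d' is not a directory" >&2
      missing=1
    fi
  done
  return $missing
}
```

Usage: `source ~/.bash_profile && check_homes || echo "fix the paths above first"`.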
Hadoop configuration changes
All Hadoop configuration files live under $HADOOP_HOME/etc/hadoop. A cluster setup mainly involves changing the following files.
hadoop-env.sh
hadoop-env.sh only needs JAVA_HOME changed to the concrete path:
export JAVA_HOME=/wls/oracle/jdk
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://SZB-L0045546:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/wls/oracle/bigdata/hadoop/tmp</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>SZB-L0045546:2181,SZB-L0045551:2181,SZB-L0045552:2181</value>
  </property>
</configuration>
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>cluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.cluster</name>
    <value>SZB-L0045546,SZB-L0045551</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster.SZB-L0045546</name>
    <value>SZB-L0045546:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster.SZB-L0045546</name>
    <value>SZB-L0045546:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster.SZB-L0045551</name>
    <value>SZB-L0045551:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster.SZB-L0045551</name>
    <value>SZB-L0045551:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://SZB-L0045552:8485;SZB-L0047815:8485;SZB-L0047816:8485/cluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/wls/oracle/bigdata/hadoop/journal</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
</configuration>
slaves
SZB-L0045552
SZB-L0047815
SZB-L0047816
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
yarn-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>SZB-L0045546</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
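Once the files above are edited, the same copies must exist on every node in the cluster. A hedged sketch of a push loop (assumes passwordless SSH is already set up as linked earlier; `sync_conf` is a hypothetical helper, and `DRY_RUN=1` only prints what would run):

```shell
# Copy a configuration directory to every host named in a slaves file.
# Set DRY_RUN=1 to print the scp commands instead of executing them.
sync_conf() {
  conf_dir="$1"; slaves_file="$2"
  while read -r host; do
    [ -n "$host" ] || continue                     # skip blank lines
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "scp -r $conf_dir $host:$conf_dir"
    else
      scp -r "$conf_dir" "$host:$conf_dir" </dev/null
    fi
  done < "$slaves_file"
}
```

Usage: `sync_conf /wls/oracle/hadoop/etc/hadoop /wls/oracle/hadoop/etc/hadoop/slaves`.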
Running commands
Start the JournalNodes
cd /wls/oracle/hadoop/sbin
/wls/oracle/hadoop/sbin/hadoop-daemons.sh start journalnode
Format the NameNode
cd /wls/oracle/hadoop/bin
hdfs namenode -format
Format the failover state in ZooKeeper
cd /wls/oracle/hadoop/bin
hdfs zkfc -formatZK
Start HDFS and YARN
cd /wls/oracle/hadoop/sbin
/wls/oracle/hadoop/sbin/start-dfs.sh
/wls/oracle/hadoop/sbin/start-yarn.sh
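The first-boot order above matters: JournalNodes before the NameNode format, and the ZKFC format before start-dfs.sh. A sketch that wraps the sequence in one function (paths as used throughout this article; `start_ha_cluster` is a hypothetical wrapper, the two format steps are first boot only, and `DRY_RUN=1` prints the steps instead of running them):

```shell
# Run the HA first-boot sequence in the required order.
# DRY_RUN=1 prints each command instead of executing it.
start_ha_cluster() {
  H=${HADOOP_HOME:-/wls/oracle/hadoop}
  run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }
  run "$H/sbin/hadoop-daemons.sh" start journalnode  # 1. JournalNodes first
  run "$H/bin/hdfs" namenode -format                 # 2. format the active NameNode (first boot only)
  run "$H/bin/hdfs" zkfc -formatZK                   # 3. create the failover znode (first boot only)
  run "$H/sbin/start-dfs.sh"                         # 4. HDFS (NameNodes, DataNodes, ZKFCs)
  run "$H/sbin/start-yarn.sh"                        # 5. YARN
}
```

Usage: `DRY_RUN=1 start_ha_cluster` to preview, then `start_ha_cluster` for real.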
Stopping the Hadoop cluster
cd /wls/oracle/hadoop/sbin
/wls/oracle/hadoop/sbin/stop-yarn.sh
/wls/oracle/hadoop/sbin/stop-dfs.sh
Other commands
/wls/oracle/hadoop/sbin/hadoop-daemon.sh start namenode          # start a single NameNode
/wls/oracle/hadoop/sbin/hadoop-daemon.sh stop namenode           # stop a single NameNode
/wls/oracle/hadoop/bin/hdfs namenode -bootstrapStandby           # copy the active NameNode's metadata to the standby (run on the standby)
/wls/oracle/hadoop/sbin/hadoop-daemon.sh start namenode
/wls/oracle/hadoop/sbin/hadoop-daemon.sh start datanode          # start a single DataNode
/wls/oracle/hadoop/sbin/hadoop-daemon.sh --script hdfs start datanode
Web UIs
HDFS: http://SZB-L0045546:50070
YARN: http://SZB-L0045546:8088/cluster
HDFS file system
hdfs dfs -ls hdfs://
MapReduce test
hadoop jar /wls/oracle/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 2 5
If repeated starts and stops leave HDFS in an inconsistent state, you can try deleting the DataNode's VERSION file:
rm -f /wls/oracle/bigdata/hadoop/tmp/dfs/data/current/VERSION
HBase installation
Extract and install
tar -zxvf hbase-1.2.4-bin.tar.gz
mv hbase-1.2.4 hbase
Environment variables
The environment variables are identical to those added in the Hadoop section above; ~/.bash_profile already contains the HBASE_HOME export and PATH entry, so no further change is needed.
Configuration
HBase mainly requires changes to the following files: hbase-env.sh, hbase-site.xml, and regionservers.
hbase-env.sh
hbase-env.sh only needs JAVA_HOME added:
export JAVA_HOME=/wls/oracle/jdk
hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.master</name>
    <value>10.20.112.59:60000</value>
  </property>
  <property>
    <name>hbase.master.maxclockskew</name>
    <value>180000</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://SZB-L0045546:9000/user/oracle/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>SZB-L0045546,SZB-L0045551,SZB-L0045552</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/hbase</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
regionservers
SZB-L0045546
SZB-L0045551
SZB-L0045552
SZB-L0047815
SZB-L0047816
Operations commands
Start the cluster
/wls/oracle/hbase/bin/start-hbase.sh
Stop the cluster
/wls/oracle/hbase/bin/stop-hbase.sh
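After starting HBase, a quick way to confirm the daemons came up is to look for HMaster (on the master) or HRegionServer (on a regionserver) in the jps output. A small hypothetical helper:

```shell
# Return 0 if an HBase daemon (HMaster or HRegionServer) appears in jps output.
check_hbase() {
  jps | grep -E 'HMaster|HRegionServer' >/dev/null
}
```

Usage: `check_hbase && echo "HBase daemon running" || echo "no HBase daemon found"`.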