
Building a fully distributed Hadoop cluster on Ubuntu VMs, with Hive and HA

2016-11-16 12:00
The fully distributed Hadoop cluster described here has been set up end to end across four virtual machines. This post summarizes the whole build and the problems hit along the way; following the steps below should let you bring up a highly available distributed Hadoop cluster fairly smoothly.

Installing this family of distributed components breaks down roughly into the following steps.

Step 1. Configure mutual SSH trust between machines

   How mutual trust works: mutual trust means one machine can log in to another directly, without entering a password, using key-based authentication. The drawback of connection methods such as ftp and telnet is that they transmit in plaintext, so a man in the middle can impersonate the real server and intercept the traffic, which is a security problem. SSH (short for Secure Shell) supports two login methods. One is password login: you type a username and password. The other is key authentication: you generate a key pair for yourself and place the public key on the server. When the client connects, it sends a request identifying its public key; the server compares it against all the public keys it has stored, and if a match is found, the server encrypts a challenge with that public key and sends it back. The client decrypts the challenge with its private key and returns the result for the server to verify; if verification succeeds, communication can proceed.

In essence it is a handshake-like challenge-response round trip: A presents its public key to B, B sends A a ciphertext, A decrypts it and sends the result back to B, and the connection is allowed.
With the principle clear, configuring mutual trust takes the following steps.
Step 1.1: Create the hadoop user

sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop
      The first line creates the hadoop group; the second adds a hadoop user belonging to that group.

From the current user you can run sudo su - hadoop to switch to the hadoop user. Do not forget the dash in the middle: it loads the environment variables and other settings for the new user, which avoids a lot of unnecessary trouble. The hadoop user still cannot use sudo, though; one more file must be changed first. Using sudo at this point produces the following error:
hadoop is not in the sudoers file.  This incident will be reported.

Fix this by editing the /etc/sudoers file, which is not writable by default.
First switch to root:
sudo su -
      then make the file writable:
chmod u+w /etc/sudoers
      This grants the w (write) bit to u (the owning user); as root you can now edit the file.

Add the following line:

hadoop  ALL=(ALL:ALL) ALL


Save with :wq, then restore the file's permissions:

chmod u-w /etc/sudoers
      Switch back to the hadoop user and the sudo command now works.

Step 1.2: Change the hostname. For example, my Ubuntu VM logs in as verlink@ubuntu by default; either log in as the hadoop user directly or switch to it with sudo su - hadoop, then edit the hostname file:

sudo vim /etc/hostname
      Delete the existing name, replace it with hadoop01-namenode to mark this machine as the Hadoop NameNode, then save and exit. Every hostname used in the cluster must also be mapped to an IP address, so that commands such as ssh hadoop03-datanode work instead of forcing you to use raw IPs. The mapping is done as follows:

sudo vim /etc/hosts
      Append the required mappings at the bottom, for example:
192.168.79.183 hadoop01-namenode
      (Note that /etc/hosts lists the IP address first, then the hostname.) Save and exit.
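For reference, a mapping covering all four machines in this cluster could look like the snippet below. 192.168.79.183 is the address used in this walkthrough; the other three are placeholders to replace with your own VMs' addresses.

# /etc/hosts entries for the cluster (last three IPs are placeholders)
192.168.79.183 hadoop01-namenode
192.168.79.184 hadoop02-datanode
192.168.79.185 hadoop03-datanode
192.168.79.186 hadoop04-datanode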

      Step 1.3: Generate the key pair; this is where mutual trust is actually established.

ssh-keygen -t rsa
       Press Enter at every prompt and the key pair is generated.

cd ~/.ssh
cat id_rsa.pub
      You will see the public key's contents. Create the file that holds authorized public keys:

touch authorized_keys
chmod 600 authorized_keys

      Copy the contents of id_rsa.pub into authorized_keys, then save and exit. Every machine in the cluster must go through the steps above, and each public key must be appended to the authorized_keys file of every machine it needs to reach: if A and C want to connect to B, then A's and C's public keys must both be appended to B's authorized_keys. With that, mutual trust is configured. To verify that ssh is installed on a machine, try connecting; if you get the error

ssh: connect to host localhost port 22: Connection refused

then the ssh server is not installed; install it from Ubuntu's package repository:

sudo apt-get install openssh-server
      Now run ssh hadoop01-namenode again; after typing yes at the connection prompt you can log in to your own machine without a password, which shows that trust is configured correctly.

That completes the mutual trust setup.
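Appending keys by hand on every node gets tedious; ssh-copy-id performs the same append automatically. A minimal sketch, assuming the hostnames above are already resolvable and you can answer each node's password prompt once:

# run on each machine: push its public key to every node's authorized_keys
for host in hadoop01-namenode hadoop02-datanode hadoop03-datanode hadoop04-datanode; do
    ssh-copy-id hadoop@"$host"
done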
Step 2. Configure Hadoop in HA mode
The core Hadoop configuration files are shown below.
Here is the content of core-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
 http://www.apache.org/licenses/LICENSE-2.0 
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/tmp/</value>
    <description>A base for other temporary directories.</description>
  </property>

  <!-- ZooKeeper quorum addresses; needed for high availability -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop02-datanode:2181,hadoop03-datanode:2181,hadoop04-datanode:2181</value>
  </property>

  <!-- Set the default HDFS nameservice to ns -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns</value>
  </property>

</configuration>


Here is the content of hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
 http://www.apache.org/licenses/LICENSE-2.0 
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

  <property>
    <name>dfs.nameservices</name>
    <value>ns</value>
  </property>

  <!-- The two NameNodes in the HA pair -->
  <property>
    <name>dfs.ha.namenodes.ns</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns.nn1</name>
    <value>hadoop02-datanode:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns.nn1</name>
    <value>hadoop02-datanode:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns.nn2</name>
    <value>hadoop03-datanode:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns.nn2</name>
    <value>hadoop03-datanode:50070</value>
  </property>

  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop02-datanode:8485;hadoop03-datanode:8485;hadoop04-datanode:8485/ns</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop/journal</value>
  </property>

  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.ns</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/workspace/hdfs/name</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp/</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop/workspace/hdfs/data</value>
  </property>

  <!--
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>192.168.79.183:9001</value>
  </property>
  -->
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>

</configuration>
Here is the content of mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
 http://www.apache.org/licenses/LICENSE-2.0 
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop02-datanode:9001</value>
  </property>

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>768</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx512m</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1536</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx1024m</value>
  </property>

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>192.168.79.183:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>192.168.79.183:19888</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>/usr/local/hadoop/share/hadoop/mapreduce/*,/usr/local/hadoop/share/hadoop/mapreduce/lib/*</value>
  </property>

</configuration>


Here is the content of yarn-site.xml:

<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
 http://www.apache.org/licenses/LICENSE-2.0 
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<configuration>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <description>Classpath for typical applications.</description>
    <name>yarn.application.classpath</name>
    <value>/usr/local/hadoop/lib/*,/usr/local/hadoop/share/hadoop/common/*,/usr/local/hadoop/share/hadoop/common/lib/*,/usr/local/hadoop/share/hadoop/hdfs/*,/usr/local/hadoop/share/hadoop/hdfs/lib/*,/usr/local/hadoop/share/hadoop/httpfs/*,/usr/local/hadoop/share/hadoop/httpfs/lib/*,/usr/local/hadoop/share/hadoop/kms/*,/usr/local/hadoop/share/hadoop/kms/lib/*,/usr/local/hadoop/share/hadoop/mapreduce/*,/usr/local/hadoop/share/hadoop/mapreduce/lib/*,/usr/local/hadoop/share/hadoop/tools/*,/usr/local/hadoop/share/hadoop/tools/lib/*,/usr/local/hadoop/share/hadoop/yarn/*,/usr/local/hadoop/share/hadoop/yarn/lib/*</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop02-datanode:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop02-datanode:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop02-datanode:8035</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop02-datanode:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop02-datanode:8088</value>
  </property>

  <property>
    <description>Amount of physical memory, in MB, that can be allocated
    for containers.</description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>3096</value>
  </property>

  <property>
    <description>The class to use as the resource scheduler.</description>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  </property>

  <property>
    <description>The minimum allocation for every container request at the RM,
    in MBs. Memory requests lower than this won't take effect,
    and the specified value will get allocated at minimum.</description>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>

  <property>
    <description>The maximum allocation for every container request at the RM,
    in MBs. Memory requests higher than this won't take effect,
    and will get capped to this value.</description>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
  </property>

  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>768</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx512m</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop04-datanode</value>
  </property>

</configuration>


It has to be said that the essence of a Hadoop build is writing these configuration files; for the meaning of every property they contain, the reader can consult the official documentation.
Below are all the environment variables configured on the system path during the build. This part is also critical.
export JAVA_HOME=/usr/local/java
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
export PATH=${PATH}:/usr/lib/jsoncpp/libs/linux-gcc-4.8
export M2_HOME=/home/verlink/Desktop/apache-maven-3.1.1
export TOMCAT_HOME=/home/verlink/tomcat/apache-tomcat-7.0.69
export PATH=$PATH:$M2_HOME/bin:$TOMCAT_HOME
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

export HADOOP_HOME=/usr/local/hadoop
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

export ZOOKEEPER_HOME=/usr/local/hadoop/app/zookeeper
export PATH=$ZOOKEEPER_HOME/bin:$PATH
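
One piece the listings above assume but never show is the ZooKeeper ensemble behind ha.zookeeper.quorum. A minimal zoo.cfg sketch for the three datanodes; the paths are assumptions, and each node additionally needs a myid file (containing 1, 2, or 3) under dataDir:

# conf/zoo.cfg on hadoop02/03/04 (dataDir is an assumption)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoop/zookeeper/data
clientPort=2181
server.1=hadoop02-datanode:2888:3888
server.2=hadoop03-datanode:2888:3888
server.3=hadoop04-datanode:2888:3888

With the configuration files and environment in place, the first HA start-up has to happen in a specific order. The sketch below follows the usual Hadoop 2.x sequence; treat it as a guide rather than a transcript of this exact cluster:

# 1. start ZooKeeper on each quorum node
zkServer.sh start
# 2. start a JournalNode on hadoop02/03/04
hadoop-daemon.sh start journalnode
# 3. on the first NameNode (nn1): format HDFS and the failover znode, then start
hdfs namenode -format
hdfs zkfc -formatZK
hadoop-daemon.sh start namenode
# 4. on the second NameNode (nn2): pull the formatted metadata, then start
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode
# 5. bring up DataNodes, ZKFCs and YARN
start-dfs.sh
start-yarn.sh
# check which NameNode is active
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2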

1. Hive stores its metadata in MySQL, so hive-site.xml must be configured with the MySQL address and the username and password used to connect, and the MySQL JDBC driver jar must be added to the system classpath. A sketch of the relevant properties follows.
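The property names below are Hive's standard JDO connection settings; the host, database name, user, and password are assumptions to adjust for your environment:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://hadoop01-namenode:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>

To get the driver onto the classpath, drop the jar into Hive's lib directory, for example cp mysql-connector-java-*.jar $HIVE_HOME/lib/.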

2. Authorize access in MySQL so that machines at other IP addresses can reach it. It is worth looking up the GRANT command once and understanding it thoroughly; a sketch is shown below.
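A sketch of that grant, using MySQL 5.x syntax and the same assumed database, user, and password as the hive-site.xml above; run it in the mysql client on the metastore host:

-- allow user 'hive' to connect from any host ('%') to the hive database
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive';
FLUSH PRIVILEGES;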