
Installing and Starting the SSH Service for a Hadoop Cluster

2014-08-25 17:59
I. Confirm that the ssh and rsync services are installed on this machine

Check whether the two services are present:

rpm -qa | grep openssh

rpm -qa | grep rsync

If they are not installed, install them with the following commands:

yum install openssh-server    (installs the SSH service)

yum install rsync    (rsync is a remote file-synchronization tool that quickly syncs files between hosts over a LAN/WAN)

service sshd restart    (starts the service)

Note: once every server has these two services installed, the machines can log in to one another using password authentication.
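The check-and-install step above can be sketched as one idempotent loop. This is only an illustration for a RHEL/CentOS-style system like the one in this article; the command-to-package mapping (ssh provided by openssh-server, rsync by rsync) is an assumption stated here, not taken from the original text:

```shell
# Check each required command; only suggest the yum install when one is missing.
out=""
for cmd in ssh rsync; do
  if command -v "$cmd" >/dev/null 2>&1; then
    out="$out$cmd: already installed
"
  else
    out="$out$cmd: missing, run: yum install -y openssh-server rsync
"
  fi
done
printf '%s' "$out"
```

Running the loop instead of installing unconditionally keeps the step safe to repeat on every node of the cluster.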

II. Configure the Master for passwordless login to all Slaves

1. Passwordless login to the local machine

1) How SSH passwordless login works

The Master (NameNode / JobTracker) acts as the client. To connect to a Slave (DataNode / TaskTracker) with public-key authentication instead of a password, generate a key pair on the Master, consisting of one public key and one private key, and then copy the public key to every Slave. When the Master connects to a Slave over SSH, the Slave generates a random number, encrypts it with the Master's public key, and sends it to the Master. The Master decrypts it with its private key and returns the decrypted value to the Slave; once the Slave confirms the value is correct, it allows the Master to connect. That is the public-key authentication handshake, and no password has to be typed by the user at any point. The essential step is copying the Master's public key to every Slave.
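The encrypt-with-public-key / decrypt-with-private-key exchange described above can be imitated with plain openssl commands. This is only an illustration of the idea, not what ssh actually runs on the wire, and every file name below is made up for the demo:

```shell
set -e
tmp=$(mktemp -d)
# Stand-in for the Master's key pair (ssh-keygen produces the real one)
openssl genrsa -out "$tmp/master.key" 2048 2>/dev/null
openssl rsa -in "$tmp/master.key" -pubout -out "$tmp/master.pub" 2>/dev/null
# "Slave" side: generate a random challenge, encrypt it with the Master's public key
head -c 16 /dev/urandom | base64 > "$tmp/challenge"
openssl pkeyutl -encrypt -pubin -inkey "$tmp/master.pub" \
  -in "$tmp/challenge" -out "$tmp/challenge.enc"
# "Master" side: decrypt with the private key and send the plaintext back
openssl pkeyutl -decrypt -inkey "$tmp/master.key" \
  -in "$tmp/challenge.enc" -out "$tmp/reply"
# "Slave" side: the reply must match the original challenge
cmp -s "$tmp/challenge" "$tmp/reply" && auth=yes
echo "authentication: ${auth:-failed}"
```

Only the holder of the private key can produce the correct reply, which is why no password ever needs to cross the network.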

2) Generate a key pair on the Master node

Log in to the Master node as the hadoop user (be sure to switch to the hadoop user) and generate the key pair. This produces the id_rsa and id_rsa.pub pair, stored by default under the "/home/hadoop/.ssh" directory.

[hadoop@Master ~]$ ssh-keygen -t rsa -P ''

3) Append the public key to the authorized keys and fix the permissions

Append the public key id_rsa.pub to the authorized key file (authorized_keys):

[hadoop@Master .ssh]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Change the permissions on authorized_keys:

[hadoop@Master .ssh]$ chmod 600 ~/.ssh/authorized_keys

Note: the permission change is mandatory; without it, passwordless login will not work.

4) Edit the local sshd configuration file

Log in as root and edit the following file:

[root@Master ~]# vi /etc/ssh/sshd_config

Find the following lines and remove the comment character "#" in front of them:

RSAAuthentication yes

PubkeyAuthentication yes

AuthorizedKeysFile .ssh/authorized_keys

Restart the SSH service:

[root@Master ~]# service sshd restart

5) Log out of root and verify as the ordinary hadoop user

If you see output like the following, it worked:

[hadoop@Master ~]$ ssh localhost

Last login: Wed Jul 30 21:14:34 2014 from localhost.localdomain

[hadoop@Master ~]$

2. Passwordless login to the other Slave machines

1) Copy the public key

Copy the public key id_rsa.pub to the Salve1 machine. Because passwordless login has not been set up yet, the connection prompts for the hadoop user's password on Salve1. After a successful copy, the public key appears in Salve1's "/home/hadoop" directory:

[hadoop@Master ~]$ scp ~/.ssh/id_rsa.pub hadoop@192.168.137.128:~/

2) On Salve1, create the .ssh directory and change its permissions to 700

[hadoop@Salve1 ~]$ mkdir ~/.ssh

[hadoop@Salve1 ~]$ chmod 700 ~/.ssh

Note: if you skip this step, then even with the "authorized_keys" permissions set as described earlier, "/etc/ssh/sshd_config" configured, and the sshd service restarted, "ssh localhost" on the Master will log in without a password while logging in to Slave1.Hadoop still prompts for one, and the cause is precisely the wrong permissions on the ".ssh" directory. When the system creates ".ssh" itself during SSH setup, it is automatically given mode 700; a hand-created directory keeps its group and other permission bits, which makes passwordless RSA login fail.
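The permission rules from that note can be rehearsed in a few lines. The throwaway directory below stands in for /home/hadoop so the snippet does not touch a real account:

```shell
# Demonstrate the modes sshd expects: 700 on .ssh, 600 on authorized_keys.
home=$(mktemp -d)               # stand-in for /home/hadoop
mkdir -p "$home/.ssh"
chmod 700 "$home/.ssh"          # owner-only access on the directory
touch "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"   # owner read/write only on the key file
stat -c '%a %n' "$home/.ssh" "$home/.ssh/authorized_keys"
```

Any group- or world-writable bit on either path is enough for sshd to fall back to password prompts, which is exactly the failure mode the note describes.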

3) Append to the authorization file "authorized_keys" and change its permissions to 600

[hadoop@Salve1 ~]$ cat ~/id_rsa.pub >> ~/.ssh/authorized_keys

[hadoop@Salve1 ~]$ chmod 600 ~/.ssh/authorized_keys

4) As root, edit "/etc/ssh/sshd_config" on Salve1

Log in as root and edit the following file:

[root@Salve1 ~]# vi /etc/ssh/sshd_config

Find the following lines and remove the comment character "#" in front of them:

RSAAuthentication yes

PubkeyAuthentication yes

AuthorizedKeysFile .ssh/authorized_keys

Restart the SSH service:

[root@Salve1 ~]# service sshd restart

5) From Master.Hadoop, log in to Salve1.Hadoop over SSH without a password

ssh <remote-server-IP>

[hadoop@Master ~]$ ssh 192.168.137.128

Last login: Thu Jul 31 03:17:49 2014 from master.hadoop

[hadoop@Salve1 ~]$

If you see the output above, the login succeeded.

Delete id_rsa.pub from the /home/hadoop directory on Salve1:

[hadoop@Salve1 ~]$ rm ~/id_rsa.pub

Apply the same configuration to the other Slave servers. The result:

[hadoop@Master ~]$ ssh 192.168.137.129

Last login: Wed Jul 30 21:43:41 2014 from master.hadoop

[hadoop@Salve2 ~]$

III. Configure all Slaves for passwordless login to the Master

The principle is the same as before: generate a key pair on the Slave, append the public key to the Slave's own authorized keys, copy the public key to the Master server, and append it to the Master's authorized keys.
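Those Slave-side steps can be rehearsed locally with a throwaway key pair. The paths below are illustrative; on a real cluster the last step would be the scp shown further down (or the ssh-copy-id shortcut, which bundles the copy and append into one command):

```shell
tmp=$(mktemp -d)
# 1) generate the pair non-interactively with an empty passphrase
ssh-keygen -t rsa -N '' -f "$tmp/id_rsa" -q
# 2) append the public key to a local authorized_keys and lock it down
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys"
chmod 600 "$tmp/authorized_keys"
# 3) on a real cluster: scp "$tmp/id_rsa.pub" hadoop@192.168.137.120:~/
keys=$(grep -c 'ssh-rsa' "$tmp/authorized_keys")
```

Using -N '' and -f makes ssh-keygen scriptable, so the same three lines can run unattended on every Slave.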

1) Generate the key pair

[hadoop@Salve2 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hadoop/.ssh/id_rsa.

Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

The key fingerprint is:

f9:56:b1:20:21:1f:71:17:fa:ba:19:c4:1b:70:97:0a hadoop@Salve2.Hadoop

The key's randomart image is:

+--[ RSA 2048]----+
|  . +.. o.       |
|   o + o .       |
|  E + +          |
|   B = o         |
|    S = +        |
|     o =         |
|      *          |
|     . +         |
|      o          |
+-----------------+

2) Append the public key to the authorized keys

[hadoop@Salve2 ~]$ cd .ssh/

[hadoop@Salve2 .ssh]$ ll

total 12

-rw-------. 1 hadoop hadoop  804 Jul 31 18:48 authorized_keys
-rw-------. 1 hadoop hadoop 1675 Jul 31 21:09 id_rsa
-rw-r--r--. 1 hadoop hadoop  402 Jul 31 21:09 id_rsa.pub

[hadoop@Salve2 .ssh]$ cat id_rsa.pub >> authorized_keys

[hadoop@Salve2 .ssh]$

3) Copy the public key to the Master host

[hadoop@Salve2 .ssh]$ scp id_rsa.pub hadoop@192.168.137.120:~/

The authenticity of host '192.168.137.120 (192.168.137.120)' can't be established.

RSA key fingerprint is e7:24:ea:f5:fe:a6:be:bf:35:dc:04:16:40:04:4c:6c.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '192.168.137.120' (RSA) to the list of known hosts.

hadoop@192.168.137.120's password:

id_rsa.pub                                   100%  402   0.4KB/s   00:00

[hadoop@Salve2 .ssh]$

4) On the Master host, append Salve2's public key to the authorized keys, then delete the copied key file

[hadoop@Master ~]$ ll

total 24

-rw-r--r-- 1 hadoop hadoop 402 Jul 30 21:16 hadoop@192.168.137.129
-rw-r--r-- 1 hadoop hadoop 402 Jul 31 21:21 id_rsa.pub
-rw-r--r-- 1 hadoop hadoop  33 Jul 28 20:52 warning.txt

[hadoop@Master ~]$ cat id_rsa.pub >> ~/.ssh/authorized_keys

[hadoop@Master ~]$ rm id_rsa.pub

[hadoop@Master ~]$ ll

total 16

-rw-r--r-- 1 hadoop hadoop 402 Jul 30 21:16 hadoop@192.168.137.129
-rw-r--r-- 1 hadoop hadoop  33 Jul 28 20:52 warning.txt

[hadoop@Master ~]$

5) Verify the setup

[hadoop@Salve2 .ssh]$ ssh 192.168.137.120

Last login: Thu Jul 31 21:16:28 2014 from salve1.hadoop

[hadoop@Master ~]$

Salve2 now logs in to the Master host without a password.

IV. Passwordless login between the individual Slave hosts

Set this up the same way as above.
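For a larger cluster, the full mesh of pairwise logins is easier to reason about as a loop. The snippet below only prints the commands that would be run; the host list and the ssh-copy-id shortcut are illustrative, not taken from this article:

```shell
# Hypothetical host list: Master plus the two Slaves used in this article.
hosts="192.168.137.120 192.168.137.128 192.168.137.129"
plan=$(for src in $hosts; do
  for dst in $hosts; do
    [ "$src" = "$dst" ] && continue   # no host needs a key for itself
    echo "on $src: ssh-copy-id hadoop@$dst"
  done
done)
printf '%s\n' "$plan"
```

For n hosts this is n*(n-1) key copies, which is why generating one key pair per host and distributing it in a loop scales better than configuring each pair by hand.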

This article is copyrighted; if you reproduce it, please say so and include the original URL: http://blog.sina.com.cn/s/blog_7d4174ce0102uywn.html