
Using sshpass to automate the passwordless SSH setup required for a fully distributed Hadoop cluster

2014-01-23 12:32

sshpass supplies the SSH password non-interactively.

1. Download the source:
wget http://sourceforge.net/projects/sshpass/files/sshpass/1.05/sshpass-1.05.tar.gz/download
2. Compile and install sshpass on every machine (master and slave):
[root@master ~]# tar zxvf sshpass-1.05.tar.gz
[root@master ~]# cd sshpass-1.05
[root@master sshpass-1.05]# ./configure
[root@master sshpass-1.05]# make && make install
3. The sshpass command: the -p option supplies a plaintext password and logs in to the remote server directly. The password can also be provided via a file, a file descriptor, or an environment variable.
[root@master ~]# sshpass
Usage: sshpass [-f|-d|-p|-e] [-hV] command parameters
   -f filename   Take password to use from file
   -d number     Use number as file descriptor for getting password
   -p password   Provide password as argument (security unwise)
   -e            Password is passed as env-var "SSHPASS"
   With no parameters - password will be taken from stdin
   -h            Show help (this screen)
   -V            Print version information
At most one of -f, -d, -p or -e should be used

1) Pass the password on the command line:
sshpass -p user_password ssh username@192.168.200.128
2) Read the password from a file:
echo "user_password" > user.passwd
sshpass -f user.passwd ssh username@192.168.200.128
3) Take the password from an environment variable:
export SSHPASS="user_password"
sshpass -e ssh username@192.168.200.128
The general form is:
sshpass -p [yourpassword] ssh [yourusername]@[host] [yourcommand]
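The file-based method (2 above) keeps the password out of `ps` output, but only helps if the file itself is unreadable to other users. A minimal sketch of preparing such a file safely; the filename user.passwd matches the example above, and "user_password" is a placeholder for the real password:

```shell
# Create the password file for `sshpass -f` with owner-only permissions.
# "user_password" is a placeholder -- substitute the real password.
umask 077                                   # new files get mode 600
printf '%s\n' "user_password" > user.passwd
stat -c '%a' user.passwd                    # prints: 600
```

Setting umask before creating the file avoids the window where the file briefly exists world-readable before a later chmod.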
4. Suppress the "yes" confirmation prompt on the first SSH login: on every machine, set StrictHostKeyChecking no in /etc/ssh/ssh_config (the default is ask):
[root@master ~]# grep "StrictHostKeyChecking" /etc/ssh/ssh_config
StrictHostKeyChecking no
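Editing /etc/ssh/ssh_config by hand on every machine is error-prone. A small sketch that applies the setting idempotently (safe to re-run); it is shown here against a scratch copy named ssh_config.local rather than the real /etc/ssh/ssh_config, so it can be tried without touching the system:

```shell
# Set StrictHostKeyChecking to "no" in an ssh_config-style file, idempotently.
# CONF points at a scratch copy; use /etc/ssh/ssh_config for real deployment.
CONF=./ssh_config.local
touch "$CONF"
if grep -q '^StrictHostKeyChecking' "$CONF"; then
    sed -i 's/^StrictHostKeyChecking.*/StrictHostKeyChecking no/' "$CONF"
else
    echo 'StrictHostKeyChecking no' >> "$CONF"
fi
grep '^StrictHostKeyChecking' "$CONF"   # prints: StrictHostKeyChecking no
```

Alternatively, `ssh -o StrictHostKeyChecking=no ...` applies the option to a single connection without modifying any file.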
Prerequisites:
1. All nodes use the same root password.
2. sshpass is installed and configured correctly on every node (master and slaves).

Usage
1. Make sure the hadoop-ssh.sh script is executable.
2. Edit hadoop-ssh.sh as needed:
2.1) ADMIN_PASS: the password for the root user
2.2) HADOOP_USER: the name of the user to create on every node
2.3) HADOOP_PASS: that user's password on every node
2.4) the list of all node hostnames:
2.4.1) Option A: list the other nodes' hostnames directly in the OTHER_HOSTS variable
2.4.2) Option B: let hostname_list_gen() generate the other nodes' hostnames from a naming rule

3. Run hadoop-ssh.sh go to perform the configuration,
and hadoop-ssh.sh test to verify it.
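For option B, hostname_list_gen() in the script builds the hostnames from a rack/node numbering rule. Run on its own, the rule used in the script (prefix gd1, racks 1..4, nodes 1..8) expands like this:

```shell
# Rule-based hostname generation, as in hostname_list_gen():
# names gd1<rack><node> for racks 1..4 and nodes 1..8 (32 hosts total).
HOSTNAME_LIST=""
for i in 1 2 3 4; do
    for j in 1 2 3 4 5 6 7 8; do
        HOSTNAME_LIST="${HOSTNAME_LIST} gd1$i$j"
    done
done
echo $HOSTNAME_LIST   # prints: gd111 gd112 ... gd147 gd148
```

Adapting the loop bounds and the prefix to the real cluster layout is all that is needed to scale this to larger node counts.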

Test run:

[root@master ~]# ./hadoop-ssh.sh
Usage: ./hadoop-ssh.sh {start|test|info}
[root@master ~]# ./hadoop-ssh.sh  go
>>> All hostnames:
master
slave
>>> Distributing system configurations
Skipping localhost master
/etc/ssh/ssh_config to slave
Warning: Permanently added 'slave,192.168.200.102' (RSA) to the list of known hosts.
/etc/hosts to slave
Warning: Permanently added 'master,192.168.200.101' (RSA) to the list of known hosts.
Changing password for user hadoop.
passwd: all authentication tokens updated successfully.
Changing password for user hadoop.
passwd: all authentication tokens updated successfully.
>>> Generating SSH key at each host
>>> All public keys copied to localhost
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEArhzopeVGYNvU3Prt8OgEmvmqS+zNJeX30779YMWaVvOKssl1oRQmPGHoqi/ofi82xKGCLIHTJDmgD79KfS+e/JSkVn24u9blrfby/UquU4LyyTRJ2zDv95DjkdIbB1AjAnYWph/lBF5xRjiJNP3M4HTh1YicZ5B6kN+inDE7j3As27ekHRmYX/9WaX6FKxdcRYIdLw+oVet8IFIc26woM4+csnZc+hS5slb78q0kvyRkI4SVPAoUYHZ95XGN76WoNIgxUis2qVlX+npTma1ByVlfllHY90STDGnbXGWC1XzfxYLGCyjeAqpCLHDLygEDU1CJGxqKBRy++ebxviUQWw== hadoop@master
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAzu0XCFKHnFeS9dmjpHNUETVQedub0G7n8PcaKF+S9HWyMcf3geQcp7avHeYjAtbY6gR8k1U25ZrrKIyC5oVNacb48Zd7xfA09Cbx+ySc0yfkmywrxSLr5AM7GTSD1sTgtYG4gEe7UTIMRwlnLiB0dHlZMYEIs5ZFKRGGaWIcwQTVNtokeaDH6VyNE5zCb0LoAVnhxjIxSql6jwNUqi742Jar6p0l5e9Y685J56jpb6Z2HVQsZRYgQw1ocDnP9FMSb5YgROK5Tl8VzEfjWgrfR7+3RMuC0HG32dXgUlSE+D0qx5jNDhy9b969QBfYzcKWh2RBdzDImY9K7yZTjTBtvw== hadoop@slave
>>> Distributing all public keys
[root@master ~]# su - hadoop
[hadoop@master ~]$ ssh slave
Warning: Permanently added 'slave,192.168.200.102' (RSA) to the list of known hosts.
[hadoop@slave ~]$ exit
logout
Connection to slave closed.

[hadoop@slave ~]$ ssh master
Warning: Permanently added 'master,192.168.200.101' (RSA) to the list of known hosts.
[hadoop@master ~]$ exit
logout
Connection to master closed.

Script contents:

[root@master ~]# cat hadoop-ssh.sh

#!/bin/bash
#by Crushlinux
#2012-07-22
ADMIN_USER="root"
ADMIN_PASS="crushlinux"
HADOOP_USER="hadoop"
HADOOP_PASS="hadoop"

LOCAL_HOST=`hostname`
OTHER_HOSTS="slave1 slave2 slave3"

function hostname_list_gen()
{
    if [ -n "$OTHER_HOSTS" ]
    then
        HOSTNAME_LIST="$LOCAL_HOST $OTHER_HOSTS"
        return
    fi

    HOSTNAME_LIST=""
    for i in {1..4}; do
        for j in {1..8}; do
            HOSTNAME_LIST="${HOSTNAME_LIST} gd1$i$j"
        done
    done
}
function hostname_list_print()
{
    echo ">>> All hostnames:"
    for host in $HOSTNAME_LIST; do
        echo $host
    done
}

function add_user()
{
    cmd="useradd $HADOOP_USER; echo '$HADOOP_PASS' | passwd $HADOOP_USER --stdin"
    for host in $HOSTNAME_LIST; do
        #echo "at $host: $cmd"
        sshpass -p $ADMIN_PASS ssh $ADMIN_USER@$host $cmd
    done
}

function ssh_auth()
{
    echo "" > $HADOOP_USER-authorized_keys
    echo ">>> Generating SSH key at each host"
    cmd_rm='rm -f ~/.ssh/id_rsa* ~/.ssh/known_hosts'
    cmd_gen='ssh-keygen -q -N "" -t rsa -f ~/.ssh/id_rsa'
    cmd_cat='cat ~/.ssh/id_rsa.pub'
    for host in $HOSTNAME_LIST; do
        sshpass -p $HADOOP_PASS ssh $HADOOP_USER@$host $cmd_rm
        sshpass -p $HADOOP_PASS ssh $HADOOP_USER@$host $cmd_gen
        sshpass -p $HADOOP_PASS ssh $HADOOP_USER@$host $cmd_cat >> $HADOOP_USER-authorized_keys
    done

    echo ">>> All public keys copied to localhost"
    #ls -l /home/$HADOOP_USER/.ssh/authorized_keys
    cat $HADOOP_USER-authorized_keys

    echo ">>> Distributing all public keys"
    cmd_chmod="chmod 600 /home/$HADOOP_USER/.ssh/authorized_keys"
    for host in $HOSTNAME_LIST; do
        sshpass -p $HADOOP_PASS scp $HADOOP_USER-authorized_keys $HADOOP_USER@$host:/home/$HADOOP_USER/.ssh/authorized_keys
        sshpass -p $HADOOP_PASS ssh $HADOOP_USER@$host $cmd_chmod
    done
}

function ssh_subtest()
{
    for host in $HOSTNAME_LIST; do
        ssh $HADOOP_USER@$host hostname
    done
}

function ssh_test()
{
    echo ">>> Testing SSH authorization for $HADOOP_USER in all nodes"
    cmd="./$0 subtest"
    for host in $HOSTNAME_LIST; do
        echo ">>> Testing SSH authorization at $host"
        sshpass -p $HADOOP_PASS scp ./$0 $HADOOP_USER@$host:~
        sshpass -p $HADOOP_PASS ssh $HADOOP_USER@$host $cmd
    done

    return
}

HOSTS_CONF="/etc/hosts"
SSH_CONF="/etc/ssh/ssh_config"

function system_conf()
{
    echo ">>> Distributing system configurations"
    for host in $HOSTNAME_LIST; do
        if [ "$host" == "$LOCAL_HOST" ]
        then
            echo "Skipping localhost $LOCAL_HOST"
            continue
        fi

        echo "$SSH_CONF to $host"
        sshpass -p $ADMIN_PASS scp $SSH_CONF $ADMIN_USER@$host:$SSH_CONF
        echo "$HOSTS_CONF to $host"
        sshpass -p $ADMIN_PASS scp $HOSTS_CONF $ADMIN_USER@$host:$HOSTS_CONF
    done
}
function print_info()
{
    echo "Version: 2011-12-20"
    return
}
case "$1" in
    go|start)
        hostname_list_gen
        hostname_list_print
        system_conf
        add_user
        ssh_auth
        RETVAL=0
        ;;
    subtest)
        hostname_list_gen
        ssh_subtest
        RETVAL=0
        ;;
    test)
        hostname_list_gen
        ssh_test
        RETVAL=0
        ;;
    info)
        print_info
        RETVAL=0
        ;;
    *)
        echo $"Usage: $0 {go|test|info}"
        RETVAL=2
esac
exit $RETVAL