
Detailed Spark cluster setup and troubleshooting (Part 2)

2016-11-16 15:51
(2) Configure passwordless SSH access between the cluster machines

master node
root@master:/home# su - spark
spark@master:~$
spark@master:~$ ssh-keygen -t rsa
# press Enter at every prompt to accept the defaults
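If you prefer not to answer the prompts one by one, the key pair can also be generated non-interactively; this is just an equivalent shortcut, not a step from the original procedure (-N "" sets an empty passphrase, -f chooses the default key path):

spark@master:~$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa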



spark@master:~$ cd .ssh/
spark@master:~/.ssh$ ls
id_rsa  id_rsa.pub
spark@master:~/.ssh$  cat id_rsa.pub > authorized_keys
spark@master:~/.ssh$ scp spark@master:~/.ssh/id_rsa.pub ./master_rsa.pub



spark@master:~/.ssh$ ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts  master_rsa.pub
spark@master:~/.ssh$ cat master_rsa.pub >>authorized_keys
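If passwordless login still asks for a password later on, a common cause is overly loose permissions: sshd ignores authorized_keys when the .ssh directory or the file itself is writable by others. Tightening the modes on every node is a safe precaution (not shown in the original steps):

spark@master:~/.ssh$ chmod 700 ~/.ssh
spark@master:~/.ssh$ chmod 600 ~/.ssh/authorized_keys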
worker1 node
Execute the same steps as on the master node (see the sketch below).

worker2 node: execute the same steps as on the master node.
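For reference, a sketch of what "the same steps" look like on worker1, reading "same" as also pulling the master's public key (which the later master-to-worker logins rely on); worker2 is identical apart from the prompt:

spark@worker1:~$ ssh-keygen -t rsa
spark@worker1:~$ cd .ssh/
spark@worker1:~/.ssh$ cat id_rsa.pub > authorized_keys
spark@worker1:~/.ssh$ scp spark@master:~/.ssh/id_rsa.pub ./master_rsa.pub
spark@worker1:~/.ssh$ cat master_rsa.pub >> authorized_keys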

After completing the above steps:
master node:
spark@master:~/.ssh$ scp spark@worker1:~/.ssh/id_rsa.pub ./worker1_rsa.pub
Note: run these commands in the ~/.ssh directory.



spark@master:~/.ssh$ cat worker1_rsa.pub >>authorized_keys
spark@master:~/.ssh$ scp spark@worker2:~/.ssh/id_rsa.pub ./worker2_rsa.pub



spark@master:~/.ssh$ cat worker2_rsa.pub >>authorized_keys
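At this point the master's authorized_keys should hold one line per appended public key (four lines if you followed the steps above exactly, since the master's own key was written twice). A quick, purely optional sanity check:

spark@master:~/.ssh$ wc -l authorized_keys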
worker1 node
spark@worker1:~/.ssh$ scp spark@worker2:~/.ssh/id_rsa.pub ./worker2_rsa.pub
cat worker2_rsa.pub >>authorized_keys
worker2 node
spark@worker2:~/.ssh$ scp spark@worker1:~/.ssh/id_rsa.pub ./worker1_rsa.pub
cat worker1_rsa.pub >>authorized_keys
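As an aside, the whole key exchange can usually be shortened with ssh-copy-id, which appends the local public key to the remote authorized_keys and fixes its permissions; it would have to be run on each node once per peer. This is an alternative to, not part of, the procedure above:

spark@master:~$ ssh-copy-id spark@worker1
spark@master:~$ ssh-copy-id spark@worker2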
Verify that the configuration works

From the master node, log in to itself, to worker1, and to worker2 (the first connection may ask for a password or host-key confirmation; after exiting and ssh-ing in again, no password should be needed)
spark@master:~/.ssh$ ssh master



spark@master:~/.ssh$ ssh worker1



spark@master:~/.ssh$ ssh worker2



From the worker1 node, log in to itself, to the master node, and to worker2 (again, the first connection may ask for a password; subsequent logins should not)
spark@worker1:~/.ssh$ ssh master



spark@worker1:~/.ssh$ ssh worker1



spark@worker1:~/.ssh$ ssh worker2



From the worker2 node, log in to itself, to the master node, and to worker1 (again, the first connection may ask for a password; subsequent logins should not)
spark@worker2:~/.ssh$ ssh master



spark@worker2:~/.ssh$ ssh worker1



spark@worker2:~/.ssh$ ssh worker2



Note: after logging in to another node, always remember to exit before continuing.
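Alternatively, to check all three connections from the master without opening (and having to exit) interactive shells, you can run a single remote command per host; ssh returns as soon as the command finishes. A small optional check:

spark@master:~$ for h in master worker1 worker2; do ssh $h hostname; done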
In the next part, we will begin installing hadoop...
Tags: cluster hadoop spark