Spark in Production (8): Distributed Cluster – Fixing the Hadoop Cluster WebUI Not Opening by Disabling the firewalld Firewall
2017-04-25 15:11
After installing the Hadoop cluster in the previous step, we found that the page at 127.0.0.1:50070 would not open.
1. Check the WebUI locally on the master
Opening a browser directly on the cloud-platform master, 127.0.0.1:50070 loads fine, so the initial diagnosis is a firewall problem.

2. Disable the firewall
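If no browser is at hand on the master, the same local check can be scripted. This is a sketch added for illustration, not part of the original steps; it assumes curl is installed on the master, and the `check_ui` helper is a hypothetical name:

```shell
# Probe the NameNode UI locally and classify the result.
# A 200 here, combined with an unreachable page from your workstation,
# points at a firewall rather than at Hadoop itself.
check_ui() {
  # $1 = HTTP status code reported by curl
  case "$1" in
    200) echo "UI reachable" ;;
    000) echo "connection refused or blocked" ;;
    *)   echo "unexpected status $1" ;;
  esac
}

# curl prints 000 in %{http_code} when it cannot connect at all.
code=$(curl -s -o /dev/null -w "%{http_code}" --max-time 2 http://127.0.0.1:50070/ 2>/dev/null)
check_ui "${code:-000}"
```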
CentOS 7.0 uses firewalld as its default firewall, and it has to be shut down on the master and on every worker node.

systemctl stop firewalld.service        # stop firewalld
systemctl disable firewalld.service     # keep firewalld from starting at boot
firewall-cmd --state                    # check the firewall's current state
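Instead of logging in to each node in turn, the three commands can be pushed out from the master in one loop. A minimal sketch, assuming the hostnames used in this series (master, worker01–worker04) and the passwordless ssh set up in an earlier part; by default it only prints the commands for review:

```shell
# Run the firewalld shutdown on every node from the master.
NODES="master worker01 worker02 worker03 worker04"
CMD="systemctl stop firewalld.service && systemctl disable firewalld.service && firewall-cmd --state"

# Preview first; pipe the output to sh to actually execute it.
for node in $NODES; do
  echo "ssh $node \"$CMD\""
done
```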
On the master, firewalld was stopped and disabled, then HDFS was restarted with stop-dfs.sh / start-dfs.sh:

[root@master ~]# firewall-cmd --state
running
[root@master ~]# systemctl stop firewalld.service
[root@master ~]# systemctl disable firewalld.service
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'
[root@master ~]# firewall-cmd --state
not running
[root@master ~]# cd /usr/local
[root@master local]# ls
bin  etc  games  hadoop-2.6.5  include  jdk1.8.0_121  lib  lib64  libexec  rhzf_setup_scripts  rhzf_spark_setupTools  sbin  scala-2.11.8  share  src
[root@master local]# cd hadoop-2.6.5
[root@master hadoop-2.6.5]# ls
bin  etc  file:  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share  tmp
[root@master hadoop-2.6.5]# cd sbin
[root@master sbin]# ls
distribute-exclude.sh  hdfs-config.sh           refresh-namenodes.sh  start-balancer.sh    start-yarn.cmd  stop-balancer.sh    stop-yarn.cmd
hadoop-daemon.sh       httpfs.sh                slaves.sh             start-dfs.cmd        start-yarn.sh   stop-dfs.cmd        stop-yarn.sh
hadoop-daemons.sh      kms.sh                   start-all.cmd         start-dfs.sh         stop-all.cmd    stop-dfs.sh         yarn-daemon.sh
hdfs-config.cmd        mr-jobhistory-daemon.sh  start-all.sh          start-secure-dns.sh  stop-all.sh     stop-secure-dns.sh  yarn-daemons.sh
[root@master sbin]# stop-dfs.sh
Stopping namenodes on [Master]
Master: stopping namenode
worker03: stopping datanode
worker04: stopping datanode
worker01: stopping datanode
worker02: stopping datanode
master: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
[root@master sbin]# start-dfs.sh
Starting namenodes on [Master]
Master: starting namenode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-namenode-master.out
worker03: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-worker03.out
master: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-master.out
worker04: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-worker04.out
worker02: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-worker02.out
worker01: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-worker01.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-secondarynamenode-master.out
[root@master sbin]#
Last login: Wed Apr 19 16:15:55 2017 from 132.150.75.19
[root@worker01 ~]# systemctl stop firewalld.service
[root@worker01 ~]# systemctl disable firewalld.service
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'
[root@worker01 ~]# firewall-cmd --state
not running
[root@worker01 ~]#

Last login: Wed Apr 19 16:16:00 2017 from 132.150.75.19
[root@worker02 ~]# systemctl stop firewalld.service
[root@worker02 ~]# systemctl disable firewalld.service
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'
[root@worker02 ~]# firewall-cmd --state
not running
[root@worker02 ~]#

Last login: Wed Apr 19 16:16:05 2017 from 132.150.75.19
[root@worker03 ~]# systemctl stop firewalld.service
[root@worker03 ~]# systemctl disable firewalld.service
rm '/etc/systemd/system/basic.target.wants/firewalld.service'
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
[root@worker03 ~]# firewall-cmd --state
not running
[root@worker03 ~]#

Last login: Wed Apr 19 16:16:09 2017 from 132.150.75.19
[root@worker04 ~]# systemctl stop firewalld.service
[root@worker04 ~]# systemctl disable firewalld.service
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'
[root@worker04 ~]# firewall-cmd --state
not running
[root@worker04 ~]#
3. Check the WebUI page

With the firewall stopped on every node and HDFS restarted, the 127.0.0.1:50070 page opens normally.
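A quick way to confirm the port is reachable without a browser is a TCP probe. A hedged sketch: `probe_port` is a hypothetical helper, and MASTER_IP is a placeholder you must replace with your master's address when probing from outside the cluster:

```shell
# Check whether a TCP port accepts connections.
probe_port() {
  # $1 = host, $2 = port; prints "open" or "closed"
  # /dev/tcp is a bash feature; with other shells the redirection
  # simply fails and the port is reported closed.
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    exec 3>&- 3<&-
    echo "port $2 open on $1"
  else
    echo "port $2 closed on $1"
  fi
}

MASTER_IP=127.0.0.1   # placeholder -- substitute the master's real address
probe_port "$MASTER_IP" 50070
```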