Analysis of the Hadoop 1.x Startup Scripts
2016-02-28 20:16
Hadoop 1.x can be started and stopped in three modes. Each mode uses different shell scripts, but the start and stop order is the same in all of them.
1. The start-all.sh script
The script is as follows:

```bash
#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Start all hadoop daemons.  Run this on master node.

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

if [ -e "$bin/../libexec/hadoop-config.sh" ]; then
  . "$bin"/../libexec/hadoop-config.sh
else
  . "$bin/hadoop-config.sh"
fi

# start dfs daemons
"$bin"/start-dfs.sh --config $HADOOP_CONF_DIR

# start mapred daemons
"$bin"/start-mapred.sh --config $HADOOP_CONF_DIR
```

From this file we can draw the following conclusions:
1) This shell script runs only on the master node, per the comment "# Start all hadoop daemons. Run this on master node."
2) The DFS daemons are started first, then the MapReduce daemons.
3) The HDFS daemons are started by calling the start-dfs.sh script; the MapReduce daemons are started by calling the start-mapred.sh script.
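The dispatch pattern above can be sketched in a few lines. This is a minimal, runnable illustration, not the real script: the `start_dfs` and `start_mapred` functions are hypothetical stand-ins for the two sub-scripts, so the sketch runs without a Hadoop installation.

```shell
#!/usr/bin/env bash
# Sketch of the start-all.sh pattern: resolve the script's own directory,
# source the config helper if present, then call the sub-scripts in order.

bin=$(dirname "${BASH_SOURCE[0]:-$0}")   # directory holding this script
bin=$(cd "$bin" && pwd)                  # normalize to an absolute path

# Prefer the libexec helper, as the real script does; otherwise use a default.
if [ -e "$bin/../libexec/hadoop-config.sh" ]; then
  . "$bin/../libexec/hadoop-config.sh"
fi
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"$bin/../conf"}

# Hypothetical stand-ins for the real start-dfs.sh / start-mapred.sh
start_dfs()    { echo "would run: start-dfs.sh --config $1"; }
start_mapred() { echo "would run: start-mapred.sh --config $1"; }

# Same order as start-all.sh: DFS daemons first, then MapReduce daemons.
start_dfs    "$HADOOP_CONF_DIR"
start_mapred "$HADOOP_CONF_DIR"
```

Note the `$bin` dance: resolving `dirname "$0"` and then `cd`-ing into it lets the script find its sibling scripts no matter which directory it was invoked from.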
2. The start-dfs.sh script
The script is as follows:

```bash
#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Start hadoop dfs daemons.
# Optinally upgrade or rollback dfs state.
# Run this on master node.

usage="Usage: start-dfs.sh [-upgrade|-rollback]"

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

if [ -e "$bin/../libexec/hadoop-config.sh" ]; then
  . "$bin"/../libexec/hadoop-config.sh
else
  . "$bin/hadoop-config.sh"
fi

# get arguments
if [ $# -ge 1 ]; then
  nameStartOpt=$1
  shift
  case $nameStartOpt in
    (-upgrade)
      ;;
    (-rollback)
      dataStartOpt=$nameStartOpt
      ;;
    (*)
      echo $usage
      exit 1
      ;;
  esac
fi

# start dfs daemons
# start namenode after datanodes, to minimize time namenode is up w/o data
# note: datanodes will log connection errors until namenode starts
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters start secondarynamenode
```

From this we can conclude the following:
1) This script runs on the master node of the DFS filesystem.
2) If the DataNode daemons are started before the NameNode daemon, the DataNode logs keep reporting connection errors to the NameNode until the NameNode is up.
3) The HDFS daemons are started in the order NameNode, DataNode, SecondaryNameNode.
4) The NameNode is started by calling the hadoop-daemon.sh script.
5) The DataNodes and the SecondaryNameNode are started by calling the hadoop-daemons.sh script.
6) When starting the SecondaryNameNode daemon, the argument "--hosts masters" specifies which machines run the SecondaryNameNode service, which confirms that the addresses listed in the "masters" configuration file are the SecondaryNameNode service addresses.
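The difference between hadoop-daemon.sh (one daemon on the local machine) and hadoop-daemons.sh (fan-out over the hosts listed in a file) can be sketched as below. This is an assumption-laden toy: the `daemon_local` and `daemons_remote` functions are hypothetical, and the real ssh fan-out performed by slaves.sh is replaced by an echo so the sketch runs without a cluster.

```shell
#!/usr/bin/env bash
# Toy model of the two daemon launchers used by start-dfs.sh.

# Like hadoop-daemon.sh: start one daemon on the local machine only.
daemon_local() {                 # $1 = daemon name
  echo "localhost: starting $1"
}

# Like hadoop-daemons.sh: read a hosts file (the slaves file by default,
# or the masters file when --hosts masters is passed) and start the
# daemon on every listed host.  Real Hadoop does this over ssh.
daemons_remote() {               # $1 = hosts file, $2 = daemon name
  while read -r host; do
    [ -n "$host" ] && echo "$host: starting $2"
  done < "$1"
}

# Fake conf directory with a slaves file and a masters file.
hosts_dir=$(mktemp -d)
printf 'slave1\nslave2\n' > "$hosts_dir/slaves"
printf 'master1\n'        > "$hosts_dir/masters"

# Reproduce the start-dfs.sh order: NameNode, DataNodes, SecondaryNameNode.
daemon_local   namenode
daemons_remote "$hosts_dir/slaves"  datanode
daemons_remote "$hosts_dir/masters" secondarynamenode
```

The last line mirrors conclusion 6): passing the masters file to the fan-out launcher is exactly how `--hosts masters` routes the SecondaryNameNode to the hosts named in that file.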