
Analysis of Hadoop 1.x Startup Scripts

2016-02-28 20:16
    Hadoop 1.x can be started and stopped in three different ways, each backed by its own set of shell scripts, but the order in which the daemons are started and stopped is the same in all three.
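As a quick illustration (assuming a standard Hadoop 1.x installation under $HADOOP_HOME; adjust paths to your environment), the three modes correspond roughly to the following invocations:

# Mode 1: start every daemon in the cluster with one command (run on the master)
$HADOOP_HOME/bin/start-all.sh

# Mode 2: start the HDFS and MapReduce daemons separately
$HADOOP_HOME/bin/start-dfs.sh
$HADOOP_HOME/bin/start-mapred.sh

# Mode 3: start individual daemons host by host
$HADOOP_HOME/bin/hadoop-daemon.sh start namenode
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode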

1. The start-all.sh script

The script is as follows:

#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Start all hadoop daemons.  Run this on master node.

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

if [ -e "$bin/../libexec/hadoop-config.sh" ]; then
  . "$bin"/../libexec/hadoop-config.sh
else
  . "$bin/hadoop-config.sh"
fi

# start dfs daemons
"$bin"/start-dfs.sh --config $HADOOP_CONF_DIR

# start mapred daemons
"$bin"/start-mapred.sh --config $HADOOP_CONF_DIR
From this script we can draw the following conclusions:

1) This shell script runs only on the master node, as its comment states: "# Start all hadoop daemons.  Run this on master node."

2) It first starts the daemons of the DFS file system, and then the daemons of the MapReduce framework.

3) The HDFS daemons are started by calling the start-dfs.sh shell script; the MapReduce daemons are started by calling the start-mapred.sh shell script.
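For example, because both child scripts receive the same --config value that hadoop-config.sh resolved, the whole cluster can be started against a non-default configuration directory (the /etc/hadoop-conf path below is just a placeholder):

# Run on the master node; the --config directory is forwarded
# unchanged to start-dfs.sh and start-mapred.sh
$HADOOP_HOME/bin/start-all.sh --config /etc/hadoop-conf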

2. The start-dfs.sh script

The script is as follows:

#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Start hadoop dfs daemons.
# Optionally upgrade or rollback dfs state.
# Run this on master node.

usage="Usage: start-dfs.sh [-upgrade|-rollback]"

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

if [ -e "$bin/../libexec/hadoop-config.sh" ]; then
  . "$bin"/../libexec/hadoop-config.sh
else
  . "$bin/hadoop-config.sh"
fi

# get arguments
if [ $# -ge 1 ]; then
  nameStartOpt=$1
  shift
  case $nameStartOpt in
    (-upgrade)
      ;;
    (-rollback)
      dataStartOpt=$nameStartOpt
      ;;
    (*)
      echo $usage
      exit 1
      ;;
  esac
fi

# start dfs daemons
# start namenode after datanodes, to minimize time namenode is up w/o data
# note: datanodes will log connection errors until namenode starts
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters start secondarynamenode
From this script we can conclude the following:

1) This script runs on the master node of the DFS file system.

2) If the DataNode daemons were started before the NameNode daemon, the DataNode logs would keep reporting errors connecting to the NameNode until the NameNode comes up.

3) The HDFS daemons are started in the order NameNode, DataNode, SecondaryNameNode. (Note that the in-script comment "start namenode after datanodes" does not match what the code actually does: the NameNode is started first.)

4) The NameNode is started by calling the hadoop-daemon.sh script;

5) The DataNode and SecondaryNameNode daemons are started by calling the hadoop-daemons.sh script (see the sketch after this list for how the two scripts relate).

6) When the SecondaryNameNode daemon is started, the --hosts masters argument specifies which machines run the SecondaryNameNode service, which confirms that the addresses configured in the masters file are the SecondaryNameNode hosts.
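The difference between the two helper scripts is that hadoop-daemon.sh starts a single daemon on the local machine, while hadoop-daemons.sh fans the same command out over ssh to every host in a host-list file (conf/slaves by default, or conf/masters when --hosts masters is given). Below is a minimal sketch of that fan-out, assuming passwordless ssh and the standard Hadoop 1.x layout; the actual upstream script delegates to slaves.sh and differs in detail:

#!/usr/bin/env bash
# Sketch: run hadoop-daemon.sh with the given arguments on every
# host listed in the host file, stripping comment lines first.
HOSTLIST="$HADOOP_CONF_DIR/slaves"   # or $HADOOP_CONF_DIR/masters
for host in $(sed 's/#.*//' "$HOSTLIST"); do
  ssh "$host" "cd $HADOOP_HOME; bin/hadoop-daemon.sh --config $HADOOP_CONF_DIR $*" &
done
wait

Finally, note that the [-upgrade|-rollback] option accepted by start-dfs.sh is forwarded to the NameNode (and, for -rollback, to the DataNodes as well), so an HDFS metadata upgrade or rollback is started with:

# Start HDFS and begin a metadata upgrade (run on the master)
$HADOOP_HOME/bin/start-dfs.sh -upgrade

# Start HDFS and roll back to the pre-upgrade state
$HADOOP_HOME/bin/start-dfs.sh -rollback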