
How to add a secondarynamenode node in Hadoop

2014-12-11 14:42
At the time, Hadoop itself had installed successfully, but the secondarynamenode would not start.

After some investigation, the cause turned out to be configuration: the start/stop scripts were pointing at the wrong host-list file.

The first step is to modify the shell scripts.

File path: /home/work/hadoop/bin

On the secondarynamenode line, the --hosts file name changes. Before: master; after: secondarynamenode.

[work@master bin]$ cat start-dfs.sh
#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Start hadoop dfs daemons.
# Optionally upgrade or rollback dfs state.
# Run this on master node.

usage="Usage: start-dfs.sh [-upgrade|-rollback]"

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

if [ -e "$bin/../libexec/hadoop-config.sh" ]; then
  . "$bin"/../libexec/hadoop-config.sh
else
  . "$bin/hadoop-config.sh"
fi

# get arguments
if [ $# -ge 1 ]; then
  nameStartOpt=$1
  shift
  case $nameStartOpt in
    (-upgrade)
      ;;
    (-rollback)
      dataStartOpt=$nameStartOpt
      ;;
    (*)
      echo $usage
      exit 1
      ;;
  esac
fi

# start dfs daemons
# start namenode after datanodes, to minimize time namenode is up w/o data
# note: datanodes will log connection errors until namenode starts
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode start secondarynamenode

[work@master bin]$
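For reference, the substitution can also be scripted. A hypothetical sed one-liner, assuming the original token was exactly "master" as on this cluster (stock Hadoop 1.x ships "--hosts masters", so check your copy first):

cd /home/work/hadoop/bin
# swap the host-list name on the secondarynamenode line
sed -i 's/--hosts master /--hosts secondarynamenode /' start-dfs.sh
grep -n -- '--hosts' start-dfs.sh    # confirm the change took effect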

The stop script needs the same change.

Before: master; after: secondarynamenode.

[work@master bin]$ cat stop-dfs.sh
#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Stop hadoop DFS daemons.  Run this on master node.

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

if [ -e "$bin/../libexec/hadoop-config.sh" ]; then
  . "$bin"/../libexec/hadoop-config.sh
else
  . "$bin/hadoop-config.sh"
fi

"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop namenode
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR stop datanode
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode stop secondarynamenode

[work@master bin]$
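Why this works: hadoop-daemons.sh forwards --hosts to hadoop-config.sh, which points the host list at a file under $HADOOP_CONF_DIR, and slaves.sh then runs the daemon command on every host named in that file over ssh. A paraphrased sketch of that mechanism (from memory of the Hadoop 1.x scripts, not a verbatim copy):

# Paraphrased sketch (not verbatim) of how Hadoop 1.x resolves --hosts.
run_on_hosts() {
  if [ "--hosts" = "$1" ]; then
    shift
    HADOOP_SLAVES="${HADOOP_CONF_DIR}/$1"   # e.g. conf/secondarynamenode
    shift
  fi
  # strip comments and blank lines, then fan the command out over ssh
  for slave in `sed 's/#.*$//;/^$/d' "$HADOOP_SLAVES"`; do
    ssh $HADOOP_SSH_OPTS "$slave" "$@" &
  done
  wait
}
# e.g. run_on_hosts --hosts secondarynamenode hadoop-daemon.sh start secondarynamenode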

The second step is to modify the configuration files.

File path: /home/work/hadoop/conf

Originally the slaves file listed node1 as well; after the change it contains only node2 and node3:

[work@master conf]$ cat slaves
node2
node3
[work@master conf]$

In addition, create a new file named secondarynamenode containing just node1:

[work@master conf]$ cat secondarynamenode
node1
[work@master conf]$
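Since both host-list files are just one hostname per line, they can also be written in one step; a small sketch using the paths above:

cd /home/work/hadoop/conf
printf 'node2\nnode3\n' > slaves              # datanode hosts only
printf 'node1\n' > secondarynamenode          # secondarynamenode host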

At this point the secondarynamenode configuration is complete.
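To apply the changes, restart DFS from the master. A minimal sequence (stop-dfs.sh is safe to run even though the secondarynamenode was never up):

cd /home/work/hadoop/bin
./stop-dfs.sh     # stops namenode and datanodes (and secondarynamenode, if running)
./start-dfs.sh    # namenode on master, datanodes on node2/node3, secondarynamenode on node1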

Here is the state of each node after a successful start.

The cluster is one master, with node1 as the secondarynamenode and node2 and node3 as datanodes.

On master:

[work@master conf]$ jps
13338 NameNode
13884 Jps
13554 JobTracker
[work@master conf]$

On node1:

[work@node1 ~]$ jps
9772 SecondaryNameNode
10071 Jps
[work@node1 ~]$

On node2:

[work@node2 ~]$ jps
22897 TaskTracker
22767 DataNode
23234 Jps
[work@node2 ~]$

On node3:

[work@node3 ~]$ jps
3457 TaskTracker
3327 DataNode
3806 Jps
[work@node3 ~]$
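Beyond jps, two quick sanity checks are possible: dfsadmin -report lists the live datanodes from the master, and in Hadoop 1.x the secondarynamenode serves a status page on port 50090 by default:

# On master: report live datanodes and capacity
hadoop dfsadmin -report
# From any host that can reach node1: the secondarynamenode status page
curl -s http://node1:50090/ | head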
Tags: Hadoop, Hadoop cluster