
ccah-500 第35题 What do you have to do on the cluster to allow the worker node to join

2016-06-15 11:13
35. You have installed a cluster running HDFS and MapReduce version 2 (MRv2) on YARN.
You have no dfs.hosts entries in your hdfs-site.xml configuration file. You configure a new worker node by setting fs.default.name in its configuration files to point to the NameNode on your cluster, and you start the DataNode daemon on that worker node.
What do you have to do on the cluster to allow the worker node to join, and start storing HDFS blocks?

A. Without creating a dfs.hosts file or making any entries, run the command hadoop dfsadmin -refreshNodes on the NameNode

B. Restart the NameNode

C. Create a dfs.hosts file on the NameNode, add the worker node's name to it, then issue the command hadoop dfsadmin -refreshNodes on the NameNode

D. Nothing; the worker node will automatically join the cluster when the NameNode daemon is started

Answer: A --> D (corrected: the right answer is D)

 

Reference:

If you are using the dfs.include/mapred.include functionality, you will need to additionally add the node to the dfs.include/mapred.include file, then issue hadoop dfsadmin -refreshNodes and hadoop mradmin -refreshNodes so that the NameNode and JobTracker know of the additional node that has been added.
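For comparison, if the dfs.hosts/include allowlist were in use, the NameNode-side steps would look roughly like the sketch below; the file path and worker hostname are illustrative assumptions, not values from the question:

<!-- hdfs-site.xml on the NameNode: dfs.hosts points at the include file (illustrative path) -->
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.hosts</value>
</property>

# add the new worker's hostname to the include file, then ask the NameNode to re-read it
echo "worker01.example.com" >> /etc/hadoop/conf/dfs.hosts
hadoop dfsadmin -refreshNodes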

 

Because no dfs.hosts allowlist is configured, the DataNode will send heartbeats to the NameNode, and the NameNode will automatically become aware of the new DataNode; nothing further needs to be done, which is why the answer is D.
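A minimal sketch of the scenario in the question, showing the worker-node configuration and the check on the NameNode; the hostname, port, path, and start command below are assumptions, not values given in the question:

<!-- core-site.xml on the new worker node: point it at the cluster's NameNode -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>

# start the DataNode daemon on the worker node (exact command depends on your distribution)
hadoop-daemon.sh start datanode

# on the NameNode, the new DataNode should now show up in the live node list without any further action
hadoop dfsadmin -report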

Related question 25:
http://blog.csdn.net/tianbaochao/article/details/51672322
Tags: ccah ccah500 cloudera hadoop