ccah-500 Question 35: What do you have to do on the cluster to allow the worker node to join?
2016-06-15 11:13
35. You have installed a cluster running HDFS and MapReduce version 2 (MRv2) on YARN.
You have no dfs.hosts entry(ies) in your hdfs-site.xml configuration file. You configure a new worker node by setting fs.default.name in its configuration files to point to the NameNode on your cluster, and you start the DataNode daemon on that worker node.
What do you have to do on the cluster to allow the worker node to join, and start storing HDFS blocks?
A. Without creating a dfs.hosts file or making any entries, run the command hadoop dfsadmin -refreshNodes on the NameNode
B. Restart the NameNode
C. Create a dfs.hosts file on the NameNode, add the worker node's name to it, then issue the command hadoop dfsadmin -refreshNodes on the NameNode
D. Nothing; the worker node will automatically join the cluster when NameNode daemon is started
Answer: D (corrected from the originally listed A)
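The scenario hinges on the new worker pointing its client configuration at the NameNode. A minimal sketch of that property in the worker's core-site.xml (the hostname and port are hypothetical; note that fs.default.name is the deprecated alias for fs.defaultFS on current Hadoop releases):

```xml
<!-- core-site.xml on the new worker node (hostname/port are examples) -->
<configuration>
  <property>
    <!-- deprecated alias for fs.defaultFS, as used in the question -->
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```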
Reference:
If you are using the dfs.include/mapred.include functionality, you will need to additionally add the node to the dfs.include/mapred.include file, then issue hadoop dfsadmin -refreshNodes and hadoop mradmin -refreshNodes so that the NameNode and JobTracker know of the additional node that has been added.
Since no dfs.hosts restriction is configured here, the DataNode simply sends its registration (and subsequent heartbeats) to the NameNode on startup, and the NameNode becomes aware of the new DataNode automatically.
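For contrast, if the cluster did use the include-file mechanism described in the reference, the admin-side steps would look roughly like the sketch below. The file path and hostname are hypothetical, and the hadoop commands are left commented out since they require a live cluster:

```shell
# Hypothetical include-file workflow (only needed when dfs.hosts IS configured).
HOSTS_FILE=/tmp/dfs.hosts.example     # normally something like /etc/hadoop/conf/dfs.hosts
echo "worker-node-01.example.com" >> "$HOSTS_FILE"

# Tell the NameNode (and, on MRv1, the JobTracker) to re-read the allow list:
# hadoop dfsadmin -refreshNodes
# hadoop mradmin -refreshNodes

# Confirm the new worker is listed:
grep -x "worker-node-01.example.com" "$HOSTS_FILE"
```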
Related question 25:
http://blog.csdn.net/tianbaochao/article/details/51672322