
Common Hadoop Cluster Commands

2018-01-10 18:12

Common hadoop commands

Command format

[root@namenode0 hadoop-common]# hadoop
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl>  copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest>  create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.

hadoop fs

hadoop fs -cmd

cmd: the specific operation; most of these mirror their UNIX command-line counterparts

hadoop fs -mkdir /user/trunk (create a directory)

hadoop fs -ls /user (list the files in a directory)

hadoop fs -lsr /user (recursive listing)

hadoop fs -put test.txt /user/trunk (copy a file into the /user/trunk directory)

hadoop fs -put test.txt . (copy into the current HDFS directory; the directory must be created first)

hadoop fs -get /user/trunk/test.txt . (copy to the local current directory)

hadoop fs -cat /user/trunk/test.txt (view a file)

hadoop fs -touchz /user/new.txt (create an empty file in the given HDFS directory)

hadoop fs -tail /user/trunk/test.txt (view the last kilobyte of the file)

hadoop fs -rm /user/trunk/test.txt (delete a file)

hadoop fs -rmr /user/trunk (delete a directory recursively)

hadoop fs -cp /user/a.txt /user/b.txt (copy a file)

hadoop fs -mv /user/test.txt /user/ok.txt (rename a file on HDFS)

hadoop dfs -getmerge /user /home/t (merge everything under the given HDFS directory into a single file and download it locally)

hadoop fs -help ls (show the help text for the ls command)

hadoop job -kill [job-id] (kill a running hadoop job)
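A typical round trip through these fs commands might look like the following sketch (file names and paths are illustrative, not from a real cluster):

```shell
# Create a working directory and upload a local file (-p creates parents as needed)
hadoop fs -mkdir -p /user/trunk
hadoop fs -put test.txt /user/trunk/

# Inspect what landed on HDFS
hadoop fs -ls /user/trunk
hadoop fs -cat /user/trunk/test.txt

# Pull the file back to the local current directory, then clean up
hadoop fs -get /user/trunk/test.txt .
hadoop fs -rm /user/trunk/test.txt
```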

hadoop Admin

hadoop dfsadmin -safemode leave (leave safe mode)

hadoop dfsadmin -report (show the DataNode list)
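The safemode subcommand also accepts get and enter, so a maintenance workflow can be sketched like this:

```shell
# Query the current safe-mode state
hadoop dfsadmin -safemode get

# Enter safe mode explicitly (e.g. before maintenance), then leave it again
hadoop dfsadmin -safemode enter
hadoop dfsadmin -safemode leave

# Summarize cluster capacity and the per-DataNode state
hadoop dfsadmin -report
```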

hadoop jar

$ hadoop jar <jarfile> <MainClassName> inputPath outputPath
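For instance, running the stock WordCount example shipped with Hadoop 2.x might look like this (the jar path and version vary by installation; the HDFS paths are illustrative):

```shell
# Run the bundled WordCount example; the output directory must not already exist
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
  wordcount /user/trunk/input /user/trunk/output

# Inspect the reducer output afterwards
hadoop fs -cat /user/trunk/output/part-r-00000
```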

hadoop distcp

A distributed copy tool for moving large amounts of data between Hadoop filesystems. The typical use case is copying data between two Hadoop clusters, provided both clusters run the same version.

hadoop distcp hdfs://namenode1/foo hdfs://namenode2/foo
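Two variations worth knowing, sketched below with the same illustrative hosts (port 50070 assumes the default NameNode HTTP port of Hadoop 2.x):

```shell
# Only copy files that are missing or have changed at the destination
hadoop distcp -update hdfs://namenode1/foo hdfs://namenode2/foo

# Between clusters of different versions, read through the version-independent
# webhdfs protocol instead of raw hdfs://
hadoop distcp webhdfs://namenode1:50070/foo hdfs://namenode2/foo
```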

hadoop archive

An archiving tool that addresses the problem of storing large numbers of small files. It packs many small files into one large file, and the packed files remain accessible to MapReduce jobs. A packed archive consists of two parts, an index and the data; the index records the original directory structure and file metadata. In short, a HAR consolidates many files into one: the file count drops, but the total size does not shrink (there is no compression). The archive and the original files use separate blocks; no blocks are shared.

[root@namenode0 hadoop-common]#hadoop archive -archiveName test.har /my/files /my
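Note that in Hadoop 2.x the archive tool requires a -p <parent> argument, and the result stays readable through the har:// filesystem. A sketch with illustrative paths:

```shell
# Hadoop 2.x syntax: -p gives the parent path the source names are relative to
hadoop archive -archiveName test.har -p /my files /my/archived

# Archived files stay readable with ordinary fs commands via har://
hadoop fs -ls har:///my/archived/test.har
```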

hadoop job -list

View the resource allocation of running jobs

[h_chenliling@vm6-sj1-pro-had-32-107 ~]$ hadoop job -list
DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.
Total jobs:7
JobId                    State    StartTime      UserName    Queue        Priority  UsedContainers  RsvdContainers  UsedMem  RsvdMem  NeededMem  AM info
job_1514285597169_13524  RUNNING  1515578714165  badm        root.badm    NORMAL    81              1               200192M  2560M    202752M    http://VM6-SJ1-PRO-HAD-32-108:8088/proxy/application_1514285597169_13524/
job_1514285597169_13525  RUNNING  1515578739049  badm        root.badm    NORMAL    199             0               407552M  0M       407552M    http://VM6-SJ1-PRO-HAD-32-108:8088/proxy/application_1514285597169_13525/
job_1514285597169_13528  RUNNING  1515578961540  h_chencen   root.clwdev  NORMAL    166             1               339968M  45056M   385024M    http://VM6-SJ1-PRO-HAD-32-108:8088/proxy/application_1514285597169_13528/
job_1514285597169_13515  RUNNING  1515578440350  badm        root.badm    NORMAL    101             0               258048M  0M       258048M    http://VM6-SJ1-PRO-HAD-32-108:8088/proxy/application_1514285597169_13515/
job_1514285597169_13513  RUNNING  1515578415973  h_clwadmin  root.clw     NORMAL    132             0               337408M  0M       337408M    http://VM6-SJ1-PRO-HAD-32-108:8088/proxy/application_1514285597169_13513/
job_1514285597169_13434  RUNNING  1515574363305  badm        root.badm    NORMAL    205             0               522240M  0M       522240M    http://VM6-SJ1-PRO-HAD-32-108:8088/proxy/application_1514285597169_13434/
job_1514285597169_13431  RUNNING  1515574256736  badm        root.badm    NORMAL    3               0               7168M    0M

Kill a job by its job id

hadoop job -kill job_1514285597169_13524
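Since hadoop job is deprecated (as the warning in the listing output notes), the same operations are available through the mapred command; the job id below is the illustrative one from the listing:

```shell
# Preferred, non-deprecated equivalents of "hadoop job ..."
mapred job -list
mapred job -kill job_1514285597169_13524

# Check a job's status and progress by the same id
mapred job -status job_1514285597169_13524
```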

Common YARN commands

Command format

[root@namenode0 hadoop-common]# yarn
Usage: yarn [--config confdir] COMMAND
where COMMAND is one of:
  resourcemanager      run the ResourceManager
  nodemanager          run a nodemanager on each slave
  timelineserver       run the timeline server
  rmadmin              admin tools
  version              print the version
  jar <jar>            run a jar file
  application          prints application(s) report/kill application
  applicationattempt   prints applicationattempt(s) report
  container            prints container(s) report
  node                 prints node report(s)
  logs                 dump container logs
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.
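For the application and logs subcommands listed above, typical invocations look like this (the application id is illustrative; fetching logs requires log aggregation to be enabled on the cluster):

```shell
# List running YARN applications and kill one by its application id
yarn application -list
yarn application -kill application_1514285597169_13524

# Dump the aggregated container logs of a finished application
yarn logs -applicationId application_1514285597169_13524
```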

Yarn node

List all NodeManagers in the YARN cluster

[root@namenode0 hadoop-common]# yarn node -list
15/02/13 10:26:19 INFO client.RMProxy: Connecting to ResourceManager at namenode0/192.168.90.166:8032
15/02/13 10:26:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Total Nodes:7
        Node-Id  Node-State  Node-Http-Address  Number-of-Running-Containers
datanode2:57758     RUNNING     datanode2:8042                             0
datanode5:44979     RUNNING     datanode5:8042                             0
datanode4:52132     RUNNING     datanode4:8042                             0
datanode0:51931     RUNNING     datanode0:8042                             0
datanode6:50078     RUNNING     datanode6:8042                             0
datanode3:44873     RUNNING     datanode3:8042                             0
datanode1:54640     RUNNING     datanode1:8042                             0

Check the status of a specific NodeManager

[root@namenode0 hadoop-common]# yarn node -status datanode2:57758
15/02/13 10:33:28 INFO client.RMProxy: Connecting to ResourceManager at namenode0/192.168.90.166:8032
15/02/13 10:33:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Node Report :
        Node-Id : datanode2:57758
        Rack : /default-rack
        Node-State : RUNNING
        Node-Http-Address : datanode2:8042
        Last-Health-Update : Fri 13/Feb/15 10:31:30:697CST
        Health-Report :
        Containers : 0
        Memory-Used : 0MB
        Memory-Capacity : 8192MB
        CPU-Used : 0 vcores
        CPU-Capacity : 8 vcores