Introduction to Hadoop Shell Commands
2015-02-03 16:02
This post covers the Hadoop filesystem shell commands; most of them behave much like their Linux counterparts. Below is a listing of my Hadoop home directory.
Listing command: hadoop fs -ls /home/hadoop/
drwxr-xr-x - hadoop supergroup 0 2013-11-30 17:51 /home/hadoop/dir
drwxr-xr-x - hadoop supergroup 0 2013-11-30 17:48 /home/hadoop/input
-rw-r--r-- 1 hadoop supergroup 64 2013-11-30 17:50 /home/hadoop/ouput
drwxr-xr-x - hadoop supergroup 0 2013-11-29 22:50 /home/hadoop/output
drwxr-xr-x - hadoop supergroup 0 2013-11-29 22:50 /home/hadoop/tmp
View help for all commands: hadoop fs -help
cat
Usage: hadoop fs -cat URI [URI …]
Prints the contents of files in the filesystem to stdout.
Example:
hadoop fs -cat /home/hadoop/input/content.txt
Exit Code:
Returns 0 on success and -1 on error.
chgrp
Usage: hadoop fs -chgrp [-R] GROUP URI [URI …]
Change group association of files. With -R, make the change recursively through
the directory structure. The user must be the owner of files, or else a super-user. Additional information is in the Permissions
User Guide.
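No example is given above, so here is a minimal sketch; the group name staff is illustrative and must already exist:
hadoop fs -chgrp staff /home/hadoop/input/content.txt
hadoop fs -chgrp -R staff /home/hadoop/input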
chmod
Usage: hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI …]
Change the permissions of files. With -R, make the change recursively through the
directory structure. The user must be the owner of the file, or else a super-user. Additional information is in the Permissions
User Guide.
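No example is given above; a minimal sketch (the modes are illustrative; any octal or symbolic mode works):
hadoop fs -chmod 644 /home/hadoop/input/content.txt
hadoop fs -chmod -R 755 /home/hadoop/input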
chown
Usage: hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI …]
Change the owner of files. With -R, make the change recursively through the directory
structure. The user must be a super-user. Additional information is in the Permissions User Guide.
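No example is given above; a minimal sketch using the hadoop user and supergroup group shown in the listing at the top of this post:
hadoop fs -chown hadoop:supergroup /home/hadoop/input/content.txt
hadoop fs -chown -R hadoop /home/hadoop/input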
copyFromLocal
Usage: hadoop fs -copyFromLocal <localsrc> URI
Example:
hadoop fs -copyFromLocal /home/hadoop/address /home/hadoop/input
Similar to put command,
except that the source is restricted to a local file reference.
copyToLocal
Usage: hadoop fs -copyToLocal [-ignorecrc] [-crc] URI <localdst>
Example:
hadoop fs -copyToLocal /home/hadoop/input/content.txt /home/hadoop/mylocal
Similar to get command,
except that the destination is restricted to a local file reference.
cp
Usage: hadoop fs -cp URI [URI …] <dest>
Copy files from source to destination. This command allows multiple sources as
well in which case the destination must be a directory.
Examples:
hadoop fs -cp /home/hadoop/input/content.txt /home/hadoop/ouput
hadoop fs -cp /home/hadoop/input/address /home/hadoop/ouput /home/hadoop/dir
Exit Code:
Returns 0 on success and -1 on error.
du
Usage: hadoop fs -du URI [URI …]
Displays the aggregate length of files contained in the directory, or the length of
a file in case it's just a file.
Example:
hadoop fs -du /home/hadoop/dir /home/hadoop/input
Output:
Found 2 items
65 hdfs://localhost:9000/home/hadoop/dir/address
64 hdfs://localhost:9000/home/hadoop/dir/ouput
Found 2 items
65 hdfs://localhost:9000/home/hadoop/input/address
64 hdfs://localhost:9000/home/hadoop/input/content.txt
Exit Code:
Returns 0 on success and -1 on error.
dus
Usage: hadoop fs -dus <args>
Example:
hadoop fs -dus /home/hadoop/dir
Output:
hdfs://localhost:9000/home/hadoop/dir    129
Displays a summary of file lengths.
expunge: manually empty the filesystem trash
Usage: hadoop fs -expunge
Empty the Trash. Refer to HDFS
Design for more information on Trash feature.
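The command takes no arguments; for example:
hadoop fs -expunge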
get
Usage: hadoop fs -get [-ignorecrc] [-crc] <src> <localdst>
Copy files to the local file system. Files that fail the CRC check may be copied
with the -ignorecrc option. Files and CRCs may be copied using the -crc option.
Examples:
hadoop fs -get /home/hadoop/input/content.txt /home/hadoop/mylocal/
hadoop fs -get hdfs://nn.example.com/user/hadoop/file localfile
Exit Code:
Returns 0 on success and -1 on error.
getmerge
Usage: hadoop fs -getmerge <src> <localdst> [addnl]
Takes a source directory and a destination file as input and concatenates files
in src into the destination local file. Optionally addnl can be set to enable adding a newline character at the end of each file.
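No example is given above; a minimal sketch merging the input directory from my listing into one local file (the local destination path is illustrative; the optional addnl argument from the usage line adds a newline after each concatenated file):
hadoop fs -getmerge /home/hadoop/input /home/hadoop/mylocal/merged.txt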
ls
Usage: hadoop fs -ls <args>
For a file returns stat on the file with the following format:
filename <number of replicas> filesize modification_date modification_time permissions userid groupid
For a directory it returns a list of its direct children, as in Unix. A directory is listed as:
dirname <dir> modification_date modification_time permissions userid groupid
Example:
hadoop fs -ls /home/hadoop/input/ /home/hadoop/ouput/
Output:
Found 2 items
-rw-r--r-- 1 hadoop supergroup 65 2013-11-30 17:48 /home/hadoop/input/address
-rw-r--r-- 1 hadoop supergroup 64 2013-11-29 22:48 /home/hadoop/input/content.txt
Found 1 items
-rw-r--r-- 1 hadoop supergroup 64 2013-11-30 17:50 /home/hadoop/ouput
Exit Code:
Returns 0 on success and -1 on error.
lsr
Usage: hadoop fs -lsr <args>
Example:
hadoop fs -lsr /home/hadoop/input/ /home/hadoop/ouput/
Output:
-rw-r--r-- 1 hadoop supergroup 65 2013-11-30 17:48 /home/hadoop/input/address
-rw-r--r-- 1 hadoop supergroup 64 2013-11-29 22:48 /home/hadoop/input/content.txt
-rw-r--r-- 1 hadoop supergroup 64 2013-11-30 17:50 /home/hadoop/ouput
Recursive version of ls. Similar to Unix ls -R.
mkdir
Usage: hadoop fs -mkdir <paths>
Takes path URIs as arguments and creates directories. The behavior is much like
Unix mkdir -p, creating parent directories along the path.
Examples:
hadoop fs -mkdir /home/hadoop/input/ /home/hadoop/ouput/
hadoop fs -mkdir hdfs://nn1.example.com/user/hadoop/dir hdfs://nn2.example.com/user/hadoop/dir
Exit Code:
Returns 0 on success and -1 on error.
moveFromLocal
Usage: hadoop fs -moveFromLocal <src> <dst>
Displays a "not implemented" message.
mv
Usage: hadoop fs -mv URI [URI …] <dest>
Moves files from source to destination. This command allows multiple sources as
well in which case the destination needs to be a directory. Moving files across filesystems is not permitted.
Examples:
hadoop fs -mv /home/hadoop/input/lzw /home/hadoop/output
hadoop fs -mv hdfs://nn.example.com/file1 hdfs://nn.example.com/file2 hdfs://nn.example.com/file3 hdfs://nn.example.com/dir1
Exit Code:
Returns 0 on success and -1 on error.
put
Usage: hadoop fs -put <localsrc> ... <dst>
Copy single src, or multiple srcs from local file system to the destination filesystem.
Also reads input from stdin and writes to destination filesystem.
hadoop fs -put /home/hadoop/lzw /home/hadoop/input
hadoop fs -put /home/hadoop/lzw /home/hadoop/lzw1 /home/hadoop/input
hadoop fs -put localfile hdfs://nn.example.com/hadoop/hadoopfile
hadoop fs -put - hdfs://nn.example.com/hadoop/hadoopfile
Reads the input from stdin.
Exit Code:
Returns 0 on success and -1 on error.
rm: delete files
Usage: hadoop fs -rm URI [URI …]
Delete files specified as args. This is not recursive and will not remove non-empty
directories; refer to rmr for recursive deletes.
Example:
hadoop fs -rm /home/hadoop/output/lzw
Exit Code:
Returns 0 on success and -1 on error.
rmr: delete directories
Usage: hadoop fs -rmr URI [URI …]
Recursive version of delete.
Examples:
hadoop fs -rmr /home/hadoop/output
hadoop fs -rmr hdfs://nn.example.com/user/hadoop/dir
Exit Code:
Returns 0 on success and -1 on error.
setrep
Usage: hadoop fs -setrep [-R] [-w] <rep> <path>
Changes the replication factor of a file. The -R option recursively changes the
replication factor of files within a directory; -w waits for the replication to complete.
Example:
hadoop fs -setrep -w 3 -R /user/hadoop/dir1
Exit Code:
Returns 0 on success and -1 on error.
stat
Usage: hadoop fs -stat URI [URI …]
Returns the stat information on the path.
Example:
hadoop fs -stat /home/hadoop/input
Output:
2013-11-30 16:00:27
Exit Code:
Returns 0 on success and -1 on error.
tail
Usage: hadoop fs -tail [-f] URI
Displays last kilobyte of the file to stdout. -f option can be used as in Unix.
Example:
hadoop fs -tail pathname
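A concrete sketch using content.txt from the listing above; -f keeps following the file as it grows, as in Unix:
hadoop fs -tail /home/hadoop/input/content.txt
hadoop fs -tail -f /home/hadoop/input/content.txt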
Exit Code:
Returns 0 on success and -1 on error.
test
Usage: hadoop fs -test -[ezd] URI
Options:
-e check to see if the file exists. Returns 0 if true.
-z check to see if the file is zero length. Returns 0 if true.
-d check to see if the path is a directory. Returns 1 if it is a directory, 0 otherwise.
Examples:
hadoop fs -test -d /home/hadoop/input
hadoop fs -test -e /home/hadoop/input
hadoop fs -test -z /home/hadoop/input
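Because test reports its result only through the exit code, it is usually combined with a shell check; a minimal sketch (the echo message is illustrative):
hadoop fs -test -e /home/hadoop/input && echo "path exists"
hadoop fs -test -e /home/hadoop/input
echo $?   # 0 means the path exists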
text: view file contents, much like cat
Usage: hadoop fs -text <src>
Example:
hadoop fs -text /home/hadoop/input/address
Output:
addressID addressname
1 Beijing
2 Guangzhou
3 Shenzhen
4 Xian
Takes a source file and outputs the file in text format. The allowed formats are
zip and TextRecordInputStream.
touchz: create an empty file
Usage: hadoop fs -touchz URI [URI …]
Create a file of zero length.
Example:
hadoop fs -touchz /home/hadoop/input/lzw
Exit Code:
Returns 0 on success and -1 on error.
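Putting a few of the commands above together, a quick end-to-end sanity check (the /home/hadoop/demo directory and the local file /home/hadoop/lzw are illustrative):
hadoop fs -mkdir /home/hadoop/demo                 # create a directory
hadoop fs -put /home/hadoop/lzw /home/hadoop/demo  # upload a local file
hadoop fs -ls /home/hadoop/demo                    # list the directory
hadoop fs -cat /home/hadoop/demo/lzw               # print the file contents
hadoop fs -rmr /home/hadoop/demo                   # remove the directory recursively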