
HDFS File Commands

2016-03-18 10:56
HDFS file commands are modeled on the Linux file-manipulation commands, so anyone comfortable with Linux file commands will find them easy to pick up. Note that Hadoop DFS has no notion of a working directory (no pwd), so every path must be given in full. (This article is based on Hadoop 2.5, CDH 5.2.1.)

List the available commands, show their usage and help text, and point the client at a namenode other than the one set in the configuration files.

hdfs dfs -usage
hadoop dfs -usage ls
hadoop dfs -help
-fs <local|namenode:port>      specify a namenode
hdfs dfs -fs hdfs://test1:9000 -ls /
——————————————————————————–
-df [-h] [path …] :
Shows the capacity, free and used space of the filesystem. If the filesystem has
multiple partitions, and no path to a particular partition is specified, then
the status of the root partitions will be shown.

$ hdfs dfs -df
Filesystem           Size          Used   Available     Use%
hdfs://test1:9000    413544071168  98304  345612906496  0%
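The -h flag prints the same figures in human-readable units; a minimal sketch based on the numbers above (the converted values are approximate):

$ hdfs dfs -df -h
Filesystem          Size   Used  Available  Use%
hdfs://test1:9000  385.1 G  96 K    321.9 G    0%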

——————————————————————————–

-mkdir [-p] path … :
Create a directory in the specified location.
-p  Do not fail if the directory already exists

-rmdir dir … :
Removes the directory entry specified by each directory argument, provided it is
empty.

hdfs dfs -mkdir /tmp
hdfs dfs -mkdir /tmp/txt
hdfs dfs -rmdir /tmp/txt
hdfs dfs -mkdir -p /tmp/txt/hello
——————————————————————————–
-copyFromLocal [-f] [-p] localsrc … dst :
Identical to the -put command.

-copyToLocal [-p] [-ignoreCrc] [-crc] src … localdst :
Identical to the -get command.

-moveFromLocal localsrc … dst :
Same as -put, except that the source is deleted after it's copied.

-put [-f] [-p] localsrc … dst :
Copy files from the local file system into fs. Copying fails if the file already
exists, unless the -f flag is given. Passing -p preserves access and modification
times, ownership and the mode. Passing -f overwrites the destination if it
already exists.

-get [-ignoreCrc] [-crc] src … localdst :
Copy files that match the file pattern src to the local name. src is kept. When
copying multiple files, the destination must be a directory.

-getmerge [-nl] src localdst :
Get all the files in the directories that match the source file pattern and merge
and sort them into one file on the local file system.
-nl  Add a newline character at the end of each file.

-cat src … :
Fetch all files that match the file pattern src and display their content on
stdout.

echo "Hello, Hadoop" > hadoop.txt
echo "Hello, HDFS" > hdfs.txt
dd if=/dev/zero of=/tmp/test.zero bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.93978 s, 1.1 GB/s
hdfs dfs -moveFromLocal /tmp/test.zero /tmp
hdfs dfs -put *.txt /tmp
# wildcards: ? * {} []
hdfs dfs -cat /tmp/*.txt
Hello, Hadoop
Hello, HDFS
hdfs dfs -cat /tmp/h?fs.txt
Hello, HDFS
hdfs dfs -cat /tmp/h{a,d}*.txt
Hello, Hadoop
Hello, HDFS
hdfs dfs -cat /tmp/h[a-d]*.txt
Hello, Hadoop
Hello, HDFS
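The download direction (-get, -copyToLocal) and -getmerge are described above but not demonstrated; here is a minimal sketch that reuses the files just uploaded (the local file names are only illustrative):

hdfs dfs -get /tmp/hadoop.txt ./hadoop.from_hdfs.txt
hdfs dfs -copyToLocal /tmp/hdfs.txt ./hdfs.from_hdfs.txt
# quoted so the glob is expanded against HDFS, not by the local shell
hdfs dfs -getmerge '/tmp/h*.txt' merged.txt
cat merged.txt
Hello, Hadoop
Hello, HDFS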

——————————————————————————–
-ls [-d] [-h] [-R] [path …] :
List the contents that match the specified file pattern. If path is not
specified, the contents of /user/currentUser will be listed. Directory entries
are of the form:
permissions - userId groupId sizeOfDirectory(in bytes) modificationDate(yyyy-MM-dd HH:mm) directoryName
and file entries are of the form:
permissions numberOfReplicas userId groupId sizeOfFile(in bytes) modificationDate(yyyy-MM-dd HH:mm) fileName
-d  Directories are listed as plain files.
-h  Formats the sizes of files in a human-readable fashion rather than a number of bytes.
-R  Recursively list the contents of directories.

hdfs dfs -ls /tmp
hdfs dfs -ls -d /tmp
hdfs dfs -ls -h /tmp
Found 4 items
-rw-r--r--   3 hdfs supergroup     14 2014-12-18 10:00 /tmp/hadoop.txt
-rw-r--r--   3 hdfs supergroup     12 2014-12-18 10:00 /tmp/hdfs.txt
-rw-r--r--   3 hdfs supergroup    1 G 2014-12-18 10:19 /tmp/test.zero
drwxr-xr-x   - hdfs supergroup      0 2014-12-18 10:07 /tmp/txt
hdfs dfs -ls -R -h /tmp
-rw-r--r--   3 hdfs supergroup     14 2014-12-18 10:00 /tmp/hadoop.txt
-rw-r--r--   3 hdfs supergroup     12 2014-12-18 10:00 /tmp/hdfs.txt
-rw-r--r--   3 hdfs supergroup    1 G 2014-12-18 10:19 /tmp/test.zero
drwxr-xr-x   - hdfs supergroup      0 2014-12-18 10:07 /tmp/txt
drwxr-xr-x   - hdfs supergroup      0 2014-12-18 10:07 /tmp/txt/hello
——————————————————————————–
-checksum src … :
Dump checksum information for files that match the file pattern src to stdout.
Note that this requires a round-trip to a datanode storing each block of the
file, and thus is not efficient to run on a large number of files. The checksum
of a file depends on its content, block size and the checksum algorithm and
parameters used for creating the file.

hdfs dfs -checksum /tmp/test.zero
/tmp/test.zero  MD5-of-262144MD5-of-512CRC32C  000002000000000000040000f960570129a4ef3a7e179073adceae97

——————————————————————————–
-appendToFile localsrc … dst :
Appends the contents of all the given local files to the given dst file. The dst
file will be created if it does not exist. If localSrc is -, then the input is
read from stdin.

hdfs dfs -appendToFile *.txt hello.txt
hdfs dfs -cat hello.txt
Hello, Hadoop
Hello, HDFS
——————————————————————————–
-tail [-f] file :
Show the last 1KB of the file.

hdfs dfs -tail -f hello.txt
# waiting for output; then Ctrl + C
# in another terminal
hdfs dfs -appendToFile - hello.txt
# then type something

——————————————————————————–
-cp [-f] [-p | -p[topax]] src … dst :
Copy files that match the file pattern src to a destination. When copying
multiple files, the destination must be a directory. Passing -p preserves status
[topax] (timestamps, ownership, permission, ACLs, XAttr). If -p is specified with
no arg, then preserves timestamps, ownership, permission. If -pa is specified,
then it also preserves permission because ACL is a super-set of permission.
Passing -f overwrites the destination if it already exists. Raw namespace
extended attributes are preserved if (1) they are supported (HDFS only) and (2)
all of the source and target pathnames are in the /.reserved/raw hierarchy. Raw
namespace xattr preservation is determined solely by the presence (or absence) of
the /.reserved/raw prefix and not by the -p option.

-mv src … dst :
Move files that match the specified file pattern src to a destination dst. When
moving multiple files, the destination must be a directory.

-rm [-f] [-r|-R] [-skipTrash] src … :
Delete all files that match the specified file pattern. Equivalent to the Unix
command "rm src".
-skipTrash  bypasses trash, if enabled, and immediately deletes src
-f          If the file does not exist, do not display a diagnostic message or
            modify the exit status to reflect an error.
-[rR]       Recursively deletes directories.

-stat [format] path … :
Print statistics about the file/directory at path in the specified format. Format
accepts filesize in blocks (%b), group name of owner (%g), filename (%n), block
size (%o), replication (%r), user name of owner (%u), modification date (%y, %Y).

hdfs dfs -stat /tmp/hadoop.txt
2014-12-18 02:00:08
hdfs dfs -cp -p -f /tmp/hadoop.txt /tmp/hadoop.txt.bak
hdfs dfs -stat /tmp/hadoop.txt.bak
hdfs dfs -rm /tmp/not_exists
rm: `/tmp/not_exists': No such file or directory
echo $?
1
hdfs dfs -rm -f /tmp/123321123123123
echo $?
0
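-mv is described above but not demonstrated; a minimal sketch that renames the backup copy created in the previous example (the target name is only illustrative):

hdfs dfs -mv /tmp/hadoop.txt.bak /tmp/hadoop.txt.old
hdfs dfs -ls /tmp/hadoop.txt.old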
——————————————————————————–
-count [-q] path … :
Count the number of directories, files and bytes under the paths that match the
specified file pattern. The output columns are:
DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME
or, with -q:
QUOTA REMAINING_QUOTA SPACE_QUOTA REMAINING_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME

-du [-s] [-h] path … :
Show the amount of space, in bytes, used by the files that match the specified
file pattern. The following flags are optional:
-s  Rather than showing the size of each individual file that matches the
    pattern, shows the total (summary) size.
-h  Formats the sizes of files in a human-readable fashion rather than a number
    of bytes.
Note that, even without the -s option, this only shows size summaries one level
deep into a directory. The output is in the form: size name(full path)

hdfs dfs -count /tmp
           3            3         1073741850 /tmp
hdfs dfs -du /tmp
14          /tmp/hadoop.txt
12          /tmp/hdfs.txt
1073741824  /tmp/test.zero
0           /tmp/txt
hdfs dfs -du -s /tmp
1073741850  /tmp
hdfs dfs -du -s -h /tmp
1.0 G  /tmp
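With -q, the four quota columns are prepended; on a path with no quotas set they show up as none/inf (a sketch; the quota columns are illustrative, the counts match the example above):

hdfs dfs -count -q /tmp
        none             inf            none             inf            3            3         1073741850 /tmp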

——————————————————————————–
-chgrp [-R] GROUP PATH… :
This is equivalent to -chown … :GROUP …

-chmod [-R] MODE[,MODE]… | OCTALMODE PATH… :
Changes permissions of a file. This works similar to the shell's chmod command
with a few exceptions.
-R  modifies the files recursively. This is the only option currently supported.
MODE  Mode is the same as mode used for the shell's command. The only letters
      recognized are 'rwxXt', e.g. +t,a+r,g-w,+rwx,o=r.
OCTALMODE  Mode specified in 3 or 4 digits. If 4 digits, the first may be 1 or 0
      to turn the sticky bit on or off, respectively. Unlike the shell command,
      it is not possible to specify only part of the mode, e.g. 754 is same as
      u=rwx,g=rx,o=r. If none of 'augo' is specified, 'a' is assumed and unlike
      the shell command, no umask is applied.

-chown [-R] [OWNER][:[GROUP]] PATH… :
Changes owner and group of a file. This is similar to the shell's chown command
with a few exceptions.
-R  modifies the files recursively. This is the only option currently supported.
If only the owner or group is specified, then only the owner or group is
modified. The owner and group names may only consist of digits, alphabet, and any
of [-_./@a-zA-Z0-9]. The names are case sensitive.
WARNING: Avoid using '.' to separate user name and group though Linux allows it.
If user names have dots in them and you are using the local file system, you
might see surprising results since the shell command 'chown' is used for local
files.

-touchz path … :
Creates a file of zero length at path with the current time as the timestamp of
that path. An error is returned if the file exists with non-zero length.

hdfs dfs -mkdir -p /user/spark/tmp
hdfs dfs -chown -R spark:hadoop /user/spark
hdfs dfs -chmod -R 775 /user/spark/tmp
hdfs dfs -ls -d /user/spark/tmp
drwxrwxr-x   - spark hadoop          0 2014-12-18 14:51 /user/spark/tmp
hdfs dfs -chmod +t /user/spark/tmp
# as user spark
hdfs dfs -touchz /user/spark/tmp/own_by_spark
# as user hadoop
useradd -g hadoop hadoop
su - hadoop
id
uid=502(hadoop) gid=492(hadoop) groups=492(hadoop)
hdfs dfs -rm /user/spark/tmp/own_by_spark
rm: Permission denied by sticky bit setting: user=hadoop, inode=own_by_spark
# a superuser (a member of dfs.permissions.superusergroup = hdfs) can ignore the sticky bit setting
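-chgrp itself is not shown above; a minimal sketch that changes only the group of the same directory (supergroup is used purely as an illustration):

hdfs dfs -chgrp -R supergroup /user/spark/tmp
# equivalent to: hdfs dfs -chown -R :supergroup /user/spark/tmp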
——————————————————————————–
-test -[defsz] path :
Answer various questions about path, with the result reported via exit status.
-d  return 0 if path is a directory.
-e  return 0 if path exists.
-f  return 0 if path is a file.
-s  return 0 if file path is greater than zero bytes in size.
-z  return 0 if file path is zero bytes in size, else return 1.

hdfs dfs -test -d /tmp
echo $?
0
hdfs dfs -test -f /tmp/txt
echo $?
1
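The remaining flags work the same way; a short sketch assuming the files from the earlier examples are still in place:

hdfs dfs -test -e /tmp/not_exists
echo $?
1
hdfs dfs -test -s /tmp/test.zero
echo $?
0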

——————————————————————————–
-setrep [-R] [-w] rep path … :
Set the replication level of a file. If path is a directory then the command
recursively changes the replication factor of all files under the directory tree
rooted at path.
-w  Requests that the command wait for the replication to complete. This can
    potentially take a very long time.

hdfs fsck /tmp/test.zero -blocks -locations
Average block replication:     3.0
hdfs dfs -setrep -w 4 /tmp/test.zero
Replication 4 set: /tmp/test.zero
Waiting for /tmp/test.zero .... done
hdfs fsck /tmp/test.zero -blocks
Average block replication:     4.0
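The new replication factor also shows up in the second column of -ls (a sketch; size and timestamp are taken from the earlier listing):

hdfs dfs -ls /tmp/test.zero
-rw-r--r--   4 hdfs supergroup 1073741824 2014-12-18 10:19 /tmp/test.zero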

Posted in BigData, Hadoop, Ops.