
cp (copy)

2015-07-12 11:47

1 cp: copy files

NAME

cp - copy files and directories

SYNOPSIS

cp [OPTION]... [-T] SOURCE DEST -- copy SOURCE to DEST

cp [OPTION]... SOURCE... DIRECTORY

cp [OPTION]... -t DIRECTORY SOURCE...

DESCRIPTION

Copy SOURCE to DEST, or multiple SOURCE(s) to DIRECTORY. -- copy one file (or several files) to the target location

Mandatory arguments to long options are mandatory for short options too.

-a, --archive

same as -dR --preserve=all

--backup[=CONTROL]

make a backup of each existing destination file

-b like --backup but does not accept an argument

--copy-contents

copy contents of special files when recursive

-d same as --no-dereference --preserve=links

-f, --force (force the copy)

if an existing destination file cannot be opened, remove it and try again (redundant if the -n option is used)

If an existing destination file cannot be opened, remove it and try the copy again.
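A minimal sketch of -f in action (file names are hypothetical; assumes a non-root user, since root can open the destination regardless of its permissions):

```shell
# Demonstrate -f: an unopenable destination is removed and the copy retried.
cd "$(mktemp -d)"
echo "new" > src.txt
echo "old" > dest.txt
chmod 000 dest.txt        # destination can no longer be opened for writing
cp -f src.txt dest.txt    # -f removes dest.txt and tries again
cat dest.txt              # now contains "new"
```

Without -f, the `cp` above would fail with a permission error instead of replacing the file.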

-i, --interactive (interactive copy)

prompt before overwrite (overrides a previous -n option) -- if the destination file already exists, ask before overwriting it

-H follow command-line symbolic links in SOURCE

-n, --no-clobber

do not overwrite an existing file (overrides a previous -i option)
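A quick illustration of -n (hypothetical file names):

```shell
# Demonstrate -n / --no-clobber: an existing destination is left untouched.
cd "$(mktemp -d)"
echo "old" > dest.txt
echo "new" > src.txt
cp -n src.txt dest.txt    # dest.txt already exists, so nothing is copied
cat dest.txt              # still contains "old"
```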

-P, --no-dereference (do not dereference: if the source is a symbolic link, copy the link itself rather than the file it points to)

never follow symbolic links in SOURCE

Test:

[hadoop3@hadoop3 ~]$ ln -s file01.dat file01-slink.dat ## create symbolic link file01-slink.dat

[hadoop3@hadoop3 ~]$ ll

total 4

-rw-rw-r--. 1 hadoop3 hadoop3 171 Jul 12 10:51 file01.dat

lrwxrwxrwx. 1 hadoop3 hadoop3 10 Jul 12 10:53 file01-slink.dat -> file01.dat

[hadoop3@hadoop3 ~]$ ln file01.dat file01-hlink.dat ## create hard link file01-hlink.dat

[hadoop3@hadoop3 ~]$ ll

total 8

-rw-rw-r--. 2 hadoop3 hadoop3 171 Jul 12 10:51 file01.dat

-rw-rw-r--. 2 hadoop3 hadoop3 171 Jul 12 10:51 file01-hlink.dat

lrwxrwxrwx. 1 hadoop3 hadoop3 10 Jul 12 10:53 file01-slink.dat -> file01.dat

[hadoop3@hadoop3 ~]$ cp -P file01-slink.dat backup/ ## copy without dereferencing

[hadoop3@hadoop3 ~]$ ls -al backup/

total 8

drwxrwxr-x. 2 hadoop3 hadoop3 4096 Jul 12 10:56 .

drwx------. 3 hadoop3 hadoop3 4096 Jul 12 10:55 ..

lrwxrwxrwx. 1 hadoop3 hadoop3 10 Jul 12 10:56 file01-slink.dat -> file01.dat ## backup/ contains no file01.dat, so the cat below fails

[hadoop3@hadoop3 backup]$ cat file01-slink.dat

cat: file01-slink.dat: No such file or directory

[hadoop3@hadoop3 ~]$ cp file01.dat backup/ ## manually copy the link's target into backup/; now cat can read the symlink's contents

[hadoop3@hadoop3 ~]$ cd backup/

[hadoop3@hadoop3 backup]$ ls -ln

total 4

-rw-rw-r--. 1 500 500 171 Jul 12 11:05 file01.dat

lrwxrwxrwx. 1 500 500 10 Jul 12 10:56 file01-slink.dat -> file01.dat

[hadoop3@hadoop3 backup]$ cat file01.dat

file01

-l, --link

link files instead of copying

sdfdsf

dgddsgd

gdfgdfhhgfhhfghfghfgjfjhgjhg

1111111111111111111111111111111111111111111111111

-L, --dereference

always follow symbolic links in SOURCE

[hadoop3@hadoop3 ~]$ cp -L file01-slink.dat backup/ ## copy the symlink file01-slink.dat into backup/; the resulting file is 171 bytes

[hadoop3@hadoop3 ~]$ ls -ln backup/ ## cp -L dereferences the symlink: the copy contains the target file's contents and is a regular file

total 4

-rw-rw-r--. 1 500 500 171 Jul 12 11:08 file01-slink.dat

[hadoop3@hadoop3 ~]$ cat backup/file01-slink.dat

file01

-l, --link

link files instead of copying

sdfdsf

dgddsgd

gdfgdfhhgfhhfghfghfgjfjhgjhg

1111111111111111111111111111111111111111111111111

-p same as --preserve=mode,ownership,timestamps

--preserve[=ATTR_LIST] (copy the file's attributes along with it, instead of using the defaults; commonly used for backups)

preserve the specified attributes (default: mode,ownership,timestamps), if possible additional attributes: context, links, xattr, all

That is: keep the file's mode, owner, and timestamps; further attributes (context, links, xattr) are kept when possible.
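A small sketch of -p (hypothetical file names; GNU coreutils `stat` and `touch -t` assumed):

```shell
# Demonstrate -p: the copy keeps the source's modification time.
cd "$(mktemp -d)"
echo "data" > src.txt
touch -t 202001011200 src.txt     # set a known mtime on the source
cp -p src.txt copy.txt            # preserve mode, ownership, timestamps
stat -c '%Y' src.txt copy.txt     # both lines print the same epoch time
```

Without -p, copy.txt would get the current time as its modification time.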

-c same as --preserve=context

--no-preserve=ATTR_LIST

don't preserve the specified attributes

--parents

use full source file name under DIRECTORY
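A short sketch of --parents (hypothetical paths):

```shell
# Demonstrate --parents: the source's directory path is recreated under DEST.
cd "$(mktemp -d)"
mkdir -p src/a/b dest
echo "hi" > src/a/b/file.txt
cp --parents src/a/b/file.txt dest   # creates dest/src/a/b/file.txt
find dest -type f                    # prints dest/src/a/b/file.txt
```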

-R, -r, --recursive

copy directories recursively

--reflink[=WHEN]

control clone/CoW copies. See below.

--remove-destination

remove each existing destination file before attempting to open it (contrast with --force)
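The difference from -f shows up when the destination is a symbolic link (hypothetical file names):

```shell
# Demonstrate --remove-destination: the destination symlink itself is removed,
# so the file it points to is NOT overwritten (plain cp would follow the link).
cd "$(mktemp -d)"
echo "target" > real.txt
ln -s real.txt link.txt
echo "new" > src.txt
cp --remove-destination src.txt link.txt
cat real.txt              # still "target": the symlink was replaced instead
cat link.txt              # "new": link.txt is now a regular file
```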

--sparse=WHEN

control creation of sparse files. See below.

--strip-trailing-slashes

remove any trailing slashes from each SOURCE argument

-l, --link (hard link)

link files instead of copying

-s, --symbolic-link (symbolic link)

make symbolic links instead of copying

-S, --suffix=SUFFIX

override the usual backup suffix (note: this sets the backup suffix, not a prefix)

-t, --target-directory=DIRECTORY (the destination is a directory)

copy all SOURCE arguments into DIRECTORY
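A minimal sketch of -t (hypothetical file names):

```shell
# Demonstrate -t: name the target directory first, then list the sources.
cd "$(mktemp -d)"
mkdir dest
touch a.txt b.txt
cp -t dest a.txt b.txt    # equivalent to: cp a.txt b.txt dest
ls dest                   # a.txt  b.txt
```

The -t form is convenient with xargs, where the sources arrive at the end of the command line.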

-T, --no-target-directory (treat the destination as a file, not a directory)

treat DEST as a normal file
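A sketch of how -T changes the behavior when DEST is an existing directory (hypothetical names):

```shell
# Demonstrate -T: DEST is treated as a file, so cp refuses to copy a file
# "into" an existing directory under that name.
cd "$(mktemp -d)"
mkdir backup
echo "hi" > src.txt
cp src.txt backup                  # without -T: copies to backup/src.txt
cp -T src.txt backup 2>/dev/null || echo "refused: backup is a directory"
```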

-u, --update

copy only when the SOURCE file is newer than the destination file or when the destination file is missing

Update copy: copy only when the source file is newer than the destination, or when the destination is missing.
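A small sketch of both -u cases (hypothetical file names; GNU `touch -t` assumed):

```shell
# Demonstrate -u: a newer destination is not overwritten; a missing one is copied.
cd "$(mktemp -d)"
echo "source" > src.txt
echo "dest" > dest.txt
touch -t 202001011200 src.txt   # make the source older than the destination
cp -u src.txt dest.txt          # destination is newer: copy is skipped
cat dest.txt                    # still "dest"
cp -u src.txt new.txt           # destination missing: copy happens
cat new.txt                     # "source"
```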

-v, --verbose

explain what is being done

-x, --one-file-system

stay on this file system

-Z, --context=CONTEXT

set security context of copy to CONTEXT

--help display this help and exit

--version

output version information and exit

By default, sparse SOURCE files are detected by a crude heuristic and the corresponding DEST file is made sparse as well.

That is the behavior selected by --sparse=auto.

Specify --sparse=always to create a sparse DEST file whenever the SOURCE file contains a long enough sequence of zero bytes.

Use --sparse=never to inhibit creation of sparse files.
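A sketch of sparse copying (hypothetical file names; GNU `truncate` and `stat` assumed; actual disk savings depend on the filesystem):

```shell
# Demonstrate --sparse=always: a file of zeros keeps its apparent size but
# can occupy almost no disk blocks.
cd "$(mktemp -d)"
truncate -s 1M zeros.dat              # 1 MiB sparse file of zero bytes
cp --sparse=always zeros.dat copy.dat
stat -c '%s' copy.dat                 # apparent size: 1048576 bytes
du -k copy.dat                        # allocated size, often 0 on ext4/xfs
```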

When --reflink[=always] is specified, perform a lightweight copy, where the data blocks are copied only when modified.

If this is not possible the copy fails, or if --reflink=auto is specified, cp falls back to a standard copy.

The backup suffix is '~', unless set with --suffix or SIMPLE_BACKUP_SUFFIX.

The version control method may be selected via the --backup option or through the VERSION_CONTROL environment variable.

Here are the values:

none, off

never make backups, even if --backup is given -- with VERSION_CONTROL=none, no backup is made even when --backup is specified

numbered, t

make numbered backups

existing, nil

numbered if numbered backups exist, simple otherwise

simple, never

always make simple backups
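A minimal sketch of numbered backups (hypothetical file names):

```shell
# Demonstrate --backup=numbered: the old destination is kept as f.txt.~1~.
cd "$(mktemp -d)"
echo "v1" > f.txt
echo "v2" > src.txt
cp --backup=numbered src.txt f.txt
cat f.txt                 # "v2"
cat 'f.txt.~1~'           # "v1": the previous version was saved
```

Setting VERSION_CONTROL=numbered in the environment with a bare --backup selects the same method.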

As a special case, cp makes a backup of SOURCE when the force and backup options are given and SOURCE and DEST are the same name for an existing, regular file.

cp usage:

1 Copy several files at once

Format: cp FILE1 FILE2 DEST_DIR

[hadoop3@hadoop3 ~]$ ls -ln

total 16

-rw-rw-r--. 1 500 500 176 Jul 12 11:15 a_1.txt

-rw-rw-r--. 1 500 500 137 Jul 12 11:16 a_2.txt

-rw-rw-r--. 1 500 500 106 Jul 12 11:16 a_3.txt

drwxrwxr-x. 2 500 500 4096 Jul 12 11:16 files

[hadoop3@hadoop3 ~]$ cp a_1.txt a_2.txt files/

[hadoop3@hadoop3 ~]$ ls -ln files/

total 8

-rw-rw-r--. 1 500 500 176 Jul 12 11:17 a_1.txt

-rw-rw-r--. 1 500 500 137 Jul 12 11:17 a_2.txt

2 Copy a directory

[hadoop3@hadoop3 ~]$ cp -r files files_backup

[hadoop3@hadoop3 ~]$ tree

.
├── a_1.txt
├── a_2.txt
├── a_3.txt
├── files
│   ├── a_1.txt
│   └── a_2.txt
└── files_backup
    ├── a_1.txt
    └── a_2.txt

3 Hard-link copy, rather than a plain file copy (the resulting file and the source stay in sync)

[hadoop3@hadoop3 ~]$ mkdir links

[hadoop3@hadoop3 ~]$ cp -l a_2.txt links

[hadoop3@hadoop3 ~]$ ls -ln links

total 4

-rw-rw-r--. 2 500 500 137 Jul 12 11:16 a_2.txt

[hadoop3@hadoop3 ~]$ echo ' adddd' >> a_2.txt ## append characters to the source file

[hadoop3@hadoop3 ~]$ cat a_2.txt

22222222222222222222222

2222222222222222222222222222222

222222222222222222222222222222222222

2222222222222222222222222222222222222222222

adddd

[hadoop3@hadoop3 ~]$ cat links/a_2.txt ## the hard-linked file's contents updated in sync

22222222222222222222222

2222222222222222222222222222222

222222222222222222222222222222222222

2222222222222222222222222222222222222222222

adddd

4 Soft-link (symbolic link) copy

[hadoop3@hadoop3 ~]$ cp -s a_3.txt links/

cp: `links/a_3.txt': can make relative symbolic links only in current directory

## relative symbolic links can only be created in the current directory; use an absolute source path instead

[hadoop3@hadoop3 ~]$ cp -s /home/hadoop3/a_3.txt links

[hadoop3@hadoop3 ~]$ ls -ln links

total 4

-rw-rw-r--. 2 500 500 145 Jul 12 11:28 a_2.txt

lrwxrwxrwx. 1 500 500 21 Jul 12 11:34 a_3.txt -> /home/hadoop3/a_3.txt
