
Paper Reading Notes: Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey


Key Points

Universal adversarial perturbation [1]

3D physical-world adversarial examples

Adversarial Example Generation

Box-constrained L-BFGS

$$\min_{\rho} \; c\,\|\rho\| + J(\theta, I_c + \rho, \ell) \quad \text{s.t.} \quad I_c + \rho \in [0, 1]^m$$
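A minimal sketch of this box-constrained optimization using SciPy's L-BFGS-B solver. Names such as `model`, `loss_fn`, and the penalty weight `c` are illustrative, not from the paper, and a squared L2 penalty stands in for $\|\rho\|$ so the gradient is smooth at $\rho = 0$:

```python
import numpy as np
import torch
from scipy.optimize import minimize

def lbfgs_attack(model, loss_fn, x, target, c=0.1):
    # Minimize c*||rho||^2 + J(theta, x + rho, target) with L-BFGS-B;
    # the per-pixel bounds implement the box constraint x + rho in [0,1]^m.
    # x: one batched input tensor, target: the desired (wrong) label.
    x_flat = x.detach().numpy().astype(np.float64).ravel()

    def fun(rho_flat):
        rho = torch.tensor(rho_flat.reshape(tuple(x.shape)), dtype=x.dtype,
                           requires_grad=True)
        obj = c * rho.pow(2).sum() + loss_fn(model(x + rho), target)
        obj.backward()
        return obj.item(), rho.grad.numpy().astype(np.float64).ravel()

    bounds = [(-p, 1.0 - p) for p in x_flat]
    res = minimize(fun, np.zeros_like(x_flat), jac=True,
                   method='L-BFGS-B', bounds=bounds)
    return x + torch.tensor(res.x.reshape(tuple(x.shape)), dtype=x.dtype)
```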

Fast Gradient Sign Method (FGSM)

$$\rho = \epsilon \, \mathrm{sign}\big(\nabla J(\theta, I_c, \ell)\big)$$

Because FGSM generates the perturbation in a single step (one-shot), $\epsilon$ cannot be too small, or the step may fail to push the input across the decision boundary.
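A minimal PyTorch sketch of this one-shot step; `model`, `loss_fn`, and the [0, 1] pixel range are assumptions for illustration:

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    # One-shot FGSM: rho = eps * sign(grad_x J(theta, x, y)).
    # Pixel values are assumed normalized to [0, 1].
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    rho = eps * x.grad.sign()
    return (x + rho).clamp(0.0, 1.0).detach()
```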

Basic Iterative Method (BIM)

$$I_{k+1} = \mathrm{Clip}_{\epsilon}\big\{\, I_k + \alpha \, \mathrm{sign}\big(\nabla J(\theta, I_k, \ell)\big) \big\}, \qquad I_0 = I_c$$
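The same step applied iteratively; a sketch under the same assumptions as the FGSM one above, where the clipping keeps $I_k$ inside the $\epsilon$-ball around the clean image and in the valid pixel range:

```python
import torch

def bim(model, loss_fn, x, y, eps, alpha, steps):
    # Iterate small FGSM steps; Clip_eps keeps the iterate inside the
    # eps-ball around the clean image x and in [0, 1].
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```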

Iterative Least-likely Class Method (ILCM)

Take the class with the lowest predicted probability as the target class and run BIM as a targeted attack, stepping against the gradient so that the loss toward that least-likely class decreases; a sketch follows.
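A sketch of this targeted variant; it differs from the BIM sketch above only in the choice of target label and the sign of the update:

```python
import torch

def ilcm(model, loss_fn, x, eps, alpha, steps):
    # Target the least-likely class, then descend (note the minus sign)
    # the gradient of the loss toward that target.
    with torch.no_grad():
        y_target = model(x).argmin(dim=1)   # least-likely class
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss_fn(model(x_adv), y_target).backward()
        with torch.no_grad():
            x_adv = x_adv - alpha * x_adv.grad.sign()
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```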

Jacobian-based Saliency Map Attack (JSMA)

One Pixel Attack

Carlini and Wagner Attacks (C&W)

DeepFool

Universal Adversarial Perturbations

An input-agnostic perturbation: a single perturbation, computed once, that is effective against almost all inputs.
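A rough sketch of the accumulation loop from the UAP algorithm [1]. The paper's inner DeepFool solver is replaced here by a crude one-step stand-in (`single_step`), and the function names and the $\ell_\infty$ projection are illustrative choices, not the paper's exact procedure:

```python
import torch

def universal_perturbation(model, images, eps, target_rate=0.8, epochs=5):
    # Accumulate per-image perturbations into one image-agnostic v,
    # projecting v onto the eps-ball after each update.
    v = torch.zeros_like(images[0])
    for _ in range(epochs):
        for x in images:
            if predict(model, x + v) == predict(model, x):  # not fooled yet
                v = (v + single_step(model, x + v)).clamp(-eps, eps)
        fooled = sum(predict(model, x + v) != predict(model, x)
                     for x in images)
        if fooled / len(images) >= target_rate:
            break
    return v

def predict(model, x):
    return model(x.unsqueeze(0)).argmax(dim=1).item()

def single_step(model, x, step=0.01):
    # Stand-in for the paper's DeepFool inner solver: one gradient-sign
    # step away from the currently predicted class.
    x = x.clone().detach().requires_grad_(True)
    out = model(x.unsqueeze(0))
    torch.nn.functional.cross_entropy(out, out.argmax(dim=1)).backward()
    return step * x.grad.sign()
```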

UPSET (Universal Perturbations for Steering to Exact Targets)

ANGRI (Antagonistic Network for Generating Rogue Images)

Adversarial Training
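The survey treats adversarial training as a defense: augment the training objective with adversarial examples so the model learns to resist them. A minimal sketch of one training step using the mixed loss of Goodfellow et al., $\alpha\, J(I_c, \ell) + (1-\alpha)\, J(I_{adv}, \ell)$, with FGSM-generated examples and illustrative hyperparameter values:

```python
import torch

def adversarial_training_step(model, loss_fn, optimizer, x, y,
                              eps=0.03, alpha=0.5):
    # Mixed objective: alpha * J(x, y) + (1 - alpha) * J(x_adv, y),
    # with x_adv generated by one-shot FGSM against the current model.
    x_req = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_req), y).backward()
    x_adv = (x_req + eps * x_req.grad.sign()).clamp(0.0, 1.0).detach()

    optimizer.zero_grad()  # clear grads left over from the FGSM pass
    loss = alpha * loss_fn(model(x), y) \
        + (1 - alpha) * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```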

References

[1] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, "Universal Adversarial Perturbations," in Proc. IEEE CVPR, 2017.