Adversarial Attacks and Defense Paper Collections (updating)

Papers

🌟 Paper list collections 🌟
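
For orientation before the lists: most gradient-based attacks collected below descend from the fast gradient sign method (FGSM; Goodfellow et al., 2015). The sketch below is purely illustrative (PyTorch; a classifier `model` and inputs in the [0, 1] pixel range are assumed) and is not the implementation of any paper in this list.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step L-inf attack: move x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Take a single signed-gradient step, then clamp back to valid pixels.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```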


AAAI 2022

Attack

  • BSC-Attack: Kai Chen, Zhipeng Wei, Jingjing Chen, Zuxuan Wu, Yu-Gang Jiang.

    "Attacking Video Recognition Models with Bullet-Screen Comments." AAAI(2022)

    [paper]

    [code]

  • FCA: Donghua Wang, Tingsong Jiang, Jialiang Sun, Weien Zhou, Zhiqiang Gong, Xiaoya Zhang, Wen Yao, Xiaoqian Chen.

    "FCA: Learning a 3D Full-Coverage Vehicle Camouflage for Multi-View Physical Adversarial Attack." AAAI(2022)

    [paper]

    [code]

  • TT: Zhipeng Wei, Jingjing Chen, Zuxuan Wu, Yu-Gang Jiang.

    "Boosting the Transferability of Video Adversarial Examples via Temporal Translation." AAAI(2022)

    [paper]

    [code]

  • PNA-PatchOut: Zhipeng Wei, Jingjing Chen, Micah Goldblum, Zuxuan Wu, Tom Goldstein, Yu-Gang Jiang.
    "Towards Transferable Adversarial Attacks on Vision Transformers." AAAI(2022)

    [paper]

    [code]

  • RoHe: Mengmei Zhang, Xiao Wang, Meiqi Zhu, Chuan Shi, Zhiqiang Zhang, Jun Zhou.
    "Robust Heterogeneous Graph Neural Networks against Adversarial Attacks."

    [paper]

    [code]

  • robustgraph: Jiarong Xu, Yang Yang, Junru Chen, Xin Jiang, Chunping Wang, Jiangang Lu, Yizhou Sun.

    "Unsupervised Adversarially Robust Representation Learning on Graphs."

    [paper]

    [code]

  • AT-BMC: Dongfang Li, Baotian Hu, Qingcai Chen, Tujie Xu, Jingcong Tao, Yunan Zhang. "Unifying Model Explainability and Robustness for Joint Text Classification and Rationale Extraction."

    [paper]

    [code]

  • LLTA: Shuman Fang, Jie Li, Xianming Lin, Rongrong Ji.

    "Learning to Learn Transferable Attack."

    [paper]

    [code]

  • Sparse-RS: Francesco Croce, Maksym Andriushchenko, Naman D. Singh, Nicolas Flammarion, Matthias Hein.

    "Sparse-RS: A Versatile Framework for Query-Efficient Sparse Black-Box Adversarial Attacks."

    [paper]

    [code]

  • SPGA: Zhenbo Shi, Zhi Chen, Zhenbo Xu, Wei Yang, Zhidong Yu, Liusheng Huang.

    "Shape Prior Guided Attack: Sparser Perturbations on 3D Point Clouds. "

    [paper]

  • Wooju Lee, Hyun Myung: "Adversarial Attack for Asynchronous Event-Based Data."

    [paper]

  • CLPA: Bingyin Zhao, Yingjie Lao:

    "CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets."

    [paper]

    [code]

  • TextHoaxer: Muchao Ye, Chenglin Miao, Ting Wang, Fenglong Ma:

    "TextHoaxer: Budgeted Hard-Label Adversarial Attacks on Text."

    [paper]

    [code]

  • Rui Ning, Jiang Li, Chunsheng Xin, Hongyi Wu, Chonggang Wang: "Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks."

    [paper]

  • Neil G. Marchant, Benjamin I. P. Rubinstein, Scott Alfeld:

    "Hard to Forget: Poisoning Attacks on Certified Machine Unlearning."

    [paper]

    [code]

  • Zikui Cai, Xinxin Xie, Shasha Li, Mingjun Yin, Chengyu Song, Srikanth V. Krishnamurthy, Amit K. Roy-Chowdhury, M. Salman Asif: Context-Aware Transfer Attacks for Object Detection.

    [paper]

    [code]

  • Xinjian Luo, Xiaokui Xiao, Yuncheng Wu, Juncheng Liu, Beng Chin Ooi:

    A Fusion-Denoising Attack on InstaHide with Data Augmentation.

    [paper]

    [code]

  • Shihong Fang, Anna Choromanska:

    Backdoor Attacks on the DNN Interpretation System.

    [paper]

    [code]

  • Jiarong Xu, Yizhou Sun, Xin Jiang, Yanhao Wang, Chunping Wang, Jiangang Lu, Yang Yang:

    Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs.

    [paper]

    [code]

  • Yibing Du, Antoine Bosselut, Christopher D. Manning:

    Synthetic Disinformation Attacks on Automated Fact Verification Systems.

    [paper]

    [code]

  • Nariki Tanaka, Hiroshi Kera, Kazuhiko Kawamoto:

    Adversarial Bone Length Attack on Action Recognition.

    [paper]

    [code]

  • Kartik Gupta, Thalaiyasingam Ajanthan: Improved Gradient-Based Adversarial Attacks for Quantized Networks.

    [paper]

    [code]

  • Anshuka Rangi, Long Tran-Thanh, Haifeng Xu, Massimo Franceschetti:

    Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification.

    [paper]

  • Yunhe Feng, Chirag Shah: Has CEO Gender Bias Really Been Fixed? Adversarial Attacking and Improving Gender Fairness in Image Search.

    [paper]

    [code]

  • Maosen Li, Yanhua Yang, Kun Wei, Xu Yang, Heng Huang:

    Learning Universal Adversarial Perturbation by Adversarial Example.

    [paper]

    [code]

  • Junhua Zou, Yexin Duan, Boyu Li, Wu Zhang, Yu Pan, Zhisong Pan: Making Adversarial Examples More Transferable and Indistinguishable.

    [paper]

    [code]

  • Sayak Paul, Pin-Yu Chen: Vision Transformers Are Robust Learners.

    [paper]

    [code]

Defense

  • Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong:

    Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks.

    [paper]

  • Seungyong Moon, Gaon An, Hyun Oh Song:

    Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks.

    [paper]

    [code]

  • Mingyu Guo, Jialiang Li, Aneta Neumann, Frank Neumann, Hung Nguyen:

    Practical Fixed-Parameter Algorithms for Defending Active Directory Style Attack Graphs.

    [paper]

  • Thanh Nguyen, Haifeng Xu:

    When Can the Defender Effectively Deceive Attackers in Security Games?

    [paper]

  • Hanjie Chen, Yangfeng Ji: Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation.

    [paper]

    [code]

  • Jihoon Tack, Sihyun Yu, Jongheon Jeong, Minseon Kim, Sung Ju Hwang, Jinwoo Shin:

    Consistency Regularization for Adversarial Robustness.

    [paper]

    [code]

  • Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon:

    Adversarial Robustness in Multi-Task Learning: Promises and Illusions.

    [paper]

    [code]

  • Yuan Yang, James Clayton Kerce, Faramarz Fekri: LOGICDEF: An Interpretable Defense Framework against Adversarial Examples via Inductive Scene Graph Reasoning.

    [paper]

    [code]

  • Jinghui Chen, Yu Cheng, Zhe Gan, Quanquan Gu, Jingjing Liu: Efficient Robust Training via Backward Smoothing.

    [paper]

    [code]

  • Ruoxin Chen, Jie Li, Junchi Yan, Ping Li, Bin Sheng:

    Input-Specific Robustness Certification for Randomized Smoothing.

    [paper]

    [code]

  • Mikhail Pautov, Nurislam Tursynbek, Marina Munkhoeva, Nikita Muravev, Aleksandr Petiushko, Ivan V. Oseledets: CC-CERT: A Probabilistic Approach to Certify General Robustness of Neural Networks.

    [paper]

    [code]
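
Many of the defenses above build on or evaluate against adversarial training. As a reference point, here is a minimal PGD-based training step in the standard Madry et al. (2018) formulation; it is a generic sketch (PyTorch; `model`, `optimizer`, and a [0, 1] pixel range are assumed), not the exact procedure of any listed paper.

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Multi-step L-inf attack used to craft training-time adversarial examples."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Signed-gradient step, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adv_train_step(model, x, y, optimizer):
    """One adversarial-training step: train on PGD examples instead of clean x."""
    x_adv = pgd(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```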

CVPR 2022

Attack

  • Zhanhao Hu, Siyuan Huang, Xiaopei Zhu, Fuchun Sun, Bo Zhang, Xiaolin Hu: Adversarial Texture for Fooling Person Detectors in the Physical World.

    [paper]

    [code]

  • Linjun Zhou, Peng Cui, Xingxuan Zhang, Yinan Jiang, Shiqiang Yang:

    Adversarial Eigen Attack on BlackBox Models.

    [paper]

  • Qiuling Xu, Guanhong Tao, Xiangyu Zhang: Bounded Adversarial Attack on Deep Content Features.

    [paper]

    [code]

  • Aniruddha Saha, Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash:

    Backdoor Attacks on Self-Supervised Learning.

    [paper]

    [code]

  • Binghui Wang, Youqi Li, Pan Zhou: Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees.

    [paper]

    [code]

  • Yan Feng, Baoyuan Wu, Yanbo Fan, Li Liu, Zhifeng Li, Shu-Tao Xia: Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution.

    [paper]

    [code]

  • Zhenting Wang, Juan Zhai, Shiqing Ma:

    BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning.

    [paper]

    [code]

  • Zhipeng Wei, Jingjing Chen, Zuxuan Wu, Yu-Gang Jiang:

    Cross-Modal Transferable Adversarial Attacks from Images to Videos.

    [paper]

  • Ruijun Gao, Qing Guo, Felix Juefei-Xu, Hongkai Yu, Huazhu Fu, Wei Feng, Yang Liu, Song Wang: Can You Spot the Chameleon? Adversarially Camouflaging Images from Co-Salient Object Detection.

    [paper]

    [code]

  • Naufal Suryanto, Yongsu Kim, Hyoeun Kang, Harashta Tatimma Larasati, Youngyeo Yun, Thi-Thu-Huong Le, Hunmin Yang, Se-Yoon Oh, Howon Kim:

    DTA: Physical Camouflage Attacks using Differentiable Transformation Network.

    [paper]

    [code]

  • Wenxuan Wang, Xuelin Qian, Yanwei Fu, Xiangyang Xue: DST: Dynamic Substitute Training for Data-free Black-box Attack.

    [paper]

  • Xiaoqian Xu, Pengxu Wei, Weikai Chen, Yang Liu, Mingzhi Mao, Liang Lin, Guanbin Li: Dual Adversarial Adaptation for Cross-Device Real-World Image Super-Resolution.

    [paper]

    [code]

  • Sivapriya Vellaichamy, Matthew Hull, Zijie J. Wang, Nilaksh Das, Sheng-Yun Peng, Haekyu Park, Duen Horng (Polo) Chau: DetectorDetective: Investigating the Effects of Adversarial Examples on Object Detectors.

    [paper]

    [code]

  • Xuxiang Sun, Gong Cheng, Hongda Li, Lei Pei, Junwei Han:

    Exploring Effective Data for Surrogate Training Towards Black-box Attack.

    [paper]

    [code]

  • Cheng Luo, Qinliang Lin, Weicheng Xie, Bizhu Wu, Jinheng Xie, Linlin Shen: Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity.

    [paper]

    [code]

  • Zhibo Wang, Xiaowei Dong, Henry Xue, Zhifei Zhang, Weifeng Chiu, Tao Wei, Kui Ren:

    Fairness-aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models.

    [paper]

  • Yu Feng, Benteng Ma, Jing Zhang, Shanshan Zhao, Yong Xia, Dacheng Tao: FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis.

    [paper]

    [code]

  • Zirui Peng, Shaofeng Li, Guoxing Chen, Cheng Zhang, Haojin Zhu, Minhui Xue:

    Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations.

    [paper]

  • Giulio Lovisotto, Nicole Finnie, Mauricio Munoz, Chaithanya Kumar Mummadi, Jan Hendrik Metzen: Give Me Your Attention: Dot-Product Attention Considered Harmful for Adversarial Patch Robustness.

    [paper]

  • Junyoung Byun, Seungju Cho, Myung-Joon Kwon, Heeseon Kim, Changick Kim: Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input.

    [paper]

    [code]

  • Qidong Huang, Xiaoyi Dong, Dongdong Chen, Hang Zhou, Weiming Zhang, Nenghai Yu: Shape-invariant 3D Adversarial Point Clouds.

    [paper]

    [code]

  • Zachary Berger, Parth Agrawal, Tian Yu Liu, Stefano Soatto, Alex Wong: Stereoscopic Universal Perturbations across Different Architectures and Datasets.

    [paper]

    [code]

  • Yiqi Zhong, Xianming Liu, Deming Zhai, Junjun Jiang, Xiangyang Ji: Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon.

    [paper]

    [code]

  • Yifeng Xiong, Jiadong Lin, Min Zhang, John E. Hopcroft, Kun He: Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability.

    [paper]

    [code]

  • Shengshan Hu, Xiaogeng Liu, Yechao Zhang, Minghui Li, Leo Yu Zhang, Hai Jin, Libing Wu: Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer.

    [paper]

    [code]

  • Mostafa Kahla, Si Chen, Hoang Anh Just, Ruoxi Jia:

    Label-Only Model Inversion Attacks via Boundary Repulsion.

    [paper]

    [code]

  • Jianping Zhang, Weibin Wu, Jen-tse Huang, Yizhan Huang, Wenxuan Wang, Yuxin Su, Michael R. Lyu:

    Improving Adversarial Transferability via Neuron Attribution-based Attacks.

    [paper]

    [code]

  • Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae-Won Cho, Kang Zhang, In So Kweon:

    Investigating Top-k White-Box and Transferable Black-box Attack.

    [paper]

    [code] - coming soon

  • Byung-Kwan Lee, Junho Kim, Yong Man Ro: Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network.

    [paper]

    [code]

  • Zikui Cai, Shantanu Rane, Alejandro E. Brito, Chengyu Song, Srikanth V. Krishnamurthy, Amit K. Roy-Chowdhury, M. Salman Asif: Zero-Query Transfer Attacks on Context-Aware Object Detectors.

    [paper]

  • Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zhangyang Wang:

    Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free.

    [paper]

    [code]

  • Jie Zhang, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Lei Zhang, Chao Wu: Towards Efficient Data Free Blackbox Adversarial Attack.

    [paper]

  • Transferable Sparse Adversarial Attack

    [paper]

    [code]

  • Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong Yang, Kai Bu:

    Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks.

    [paper]

    [code]

  • Zhendong Zhao, Xiaojun Chen, Yuexin Xuan, Ye Dong, Dakui Wang, Kaitai Liang: DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints.

    [paper]

  • Shuai Jia, Chao Ma, Taiping Yao, Bangjie Yin, Shouhong Ding, Xiaokang Yang: Exploring Frequency Adversarial Attacks for Face Forgery Detection.

    [paper]

  • Yunjian Zhang, Yanwei Liu, Jinxia Liu, Jingbo Miao, Antonios Argyriou, Liming Wang, Zhen Xu: 360-Attack: Distortion-Aware Perturbations from Perspective-Views.

    [paper]

Defense

  • Gaojie Jin, Xinping Yi, Wei Huang, Sven Schewe, Xiaowei Huang:

    Enhancing Adversarial Training with Second-Order Statistics of Weights.

    [paper]

    [code]

  • Mo Zhou, Vishal M. Patel: Enhancing Adversarial Robustness for Deep Metric Learning.

    [paper]

    [code]

  • Ozan Özdenizci, Robert Legenstein: Improving Robustness Against Stealthy Weight Bit-Flip Attacks by Output Code Matching.

    [paper]

    [code]

  • Junhao Dong, Yuan Wang, Jianhuang Lai, Xiaohua Xie: Improving Adversarially Robust Few-shot Image Classification with Generalizable Representations.

    [paper]

  • Tao Li, Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang: Subspace Adversarial Training.

    [paper]

    [code]

  • Jiang Liu, Alexander Levine, Chun Pong Lau, Rama Chellappa, Soheil Feizi: Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection.

    [paper]

    [code]

  • Liang Chen, Yong Zhang, Yibing Song, Lingqiao Liu, Jue Wang: Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection.

    [paper]

    [code]

  • Zhaoyu Chen, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Wenqiang Zhang: Towards Practical Certifiable Patch Defense with Vision Transformer.

    [paper]

  • Ye Liu, Yaya Cheng, Lianli Gao, Xianglong Liu, Qilong Zhang, Jingkuan Song:

    Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack.

    [paper]

    [code]

  • Xiaojun Jia, Yong Zhang, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao:

    LAS-AT: Adversarial Training with Learnable Attack Strategy.

    [paper]

    [code]

  • Kaidong Li, Ziming Zhang, Cuncong Zhong, Guanghui Wang: Robust Structured Declarative Classifiers for 3D Point Clouds: Defending Adversarial Attacks with Implicit Gradients.

    [paper]

    [code]

  • Jingtao Li, Adnan Siraj Rakin, Xing Chen, Zhezhi He, Deliang Fan, Chaitali Chakrabarti:

    ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning.

    [paper]

    [code]

  • Yi Yu, Wenhan Yang, Yap-Peng Tan, Alex C. Kot: Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond.

    [paper]

    [code]

  • Jiakai Wang, Zixin Yin, Pengfei Hu, Aishan Liu, Renshuai Tao, Haotong Qin, Xianglong Liu, Dacheng Tao:

    Defensive Patches for Robust Recognition in the Physical World.

    [paper]

    [code]

  • Theodoros Tsiligkaridis, Jay Roberts: Understanding and Increasing Efficiency of Frank-Wolfe Adversarial Training.

    [paper]

    [code]

  • Qingzhao Zhang, Shengtuo Hu, Jiachen Sun, Qi Alfred Chen, Z. Morley Mao: On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles.

    [paper]

    [code]

  • Prithviraj Dhar, Amit Kumar, Kirsten Kaplan, Khushi Gupta, Rakesh Ranjan, Rama Chellappa: EyePAD++: A Distillation-based approach for joint Eye Authentication and Presentation Attack Detection using Periocular Images.

    [paper]

Others

  • Qibing Ren, Qingquan Bao, Runzhong Wang, Junchi Yan: Appearance and Structure Aware Robust Deep Visual Graph Matching: Attack, Defense and Beyond.

    [paper]

    [code]

  • Tianyu Pang, Huishuai Zhang, Di He, Yinpeng Dong, Hang Su, Wei Chen, Jun Zhu, Tie-Yan Liu: Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart.

    [paper]

    [code]

  • Kwang In Kim: Robust Combination of Distributed Gradients Under Adversarial Perturbations.

    [paper]

  • Yingzhi Tang, Yue Qian, Qijian Zhang, Yiming Zeng, Junhui Hou, Xuefei Zhe:

    WarpingGAN: Warping Multiple Uniform Priors for Adversarial 3D Point Cloud Generation.

    [paper]

    [code]

  • Ganesh Del Grosso, Hamid Jalalzai, Georg Pichler, Catuscia Palamidessi, Pablo Piantanida: Leveraging Adversarial Examples to Quantify Membership Information Leakage.

    [paper]

    [code]

ACM MM 2022

  • Siyuan Liang, Aishan Liu, Jiawei Liang, Longkang Li, Yang Bai, Xiaochun Cao: Imitated Detectors: Stealing Knowledge of Black-box Object Detectors.

    [paper]

    [code]

  • Yuxuan Wang, Jiakai Wang, Zixin Yin, Ruihao Gong, Jingyi Wang, Aishan Liu, Xianglong Liu: Generating Transferable Adversarial Examples against Vision Transformers.

    [paper] not found yet

ECCV 2022

Attack

  • Yuyang Long, Qilong Zhang, Boheng Zeng, Lianli Gao, Xianglong Liu, Jian Zhang, Jingkuan Song: Frequency Domain Model Augmentation for Adversarial Attack.

    [paper]

    [code]

  • Ziyi Dong, Pengxu Wei, Liang Lin: Adversarially-Aware Robust Object Detector.

    [paper]

    [code]

  • Jenny Schmalfuss, Philipp Scholze, Andrés Bruhn: A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow.

    [paper]

    [code]

  • Zhiyuan Cheng, James Liang, Hongjun Choi, Guanhong Tao, Zhiwen Cao, Dongfang Liu, Xiangyu Zhang: Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches.

    [paper]

  • Zhaoyu Chen, Bo Li, Shuang Wu, Jianghe Xu, Shouhong Ding, Wenqiang Zhang:

    Shape Matters: Deformable Patch Attack.

    [paper]

  • Martin Gubri, Maxime Cordy, Mike Papadakis, Yves Le Traon, Koushik Sen: LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity.

    [paper]

    [code]

Defense

Others

ICLR 2022

NeurIPS 2022

Attack

  • Anshuman Chhabra, Ashwin Sekhari, Prasant Mohapatra: On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses.

    [paper]

    [code]

  • Sizhe Chen, Zhehao Huang, Qinghua Tao, Yingwen Wu, Cihang Xie, Xiaolin Huang: Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks.

    [paper]

    [code]

  • Abhishek Aich, Calvin-Khang Ta, Akash Gupta, Chengyu Song, Srikanth V. Krishnamurthy, M. Salman Asif, Amit Roy-Chowdhury: GAMA: Generative Adversarial Multi-Object Scene Attacks.

    [paper]

    [code]

  • Xiangrui Cai, Haidong Xu, Sihan Xu, Ying Zhang, Xiaojie Yuan:

    BadPrompt: Backdoor Attacks on Continuous Prompts.

    [paper]

    [code]

  • Patrick O'Reilly, Andreas Bugler, Keshav Bhandari, Max Morrison, Bryan Pardo: VoiceBlock: Privacy through Real-Time Adversarial Attacks with Audio-to-Audio Models.

    [paper]

    [code]

  • Zihan Liu, Yun Luo, Lirong Wu, Zicheng Liu, Stan Z. Li: Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias.

    [paper]

    [code]

  • Yucheng Shi, Yahong Han, Yu-an Tan, Xiaohui Kuang: Decision-based Black-box Attack Against Vision Transformers via Patch-wise Adversarial Removal.

    [paper]

    [code]

  • Haoyang Li, Shimin Di, Lei Chen: Revisiting Injective Attacks on Recommender Systems.

    [paper]

    [code]

  • Weixia Zhang, Dingquan Li, Xiongkuo Min, Guangtao Zhai, Guodong Guo, Xiaokang Yang, Kede Ma: Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop.

    [paper]

  • Khoa D. Doan, Yingjie Lao, Ping Li: Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class.

    [paper]

    [code]

  • Henger Li, Xiaolin Sun, Zizhan Zheng: Learning to Attack Federated Learning: A Model-based Reinforcement Learning Attack Framework.

    [paper]

    [code]

  • Zeyu Qin, Yanbo Fan, Yi Liu, Li Shen, Yong Zhang, Jue Wang, Baoyuan Wu: Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation.

    [paper]

    [code]

  • Shuai Jia, Bangjie Yin, Taiping Yao, Shouhong Ding, Chunhua Shen, Xiaokang Yang, Chao Ma:

    Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition.

    [paper]

  • Zikui Cai, Chengyu Song, Srikanth Krishnamurthy, Amit Roy-Chowdhury, M. Salman Asif:

    Blackbox Attacks via Surrogate Ensemble Search.

    [paper]

    [code]

  • Shengming Yuan, Qilong Zhang, Lianli Gao, Yaya Cheng, Jingkuan Song:

    Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks.

    [paper]

    [code]

  • Chenghao Sun, Yonggang Zhang, Chaoqun Wan, Qizhou Wang, Ya Li, Tongliang Liu, Bo Han, Xinmei Tian: Towards Lightweight Black-Box Attack Against Deep Neural Networks.

    [paper]

    [code]

  • Fan Liu, Hao Liu, Wenzhao Jiang: Practical Adversarial Attacks on Spatiotemporal Traffic Forecasting Models.

    [paper]

    [code]

  • Shuwen Chai, Jinghui Chen: One-shot Neural Backdoor Erasing via Adversarial Weight Masking.

    [paper]

    [code]

  • Yibo Miao, Yinpeng Dong, Jun Zhu, Xiao-Shan Gao: Isometric 3D Adversarial Examples in the Physical World.

    [paper]
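
A recurring theme in the attack entries above is boosting transferability to unseen models. The common baseline these methods extend is MI-FGSM (Dong et al., 2018); the sketch below is a minimal version (PyTorch; 4-D image batches and a [0, 1] pixel range are assumed), not the implementation of any specific paper listed.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10, mu=1.0):
    """Momentum iterative FGSM: accumulating L1-normalized gradients
    stabilizes the update direction and improves cross-model transfer."""
    x_adv, g = x.clone().detach(), torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Momentum over L1-normalized gradients (Dong et al., 2018).
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```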

Defense

  • Yunrui Yu, Xitong Gao, Cheng-Zhong Xu:

    MORA: Improving Ensemble Robustness Evaluation with Model Reweighing Attack.

    [paper]

    [code] not open source yet

  • Yunjuan Wang, Enayat Ullah, Poorya Mianjy, Raman Arora: Adversarial Robustness is at Odds with Lazy Training.

    [paper]

  • Xiyuan Li, Zou Xin, Weiwei Liu:

    Defending Against Adversarial Attacks via Neural Dynamic System.

    [paper]

  • Zhuoer Xu, Guanghui Zhu, Changhua Meng, Shiwen Cui, Zhenzhe Ying, Weiqiang Wang, Ming Gu, Yihua Huang: A2: Efficient Automated Attacker for Boosting Adversarial Training.

    [paper]

    [code]

  • Ruisi Cai, Zhenyu Zhang, Tianlong Chen, Xiaohan Chen, Zhangyang Wang: Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets.

    [paper]

    [code]

  • Tian Yu Liu, Yu Yang, Baharan Mirzasoleiman:

    Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attack.

    [paper]

    [code]

  • Jianan Zhou, Jianing Zhu, Jingfeng Zhang, Tongliang Liu, Gang Niu, Bo Han, Masashi Sugiyama: Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks.

    [paper]

    [code]

  • Haotao Wang, Junyuan Hong, Aston Zhang, Jiayu Zhou, Zhangyang Wang: Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork.

    [paper]

    [code]

  • Yongyuan Liang, Yanchao Sun, Ruijie Zheng, Furong Huang: Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning.

    [paper]

    [code]

  • Sihui Dai, Saeed Mahloujifar, Prateek Mittal:

    Formulating Robustness Against Unforeseen Attacks.

    [paper]

    [code]

  • Anna Kuzina, Max Welling, Jakub M. Tomczak: Alleviating Adversarial Attacks on Variational Autoencoders with MCMC.

    [paper]

    [code]

  • Daniel M. Ziegler, Seraphina Nix, Lawrence Chan, Tim Bauman, Peter Schmidt-Nielsen, Tao Lin, Adam Scherlis, Noa Nabeshima, Ben Weinstein-Raun, Daniel de Haas, Buck Shlegeris, Nate Thomas: Adversarial training for high-stakes reliability.

    [paper]

  • Yue Xing, Qifan Song, Guang Cheng: Phase Transition from Clean Training to Adversarial Training.

    [paper]

  • Yue Xing, Qifan Song, Guang Cheng:

    Why Do Artificially Generated Data Help Adversarial Robustness.

    [paper]

  • Ling Liang, Kaidi Xu, Xing Hu, Lei Deng, Yuan Xie:

    Toward Robust Spiking Neural Network Against Adversarial Perturbation.

    [paper]

    [code]

  • Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong: MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples.

    [paper]

    [code]

  • Jianhao Ding, Tong Bu, Zhaofei Yu, Tiejun Huang, Jian K. Liu: SNN-RAT: Robustness-enhanced Spiking Neural Network through Regularized Adversarial Training.

    [paper]

    [code]

  • Zonghan Yang, Tianyu Pang, Yang Liu:

    A Closer Look at the Adversarial Robustness of Deep Equilibrium Models.

    [paper]

    [code]

  • Pau de Jorge Aranda, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip H. S. Torr, Grégory Rogez, Puneet K. Dokania: Make Some Noise: Reliable and Efficient Single-Step Adversarial Training.

    [paper]

    [code]

  • Chen Chen, Yuchen Liu, Xingjun Ma, Lingjuan Lyu: CalFAT: Calibrated Federated Adversarial Training with Label Skewness.

    [paper]

    [code]

  • Xiaofeng Mao, Yuefeng Chen, Ranjie Duan, Yao Zhu, Gege Qi, Shaokai Ye, Xiaodan Li, Rong Zhang, Hui Xue:

    Enhance the Visual Representation via Discrete Adversarial Training.

    [paper]

    [code]

  • Mazda Moayeri, Kiarash Banihashem, Soheil Feizi: Explicit Tradeoffs between Adversarial and Natural Distributional Robustness.

    [paper]

  • Chengyu Dong, Liyuan Liu, Jingbo Shang: Label Noise in Adversarial Training: A Novel Perspective to Study Robust Overfitting.

    [paper]

  • Omar Montasser, Steve Hanneke, Nati Srebro: Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization.

    [paper]

  • Avrim Blum, Omar Montasser, Greg Shakhnarovich, Hongyang Zhang:

    Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness.

    [paper]

  • Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Jue Wang, Zhi-Quan Luo:

    Stability Analysis and Generalization Bounds of Adversarial Training.

    [paper]

    [code]

  • Sravanti Addepalli, Samyak Jain, Venkatesh Babu R.: Efficient and Effective Augmentation Strategy for Adversarial Training.

    [paper]

    [code]

  • Minjing Dong, Xinghao Chen, Yunhe Wang, Chang Xu: Random Normalization Aggregation for Adversarial Defense.

    [paper]

    [code]

  • Chih-Hui Ho, Nuno Vasconcelos: DISCO: Adversarial Defense with Local Implicit Functions.

    [paper]

    [code]

  • Sen Cui, Jingfeng Zhang, Jian Liang, Bo Han, Masashi Sugiyama, Changshui Zhang:

    Synergy-of-Experts: Collaborate to Improve Adversarial Robustness.

    [paper]

    [code]

  • Yinpeng Dong, Shouwei Ruan, Hang Su, Caixin Kang, Xingxing Wei, Jun Zhu: ViewFool: Evaluating the Robustness of Visual Recognition to Adversarial Viewpoints.

    [paper]

    [code]

  • Bohang Zhang, Du Jiang, Di He, Liwei Wang:

    Rethinking Lipschitz Neural Networks and Certified Robustness: A Boolean Function Perspective.

    [paper]

    [code]

Others

  • Maura Pintor, Luca Demetrio, Angelo Sotgiu, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli: Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples.

    [paper]

    [code]

  • Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen: Can Adversarial Training Be Manipulated By Non-Robust Features?

    [paper]

    [code]

  • Idan Attias, Steve Hanneke, Yishay Mansour:

    A Characterization of Semi-Supervised Adversarially Robust PAC Learnability.

    [paper]

  • Li-Cheng Lan, Huan Zhang, Ti-Rong Wu, Meng-Yu Tsai, I-Chen Wu, Cho-Jui Hsieh: Are AlphaZero-like Agents Robust to Adversarial Perturbations?

    [paper]

    [code]

  • Joan Puigcerver, Rodolphe Jenatton, Carlos Riquelme, Pranjal Awasthi, Srinadh Bhojanapalli:

    On the Adversarial Robustness of Mixture of Experts.

    [paper]

  • Roland S. Zimmermann, Wieland Brendel, Florian Tramèr, Nicholas Carlini: Increasing Confidence in Adversarial Robustness Evaluations.

    [paper]

    [code]

  • Nikolaos Tsilivis, Julia Kempe:

    What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness?

    [paper]

    [code]

Benchmark

Distinguished Researchers & Teams

Distinguished TODO researchers who have published 3+ papers with a major impact on the field of TODO and who are still active in it. (Names listed in no particular order.)

  • TODO
