Machine Learning & Security Seminar

Welcome to the Machine Learning & Security Seminar, organized by Samuel Conte Professor Xiangyu Zhang in the Department of Computer Science at Purdue University.

## Spring 2023

This semester the seminar is co-organized by Kaiyuan Zhang.

| Date | Discussion Leader | Title | Source |
| --- | --- | --- | --- |
| 01/12 | Guanhong Tao | The "Beatrix" Resurrections: Robust Backdoor Detection via Gram Matrices | NDSS 2023 |
| 01/20 | Xuan Chen | Provable Defense against Backdoor Policies in Reinforcement Learning | NeurIPS 2022 |
| 01/27 | Zian Su | Chain-of-Thought Reasoning & TypeT5: Seq2seq Type Inference using Static Analysis | ICLR 2023 |
| 02/03 | Canceled | | |
| 02/10 | Kaiyuan Zhang | FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information | S&P 2023 |
| 02/24 | Canceled | | |
| 03/03 | Lu Yan | A Study of the Attention Abnormality in Trojaned BERTs | NAACL 2023 |
| 03/10 | Yunshu Mao | PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning | USENIX 2022 |
| 03/17 | Siyuan Cheng | Backdoor defense | |
| 03/24 | Canceled | | |
| 03/31 | Guangyu Shen | Single Image Backdoor Inversion via Robust Smoothed Classifiers | CVPR 2023 |
| 04/07 | Canceled | | |
| 04/14 | Qiuling Xu | RecSys 2020 Tutorial on Conversational Recommendation Systems | RecSys 2020 |
| 04/21 | Canceled | | |
| 04/28 | Shengwei An | Consistency Models | arXiv 2023 |
| 05/05 | Canceled | Backdoor Attacks and Defenses in Machine Learning (BANDS) Workshop Day! | ICLR 2023 |

## Fall 2022

This semester the seminar is co-organized by Kaiyuan Zhang.

| Date | Discussion Leader | Title | Source |
| --- | --- | --- | --- |
| 08/26 | Shengwei An | Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks | USENIX 2022 |
| 09/02 | Yingqi Liu | Not All Poisons are Created Equal: Robust Training against Data Poisoning | ICML 2022 |
| 09/09 | Canceled | | |
| 09/16 | Canceled | | |
| 09/23 | Kaiyuan Zhang | Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage | CVPR 2022 |
| 09/30 | Zian Su | Repository-Level Prompt Generation for Large Language Models of Code | arXiv 2022 |
| 10/07 | Xuan Chen | Robust Reinforcement Learning with Alternating Training of Learned Adversaries | ICLR 2021 |
| 10/14 | Canceled | | |
| 10/21 | Lu Yan | Data Isotopes for Data Provenance in DNNs | arXiv 2022 |
| 10/28 | Siyuan Cheng | Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations? | ICLR 2022 |
| 11/04 | Guangyu Shen | Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free | CVPR 2022 |
| 11/11 | Canceled | | |
| 11/18 | Qiuling Xu | Denoising Diffusion Implicit Models | ICLR 2021 |
| 11/25 | Canceled | Happy Thanksgiving | |
| 12/02 | Linyi Li (UIUC) | Certifiable Deep Learning at Scale towards Trustworthy Machine Learning | |
| 12/09 | Shengwei An | Membership Inference Attacks From First Principles | S&P 2022 |
| 12/16 | Canceled | Final Week | |
| 12/23 | Canceled | Happy holidays! | |

## Summer 2022

This semester the seminar is co-organized by Kaiyuan Zhang.

| Date | Discussion Leader | Title | Source |
| --- | --- | --- | --- |
| 05/20 | Yingqi Liu | Practice talk | |
| | Guanhong Tao | Practice talk | |
| 05/27 | Siyuan Cheng | Class-Disentanglement and Applications in Adversarial Detection and Defense | NeurIPS 2021 |
| 06/03 | Canceled | | |
| 06/10 | Guanhong Tao | Backdoor defense | |
| 06/17 | Canceled | | |
| 06/24 | Kaiyuan Zhang | CAFE: Catastrophic Data Leakage in Vertical Federated Learning | NeurIPS 2021 |
| 07/01 | Zian Su | Black-Box Tuning for Language-Model-as-a-Service | ICML 2022 |
| 07/08 | Xuan Chen | Robust Deep Reinforcement Learning through Adversarial Loss | NeurIPS 2021 |
| 07/15 | Lu Yan | Fight Poison with Poison: Detecting Backdoor Poison Samples via Decoupling Benign Correlations | arXiv 2022 |
| | | Circumventing Backdoor Defenses That Are Based on Latent Separability | arXiv 2022 |
| 07/22 | Guanhong Tao | Excess Capacity and Backdoor Poisoning | NeurIPS 2021 |
| 07/29 | Canceled | | |
| 08/05 | Guangyu Shen | SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics | ICML 2021 |
| 08/12 | Siyuan Cheng | FLAME: Taming Backdoors in Federated Learning | USENIX 2022 |
| 08/19 | Canceled | | |

## Spring 2022

This semester the seminar is co-organized by Kaiyuan Zhang.

| Date | Discussion Leader | Title | Source |
| --- | --- | --- | --- |
| 01/07 | Zian Su | Masked Autoencoders Are Scalable Vision Learners | arXiv 2021 |
| 01/14 | Canceled | | |
| 01/21 | Shiwei Feng | Adversarial Neuron Pruning Purifies Backdoored Deep Models | NeurIPS 2021 |
| 01/28 | Canceled | | |
| 02/04 | Shengwei An | Get a Model! Model Hijacking Attack Against Machine Learning Models | NDSS 2022 |
| 02/11 | Siyuan Cheng | Anti-Backdoor Learning: Training Clean Models on Poisoned Data | NeurIPS 2021 |
| 02/18 | Guanhong Tao | Traceback of Data Poisoning Attacks in Neural Networks | arXiv 2021 |
| 02/25 | Canceled | | |
| 03/04 | Kaiyuan Zhang | Label Inference Attacks Against Vertical Federated Learning | USENIX 2022 |
| 03/11 | Yingqi Liu | Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning | S&P 2022 |
| 03/18 | Xuan Chen | Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL | ICLR 2022 |
| 03/25 | Canceled | | |
| 04/01 | Canceled | | |
| 04/08 | Guangyu Shen | Backdoor defense | |
| 04/15 | Qiuling Xu | Few-shot learning in adversarial learning | |
| 04/22 | Shengwei An | Practice talk: MIRROR: Model Inversion for Deep Learning Network with High Fidelity | NDSS 2022 |
| | | Model inversion defense | |
| 04/29 | Canceled | | |
| 05/06 | Canceled | | |
| 05/13 | Canceled | | |

## Fall 2021

This semester the seminar is co-organized by Kaiyuan Zhang.

| Date | Discussion Leader | Title | Source |
| --- | --- | --- | --- |
| 08/27 | Zhiyuan Cheng | Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack | USENIX 2021 |
| 09/03 | Canceled | | |
| 09/10 | Guangyu Shen | Adversarial Attacks are Reversible with Natural Supervision | ICCV 2021 |
| | | Poisoning and Backdooring Contrastive Learning | arXiv 2021 |
| 09/17 | Xuan Chen | Adversarial Policies: Attacking Deep Reinforcement Learning | ICLR 2020 |
| | | BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning | IJCAI 2021 |
| 09/24 | Guanhong Tao | BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | S&P 2022 |
| 10/01 | Qiuling Xu | Stronger and Faster Wasserstein Adversarial Attacks | ICML 2020 |
| 10/08 | Canceled | | |
| 10/15 | Zian Su | Prefix-Tuning: Optimizing Continuous Prompts for Generation | ACL 2021 |
| 10/22 | Siyuan Cheng | Backdoor detection | |
| 10/29 | Canceled | | |
| 11/05 | Guanhong Tao | Backdoor defense | |
| 11/12 | Canceled | | |
| 11/19 | Shengwei An | Neural code editing | |
| 11/26 | Canceled | | |
| 12/03 | Kaiyuan Zhang | Model-Contrastive Federated Learning | CVPR 2021 |
| 12/10 | Yingqi Liu | Hidden Backdoors in Human-Centric Language Models | CCS 2021 |
| | | Backdoor Pre-trained Models Can Transfer to All | CCS 2021 |
| 12/17 | Guangyu Shen | Rethinking Stealthiness of Backdoor Attack against NLP Models | ACL 2021 |

## Summer 2021

This semester the seminar is co-organized by Kaiyuan Zhang.

| Date | Discussion Leader | Title | Source |
| --- | --- | --- | --- |
| 05/13 | Guanhong Tao | What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors | arXiv 2021 |
| | | PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking | arXiv 2021 |
| 05/21 | Yingqi Liu | Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks | arXiv 2021 |
| | | Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks | ICLR 2021 |
| 05/28 | Canceled | | |
| 06/04 | Guangyu Shen | T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification | USENIX 2021 |
| 06/11 | Kaiyuan Zhang | Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning | NDSS 2021 |
| 06/18 | Shengwei An | Detecting and Simulating Artifacts in GAN Fake Images | WIFS 2019 |
| | | CNN-generated images are surprisingly easy to spot... for now | CVPR 2020 |
| 06/25 | Yingqi Liu | Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger | ACL 2021 |
| | | Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution | arXiv 2021 |
| 07/02 | Guanhong Tao | Backdoor learning | |
| 07/09 | Guangyu Shen | See through Gradients: Image Batch Recovery via GradInversion | CVPR 2021 |
| 07/16 | Canceled | | |
| 07/23 | Kaiyuan Zhang | How To Backdoor Federated Learning | PMLR 2020 |
| | | Attack of the Tails: Yes, You Really Can Backdoor Federated Learning | NeurIPS 2020 |
| 07/30 | Shengwei An | Improving the Efficiency and Robustness of Deepfakes Detection through Precise Geometric Features | CVPR 2021 |
| 08/06 | Yingqi Liu | Detecting AI Trojans Using Meta Neural Analysis | S&P 2021 |
| 08/13 | Guanhong Tao | Double-Cross Attacks: Subverting Active Learning Systems | USENIX 2021 |