Here we release source code and/or data for research projects from David Wagner's research group at UC Berkeley.
- REAP: A Large-Scale Realistic Adversarial Patch Benchmark: https://github.com/wagner-group/reap-benchmark
- Part-Based Models Improve Adversarial Robustness: https://github.com/chawins/adv-part-model
- SLIP: Self-supervision meets Language-Image Pre-training: https://github.com/facebookresearch/SLIP
- Learning Security Classifiers with Verified Global Robustness Properties: https://github.com/surrealyz/verified-global-properties
- Demystifying the Adversarial Robustness of Random Transformation Defenses: https://github.com/wagner-group/demystify-random-transform
- SEAT: Similarity Encoder by Adversarial Training for Detecting Model Extraction Attack Queries: https://github.com/zhanyuanucb/model-extraction-defense (unsupported, research-quality code; see also https://github.com/grasses/SEAT)
- Adversarial Examples for k-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams: https://github.com/wagner-group/geoadex
- Improving the Accuracy-Robustness Trade-Off for Dual-Domain Adversarial Training: https://github.com/wagner-group/dual-domain-at
- Defending Against Patch Adversarial Attacks with Robust Self-Attention: https://github.com/wagner-group/robust-self-attention
- Model-Agnostic Defense for Lane Detection against Adversarial Attack: https://github.com/henryzxu/lane-verification
- Minimum-Norm Adversarial Examples on KNN and KNN-Based Models: https://github.com/chawins/knn-defense