Clean label backdoor attack scenario #949
Conversation
import numpy as np


def poison_dataset(src_imgs, src_lbls, src, tgt, ds_size, attack, poisoned_indices):
    # In this example, all images of "src" class have a trigger
    # added and are re-labeled as "tgt" class
    poison_x = []
    poison_y = []
    for idx in range(ds_size):
        if src_lbls[idx] == src and idx in poisoned_indices:
            # Apply the backdoor trigger to this source-class sample
            src_img = src_imgs[idx]
            p_img, p_label = attack.poison(src_img, [tgt])
            poison_x.append(p_img)
            # attack.poison returns a length-one batch of labels; take the single
            # entry so poison_y stays shape-consistent with the clean labels
            poison_y.append(p_label[0])
        else:
            # Clean samples are passed through unchanged
            poison_x.append(src_imgs[idx])
            poison_y.append(src_lbls[idx])
    poison_x, poison_y = np.array(poison_x), np.array(poison_y)

    return poison_x, poison_y
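For reference, a minimal sketch of how this wrapper might be invoked with ART's PoisoningAttackBackdoor; the placeholder arrays, class ids, and pixel-pattern trigger below are illustrative assumptions, not part of the scenario code.

import numpy as np
from art.attacks.poisoning import PoisoningAttackBackdoor
from art.attacks.poisoning.perturbations import add_pattern_bd

# Placeholder data standing in for whatever dataset the scenario loads
x_test = np.random.rand(100, 28, 28)             # hypothetical grayscale images
y_test = np.random.randint(0, 10, size=100)      # hypothetical integer labels

src_class, tgt_class = 0, 1                      # illustrative source/target classes
backdoor = PoisoningAttackBackdoor(add_pattern_bd)      # pixel-pattern trigger
poisoned_indices = np.where(y_test == src_class)[0]     # trigger every source-class sample

poison_x_test, poison_y_test = poison_dataset(
    x_test, y_test, src_class, tgt_class, len(y_test), backdoor, poisoned_indices
)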
Take note that for this attack, only images in the target class are poisoned. The attack.poison
method takes the entire dataset, selects which samples within it to poison, and returns the newly poisoned data points.
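For illustration, roughly how that looks with ART's PoisoningAttackCleanLabelBackdoor; the proxy classifier, one-hot target encoding, and hyperparameters below are assumptions for the sketch, not taken from the scenario.

import numpy as np
from art.attacks.poisoning import PoisoningAttackBackdoor, PoisoningAttackCleanLabelBackdoor
from art.attacks.poisoning.perturbations import add_pattern_bd

# Assumptions: x_train/y_train are the full training set with one-hot labels, and
# proxy_classifier is an ART classifier trained on clean data (both hypothetical names)
backdoor = PoisoningAttackBackdoor(add_pattern_bd)
target = np.zeros(10)
target[1] = 1                                    # illustrative target class, one-hot encoded

clean_label_attack = PoisoningAttackCleanLabelBackdoor(
    backdoor=backdoor,
    proxy_classifier=proxy_classifier,
    target=target,
    pp_poison=0.33,                              # fraction of target-class samples to poison
    norm=2,
    eps=5,
    eps_step=0.1,
    max_iter=100,
)

# The attack is handed the entire training set, itself selects target-class samples,
# perturbs them and adds the trigger, and returns the new data with labels unchanged
poisoned_x_train, poisoned_y_train = clean_label_attack.poison(x_train, y_train)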
Understood. The poison_dataset wrapper is only used with the backdoor object of type PoisoningAttackBackdoor, not the PoisoningAttackCleanLabelBackdoor object.
I.e. it is used at eval time, not to poison the training data.
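In that eval-time role, the wrapper's output can be scored roughly as below; the classifier and the clean integer labels are assumed inputs for this sketch.

import numpy as np

def targeted_success_rate(classifier, poison_x, clean_y, src_class, tgt_class):
    # Fraction of triggered source-class samples the model now predicts as the target class
    preds = np.argmax(classifier.predict(poison_x), axis=1)
    src_mask = np.asarray(clean_y) == src_class
    return float(np.mean(preds[src_mask] == tgt_class))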
LGTM
Fixes #950
Clean label backdoor attack scenario
Separate issue:
- Hyperparameter tuning so that the targeted attack success rate is higher.