
Effectiveness of Attack in carla_obj_det_dpatch_undefended.json #1215

Closed · mzweilin opened this issue Nov 22, 2021 · 6 comments

mzweilin commented Nov 22, 2021

Dear Armory Team,

I ran the scenario with a 1000-iteration attack, but the adversarial mAP is very close to the benign mAP (0.4367 vs. 0.4467). Should we expect the adversarial mAP to be much lower than that, given that the target model is undefended?

BTW, the rectangular patch from #1212 is used.

$ jq ".attack.kwargs.max_iter = 1000 | .scenario.export_samples = 165" scenario_configs/carla_obj_det_dpatch_undefended.json | armory run --no-docker --use-gpu -
Evaluation: 100%|█| 165/165 [12:04:47<00:00, 263.56s/it]
{
    "armory_version": "0.14.0",
    "config": {
        "_description": "XView object detection, contributed by MITRE Corporation",
        "adhoc": null,
        "attack": {
            "knowledge": "white",
            "kwargs": {
                "batch_size": 1,
                "max_iter": 1000,
                "verbose": true
            },
            "module": "armory.art_experimental.attacks.carla_obj_det_patch",
            "name": "CARLADapricotPatch",
            "use_label": false
        },
        "dataset": {
            "batch_size": 1,
            "eval_split": "dev",
            "framework": "numpy",
            "modality": "rgb",
            "module": "armory.data.adversarial_datasets",
            "name": "carla_obj_det_dev"
        },
        "defense": null,
        "eval_id": "2021-11-20T190637.414125",
        "metric": {
            "means": true,
            "perturbation": "l0",
            "record_metric_per_sample": false,
            "task": [
                "carla_od_AP_per_class",
                "carla_od_disappearance_rate",
                "carla_od_hallucinations_per_image",
                "carla_od_misclassification_rate",
                "carla_od_true_positive_rate"
            ]
        },
        "model": {
            "fit": false,
            "fit_kwargs": {},
            "model_kwargs": {
                "num_classes": 4
            },
            "module": "armory.baseline_models.pytorch.carla_single_modality_object_detection_frcnn",
            "name": "get_art_model",
            "weights_file": "carla_rgb_weights.pt",
            "wrapper_kwargs": {}
        },
        "scenario": {
            "export_samples": 165,
            "kwargs": {
                "check_run": false,
                "mongo_host": null,
                "num_eval_batches": null,
                "skip_attack": false,
                "skip_benign": false,
                "skip_misclassified": false
            },
            "module": "armory.scenarios.carla_object_detection",
            "name": "CarlaObjectDetectionTask"
        },
        "sysconfig": {
            "docker_image": "twosixarmory/pytorch:0.14.0",
            "external_github_repo": "colour-science/colour",
            "filepath": "scenario_configs/carla_obj_det_dpatch_undefended.json",
            "gpus": "all",
            "log_level": 20,
            "no_docker": true,
            "output_dir": null,
            "output_filename": null,
            "use_gpu": true
        }
    },
    "results": {
        "adversarial_carla_od_AP_per_class": {
            "1": 0.51,
            "2": 0.41,
            "3": 0.39
        },
        "adversarial_mean_carla_od_AP_per_class": 0.4366666666666667,
        "adversarial_mean_carla_od_disappearance_rate": 0.4326285872475713,
        "adversarial_mean_carla_od_hallucinations_per_image": 1.3696969696969696,
        "adversarial_mean_carla_od_misclassification_rate": 0.03465652192924921,
        "adversarial_mean_carla_od_true_positive_rate": 0.5327148908231798,
        "benign_carla_od_AP_per_class": {
            "1": 0.51,
            "2": 0.4,
            "3": 0.43
        },
        "benign_mean_carla_od_AP_per_class": 0.4466666666666667,
        "benign_mean_carla_od_disappearance_rate": 0.4371740417930258,
        "benign_mean_carla_od_hallucinations_per_image": 0.8121212121212121,
        "benign_mean_carla_od_misclassification_rate": 0.03541409768682497,
        "benign_mean_carla_od_true_positive_rate": 0.5274118605201494,
        "perturbation_mean_l0": 0.015039225589225588
    },
    "timestamp": 1637478694
}

[Image: 0_benign.png]

[Image: 0_adversarial.png]

mzweilin commented Nov 22, 2021

@dxoigmn pointed out that the saturated colors in the example patch imply that the attack's default learning rate is too large.

After lowering the learning rate from 5 to 5/255, the 10-iteration attack is more successful: mAP drops from 44.67% to 37.67%. We also share the 1000-iteration results in the next comment.

$ jq ".attack.kwargs.learning_rate = 5/255 | .scenario.export_samples = 165" scenario_configs/carla_obj_det_dpatch_undefended.json | armory run --no-docker --use-gpu -
Evaluation: 100%|█| 165/165 [09:50<00:00,  3.58s/it]

2021-11-22 09:30:30 spr-gpu04 armory.utils.metrics[94929] INFO carla_od_AP_per_class on benign test examples relative to ground truth labels: {1: 0.51000000000000001, 2: 0.40000000000000002, 3: 0.42999999999999999}
2021-11-22 09:30:30 spr-gpu04 armory.utils.metrics[94929] INFO mean carla_od_AP_per_class on benign examples relative to ground truth labels 44.67%.
2021-11-22 09:30:30 spr-gpu04 armory.utils.metrics[94929] INFO Average carla_od_disappearance_rate on benign test examples relative to ground truth labels: 43.72%
2021-11-22 09:30:30 spr-gpu04 armory.utils.metrics[94929] INFO Average carla_od_hallucinations_per_image on benign test examples relative to ground truth labels: 0.81
2021-11-22 09:30:30 spr-gpu04 armory.utils.metrics[94929] INFO Average carla_od_misclassification_rate on benign test examples relative to ground truth labels: 3.54%
2021-11-22 09:30:30 spr-gpu04 armory.utils.metrics[94929] INFO Average carla_od_true_positive_rate on benign test examples relative to ground truth labels: 52.74%
2021-11-22 09:30:30 spr-gpu04 armory.utils.metrics[94929] INFO carla_od_AP_per_class on adversarial test examples relative to ground truth labels: {1: 0.46000000000000002, 2: 0.38, 3: 0.28999999999999998}
2021-11-22 09:30:30 spr-gpu04 armory.utils.metrics[94929] INFO mean carla_od_AP_per_class on adversarial examples relative to ground truth labels 37.67%.
2021-11-22 09:30:30 spr-gpu04 armory.utils.metrics[94929] INFO Average carla_od_disappearance_rate on adversarial test examples relative to ground truth labels: 43.32%
2021-11-22 09:30:30 spr-gpu04 armory.utils.metrics[94929] INFO Average carla_od_hallucinations_per_image on adversarial test examples relative to ground truth labels: 2.4
2021-11-22 09:30:30 spr-gpu04 armory.utils.metrics[94929] INFO Average carla_od_misclassification_rate on adversarial test examples relative to ground truth labels: 3.67%
2021-11-22 09:30:30 spr-gpu04 armory.utils.metrics[94929] INFO Average carla_od_true_positive_rate on adversarial test examples relative to ground truth labels: 53.02%
2021-11-22 09:30:31 spr-gpu04 armory.scenarios.scenario[94929] INFO Saving evaluation results to path
/home/weilinxu/.armory/outputs/2021-11-22T172027.771428/CarlaObjectDetectionTask_1637602231.json

[Image: 0_adversarial_1000iters_5 — jq ".attack.kwargs.max_iter = 1000" scenario_configs/carla_obj_det_dpatch_undefended.json]

[Image: 0_adversarial_10iters_5_255 — jq ".attack.kwargs.learning_rate = 5/255" scenario_configs/carla_obj_det_dpatch_undefended.json]

[Image: 0_adversarial_1000iters_5_255 — jq ".attack.kwargs.max_iter = 1000 | .attack.kwargs.learning_rate = 5/255" scenario_configs/carla_obj_det_dpatch_undefended.json]


mzweilin commented Nov 23, 2021

Attack parameters

  • learning_rate = 5/255
  • max_iter = 1000

Result

mAP 44.67% -> 15.67%

How to reproduce

$ jq ".attack.kwargs.max_iter = 1000 | .attack.kwargs.learning_rate = 5/255 | .scenario.export_samples = 165" scenario_configs/carla_obj_det_dpatch_undefended.json
RobustDPatch iteration: 100%|█| 1000/1000 [05:34<00:00,  2.99it/s]
Evaluation: 100%|███████████| 165/165 [12:06:11<00:00, 264.07s/it]
2021-11-22 21:26:06 spr-gpu04 armory.utils.metrics[94429] INFO carla_od_AP_per_class on benign test examples relative to ground truth labels: {1: 0.51000000000000001, 2: 0.40000000000000002, 3: 0.42999999999999999}
2021-11-22 21:26:06 spr-gpu04 armory.utils.metrics[94429] INFO mean carla_od_AP_per_class on benign examples relative to ground truth labels 44.67%.
2021-11-22 21:26:06 spr-gpu04 armory.utils.metrics[94429] INFO Average carla_od_disappearance_rate on benign test examples relative to ground truth labels: 43.72%
2021-11-22 21:26:06 spr-gpu04 armory.utils.metrics[94429] INFO Average carla_od_hallucinations_per_image on benign test examples relative to ground truth labels: 0.81
2021-11-22 21:26:06 spr-gpu04 armory.utils.metrics[94429] INFO Average carla_od_misclassification_rate on benign test examples relative to ground truth labels: 3.54%
2021-11-22 21:26:06 spr-gpu04 armory.utils.metrics[94429] INFO Average carla_od_true_positive_rate on benign test examples relative to ground truth labels: 52.74%
2021-11-22 21:26:06 spr-gpu04 armory.utils.metrics[94429] INFO carla_od_AP_per_class on adversarial test examples relative to ground truth labels: {1: 0.25, 2: 0.20000000000000001, 3: 0.02}
2021-11-22 21:26:06 spr-gpu04 armory.utils.metrics[94429] INFO mean carla_od_AP_per_class on adversarial examples relative to ground truth labels 15.67%.
2021-11-22 21:26:06 spr-gpu04 armory.utils.metrics[94429] INFO Average carla_od_disappearance_rate on adversarial test examples relative to ground truth labels: 44.98%
2021-11-22 21:26:06 spr-gpu04 armory.utils.metrics[94429] INFO Average carla_od_hallucinations_per_image on adversarial test examples relative to ground truth labels: 1e+01
2021-11-22 21:26:06 spr-gpu04 armory.utils.metrics[94429] INFO Average carla_od_misclassification_rate on adversarial test examples relative to ground truth labels: 3.36%
2021-11-22 21:26:06 spr-gpu04 armory.utils.metrics[94429] INFO Average carla_od_true_positive_rate on adversarial test examples relative to ground truth labels: 51.66%
2021-11-22 21:26:06 spr-gpu04 armory.scenarios.scenario[94429] INFO Saving evaluation results to path
/home/weilinxu/.armory/outputs/2021-11-22T171945.081514/CarlaObjectDetectionTask_1637645166.json
CarlaObjectDetectionTask_1637645166.json:
{
    "armory_version": "0.14.0",
    "config": {
        "_description": "XView object detection, contributed by MITRE Corporation",
        "adhoc": null,
        "attack": {
            "knowledge": "white",
            "kwargs": {
                "batch_size": 1,
                "learning_rate": 0.0196078431372549,
                "max_iter": 1000,
                "verbose": true
            },
            "module": "armory.art_experimental.attacks.carla_obj_det_patch",
            "name": "CARLADapricotPatch",
            "use_label": false
        },
        "dataset": {
            "batch_size": 1,
            "eval_split": "dev",
            "framework": "numpy",
            "modality": "rgb",
            "module": "armory.data.adversarial_datasets",
            "name": "carla_obj_det_dev"
        },
        "defense": null,
        "eval_id": "2021-11-22T171945.081514",
        "metric": {
            "means": true,
            "perturbation": "l0",
            "record_metric_per_sample": false,
            "task": [
                "carla_od_AP_per_class",
                "carla_od_disappearance_rate",
                "carla_od_hallucinations_per_image",
                "carla_od_misclassification_rate",
                "carla_od_true_positive_rate"
            ]
        },
        "model": {
            "fit": false,
            "fit_kwargs": {},
            "model_kwargs": {
                "num_classes": 4
            },
            "module": "armory.baseline_models.pytorch.carla_single_modality_object_detection_frcnn",
            "name": "get_art_model",
            "weights_file": "carla_rgb_weights.pt",
            "wrapper_kwargs": {}
        },
        "scenario": {
            "export_samples": 165,
            "kwargs": {
                "check_run": false,
                "mongo_host": null,
                "num_eval_batches": null,
                "skip_attack": false,
                "skip_benign": false,
                "skip_misclassified": false
            },
            "module": "armory.scenarios.carla_object_detection",
            "name": "CarlaObjectDetectionTask"
        },
        "sysconfig": {
            "docker_image": "twosixarmory/pytorch:0.14.0",
            "external_github_repo": "colour-science/colour",
            "filepath": "-",
            "gpus": "all",
            "log_level": 20,
            "no_docker": true,
            "output_dir": null,
            "output_filename": null,
            "use_gpu": true
        }
    },
    "results": {
        "adversarial_carla_od_AP_per_class": {
            "1": 0.25,
            "2": 0.2,
            "3": 0.02
        },
        "adversarial_mean_carla_od_AP_per_class": 0.15666666666666668,
        "adversarial_mean_carla_od_disappearance_rate": 0.44975220437118824,
        "adversarial_mean_carla_od_hallucinations_per_image": 10.096969696969698,
        "adversarial_mean_carla_od_misclassification_rate": 0.0336464209191482,
        "adversarial_mean_carla_od_true_positive_rate": 0.5166013747096637,
        "benign_carla_od_AP_per_class": {
            "1": 0.51,
            "2": 0.4,
            "3": 0.43
        },
        "benign_mean_carla_od_AP_per_class": 0.4466666666666667,
        "benign_mean_carla_od_disappearance_rate": 0.4371740417930258,
        "benign_mean_carla_od_hallucinations_per_image": 0.8121212121212121,
        "benign_mean_carla_od_misclassification_rate": 0.03541409768682497,
        "benign_mean_carla_od_true_positive_rate": 0.5274118605201494,
        "perturbation_mean_l0": 0.015076060606060607
    },
    "timestamp": 1637645166
}

mzweilin commented:

Based on these experimental results, I would suggest adding attack.kwargs.learning_rate = 5/255 to scenario_configs/carla_obj_det_dpatch_undefended.json to better demonstrate the attack.
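
For concreteness, here is a sketch of what the attack section might look like with that change applied. The learning_rate value is what jq writes for 5/255, as recorded in the run above; max_iter is shown at the value used in these experiments, not necessarily the shipped default:

    "attack": {
        "knowledge": "white",
        "kwargs": {
            "batch_size": 1,
            "learning_rate": 0.0196078431372549,
            "max_iter": 1000,
            "verbose": true
        },
        "module": "armory.art_experimental.attacks.carla_obj_det_patch",
        "name": "CARLADapricotPatch",
        "use_label": false
    }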


lcadalzo commented Dec 2, 2021

Thanks for sharing these results, @mzweilin! At the time of the most recent Armory release, we hadn't had time to properly tune the attack parameters, and there was also the issue that #1212 addresses. For visibility's sake, I'll also tag @yusong-tan, who has been tuning attacks. We will ensure that the default configs are updated accordingly as part of the next Armory release preceding Eval 4.

yusong-tan commented:

I think the default learning_rate of 5 was an artifact of the original DPatch implementation, which assumed input in the range [0, 255]. Since Armory now normalizes all input to [0, 1], the learning rate needs to be normalized as well. I found that a learning rate on the order of 0.01 works well.
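
To make the scaling concrete: a step size tuned for [0, 255] inputs should be divided by 255 to have the same relative effect on [0, 1] inputs, which is where the 5/255 above comes from. jq evaluates that arithmetic before writing it into the config, and the result matches the learning_rate recorded in the output above:

$ jq -n "5/255"
0.0196078431372549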

lcadalzo commented:

Closing due to #1230
