WIP: eval-update smoke test #1104

Closed
wants to merge 21 commits into from
Conversation

mwartell (Collaborator)
Contains fixes for typo-level errors in davidslater/eval-update (b9ae00c) found through scenario testing.

Incorporates davidslater/eval-update since that is the branch base.

@mwartell self-assigned this Jun 10, 2021
@@ -26,8 +26,15 @@ tensorflow-gpu==1.15.0

### Begin Pytorch required libraries
pytorch==1.7.0
-torchvision==0.8.0
+torchvision==0.7.0
Contributor
I happened to glance at this PR and was curious: why the change to the torchvision version? For reference, the corresponding PyTorch/torchvision versions are listed in this table.

Collaborator Author
I was curious, too; thanks for the reference document. There was a pip version error when running --no-docker, which may not be the proper consumer of host-requirements.txt. I am reverting that version regression and will look more closely at what the issue was.
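A quick sanity check of the installed pairing can help when debugging this kind of pip version error outside Docker. The snippet below is a generic sketch, not part of this PR; per the compatibility table referenced above, torch 1.7.x pairs with torchvision 0.8.x.

# Hedged sketch: verify the installed torch/torchvision pairing.
# Local build suffixes such as "+cu101" are stripped before comparison.
import torch
import torchvision

torch_minor = ".".join(torch.__version__.split("+")[0].split(".")[:2])
tv_minor = ".".join(torchvision.__version__.split("+")[0].split(".")[:2])

# Mapping taken from the compatibility table; extend as needed.
expected = {"1.7": "0.8"}
if expected.get(torch_minor) == tv_minor:
    print(f"OK: torch {torch.__version__} / torchvision {torchvision.__version__}")
else:
    print(f"Mismatch: torch {torch_minor}.x expects torchvision {expected.get(torch_minor)}.x, found {tv_minor}.x")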

Contributor
I believe the packages added in 4e21bcd are only needed for the ASR scenario. Perhaps we should add a comment making this clear

Contributor
Probably should, but let's keep that out of this PR.
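If such a comment is added in a follow-up, it could reuse the section markers already in host-requirements.txt. The lines below are a hedged sketch only; the actual package names from 4e21bcd are intentionally not listed here.

### Begin ASR-only libraries (needed only for the ASR scenario; added in 4e21bcd)
# <ASR-specific packages would be listed here>
### End ASR-only libraries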

@davidslater (Contributor)
@mwartell can you update for black and flake8?

logger.info(f"Applying internal {defense_type} defense to estimator")
estimator = config_loading.load_defense_internal(defense_config, estimator)

if model_config["fit"]:
Contributor
For the sake of modularity, would it make sense to have a separate method encapsulating model training? _load_defense() isn't the first place I'd expect training to live.

Contributor
Yes, I agree.
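A minimal sketch of what the suggested split might look like; the helper names and signatures below are hypothetical and assume the module-level logger and config_loading import already used by the scenario code.

# Hedged sketch: keep defense loading and model training in separate helpers,
# so _load_defense() only wraps the estimator and training lives elsewhere.
def _load_defense(self, estimator, defense_config, defense_type):
    logger.info(f"Applying internal {defense_type} defense to estimator")
    return config_loading.load_defense_internal(defense_config, estimator)

def _train_model(self, estimator, model_config, train_dataset):
    # Hypothetical helper: train only when the model config requests it.
    if model_config["fit"]:
        logger.info("Fitting estimator on training data")
        estimator.fit_generator(train_dataset, **model_config.get("fit_kwargs", {}))
    return estimator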

@lcadalzo (Contributor)
Since self.x, self.y, self.y_pred, etc. are overwritten on each iteration, would it be useful to track this information with attributes that are appended to on each iteration through the dataset? On one hand, it may be useful to have those artifacts accessible during interactive sessions; on the other, it may become a bit unwieldy.

@davidslater (Contributor)
I would like to avoid that sort of append operation, which would quickly exhaust memory for large datasets. These attributes are there mainly for easy interactive use with a single sample.
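One way to illustrate the tradeoff in this thread, as a hedged sketch rather than code from this PR: a bounded history keeps recent batches available for interactive inspection without unbounded appends.

from collections import deque

# Hedged sketch: cap the per-iteration history so memory stays bounded while
# the most recent x, y, and y_pred batches remain inspectable interactively.
class EvaluationHistory:
    def __init__(self, maxlen=10):
        self.x_history = deque(maxlen=maxlen)
        self.y_history = deque(maxlen=maxlen)
        self.y_pred_history = deque(maxlen=maxlen)

    def record(self, x, y, y_pred):
        self.x_history.append(x)
        self.y_history.append(y)
        self.y_pred_history.append(y_pred)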

@davidslater (Contributor)
Moved to #1114
