_save_metrics assumes that solve() is called just once #1543

Open
tomsilver opened this issue Sep 5, 2023 · 0 comments
Labels
bug Something isn't working

Comments

@tomsilver
Collaborator

_save_metrics assumes that solve() is called just once, but that's not the case when we're re-planning.
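To make the failure mode concrete, here is a minimal, hypothetical sketch (the class, method, and metric names are illustrative, not the actual predicators internals): if _save_metrics accumulates per-solve counts and reporting later divides by the number of tasks, then re-planning inflates the average.

```python
# Hypothetical minimal reproduction of the suspected pattern; the class,
# method, and metric names are illustrative, not the real predicators code.

class Approach:
    def __init__(self, predicates):
        self._predicates = predicates
        self._metrics = {"total_num_predicates": 0.0}

    def solve(self, task):
        # With an execution monitor, re-planning calls solve() again for
        # the *same* task, so _save_metrics runs more than once per task.
        self._save_metrics()

    def _save_metrics(self):
        # Accumulates the full predicate count on every solve() call.
        self._metrics["total_num_predicates"] += len(self._predicates)


approach = Approach(predicates={"On", "Clear", "Holding"})
num_tasks = 1
for _ in range(5):  # suppose the monitor triggers re-planning 4 extra times
    approach.solve(task=None)

# Reporting that assumes one solve() per task is now 5x too high:
# prints 15.0 instead of 3.0.
print(approach._metrics["total_num_predicates"] / num_tasks)
```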

An expensive way to reproduce:

```
python predicators/main.py --env sticky_table_tricky_floor --approach active_sampler_learning --experiment_id grid_row-planning_progress_explore --debug --strips_learner oracle --sampler_learner oracle --bilevel_plan_without_sim True --max_initial_demos 0 --sampler_mlp_classifier_max_itr 100000 --mlp_classifier_balance_data False --pytorch_train_print_every 10000 --active_sampler_learning_model myopic_classifier_mlp --active_sampler_learning_use_teacher False --online_nsrt_learning_requests_per_cycle 1 --max_num_steps_interaction_request 1000 --num_online_learning_cycles 10 --active_sampler_learning_explore_length_base 100000 --sesame_task_planner fdopt-costs --explorer active_sampler --active_sampler_explore_task_strategy planning_progress --seed 456 --execution_monitor expected_atoms
```

Then look at the average number of predicates: it will be ridiculously high and non-constant, even though we're not doing predicate learning.
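One possible direction, continuing the same hypothetical sketch above (not a claim about the intended fix): track how many times solve() has been called and use that as the denominator for per-solve metrics, so the average stays constant under re-planning.

```python
# Continues the hypothetical Approach sketch above; still illustrative.

class FixedApproach(Approach):
    def __init__(self, predicates):
        super().__init__(predicates)
        self._metrics["num_solve_calls"] = 0.0

    def _save_metrics(self):
        super()._save_metrics()
        self._metrics["num_solve_calls"] += 1.0


approach = FixedApproach(predicates={"On", "Clear", "Holding"})
for _ in range(5):
    approach.solve(task=None)

# 15.0 / 5 == 3.0, constant no matter how often we re-plan.
print(approach._metrics["total_num_predicates"] /
      approach._metrics["num_solve_calls"])
```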

@tomsilver added the bug label Sep 5, 2023