
Hard-Coded and Unnecessary segments should be removed #158

Open
UtkarshMishra04 opened this issue Apr 8, 2021 · 3 comments

Comments

UtkarshMishra04 (Member) commented Apr 8, 2021

Current Setup:
There is a lot of hard-coded logic, which causes problems when trying to generalize the framework across multiple settings.

To be introduced:
The code will be cleaned and hard-coded segments removed; all variables and hyperparameters will be properly parameterized through the Configs.

Major Changes:
The overall task of running custom settings with the default config template will become much easier after this.

Eventually, this will progress alongside #157 and will also resolve #43.
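As an illustration of the parameterization being proposed, here is a minimal sketch of moving scattered hard-coded hyperparameters into a single config object. The field names and defaults are hypothetical, not the project's actual config schema:

```python
from dataclasses import dataclass

# Hypothetical sketch: hyperparameters live in one config object instead of
# being hard-coded inside the brain/algorithm code. Field names and default
# values are illustrative only.
@dataclass
class TrainingConfig:
    learning_rate: float = 1e-3
    gamma: float = 0.99
    batch_size: int = 64
    total_episodes: int = 1000

    @classmethod
    def from_dict(cls, overrides):
        # Start from the defaults, then apply only the keys the user supplied
        # (e.g. parsed from a YAML/JSON config file).
        cfg = cls()
        for key, value in overrides.items():
            if not hasattr(cfg, key):
                raise KeyError(f"Unknown config key: {key}")
            setattr(cfg, key, value)
        return cfg

# Before: values like 0.99 were scattered through the code.
# After: one object is passed to the trainer, and unknown keys fail loudly.
cfg = TrainingConfig.from_dict({"gamma": 0.95, "batch_size": 32})
print(cfg.gamma, cfg.batch_size, cfg.learning_rate)  # → 0.95 32 0.001
```

Validating keys in `from_dict` is the part that makes generalization safe: a typo in a user's config raises an error instead of being silently ignored.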

sergiopaniego (Member) commented May 7, 2021

I believe we should deeply review the RL brains, removing any unnecessary code and generalising them so the agents can be easily trained and tested. What do you think? It's important to consider this issue for #157 already! We could add a table to the documentation (GitHub Pages webpage) with information about the available brains and how to train and test them. I think the RL brains are kind of difficult to understand right now. Tell me what you think!
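The proposed documentation table could look something like the sketch below. The brain names, framework column, and commands are purely illustrative placeholders, not the repository's actual brains or CLI:

```markdown
| Brain        | Framework        | Train                              | Test                              |
|--------------|------------------|------------------------------------|-----------------------------------|
| brain_dqn    | PyTorch          | (training command for this brain)  | (testing command for this brain)  |
| brain_qlearn | NumPy            | (training command for this brain)  | (testing command for this brain)  |
```

A table like this makes it immediately visible which brains exist and how to reproduce their results, which addresses the "difficult to understand" concern.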

UtkarshMishra04 (Member, Author) commented

I agree with your suggestion.
Now, with more and more algorithms being introduced, the number of hyperparameters increases substantially, so it becomes necessary to remove any hard-coded segments.
Further, with the documentation, things can be easily tested, verified, and reproduced.

Just one concern: the DQN implementation by @dcharrezt is not compatible with the current settings, as it is written in Keras. So I propose we shift completely to PyTorch, which will resolve the Keras issue and also ease CUDA and TensorBoard integration.

I will start working on this; just some minor updates to #157 are pending on my end.
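One way to keep the brains generalized as more algorithms are introduced is a small registry, so an algorithm is selected by a config key rather than by hard-coded imports. This is a hypothetical sketch; the names ("dqn", `make_agent`, the config keys) are illustrative, not the project's actual API:

```python
# Hypothetical registry sketch: algorithms register themselves under a name,
# and a config dict (e.g. loaded from YAML/JSON) picks one at runtime.
ALGORITHMS = {}

def register(name):
    """Class decorator that adds an agent class to the registry."""
    def decorator(cls):
        ALGORITHMS[name] = cls
        return cls
    return decorator

@register("dqn")
class DQNAgent:
    # Placeholder agent: a real one would build networks, buffers, etc.
    def __init__(self, **hyperparams):
        self.hyperparams = hyperparams

def make_agent(config):
    # Selecting by key means adding a new algorithm touches no existing code:
    # just define the class and register it.
    algo = config["algorithm"]
    if algo not in ALGORITHMS:
        raise ValueError(f"Unknown algorithm: {algo}")
    return ALGORITHMS[algo](**config.get("hyperparams", {}))

agent = make_agent({"algorithm": "dqn", "hyperparams": {"gamma": 0.99}})
print(type(agent).__name__)  # → DQNAgent
```

With this pattern, switching the DQN backend (Keras to PyTorch) only changes the registered class, not the training/testing entry points.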

sergiopaniego (Member) commented

With these changes you suggest, would the DQN implementation be in PyTorch only?
We still need to support TensorFlow-Keras for the non-RL part, since all of those brains use it.
