test/runtests.jl empty (+ arch discussion) #843
Each submodule has tests; see here for what gets tested by GitHub Actions.
Is there a way to trigger them locally before pushing to GitHub?
I don't think so, but running them from a branch / PR on GitHub works.
You can activate a subpackage in the REPL and then run its tests like this:
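(The original snippet was not preserved in this thread. A minimal sketch of what was likely meant, assuming the subpackages live under `src/` as in this repository, and using `ReinforcementLearningCore` as the example subpackage:)

```julia
# From the repository root, in the Julia REPL:
using Pkg

# Activate the subpackage's own environment
# (the path is an assumption; adjust to where the subpackage lives).
Pkg.activate("src/ReinforcementLearningCore")
Pkg.instantiate()

# Run only that subpackage's test suite.
Pkg.test()
```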
Has anyone tried a workflow with TestEnv.jl?
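(For context, a TestEnv.jl workflow might look roughly like the sketch below. This is an assumption about how it would be applied here, not something the thread spells out; `ReinforcementLearningCore` is used as an example subpackage.)

```julia
using TestEnv

# Activate an environment that also includes the package's
# test-only dependencies, without running the full test suite.
TestEnv.activate("ReinforcementLearningCore")

# Now the tests can be run (or include'd) interactively:
include("src/ReinforcementLearningCore/test/runtests.jl")
```

The appeal is iterating on tests in a live REPL session instead of paying the full `Pkg.test` startup cost on every run.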
What if we create a separate environment in
Want to put together a PR with some instructions on how to use it? Happy to try it out!
I am rethinking a bit what the requirements and scenarios are for such local dev/testing machinery.

Summary

My understanding is that the general ReinforcementLearning (RL) module/package is just the umbrella handler

Scenarios

Developing

When the user wants to make certain changes in the code base.

1. Developing a specific submodule (e.g.
I guess it's possible, especially in the current situation where a lot of stuff is no longer on the main branch. And I agree that it's probably a mess to work with. Good tags are certainly essential for them.
The current subpackage design has given me so many headaches. If it was up to me, I'd totally be in favor of a complete reorganization of the project into something more akin to Optimization.jl, which serves a somewhat similar (but not quite the same) purpose of being a metapackage. That is, it's just a meta package and it does not "contain" the subpackages, it only imports them. Each RLXxx.jl would be its own package.
That's a good argument in favor of a standard design, yes. I don't know if subpackages are extension compatible, but if they are, it may be painful to implement.
I think I agree with you, so I have nothing to persuade you about.
Doesn't this look more like the effort behind https://github.com/juliareinforcementlearning/commonrlinterface.jl? BTW the Makie.jl folks also follow a subpackage structure.
Some ongoing discussion about that on their Discord now!
CommonRLInterface is more like MathOptInterface.jl than Optimization.jl. It is an interface for RL environments that solvers can use, but does not aim to provide any sort of

I think it is important to keep the environment interface separate from policies, agents, and environment implementations. That way people are free to implement their own learning algorithms that have different strengths, i.e. someone could make a clean RL package that has easy-to-read and easy-to-modify code, and someone else could make a high-performance PPO implementation that works on a wide range of environments. A tabular learning algorithm then does not need to depend on Flux, for example. This is similar to the function of

Optimization.jl is trying to bring together many efforts that are owned and maintained with passion by many different individual developers. Currently, ReinforcementLearning.jl tries to contain everything. This has some advantages, but I think we would end up with a much healthier ecosystem if we took a more federated, less centralized approach like the Julia optimization community.

I'll stop here to keep this post short, but I'm happy to discuss further in a different issue if people are interested.
Hey all! This is a great discussion and I think sometime in the coming week, it would be great to reach a consensus, but for the moment, here are a few things from my side:
That is already promising. Do you intend to keep it that way, and how will you handle new breaking features per subpackage?
I cannot see how testing across the general registry would be worse here.
Sooner or later I will find some time and I could help here, but first I really need to understand the motives and the dev workflow ^^

@zsunberg I am very happy to read your comment. Coming from the JuliaPOMDP org, do you have any aspirations/expectations from RL.jl that would help both organizations grow? Also, are there any
I think that most development on this package will be carried out by volunteers, especially grad students (unless some company or very big research org gets behind this), so our development model needs to be tolerant to grad students only being consistent contributors for a few years. To me that means having a very minimal set of core packages that students can build their projects off of and some parts can get thrown away/abandoned without weighing down the entire ecosystem.
I mostly would like a system that would make it easy for my students to work on research projects involving RL applied to more realistic applications... I don't have time to write too much at the moment, but so far none of my students have chosen to hook into RL.jl because it is so monolithic and it has been hard for them to take just the parts that they need (or perhaps they have not been able to understand how to do that). It would also be great to collaborate on things like common types for action spaces, state spaces, etc.
Here is perhaps an explanatory example of why my students have not decided to hook into RL.jl: people using RL for research fall into one of two categories: either they have a real-world problem that they want to use RL to solve, or they want to make better RL algorithms and test them against predecessors. It seems like these things are kind of intertwined currently in RL.jl.
The way I see it, RL.jl should primarily be a repo of working RL algorithms. I think Zoo is the centerpiece of the ecosystem, the thing most users will come here to use, and Environments to play with them. The biggest barriers imo really are the incomplete and outdated documentation and the decrepit state of the packages. I recall one issue where someone wanted to run SAC on a simple environment, and the algorithm plain didn't work. He probably moved to Stable Baselines in Python.
(some thoughts currently on Discourse: https://discourse.julialang.org/t/large-vs-small-packages/9447/19)
Yes, in order to improve this, the community needs to understand what choices led to this current state (at least to some extent) and adopt a strategy that is less prone to it. In the discourse discussion, it was encouraging to hear cortner's experience that breaking up into smaller packages did improve the ease of collaborators' and students' contributions. |
Quite shockingly, I realised test/runtests.jl is empty. What's the current way to test this package?