Implement Search & Rescue Multi-Agent Environment #259
Conversation
* Initial prototype
* feat: Add environment tests
* fix: Update esquilax version to fix type issues
* docs: Add docstrings
* docs: Add docstrings
* test: Test multiple reward types
* test: Add smoke tests and add max-steps check
* feat: Implement pred-prey environment viewer
* refactor: Pull out common viewer functionality
* test: Add reward and view tests
* test: Add rendering tests and add test docstrings
* docs: Add predator-prey environment documentation page
* docs: Cleanup docstrings
* docs: Cleanup docstrings
Here you go @sash-a, this is correct now. Will grab a look at the contributor license and CI failure now.
I think the CI issue is the Python version I've set for Esquilax.
The Python version PR is merged now, so hopefully it will pass 😄 Should have time during the week to review this, really appreciate the contribution!
An initial review with some high level comments about jumanji conventions. Will go through it more in depth once these are addressed. In general it's looking really nice and well documented!
Not quite sure on the new swarms package, but also not sure where else we would put it. Not sure on it especially if we only have 1 env and no new ones planned.
One thing I don't quite understand is the benefit of `amap` over `vmap`, specifically in the case of this env?
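For context, the plain `jax.vmap` alternative implied by this question would look roughly like the sketch below; the `observe_one` helper and the shapes are assumptions for illustration, not code from this PR:

```python
import jax
import jax.numpy as jnp


def observe_one(agent_pos: jnp.ndarray, all_pos: jnp.ndarray) -> jnp.ndarray:
    """Per-agent observation: offsets from this agent to every agent."""
    return all_pos - agent_pos


all_pos = jnp.zeros((8, 2))  # 8 agents in a 2d space
# Map the per-agent function over the agent axis, broadcasting the full state.
observations = jax.vmap(observe_one, in_axes=(0, None))(all_pos, all_pos)  # (8, 8, 2)
```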
Please @ me when it's ready for another review or if you have any questions.
As for your questions in the description:
Nope just the environment is fine
Please do add animation it's a great help.
We do want defaults, I think we can discuss what makes sense.
It's generated with mkdocs, we need an entry in the mkdocs config.

One big thing I've realized this is missing after my review is training code. We like to validate that the env works. I'm not 100% sure if this is possible because the env has two teams, so which reward do you optimize? Maybe training with a simple heuristic, e.g. you are the predator and the prey moves randomly? For examples see the training code for the other environments.
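For what it's worth, a minimal sketch of the heuristic-prey idea being suggested, with assumed names and action layout (not code from this PR):

```python
import jax
import jax.numpy as jnp


def random_prey_action(key: jax.Array, num_prey: int, max_speed: float) -> jnp.ndarray:
    """Sample a random heading and speed for each prey agent."""
    k_heading, k_speed = jax.random.split(key)
    headings = jax.random.uniform(k_heading, (num_prey,), minval=0.0, maxval=2.0 * jnp.pi)
    speeds = jax.random.uniform(k_speed, (num_prey,), maxval=max_speed)
    # Per-prey 2d velocity, shape (num_prey, 2).
    return jnp.stack([jnp.cos(headings), jnp.sin(headings)], axis=-1) * speeds[:, None]
```

The trained predator policy would then only ever play against these heuristic prey moves, which sidesteps the question of which team's reward to optimize.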
* refactor: Formatting fixes
* fix: Implement rewards as class
* refactor: Implement observation as NamedTuple
* refactor: Implement initial state generator
* docs: Update docstrings
* refactor: Add env animate method
* docs: Link env into API docs
Hi @sash-a, just merged changes that I think address all the comments.
Could you have something like a
Yeah, in a couple of cases using it is overkill, a hang-over from when I was writing this example with an Esquilax demo in mind! Makes sense to use `vmap`.
I'll look at adding something to training next. I think random prey with trained predators makes sense, will look to implement.
If you can add more that would be great! Then I'm happy to keep the swarm package as is. What we'd be most interested in is some kind of env with only 1 team and strictly co-operative, like predators vs heuristic prey or vice versa, not sure if you planned to make any envs like this? But I had a quick look at the changes and it mostly looks great! Will leave an in-depth review later today/tomorrow 😄 Also I updated the CI yesterday, we're now using ruff, so you will need to update your pre-commit setup.
One other thing, the only reason I've been hesitant to add this to Jumanji is because it's not that related to industry problems, which is a common focus across all the envs. I was thinking maybe we could re-frame the env from predator-prey to something else (without changing any code, just changing the idea). I was thinking maybe a continuous cleaner where your target position is changing, or something to do with drones (maybe delivery), do you have any other ideas and would you be happy with this?
Yeah, I was very interested in developing envs for co-operative multi-agent RL, so was keen to design or implement more environments along these lines. There's a simpler version of this environment which is just the flock, i.e. where the agents move in a co-ordinated way without colliding. I've also seen an environment where the agents have to effectively cover an area that I was going to look at.
How do I do this? I did try reinstalling pre-commit, but it raised an error that the config was invalid?
Yeah definitely open to suggestions. I was thinking more in the abstract for this (will the agents develop some collective behaviour to avoid predators) but happy to modify towards something more concrete.
Great to hear on the co-operative MARL front, those both sound like nice envs to have.
Couple things to try:
If this doesn't work check
Agreed it would be nice to keep it abstract for the sake of research, but I think it's nice that this env suite is all industry focused. I quite like something to do with drones - seems quite industry focused although we must definitely avoid anything to do with war. I'll give it a think
Hi @sash-a, fixed the formatting and consolidated the predator-prey type.
Thanks, I'll try to have a look tomorrow, sorry the previous 2 days were a bit busier than expected. For the theme I think maritime search and rescue works well. It's relatively real world and fits the current dynamics.
Thanks, no worries. Actually yeah, funnily enough a co-ordinated search was something I'd been looking into. Yeah, could have one set of agents with some drift/random movements that need to be found inside the simulated region.
Sorry, still didn't have time to review today, and Mondays are usually super busy for me, but I'll get to this next week! As for the theme, do you think we should then change the dynamics a bit to make the prey heuristically controlled, moving sort of randomly?
No worries, sure I'll do a revision this weekend!
* feat: Prototype search and rescue environment
* test: Add additional tests
* docs: Update docs
* refactor: Update target plot color based on status
* refactor: Formatting and fix remaining typos
Hi @sash-a, this turned into a larger rewrite (sorry for the extra review work, let me know if you want me to close this PR and just start with a fresh one), but I think it's a more realistic scenario.
A couple choices we may want to consider:
Thanks for this @zombie-einstein I'll start having a look now 😄
awesome!
Agreed I think we should actually hide targets once they are located so as to not confuse other agents.
I think individual is fine and users can sum it externally if they want, e.g. we do this in Mava for Connector.
Not quite following what you mean here. I would say an agent should observe all agents and targets (that have not yet been rescued) within their local view.
Maybe add this as an optional reward type, I think I prefer 1 if target is saved and 0 otherwise - makes the env quite hard, but we should test what works best.
Definitely!
We don't have a convention for this. I wouldn't add remaining steps to the obs directly, I don't see why the algorithm would need that, although again it needs to be tested. Agreed with remaining targets, makes sense to observe that. I think normalised floats make sense.
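Putting a couple of these replies together, a minimal sketch of the reward options being discussed (names and shapes are assumptions, not the PR's implementation): each agent gets 1 per previously-unfound target it detects this step, and the shared variant just broadcasts the team total.

```python
import jax.numpy as jnp


def individual_rewards(newly_found: jnp.ndarray) -> jnp.ndarray:
    # newly_found: bool array of shape (num_agents, num_targets), True where
    # agent i detected target j for the first time this step.
    return newly_found.sum(axis=1).astype(jnp.float32)  # shape (num_agents,)


def shared_rewards(newly_found: jnp.ndarray) -> jnp.ndarray:
    per_agent = individual_rewards(newly_found)
    return jnp.full_like(per_agent, per_agent.sum())  # every agent gets the team total
```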
Amazing job with this rewrite, haven't had time to fully look at everything but it does look great so far!
Some high level things:
- Please add a generator, dynamics and viewer test (see examples of the viewer test for other envs)
- Can you also add tests for the common/updates
- Can you start looking into the networks and testing for jumanji
Sorry, these are somewhat tedious tasks, but I really like the env we've landed on 😄
Thanks @sash-a, just a couple follow ups to your questions:
So I was picturing (and as currently implemented) a situation where the searchers have to come quite close to the targets to "find" them (as if they are obscured/hard to find), but the agents have a larger vision range to see the location of other searcher agents (to allow them to improve search patterns, for example). My feeling was that this creates more of a search task, whereas if the targets are part of their larger vision range it feels like it could be more of a routing-type task. I then thought it may be good to include found targets in the vision to allow agents to visualise the density of located targets.
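A sketch of the two-range idea described here, with assumed names (the 0.1/0.01 defaults mirror values visible later in the review, but the function itself is illustrative, not the PR's code):

```python
import jax.numpy as jnp


def detection_masks(
    agent_pos: jnp.ndarray,    # (2,) this searcher's position
    other_pos: jnp.ndarray,    # (num_agents, 2) positions of all searchers
    target_pos: jnp.ndarray,   # (num_targets, 2) positions of all targets
    vision_range: float = 0.1,
    contact_range: float = 0.01,
):
    # Other searchers are visible within the (large) vision range ...
    sees_agents = jnp.linalg.norm(other_pos - agent_pos, axis=-1) < vision_range
    # ... but targets are only "found" within the (much smaller) contact range.
    finds_targets = jnp.linalg.norm(target_pos - agent_pos, axis=-1) < contact_range
    return sees_agents, finds_targets
```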
I thought that, treating it as a time-sensitive task, some indication of the remaining time to find targets could be a useful feature of the observation.
Yup will do!
* Set plot range in viewer only
* Detect targets in a single pass
* Prototype search-and-rescue network and training
* Refactor and docstrings
* Share rewards if found by multiple agents
* Use Distrax for normal distribution
* Add bijector for continuous action space
* Reshape returned from actor-critic
* Prototype tanh bijector w clipping
* Fix random agent and cleanup
* Customisable reward aggregation
* Cleanup
* Configurable vision model
* Docstrings and cleanup params
Hi @zombie-einstein, so I got it piping through in Mava, but unfortunately it doesn't seem to be learning there either, although I didn't leave it training for long and it was only feed-forward independent PPO. I just did an env with 2 agents and 2 targets, maybe that is too easy? This is the branch: feat/search-and-rescue and this is the commit where you can see what I've changed. Most of the important changes live in

One thing I needed to do to get it working is to change the shape of the reward during reset: it doesn't contain the agent dimension during reset but it does during the step, so I added it during reset.
Definitely not the cleanest though, maybe just repeat the reward instead of flattening the discount. If you could try to get this training well, either with Mava (use any algorithm you like) or with jumanji, I think that is the last blocker to merging this. If you have any questions about Mava feel free to ask here or open an issue there 😄
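A minimal sketch of the reset-shape fix being described, with an assumed agent count (not the actual Jumanji/Mava code): the reward and discount at reset get the same per-agent leading dimension that `step` returns.

```python
import jax.numpy as jnp

num_agents = 2  # assumed for illustration
reward_at_reset = jnp.zeros((num_agents,), dtype=jnp.float32)   # (num_agents,), matches step
discount_at_reset = jnp.ones((num_agents,), dtype=jnp.float32)  # per-agent, not a scalar
```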
* Add observation including all targets
* Consistent test module names
* Use CNN embedding
Great, thanks @sash-a, I'll grab a look. I suspect that as it stands, with only 2 agents this may be really difficult and essentially stochastic: the agents would see nothing until they bump into a target pretty much by chance. I actually just added an additional observation type that includes found and unfound targets in the view, so I will try this first, should be easier (the different observations now treat this as if the agents have different visual channels for the different agent/target information).
Sounds great, I'd also try other settings that make it easier, like increasing sight range and radius, and generally anything else you can think of that would make it easier.
@sash-a I think the latest JAX release 0.4.36 is breaking something, I'm getting a bunch of errors. Shall I pin the JAX version in this PR?
Yes please! I was getting it this morning in Mava also 😄 Seems to be related to jax-ml/jax#25332
Just wanted to track a couple final design choices I think have got a bit lost in the comments:
Hey @sash-a, it seems like at least for the full-visibility observation (i.e. agents can see unfound targets) this is relatively straightforward to train. A single agent seems to be doing pretty well, getting most of the targets, and with two agents they finish before the time limit. I'm guessing rewards recorded here are individual agents, right? Only thing I want to double check is that for 2 agents rewards seem to max out at 50. This makes sense in that the agents can receive 100 in total, but it seems unlikely that one agent would not randomly locate more targets than the other. Will run more experiments, wanted to ask:
The rewards are the mean over all agents, and summed over the episode.
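In other words (a small sketch with example shapes, not Mava's actual logging code):

```python
import jax.numpy as jnp

episode_rewards = jnp.zeros((50, 2))             # (num_steps, num_agents), example values
per_agent_return = episode_rewards.sum(axis=0)   # summed over the episode
reported_metric = per_agent_return.mean()        # mean over all agents
```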
You can do this through
No, but we have some scripts for this:
Then its use in ff_ippo:
* Use channels view parameters
* Rename parameters
* Include step-number in observation
* Add velocity field to targets
* Add time scaled reward function
Hey, just tracking that issue with JAX: it's been fixed in 0.4.37, so can you please unpin the JAX version in this PR?
vision_range=0.1,
view_angle=searcher_view_angle,
agent_radius=0.01,
env_size=self.generator.env_size,
In this case do we want to pass these arguments in from the constructor, to make tweaking these values a bit more streamlined, since the arguments are standardised across the different vision models? Though I appreciate this is a standard pattern.
I did this when testing with Mava, passing in the type and constructing here.
Ye it can be quite a mission to instantiate, it's possible we should reconsider this pattern, but for now leave it as is just to stay consistent
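For reference, a hypothetical sketch of the alternative pattern being discussed (class and parameter names are made up, not this PR's API): the caller builds the vision model and passes it into the environment, with a default so instantiation stays simple.

```python
from typing import Optional


class VisionModel:
    """Stand-in for one of the environment's vision/observation models."""

    def __init__(self, vision_range: float, view_angle: float, agent_radius: float, env_size: float):
        self.vision_range = vision_range
        self.view_angle = view_angle
        self.agent_radius = agent_radius
        self.env_size = env_size


class Env:
    def __init__(self, vision_model: Optional[VisionModel] = None, env_size: float = 1.0):
        # Default keeps instantiation simple; passing one in makes tweaking easy.
        self.vision_model = vision_model or VisionModel(
            vision_range=0.1, view_angle=0.5, agent_radius=0.01, env_size=env_size
        )
```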
# Search-and-Rescue environment
register(id="SearchAndRescue-v0", entry_point="jumanji.environments:SearchAndRescue")
Is it worth registering the environment with different vision models?
Do you mean fully versus partially observable?
Ah I think you mean the different observation functions. I'd say we should aim to use the observation that only visualizes targets and searchers within their vision cones. If we see a good training curve with this then that should be the default.
In general I'd like to have a set of scenarios for most/all environments in jumanji (see #248). So it would be cool to think of a set (3-4) of easy/hard envs and we can register those. If they happen to have different observation models that's fine.
Yeah I was picturing something along these lines, like the easy one is where un-found targets are visible, and then harder versions use the version with hidden targets.
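A hypothetical sketch of what a set of registered scenarios could look like (the ids are made up, and this assumes the registry can forward constructor options such as the observation function; if it can't, variants could be thin subclasses instead):

```python
from jumanji.registration import register  # import path assumed

# Easy variant: observation includes un-found targets.
register(
    id="SearchAndRescue-easy-v0",
    entry_point="jumanji.environments:SearchAndRescue",
)

# Hard variant: targets stay hidden until found.
register(
    id="SearchAndRescue-hard-v0",
    entry_point="jumanji.environments:SearchAndRescue",
)
```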
Hi @sash-a, I just pushed a bunch of changes to add those features from the comments above (scaled rewards, time-step, and target velocities), so I think this is now mostly feature complete. Added a couple of last comments above I wanted to run by you. I also now want to do a final pass over the docstrings and docs, and add a couple more tests. From testing, the environment with full visibility is relatively straightforward, but from the animation the agents seem to be essentially routing to the closest next unfound target (I need to find a way to upload a gif here!). I've not made much more progress with the model with hidden targets, though I need to test a few more configurations. I had a couple of thoughts:
* Update docstrings
* Update tests
* Update environment readme
Hey @zombie-einstein this is really great progress and agreed it seems pretty much feature complete to me 🔥! Just a heads up I will be on holiday from the 16th of December to 6th January. I won't be able to do code review in that time, but I'll be available for any questions you may have. I think it's realistic to expect this to be merged early January. I'll just have a final look once I'm back and hopefully everything will be good to go 😄
Ok this is definitely an issue when learning and something I've noticed as being very important to the performance in other environments. I would add the absolute position of the current agent to its observation, and normalize it by world size.
Ye maybe the global state could be the absolute position of all agents and the target part of their observation?
I would highly recommend trying rec-mappo in mava (assuming there's a sensible global state) because this problem seems like it would definitely benefit from some recurrence in the policy.
Great yeah this makes sense, will add this and try it out. Just to check, the global state should have one entry per agent, something like
the per-agent position observation might need to be rotated like?
That's the way we do it in Mava, 1 per agent. But to be honest it's a bit of an open question as to how the global state should be structured. In my opinion it should be the same across all agents (I'm pretty sure this is what the theory says), but empirically we've seen good results when it's tailored to each agent. So what I would do for a first pass is a global state with one entry per agent.

Also, I'm assuming this was just an example above, but don't forget the global state should include information about targets. Some global state ideas for targets (in all cases these include the normalised absolute positions of all agents):
Honestly often concatenating all observations is a good global state also, it's hard to know what will work best without testing it.
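A sketch of that first pass, with assumed names and shapes (not Mava's or the environment's actual code): the state is identical for every agent and is built from the normalised absolute positions of all agents and all targets, plus a found flag per target.

```python
import jax.numpy as jnp


def make_global_state(
    agent_pos: jnp.ndarray,     # (num_agents, 2) absolute positions
    target_pos: jnp.ndarray,    # (num_targets, 2) absolute positions
    target_found: jnp.ndarray,  # (num_targets,) bool
    env_size: float,
) -> jnp.ndarray:
    agents = agent_pos / env_size
    targets = jnp.concatenate(
        [target_pos / env_size, target_found[:, None].astype(jnp.float32)], axis=1
    )
    # Flat vector, shared by every agent.
    return jnp.concatenate([agents.ravel(), targets.ravel()])
```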
Add a multi-agent search and rescue environment where a set of agents has to locate moving targets in a 2d space.

Changes

* Added a new `swarm` environment group/type (was not sure the new environment fit into an existing group, but happy to move it if you think it would fit better somewhere else)

Todo

Questions

* Registered the environment under `jumanji.environments`, do types also need forwarding somewhere?
* I've not added an `animate` method to the environment, but saw that some others do? Easy enough to add.
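For reference, a minimal usage sketch against the id from the registration snippet reviewed above, assuming the standard functional Jumanji API applies unchanged to this environment:

```python
import jax
import jumanji

env = jumanji.make("SearchAndRescue-v0")
state, timestep = jax.jit(env.reset)(jax.random.PRNGKey(0))

# Stepping follows the usual functional Jumanji pattern:
#   state, timestep = jax.jit(env.step)(state, action)
# where `action` is a per-agent continuous action whose exact shape depends on
# the environment configuration.
```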