precommit hooks
P-Schumacher committed Dec 21, 2023
1 parent 1da6f8c commit 3e35827
Showing 3 changed files with 13 additions and 19 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -22,7 +22,7 @@ If you just want to see the code for DEP, take a look at `deprl/dep_controller.p
 
 ### Big update!
 
-We now provide code for our newest preprint, [Natural and Robust Walking using Reinforcement Learning without Demonstrations in High-Dimensional Musculoskeletal Models](https://sites.google.com/view/naturalwalkingrl). With this work, we take a step towards _natural_ movement generation with RL.
+We now provide code for our newest preprint, [Natural and Robust Walking using Reinforcement Learning without Demonstrations in High-Dimensional Musculoskeletal Models](https://sites.google.com/view/naturalwalkingrl). With this work, we take a step towards _natural_ movement generation with RL.
 This update provides code for adaptive energy costs in muscle-driven systems and adds support for the SCONE and Hyfydy software packages via the recently released [sconegym](https://github.com/tgeijten/sconegym/tree/main) environment suite.
 
 The new features also include pre-trained baselines from the preprint, enabling rendering from SCONE and much more. See the [docs](https://deprl.readthedocs.io/en/latest/?badge=latest) for more information.
@@ -61,7 +61,7 @@ We also collaborated with groups that provide musculoskeletal control environmen
 ### Hyfydy and sconegym
 We include several pre-trained baselines and configuration files to train the policies from our newest [preprint](https://arxiv.org/abs/2309.02976). These allow you to train walking agents in [Hyfydy](hyfydy.com) with RL for natural walking and robust running tasks. We worked together with Thomas Geijtenbeek [@tgeijten](https://github.com/tgeijten) to create a Python environment interface for Hyfydy, called [sconegym](https://github.com/tgeijten/sconegym/tree/main)!
 This repository also includes the definitions of all the cost terms we used; see [here](https://github.com/tgeijten/sconegym/blob/main/sconegym/gaitgym.py).
-The configuration files to train our sconegym policies are included [here](https://github.com/martius-lab/depRL/tree/main/experiments/hyfydy).
+The configuration files to train our sconegym policies are included [here](https://github.com/martius-lab/depRL/tree/main/experiments/hyfydy).
 
 
 Check out how to install sconegym from their [repo](https://github.com/tgeijten/sconegym/tree/main); you can immediately start with a simple OpenSim model. To access the fast Hyfydy engine and the complex 3D models, you need to request a trial license from the [Hyfydy](hyfydy.com) website or purchase a license. Some usage examples can be found [here](https://deprl.readthedocs.io/en/latest/hyfydy_baselines.html).
19 changes: 7 additions & 12 deletions examples/example_only_dep_myosuite.py
@@ -1,25 +1,20 @@
-import gym
-import myosuite
 import time
-from deprl import env_wrappers
-from deprl.dep_controller import DEP
 
+import gym
+import myosuite  # noqa
 
+from deprl import env_wrappers
+from deprl.dep_controller import DEP
 
-env = gym.make('myoLegWalk-v0')
+env = gym.make("myoLegWalk-v0")
 env = env_wrappers.GymWrapper(env)
 
 # You can also use the SconeWrapper instead
 # env = env_wrappers.SconeWrapper(env)
 dep = DEP()
 dep.initialize(env.observation_space, env.action_space)
 
 env.reset()
 for i in range(1000):
-    action = dep.step(env.muscle_lengths())[0,:]
-    print(action.shape)
+    action = dep.step(env.muscle_lengths())[0, :]
     next_state, reward, done, _ = env.step(action)
-    time.sleep(0.01)
+    time.sleep(0.005)
     env.mj_render()

9 changes: 4 additions & 5 deletions examples/example_only_dep_scone.py
@@ -1,11 +1,11 @@
 import gym
-import sconegym
+import sconegym  # noqa
 
 from deprl import env_wrappers
 from deprl.dep_controller import DEP
 
 
 # create the sconegym env
-env = gym.make('sconewalk_h2190-v1')
+env = gym.make("sconewalk_h2190-v1")
 
 # apply wrapper to environment
 env = env_wrappers.SconeWrapper(env)
@@ -28,7 +28,7 @@
 
 while True:
     # samples an action from DEP
-    action = dep.step(env.muscle_lengths())[0,:]
+    action = dep.step(env.muscle_lengths())[0, :]
     # applies action and advances environment by one step
     state, reward, done, info = env.step(action)
 
@@ -46,4 +46,3 @@
         break
 
 env.close()
-
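Both example files above share the same control-loop shape: wrap the environment, initialize DEP from the observation and action spaces, then feed the current muscle lengths back into the controller at every step. A rough, dependency-free sketch of that wiring follows; `StubEnv` and `StubController` are illustrative stand-ins invented here, not part of deprl, gym, myosuite, or sconegym:

```python
class StubEnv:
    """Illustrative stand-in for a wrapped muscle environment (not a real API)."""

    def __init__(self, n_muscles=5):
        self.n_muscles = n_muscles
        self.lengths = [0.8] * n_muscles  # start away from the rest length of 1.0

    def reset(self):
        self.lengths = [0.8] * self.n_muscles
        return list(self.lengths)

    def muscle_lengths(self):
        return list(self.lengths)

    def step(self, action):
        # Passive drift toward rest length (1.0), opposed by muscle excitation.
        self.lengths = [
            l + 0.1 * (1.0 - l) - 0.05 * a for l, a in zip(self.lengths, action)
        ]
        reward, done, info = 0.0, False, {}
        return list(self.lengths), reward, done, info


class StubController:
    """Caricature of the DEP interface: step(obs) returns a batch of actions."""

    def initialize(self, n_inputs, n_outputs):
        self.n_outputs = n_outputs

    def step(self, muscle_lengths):
        # Toy feedback rule: excite each muscle in proportion to its length deficit.
        action = [min(1.0, max(0.0, 1.0 - l)) for l in muscle_lengths]
        return [action]  # batch of size 1, hence the [0] indexing below


env = StubEnv()
dep = StubController()
dep.initialize(env.n_muscles, env.n_muscles)

env.reset()
for _ in range(100):
    # Same pattern as dep.step(env.muscle_lengths())[0, :] in the real examples.
    action = dep.step(env.muscle_lengths())[0]
    state, reward, done, info = env.step(action)
```

The real DEP controller returns a NumPy array, hence the `[0, :]` slice in the examples; the stub uses plain lists to stay dependency-free, so a plain `[0]` suffices.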