- Significant improvements for generating slow trajectories. Added a re-timing post-processing step to slow down optimized trajectories. Use `MotionGenPlanConfig.time_dilation_factor < 1.0` to slow down a planned trajectory. This is more robust than setting `velocity_scale < 1.0` and also allows changing the speed of trajectories between planning calls.
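Conceptually, re-timing stretches the trajectory's time step by the inverse of the dilation factor, which scales velocities down linearly and accelerations down quadratically while leaving positions untouched. A minimal, library-free sketch of that scaling (the function and variable names here are illustrative, not cuRobo's implementation):

```python
def retime(dt, velocities, accelerations, time_dilation_factor):
    """Re-time a trajectory: stretch time, keep positions, scale derivatives.

    time_dilation_factor < 1.0 slows the trajectory down.
    """
    new_dt = dt / time_dilation_factor  # time step stretches
    new_vel = [v * time_dilation_factor for v in velocities]  # d/dt scales linearly
    new_acc = [a * time_dilation_factor ** 2 for a in accelerations]  # d2/dt2 scales quadratically
    return new_dt, new_vel, new_acc

# Slowing to half speed halves velocities and quarters accelerations.
dt, vel, acc = retime(0.02, [1.0, 2.0], [4.0, 8.0], time_dilation_factor=0.5)
```

Because only the timing changes, the same optimized path can be replayed at different speeds between planning calls without re-optimizing.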
- `curobo.util.logger` adds `logger_name` as an input, enabling use of the logging API with other packages.
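The pattern here is the standard `logging` hierarchy: passing a logger name routes messages through a named logger, so a host application can configure or filter this package's output alongside its own. A minimal stdlib sketch of the idea (the `setup_logger` function is illustrative, not cuRobo's exact API):

```python
import logging

def setup_logger(level=logging.INFO, logger_name="curobo"):
    # A named logger lets a host application control this package's
    # verbosity independently of the root logger or other libraries.
    logger = logging.getLogger(logger_name)
    logger.setLevel(level)
    return logger

# A host package can nest the logger under its own namespace:
log = setup_logger(logging.WARNING, logger_name="my_app.curobo")
```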
### Changes in default behavior
- Move `CudaRobotModelState` from `curobo.cuda_robot_model.types` to `curobo.cuda_robot_model.cuda_robot_model`.
- Activation distance for the bound cost is now a ratio instead of an absolute value, to account for the very small range of joint limits when `velocity_scale < 0.1`.
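The practical effect: instead of activating the bound cost at a fixed distance from a limit, the cost activates at a fixed fraction of each joint's range, so joints with tiny ranges (e.g., after aggressive velocity scaling) still get a proportionate activation region. An illustrative sketch (names and the ratio value are hypothetical, not cuRobo's internals):

```python
def activation_distance(lower, upper, ratio=0.05):
    # Ratio-based activation: a joint with a 0.2 rad range activates its
    # bound cost 0.01 rad from each limit, rather than at a fixed absolute
    # distance that could exceed the entire range.
    return ratio * (upper - lower)

d = activation_distance(-0.1, 0.1, ratio=0.05)
```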
- `TrajResult` is renamed to `TrajOptResult` to be consistent with other solvers.
- Order of inputs to `get_batch_interpolated_trajectory` has changed.
- `MpcSolverConfig.load_from_robot_config` uses `world_model` instead of `world_cfg` to be consistent with other wrappers.
### BugFixes & Misc.
- Fix bug in `MotionGen.plan_batch_env` where the graph planner was being set to `True`. This also fixes the Isaac Sim example `batch_motion_gen_reacher.py`.
- Add `min_dt` as a parameter to `MotionGenConfig` and `TrajOptSolverConfig` to improve readability and allow for a smaller `interpolation_dt`.
- Add an epsilon to `min_dt` to make sure that after time scaling, joint temporal values are not exactly at their limits.
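Taken together, these two items amount to clamping the trajectory time step from below by `min_dt` and then padding it slightly, so that time-scaled joint values land strictly inside their limits rather than exactly on them. A rough sketch of that clamping (hypothetical names and epsilon value, not the actual solver code):

```python
def clamp_dt(raw_dt, min_dt, epsilon=1e-4):
    # Clamping from below keeps interpolation well-behaved for very fast
    # trajectories; the epsilon keeps time-scaled velocities and
    # accelerations strictly inside, not exactly at, their limits.
    return max(raw_dt, min_dt) + epsilon

dt = clamp_dt(0.01, min_dt=0.02, epsilon=1e-4)
```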
- Remove the 0.02 offset for `max_joint_vel` and `max_joint_acc` in `TrajOptSolver`.
- Bound cost now scales the cost by `1 / limit_range**2` when `limit_range < 1.0` to be robust to small joint limits.
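In effect, the cost weight grows as a joint's range shrinks below 1.0, so tightly limited joints still produce a strong penalty gradient near their bounds. An illustrative sketch of the scaling rule (the function name is hypothetical, not the actual cost term):

```python
def bound_cost_scale(limit_range):
    # For ranges below 1.0, amplify the cost by 1 / range**2 so narrow
    # joint limits still matter; wider ranges are left unscaled.
    return 1.0 / limit_range ** 2 if limit_range < 1.0 else 1.0

s = bound_cost_scale(0.5)  # a 0.5-range joint gets a 4x stronger cost
```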
- Added documentation for `curobo.util.logger`, `curobo.wrap.reacher.motion_gen`, `curobo.wrap.reacher.mpc`, and `curobo.wrap.reacher.trajopt`.
- When the interpolation buffer is smaller than required, a new buffer is created with a warning instead of raising an exception.
- `torch.cuda.synchronize()` now synchronizes only the specified CUDA device, using `torch.cuda.synchronize(device=self.tensor_args.device)`.