ROS Node Infrastructure
I created the diagram below to help visualize the nodes and control processes that make up OpenQuadruped's control structure.
Each layer of abstraction is drawn as a discrete, rounded-corner block nested inside the larger hardware blocks. However, since it isn't necessarily efficient to run each of these layers as its own ROS node, I group several layers of abstraction into single ROS nodes.
In the diagram below, any set of blocks surrounded by a dashed-line box belongs to a single node; for example, the PS4 controller is one node.
To clarify exactly what happens on the Jetson Nano: only one of two main control processes runs at a time, depending on the desired robot output.

The first control process is the body IK model. It takes the desired static yaw/pitch/roll and xyz position of the body and returns the hip-to-foot vectors (the xyz coordinates of each foot relative to its hip). This is used for static manipulation of the body's pose when the robot isn't walking.

The second control process is the RL agent, which takes in IMU values and passes them through a matrix to compute control parameters (foot clearance, etc.) that increase robustness. Its output feeds the foot path planner, a 12-point Bezier curve, which is then passed to the gait planner. The gait planner time-parameterizes the foot path into a trajectory for each foot that can be sampled in real time.

However, we don't want all the feet to move at exactly the same time, or even in the same direction. I therefore apply a phase offset to each leg (these offsets define the gait), which simply shifts that leg's sample time by a fraction of the stride period. I also rotate each leg's trajectory about the z axis based on user input from the controller. With a trajectory in hand, I sample it with ROS time and send the resulting xyz point for each leg to the low-level controller (LLC), the Teensy.

The main point is that no matter which control process is running, the Jetson Nano's output to the Teensy is a set of xyz points (or hip-to-foot vectors, if you prefer to think of them that way) that must somehow be converted into motor angles.
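To make the trajectory-sampling step more concrete, here is a minimal Python sketch of the idea: evaluate a Bezier foot path, offset the sample time by each leg's phase, and rotate the result about z based on the yaw command. The names (`STRIDE_PERIOD`, `PHASE_OFFSETS`, `foot_target`) and the single-curve simplification are my own illustration, not the project's actual code, which also handles separate swing and stance phases.

```python
import numpy as np

# Hypothetical stride parameters for illustration only.
STRIDE_PERIOD = 0.4          # seconds per full gait cycle
PHASE_OFFSETS = {            # trot-like phasing, as fractions of the period
    "FL": 0.0, "RR": 0.0,
    "FR": 0.5, "RL": 0.5,
}

def bezier(control_points, t):
    """Evaluate an n-point Bezier curve at t in [0, 1] (de Casteljau)."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def rot_z(point, yaw):
    """Rotate an xyz point about the z axis by `yaw` radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    x, y, z = point
    return np.array([c * x - s * y, s * x + c * y, z])

def foot_target(leg, control_points, ros_time, yaw_command):
    """Sample one leg's foot trajectory at the current ROS time."""
    # Offset the sample time by this leg's phase, then normalize into [0, 1).
    phase = ((ros_time / STRIDE_PERIOD) + PHASE_OFFSETS[leg]) % 1.0
    # Here the whole cycle is one Bezier curve for simplicity; the real
    # planner treats swing and stance differently.
    hip_to_foot = bezier(control_points, phase)
    # Rotate the trajectory about z so the robot can turn while walking.
    return rot_z(hip_to_foot, yaw_command)
```

In a real node, `ros_time` would come from something like `rospy.get_rostime().to_sec()`, and the resulting xyz point for each of the four legs would be published to the Teensy.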
Once the Teensy receives this information from the Jetson Nano via the rosserial package, it runs the hip-to-foot vectors through the leg IK model, which outputs a set of joint angles for each leg. The newly computed desired joint angles are then sent to the servo speed controller, which compares the relative distance each servo needs to move and scales the corresponding joint speeds to reduce unwanted body acceleration and linearize the motion. The servo speed controller runs continuously on the Teensy LLC (low-level controller); the ROS topic simply updates its setpoints.
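Here is a minimal sketch of the speed-scaling idea, assuming the goal is for every joint in a leg to finish its move at roughly the same time. The function name and interface are hypothetical; the actual logic runs as C++ firmware on the Teensy.

```python
import numpy as np

def scaled_joint_speeds(current_angles, target_angles, max_speed):
    """Scale each joint's speed by its share of the largest pending move.

    Joints with smaller moves get proportionally smaller speeds, so all
    joints reach their setpoints together, which linearizes the foot's
    motion and reduces unwanted body acceleration.
    """
    current = np.asarray(current_angles, dtype=float)
    target = np.asarray(target_angles, dtype=float)
    deltas = np.abs(target - current)
    largest = deltas.max()
    if largest == 0.0:
        # Nothing to move; command zero speed on every joint.
        return np.zeros_like(deltas)
    return max_speed * (deltas / largest)
```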
Due to the way rosserial works, all the code running on the Teensy must be encapsulated within a single node, so I treat the Teensy as one node altogether.