Suggestion regarding further steps for speed estimation #18

Open · Rak-r opened this issue Dec 26, 2023 · 70 comments

Rak-r commented Dec 26, 2023

I have set up the stack with an Intel RealSense D435, checked that the topics are subscribed to by the YOLO nodes, and I can view the camera RGB and depth feeds in rviz2.

Now that I have 3D data about the objects, I want to estimate the speed of the detected and tracked objects (say, a person's speed). The default YOLO trackers (ByteTrack/BoT-SORT) don't re-assign an ID (re-ID) to an object that leaves the frame, so I would like some suggestions.

  • Suppose an object is detected and tracked with track_id = 1, leaves the frame, and comes back; the tracker will then assign a new ID. Will this affect the speed estimation? As soon as the object reappears, the estimation starts fresh, but will it still report correct speed data for that object?
@mgonzs13 (Owner)

So, if I understand you correctly, you would like object ReID to improve the speed estimation, right?


Rak-r commented Dec 27, 2023

This scenario might be future work. My current application involves a pedestrian coming from one direction and the vehicle from another (their paths are perpendicular), so for now there may be no case where the pedestrian leaves the frame. What I currently want is the pedestrian's speed. I am looking at DeepSORT for re-ID tasks and for extracting the trajectories of the objects, but I am a bit confused about how to compute the speed or velocity.

@mgonzs13 (Owner)

If you have an RGB-D camera, you can use the 3D version from this repo (yolov8_3d.launch.py). You will get the 3D positions and tracking ID for each object. Finally, you can compute the speed using the 3D center and the timestamp.
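
For reference, that computation is just the Euclidean distance between consecutive 3D centers divided by the elapsed time. A minimal sketch (illustrative names; c_prev/c_cur are the geometry_msgs Point centers of two consecutive messages and t_prev/t_cur their header stamps in seconds):

import math

def speed_between(c_prev, c_cur, t_prev: float, t_cur: float) -> float:
    # Euclidean distance between the two 3D centers divided by the elapsed time
    dist = math.dist((c_prev.x, c_prev.y, c_prev.z), (c_cur.x, c_cur.y, c_cur.z))
    return dist / (t_cur - t_prev)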

@mgonzs13 (Owner)

Hey @Rak-r, have you tried the speed estimation from Ultralytics, https://docs.ultralytics.com/guides/speed-estimation/?


Rak-r commented Jan 22, 2024

I think that example draws a line region and outputs the speed when an object crosses it. However, in my case the robot is in motion, meaning the camera is in motion too, and I don't know how it would handle that scenario. So I am just using the 3D centre coordinates (which are real-world coordinates), calculating the Euclidean distance between the previous message and the current message, and for the speed, dividing by the elapsed time (delta_t, the difference between the old timestamp and the current timestamp). I tested it by simulating an actor in Gazebo; it outputs some values, but the actor's speed was very low, so it gave values of roughly 0.0085. I will test it with some videos and also have to do some averaging for smoother values.

Could you tell me whether this approach is OK? The motion of the camera is already handled, since the get_transform() and transform_3d() functions take care of it, so I don't have to compute the relative velocity here. Also, I am attaching the script I tried to write for this; I think I have to set self.old_time to get_clock().now() instead of zero and update it to the current time after computing delta_t. @mgonzs13
If you agree, could we make this an add-on to the repo after checking that it works?

import math

import rclpy
from rclpy.node import Node
from yolov8_msgs.msg import DetectionArray


class SpeedEstimate(Node):

    def __init__(self):
        super().__init__('speed_estimate_node')

        # Last known position/timestamp per tracked object ID
        self.previous_detections = {}

        self.detection_sub = self.create_subscription(
            DetectionArray,
            '/yolo/detections_3d',
            self.speed_calc,
            10)  # Change the topic and message type as needed

        self.current_position = []
        self.prev_position = []
        # TODO: initialize from the first message (or self.get_clock().now())
        # instead of zero, otherwise the first delta_t is huge
        self.old_time = 0.0

    def euclidean_dist(self, dist, detection_msg):

        if not detection_msg.detections:
            return None

        new_time = detection_msg.header.stamp.sec + detection_msg.header.stamp.nanosec / 1e9
        delta_t = new_time - self.old_time
        self.old_time = new_time

        # Speed per object: displacement magnitude divided by elapsed time
        speeds = [math.sqrt(dx**2 + dy**2 + dz**2) / delta_t for dx, dy, dz in dist]
        for i, speed in enumerate(speeds):
            print(f'Index {i+1} Speed {speed}')
        return speeds

    def speed_calc(self, detection_msg):
        # Check if there are detections
        if not detection_msg.detections:
            return None

        timestamp = detection_msg.header.stamp.sec + detection_msg.header.stamp.nanosec / 1e9

        for detection in detection_msg.detections:
            object_id = detection.id
            # Check if it's the first time seeing this object
            if object_id not in self.previous_detections:
                self.previous_detections[object_id] = {
                    'timestamp': timestamp,
                    'position': detection.bbox3d.center.position}
                return

            self.current_position.append(detection.bbox3d.center.position)
            self.prev_position.append(self.previous_detections[object_id]['position'])

        displacement = [(cur.x - prev.x, cur.y - prev.y, cur.z - prev.z)
                        for cur, prev in zip(self.current_position, self.prev_position)]

        self.euclidean_dist(dist=displacement, detection_msg=detection_msg)

        # # Update the previous detections
        # self.previous_detections[object_id] = {'timestamp': timestamp, 'position': detection.bbox3d.center.position}


def main(args=None):
    rclpy.init(args=args)
    speed_estimate_node = SpeedEstimate()
    rclpy.spin(speed_estimate_node)
    speed_estimate_node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()

@mgonzs13 (Owner)

Starting from your code, my idea was something like the following node. In this case, a speed field is added to the detection message so the 3D detected objects are republished with the computed speed. But I still have to test it.

import math
import rclpy
from rclpy.node import Node

from geometry_msgs.msg import Pose
from yolov8_msgs.msg import Detection
from yolov8_msgs.msg import DetectionArray


class SpeedEstimate(Node):

    def __init__(self) -> None:
        super().__init__("speed_estimate_node")

        self.previous_detections = {}

        self._pub = self.create_publisher(
            DetectionArray, "detections_speed", 10)
        self._sub = self.create_subscription(
            DetectionArray, "detections_3d", self.detections_cb, 10)

    def calculate_euclidian(self, old_pos: Pose, cur_pos: Pose) -> float:
        return math.sqrt(
            math.pow(old_pos.position.x - cur_pos.position.x, 2) +
            math.pow(old_pos.position.y - cur_pos.position.y, 2) +
            math.pow(old_pos.position.z - cur_pos.position.z, 2)
        )

    def detections_cb(self, detection_msg: DetectionArray) -> None:

        timestamp = detection_msg.header.stamp.sec + \
            detection_msg.header.stamp.nanosec / 1e9

        detection: Detection
        for detection in detection_msg.detections:

            if detection.id in self.previous_detections:

                old_pos = self.previous_detections[detection.id]["position"]
                cur_pos = detection.bbox3d.center

                dist = self.calculate_euclidian(old_pos, cur_pos)
                t = timestamp - \
                    self.previous_detections[detection.id]["timestamp"]

                detection.speed = dist / t

            self.previous_detections[detection.id] = {
                "timestamp": timestamp,
                "position": detection.bbox3d.center
            }

        self._pub.publish(detection_msg)


def main():
    rclpy.init()
    node = SpeedEstimate()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

Besides, for objects that disappear and reappear, we can clean the previous detections dict with a timeout, like this.

    def clean_detections(self) -> None:

        cur_time = self.get_clock().now().nanoseconds / 1e9

        # Iterate over a copy of the keys so entries can be deleted safely
        for detection_id in list(self.previous_detections):

            if cur_time - self.previous_detections[detection_id]["timestamp"] > 1:

                del self.previous_detections[detection_id]
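
For this cleanup to run periodically, a timer could be added in __init__, for example (a sketch; the 1-second period is an assumption matching the timeout above):

        self._clean_timer = self.create_timer(1.0, self.clean_detections)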


Rak-r commented Jan 22, 2024

That seems like a reasonable approach. If you could test it, see whether anything else should be considered, and suggest changes, that would be great. I will also try to test the scripts with videos on my end and add the averager for smoother values.


mgonzs13 commented Jan 24, 2024

I have created a new branch with the changes for speed estimation. Running the 3D launch includes speed estimation. You can visualize it using the /yolo/dbg_image topic.


Rak-r commented Jan 24, 2024

Nice, but have you done some testing with it? Is it giving reasonable outputs? @mgonzs13

@mgonzs13 (Owner)

Yes, I have launched it. I am getting erroneous speed values (in m/s) for immobile objects, maybe due to the bounding boxes, but I have to review the code. Btw, it would be interesting to have a ground truth to compare the results against.


Rak-r commented Jan 24, 2024

For immobile/static objects, it should show nearly zero m/s; that is at least one way to verify it for the static case. For moving objects, we can try it on some video stream, maybe not on the ROS side but in generic YOLO style. One thing we should be clear on: we don't have to perform a transform since we already have 3D coordinates in the world, right? Or do we need to take the FPS into account instead of the ROS 2 message timestamp? We might also store some messages (say 20) and average them for smoothing. What do you suggest? @mgonzs13


Rak-r commented Jan 24, 2024

I tried to test it in simulation only, and I have attached a GIF of it. I am also providing a topic echo of the detections_speed topic.

[screenshot and screencast GIF attached]

Also, testing with a simulated actor shows nearly zero. I am not fully sure whether that is because the actor's simulated speed is low and it loops from the start every time, or how we can test it with more precisely moving objects.

One thing: why is the frame_id in the topic set to base_link while the capture/input frame to YOLO is camera_link? Shouldn't all of these be in camera_link? Just wanted to clarify.

[Yolo_gz GIF attached]

@mgonzs13


mgonzs13 commented Jan 24, 2024

The /yolo/detections_3d and /yolo/detections_speed messages have a frame_id different from camera_link because the 3D data is transformed to another frame, for instance base_link, odom, or map. On the other hand, /yolo/detections messages should have camera_link as frame_id, or whatever frame the camera uses.

I think the simulated actor could be a good way to get ground truth for tests.


Rak-r commented Jan 24, 2024

Yeah, that's what I am trying, but somehow Gazebo doesn't provide a way to set the actor's speed, so we can't easily get ground truth that way.


mgonzs13 commented Jan 24, 2024

I think we can obtain it by using the gazebo_ros_state plugin. You can enable it by adding the following lines to your world and then using the /get_entity_state service to get the data.

<plugin name="gazebo_ros_state" filename="libgazebo_ros_state.so">
    <update_rate>1.0</update_rate>
</plugin>
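
A minimal client sketch for reading the actor's ground-truth twist through that service (it assumes gazebo_msgs from gazebo_ros_pkgs is available and that the entity is named "actor"; the names are illustrative, not from the repo):

import math

import rclpy
from rclpy.node import Node
from gazebo_msgs.srv import GetEntityState


class GroundTruth(Node):

    def __init__(self) -> None:
        super().__init__("ground_truth_node")
        # Service exposed by the gazebo_ros_state plugin
        self.cli = self.create_client(GetEntityState, "/get_entity_state")

    def actor_speed(self, name: str = "actor") -> float:
        self.cli.wait_for_service()
        req = GetEntityState.Request()
        req.name = name
        future = self.cli.call_async(req)
        rclpy.spin_until_future_complete(self, future)
        twist = future.result().state.twist
        # Ground-truth linear speed magnitude to compare against the estimate
        return math.sqrt(twist.linear.x**2 + twist.linear.y**2 + twist.linear.z**2)

# Usage (after rclpy.init()): GroundTruth().actor_speed("actor")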


Rak-r commented Jan 25, 2024

I have my stack built with the new Gazebo (https://gazebosim.org/docs/garden/actors), not with Gazebo 11. I am trying to find something similar in the new one, but for actors I couldn't find anything so far. There is an odometry publisher plugin that publishes odometry for any entity, but again, you can't apply that plugin to actors; it has to be a model. @mgonzs13

@mgonzs13 (Owner)

Are you using Ignition? Do you have a world with objects, actors and a robot to test this?


Rak-r commented Jan 25, 2024

Yes, I am using Gazebo but have the same setup for Ignition as well, with actors, a robot mounted with a simulated RGB-D camera, and objects; it's the same one I sent over in the two GIFs.
I have also tested it with a real Intel RealSense D435 and the speed values flicker a lot. I think we need to apply some averaging, say over the last 10 messages, or average the messages every 0.5 seconds?


Rak-r commented Feb 3, 2024

I am working on the speed stuff. However, I found that the callback in the speed estimation node sometimes runs twice, which doubles the timestamp difference and ultimately gives wrong values. The desired value should be around 0.037, but it gets to 0.066 or 0.100.

I have attached a screenshot with a debug flag: it shows the timestamp when entering the callback (the timestamp of detection_msg) with the flag number below; then, when there are detections, it shows the message timestamp along with the flag number, and then t, the timestamp difference. I don't know what is making the callback run twice. If I set the timestamp to normal ROS time instead of the message timestamp, it stays consistent at about 0.03, which matches the roughly 27 Hz at which the callback of detect_3d_node.py runs.

Also, in calculate_euclidian, shouldn't it be current position minus old position? The script does old - cur.
@mgonzs13

[Screenshot from 2024-02-03 18-02-46 attached]


mgonzs13 commented Feb 3, 2024

Where are you printing that information?

Regarding the Euclidean distance, the order of the positions does not matter; the old-cur distance is the same as the cur-old distance.


Rak-r commented Feb 3, 2024

I have added a logger to the speed estimation node. Yes, it will be the same when talking about magnitude, but for vectors I believe it should be cur_x - old_x, cur_y - old_y, and the same for z.

Anyway, the timing seems to be the issue here? @mgonzs13

def detections_cb(self, detection_msg: DetectionArray) -> None:

        timestamp = detection_msg.header.stamp.sec + \
            detection_msg.header.stamp.nanosec / 1e9

        self.flag +=1
        self.get_logger().info(f"Timestamp: {timestamp}")
        self.get_logger().info(f"Flag: {self.flag}")

        detection: Detection
        for detection in detection_msg.detections:

            if detection.id in self.previous_detections:

                self.get_logger().info(f"Message Time: {detection_msg.header.stamp}, Flag: {self.flag}")
                old_pos = self.previous_detections[detection.id]["position"]
                cur_pos = detection.bbox3d.center

                dist = self.calculate_euclidian(old_pos, cur_pos)
                t = timestamp - \
                    self.previous_detections[detection.id]["timestamp"]
                self.get_logger().info(f"Time Difference (t): {t}")
                self.get_logger().info('************************************')
                detection.speed = dist / t

            else:
                detection.speed = math.nan

            self.previous_detections[detection.id] = {
                "timestamp": timestamp,
                "position": detection.bbox3d.center
            }

        self._pub.publish(detection_msg)


mgonzs13 commented Feb 3, 2024

Take into account that you are printing it inside the loop. How many objects are detected? You may be getting the information printed twice because you are detecting two objects.


Rak-r commented Feb 3, 2024

I am only detecting the person class, so it should be only one object.

But how would that affect the timestamp difference? @mgonzs13 Could you try it on your end?

Also, could you suggest something here: I am computing velocities in x, y and z. For that I subtract the previous point from the current received point (see the sketch below):

  • dx = cur.position.x - old.position.x

  • Then velocity_x = dx / t, is this right?

  • Then take the average of the last 10 velocities for smoother values.

When I print the velocities I observe some sudden jumps. How could I handle and correct this?

One more thing, I think I already know this, but we don't have to do any transform or compute relative velocities, right? Because we have the 3D points of the objects relative to the camera or whatever frame_id we specify.
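
A minimal sketch of the per-axis velocity with a moving average over the last N samples (an illustrative helper, not code from the repo; the window of 10 is an assumption):

from collections import deque


class VelocitySmoother:

    def __init__(self, window: int = 10):
        # Keep only the last `window` velocity samples
        self.samples = deque(maxlen=window)

    def update(self, old_pos, cur_pos, dt: float):
        # Per-axis velocity between two consecutive detections
        v = ((cur_pos.x - old_pos.x) / dt,
             (cur_pos.y - old_pos.y) / dt,
             (cur_pos.z - old_pos.z) / dt)
        self.samples.append(v)
        # Moving average over the stored samples smooths out sudden jumps
        n = len(self.samples)
        return tuple(sum(axis) / n for axis in zip(*self.samples))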


mgonzs13 commented Feb 3, 2024

I'll try it. Yes, all 3D data is already transformed to the frame of the launch file.


mgonzs13 commented Feb 3, 2024

I have tried it in Gazebo with only one object. This is the result. As you can see, the callback is not run twice. So, as I mentioned before, two objects may be getting detected on your side, even if there is only one class.
[screenshot of the logger output attached]


Rak-r commented Feb 3, 2024

Sorry, but I am not getting it clearly; what do you mean by two objects? When I point the camera towards me, it starts publishing.

I have set it up like this:

if detection.id in self.previous_detections and detection.class_name == 'person': then do the rest of the speed stuff

  • Then inside this, find the time difference, as per the above code.


mgonzs13 commented Feb 3, 2024

I mean that two objects are detected. I have tested it with my RGB-D camera and I am getting the same results; in this case, only with me in the frame.
[screenshot of the logger output attached]


Rak-r commented Feb 3, 2024

I am detecting myself, as I wrote in that if condition, but according to what you just explained, there might be some other object in the background getting detected, so the timestamp of that object is printed along with mine?


mgonzs13 commented Feb 3, 2024

Yes, that's what I think could be happening when you run it. You can check it with rviz2 and the debug topic or echo the detection topics.

Btw, do you want the velocity for each axis?


Rak-r commented Feb 3, 2024

Yes. What I need is to predict the trajectory of the person; for that I want to use the per-axis velocities instead of the magnitude.

I am also thinking of implementing a Kalman filter for this.

Would you mind advising on this? Then I can file the PR and we can add it to this repo.


mgonzs13 commented Feb 4, 2024

I have just pushed a new Kalman filter speed estimation, using the cv2 version. I have used the navigation2_dynamic approach that you mentioned. It seems to work better, but I have to test it more.
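
For reference, a minimal constant-velocity cv2.KalmanFilter setup for a 3D position looks roughly like this (an illustrative sketch, not the code pushed to the branch; the noise covariances are assumptions):

import numpy as np
import cv2


def make_kf(dt: float) -> cv2.KalmanFilter:
    # State: [x, y, z, vx, vy, vz]; measurement: [x, y, z]
    kf = cv2.KalmanFilter(6, 3)
    kf.transitionMatrix = np.array([
        [1, 0, 0, dt, 0, 0],
        [0, 1, 0, 0, dt, 0],
        [0, 0, 1, 0, 0, dt],
        [0, 0, 0, 1, 0, 0],
        [0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.eye(3, 6, dtype=np.float32)
    kf.processNoiseCov = np.eye(6, dtype=np.float32) * 1e-2      # assumed
    kf.measurementNoiseCov = np.eye(3, dtype=np.float32) * 1e-1  # assumed
    return kf

# Per detection: kf.predict(), then kf.correct() with the measured 3D center
# as a float32 column vector; the velocity estimate is in kf.statePost[3:6].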


Rak-r commented Feb 4, 2024

OK, that's good to see. I was thinking of doing the same. What do you put in the UUID field of those messages? The object ID that we get? And what are the initial velocities and position for the KF? It seems to be a KF-based tracker, so can't we just remove the default YOLO tracker and use that one? However, after thinking about this, I am stuck on one question: what do we put in the UUID if we don't use the YOLO tracker?


mgonzs13 commented Feb 4, 2024

I am not using the UUID. Instead, I continue using my messages which include the velocities in a Twist msg. The initial position that I am using is the position of the object at its first detection.


Rak-r commented Feb 4, 2024

OK. If we want to use the tracker from that repo, I am just wondering what to put in the UUID field, since I think the tracker subscribes to an Obstacle message. I am wondering if this is the cause of the issue I am facing with the tracker. @mgonzs13

Have you tested the data after adding the cv2-based one? Are the velocities reliable, or are there sudden jumps when there is more than one object?


mgonzs13 commented Feb 4, 2024

I am not sure you will get better results with that repo's tracker, since ByteTrack also uses a Kalman filter.
I haven't tested the new version with the cv2 Kalman filter yet.


Rak-r commented Feb 4, 2024

OK. I will do some testing tomorrow and will update here. You haven't experienced any jumps from the KF before using the cv2 Kalman filter?


Rak-r commented Feb 11, 2024

Hi @mgonzs13, have you had some time to test the velocity stuff? I have been looking for ground truth from that Gazebo actor but didn't find much. You mentioned the actor state in Ignition; could you give a brief explanation of it?

@mgonzs13 (Owner)

I haven't had time to test it further. On the ground truth, I suggested using the gazebo_ros_state plugin, which can be enabled by adding some lines to the Gazebo world. Then, calling the /get_entity_state service returns the position and twist data for a given object in the simulation.


Rak-r commented Feb 11, 2024

But I think it's not the same for Ignition Gazebo?

@mgonzs13 (Owner)

I don't know if it is the same or has an equivalent. I have only used the plugin in Gazebo.


Rak-r commented Feb 11, 2024

Yeah, I got it. Let me know whenever you can test it a bit more; in the meantime I am doing it on my end. Could you suggest how to add the orientation? I was reading about converting Cartesian to polar coordinates, or is there something else that needs to be considered?

@mgonzs13 (Owner)

I'm not sure how to obtain the orientation. Ultralytics uses OBB, but it's 2D.


Rak-r commented Feb 12, 2024

If I want to use the 2D information (x and y, in meters, real-world coordinates) of a detected/tracked object and then compute the velocity, can I achieve that by simply ignoring z from the 3D center point? @mgonzs13

@mgonzs13 (Owner)

Yes, if you're only interested in calculating the velocity of an object based on its position in 2D space, you can ignore the depth coordinate.
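
In other words, the planar speed reduces to something like this (a sketch with illustrative names; old/cur are the 3D centers of two consecutive messages and dt the elapsed time):

import math

def planar_speed(old, cur, dt: float) -> float:
    # Ignore z and use only the x/y components of the 3D center
    return math.hypot(cur.x - old.x, cur.y - old.y) / dt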


Rak-r commented Mar 1, 2024

Hi @mgonzs13, I was just testing the speed stuff with me walking in front of the camera.

I think I am getting confused here, but could you tell me: if my camera is mounted on the robot and in the node I have set the target frame to camera_link and not base_link, will this report different values? I am not publishing a static TF between base_link and camera_link.

Basically, I was recording a bag file, so I wanted to verify whether the data would differ if no transform is available while the camera is actually mounted on the robot.

I think it shouldn't.

Also, looking at the speed values: when the camera is in motion there are noisy values below 0.1 m/s, which looks OK, but sometimes it touches 0.3 m/s for stationary objects. I also see that the detection is sometimes lost; I think this affects the data as well.

I also visualised the speed for the person only, both with the robot stationary and moving. It shows 0.5-0.8 m/s, with some noise reaching 1.2-1.3 m/s.

Could you suggest some more tricks to make it stable when the camera is in motion? Can we now average, say, the last 10 speed messages received?


mgonzs13 commented Mar 2, 2024

The target frame should not affect the velocity unless the robot is moving. In that case, you should use a global frame such as odom or map, as the target frame.


Rak-r commented Mar 2, 2024

Yes, the robot was moving; I think that's why it was affected. I just read about it.


Rak-r commented Apr 2, 2024

Hi @mgonzs13, I was testing the velocity estimation, but it gives some jumps for static objects. Did you observe a similar pattern? Could you advise on potential reasons or how to minimize this? I was looking into implementing an EKF but could not work out how to approach it and implement it properly.

If you could help with this towards getting more reasonable values, it would be highly appreciated.

Also, to stop the callback from effectively running twice because more than one object is detected, can the detector be limited to the person class so that it only reports messages for that class? Is that changed in yolov8.py, where the bounding box messages are generated?
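
One low-effort alternative is to filter inside the speed node callback instead of changing the detector (a sketch; it assumes the Detection message exposes the class name as class_name):

def person_only(detection_msg):
    # Keep only person detections before computing speeds
    return [d for d in detection_msg.detections if d.class_name == "person"]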


mgonzs13 commented Apr 3, 2024

Hey @Rak-r, I haven't had time to perform more tests on this. For the EKF, I suppose the KalmanFilter from cv2 can be used with some edits. Are you using my approach, or which code are you running?


Rak-r commented Apr 3, 2024

Hi @mgonzs13, I am using the same code we have discussed so far, from the speed estimation branch with the cv2 Kalman filter, which is enabled by default. If I do not use the Kalman filter, the result is noisier. If you find time to test it for noise, kindly let me know.


Rak-r commented May 2, 2024

Hi @mgonzs13, I am trying to implement an EKF to handle non-linearities when the person does not move in a straight-line path. I found this https://github.com/hvzzzz/Vehicle_Trajectory_Dataset/blob/interaction/scripts/tools/kalman_filters.py but I am getting a bit confused about how to integrate it into the code. Also, for the orientation: we have pose.x and pose.y from Bbox3D, so can we compute polar coordinates from these and feed them into the EKF update step to estimate it?

Another thing I wanted to ask: if I need the predicted future trajectories of the person (from the existing code, which assumes a constant-velocity model), right now I am extracting them by iterating over the kf object, say for 10 future poses, which will be nearly linear (a straight line) because the predict step does x_t = x_{t-1} + vx_{t-1} * delta_t:

for _ in range(10):
    pose_x = kf.statePost[0]
    pose_y = kf.statePost[1]
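
For a constant-velocity model, the future poses can also be rolled out directly from the current state, and the heading of motion can be taken from the velocity components (an illustrative sketch; it assumes a planar state vector [x, y, vx, vy] at the start of kf.statePost, which may differ from the actual filter layout):

import math

def rollout(kf, dt: float, steps: int = 10):
    # Current planar position and velocity from the filter's posterior state
    x, y, vx, vy = (float(v) for v in kf.statePost[:4, 0])

    # Constant-velocity rollout: x_{t+k} = x_t + k * dt * vx_t (same for y)
    poses = [(x + vx * k * dt, y + vy * k * dt) for k in range(1, steps + 1)]

    # Orientation of motion (yaw) estimated from the velocity vector
    yaw = math.atan2(vy, vx)
    return poses, yaw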


Rak-r commented May 7, 2024

Also, I often get this warning:

RuntimeWarning: invalid value encountered in subtract [detect_3d_node-3] z_diff = np.abs(roi - bb_center_z_coord)
@mgonzs13
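
For what it's worth, that warning usually indicates NaN values in the depth ROI (pixels with no depth reading); a guard like the following avoids it (an illustrative sketch, not the repo's actual code):

import numpy as np

def depth_diff(roi: np.ndarray, bb_center_z_coord: float) -> np.ndarray:
    # Drop non-finite depth values (NaN/inf) before the subtraction
    finite = roi[np.isfinite(roi)]
    return np.abs(finite - bb_center_z_coord)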


mgonzs13 commented May 8, 2024

Hi @Rak-r, obtaining the orientation would be a great feature. I have to check that code more deeply. About the warning, let me check what is happening.


Rak-r commented May 8, 2024

What about the future trajectories? Is that the right approach? @mgonzs13


mgonzs13 commented May 9, 2024

It seems reasonable to me though I have never done it.


Rak-r commented May 9, 2024

Well, assuming a constant-velocity model and that the person just follows a straight-line path, for this particular case we can take the current positions and represent them as a line in slope-intercept form. Let me know whenever you have time to test the EKF code.


Booooms commented May 9, 2024

Hello. I am also implementing an Extended Kalman Filter (EKF) starting from a Kalman filter, but I am facing some issues.
Is there any existing code that implements an EKF?


Rak-r commented May 9, 2024

Hey, we are discussing a similar approach. I have shared a link above that implements both a simple KF and an EKF. I was thinking of integrating the EKF into the speed estimation but am getting confused. @Booooms


Booooms commented May 10, 2024

I'm confused because there are many links; which one is it?
Are you talking about this link, @Rak-r?
https://github.com/hvzzzz/Vehicle_Trajectory_Dataset/blob/interaction/scripts/tools/kalman_filters.py

@Rak-r
Copy link
Author

Rak-r commented May 10, 2024

Yes, it is the only one in the discussion for the EKF.
