
pyrealsense2: frames didn't arrive within 5000 #6766

Closed
anguyen216 opened this issue Jul 6, 2020 · 9 comments

Comments

@anguyen216

Required Info
Camera Model: D435i
Firmware Version: (not specified)
Operating System & Version: Win 10
Kernel Version (Linux Only): N/A
Platform: PC
SDK Version: 2.x
Language: Python
Segment: (not specified)

Issue Description

I wrote a script to automatically record and stream depth frames from the camera. The recorded bag file can be replayed using the Intel RealSense Viewer; however, when I run a script to extract the frames from the bag file with Python, I keep getting the following error message:

Traceback (most recent call last):
  File "dummy.py", line 19, in <module>
    frames = pipe.wait_for_frames()
RuntimeError: Frame didn't arrive within 5000

The script correctly identifies the number of frames (based on how long the recorded video is), but keeps crashing halfway through with the error above. I have attached my code below. Please help!

# dummy get frame script
import numpy as np
import pyrealsense2 as rs
from skimage import io

fname = "007_custom.bag"

# configure the pipeline to read from the recorded bag file
cfg = rs.config()
cfg.enable_device_from_file(fname)
pipe = rs.pipeline()
profile = pipe.start(cfg)
# colorizer turns depth frames into color-mapped images
colorizer = rs.colorizer()
# disable real-time playback so frames are not skipped while processing
playback = profile.get_device().as_playback()
playback.set_real_time(False)
# estimate the frame count from the recording duration (assumes 30 fps)
t = playback.get_duration().seconds
numframe = t * 30
print(numframe)

for i in range(numframe):
    frames = pipe.wait_for_frames()  # raises RuntimeError if no frame arrives in time
    f = frames.get_depth_frame()
    tmp = np.asanyarray(colorizer.colorize(f).get_data())
    #f = frames.get_infrared_frame()
    #tmp = np.asanyarray(f.get_data())
    io.imsave("test_res/" + str(i) + ".jpg", tmp)
playback.pause()
pipe.stop()
@MartyG-RealSense
Collaborator

If you do not need to extract images from the bag in real time (i.e. it is done after the bag has been recorded), then it may be worth considering the librealsense SDK's convert tool.

https://github.com/IntelRealSense/librealsense/tree/master/tools/convert

If you prefer to do it with Python, the script in the link below may be a useful reference for your own project.

#1887 (comment)

@MartyG-RealSense
Collaborator

Hi @anguyen216 Do you still require assistance with this case, please? Thanks!

@anguyen216 changed the title from "pyrealsense2: frames did arrive within 5000" to "pyrealsense2: frames didn't arrive within 5000" on Jul 24, 2020
@anguyen216
Author

Yes, I still have trouble with this case. I have about 500 .bag files I need to extract frames from. I created a script to grab each .bag file and extract every 10th frame. Regardless of file size, I get the runtime error "Frame didn't arrive within 5000" every 2-5 files. Even if I extract frames from these files individually, I still get the error. The files replay perfectly well when loaded into the Intel RealSense Viewer.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 24, 2020

You are not the first person who can read bags in the RealSense Viewer but has problems extracting frames from a sequence of multiple bags. A recent example of such a case from a RealSense Python user, whose problem is different from yours, is in the link below:

#6887

If you are using a loop to read through the sequence of bag files, you may find it helpful to read how a RealSense team member implemented such a loop successfully.

#2693 (comment)

The code (C++, not Python) that the RealSense team member based their successful test program on is at the start of that case.

#2693
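
A rough Python sketch of such a loop (the recordings/ folder path and the process_bag helper below are made-up placeholders) might create a fresh config and pipeline per bag file and always stop it before moving on to the next file:

# Sketch: iterate over a folder of .bag files, one pipeline per file.
# "recordings/" and process_bag are illustrative placeholders only.
import glob
import pyrealsense2 as rs

def process_bag(pipe):
    # placeholder for per-file work, e.g. reading frames with
    # pipe.try_wait_for_frames() and saving images
    pass

for bag_path in sorted(glob.glob("recordings/*.bag")):
    cfg = rs.config()
    cfg.enable_device_from_file(bag_path, repeat_playback=False)
    pipe = rs.pipeline()
    profile = pipe.start(cfg)
    # disable real-time playback so slow processing does not drop frames
    profile.get_device().as_playback().set_real_time(False)
    try:
        process_bag(pipe)
    finally:
        pipe.stop()  # always stop before moving on to the next bag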

@anguyen216
Author

anguyen216 commented Jul 27, 2020

I've finally resolved my issue by creating a separate function to find the number of frames in a bag file. I've included a snippet of my code below; hopefully it will help other people in the future.

import numpy as np
import pyrealsense2 as rs
from skimage import io, img_as_ubyte
from skimage.transform import rescale


def get_num_frames(filename):
    cfg = rs.config()
    cfg.enable_device_from_file(filename)
    # setup pipeline for the bag file
    pipe = rs.pipeline()
    # start streaming from file
    profile = pipe.start(cfg)

    # setup playback
    playback = profile.get_device().as_playback()
    playback.set_real_time(False)
    # get the duration of the video
    t = playback.get_duration()
    # compute the number of frames (30fps setting)
    frame_counts = t.seconds * 30
    playback.pause()
    pipe.stop()
    return frame_counts


def extract_frames(filename, frame_type, output_folder, newsize=0):
    """
    Extract frames of a bag file based on the indicated frame_type
    Inputs:
    - filename: the path to the bag file
    - frame_type: int, indicating the frame type to be extracted
                  Allowed inputs: 1, 2, 3 indicating depth, infrared, and color
    - output_folder: the name of the folder to save the extracted frames
    - newsize: the scale of the output image (compared to the input frame) if
               resizing is desired; 0 keeps the original size
    Output: saves every 10th frame as a JPEG in output_folder
    """
    cfg = rs.config()
    cfg.enable_device_from_file(filename)
    # setup pipeline for the bag file
    pipe = rs.pipeline()
    # start streaming from file
    profile = pipe.start(cfg)
    # setup colorizer for depth maps
    colorizer = rs.colorizer()
    # disable real-time playback so frames are not skipped while processing
    playback = profile.get_device().as_playback()
    playback.set_real_time(False)

    frame_counts = get_num_frames(filename)
    for i in range(frame_counts):
        # try_wait_for_frames returns a success flag instead of raising
        # "Frame didn't arrive within 5000" when playback ends early
        frame_present, frames = pipe.try_wait_for_frames()
        if not frame_present:
            print("Error: not all frames were extracted")
            break
        if frame_type == 1:
            # colorize the depth frame into a viewable image
            tmp = np.asanyarray(colorizer.colorize(frames.get_depth_frame()).get_data())
        elif frame_type == 2:
            tmp = np.asanyarray(frames.get_infrared_frame().get_data())
        elif frame_type == 3:
            tmp = np.asanyarray(frames.get_color_frame().get_data())
        else:
            raise ValueError(str(frame_type) + " is not a valid frame type")
        if i % 10 == 0:
            if newsize:
                tmp = rescale(tmp, newsize, anti_aliasing=True, multichannel=True)
                tmp = img_as_ubyte(tmp)
            io.imsave(output_folder + str(i) + ".jpg", tmp)
    pipe.stop()
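
A minimal usage sketch of the functions above (the bag path and output folders are made-up examples, and the output folders must already exist):

# Hypothetical usage of get_num_frames / extract_frames defined above
if __name__ == "__main__":
    print(get_num_frames("007_custom.bag"))            # rough frame estimate
    extract_frames("007_custom.bag", frame_type=1,      # 1 = depth (colorized)
                   output_folder="test_res/")
    extract_frames("007_custom.bag", frame_type=3,      # 3 = color, scaled to 25%
                   output_folder="test_res_color/", newsize=0.25)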

@MartyG-RealSense
Collaborator

Great news about your success - thanks for the update and for kindly sharing your solution with the RealSense community!

@ChairManMeow-SY

The function get_num_frames will not work as expected. For most bag files the actual FPS is not 30, even though it was set to 30 when the file was recorded.
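
One possible workaround, sketched here under the assumption that reading the whole file once is acceptable, is to count framesets by playing the bag to the end with try_wait_for_frames rather than multiplying the duration by an assumed frame rate:

# Sketch: count the actual number of framesets in a bag by reading it to the
# end. Slower than duration * fps, but independent of the recorded frame rate.
import pyrealsense2 as rs

def count_framesets(filename):
    cfg = rs.config()
    cfg.enable_device_from_file(filename, repeat_playback=False)
    pipe = rs.pipeline()
    profile = pipe.start(cfg)
    profile.get_device().as_playback().set_real_time(False)
    count = 0
    while True:
        ok, _ = pipe.try_wait_for_frames()
        if not ok:       # end of file reached
            break
        count += 1
    pipe.stop()
    return count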

@anguyen216
Author

@OldOG that's correct. I later found out about this and opened a new issue on it. Even the RealSense library tool (rs-convert), which extracts frames in various formats (RGB, point cloud, and depth), has the frame-drop problem. The correct and probably most efficient way to extract frames is to work with the ROS bag directly. I put the issue/solution below for future reference:

#7067 (comment)
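
For reference, a rough sketch of reading depth images straight from the bag with the ROS bag Python API (the topic name here is an assumption and varies between recordings; list the actual topics in your file with bag.get_type_and_topic_info()):

# Sketch: pull depth images directly from a librealsense .bag via rosbag.
# DEPTH_TOPIC is an assumed name; inspect your file's topics first.
import numpy as np
import rosbag

DEPTH_TOPIC = "/device_0/sensor_0/Depth_0/image/data"  # assumed topic name

with rosbag.Bag("007_custom.bag") as bag:
    for i, (topic, msg, t) in enumerate(bag.read_messages(topics=[DEPTH_TOPIC])):
        # sensor_msgs/Image with 16-bit depth, assumed tightly packed rows
        depth = np.frombuffer(msg.data, dtype=np.uint16).reshape(msg.height, msg.width)
        print(i, t.to_sec(), depth.shape)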

@ChairManMeow-SY

@OldOG that's correct. I later found out about this and opened a new issue on it. Even the RealSense library tool (rs-convert), which extracts frames in various formats (RGB, point cloud, and depth), has the frame-drop problem. The correct and probably most efficient way to extract frames is to work with the ROS bag directly. I put the issue/solution below for future reference:

#7067 (comment)

So thank you for this info. The frame drops drive me mad... How ridiculous! I/O functions should be among the most basic APIs of a library.
