
Saving Video but the color is weird, why? #9541

Closed
pangpangshui opened this issue Aug 1, 2021 · 20 comments

Comments

@pangpangshui


Required Info
Camera Model: D435
Firmware Version: 05.12.13.50
Operating System & Version: Win 10
Kernel Version (Linux Only): Win 10
Platform: PC
SDK Version: 2.47.0
Language: C++
Segment: other

Issue Description

I want to save rs2::frame data to a video file in .avi format. Saving the video works, but the colors of the video are wrong: areas that should be yellow appear blue, and areas that should be blue appear yellow. The code looks like this:

......
rs2::pipeline pipe(ctx);
rs2::config cfg;
cfg.enable_device(serial);
cfg.enable_stream(RS2_STREAM_COLOR, width, height, RS2_FORMAT_RGB8, 30);
rs2::pipeline_profile profile = pipe.start(cfg);
......

cv::VideoWriter writer;
writer.open(m_video_path, CV_FOURCC('M', 'J', 'P', 'G'), 30.0, cv::Size(m_width, m_height));

auto vf = video_frame.as<rs2::video_frame>();
// Create an OpenCV matrix of size (w,h) from the color frame data
cv::Mat image(cv::Size(vf.get_width(), vf.get_height()),
              CV_8UC3, const_cast<void*>(vf.get_data()), cv::Mat::AUTO_STEP);
// release frame
video_frame.reset();
writer << image;

Why do I get the wrong colors in the Mat?

@MartyG-RealSense
Collaborator

Hi @pangpangshui If you are getting reversed colors in your image in OpenCV, it is typically because OpenCV uses the BGR color space by default instead of RGB, and you need to take that into account.
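
As a rough illustration, here is a minimal sketch (my own, not code from this issue) of avoiding the conversion entirely by asking the SDK for BGR data, which librealsense supports natively via RS2_FORMAT_BGR8; the 1280x720 @ 30 fps stream settings are just example values:

#include <librealsense2/rs.hpp>
#include <opencv2/opencv.hpp>

int main() {
    // Request BGR8 so the pixel layout already matches OpenCV's default channel order.
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_BGR8, 30);
    pipe.start(cfg);

    rs2::frameset fs = pipe.wait_for_frames();
    rs2::video_frame color = fs.get_color_frame();

    // Wrap the frame data without copying; it is already in BGR order.
    cv::Mat bgr(cv::Size(color.get_width(), color.get_height()),
                CV_8UC3, const_cast<void*>(color.get_data()), cv::Mat::AUTO_STEP);
    cv::imwrite("color.png", bgr);  // or feed it to cv::VideoWriter / cv::imshow
    return 0;
}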

@pangpangshui
Author

pangpangshui commented Aug 1, 2021

I changed the config from cfg.enable_stream(RS2_STREAM_COLOR, width, height, RS2_FORMAT_RGB8, 30); to cfg.enable_stream(RS2_STREAM_COLOR, width, height, RS2_FORMAT_BGR8, 30);, and now the colors of the video are OK.
But there is another problem: I can no longer preview the camera streams with example code like this:


......
cfg.enable_stream(RS2_STREAM_COLOR, width, height, RS2_FORMAT_RGB8, 30);
......

window app(1920, 1080, "realsense_cameras");

while (app) {
        std::vector<rs2::frame> new_frames;
        for (auto &&pipe : pipelines) {
            rs2::frameset fs = pipe.wait_for_frames();
            new_frames.emplace_back(fs.get_color_frame());
        }

        for (auto& frame : new_frames) {
            auto serial = rs2::sensor_from_frame(frame)->get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
            render_frames[frame.get_profile().unique_id()] = colorizers[serial].process(frame);
        }

        app.show(render_frames);  /* ------ **crash here** ------- */
    }

Is there any way to convert an RGB frame to a BGR frame? Or how can I save the video and preview the camera streams at the same time? @MartyG-RealSense

@MartyG-RealSense
Collaborator

In RealSense OpenCV programs, imshow is typically used to render the camera images, like in the RealSense SDK's OpenCV 'Getting Started' C++ example in the link below.

https://github.com/IntelRealSense/librealsense/blob/master/doc/stepbystep/getting_started_with_openCV.md#displaying-infrared-frame

There was a case of reversed colors in Python (I know you are using C++ but the concept should be similar) where a RealSense user used OpenCV's cvtColor instruction to perform conversion of the depth image.

#9304 (comment)
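
For reference, a minimal display-loop sketch along those lines (my own illustration, not the code from the linked example or from #9304) that keeps the RGB8 stream and converts with cvtColor before imshow:

#include <librealsense2/rs.hpp>
#include <opencv2/opencv.hpp>

int main() {
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 30);
    pipe.start(cfg);

    while (cv::waitKey(1) != 27) {  // quit on Esc
        rs2::frameset fs = pipe.wait_for_frames();
        rs2::video_frame color = fs.get_color_frame();

        // Wrap the RGB8 data (no copy), then convert into a separate BGR matrix.
        cv::Mat rgb(cv::Size(color.get_width(), color.get_height()),
                    CV_8UC3, const_cast<void*>(color.get_data()), cv::Mat::AUTO_STEP);
        cv::Mat bgr;
        cv::cvtColor(rgb, bgr, cv::COLOR_RGB2BGR);

        cv::imshow("RealSense color", bgr);
    }
    return 0;
}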

@MartyG-RealSense
Collaborator

A C++ script in the link below provides an example use of cvtColor:

#1919 (comment)

[screenshot of the cvtColor example code]

@pangpangshui
Author

pangpangshui commented Aug 2, 2021

I use cv::cvtColor(image, image, cv::COLOR_BGR2RGB); to convert the colors. Now the colors of the video are OK, but frames are being lost:

Thread one -> push frames:

rs2::frame video_frame;
while (true) {
    std::vector<rs2::frame> new_frames;
    // Collect the new frames from all the connected devices
    for (auto &&pipe : pipelines) {
        rs2::frameset fs = pipe.wait_for_frames();
        new_frames.emplace_back(fs.get_color_frame());
    }


    // Convert the newly-arrived frames to render-friendly format
    for (auto& frame : new_frames) {
        frame.keep();
        m_rec_video_map[serial]->pushColorFrame(frame);
        LOG("LOG_DEBUG", "serial: %s, frames: %d", serial, count[serial]);
        count[serial]++;
    }
}

Thread two -> save video:

rs2::frame video_frame;
while (!quit) {
    b_is_recording_video = true;
    popColorFrame(video_frame);
    if (!video_frame) {
        continue;
    }

    auto vf = video_frame.as<rs2::video_frame>();
    // Create an OpenCV matrix of size (w,h) from the color frame data
    cv::Mat image(cv::Size(vf.get_width(), vf.get_height()),
                  CV_8UC3, const_cast<void*>(vf.get_data()), cv::Mat::AUTO_STEP);
    video_frame.reset();  // release frame
    cv::cvtColor(image, image, cv::COLOR_BGR2RGB);
    writer << image;
    std::this_thread::sleep_for(std::chrono::milliseconds(5));
}

Once I use cv::cvtColor, thread one loses many frames. I'm confused, do you know why? @MartyG-RealSense

Moreover, is there any way to convert an rs2::frame to an AVFrame?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 2, 2021

It could simply be that the processing demands of color conversion are slowing the program down.

It looks as though you are using multiple-camera code in thread one. Is that correct please, and if so, how many cameras are being used simultaneously? Using multiple cameras that are all attached to the same computer can in itself impose a processing burden. For example, in a seminar in 2018 about use of multiple RealSense 400 Series cameras, Intel recommended a computer with an Intel i7 processor for using 4 cameras simultaneously.

The C++ multithreading programming discussion in the link below may be a useful reference to check against your own multi-thread approach.

#6865
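
One detail that may also help, offered as an assumption rather than something confirmed in this thread: converting into a separate, reusable cv::Mat instead of in place keeps cvtColor from writing back over the SDK's frame buffer and lets the wrapped matrix stay read-only. A minimal sketch, where video_frame is the popped frame as in the code above and writer is assumed to be an open cv::VideoWriter:

// Sketch: convert into a reusable output buffer rather than in place.
cv::Mat bgr;  // allocated by cvtColor on first use, then reused
auto vf = video_frame.as<rs2::video_frame>();
cv::Mat rgb(cv::Size(vf.get_width(), vf.get_height()),
            CV_8UC3, const_cast<void*>(vf.get_data()), cv::Mat::AUTO_STEP);
cv::cvtColor(rgb, bgr, cv::COLOR_RGB2BGR);  // RGB8 stream -> BGR for OpenCV
writer << bgr;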

@pangpangshui
Author

I use 4 cameras on a computer with an Intel(R) Core(TM) i9-10900K CPU @ 3.70GHz and 16GB of memory. Why would the color conversion affect the rs2::frames pushed from the cameras? If I remove cv::cvtColor(image, image, cv::COLOR_BGR2RGB);, getting rs2::frames from wait_for_frames works fine. I'm so confused.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 2, 2021

Does this C++ example provide any useful insights about using the color stream without cvtColor?

#4218 (comment)

@pangpangshui
Author

If I use imshow, it does not fit my custom UI. Do you know how to convert an rs2::frame to an AVFrame? I want to try using ffmpeg.

@MartyG-RealSense
Collaborator

I researched the subject of AVFrame or ffmpeg conversion again but found very little that was usable in librealsense, except for advice in the link below that was for Python instead of C++.

#8969 (comment)

I could not find anything else, unfortunately.
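
For what it is worth, here is a rough, untested sketch of pointing an AVFrame at the RGB8 data of an rs2::video_frame, using only public FFmpeg (libavutil) calls rather than anything RealSense-specific; the AVFrame does not own the pixels, so it must not outlive the RealSense frame:

#include <librealsense2/rs.hpp>
extern "C" {
#include <libavutil/frame.h>
#include <libavutil/pixfmt.h>
}

// Wrap the color frame's pixels in an AVFrame without copying.
// The caller frees the wrapper with av_frame_free(&av) while the rs2 frame is still alive.
AVFrame* wrap_rs_frame(const rs2::video_frame& vf)
{
    AVFrame* av = av_frame_alloc();
    if (!av) return nullptr;
    av->format      = AV_PIX_FMT_RGB24;  // matches RS2_FORMAT_RGB8
    av->width       = vf.get_width();
    av->height      = vf.get_height();
    av->data[0]     = const_cast<uint8_t*>(static_cast<const uint8_t*>(vf.get_data()));
    av->linesize[0] = vf.get_stride_in_bytes();  // respect the frame's row stride
    return av;
}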

@MartyG-RealSense
Collaborator

Hi @pangpangshui Bearing in mind the difficulty mentioned above of finding reference material about ffmpeg / AVFrame in relation to librealsense, do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.

@pangpangshui
Author

pangpangshui commented Aug 29, 2021

I have found a way to convert rs2::frame to AVFrame. But I found that if I record video at a resolution of 1280x720, frames are lost. The code looks like this:

window app(1920, 1080, "realsense_cameras");
    rs2::context                          ctx;        // Create librealsense context for managing devices

    record_time = 1;
    // Capture serial numbers before opening streaming
    for (auto&& dev : ctx.query_devices()) {
        std::string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        m_serials.push_back(serial);
        m_total_record_time.insert(std::make_pair(serial, 0));
        m_record_timer.insert(std::make_pair(serial, QTime(0, 0, 0, 0)));
        m_cameras_shown.insert(std::make_pair(serial, true)); // all cameras are shown by default at startup
    }
    LOG("LOG_DEBUG", "cameras width: %d, height: %d, record_time: %d", width, height, record_time);

    // Start a streaming pipe per each connected device
    for (auto&& serial : m_serials) {
        rs2::pipeline pipe(ctx);
        rs2::config cfg;
        cfg.enable_device(serial);
        cfg.enable_stream(RS2_STREAM_COLOR, width, height, RS2_FORMAT_RGB8, 30);
        rs2::pipeline_profile profile = pipe.start(cfg);
        pipelines.emplace_back(pipe);
        // Map from each device's serial number to a different colorizer
        colorizers[serial] = rs2::colorizer();

        multiThreadRecordVideo* rec_video = new multiThreadRecordVideo(serial);
        m_rec_video_map.insert(std::make_pair(serial, rec_video));
        freams_count.insert(std::make_pair(serial, 0));
    }

    while (app) {
        std::vector<rs2::frame> new_frames;
        // Collect the new frames from all the connected devices
        for (auto &&pipe : pipelines) {
            rs2::frameset fs;
            try {
                fs = pipe.wait_for_frames();
            } catch(std::exception &e) {
                LOG("LOG_DEBUG", "wait_for_frames, error: %s", e.what());
            } catch(...) {
                LOG("LOG_DEBUG", "wait_for_frames, error");
            }
            new_frames.emplace_back(fs.get_color_frame());
        }


        // Convert the newly-arrived frames to render-friendly format
        for (auto& frame : new_frames) {
            // Get the serial number of the current frame's device
            auto serial = rs2::sensor_from_frame(frame)->get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
            if (m_cameras_shown[serial] == false) {
                continue;
            }
            if (frame && (m_rec_video_map[serial]->getVideoStage() == CMD_REVORD_VIDEO_STAGE_ING
                          || b_start_capture_png_flag)) {
                if ((m_total_record_time[serial] + m_record_timer[serial].elapsed()
                     <= (60 * 1000))) {
                    // keep frame in case of release by rs2::frame queue
                    frame.keep();
                    freams_count[serial]++;
                    LOG("LOG_DEBUG", "serial: %s, %d, time: %llu, %d", serial, freams_count[serial],
                        m_total_record_time[serial], m_record_timer[serial].elapsed());
                    m_rec_video_map[serial]->pushColorFrame(frame);
                } else {
                    m_total_record_time[serial] = 0;
                }

            }
            // Apply the colorizer of the matching device and store the colorized frame
            render_frames[frame.get_profile().unique_id()] = colorizers[serial].process(frame);
        }

        // Present all the collected frames with openGl mosaic
        app.show(render_frames);
    }

There are four cameras. The log shows only about 990 frames in one minute, but the camera is set to 40 frames/s. Why?
@MartyG-RealSense

@MartyG-RealSense
Collaborator

Hi @pangpangshui If this is a multiple camera application (4 cameras controlled by the same program on the same computer) then ideally poll_for_frames() should be used instead of wait_for_frames(). The difference between the two is explained in #2422 (comment)
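
A sketch of that suggestion applied to the loop above (pipelines and the per-frame processing are assumed to be the same as in the earlier code); polling returns immediately, and sleeping briefly when nothing arrived keeps the loop from spinning the CPU:

while (app) {
    std::vector<rs2::frame> new_frames;
    for (auto&& pipe : pipelines) {
        rs2::frameset fs;
        if (pipe.poll_for_frames(&fs)) {  // non-blocking; true only if a new frameset is ready
            new_frames.emplace_back(fs.get_color_frame());
        }
    }
    if (new_frames.empty()) {
        // No camera had anything new; yield the CPU for a moment.
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
        continue;
    }
    // ... process, record and render new_frames as before ...
}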

@pangpangshui
Author

pangpangshui commented Aug 30, 2021

I changed the code like this:

      for (auto &&pipe : pipelines) {
            rs2::frameset fs;
            try {
                pipe.poll_for_frames(&fs);
            } catch(std::exception &e) {
                LOG("LOG_DEBUG", "poll_for_frames, error: %s", e.what());
            } catch(...) {
                LOG("LOG_DEBUG", "poll_for_frames, error");
            }
            for (const rs2::frame& f : fs) {
                new_frames.emplace_back(f);
            }
        }

It does not work; it still loses frames. The log shows only about 990 frames in one minute. @MartyG-RealSense

@MartyG-RealSense
Collaborator

How does it behave if only one camera is attached?

@pangpangshui
Author

pangpangshui commented Sep 20, 2021

Is there any way to record video from 4 cameras at 720P or 1080P without losing frames? I use cv::VideoWriter to record video. @MartyG-RealSense Right now I am recording video from 4 cameras at 720P, CPU usage is 100%, and I lose three frames in one minute.

@MartyG-RealSense
Collaborator

Using multi-threading may be the best way to achieve efficient multiple camera recording. There is a C++ case about doing so at #3802
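
A possible structure (my sketch, not the code from #3802): one recording thread per camera, each owning its own cv::VideoWriter and fed through its own rs2::frame_queue, so the capture loop only enqueues frames and never blocks on conversion or encoding. The serial list, output file names, resolution and frame rate here are assumptions:

#include <librealsense2/rs.hpp>
#include <opencv2/opencv.hpp>
#include <atomic>
#include <chrono>
#include <map>
#include <string>
#include <thread>
#include <vector>

std::map<std::string, rs2::frame_queue> queues;  // one queue per camera serial
std::vector<std::thread> writer_threads;
std::atomic<bool> quit{false};

void start_recorders(const std::vector<std::string>& serials)
{
    // Build every queue before any thread starts reading the map.
    for (const auto& serial : serials)
        queues.emplace(serial, rs2::frame_queue(30));

    for (const auto& serial : serials) {
        writer_threads.emplace_back([serial]() {
            cv::VideoWriter writer(serial + ".avi",
                                   cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                                   30.0, cv::Size(1280, 720));
            rs2::frame f;
            cv::Mat bgr;
            while (!quit) {
                if (!queues.at(serial).poll_for_frame(&f)) {
                    std::this_thread::sleep_for(std::chrono::milliseconds(1));
                    continue;
                }
                auto vf = f.as<rs2::video_frame>();
                cv::Mat rgb(cv::Size(vf.get_width(), vf.get_height()), CV_8UC3,
                            const_cast<void*>(vf.get_data()), cv::Mat::AUTO_STEP);
                cv::cvtColor(rgb, bgr, cv::COLOR_RGB2BGR);  // RGB8 stream -> BGR
                writer << bgr;
            }
        });
    }
}

// In the capture loop, per camera and per frame:
//   queues.at(serial).enqueue(fs.get_color_frame());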

@pangpangshui
Author

I use four threads to record videos and still lose frames. @MartyG-RealSense

@MartyG-RealSense
Collaborator

Are you using poll_for_frames() in your code? If you are then you need to control when the CPU is put to sleep and for how long, otherwise the CPU can max out at 100% usage.
