
Cybervision documentation

This wiki explains how Cybervision works and how to use it.

[Image: reconstructed mesh]

[Image: reconstructed photo]

When two images are taken from slightly different positions, the parallax effect causes objects to shift by different distances, depending on their position relative to the camera.

Recovering 3D shape from these shifts is called photogrammetry, and Cybervision is a domain-specific photogrammetry tool.
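For intuition, here is a minimal sketch of the textbook height-from-disparity relation for a symmetrically tilted stereo pair under parallel projection. It is only an illustration of why parallax encodes height; it is not necessarily the formula Cybervision implements internally.

```rust
/// Height from disparity for a stereo pair tilted by +a and -a degrees,
/// assuming parallel projection (a textbook relation, shown here as an illustration only).
fn height_from_disparity(x_left: f64, x_right: f64, tilt_deg: f64) -> f64 {
    // A point at lateral position x and height z projects to
    // x*cos(a) + z*sin(a) in one image and x*cos(a) - z*sin(a) in the other.
    // Subtracting the two projections eliminates x and leaves only z.
    let a = tilt_deg.to_radians();
    (x_left - x_right) / (2.0 * a.sin())
}

fn main() {
    // A disparity of 3.2 px between images tilted by +/-4.5 degrees
    // corresponds to a height of roughly 20.4 px-equivalent units.
    let z = height_from_disparity(101.6, 98.4, 4.5);
    println!("height = {z:.1}");
}
```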

Sample showcase contains a gallery of 3D images generated by Cybervision.

How it works explains the steps executed to reconstruct an image.

Ideas for future improvement collects possible enhancements.

Domain model

Cybervision was originally built to process images from a Scanning Electron Microscope and requires the following:

  • High-contrast images with a lot of detail (corners) and a non-repetitive texture;

  • Parallel projection;

    • Typical cameras use perspective projection: in addition to shifting, objects also change size depending on their distance from the camera;
    • But if the distance to the object is large enough, perspective projection approximates parallel projection (see the sketch after this list).
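The following small sketch compares a pinhole-style perspective projection with a parallel projection to show why the projection of a distant object barely depends on depth. The functions and numbers are illustrative only and are not part of Cybervision.

```rust
/// Perspective projection: image position depends on depth (u = f * x / z).
fn perspective(f: f64, x: f64, z: f64) -> f64 {
    f * x / z
}

/// Parallel (orthographic) projection: image position is a scaled x, independent of depth.
fn parallel(scale: f64, x: f64) -> f64 {
    scale * x
}

fn main() {
    let (f, x) = (1.0, 1.0);
    for z in [10.0, 100.0, 10_000.0] {
        // Two points 1 unit apart in depth: under perspective their projections differ,
        // but the difference shrinks as the camera moves further away.
        let gap = perspective(f, x, z) - perspective(f, x, z + 1.0);
        // Under parallel projection the two points project to exactly the same position.
        let parallel_gap = parallel(f / z, x) - parallel(f / z, x);
        println!("z = {z:>8}: perspective gap = {gap:.6}, parallel gap = {parallel_gap}");
    }
}
```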

It is possible (although not tested) that Cybervision could work with aerial photography.

In the latest release, Cybervision can also process photos taken with a regular camera (e.g. iPhone 11). To use this feature, make sure to switch projection to perspective.

Structure from motion (reconstruction of an object from multiple photos) is experimental, and results are not great.

Test images

Some demo (test) source images can be downloaded from here. These images can be used as input data for Cybervision.

Image filenames contain the angle and series:

  • 01+4.5 was collected with the tilt angle +4.5 degrees;
  • 01-4.6 was collected with the tilt angle -4.6 degrees.

Some data is stored in the TIFF metadata: a more precise angle, scale and other details.
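For reference, here is a small sketch of how the filename convention above could be parsed. It is hypothetical: the function and field names are not part of Cybervision itself.

```rust
/// Split a name like "01+4.5" into its series number and tilt angle (a hypothetical helper).
fn parse_name(name: &str) -> Option<(u32, f64)> {
    // The series is everything before the sign of the angle; the rest is the angle itself.
    let sign_pos = name.find(|c: char| c == '+' || c == '-')?;
    let series = name[..sign_pos].parse().ok()?;
    let tilt_deg = name[sign_pos..].parse().ok()?;
    Some((series, tilt_deg))
}

fn main() {
    assert_eq!(parse_name("01+4.5"), Some((1, 4.5)));
    assert_eq!(parse_name("01-4.6"), Some((1, -4.6)));
    // Images can only be paired when the series numbers match.
    let (a, b) = (parse_name("01+4.5").unwrap(), parse_name("02-4.6").unwrap());
    assert_ne!(a.0, b.0, "01 cannot be matched with 02");
}
```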

Only images with a matching number can be matched (e.g. 01 cannot be matched with 02).
