A step-by-step Metashape tutorial using the GUI and the Python API for photogrammetry (point clouds, DEM, mesh, texture and orthomosaic) from aerial images.
The tutorial was prepared by Viet Nguyen (Earth Observation and Geoinformation Science Lab - University of Greifswald) based on:
- The Geo-SfM course from The University Centre in Svalbard
- The Structure From Motion tutorial from USGS
- The Drone RGB and Multispectral Imagery Processing Protocol from The University of Queensland.
- And the work from Derek Young and Alex Mandel.
(https://www.inrae.fr/en/news/remote-sensing-dossier)
- Add photos
- Estimate image quality
- Reflectance calibration
- Set primary channel
- Image projection
- Align photos
- Add ground control points
- Improve alignment
- 8.1 Optimize Camera Alignment
- 8.2 Filter uncertain points
- 8.3 Filter by Projection accuracy
- 8.4 Filter by Reprojection Error
- Dense point cloud
- Mesh model
- Orthomosaic
- DEM
- Texture
The tutorial not only guides you through the main photogrammetry steps in the Metashape GUI, but also provides Python scripts for those steps to use in the Metashape Python console. The scripts were designed for Metashape version 1.8.4.
It is recommended to use the standardised project structure (or something similar) throughout all future projects.
{project_directory} (The folder with all files related to this project)
| overview_img.{ext}
| description.txt
├───config (where you place your configuration files)
{cfg_0001}.yml
{cfg_0002}.yml
...
├───data (where you unzipped the files to)
├───────f0001 (The folder with images acquired on the first flight)
| {img_0001}.{ext}
| {img_0002}.{ext}
| ...
├───────f0002 (The folder with images acquired on the second flight)
| {img_0001}.{ext}
| {img_0002}.{ext}
| ...
| ...
├───────f9999 (The folder with images acquired on the last flight)
| {img_0001}.{ext}
| {img_0002}.{ext}
| ...
├───────gcps
| (...)
├───────GNSS
| (...)
├───export (where you place export models and files)
...
└───metashape (This is where you save your Agisoft Metashape projects to)
{metashape_project_name}.psx
.{metashape_project_name}.files
{metashape_project_name}_processing_report.pdf
(optionally: {metashape_project_name}.log)
The standardised project structures are important for automated processing and archiving.
Tip
Below are step-by-step guidance in the Metashape GUI and Python scripts for those steps. For a fully automated workflow, use the GUI for steps 1 to 7 (adding GCPs); the remaining steps can be run with the all-in-one workflow code here
It is helpful to include the subfolder name in the photo file name in Metashape (to differentiate which flight each photo comes from). Below is the code for the Python console to rename all photos to reflect the subfolder they are in.
```python
import Metashape
from pathlib import Path

doc = Metashape.app.document  # access the current project document
chunk = doc.chunk  # access the active chunk

for c in chunk.cameras:  # loop over all cameras in the active chunk
    cp = Path(c.photo.path)  # get the path of each photo
    c.label = str(cp.parent.name) + '/' + cp.name  # rename the camera label to include the photo's parent directory
```
Images from MicaSense RedEdge, MicaSense Altum, Parrot Sequoia and DJI Phantom 4 Multispectral can be loaded at once for all bands. Open the Workflow menu and choose the Add Photos option. Select all images, including the reflectance calibration images, and click the OK button. In the Add Photos dialog, choose the Multi-camera system option:
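The same step can be scripted in the Python console. The sketch below is a minimal example assuming the 1.8 API: `Metashape.MultiplaneLayout` tells Metashape to group the band images of each capture into one multi-camera system; the folder path and `*.tif` extension are placeholders for your own data.

```python
from pathlib import Path

def add_multiband_photos(chunk, image_folder):
    """Add all TIFF images in `image_folder` as a multi-camera system.

    `image_folder` is a placeholder; Metashape groups the bands of
    each capture automatically with the MultiplaneLayout option.
    """
    import Metashape  # imported lazily; requires a licensed Metashape install
    images = [str(p) for p in sorted(Path(image_folder).glob("*.tif"))]
    chunk.addPhotos(images, layout=Metashape.MultiplaneLayout)
    return len(images)
```

In the console you would call it as `add_multiband_photos(Metashape.app.document.chunk, "data/f0001")`.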
Metashape Pro can automatically sort the calibration images into a special camera folder in the Workspace pane if the image metadata indicates that the images are intended for calibration. The images will be disabled automatically (so they are not used in actual processing).
This is done by right-clicking any of the photos in a chunk, selecting Estimate Image Quality…, and selecting all photos to be analysed, as shown in the figure below.
Open the Photos pane by clicking Photos in the View menu. Then, make sure to view the details rather than icons to check the Quality for each image.
Tip
Then, filter on quality and Disable all selected cameras that do not meet the standard. Agisoft recommends a Quality of at least 0.5.
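The quality filter described above can also be scripted. This is a sketch assuming the 1.8 API, where `chunk.analyzePhotos()` computes the quality estimate and stores it under each camera's `meta["Image/Quality"]` key; the threshold logic is kept in a pure helper so it can be tested without Metashape.

```python
def select_low_quality(qualities, threshold=0.5):
    """Return the labels whose quality estimate falls below the threshold."""
    return [label for label, q in qualities.items() if q < threshold]

def disable_low_quality(chunk, threshold=0.5):
    """Estimate image quality and disable cameras below `threshold`.

    Agisoft recommends a quality of at least 0.5."""
    chunk.analyzePhotos(chunk.cameras)  # same as Estimate Image Quality... in the GUI
    qualities = {c.label: float(c.meta["Image/Quality"]) for c in chunk.cameras}
    bad = set(select_low_quality(qualities, threshold))
    for camera in chunk.cameras:
        if camera.label in bad:
            camera.enabled = False  # the photo stays in the project but is skipped in processing
    return len(bad)
```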
Open the Tools menu and choose the Calibrate Reflectance option. Press the Locate Panels button:
As a result, the images with the panel will be moved to a separate folder and masks will be applied to cover everything in the images except the panel itself. If the panels are not located automatically, use the manual approach.
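In the Python console, the two dialog actions above map onto two chunk methods in the 1.8 API. Whether `use_sun_sensor` applies depends on your camera (it needs a downwelling light sensor), so treat that flag as an assumption:

```python
def calibrate_reflectance(chunk):
    """Locate calibration panels and calibrate reflectance (1.8 API sketch)."""
    chunk.locateReflectancePanels()  # same as the Locate Panels button
    # use_sun_sensor assumes a DLS-equipped camera such as MicaSense RedEdge/Altum
    chunk.calibrateReflectance(use_reflectance_panels=True, use_sun_sensor=True)
```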
For multispectral imagery, the main processing steps (e.g., Align Photos) are performed on the primary channel. Change the primary channel from the default Blue band to the NIR band, which is more detailed and sharper.
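A possible way to script this, assuming the 1.8 API where each band is a sensor with a `layer_index` and the chunk exposes a `primary_channel` attribute; matching the band by the "NIR" keyword in the sensor label is an assumption based on common MicaSense band naming:

```python
def set_primary_channel_to_nir(chunk, band_keyword="NIR"):
    """Set the primary channel to the first sensor whose label matches `band_keyword`."""
    for sensor in chunk.sensors:
        # sensor labels vary by camera; "NIR" is an assumption for MicaSense naming
        if band_keyword.lower() in sensor.label.lower():
            chunk.primary_channel = sensor.layer_index
            return sensor.label
    return None  # no matching band found; primary channel left unchanged
```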
Go to Convert in the Reference panel and select the desired CRS for the project.
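The Convert action can be reproduced in the console roughly as follows; this is a sketch, and the EPSG code is a placeholder you must replace with your project's CRS:

```python
def convert_reference(chunk, epsg="EPSG::32633"):
    """Re-project camera reference coordinates into a target CRS (placeholder EPSG)."""
    import Metashape  # imported lazily; requires a licensed Metashape install
    target = Metashape.CoordinateSystem(epsg)
    for camera in chunk.cameras:
        if camera.reference.location:  # skip cameras without GNSS coordinates
            camera.reference.location = Metashape.CoordinateSystem.transform(
                camera.reference.location, chunk.crs, target)
    chunk.crs = target  # set the chunk CRS after converting the coordinates
```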
Below are the recommended settings for photo alignment. The code to use in the Python console can be found here.
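As a sketch of what that alignment code can look like in the 1.8 API (the keypoint and tie point limits below are common defaults, not values prescribed by this tutorial):

```python
def align_photos(chunk):
    """Match and align photos; downscale=1 corresponds to High accuracy in the GUI."""
    chunk.matchPhotos(downscale=1,
                      generic_preselection=True,
                      reference_preselection=True,  # use camera GNSS positions to pre-select pairs
                      keypoint_limit=40000,
                      tiepoint_limit=4000)
    chunk.alignCameras()
```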
Go to Import Reference in the Reference panel and load the CSV file.
Follow this tutorial to set the GCPs.
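Loading the GCP file can also be done from the console. The sketch below assumes a comma-separated file with columns label, X, Y, Z (the `"nxyz"` column string); adjust the column string and delimiter to match your own CSV layout:

```python
def import_gcp_csv(chunk, csv_path):
    """Import a GCP reference CSV (label, x, y, z) and create markers (1.8 API sketch)."""
    import Metashape  # imported lazily; requires a licensed Metashape install
    chunk.importReference(csv_path,
                          format=Metashape.ReferenceFormatCSV,
                          columns="nxyz",       # n = label, then x, y, z
                          delimiter=",",
                          create_markers=True)  # create a marker per GCP row
```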
Personal experience
GCPs should be distributed over the study area in sufficient numbers to correct the position of the orthomosaic. With only a few GCPs, the distortion may not be significantly visible in the orthomosaic in terms of X and Y values, but it can be clearly seen in the DEM (Z values).
The Z values from geotagged photos are already relatively good. So one solution is to use only the X and Y values from the GCPs to correct the position of the orthomosaic and DEM, while taking the Z values from the photos for the DEM. To do that, set the accuracy of the GCPs to 0.005/10 (5 mm for X, Y and 10 m for Z), then go to Tools -> Camera Calibration -> GPS offset and set the camera accuracy to X: 0.05, Y: 0.05, Z: 0.02. This way the program will prioritize the X, Y values of the GCPs and the Z values of the photos themselves for the orthomosaic and DEM.
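Approximately the same effect can be obtained from the console with the chunk-wide accuracy attributes of the 1.8 API. Note this is an approximation of the per-sensor GPS offset dialog described above, not an exact equivalent:

```python
def prioritize_gcp_xy_and_camera_z(chunk):
    """Weight GCPs strongly in X/Y and cameras strongly in Z (sketch, 1.8 API)."""
    import Metashape  # imported lazily; requires a licensed Metashape install
    # 5 mm horizontal / 10 m vertical: GCP Z values are effectively ignored
    chunk.marker_location_accuracy = Metashape.Vector([0.005, 0.005, 10])
    # 5 cm horizontal / 2 cm vertical: camera Z values are trusted for the DEM
    chunk.camera_location_accuracy = Metashape.Vector([0.05, 0.05, 0.02])
```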
The following optimizations improve the quality of the sparse point cloud: Optimize Camera Alignment, filtering uncertain points, filtering by projection accuracy, and filtering by reprojection error. These optimizations can be automated in the Python console using this code.
Note
Save the project and back up your data before any destructive actions.
This is done by selecting Optimize Cameras from the Tools menu.
Change the model view to show the Point Cloud Variance. Lower values (=blue) are generally better and more constrained.
A good value to use for the uncertainty level is 15, though make sure you do not remove all points by doing so! A rule of thumb is to select no more than 20% of all points, and then delete these by pressing the Delete key on the keyboard.
Tip
After filtering points, it is important to optimize the alignment once more. Do so by revisiting the Optimize Camera Alignment step.
This time, select the points based on their Projection accuracy, aiming for a final Projection accuracy of 2.
Tip
After filtering points, it is important to optimize the alignment once more. Do so by revisiting the Optimize Camera Alignment step.
A good value to use here is 0.3, though make sure you do not remove all points by doing so! As a rule of thumb, this final selection of points should leave you with approx. 10% of the points you started off with.
Tip
After filtering points, it is important to optimize the alignment once more. Do so by revisiting the Optimize Camera Alignment step.
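The whole gradual-selection sequence above (uncertainty 15, projection accuracy 2, reprojection error 0.3, re-optimizing cameras after each pass) can be sketched as one console script. This assumes the 1.8 API, where the sparse cloud is `chunk.point_cloud` and the criteria live on `Metashape.PointCloud.Filter` (renamed to `TiePoints` in 2.x); the 50% safety cap is this tutorial's rule of thumb, not an API feature:

```python
def gradual_selection(chunk, criterion, threshold, max_removal=0.5):
    """Select tie points beyond `threshold` for one criterion, remove them, re-optimize."""
    import Metashape  # imported lazily; requires a licensed Metashape install
    points = chunk.point_cloud.points
    total = len(points)
    f = Metashape.PointCloud.Filter()
    f.init(chunk, criterion=criterion)   # same as Model > Gradual Selection... in the GUI
    f.selectPoints(threshold)
    selected = sum(1 for p in points if p.selected)
    if selected < max_removal * total:   # safety cap: never drop too many points at once
        chunk.point_cloud.removeSelectedPoints()
        chunk.optimizeCameras()          # re-optimize alignment after every filter pass
    return selected, total

def run_all_filters(chunk):
    """Run the three gradual-selection passes with the thresholds from this tutorial."""
    import Metashape
    F = Metashape.PointCloud.Filter
    gradual_selection(chunk, F.ReconstructionUncertainty, 15)
    gradual_selection(chunk, F.ProjectionAccuracy, 2)
    gradual_selection(chunk, F.ReprojectionError, 0.3)
```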
Select Build Point Cloud from the Workflow menu. Below are the recommended settings; the code to use in the Python API can be found here.
Visualise the point confidence by clicking the gray triangle next to the nine-dotted icon and selecting Point Cloud Confidence. The color coding is red = bad, blue = good.
Open Tools/Point Cloud in the menu and click on Filter by confidence… The dialog that pops up allows you to set minimal and maximal confidences. For example, try setting Min:50 and Max:255. After looking at the difference, reset the filter by clicking on Reset filter within the Tools/Point Cloud menu.
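A sketch of these steps for the 1.8 API, where this product is still called the dense cloud (`buildDenseCloud`; the 2.x GUI renamed it Point Cloud). `downscale=4` corresponds to Medium quality and is an assumption you can tighten:

```python
def build_dense_cloud(chunk, min_confidence=50, max_confidence=255):
    """Build depth maps and a dense cloud with per-point confidence, then filter it."""
    import Metashape  # imported lazily; requires a licensed Metashape install
    # downscale=4 is Medium quality; MildFiltering preserves small details
    chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)
    chunk.buildDenseCloud(point_confidence=True)  # store confidence per point
    # equivalent of Filter by confidence... in the GUI; undo with resetFilters()
    chunk.dense_cloud.setConfidenceFilter(min_confidence, max_confidence)
```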
Selecting Build Mesh from the Workflow menu, you will be able to choose either Dense cloud or Depth maps as the source. The code for Build Mesh to use in the Python API can be found here.
Tip
Depth maps may lead to better results when dealing with a large number of minor details; otherwise, the Dense cloud can be used as the source. If you decide to use depth maps as the source data, then make sure to enable Reuse depth maps to save computation time!
Sometimes your mesh has tiny parts that are not connected to the main model. These can be removed with the Connected component filter.
Select Tools -> Mesh -> Decimate mesh. Enter an appropriate value, for example, to halve the number of faces in the original mesh.
Select Tools -> Mesh -> Smooth mesh. The strength of smoothing depends on the complexity of the canopy. Three values are recommended for low, medium and high smoothing: 50, 100 and 200, respectively.
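The build, component-filter, decimate and smooth steps above can be combined into one sketch for the 1.8 API. The component size threshold of 1000 faces and the `HeightField` surface type (suited to aerial nadir surveys) are assumptions to adjust for your data:

```python
def build_and_clean_mesh(chunk, smooth_strength=100):
    """Build a mesh from depth maps, then remove fragments, decimate and smooth it."""
    import Metashape  # imported lazily; requires a licensed Metashape install
    chunk.buildModel(source_data=Metashape.DepthMapsData,
                     surface_type=Metashape.HeightField,  # assumption: nadir aerial survey
                     face_count=Metashape.MediumFaceCount)
    # Connected component filter: drop parts smaller than ~1000 faces (assumed threshold)
    chunk.model.removeComponents(1000)
    chunk.decimateModel(face_count=len(chunk.model.faces) // 2)  # halve the face count
    chunk.smoothModel(smooth_strength)  # 50 / 100 / 200 = low / medium / high smoothing
```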
Select Build Orthomosaic from the Workflow menu. To begin, you have to select the Projection parameter.
- Geographic projection is often used for aerial photogrammetric surveys.
- Planar projection is helpful when working with models that have vertical surfaces, such as vertical digital outcrop models.
- Cylindrical projection can help reduce distortions when projecting cylindrical objects like tubes, rounded towers, or tunnels.
It is recommended to use Mesh as surface. For complete coverage, enable the hole filling option under Blending mode to fill in any empty areas of the mosaic.
The code for Build orthomosaic to use in Python API can be found here.
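A minimal sketch of that code for the 1.8 API, using the mesh as the surface and enabling hole filling, as recommended above:

```python
def build_orthomosaic(chunk):
    """Build an orthomosaic on the mesh surface with hole filling enabled."""
    import Metashape  # imported lazily; requires a licensed Metashape install
    chunk.buildOrthomosaic(surface_data=Metashape.ModelData,      # use the mesh as surface
                           blending_mode=Metashape.MosaicBlending,
                           fill_holes=True)                       # fill empty mosaic areas
```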
Select Build DEM from the Workflow menu. The code for Build DEM to use in the Python API can be found here.
It is recommended to use Point Cloud as the source data since it provides more accurate results and faster processing.
It is recommended to keep the Interpolation parameter Disabled for accurate reconstruction results, since only areas corresponding to point cloud or polygonal points are reconstructed. Usually, this method is recommended for the Mesh and Tiled Model data sources.
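Following the two recommendations above (dense point cloud as source, interpolation disabled), a 1.8 API sketch of the DEM step looks like this:

```python
def build_dem(chunk):
    """Build a DEM from the dense cloud with interpolation disabled."""
    import Metashape  # imported lazily; requires a licensed Metashape install
    chunk.buildDem(source_data=Metashape.DenseCloudData,          # dense cloud as source
                   interpolation=Metashape.DisabledInterpolation) # no gap interpolation
```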
Open Build Texture from the Workflow menu.
Texture size/count determines the quality of the texture. Anything over 16384 can lead to very large file sizes on your hard disk; on the other hand, anything less than 4096 is probably insufficient. For greatest compatibility, keep the Texture size at 4096, but increase the count to e.g. 5 or 10.
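In the console, the texture step is a UV unwrap followed by the texture build. This sketch assumes the 1.8 API parameter names; the size/count values mirror the compatibility advice above:

```python
def build_texture(chunk, size=4096, count=5):
    """Unwrap UVs into `count` pages of `size` pixels and build the texture."""
    import Metashape  # imported lazily; requires a licensed Metashape install
    chunk.buildUV(mapping_mode=Metashape.GenericMapping,
                  page_count=count,          # several 4096 pages instead of one huge page
                  texture_size=size)
    chunk.buildTexture(blending_mode=Metashape.MosaicBlending,
                       texture_size=size)
```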
Open File/Export and select Generate Report… Store the report in the metashape folder together with the project file.
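This final step can also be scripted so the report lands next to the project file, matching the naming used in the project structure above:

```python
import os

def export_report(doc):
    """Export the processing report PDF next to the saved project file."""
    # doc.path is empty until the project has been saved at least once
    base = os.path.splitext(doc.path)[0]
    doc.chunk.exportReport(base + "_processing_report.pdf")
```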