I presented this topic in a general session at the Free and Open Source Software for Geospatial (FOSS4G) conference in San Diego a couple of weeks ago.  The turnout at my talk was really exciting, and somewhat nerve-wracking, but I think it says something about the interest in the topic that's out there.  Since the general sessions weren't recorded, I figured I'd write it up here in case anyone was interested.

When I first set out to talk about getting the most out of OpenDroneMap (ODM), I assumed I could just look at the different arguments available to you as the user and describe which ones to modify for the best results.  I started to realize that while I had a pretty good idea at a high level of what ODM is doing, I didn't feel like I had enough of a handle on it quite yet.  I couldn't find a comprehensive overview of the pipeline out on the web, so I pivoted my talk to focus on the actual workflow taking place under the hood, since understanding that is really the first step in optimizing any tool for your use.

OpenDroneMap (ODM) is an open-source photogrammetry pipeline aimed at small UAS (Unmanned Aerial Systems, i.e. drones), built on top of a set of more task-specific open-source projects.

Photogrammetry, by definition, is the practice of making measurements from images.  What we're really interested in is the subset called stereo photogrammetry, where we can use the principles of parallax to take two overlapping images and extract information and measurements along the X and Y axes as well as in depth, the Z axis.  This same principle is responsible for how we use two eyes to see in 3D.  The main requirement for stereo photogrammetry is having two images of the same object, taken from different angles.

Historic Stereograph

Stereo pairs have been around about as long as photography has been a thing.  Most of us have seen images like the one above that you would put in a hand-held contraption that split the views for each eye and allowed the user to see the scene in 3D.  Early photogrammetrists used stereoscopes like the one below to view aerial photographs and extract elevation information; this is how the contour data for many of the early USGS topographic maps was created.

Stereoscope

More modern examples of using parallax to view images in 3D include those blue and red glasses you could wear to watch movies or view images with color-based offsets to see depth.  3D TVs and modern 3D movies use a similar approach that still requires glasses, but separates the visual perspectives using polarized light instead of color.

Scaling the principle up a little bit (ok, a lot), ODM allows us to use the same basic concepts to extract depth information from our drone photography to create complex 3D datasets and orthophotographs.

The project started life as a command line utility that orchestrated the different underlying processes.  Over time, more options have been built out for users, all essentially sitting on top of the original command line version:

  • ODM - Python command line utility that orchestrates the tasks
  • LiveODM - Bootable DVD/USB ISO image for running ODM
  • CloudODM - Command line utility for working with cloud deployments of ODM
  • NodeODM - Node.js app that wraps ODM and exposes a RESTful web API
  • WebODM - Web application built on top of NodeODM that provides a user interface for working with ODM
  • PyODM - Python API for working with NodeODM

Each of the projects is open source, with options for purchasing ease-of-use packages such as installers for Windows and macOS or pre-flashed ISO images of LiveODM.

ODM Workflow

In order to get the most out of OpenDroneMap we don't necessarily need to know all the math behind extracting three dimensional data from overlapping images, but we should understand, at least at a high level, the processes that go into the ODM pipeline.

Extract EXIF Data

First we start by extracting the EXIF header data from each of the images in our dataset.  This information tells ODM where the sensor was via GPS (if available).  It also contains information about the sensor itself, which is used to account for the specific distortion effects cameras impart to their images.  We can use the location information to speed up the image matching step later on by only looking at images close to a subject image for possible matches, instead of having to search through the entire catalog for each image, which is basically impossible with large datasets.
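If you're curious about what ODM is reading, you can peek at the same tags yourself with the exiftool utility, assuming you have it installed (the filename below is just a placeholder):

# Print the location and camera tags ODM reads from each image
exiftool -GPSLatitude -GPSLongitude -GPSAltitude \
  -Model -FocalLength DJI_0001.JPG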

Structure from Motion (SfM)

At the heart of ODM is the open-source OpenSfM (Open Structure from Motion) library.  OpenSfM is a pipeline that ingests sets of images and uses them to reconstruct camera poses and 3D scenes from overlapping coverage of a subject.  OpenSfM performs feature extraction and matching on neighboring images, then triangulates the matched features into a sparse point cloud of tie points that is refined using bundle adjustment.

Sparse Point Cloud from Structure from Motion

The image above shows the sparse point cloud generated from images taken from a moving vehicle and processed using OpenSfM.
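If you want to poke at the SfM step in isolation, OpenSfM can also be run standalone.  Here's a minimal sketch, assuming a local checkout of the OpenSfM repository with your photos copied into a dataset folder (my_project is a placeholder name):

# From the root of an OpenSfM checkout; images live in
# data/my_project/images
bin/opensfm_run_all data/my_project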

Multi-View Stereo

To get a denser point cloud, the data is fed through a Multi-View Stereo (MVS) process that picks up where OpenSfM leaves off, creating a dense point cloud from the images.  ODM users can optionally bypass this step; if you only want an orthophoto that looks good from above, you may not need the denser point cloud, and skipping it speeds up your processing time.

Dense Point Cloud Classified by Elevation

Meshing

Meshing is essentially the process by which we take the point cloud and create a network of triangles out of it.  The algorithm works to minimize the number of faces used to represent each surface, removing any redundant points along the way.

Results of meshing the dense point cloud
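ODM handles meshing internally, but if you want to experiment with meshing a point cloud on your own, PDAL ships a screened Poisson reconstruction filter.  A rough sketch, not ODM's actual meshing step, with placeholder filenames:

# Mesh a dense cloud into a PLY (with faces) using PDAL's
# screened Poisson filter
pdal translate dense.laz mesh.ply poisson --writers.ply.faces=true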

Texturing

The mesh in and of itself is pretty cool, but the next step is very important to our output imagery.  Texturing is the process of adding images to the mesh faces.  ODM uses the angle of each face to determine the best image to use for the source pixels.  This can be a finicky process; the desired output (nadir orthophoto vs. 3D model) really drives which images should be used to texture each face.

Textured Mesh

ODM has opted to favor nadir images for faces that are visible from the top in order to make the best orthophoto.

Outputs

There are a number of outputs you might be looking for from ODM.  The most common is probably the orthophoto, but you can also generate digital terrain and surface models as well as point cloud datasets that can be used in all major GIS software.

Orthophotographs

Orthophotograph is short for ortho-rectified photograph.  These photos have been geometrically corrected (ortho-rectified) so that they have a uniform scale and conform to the chosen map projection.  To generate our orthophotograph, the software basically takes the textured mesh we mentioned above, tilts it back so that the viewer is looking straight down, and exports that image to a GeoTIFF that can be used in any GIS software.

Textured mesh viewed from above

All of the time spent generating the point cloud and mesh provides the ortho-rectification to the image.

Full orthophoto as viewed in QGIS
Subject area as geoTIFF
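Assuming the default output layout, you can sanity-check the georeferencing on the resulting GeoTIFF with GDAL:

# Print the projection, extent, and resolution of the orthophoto
gdalinfo odm_orthophoto/odm_orthophoto.tif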

Digital Elevation Models

In addition to the orthophoto, digital elevation models (DEMs) in raster formats can be generated from the dense point cloud.  Two flavors of DEM are available from ODM.  A Digital Surface Model (DSM) represents the surface of everything that the camera sees.  This includes buildings, trees and anything else that's sitting on the ground.

Below is the hillshade of our subject area in DSM form:

Digital Surface Model

As you can see, the roofs of the houses are easy to pick out in the data.  A Digital Terrain Model (DTM) uses an algorithm to try to filter out features that are not part of the ground surface so that we can create an accurate representation of the bare ground.

Digital Terrain Model

Here we can see the resulting DTM of the same area.  The algorithm did an OK job for most of the houses; you can see where it replaced the roofs with flat triangulated surfaces.  The ODM team is currently working on an even better algorithm that will make those areas much smoother.

You might also notice that the houses at the left of the image still appear in the DTM.  To track down why, we can look at the classified point cloud.  ODM classifies the dense point cloud using the fairly common Simple Morphological Filter (SMRF) algorithm to label each point as ground or not ground.  Points that are not classified as ground are removed from the mesh, and the resulting gaps are filled using basic interpolation; that's where the blocky triangular surface filling each house comes from.

Classified Point Cloud

In the classified point cloud above, we can see that the points depicting roofs are pretty well classified, except for the two houses at the left, the ones that still show up in the DTM.  One reason for this misclassification might be the unusual nature of these particular structures.  This facility is set up to run mock flood rescues, so the houses are meant to depict a flooded neighborhood and aren't all at typical full house height.

Ah, that makes sense...
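As an aside, the SMRF filter mentioned above isn't ODM-specific; PDAL exposes it too, so you can experiment with the same style of ground classification on your own clouds.  A minimal sketch with placeholder filenames:

# Classify ground vs. not-ground with SMRF, then keep only the
# ground points (ASPRS class 2)
pdal translate dense.laz ground.laz smrf range \
  --filters.range.limits="Classification[2:2]"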

Point Clouds

At RSGIS we're pretty point cloud focused, with our GRiD program providing point cloud hosting for billions and billions of lidar points all across the country.  Compared to lidar, the point clouds we get from photogrammetry might not be quite as detailed or precise, but once you account for the cost difference in the platforms used to collect them, photogrammetry starts to look a lot more attractive.

Point Cloud Symbolized by Height

ODM creates a point cloud in the common .las file format (compressed into .laz) that can be used in any common point cloud processing software, including PDAL, the Point Data Abstraction Library.  One really nice feature of the ODM point clouds is that in addition to the classification you get from SMRF, each point is also colorized using the textured mesh, so when you turn on that symbology you can see the actual scene depicted by the points.

RGB Colorized Point Cloud
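If you just want a quick look at what ODM produced, PDAL can summarize the cloud.  This assumes the default odm_georeferencing output folder; the exact filename may vary by ODM version:

# Point count, bounds, and dimensions of the ODM point cloud
pdal info --summary odm_georeferencing/odm_georeferenced_model.laz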

Running ODM

Since the common denominator for all the ODM sub-projects is the command line ODM project, I spend most of my time working with that platform directly. Making our lives easier, the ODM command line stack is available in a Docker container at opendronemap/odm on Docker Hub.  The Docker container allows us to run ODM without having to deal with the configuration and installation of the stack on our host machine.  We can get into the weeds on the pros and cons of using Docker another time; in this case it makes things nice and easy by encapsulating what we're doing.

To run ODM at the command line using Docker, we use the command below:

docker run -it \
  -v $(pwd)/img:/code/images \
  -v $(pwd)/images_resize:/code/images_resize \
  -v $(pwd)/opensfm:/code/opensfm \
  -v $(pwd)/odm_meshing:/code/odm_meshing \
  -v $(pwd)/odm_texturing:/code/odm_texturing \
  -v $(pwd)/odm_georeferencing:/code/odm_georeferencing \
  -v $(pwd)/odm_orthophoto:/code/odm_orthophoto \
  --rm opendronemap/odm --<argument>

docker run -it tells Docker to run the container with an interactive terminal session for the process, so we can see the log messages posted to STDOUT as it runs.

Each of the -v declarations maps a location on the host machine to a volume in the running container.  This allows Docker to save data as the process runs and makes sure the data lives on the host machine after the Docker process completes.  That's especially important since we're using the --rm flag to tell Docker to clean up the container and its file system when the process is complete, important housekeeping when throwing a lot of data around.

More information about the folder structure used by ODM can be found here; the key is to map the folders where the outputs you're interested in will end up.

After all the Docker arguments, we specify the name of the image we want to run, opendronemap/odm.  This syntax tells Docker that if the image does not exist locally, it should pull it from Docker Hub.  You can use other hosting options for Docker images, but Docker Hub is the default.
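You can also pull the image ahead of time, which is handy on a slow connection or when you want to grab the latest build explicitly:

# Fetch (or update) the ODM image from Docker Hub
docker pull opendronemap/odm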

Finally we can supply arguments to the ODM process itself.  I find that the majority of the time the default settings work well, but there are a few worth taking a look at.

Available Arguments

In order to keep this post at a fairly high level, I'm just going to highlight a couple of the important arguments we can use; there are a lot more, and I encourage you to dig into the documentation if you're interested.

--resize-to <integer> allows you to resize the images prior to running OpenSfM.  By default ODM resizes to 2048px on the long side of the image, but you can give it a custom value if you like.  By resizing the images, you can speed up processing time.  This will result in a lower level of detail in the final mesh, but the full-resolution images are still used during the texturing process, so the final output orthophoto will be high resolution no matter what.  If you want to use the 3D data, this setting will impact the look of the point clouds and mesh, and you should probably use a higher value.
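For example, to keep more detail for the 3D products you might bump the value up.  This is the same invocation pattern as the full command above, with the extra volume mounts omitted here for brevity:

# Resize to 4096px on the long side instead of the 2048px default
# (re-add the output volume mounts from the full command above)
docker run -it --rm -v $(pwd)/img:/code/images \
  opendronemap/odm --resize-to 4096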

--gcp <string filepath> tells ODM to use a ground control point file.  Ground control is important in making sure that the spatial data generated by ODM is registered correctly to the real world.  Typically the GPS units on most drones are good enough to get the X/Y location quite well, but we've seen instances where the Z is off by as much as 50 meters, and ground control allows you to fix that kind of offset.

A ground control point (GCP) file looks something like this:

+proj=utm +zone=16 +ellps=WGS84 +datum=WGS84 +units=m +no_defs   
627220.3 4323464.1 205.6796 5344 3409.5 EP-01-27112_0045_0196.JPG
627220.3 4323464.1 205.6796 4505.5 1776.75 EP-01-27112_0045_0223.JPG
627220.3 4323464.1 205.6796 4618.73 1433.31 EP-01-27112_0045_0195.JPG
627220.3 4323464.1 205.6796 3003.37 980.5 EP-01-27112_0045_0511.JPG
627220.3 4323464.1 205.6796 3178.91 3361.71 EP-01-27112_0045_0512.JPG
627338.9 4323342.7 207.0231 3772.65 616.38 EP-01-27112_0045_0260.JPG
627338.9 4323342.7 207.0231 3818.48 2744.95 EP-01-27112_0045_0261.JPG
627338.9 4323342.7 207.0231 2333.19 1636.27 EP-01-27112_0045_0434.JPG
627337.5 4323299.9 207.303 3498.85 264.81 EP-01-27112_0045_0259.JPG
627337.5 4323299.9 207.303 1500.02 1577.49 EP-01-27112_0045_0433.JPG
627337.5 4323299.9 207.303 3867.09 2981.64 EP-01-27112_0045_0260.JPG
627347.7 4323176.6 207.5149 1321.54 1991.08 EP-01-27112_0045_0430.JPG
627347.7 4323176.6 207.5149 3906.14 566.74 EP-01-27112_0045_0256.JPG

The file starts with a projection definition on its first line.  It's important to note that the coordinates provided are in a projected coordinate system, not an angular coordinate system such as lat/lon, which ODM cannot work with at this time.

Each subsequent line denotes a point (X, Y, Z) in the real world and the corresponding pixel coordinate in the image listed.  To get the best results, you want to capture coordinates for a single point that you can see in ideally three or more images, and capture at least 5-10 of those locations.
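If your survey coordinates come to you as lat/lon, PROJ's cs2cs utility is one way to project them before building the file.  A sketch with made-up coordinates (input is lon lat height):

# Project a WGS84 lat/lon GCP into UTM zone 16N
echo "-85.53 39.05 205.68" | \
  cs2cs +proj=longlat +datum=WGS84 +to \
  +proj=utm +zone=16 +datum=WGS84 +units=m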

There's a tool that has been developed for the Portable OpenStreetMap (POSM) project that can help in building out the GCP file.

POSM Ground Control Point Interface (GCPI)

The POSM GCPI project is available on GitHub, where you can download the source code and run it as a local web application on your computer.  It took me a little bit to get the hang of the exact click workflow, but for an open-source project it really makes generating the GCP file much more user friendly.  And, in the spirit of open source, I'm fully aware that I should contribute a pull request if I can come up with any UX updates that could make the process smoother.

--dsm and --dtm must be used to create the digital surface or terrain models.  By default ODM will not generate these outputs, so you have to explicitly tell it to do so, as in the sketch below.
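Putting that together with the Docker invocation, a run that produces both elevation models might look like this (if I remember the output layout right, the DEMs land in an odm_dem folder, so that's the extra mount to add):

# Generate both the DSM and DTM alongside the standard outputs
docker run -it --rm \
  -v $(pwd)/img:/code/images \
  -v $(pwd)/odm_dem:/code/odm_dem \
  opendronemap/odm --dsm --dtm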

--fast-orthophoto and --use-opensfm-dense both tell ODM to bypass the Multi-View Stereo step and use the sparser point cloud generated by OpenSfM to create the mesh and, ultimately, the orthophoto.  Using these flags can speed up your processing time, but they should be used only when you need the orthophoto and don't care as much about the 3D information.  This is handy in situations where you want to quickly take drone images and make a map, such as in disaster response.
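For a quick-turnaround map, the invocation is as simple as adding the flag:

# Skip the dense reconstruction and go straight to an orthophoto
docker run -it --rm \
  -v $(pwd)/img:/code/images \
  -v $(pwd)/odm_orthophoto:/code/odm_orthophoto \
  opendronemap/odm --fast-orthophoto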

--video & --slam-config are interesting, but not critical; they let you use the experimental video pipeline rather than a bucket of images.  Chalk this one up under future research for me.

Flying Tips

As with the different arguments available in ODM, how you fly and capture the source imagery will depend on the results you're looking for.  I typically use drone imagery for orthophotos and digital terrain models, and the tips below correspond to those types of outputs; if you're interested in 3D modeling of structures or objects, then that's a whole other post.

Use a Flight Planner

Software can be used to pre-program flight lines and collection areas to optimize photo spacing and overlap when flying your drone.  When mapping, always use this kind of software rather than trying to capture images by free-flying the drone.  There are a number of options from different vendors; I only recommend avoiding any that lock you into their proprietary imagery processing pipeline (obviously).

Capture Oblique Images

In order to get good detail on the sides of structures and create the best point cloud, oblique images can really help.  Setting the camera of your drone to take photos at 20-45 degrees off nadir allows the camera to see more than it can from straight above.

Example Oblique Photo

This isn't always possible, depending on your platform and flight planning software, so your mileage may vary.

Limit Shadows

Shadows can really confuse the multi-view stereo algorithm, especially since the one currently included in ODM is designed specifically to use shading to refine its depiction of the underlying surface.  In high-contrast areas, shadows may cause the algorithm to see surface deviation where there is none, rather than recognizing that the shadows are just from a low sun angle.

Strong Shadows Confuse Surface Generation

If it's sunny, flying in the middle of the day will provide you with the best imagery for surface generation, rather than flying early or late when the shadows are more pronounced.  Days that are overcast but still have enough light are the best for mapping missions, since they knock down the harsh shadows while still providing good lighting and color.

As with anything when it comes to flying, this is going to be site and time specific, but you should keep the lighting conditions in mind when planning your collections.

Stay Tuned...

I'm going to keep playing with ODM and getting even deeper into the weeds on the photogrammetry pipeline.  ODM is currently a really good option for drone photogrammetry; it's still a little behind some of the commercial options out there, but improvements are coming and I'm excited!