Additional Field Work and Processing

After the multi-rotor was repaired, which was a small matter, I began to process the images.  Capturing the photos is just the beginning of the process and, at least as far as time is concerned, only the tip of the iceberg.  Processing the images in Photoscan involves three steps, each of which has several sub-steps.  The first step matches the photos and creates a point cloud.  The second step generates a polygon mesh, and the final step creates and applies a skin to the mesh.  Each step has its own quality settings, and the steps can be very processor-intensive or, in the case of the mesh generation, very memory-intensive.  You can see an example of the workflow below.
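
Photoscan (the Pro edition) also exposes this pipeline through a built-in Python console, and a script makes the three steps easy to see at a glance.  The sketch below is only an illustration of that workflow, not the exact settings I used; the parameter names and quality enums vary between Photoscan versions, and the photo folder and output file are made-up paths.

```python
# Rough sketch of the Photoscan (Pro) Python workflow -- run from the built-in
# console.  Parameter names/enums differ between versions; check the API reference.
import glob
import PhotoScan

photos = sorted(glob.glob("photos/*.JPG"))   # hypothetical image folder

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(photos)

# Step 1: match photos and align cameras -> sparse point cloud + camera positions
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)

# Step 2: generate the polygon mesh (this is the memory-hungry part)
chunk.buildModel(surface=PhotoScan.Arbitrary, source=PhotoScan.DenseCloudData)

# Step 3: build UVs and bake the texture ("skin") onto the mesh
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending)

doc.save("scan_project.psz")                 # hypothetical output file
```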

I initially tried to process the images all in one chunk.  There were about 1800 images to process at the time, and after several days of processing I had a big disappointment: the point cloud ended up being just a cloud of random points.  Getting all of the photos to align correctly with the processing power available to me meant breaking the photos up into smaller chunks, which would later be merged back together.  This approach worked much better, and after many days of trial and error I ended up with a point cloud that looked very encouraging.
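
For anyone curious what the chunking looks like in script form, here is a minimal sketch, again only an illustration rather than the exact script I used: it splits the photo list into batches of a few hundred images and aligns each batch as its own chunk.  The batch size is a placeholder for whatever your hardware can handle, and in practice the batches need to share overlapping photos so the chunks can be tied back together when they are merged.

```python
# Minimal chunking sketch (illustrative): align the photo set in smaller batches,
# each as its own Photoscan chunk, and merge the chunks afterwards.
import glob
import PhotoScan

photos = sorted(glob.glob("photos/*.JPG"))   # hypothetical image folder
batch_size = 300                             # placeholder; pick what your RAM/CPU allows

doc = PhotoScan.app.document
for start in range(0, len(photos), batch_size):
    chunk = doc.addChunk()
    chunk.label = "batch_%d" % (start // batch_size)
    chunk.addPhotos(photos[start:start + batch_size])
    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
    chunk.alignCameras()

# The aligned chunks are then joined with Workflow > Align Chunks and
# Workflow > Merge Chunks (or their API equivalents).
```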

[Image: Point_Cloud]

While the software calculates the point cloud, it also estimates the location of the camera for every photo it processes.  Since half of the photos were shot from the ground and half from the multi-rotor, the camera locations give a good indication of how steady the multi-rotor was during the flight and whether there was adequate coverage.  Here is one view of the flight; the blue boxes represent the location at which each photo was taken.

[Image: Point_Cloud_Cam]
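
As a side note, those estimated camera positions can also be read back out of the project with a short script, which makes it easy to check the spacing between consecutive shots numerically rather than by eye.  The sketch below is illustrative; attribute names may differ slightly between Photoscan versions, and the positions are in the chunk's internal coordinate system, so the spacing values are only meaningful relative to one another unless the model has been scaled or georeferenced.

```python
# Illustrative check of the estimated camera positions after alignment --
# handy for judging how steady the multi-rotor flight lines were.
import PhotoScan

chunk = PhotoScan.app.document.chunk          # the active chunk
positions = []
for camera in chunk.cameras:
    if camera.transform is None:              # photo that failed to align
        continue
    positions.append((camera.label, camera.center))  # position in chunk coordinates

# Distance between consecutive shots; consistent values suggest a steady flight.
for (label_a, a), (label_b, b) in zip(positions, positions[1:]):
    print(label_a, "->", label_b, "spacing:", (b - a).norm())
```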

In the view above, the top two rows of blue boxes are the photos taken by the multi-rotor.  The distance and height between shots appear very consistent, and the coverage and overlap look very good.  Here is another view.

[Image: Point_Cloud_Cam_Alternate]

The black lines through the boxes represent the angle the camera was pointed when each shot was taken.  Here is the model after the mesh has been generated and the skin has been applied.

[Image: Textured]

So, with these encouraging results, I have continued processing and merging chunks, with the goal of modeling the two main parts of the structure and then joining them.

More to come...