Hello everyone,
Grzegorz Baran here.
In this video I am going to show how
I captured a real sculpture and turned it into a game-ready asset using photogrammetry.
I also have some key updates to my previous photogrammetry workflow, regarding image
pre-processing for photogrammetry reconstruction.
In detail, I am going to present:
capture using Mavic 2 Pro Drone
the photogrammetry reconstruction using 
Agisoft Metashape
3 different ways for retopology in ZBrush
UV mapping the low-poly model in the RizomUV app
and finally, using Substance Painter, texturing the final model by applying a smart material
I made for this purpose.
So let's start.
A while ago I decided to do a test where I do a full 3D capture with a drone. 
As I mentioned before, there are strict rules I need to follow when flying a drone. I need to stay away from any no-fly zones like airports,
military zones, prisons or any events,
as well as keep a distance of at least 50 meters from any buildings, vehicles and people.
During one of my photogrammetry trips, I found a perfect subject on site.
It was a lone sculpture of Dolly Peel, a 200-year-old local hero-smuggler from South Shields.
The sculpture was about 4 meters high, so quite hard to capture with a standard camera and tripod setup.
It was located away from any buildings, roads and people so it seemed to be a perfect
place for the drone capture.
Unfortunately the weather wasn’t the best for scanning.
The wind was quite strong – about 4 m/s –
strong enough to affect the drone's stability and the sharpness of the images.
I was also afraid that the low, direct sun might
cause a lot of contrast the camera wouldn't be able to cover within a single tonal range.
Fortunately, since I didn't like
the sculpture's surface, I didn't plan to capture it with the purpose of creating a material from it.
In the long term, I found it would be more beneficial to create a totally new material which can
be reused on other scans like this one, and turn it into a smart material instead.
The last issue I noticed were birds, but due to the strong wind I was hoping they wouldn't risk
flying close by. I decided to proceed with the scan despite the conditions, to find out what results
I could get from it – even at the price of total failure. Since the grass was quite high,
I thought this scan might also be a good opportunity to test
a hand take-off and a hand landing.
I had watched a few videos before, so I knew it is doable, but also that it is never safe, especially
not in strong wind conditions.
So I considered the nearby pavement as a plan B, in case I failed with the hand landing or just
changed my mind.
Setting up the equipment in the high grass wasn't easy, especially after I removed the gimbal's cover and the camera lost its
protection. Without the gimbal covered, the grass could scratch the camera lens and damage its mechanism during the gimbal's auto-calibration.
A landing pad would have been very useful here, even just as a clean, flat surface to temporarily
put the drone on.
A third hand would have been very useful,
but since I had just two, I had to put the controller on the ground
and swipe the TAKE OFF button while keeping the drone over my head.
I held the drone with the camera facing away from me.
Finally the drone was hovering in the air next to me. I can tell that the take-off was a piece of cake. Of course it needs
a bit more practice, and maybe some improvements to consider, but it wasn't anything hard at all.
I was more concerned about the hand landing I was planning to test when the scan was over.
But that was a worry for later.
The next step was to proceed 
with the scanning.
Since I was planning to scan a single subject, I used the ACTIVE TRACK feature in
TRACE mode. This feature allows you to select a target the drone will follow.
It is very useful when the object moves, as the drone maintains the initial distance and altitude.
In this mode, when I push the sticks to the left or to the right, the drone won't just move left or right in a straight line
as usual, but will try to fly around the subject at a constant radius,
with the camera facing it all the time.
I don't think this mode was designed for scanning static objects, but it seems to be
perfect for prop scanning.
To select the target, I simply had to draw a box around it
with a finger on the mobile screen while the mode was active.
With this mode active and the target selected, the only thing I had to do was push the sticks left or right to fly
around the object and release the shutter every time I wanted to take a picture. A few times, when the strong wind pushed the drone,
I had to correct the radius manually; otherwise the drone would have crashed.
The sensors block the controller's orders
to move forward if they detect any obstacle in front of the drone, but they don't do anything if the drone itself is
pushed by the wind, so it is very important to keep the drone at a distance where I still have time and space to react to the wind.
After a few circles the drone
lost the target.
I guess it was caused by the wind pushing the drone backward and forward and getting it too close to the target, but whatever the cause was,
it was very easy to retarget the drone and continue with the capture.
After the last time the drone lost the target, I decided to simply fly around
and take a few additional images from the side and from the top of the sculpture.
Finally the capture was over
It took me 8 minutes and during this time I captured 75 images.
The next step was to
proceed with a hand landing.
Before I started the landing, I watched the drone carefully and noticed a few things.
The bottom sensors are placed in the back part of the drone,
so when the drone faces away from me, it detects my hand as soon as I put it beneath.
As a result, to avoid a collision, it
increases its altitude every time it detects my hand beneath it.
Also, the back propellers are placed a bit lower compared with those at the front, so catching the drone from the bottom gives me less space
and less surface to actually grab it.
It means that if I want to grab it, I need to do it very fast, before the drone even reacts to my hand being detected,
and once done, I would need to hold it tight, as the drone will probably try to run away.
It sounds a bit like a recipe for losing a finger.
So I decided to change the approach. I rotated the drone so that the camera was facing me.
And since the front propellers are mounted much higher, it gave me a bit more space to catch the body behind the gimbal
without risking my fingers as much as before. I also noticed
that when my hand was below the drone, around the gimbal, the bottom sensor didn't see it.
It meant that the sensor is located at the drone's back, or that the front bottom one isn't as sensitive
as the back one. Either way, I just had to stay away from the back one. I still had a few doubts,
but finally I gave in and did it.
I grabbed the drone behind the gimbal and pushed the altitude stick back to
initiate the landing. The engines turned off and the landing was over.
Unfortunately, I had no choice but to put the drone in the high grass again, worrying about the camera's safety.
I think this is the part which definitely has to be improved if I consider any more hand landings and take-offs.
After this trip I did a few more hand take-offs and hand landings, and I realised that after a few of them the gimbal didn't behave properly.
I then contacted DJI support and asked what they think about hand landings and hand take-offs for Mavic 2 drones.
Their response was that the drone hasn't been designed for any hand-based take-offs or
hand landings, and it hasn't been tested for this purpose.
As a result, I bought another landing pad – the smallest one,
one I can always carry with me as it fits any backpack I use.
So now I have three of them.
The first one is 75 cm wide.
It is quite small and doesn't leave any room for landing errors when used.
In my last video you can see that I almost lost a blade when it hit a stake after being pushed by the wind.
The second one is 110 cm wide. This size works very well for the Mavic 2 Pro. The issue with this one is that it is
very big even when folded and doesn't fit any of my backpacks; the only way to carry it is to attach it to the outside of the backpack.
As a result, I started carrying the small one again instead,
until I finally ordered the last one.
So now I am a bit confused about which one is best to use.
I decided to ask more experienced Mavic 2 pilots on
the mavicpilots.com forum
for their opinion on hand take-offs and hand landings.
So far I have been told that many of them have been doing hand landings and hand take-offs for years,
their drones are totally fine, and they have never had any issues with them.
It means the last landing pad might be the only one I will use,
since it is very easy to carry and fits my backpack.
But I believe I am going to use it mostly as a flat, clean temporary surface
to set up the equipment on, without risking any gimbal or camera damage.
I am not sure if I will use it as a landing pad itself, since it seems to be too small,
but who knows – we'll see after a while.
Now I have a choice, and I believe time will tell.
Despite the capturing conditions, the captured images look quite good and are
ready to be used as a source for photogrammetry-based reconstruction in Metashape.
To do this, I need to bring all those images into the software and
start the process.
Since I am not planning to create any texture
from the captured
data, I am going to use pure DNG files without any post-processing or fixes.
I will expand on this topic in a moment. For now, let's just load all the selected images.
Metashape can score each image's quality for reconstruction. It can be done
in the detailed Photos view by running the 'Estimate Image Quality' feature.
This is the moment of truth for the capture I made. A quality score below 0.5 means the image is too blurry and has poor quality.
An image scored 1 has the best quality.
Everything between 0.5 and 1 is OK – the closer to 1,
the better.
As you can see, most of the images are scored around 0.8, which isn't bad in this case.
Just a few are around 0.5, but since I didn't plan to capture a surface material,
for the purpose of this reconstruction it is a very good score, and all images are ready for the next step:
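The quality cutoff described above is easy to apply in bulk. Here is a minimal sketch of that filtering step in plain Python – the file names and scores are made up for illustration; in practice they would come from Metashape's 'Estimate Image Quality' results:

```python
# Hypothetical quality scores per image, as 'Estimate Image Quality'
# would report them (roughly 0.0 .. 1.0, higher is sharper).
scores = {
    "DJI_0001.DNG": 0.82,
    "DJI_0002.DNG": 0.79,
    "DJI_0003.DNG": 0.48,   # a blurred frame, below the cutoff
    "DJI_0004.DNG": 0.51,
}

THRESHOLD = 0.5  # below 0.5 the image is considered too blurry to use

def usable_images(quality, threshold=THRESHOLD):
    """Return the names of images that pass the quality cutoff, best first."""
    kept = {name: q for name, q in quality.items() if q >= threshold}
    return sorted(kept, key=kept.get, reverse=True)

print(usable_images(scores))
# the 0.48 frame is rejected, the other three are kept
```

With a real project, the same idea is just applied to the whole photo list instead of four hand-typed entries.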
Photo Alignment.
In this step, Metashape finds shared points across the images and estimates each point's position
relative to the positions of the cameras the images were taken from.
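At its core, this estimation is triangulation: a tie point seen by two cameras sits where their two viewing rays (nearly) intersect. The sketch below shows just that geometric core, with two made-up camera positions both looking at the same point; the real solver refines thousands of such points and the camera poses together:

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Closest-point triangulation for two viewing rays.
    o1, o2: camera centres; d1, d2: view directions (3-tuples, need not be unit).
    Returns the midpoint of the shortest segment between the two rays."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)

    # Minimise |(o1 + t1*d1) - (o2 + t2*d2)| over the ray parameters t1, t2.
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # near zero would mean parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = add(o1, scale(d1, t1))    # closest point on ray 1
    p2 = add(o2, scale(d2, t2))    # closest point on ray 2
    return scale(add(p1, p2), 0.5)

# Two hypothetical cameras at (-1,0,0) and (1,0,0), both aimed at (0,0,5):
point = triangulate_midpoint((-1, 0, 0), (1, 0, 5), (1, 0, 0), (-1, 0, 5))
print(point)  # recovers (0.0, 0.0, 5.0)
```

Returning the midpoint rather than an exact intersection matters in practice, because noisy rays never meet exactly – which is also why more overlapping photos give a more stable alignment.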
I intentionally used all the images as DNG files, without any pre-processing in a photo editing
app, even though the Mavic 2 Pro has terrible barrel distortion.
I used to fix that distortion in a photo editing app, until someone pointed out
that I might be wrong
and it might not be a good thing to do.
By fixing barrel distortion before photogrammetry reconstruction, we trim the image and remove part of the data.
Depending on how big the distortion is, this can be anything from almost nothing for aspherical lenses
to even a few percent of the entire image for the lenses used by drones.
This data might have an impact on image alignment.
As long as we have a lot of image overlap, it shouldn't be a big issue.
But the less overlap we get, the more relevant this data becomes for the alignment process.
But that's not everything, and the consequences of a barrel distortion fix go even deeper.
Since the photogrammetry software estimates 3D coordinates using mathematical models, its results are usually more
accurate than what a photo app's un-distortion feature can deliver.
Additionally, a photo editing app's
un-distortion feature is usually based on a grid transformation,
which introduces additional non-linear,
unpredictable distortion within the image.
So by fixing the distortion in a photo
editing app, we remove data that is useful for image alignment,
and also introduce unwanted perspective noise.
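The "mathematical model" mentioned above is typically the Brown–Conrady lens model: instead of resampling pixels, the software describes the distortion analytically with a few radial coefficients (k1, k2, k3) that it estimates during alignment. A minimal sketch of the radial part, with a made-up coefficient (a real k1 for a specific lens would come from calibration):

```python
def radial_distort(x, y, k1, k2=0.0, k3=0.0):
    """Brown-Conrady radial model: map ideal (undistorted) normalized image
    coordinates to the distorted coordinates the lens actually records."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * factor, y * factor

K1 = -0.12  # made-up coefficient; k1 < 0 produces barrel distortion

# A point near the image corner gets pulled toward the centre...
xd, yd = radial_distort(0.8, 0.6, K1)   # roughly (0.704, 0.528)
# ...while the optical centre itself is unaffected:
centre = radial_distort(0.0, 0.0, K1)   # stays (0.0, 0.0)
```

Because the model is a smooth analytic function, the solver can invert and refine it precisely; a photo app's grid-based warp has no such closed form, which is where the extra noise comes from.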
To dispel any doubts, I spoke with the Agisoft and Reality Capture guys, and they confirmed it.
It means
it is much better if any image perspective distortion is fixed
directly by the photogrammetry app; fixing it
in photo editing software is not recommended.
If for any reason we decide otherwise and fix the barrel distortion using a photo editing app,
it is suggested to fix all the calibration
parameters in the Camera Calibration dialog,
to avoid the calibration coefficients being estimated again. On this occasion I also learnt a very important
new thing: in Metashape we can finally process the images in their original color bit depth, and they will be treated
properly at every processing stage, without any internal downsampling to 8 bits.
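Why that bit depth matters is easy to demonstrate: quantizing a smooth tonal ramp to 8 bits collapses many distinct intensities onto the same level, while 16 bits keeps them apart. A small self-contained illustration (the 1000-step ramp is just a stand-in for real sensor data):

```python
def quantize(value, bits):
    """Quantize a 0..1 intensity to an integer level at the given bit depth."""
    levels = (1 << bits) - 1   # 255 for 8-bit, 65535 for 16-bit
    return round(value * levels)

# A smooth ramp of 1000 intensities between 0 and 1:
ramp = [i / 999 for i in range(1000)]

distinct_8 = len({quantize(v, 8) for v in ramp})    # capped at 256 levels
distinct_16 = len({quantize(v, 16) for v in ramp})  # all 1000 survive

print(distinct_8, distinct_16)  # 256 1000
```

Internally downsampling to 8 bits would throw that tonal resolution away at every stage, which is exactly what Metashape now avoids.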
Metashape supports the DNG and CR2 raw formats, but
to avoid de-bayering, which takes additional processing time and increases
memory consumption, they can be converted to TIFF.
It simply means that for cases like this one, we can use pure RAW file data without any color depth or dynamic range tweaking,
as Metashape sees, reads and utilises
all the necessary information
from the RAW file for us.
Of course, in the case of standard material capture – when we need to calibrate the color, set up the white balance, push up the blacks to reduce
ambient occlusion shadows, and reduce the highlights – those tweaks are still beneficial and are a must.
After the Photo Alignment process is over, we can see the cloud of shared points,
and the camera positions when the camera position
preview is
active.
Since we don't want to reconstruct the entire scene, just the sculpture, we need to limit
the reconstruction area with the 'region' box placed just around the sculpture.
When done, we can proceed with the next steps.
