Has anyone figured out a good workflow for terrestrial photogrammetry scanning? I’ve thought about a few methods but was curious what others had come up with. Being close to the ground, using GCPs exclusively would be challenging. I’m trying to figure out a straightforward way to geotag photos using RTK/PPK. I have an RS2 base/rover setup but do not have an M2.
(1) Use an M2 with a hotshoe connector to a mirrorless/DSLR camera. Geotag photos in Emlid Studio prior to importing into photogrammetry software.
(2) Use an Android phone with the existing RS2: using the Lefebure app, photos will be geotagged at time of capture.
Both methods require some sort of rig/mount for the phone/camera. Care also needs to be taken to allow tilting of the camera without tilting the antenna, since a changing lateral/vertical offset between the two would be difficult to correct for.
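If the rig keeps the camera and antenna rigidly coupled and level, the antenna-to-camera offset reduces to a fixed lever-arm correction applied in post. A minimal sketch of that idea, assuming a local ENU frame and illustrative offset values (you would measure these on your own rig):

```python
# Lever-arm correction: shift a logged antenna position to the camera's
# optical center, assuming the rig is held level (no relative tilt).
# The d_east/d_north/d_up values are hypothetical example offsets in
# meters, not measurements from any real rig.

def apply_lever_arm(east, north, up, d_east=0.0, d_north=0.10, d_up=-0.35):
    """Antenna position (local ENU, meters) -> camera position.

    Only valid while the rig stays level; once the camera tilts
    independently of the antenna, the effective offset changes and
    you would need per-shot attitude (e.g. an IMU) to correct it,
    which is the difficulty noted above.
    """
    return (east + d_east, north + d_north, up + d_up)

# Example: antenna phase center at local (100.00, 200.00, 50.00) m
cam = apply_lever_arm(100.00, 200.00, 50.00)
print(cam)
```

This is why the post above stresses not tilting the antenna: with a level rig the correction is three constants, with a tilting rig it becomes a rotation that you can't recover from position logs alone.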
EDIT TO ADD: I thought I read on the forum that an RS2 will generate events similar to the M2 with the right cable. Wiring according to the pinout diagrams, would that work to generate the necessary event file to post-process in Emlid Studio?
Why would that be challenging?
I would say it is a much less error-prone, much simpler, and more precise solution than trying to tag camera positions.
I suppose GCPs only could work. However, many of the features that need to be scanned include structures (vertical surfaces) and areas where access is limited. I’m hoping to minimize model drift. I’m admittedly new to this so am definitely open to suggestions.
It really depends on what the subject is… A 100 acre farmer’s field or just the barn.
I don’t think a 100-acre farm would lend itself well to terrestrial photogrammetry.
I guess I can make the question simpler. Does anyone have a suggested way to geotag photos using an RS2 rover as the location source?
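One option, once you have rover positions matched to photos by whatever means, is to stamp the EXIF GPS tags after the fact with ExifTool, which most photogrammetry packages will read. The tag names below are real ExifTool tags; the fix-to-photo matching is left to the caller, and the sketch only builds the command rather than assuming ExifTool is installed:

```python
# Build an ExifTool command to write GPS EXIF tags onto a photo from an
# RS2 rover fix. -GPSLatitude / -GPSLongitude / -GPSAltitude and the
# Ref tags are standard ExifTool tags; how you match a fix to a photo
# (event file, timestamp sync, etc.) is assumed to be solved already.

def exiftool_cmd(photo, lat, lon, ellips_h):
    return [
        "exiftool",
        f"-GPSLatitude={abs(lat)}",
        f"-GPSLatitudeRef={'N' if lat >= 0 else 'S'}",
        f"-GPSLongitude={abs(lon)}",
        f"-GPSLongitudeRef={'E' if lon >= 0 else 'W'}",
        f"-GPSAltitude={ellips_h}",
        "-overwrite_original",
        photo,
    ]

cmd = exiftool_cmd("IMG_0001.JPG", 45.123456, -122.654321, 52.31)
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)  (needs exiftool on PATH)
```

Note the altitude written here would be ellipsoidal height straight from the rover; whether your photogrammetry software expects ellipsoidal or orthometric height is worth checking before processing.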
Well, I would use a drone for just the barn too. So, again, it depends on the subject, right?
Geo-location is only one element in a photogrammetric workflow. The photos, or better, the pose or ability of the camera to capture the scene is also kind of important. Maybe you just want to model a store front or something so overhead camera positions are not needed. The software is going to match key points in images if it can. Knowing precise (centimeter level) camera positions will help the software calibrate the camera. But even if they are close, the software will be able to model the scene if your overlaps are good.
So, maybe centimeter-precise camera positions are not even critical to achieve your goal. Modeling with a drone that has non-precise GPS coordinates can produce a beautiful model. Throw in some GCPs and you can align the model to a projected coordinate system if that is what you want to do. Centimeter-precise camera locations are not critical.