Terrestrial / Handheld Photogrammetry Workflow

Has anyone figured out a good workflow for terrestrial photogrammetry scanning? I’ve thought about a few methods but am curious what others have come up with. Being close to the ground, using GCPs exclusively would be challenging. I’m trying to figure out a straightforward way to geotag photos using RTK/PPK. I have an RS2 base/rover setup but do not have an M2.

(1) Use an M2 with a hotshoe connector to a mirrorless/DSLR camera. Geotag photos in Emlid Studio prior to importing into the photogrammetry software.

(2) Use an Android phone with the existing RS2: using the Lefebure app, photos are geotagged at time of capture.

Both methods require some sort of rig/mount for the phone/camera. Care also needs to be taken to allow tilting of the camera without tilting the antenna, as it would be challenging to correct for the changing lateral/vertical offset.
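For reference, the tilt correction mentioned above is just a lever-arm rotation. A minimal sketch, assuming a rigid mount and simple two-axis geometry (the function name and sign conventions are mine, not from any Emlid tool):

```python
import math

def antenna_offset_world(offset_fwd_m, offset_up_m, tilt_deg):
    """Rotate a camera-frame antenna offset into the local level frame.

    offset_fwd_m: antenna offset along the camera axis (m)
    offset_up_m:  antenna height above the image sensor (m)
    tilt_deg:     camera pitch from horizontal (positive = tilted up)

    Returns (horizontal, vertical) offsets in the level frame.
    Sign conventions here are illustrative; match them to your rig.
    """
    t = math.radians(tilt_deg)
    horiz = offset_fwd_m * math.cos(t) - offset_up_m * math.sin(t)
    vert = offset_fwd_m * math.sin(t) + offset_up_m * math.cos(t)
    return horiz, vert
```

With the camera level, a 15 cm antenna height stays purely vertical; tilt the camera 90 degrees and that same offset becomes purely horizontal, which is why a fixed offset entry in the software only works if the antenna stays upright.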

EDIT TO ADD: I thought I read on the forum that an RS2 will generate events similar to the M2 with the right cable. Wired according to the pinout diagrams, would that work to generate the necessary event file to post-process in Emlid Studio?

Why would that be challenging?
I would say it’s a much less error-prone, simpler, and more precise solution than trying to tag camera positions.


I suppose GCPs alone could work. However, many of the features that need to be scanned include structures (vertical surfaces) and areas where access is limited. I’m hoping to minimize model drift. I’m admittedly new to this, so I’m definitely open to suggestions.

It really depends on what the subject is… A 100 acre farmer’s field or just the barn.


I don’t think a 100-acre farm would lend itself well to terrestrial photogrammetry :wink:

I guess I can make the question simpler. Does anyone have a suggested way to geotag photos using an RS2 rover as the location source?

Well, I would use a drone for just the barn too. :grin: So, again, it depends on the subject, right?

Geo-location is only one element in a photogrammetric workflow. The photos, or rather the camera’s pose and ability to capture the scene, are also important. Maybe you just want to model a storefront or something, so overhead camera positions are not needed. The software is going to match key points in images if it can. Knowing precise (centimeter-level) camera positions will help the software calibrate the camera. But even if they are only close, the software will be able to model the scene if your overlaps are good.

So maybe centimeter-precise camera positions are not even critical to achieving your goal. Modeling with a drone that has non-precise GPS coordinates can produce a beautiful model. Throw in some GCPs and you can align the model to a projected coordinate system if that is what you want to do. Centimeter-precise camera locations are not critical.


Figured I’d provide a brief update, as I have tried both methods with some degree of success.

(1) I assembled an RS2 camera hot shoe adapter with the two cables. With a camera connected, events are triggered in the logs just as expected. Logs can be post-processed and images geotagged.

(2) I got an Android phone to test the direct geotagging. With the Lefebure app, I am able to directly geotag the images with an RTK fixed rover. One thing I noticed is that not all camera apps write geolocation tags with sufficient precision to be useful. It seems like the default camera app on my phone truncates the coordinates. The ‘SurveyCam’ app seems to work fine.
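As a sanity check on why truncated tags matter: one degree of latitude is roughly 111 km, so each decimal place retained in the EXIF coordinate improves the worst-case error by a factor of ten. A quick back-of-the-envelope sketch (the constant is approximate; longitude error shrinks further with latitude):

```python
def decimal_places_error_m(places):
    """Worst-case position error (metres) from truncating latitude
    to a given number of decimal places (1 deg latitude ~ 111,320 m)."""
    return 111_320 / 10 ** places

# 5 decimal places still leaves ~1.1 m of error; roughly 7-8 places
# are needed before truncation stops dominating an RTK fix.
for p in (4, 5, 6, 7, 8):
    print(p, round(decimal_places_error_m(p), 4))
```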

However, I still have one main question related to method (1). I typically have the base set up over a known point providing corrections to the rover. My scans are typically small, and a PPK workflow doesn’t make sense to me. Is there any way to use the logged events to extract/interpolate the corrected position of the rover while it has an RTK fixed solution?

Hi, not sure I fully understand your question. Do you mean something like the Stop&Go feature?


Not Stop&Go. The hotshoe cable triggers events in the log file, so I have the exact time of image capture and would like to reference that to the RTK fixed position. I suppose I should be logging position.

OK, it sounds like drone PPK, but instead you hand-hold the camera.
If you record logs, each triggered event will have a FIX position. It’s not RTK, as you need to post-process it.
Am I getting closer?


I think so; however, the raw log that includes the events is just uncorrected GPS data. Logging position includes the fixed/corrected positions but doesn’t include the events. I guess I just need to write a script: pull the events out of the observation log, then interpolate positions from the position log.
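That script could look something like this. The .LLH field layout is an assumption based on my reading of Emlid’s position logs; check your own file and adjust the indices before trusting the output:

```python
import bisect
from datetime import datetime, timedelta

def read_llh(path):
    """Parse a position log into parallel time/position lists.

    Assumed layout (check yours):
    'YYYY/MM/DD HH:MM:SS.sss  lat  lon  height  ...'
    """
    times, positions = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 5 or parts[0].startswith('%'):
                continue  # skip headers and short lines
            t = datetime.strptime(parts[0] + ' ' + parts[1],
                                  '%Y/%m/%d %H:%M:%S.%f')
            times.append(t)
            positions.append(tuple(float(x) for x in parts[2:5]))
    return times, positions

def interpolate(times, positions, event_time):
    """Linearly interpolate the rover position at an event timestamp."""
    i = bisect.bisect_left(times, event_time)
    if i == 0 or i >= len(times):
        raise ValueError('event time outside the position log')
    t0, t1 = times[i - 1], times[i]
    w = (event_time - t0).total_seconds() / (t1 - t0).total_seconds()
    return tuple(a + w * (b - a)
                 for a, b in zip(positions[i - 1], positions[i]))
```

The event timestamps themselves would come from the observation file (the flag-5 epoch records mentioned later in the thread); this only covers the interpolation half.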

Simplest Method

1- Create GCPs visible to the drone camera every 20x20 m, 50x50 m, etc., or any distance
2- Find the GCP coordinates with the RS2 (PPK)
3- Take several images by drone, or 4K video
4- Use Metashape or other photogrammetry software to build a 3D model.
The software uses the GCPs to georeference and correct the 3D model and map image


Hi guys,

Sorry for the delayed comments!

Thanks for this discussion. I think I can just sum up possible options for terrestrial photogrammetry scanning and share some links here:

Reach receivers don’t support geotagging photos in RTK. The workaround you suggested sounds technically correct, but Reach can’t stream the time-mark info; it can only be accessed after the log recording is finished. Can you read this info some other way, perhaps right from the camera?

I have the RS2 connected to my camera using the hotshoe adapter on the RS232 port. It is creating events just as expected. I can grep " 5 0" in the rover’s observation file and easily get the times of all the events. I can log the position of the rover and have the RTK positions. Then it just takes some scripting or manual interpolation to get the RTK position for each event.

Reach receivers could support geotagging in RTK, I suppose; it’s really just software that is the missing piece. Emlid Studio could do it if you could feed it an observation file and a position file directly. In the “Drone Data” workflow, I assume Emlid Studio is just interpolating the PPK’d position file at the event times. It could just as easily do that with an RTK’d position file.

I realize it’s probably not a really popular or needed workflow but I appreciate all the responses and discussion.

It is much easier than that.
Just use the events file in Emlid Studio. Then you get time interpolation down to the nanosecond.


Another update:

I’ve got a fairly straightforward PPK/RTK setup now. I assembled a small rig to hold the camera and RS2. It looks a little awkward, but it handles fine. A smaller rig could be assembled with an M2 and a smaller antenna. The camera hotshoe is hooked up to the RS2 with a semi-custom cable. The RS2 is centered above the image sensor, and Metashape takes care of the antenna offset.

Workflow for RTK:
Raw and position logging on the rover/camera RS2.
Emlid Studio to generate the events.pos file from the raw log.
Excel to generate the image geolocations from the position.LLH and events.pos files.
Metashape for photogrammetry (import the geolocation file generated by Excel to reference the photos).
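The Excel step above could also be a few lines of script. A hypothetical sketch that writes a label/lat/lon/alt CSV for Metashape’s Reference pane, assuming the photos are listed in capture order and have already been paired with their event positions (Metashape lets you map columns on import, so the exact header names are not critical):

```python
import csv

def write_metashape_reference(photo_names, coords, out_path):
    """Write a label,lat,lon,alt CSV suitable for importing as
    photo reference data. Photos are assumed to be in capture order,
    matching the order of the logged events one-to-one."""
    with open(out_path, 'w', newline='') as f:
        w = csv.writer(f)
        w.writerow(['Label', 'Latitude', 'Longitude', 'Altitude'])
        for name, (lat, lon, alt) in zip(photo_names, coords):
            # 9 decimal places keeps sub-millimetre latitude resolution
            w.writerow([name, f'{lat:.9f}', f'{lon:.9f}', f'{alt:.4f}'])
```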

Workflow for PPK:
This can be done within Emlid Studio.

There are a few commercially available handheld scanning options. This approach has a couple of benefits: it is not limited to NTRIP corrections, and it uses the same receiver for the photos as for surveying GCPs.


Seems really complicated simply to replace a couple of GCPs. But if you’re happy, that is all that matters!



Oh, I see what you meant now. Thanks for sharing your setup!

I agree… it’s a bit of solving a problem that doesn’t really exist. In most cases, GCPs are the way to go. I have run into a few scenarios where placing GCPs would be difficult, and this could fill that gap. Otherwise, it’s not really worth the complication.


Hi implicite,

Your measurement method interested me very much. Could you share your worked-out measurements? I don’t need the 3D model. Can your method produce a 2D map (an orthophotomap)?