I’m using an Emlid Reach M2 to geotag my photos.
I want to add the yaw, pitch, and roll angles to the photos.
Is it possible to get this data from the M2?
Reach M2 doesn’t output yaw, pitch, and roll values. However, the unit gives you image centers calculated with high accuracy, which ensures good results in photogrammetry software.
How do I enter the various distances from the antenna to the center pixel of my camera? Also, where are the values that the M2 calculates? Are they logged with the raw/obs files when the hot-shoe adapter detects a shutter release, as DJI does with the P4RTK in a .mrk file? When using the M2 in PPK mode, there is no real-time solution, so the offset must be applied later.
Seems like you could “hack” the 9DOF IMU in it to get what you need?
Welcome to our forum!
You can set these values while post-processing in photogrammetry software.
Our software doesn’t support outputting these values. They also aren’t logged in raw data. Raw data from Reach contains time marks recorded when the camera triggers.
Thanks for your reply. But if the antenna is above (as will always be the case) and/or out of line with the center of the camera’s pixels, adding static offsets in post-processing is insufficient. From Ms Suzdaltseva’s reply to Kubar above, I assumed that the M2 uses its onboard 9-DOF sensor to calculate dynamic offsets, as the DJI P4RTK does. If not, what is the purpose of the 9-DOF sensor?
In my use-case of the M2, I’m hand-holding a mirrorless camera to photograph rocks, a meter or two in the distance, from various angles, for 3-D reconstruction. In my first attempt at this, I mounted the camera on a video gimbal in such a way that the GNSS antenna remained more or less over the center pixel as I changed the camera’s pitch. In the field, photographing in uneven terrain, this rig turned out to be heavy, awkward, and impractical. With the M2 and its 9-DOF sensor, I was hoping to mount a helical antenna directly on the camera and then correct for pitch and yaw in post-processing. Not correcting could add error in position and altitude on the order of 5 cm.
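If yaw, pitch, and roll were available from some source (Exif, an external IMU), the lever-arm correction I have in mind would look something like this rough Python sketch. The function name, the body-frame axis convention (x forward, y right, z down), and the Z-Y-X rotation order are all my own assumptions, not anything Emlid provides:

```python
import math

def antenna_to_camera_offset(yaw_deg, pitch_deg, roll_deg, lever_arm_body):
    """Rotate a body-frame lever arm (antenna -> camera center, metres,
    x forward, y right, z down) into the local north-east-down frame
    using aerospace yaw-pitch-roll (Z-Y-X) rotations."""
    y, p, r = (math.radians(a) for a in (yaw_deg, pitch_deg, roll_deg))
    cy, sy = math.cos(y), math.sin(y)
    cp, sp = math.cos(p), math.sin(p)
    cr, sr = math.cos(r), math.sin(r)
    # Rows of the body-to-navigation direction cosine matrix (Z-Y-X order)
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    bx, by, bz = lever_arm_body
    return tuple(row[0] * bx + row[1] * by + row[2] * bz for row in R)

# A 5 cm lever arm straight "down" in the body frame stays straight down
# in the navigation frame when all three angles are zero.
print(antenna_to_camera_offset(0, 0, 0, (0.0, 0.0, 0.05)))
```

The rotated offset would then be subtracted from each PPK-derived antenna position to recover the camera-center position.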
It is mentioned because Reach M2 has built-in IMU sensor. But there’s no way to derive this data from the app, and we don’t use it in software calculations.
Still, you can use an external IMU sensor. I can hardly provide guidance since we haven’t tested it ourselves, but you can check similar topics on our forum. You can also check the ArduPilot forum.
Yes to an external 9DOF, but then I’d have to sync it to the M2 and record the data somehow; or I could bolt a smartphone to the camera or even just use the smartphone’s camera. In fact, the mirrorless camera I use (a P S5) has an IMU and includes pitch and roll in Exif but no yaw.
If the distance from the ARP to the center pixel is 5 cm, a requirement for 1 cm precision creates a checkerboard of 78 squares around the center pixel where the ARP might lie, depending on yaw and pitch. Thus, for this use case (with a helical antenna bolted to the camera), the required accuracy of yaw (and pitch) can be as coarse as about 10 degrees with the camera pointing down.
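The arithmetic behind the 10-degree claim can be checked quickly: the tip of a 5 cm lever arm swept through an angular error of 10 degrees moves along a chord of length 2·r·sin(θ/2), which is my own back-of-the-envelope model, not anything from Emlid:

```python
import math

r = 0.05                  # antenna-to-center-pixel distance, metres
theta = math.radians(10)  # assumed yaw (or pitch) uncertainty

# Chord swept by the lever-arm tip over a 10-degree angular error
err = 2 * r * math.sin(theta / 2)
print(f"{err * 100:.2f} cm")  # about 0.87 cm, just under the 1 cm budget
```

So a 10-degree attitude error with a 5 cm lever arm stays (barely) inside a 1 cm position budget.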
I suppose one solution would be to take two photos at each station around the rock, moving the camera horizontally between the two shots. The direction of motion would always need to be consistent in terms of the geometry of the camera and mounted antenna; e.g., perpendicular to the long dimension of the camera body. In post, I’d use the two positions to calculate yaw and then, using pitch from Exif, the antenna offset.
For yaw in this use-case, this solution is actually superior to using two rigidly connected RTK GNSS receivers, since you’d have to mount those on some sort of pivot to handle large pitch. The M2’s ability to detect shutter release helps in the two-photo solution because it reduces the data stream from the receiver to just two discrete points.
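To make the two-photo idea concrete, here is a minimal sketch of the yaw computation, assuming the two PPK fixes have already been converted to a local east/north frame in metres. The function name and the `motion_to_heading_deg` parameter (how the direction of motion relates to the camera’s forward axis) are purely illustrative:

```python
import math

def yaw_from_two_fixes(e1, n1, e2, n2, motion_to_heading_deg=90.0):
    """Estimate camera yaw (degrees clockwise from north) from two PPK
    fixes taken while translating the camera sideways.  With the default
    of 90 degrees, the motion is assumed to be toward the camera's right."""
    # Direction of motion over the ground, clockwise from north
    track = math.degrees(math.atan2(e2 - e1, n2 - n1))
    return (track - motion_to_heading_deg) % 360.0

# Camera slid 10 cm due north while pointing west:
print(yaw_from_two_fixes(0.0, 0.0, 0.0, 0.1))  # 270.0
```

With yaw recovered this way and pitch taken from Exif, the lever-arm offset could then be applied to both fixes.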