Feature request: logic on time mark pin

Hello. Would it be possible to include an option to reverse the logic on the time mark pin? That is, have it trigger an event recording on a rising voltage?

And could there be an option to offset recorded events forwards or backwards in time by a constant number of milliseconds?

I don’t think that is possible, because the time mark pin is a pin of the uBlox chip and is directly connected to it.

Offsetting an event backwards in time would need a thing called a time machine. I’m not aware of a model which is commercially available and light enough for a drone. But you can add or subtract a constant offset easily in post.

For any further logic I would go for an Arduino “companion computer”.

You only know of the private, heavier ones?

I assume you knew I meant recording the timestamp with a value offset by a certain amount, to avoid doing it in “post”? At least I hope you gathered I wasn’t asking for the impossible.

As for the offsetting in post-processing: is this something that can be done with the Emlid flavour of RTKLIB, or are you just referring to anything done post-flight in any software?

The commercially available ones I know of are very heavy and bulky, more like a car.

Sorry for the joke. I know this because I had a similar question regarding the timemark pin.

The time mark pin needs a latency of less than 1 ms to produce an accurate position; otherwise the drone has already travelled some distance before the time mark is recorded (at a typical 15 m/s flight speed, every millisecond of latency adds about 1.5 cm of along-track error). To achieve this, the pin is wired directly to the uBlox chip.

I would do the time shift during post-processing with a Python script.

Added comment:
I realized it is far easier to change the event time before processing the RINEX file in RTKLIB. You probably have to keep the ascending time structure of the file, and therefore move the time mark to its new position in the file.

Maybe @emlid could include this time machine feature in RTKLIB, together with the lever arm correction, soon.

Yes, you do need to keep the ascending nature of the RINEX file. If you inject an event marker between two random measurement epochs, it seems to give you an interpolated position for your epoch based on the two adjacent measurements. This can lead to some pretty wild positions for your events the further in time your event is from the measurements you slotted it between, as you can imagine. I tried cheating the system, but you have to remain faithful to the time order of things. It’d be great if you could give it all the events just after the header of the RINEX file and have RTKLIB slot them in appropriately; I can’t imagine that’d be a hard thing to program in.

I’d just prefer not to have to do the interpolation between epochs in a giant Excel spreadsheet with lookups, or learn Python scripting.

Can you describe more precisely how you cheated the system and what the unexpected result was? The interpolation between epochs you describe seems a perfectly valid solution to me.

I think I did understand: you didn’t correct the order of the time marks.

I think the script should (see the sketch after this list):

  1. scan the RINEX file for existing events,
  2. add or subtract the time offset, and
  3. if necessary, move the corrected event to its appropriate position in the RINEX file (ascending time order).
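
Something like this rough Python sketch could do those three steps. It is my own illustration and untested on real data: it assumes a RINEX 2.x observation file as written by RTKCONV, where an external event is an epoch-header line with flag 5, and it detects epoch lines with a fixed-column heuristic (decimal point of the seconds field at column 19), so treat it as a starting point, not a finished tool.

    from datetime import datetime, timedelta

    def epoch_time(line):
        """Decode the fixed-column timestamp of a RINEX 2.x epoch line."""
        yy, mo, dd = int(line[1:3]), int(line[4:6]), int(line[7:9])
        hh, mi, sec = int(line[10:12]), int(line[13:15]), float(line[15:26])
        year = 2000 + yy if yy < 80 else 1900 + yy
        return datetime(year, mo, dd, hh, mi) + timedelta(seconds=sec)

    def shift_events(path_in, path_out, offset_ms):
        with open(path_in) as f:
            lines = f.readlines()
        split = next(i for i, l in enumerate(lines) if 'END OF HEADER' in l) + 1
        header, body, blocks = lines[:split], lines[split:], []

        # step 1: scan for epoch/event lines and group the file into blocks
        for line in body:
            is_epoch = (len(line) > 28 and line[18] == '.'
                        and line[26:29].strip().isdigit())
            if is_epoch:
                blocks.append({'time': epoch_time(line),
                               'event': int(line[26:29]) == 5, 'lines': [line]})
            elif blocks:
                blocks[-1]['lines'].append(line)  # obs data / continuation lines

        # step 2: add or subtract the offset on event blocks only
        for b in blocks:
            if b['event']:
                b['time'] += timedelta(milliseconds=offset_ms)
                t = b['time']
                sec = t.second + t.microsecond / 1e6
                b['lines'][0] = (' %02d %2d %2d %2d %2d%11.7f'
                                 % (t.year % 100, t.month, t.day,
                                    t.hour, t.minute, sec)
                                 + b['lines'][0][26:])

        # step 3: restore ascending time order and write the result
        blocks.sort(key=lambda b: b['time'])
        with open(path_out, 'w') as f:
            f.writelines(header + [l for b in blocks for l in b['lines']])

    shift_events('rover.obs', 'rover_shifted.obs', -250)  # filenames are examples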

The other solution is to do it after RTKLIB processing. If you need to correct the lever arm anyhow and add the names of the images, this might save one processing step.

At the beginning of next week I could write a quick-and-dirty, beginner-style Python script for that problem. Are you ready to donate some money to the Against Malaria Foundation if I write the script for you?

Tobias,

I tried cheating the system by making up event markers in the RINEX file and putting them after the header and before the first epoch. No result.

I then tried putting all the markers after the first epoch. From memory, it processed only the first or last one and ignored the rest. The result was a pretty wild position as, I assume, it did an interpolation (linear?) in time using the adjacent measurements.

I then tried spacing the markers one after each of the first few measurements, so it was meas, marker, meas, marker, meas, marker, meas, meas, meas… All events processed, but as before, the wildness of the solutions seemed to depend on how close in time the event marker was, or wasn’t, to the adjacent measurements.

My ideal would be to take the list of my recorded events from the events.pos file, offset the event times by -0.25 seconds, and put these back into the RINEX file in their proper chronological order. If you know your logging rate, then you know the exact measurement record immediately following or preceding the spot where your new event records are supposed to fit in.

Your script would definitely need to move the event markers by either 2 or 3 measurement blocks: 2 if the event is more than 0.05 seconds from the previous measurement epoch, and 3 if it is less than 0.05 seconds from it. That is the combination of my 0.1 second logging rate and the -0.25 second offset I need.
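
A quick numeric check of that 2-or-3-blocks arithmetic (my own sketch, not part of any script mentioned here):

    import math

    EPOCH_INTERVAL = 0.10  # s, the 10 Hz logging rate
    OFFSET = -0.25         # s, the required time shift

    def blocks_to_move(frac_after_epoch):
        """Epoch boundaries an event crosses when shifted back in time,
        given its position (s) after the preceding measurement epoch."""
        return math.ceil((-OFFSET - frac_after_epoch) / EPOCH_INTERVAL)

    print(blocks_to_move(0.08))  # >0.05 s after the previous epoch -> 2
    print(blocks_to_move(0.02))  # <0.05 s after the previous epoch -> 3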

I would be willing to make a donation to that foundation. I do have an Excel solution, but event positions straight from RTKPOST would make life easier. Could the script include a prompt for the time offset, in case my -0.25 s needs refinement?

@wsurvey: after thinking about it, I would write a script based on the pos files you get after processing the data in RTKLIB.

I will include the following features (a rough sketch follows the list):

1. a time offset for time marks, in milliseconds. If you specify an offset other than 0, the script will interpolate a corrected event position from the previous and following positions in the waypoint pos file.

2. a simple lever-arm correction based on the xyz offset between antenna and camera, under the assumption that the drone’s heading is towards the next waypoint.

3. adding the file names of the images in a given folder to the event position table.
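
Here is a minimal sketch of how feature 1 might look (my own illustration, not the promised script), assuming RTKLIB .pos rows of the form “yyyy/mm/dd hh:mm:ss.sss  lat  lon  height …” with comment lines starting with ‘%’:

    import bisect
    from datetime import datetime, timedelta

    def read_pos(path):
        """Read an RTKLIB .pos file into a time-sorted list of (t, lat, lon, hgt)."""
        rows = []
        with open(path) as f:
            for line in f:
                if line.startswith('%'):
                    continue  # skip header/comment lines
                p = line.split()
                t = datetime.strptime(p[0] + ' ' + p[1], '%Y/%m/%d %H:%M:%S.%f')
                rows.append((t, float(p[2]), float(p[3]), float(p[4])))
        return rows

    def event_position(track, t_event, offset_ms):
        """Feature 1: linearly interpolate the track at the offset event time
        (no bounds checking; the shifted event must fall inside the track)."""
        t = t_event + timedelta(milliseconds=offset_ms)
        i = bisect.bisect_left([r[0] for r in track], t)
        (t0, *p0), (t1, *p1) = track[i - 1], track[i]
        w = (t - t0).total_seconds() / (t1 - t0).total_seconds()
        return [a + w * (b - a) for a, b in zip(p0, p1)]

Feature 2 would then rotate the antenna-to-camera xyz offset by the heading, taken under the stated assumption as the azimuth from the interpolated position to the next waypoint.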

@wsurvey: how does your Excel solution interpolate the positions? And which operating system are you using, Windows?

It’s simply a linear interpolation based on the position of the solution in the prior epoch and the delta E, N, H and t to the next position. These deltas are multiplied by the appropriate fraction of the time interval at which the recorded event occurs. Sounds like we are doing the same thing, except I’m using the GPS antenna offset X, Y, Z in Photoscan to take care of my eccentric mount.
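
In other words, per component (my notation, not from the spreadsheet): E_event = E_prev + ((t_event − t_prev) / (t_next − t_prev)) × (E_next − E_prev), with N and H handled the same way.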

As I’m using Photoscan, I also use the sheet to estimate some standard deviations for each point by considering the expected timing accuracy, speed, and stability of flight through the surrounding epochs.

Yep, Windows 10.

How do you handle the heading of the drone in Photoscan? As far as I understand, you need to know the orientation of the image in order to use the offset feature of Photoscan.

From the Agisoft forum by user JMR:

The offset is fixed with the origin considered in the nodal point of the camera and looking through the lens, +Z points to your eye, +X points to your right, and +Y points to hotshoe.

It would be great to use the IMU data in order to get the orientation of the cameras, but I lack the time to figure out how to do that. It’s a pity that Reach does not have a feature to generate another output file with the IMU data.

As I’m using Photoscan, I also use the sheet to estimate some standard deviations for each point by considering the expected timing accuracy, speed, and stability of flight through the surrounding epochs.

Can you share your Excel file/formulas, or more information on how to calculate the statistics? I think it would only be logical to integrate these calculations into the script.

This information, as far as I’m aware, is contained within the EXIF header of each individual image taken by a Phantom 4, probably derived from the on-board compass, IMU and gimbal state.

The formulas are nothing fancy, just what felt right to me at the time (don’t judge me!). sdE is just the square root of the sum of the squares of: the E component of the drone’s horizontal velocity multiplied by the expected error in timing, a constant 0.01, and the stability number I made up. sdN and sdH are calculated likewise. The stability number is the standard deviation of 5 adjacent dE values. This stability guesstimate is meant to pump up the standard deviation when the drone is not showing consistent epoch-to-epoch component velocities, such as when turning or being buffeted by wind.
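
My reading of those formulas as code (names invented, not wsurvey’s actual spreadsheet):

    import math
    import statistics

    def sd_component(velocity, timing_sd, adjacent_deltas):
        """velocity: component velocity in m/s; timing_sd: expected timing
        error in s; adjacent_deltas: 5 adjacent per-epoch position deltas in m."""
        stability = statistics.stdev(adjacent_deltas)  # epoch-to-epoch consistency
        # root-sum-square of timing term, 0.01 m constant, and stability term
        return math.hypot(velocity * timing_sd, 0.01, stability)

    # e.g. 15 m/s east velocity, 5 ms timing uncertainty, slightly bumpy flight
    print(round(sd_component(15.0, 0.005, [1.52, 1.48, 1.50, 1.55, 1.45]), 3))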

I’m not sure I’d use my formulas, though; I have no good data to test them on, as I came up with them when working from the flight logs of the Map Pilot app for the timing, rather than from my event recording on the Reach M+, so I don’t know if they reflect reality or not. But I’m not too concerned. Photoscan gives me exceptional horizontal results, and I feel a big enough standard deviation for position will allow Photoscan to slide the photos around horizontally and produce relative horizontal positions probably as good as relying on my timing triggering. Height is the one I want to accurately determine and constrain.

Just to keep you up to date, I have finished about 65% of the work.

[screenshot]

Looks good. We’ll have to compare it to my spreadsheet to see if they give the same answers.

Do you mind providing me with some of your uBlox and RINEX tracks to process for comparison?

Here’s a sample of my spreadsheet output:

PRO_0053.JPG 9779.964 9573.155 203.033 0.237 0.042 0.015
PRO_0054.JPG 9799.931 9569.737 203.090 0.250 0.047 0.016
PRO_0055.JPG 9818.977 9566.232 203.324 0.213 0.039 0.015
PRO_0056.JPG 9839.027 9563.048 203.396 0.244 0.038 0.015
PRO_0057.JPG 9858.822 9559.897 203.510 0.245 0.040 0.016
PRO_0058.JPG 9877.955 9556.782 203.649 0.248 0.040 0.015
PRO_0059.JPG 9897.581 9553.689 203.896 0.247 0.040 0.015
PRO_0060.JPG 9917.655 9550.527 204.110 0.241 0.040 0.016
PRO_0061.JPG 9937.914 9547.219 204.118 0.245 0.043 0.015
PRO_0062.JPG 9956.462 9544.053 204.235 0.245 0.042 0.016
PRO_0063.JPG 9977.014 9540.514 204.129 0.242 0.043 0.015
PRO_0064.JPG 9997.084 9537.107 203.908 0.246 0.042 0.016

This is tab-formatted text output pasted into a text file, with image name, E, N, H, sdE, sdN, sdH fields, ready to import into Agisoft as camera station coordinates. It now makes corrections for grid convergence, as I believe the output E/N/U baseline components are calculated with reference to true north.

I do not work with ENU but with EPSG:4326 (WGS84) and height above the ellipsoid. Is that a problem?

I’m sorry that I lack knowledge about these things and need some explanations.

As far as I understand, “ENU” is a local system with its origin at the base station position. The distance between your cameras and your base station is about 14,000 m. Using that system makes it easy to correct the produced model. Are there any other advantages?

Can you provide a formula for the grid convergence correction? What exactly does it do: correct the orientation of the whole “model”, or the orientation of the images?

I can convert using software.

It doesn’t seem to be explained in the RTKLIB documentation, but it could have been UTM E/N/U or a local E/N/U relative to the base station. UTM would have suited me, because I usually work in a modified grid system that is close enough to UTM that using it would have been a breeze. A local E/N/U aligned to true north requires a rotation by the grid convergence, which varies with distance eastward from the central meridian. A small pain, but I have programmed a correction into my spreadsheet, as well as compensation for geoid slope to convert height differences to orthometric differences, which I need because my results need to be real-world orthometric heights.

From Wikipedia (the spherical transverse Mercator approximation):

Conv = arctan( tanh(E / (k·a)) · tan(N / (k·a)) )

where k should be 0.9996 for UTM, a should be 6378137 m, and E is the easting measured from the central meridian. I can’t get it to work for me at the moment with test data, but I don’t use this formula anyway; it isn’t needed, because I have software and spreadsheets that will convert L/L/H directly to UTM grid coordinates.

EDIT: the convergence formula works OK. It was just a degrees-radians issue. It agrees within 15″ or so of rigorously calculated values out to near the edge of a UTM zone.
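
For anyone wanting to reproduce that check, here is a small sketch (my own; E is taken as the UTM easting minus the 500,000 m false easting, and angles are kept in radians to avoid the issue above):

    import math

    K0, A = 0.9996, 6378137.0  # UTM scale factor, WGS84 semi-major axis

    def convergence_deg(easting, northing):
        """Grid convergence (degrees) from the spherical approximation above."""
        e = easting - 500000.0  # easting relative to the central meridian
        gamma = math.atan(math.tanh(e / (K0 * A)) * math.tan(northing / (K0 * A)))
        return math.degrees(gamma)

    # e.g. ~100 km east of the central meridian at ~35 deg N
    print(convergence_deg(600000.0, 3900000.0))  # about 0.63 degrees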

Hi there. It is looking great. I would like to follow your tool for processing.

I’m refreshing my geodesy knowledge a little and I’m thinking about switching from WGS84 to ECEF XYZ. Even though I would have to rewrite the code completely, it seems to be the better approach, and the interpolation of the positions should be quite simple compared to the WGS84 system.

Do you think this is a good idea?

Anyhow, I had some problems finding ECEF XYZ in Photoscan. Do I have to use a local coordinate system?

You are welcome. It would be great if you could contribute some knowledge to improve the script, maybe some information about your workflow?

And if it works, I would expect a donation to the foundation mentioned above (I sympathize with effective altruism…).

I don’t think you should change. Given the short distances between PPK epochs (decimetres), doing a linear interpolation for your events between adjacent epochs in decimal latitude and longitude should be fine.

BTW, I found what I was looking for earlier: Topcon Tools for post-processing is perfectly happy to deal with all your epoch event flags at the top of a RINEX file. So you can offset the event records by your desired amount, format them as RINEX events, insert them all together at the top of a RINEX file, and process the kinematic track through Topcon Tools. I think I’ll still use my spreadsheet though, as the standard deviation estimates it provides are essential to my Agisoft Photoscan processing, especially now that I think using the LED to trigger events is imperfect in a systematic way that ensures some events will be as bad as the calculated accuracy estimates.