Why does decimation degrade my solution?

I set up two static Reach units, base and rover, both logging raw data at 1 Hz for 24 hours. I then converted the logs to RINEX and post-processed them using the command-line versions of the RTKLIB tools (specifically the ‘demo5 b29a’ custom build from rtkexplorer.com). The results are excellent:

[plot: undecimated solution]
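For reference, a sketch of the pipeline described above. The file names and the `ubx` raw format are assumptions (adjust for your own logs); the flags are the standard convbin/rnx2rtkp ones.

```shell
# Convert each receiver's raw log to RINEX obs (and nav from the base log).
# File names and the 'ubx' input format are assumptions.
convbin.exe -r ubx -o base.obs  -n base.nav  base_raw.ubx
convbin.exe -r ubx -o rover.obs rover_raw.ubx

# Post-process with the supplied .conf file: rover obs, base obs, then nav
rnx2rtkp.exe -k my_settings.conf -o solution.pos rover.obs base.obs base.nav
```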

But 24 hours at 1 Hz is a lot of data. Out of curiosity, I wondered whether decimating the data would still give acceptable results while reducing file sizes and processing time. First I tried the -ti flag on convbin.exe with a value of 30, to drop the epoch interval from 1 s to 30 s. Total failure: no obs file was produced from the base’s raw data (though, curiously, the same command worked fine for the rover). I’ve no idea why.
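Concretely, the failed attempt was along these lines (-ti is convbin’s observation epoch interval in seconds; file names and the `ubx` format are hypothetical):

```shell
# Try to decimate during conversion: write epochs every 30 s instead of every 1 s
convbin.exe -ti 30 -r ubx -o base30.obs -n base.nav base_raw.ubx
# The same command worked on the rover log, but produced no obs file for the base
```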

So then I tried a two-step approach: run convbin as normal but then decimate the obs file itself using teqc.exe with ‘-O.int 1 -O.dec 30’. This seemed to work: both the resulting obs files contained 24 hours of data at 30s intervals. However, post-processing these files gives much worse results than the undecimated data:

[plot: decimated solution]
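The two-step approach described above, as a concrete sketch (teqc writes RINEX to stdout, so the decimated file is captured by redirection; file names are hypothetical):

```shell
# Step 1: convert at the full 1 Hz rate, as normal
convbin.exe -r ubx -o base.obs base_raw.ubx

# Step 2: decimate the obs file itself to 30 s epochs with teqc
teqc.exe -O.int 1 -O.dec 30 base.obs > base30.obs
```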

The fix rate is now only 12%, compared with 93% for the undecimated data, and there are outliers metres away.

Why should decimating the RINEX data lead to such a serious degradation in overall solution quality? I was expecting to see roughly the same result but with 1/30th the number of points! What am I missing?

My decimated and undecimated data is available at https://www.dropbox.com/s/ctqhfnf0mfuex4h/Decimation.zip?dl=0. The zip file also contains the .conf file that I am passing to rnx2rtkp.exe.


decimation (the word sounds so destructive!)


I have three theories for you that are totally speculative and not based on any evidence:
  1. Overall, the quality of the received data was not so great, but there was a short time where the data was good enough to produce a fix and subsequently hold it. After decimation there was no longer a group of good data which would produce a fix, so you got mostly float.

  2. There could be some kind of harmonic in the data, where there is more information at certain intervals than others (for example: data from certain satellites only appears on odd numbered intervals). If that is true, then sampling from only some intervals not only thins the data, but it also effectively discards a whole subset of the data.

  3. The algorithm doing the decimation is not removing every nth epoch, but removing every epoch whose time stamp does not match an expected mark. If the time stamps in the data do not align with what the algorithm expects, it could discard far more data than intended.
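Theory 3 is easy to illustrate. The sketch below is hypothetical code, not what teqc or convbin actually do: it contrasts keeping every nth epoch with keeping only epochs whose timestamps land exactly on a 30 s mark. If the receiver’s epochs are offset from the whole second, the time-mark approach silently discards everything:

```python
def decimate_every_nth(epochs, n):
    """Keep every nth epoch, regardless of its timestamp."""
    return epochs[::n]

def decimate_by_time_mark(epochs, interval, tol=0.0):
    """Keep only epochs whose timestamp falls within `tol` seconds
    of a multiple of `interval`."""
    kept = []
    for t in epochs:
        offset = t % interval
        if min(offset, interval - offset) <= tol:
            kept.append(t)
    return kept

# Two minutes of 1 Hz epochs, but offset 0.2 s from the whole second
epochs = [i + 0.2 for i in range(120)]

print(len(decimate_every_nth(epochs, 30)))     # 4 epochs survive
print(len(decimate_by_time_mark(epochs, 30)))  # 0 epochs survive: none hit a mark
```

With a tolerance (or with timestamps exactly on the second) the two methods agree, which is why such a bug would only show up on some logs.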


What happens if you use the interval checkbox in RTKpost and set the interval to 10 seconds?


Thanks to Simon’s suggestion, I discovered that I can use the -ti flag to set the time interval for rnx2rtkp, without having to decimate the data at all. That is sufficient for my ultimate purpose, which is to combine several consecutive RINEX files into one (i.e. one for the base and one for the rover) and process them as a single job, rather than processing each overlapping pair of files separately. Beyond a certain file size, rnx2rtkp would fail to produce any output, apparently because of insufficient memory, which is why I thought I needed to decimate the data. It turns out the -ti flag can be used instead to reduce the amount of data to be processed.
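For anyone else hitting the same limit: the flag in question is rnx2rtkp’s -ti (solution time interval in seconds), so a single command like this (file names hypothetical) processes the full-rate files at a 30 s interval without pre-decimating anything:

```shell
rnx2rtkp.exe -ti 30 -k my_settings.conf -o solution.pos rover.obs base.obs base.nav
```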

