Averaging

It’s not vague, it’s a good question, but there is a lot more going on under the hood to consider, and potentially do yourself.

A receiver can’t see the buildings and trees, so it’s difficult for it to determine which signals are good and which are multipath-corrupted and should be rejected. Receivers like the Trimble R12i do have incredibly powerful, proprietary software algorithms (ProPoint) that can sense & eliminate most multipath effects, e.g. in heavy vegetation, but those benefits come at a ~$20k cost.

Otherwise, most receivers provide time-proven simple averaging. The idea is that over time, with enough data, the impact of the outliers reduces and does not significantly affect the statistical result; they will not “ruin” it. Over time the physical effects also tend to net out: as the satellites move the multipath varies, the objects doing the reflecting change, and so do the angles and distances to the receiver. Time is your best friend.
As an example of this, the requirements to calibrate a survey mark in my state of Australia are a minimum static observation of 1 hour with 5-second logging, or two independent continuous static observations of 5 minutes each in metro areas (10 minutes regional) with 1-second logging, taken at least one hour apart.
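To give a feel for the “more data tames the outliers” idea, here is a minimal sketch (Python with NumPy, purely made-up numbers, nothing to do with any vendor’s firmware) of a plain running average over epochs where a small fraction of the fixes carry a multipath error:

```python
# Illustrative only: a plain running average of per-epoch fixes where ~5 % of
# epochs are hit by a multipath error. All numbers below are invented.
import numpy as np

rng = np.random.default_rng(1)

n_epochs = 1800                                   # e.g. 30 min of 1 Hz fixes
noise = rng.normal(0.0, 0.01, n_epochs)           # ~1 cm RTK noise
multipath = np.zeros(n_epochs)
bad = rng.random(n_epochs) < 0.05                 # assumed fraction of reflected epochs
multipath[bad] = rng.normal(0.0, 0.15, bad.sum()) # ~15 cm multipath excursions

epochs = noise + multipath                        # true position taken as 0
running_mean = np.cumsum(epochs) / np.arange(1, n_epochs + 1)

for t in (120, 300, 1800):                        # 2 min, 5 min, 30 min
    print(f"{t:>5d} s: averaged position error = {abs(running_mean[t - 1]) * 1000:.1f} mm")
```

In reality RTK errors are correlated over time rather than independent epoch to epoch, so a real average improves more slowly than this idealised picture, which is exactly why the longer observation times below matter.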

If you are interested in real cm-level RTK accuracy (i.e. not just face-value false fixes and other distortions), then 2 minutes is way too short. If you are happy to be within a handful of cm and are pressed for time, it’s your own value judgement. Personally, for anything of interest I collect an absolute minimum of 3 minutes, and upwards from there to 30 minutes or more for something legally or dimensionally important.
To give you some idea why, you can see the typical GNSS convergence curve in the results of my own testing of precise geodetic equipment over a relatively longish 13.8 km baseline, post-processed through Emlid Studio, here: More functions and better usability request for Emlid Studio - #15 by Wombo
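If you want to build that kind of convergence curve from your own data, a rough way to do it (assuming you can export per-epoch east/north/up solutions to a CSV; the file name and column layout below are just placeholders) is:

```python
# Sketch of a convergence curve from per-epoch solutions.
# Assumes a hypothetical "epoch_enu.csv" with columns: time_s, east_m, north_m, up_m.
import numpy as np
import matplotlib.pyplot as plt

t, e, n, u = np.loadtxt("epoch_enu.csv", delimiter=",", skiprows=1, unpack=True)

# Running mean of the horizontal position after 1, 2, ..., N epochs.
counts = np.arange(1, len(t) + 1)
mean_e = np.cumsum(e) / counts
mean_n = np.cumsum(n) / counts

# Offset of each running mean from the final long-observation average,
# used here as the best available estimate of the true position.
dist = np.hypot(mean_e - mean_e[-1], mean_n - mean_n[-1])

plt.plot(t / 60.0, dist * 100.0)
plt.xlabel("Observation time (minutes)")
plt.ylabel("Horizontal offset from final average (cm)")
plt.title("Convergence of the averaged position")
plt.show()
```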

Only Emlid can answer whether they apply any additional processing algorithms on top of the RTK averaging, but I wouldn’t expect a lot of detail, as anything they do would be commercially sensitive.

Otherwise, your own practices will have a far more significant effect on reducing the impact of outliers in difficult environments. The time-proven ones are:

  1. Time, as long an observation / average as you possibly can.
  2. If it’s really important then drop the RTK and do static.
  3. An onsite base, minimal baseline in the same environment.
  4. Reference marks to validate your results & re-observe as needed (a minimal check is sketched below).
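
On point 4, even a very simple tolerance check against a known mark catches most gross problems. A minimal sketch (the mark ID, coordinates and the 30 mm tolerance are placeholders, pick values appropriate to your job):

```python
# Check an observed coordinate against a known reference mark.
# Mark ID, coordinates and tolerance are placeholder values only.
import math

REF_MARKS = {
    # mark id: (east_m, north_m) in your project grid
    "PM1234": (314159.265, 5827182.818),
}

def check_against_mark(mark_id: str, obs_east: float, obs_north: float,
                       tol_m: float = 0.03) -> bool:
    """Return True if the observation agrees with the reference mark within tol_m."""
    ref_east, ref_north = REF_MARKS[mark_id]
    delta = math.hypot(obs_east - ref_east, obs_north - ref_north)
    print(f"{mark_id}: horizontal misclose = {delta * 1000:.0f} mm")
    return delta <= tol_m

# If this comes back False, re-observe before trusting the rest of the session.
check_against_mark("PM1234", 314159.281, 5827182.804)
```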

I understand your original question is about the software algorithms, but if you are not doing point number 4 then that question, and the outliers, become somewhat moot.
