Has anyone measured how RTK rover accuracy degrades with increasing distance between the rover and the RTK base station? Everyone repeats the 10 km radius, but do we actually have numbers for it? Have any scientific tests been performed? I haven't found anything so far…
The best would be a measurement against a professional base station (a governmental NTRIP server, etc.) so that possible glitches in amateur base station hardware do not skew the measurements.
Someone else will best answer your question I think.
What I can say is that I used two geodetic control markers (GCMs) that are 9 km apart to measure a point roughly in the middle. I repeated this on different days. In my observations, the furthest I have been out is 3 cm.
Part of that error could have been imprecise antenna placement over the marker, perhaps 0.5 to 1 cm for each antenna.
I have not tried a baseline beyond 10 km.
10 km is mentioned all the time because it was a guideline referenced by many survey standards. You won't get a definitive number; there are many factors that determine your position accuracy in relation to the distance from your base station. This distance-dependent error is usually expressed in ppm (parts per million). Bottom line: in most cases you will run into longer initialization times before your accuracy has degraded by a substantial amount. You are better off getting the coordinates of your base position as accurate as possible, as they will have a greater effect on your RTK solution.
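To illustrate the ppm idea: a distance-dependent error of 1 ppm works out to exactly 1 mm of error per km of baseline, on top of a fixed component. The spec values below (10 mm fixed + 1 ppm horizontal) are illustrative assumptions, not from any particular receiver datasheet.

```python
def rtk_horizontal_error_mm(baseline_km, fixed_mm=10.0, ppm=1.0):
    """Rough expected horizontal RTK error: fixed part + ppm * baseline.

    1 ppm of the baseline is exactly 1 mm per km, so the ppm term
    reduces to ppm * baseline_km (in mm). Spec values are assumptions.
    """
    return fixed_mm + ppm * baseline_km

for d_km in (1, 5, 10, 30):
    print(f"{d_km:>3} km baseline -> ~{rtk_horizontal_error_mm(d_km):.0f} mm expected error")
```

With these assumed numbers, even at 30 km the distance-dependent part is only a few centimetres, which matches the point above that initialization time usually suffers before accuracy does.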
Recently I helped a student work this out. We designed the experiment on two roads that share a common intersection and fan out at about 70 degrees apart. Stations were established every 10 km up to chainage 60 km. Static observations were done, then RTK shots were taken using a base station positioned at the road intersection, i.e. km 0. When the static results were compared to the RTK shots, a linear decay of accuracy emerged. The degradation follows an average linear trend of y_m = 0.0177·x_km + 0.0106, where y_m is the 2D std dev at 2 sigma in metres and x_km is the baseline length in kilometres.
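Evaluating the fitted trend above at a few baselines makes the scale of the degradation concrete (this just plugs numbers into the reported regression, nothing more):

```python
def sigma_2d_m(x_km):
    """2D std dev (2 sigma, metres) vs baseline, per the fitted linear trend."""
    return 0.0177 * x_km + 0.0106

for x in (0, 10, 30, 60):
    print(f"{x:>2} km -> {sigma_2d_m(x) * 100:.1f} cm (2 sigma)")
```

So per this fit, you would expect roughly 19 cm (2 sigma) at the often-quoted 10 km, growing to about a metre at 60 km.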
Thanks for the function!
From trying EUREF base stations (post-processing static), I have gotten a much lower std dev, even using 1 hour samples.
Did you try the same data, using postprocessing instead?
I forgot to mention that the equipment used was five tri-frequency GPS+BDS+GLONASS receivers. Of course, GLONASS is just dual frequency. Naturally, the static data was post-processed. I think the shortest observation time per station was about 2 hours. The location was in central Kenya.
Maybe that can explain the difference in std dev. How was your degradation?
Yesterday I tried using 1 hour of 1 Hz CORS (IGS) data with final clocks and orbits. I had a ~650 km baseline, a 2D std dev of 0.17 metres, and was about 60 cm off the absolute location of the CORS station (so the deviation was much smaller than the absolute position error).
This was using RTKpost with a float solution (couldn't get a fix, probably due to the very long baseline), processed as L1 only to mimic Emlid equipment as much as possible.
Interesting. I did not post-process the data in RTK mode, and I certainly didn't use the final orbits. The RTK fixes were logged at 1 Hz and averaged for about a minute to mimic how people ordinarily use RTK in classical surveying. But now that you have mentioned it, I will do RTK post-processing and see how it turns out.
Did you average all the RTKLIB float solutions in the entire observation duration?
Yes, all float solutions used for the whole duration.
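For anyone wanting to reproduce this kind of averaging: a minimal sketch that averages all float epochs from an RTKLIB .pos file and reports the 2D scatter. The column layout assumed here (lat, lon, height in fields 2-4, quality flag in field 5, Q=2 meaning float) matches the default LLH .pos output, but check your own file's header before relying on it.

```python
import math

def average_float_epochs(path):
    """Average float (Q=2) epochs from an RTKLIB .pos file.

    Returns ((mean_lat, mean_lon) in degrees, 2D std dev in metres).
    Assumes the default LLH format: date time lat lon height Q ns ...
    """
    lats, lons = [], []
    with open(path) as f:
        for line in f:
            if line.startswith('%'):
                continue  # skip header/comment lines
            fields = line.split()
            if len(fields) < 6 or fields[5] != '2':  # keep float (Q=2) only
                continue
            lats.append(float(fields[2]))
            lons.append(float(fields[3]))
    if not lats:
        raise ValueError("no float epochs found")
    n = len(lats)
    mlat, mlon = sum(lats) / n, sum(lons) / n
    # Convert degree residuals to metres (small-angle approximation)
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(mlat))
    var2d = sum((la - mlat) ** 2 * m_per_deg_lat ** 2 +
                (lo - mlon) ** 2 * m_per_deg_lon ** 2
                for la, lo in zip(lats, lons)) / n
    return (mlat, mlon), math.sqrt(var2d)
```

Averaging only the fixed (Q=1) epochs instead is a one-character change to the quality filter, which makes it easy to compare float vs fixed scatter on the same log.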