Not looking for anything highly detailed or technical. Just wondering how RTK corrections work in general.
My understanding is that the location errors at a base and a rover are similar at any given moment in time. If the base is not moving, it knows where it actually is, knows where the satellite system says it is, and can calculate a correction based on the difference between the two. It then sends this difference to the rover, so the rover can add the same difference to its GPS-derived position to calculate its actual position. If this simplistic description is wrong, I'd love a better explanation.
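To make the simplistic version concrete, here's a toy sketch in Python of what I'm picturing. The function names and numbers are entirely made up, and I gather real RTK works on raw satellite measurements rather than finished positions, so this is just the position-level idea:

```python
# Toy sketch of a position-domain correction (not real RTK).
# Positions are simple (east, north) tuples in meters.

def base_correction(true_pos, measured_pos):
    """Correction = where the base really is minus where GPS says it is."""
    return tuple(t - m for t, m in zip(true_pos, measured_pos))

def apply_correction(rover_measured, correction):
    """Rover adds the base's correction to its own GPS-derived fix."""
    return tuple(r + c for r, c in zip(rover_measured, correction))

# Toy numbers: base is surveyed at (0, 0) but GPS reports (1.2, -0.8).
# If the rover's error at the same moment is similar, it mostly cancels.
corr = base_correction((0.0, 0.0), (1.2, -0.8))
rover_fix = apply_correction((101.1, 49.3), corr)
print(rover_fix)
```

The whole scheme only works to the extent that the base and rover really do see nearly the same error at the same moment, which is the assumption I'm asking about.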
If that is more or less correct, then I wonder whether such a correction is applied to each satellite's measurement individually and the results then combined, or just to the single averaged position they collectively generate.
The reason I wonder relates to the size of the correction data stream: the former is much more data-intensive, the latter much less so.
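A rough back-of-envelope of what I mean by the size difference (all the numbers here are my own assumptions, not anything from a real correction format):

```python
# Toy comparison of the two correction-stream sizes per epoch.
SATS_IN_VIEW = 12     # assumed number of satellites tracked
BYTES_PER_VALUE = 4   # assumed bytes per correction number

# Option A: one correction value per satellite, every epoch.
per_satellite_bytes = SATS_IN_VIEW * BYTES_PER_VALUE

# Option B: a single 3-D position offset (x, y, z), every epoch.
single_offset_bytes = 3 * BYTES_PER_VALUE

print(per_satellite_bytes, single_offset_bytes)
```

So the per-satellite stream scales with however many satellites are in view, while the single-offset stream is a fixed three numbers per epoch.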
I got to thinking about this when I noticed that the reported location of my wife's cell phone seems to wander around in much the same way as my own, even though the two phones use different technologies. That suggests someone could write an Android app that uses another cell phone as a poor man's base and/or rover and get much better results than are typically available on a phone, though nowhere near as good as even a simple dual-band GNSS receiver, even without RTK.
Thoughts?