How do RTK corrections work?

Not looking for anything highly detailed or technical. Just wondering how RTK corrections work in general.

My understanding is that location errors at a base and a rover are similar at any given moment in time. If the base is not moving, it knows where it actually is, knows where the satellite system says it is, and can calculate a correction to the satellite signal based on the difference between the two. It then sends this difference to the rover so the rover can add this same difference to its GPS-derived position to calculate its actual position. If this simplistic description is wrong, I’d love a better explanation.

If that is more or less correct, then I wonder whether such a correction is applied to a whole bunch of satellite positions calculated individually and then averaged, or just to the average location they collectively generate?

The reason I wonder relates to the size of the correction data stream. The former is very data intensive, the latter much less so.

I got to thinking about this when I noticed that the location of my wife’s cell phone seems to wander around more or less the same as my own, even though they are different phone technologies. That suggests that someone could write an Android program that uses another cell phone as a poor man’s base and/or rover and get much better results than are typically available on a phone, though nowhere near as good as even a simple dual-band GNSS receiver without RTK.



You’ve got a grasp of the basic concept. Most of the time the base has a defined coordinate and does not move. It’s really hard to explain well in a simple manner because there are several factors involved with technical terms, but I’ll try.

Each receiver has a connection to a satellite, and the satellite communicates on a specific wavelength. They each count the number of waves, and the base is able to resolve the ambiguity in real time. That, combined with the baseline between the two, creates an accurate triangulation. Imagine that each sees 20 satellites, and with dual-frequency, how many triangles you have. Also, they each see the satellites slightly differently, so your geometry becomes even stronger. Distance can become a problem when the two get so far apart that they are not seeing all the same satellites.

I hope I did it justice.


You probably did, but I don’t know enough to know. How does that saying go? You don’t know what you don’t know…

So when you talk about triangulation, do you mean the triangle formed by pairs of satellites?

I had assumed the distance to a satellite was calculated from the time the signal takes against an accurate clock, and that two satellites provided one good triangle, a third was required to get two triangles (which provides a lat/lon), and four satellites provide three triangles to get lat/lon and height. But again, maybe I’m way off.

Anyway, the question that arises from all this is how multiple fixes (from more than 3 or 4 satellites) get averaged. Is it before or after being sent to a rover to form one single fix at the rover? In other words, does the rover calculate a fix based on the average of corrected locations, each calculated individually from corrections for each triangulation, or does it apply one correction to the calculated average of all the satellite triangles? The former involves a lot more data than the latter.

I use that saying all the time! I’m not sure, but it seems like you might be thinking about it backwards. The receivers are two points of the triangle, and the carrier cycle is what is measured by each receiver. Because of the software, they need a certain number of satellites (triangles) that meet a certain criterion to create a fixed solution. Normally seven in the equipment that I have used.


You totally lost me on that one.

How do the two receivers know the baseline BEFORE they talk to each other?

I thought each receiver triangulated on satellites, calculated its own position from those triangles, and then the base calculates an error from the measurements and its known position and sends that info to the rover so it can correct its own triangulated position by the same amount…

Perhaps I need to go back to the basics of how this all works before I think about ways to improve phone gps accuracy.

The base and rover start talking to each other as soon as the radio signal is ready whether they are initialized or not. Within seconds corrections are already being sent.

Each receiver is calculating its position satellite by satellite based on the time a signal was transmitted, when it was received, and the satellite’s position in the data stream. There is no communication from satellite to satellite, so there is no triangulation. Trilateration is the proper term here because there are no angles involved, just distances. It takes the distances from three independent satellites to calculate a location (the three ranges actually intersect at two points, one of which is out in space) and a fourth to define an altitude or single out the one on Earth. When you add a base station to the equation, you then have the relationship between it and the rover as calculated by three satellites each.
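The trilateration described above can be sketched in code. This is a toy Gauss-Newton solver under heavy simplifying assumptions: perfect measurements, no atmosphere, and one shared receiver clock bias expressed in metres (the extra unknown that makes the fourth satellite necessary). The function names and satellite geometry are made up for illustration.

```python
import math

def gauss_solve(A, b):
    """Naive Gaussian elimination with partial pivoting (4x4 here)."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        for i in range(col + 1, n):
            f = M[i][col] / M[col][col]
            for j in range(col, n + 1):
                M[i][j] -= f * M[col][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def solve_position(sats, pseudoranges, iters=15):
    """Toy pseudorange trilateration via Gauss-Newton.

    sats: list of (x, y, z) satellite positions in metres.
    pseudoranges: measured distances, all inflated by one shared
    receiver clock bias (in metres). Solves [x, y, z, bias].
    Ignores atmosphere, orbit error, multipath -- illustration only.
    """
    est = [0.0, 0.0, 0.0, 0.0]
    for _ in range(iters):
        J, r = [], []
        for (sx, sy, sz), pr in zip(sats, pseudoranges):
            dx, dy, dz = sx - est[0], sy - est[1], sz - est[2]
            d = math.sqrt(dx * dx + dy * dy + dz * dz)
            J.append([-dx / d, -dy / d, -dz / d, 1.0])  # line-of-sight + clock
            r.append(pr - (d + est[3]))                 # measured minus predicted
        delta = gauss_solve(J, r)                       # 4 sats -> square system
        est = [e + dl for e, dl in zip(est, delta)]
    return est

# Synthetic check: receiver at (1000 km, 2000 km, 3000 km) with a
# 1000 m clock bias baked into every pseudorange.
truth = (1.0e6, 2.0e6, 3.0e6)
bias = 1000.0
sats = [(20.0e6, 0.0, 0.0), (0.0, 20.0e6, 0.0),
        (0.0, 0.0, 20.0e6), (12.0e6, 12.0e6, 12.0e6)]
pr = [math.dist(s, truth) + bias for s in sats]
est = solve_position(sats, pr)
print(est)
```

The solver recovers both the position and the clock bias, which is the sense in which four distances nail down three coordinates plus time.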


Ouch… My head hurts now. I definitely need to do more research.

I understand your point about triangulation vs. trilateration. I “THINK” I was just using the wrong word. But maybe there is more to it than that…

I never did think the satellites talked to each other. I thought the only thing each satellite contributes is a distance, calculated by the Reach from the time interval the radio signal takes to reach it, based on a time stamp inserted in the signal by the satellite vs. the clock in the Reach. I used the word triangulate because the Reach can use three of these distances to compute where it is. At least that was my understanding. But you are throwing in some wrinkles that I would like to understand better. That said, I’m not sure it really matters for the point of my questions.

The purpose of wanting to know more about how the corrections work is to know if it is practical to write an app for an Android phone that can quasi-correct the phone’s location using two phones. Sort of a poor man’s RTK.

If both phones produce an LLH location, and one of them is placed on a fixed, known location, it could calculate a correction from the known location to the calculated location in terms of +/- LLH, which could then be streamed to the other phone to correct its measured LLH. If that could be done, it might be possible to improve phone GPS accuracy to a much better number (perhaps +/- a meter or so instead of the 5 to 10 meters that I see now).

I arrived at this thought by noticing how my wife’s phone and mine seem to wander around in much the same way as time passes. So it struck me that if their error was the same, I might be able to calculate it and use it.

That thought led me into trying to understand how and when the RTK corrections are done.

This comment of yours “suggests” that the correction is done for each location BEFORE averaging for a fix. If so, my phone idea is probably dead in the water. It would be much better (for the purposes of a phone app) if each device established an average location first, and the rover phone then calculated its actual position from one correction between the two devices’ averages, instead of correcting each individual group of satellite calculations and averaging the corrected locations afterward.

If you already use that one all the time, perhaps you will like three other sayings that are similar that I like.

  1. “The more you know, the more you know you don’t know.” That just about sums up my knowledge of farming. I have learned so much since I retired 15 years ago that I have finally reached that magic point where I know that I know absolutely nothing about anything.

  2. “Never overestimate how much the chief engineer knows, and never underestimate how smart they are…”

  3. “When you are dead, you don’t know you are dead, and you feel nothing. It’s just hard on everyone else around you… It’s the same when you are stupid.”


I think the main hurdles are making sure the phone has autonomous GPS and not A-GPS (cellular assisted), making sure they are dual-band, the quality of the GNSS chipset (what satellites it can track and how well), and the lack of a quality antenna, which leads to an inability to deal with interference. If you could find a device with really good capabilities, I don’t see why you couldn’t cast corrections if you had the right software. I’ve seen reports that the Xiaomi Mi 9 is really good at tracking. Some phones have L5 capability, so they can get a few extra satellites from the E5 signals that are already available. L5 GNSS is a whole topic in and of itself and will be great for us once it is fully released because of its higher transmission power and its increased ability to mitigate interference.

I may not have worded it well. The correction isn’t being done, but each receiver has what it thinks its position is. Since the base has been given a coordinate and does not move, it is the anchor, and the rover knows where it is in relation to the base.


Here you go, pretty simplified explanation


Thanks Bryan, the article describes the process in very easy to understand language. It is more or less exactly what I assumed about how RTK (and other methods) work. Of course, that assumes that I really did understand what the author was saying! I think so, but you never know…

Unfortunately it doesn’t address the sequence of how and when corrections are calculated which is really what I want to know.

I agree that cell phone GPS will never equal a good system like Emlid’s. And I’m not thinking anything remotely similar is possible. I’m just kinda hoping that a cell phone might be able to achieve a meter or two using a cellular “RTK”-like system.

I more or less understand the weaknesses of cell phone GPS vs. other systems. If I am honest with myself, the truth is that I don’t have time to take on a project like that right now anyway. If I do, it will probably have to wait till next winter.

In the meantime, I’d appreciate it if you would keep an eye open for anything that details how corrections are processed in a system like the Reach, and in what order.


How I explain to other farmers,

The base is stationary and it knows the coordinates of where it should be, so any motion it sees is BS and an error.

Corrections are a report of the error the base is seeing sent to the rover.

The rover is in motion and does not know if its movements are valid or in error. When the rover receives the corrections from the base, it adds or subtracts those errors from its motion. The rover now knows its true path.

Most simplistic way of looking at it.

In a single-base solution, I agree with this statement. I have found VRS systems very accurate. If you set up your own base and use Emlid’s or another server client, depending on the distance you are trying for, you should easily get the meter or two accuracy.

I understand that. But I was specifically referring to the development of an app for smart cell phones. No Emlid or other GPS hardware allowed.

See full thread above

I think I understand that part. See full thread above. But I like your simplified explanation. I may use it for others who ask me.

But it’s not really what I was asking.

I am investigating the correction process to see if I can write an Android app to do a cell-phone-to-cell-phone correction similar to RTK, but using the two smartphones’ GPS receivers. I have no personal interest in a system like that. But I think the app would be cool if it can be done. I enjoy doing cool stuff!

Before I begin, I want to know the sequence of calculations that a regular RTK system uses.

Does the base calculate the correction for each satellite lock (4 satellites), then send that info to the rover for every set of locks, so the rover calculates a corrected position for each of its own 4-satellite locks and then averages all of them to get a fix, or

Does the base average all of the various 4-satellite locks to get an average fix, subtract that from where it knows it is, and then send that much smaller amount of data to the rover, which calculates its own average location and then uses the received correction data to correct it?

In other words, when do the correction and averaging take place? Which one comes first?

It seems to me that the latter sequence (correct after averaging) should be faster and less data intensive. However, it may also be that wide variations in errors for some 4-satellite sets might dictate the longer, more complex solution.
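A back-of-envelope comparison gives a rough sense of the data volumes the two schemes imply per correction epoch. The byte counts below are assumptions for illustration only, not actual RTCM message sizes.

```python
# Rough size of the two correction schemes, per epoch.
# Byte counts are assumptions for illustration, not RTCM specs.
NUM_SATS = 20          # satellites in view (assumed)
BANDS = 2              # dual-frequency
BYTES_PER_OBS = 16     # assumed code + carrier observables per sat/band

per_satellite_bytes = NUM_SATS * BANDS * BYTES_PER_OBS  # observation-style stream
single_offset_bytes = 3 * 8                             # three 64-bit LLH deltas

print(per_satellite_bytes, single_offset_bytes)
```

Under these assumptions the per-satellite stream is well over an order of magnitude larger, though at one epoch per second either is tiny by modern radio-link standards.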

Not only that, but it would mean that I could access Google’s GPS API without a lot of messing around to access the raw data. In other words, a much less complicated app.

You know, after explaining that to you, I’m thinking maybe I shouldn’t care how the 1cm systems do it. Just do the corrections on the averages and be done with it! After all, it’s a cell phone system. No sense letting the perfect be the enemy of the good.

The magic of RTK is both Hardware and Software, so unless your cellphone contains a F9P or equivalent you are out of luck.

The closest currently is viewing the M2 or RS2 on the cell screen.

You could write an app that reads the M2’s information and uses its data.

Sorry, didn’t catch all that. So basically you want to create an Android RTK processor App for a phone using the internal phone GPS. I would assume using some sort of NTRIP.

That is exactly what he is asking, or how I understand it

Here is a quick look at an RTK receiver; things have to happen mighty quick.
What I draw from this block diagram is that the GNSS engine and ambiguity resolution require two separate processors just for them to function.

Not sure what RTC is, but I think it may be yet another processor.

I think all of these have to play in real time. GNSS is just angle and clock-signal differential math; easy to say, but I think the poor phone would be crushed under it plus correction processing. Then you would need to know all of the hardware bias information for each phone the app would be installed on.

Your absolute biggest hurdle is sharing resources with the OS.

RTK chips are now about the size of your thumb, in 10 more years it will be a grain of rice.

Yes. But not using NTRIP. That would require access to the raw data. I just want access to the GPS output, which is a simple call to the OS with permission. The correction will come from another phone.

I believe RTC stands for Real Time Clock. But that’s just an educated guess.

So ya, if I tried to do real RTK, Android would probably choke. That’s why I’m interested in learning how the system works. I need to find an easier and faster way to do things. That said, the Android OS is no slouch. A well-written app can scream. The problem in my eyes isn’t the speed. It’s the requirement to stick to API-based apps. Running raw machine code on any of the new smartphones would run circles around this issue. But such is life. With rampant security concerns and cybercrime activity, I think the days of running raw code are over.

My idea is really pretty simple. The app would make an API call to the system GPS and get an LLH position, get a plain-vanilla correction from another cell phone set up as a base, correct the measured position, and then display the corrected position. The plain-vanilla correction would take the form of three LLH adjustment differentials (one for each component) that are simply the measured position minus the actual position. In this simple form, such a system could make hundreds of corrections per second, so once a second would be a walk in the park. This could even be spoofed so other programs could benefit from it too.
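That flow can be sketched in a few lines. Note this is position-domain differencing (old-style DGPS, not true RTK, which works on raw per-satellite observations), and it only helps to the degree that both phones’ errors really are common. Here the correction is defined as truth minus measurement so the rover simply adds it; all names and coordinates are made up for illustration.

```python
# Sketch of the proposed phone-to-phone correction, in the position
# domain (DGPS-style, not true RTK). Names and numbers are invented.

def base_correction(known_llh, measured_llh):
    """Base phone: correction = surveyed truth minus what GPS reports."""
    return tuple(k - m for k, m in zip(known_llh, measured_llh))

def apply_correction(rover_llh, correction):
    """Rover phone: add the base's error estimate to its own fix."""
    return tuple(r + c for r, c in zip(rover_llh, correction))

# Example epoch: assume both phones see roughly the same error.
known      = (45.000000, -93.000000, 250.0)   # surveyed base point (LLH)
base_meas  = (45.000030, -93.000020, 252.0)   # base phone's GPS fix
rover_meas = (45.001230, -93.000520, 251.5)   # rover phone's GPS fix

corr  = base_correction(known, base_meas)     # streamed to the rover
fixed = apply_correction(rover_meas, corr)    # rover's corrected fix
print(corr)
print(fixed)
```

The arithmetic is trivial, so the once-per-second rate mentioned above is easily achievable; the open question is only how correlated the two phones’ errors actually are.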

It wouldn’t be super accurate, but it might be an order of magnitude better than cell phones are now. That’s a worthy trip.