I have a basic question about rovers/base. If I can be directed to an educational resource that would be appreciated.
I don't understand occupation times. Suppose you have two Reach units. The base position will be determined by referencing a local CORS. How can a rover accurately determine its position relative to the local base with only a couple of minutes of observation, while the base needs an hour to determine its position relative to the CORS? From my limited understanding the calculations are the same; only which receiver is called the rover and which is called the base differs. Thanks.
I think most of the time when people speak of long occupations for a base point, they are doing so for PPK: collecting a log for an hour or so and processing it against the logs from a CORS. If you have good RTK access to a CORS, there is really no need for a local base until you start to get out of range for the accuracy you need. If you wish, you can establish a local base point by using your rover to RTK from the CORS, and the only reason you would occupy it any longer than a normal control point is the longer baseline. We traditionally shoot control points with RTK off a local base for 1 minute, whereas you might occupy a point for 5 minutes when doing RTK from a CORS, though that occupation time depends on the distance from the CORS. That said, I have seen many different methods, and the time and method depend on the accuracy you need and the purpose of the survey. Our DOT has its own requirements specified for GNSS surveying.
Thank you for the response. That doesn't really address the core theory question, though. Why does it take an hour or more to establish the position of the local base relative to the CORS, yet only minutes to establish the position of the rover relative to the local base? I have run through the process for both in rtkpost, and the calculation appears to be the same. Is it that the rover is very close to the local base, so less time is required to gather reliable relative positioning, whereas the local base is much farther from the CORS, so more time is required? That's my intuitive sense, but I have never seen this specific theory question answered.
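One way to sanity-check that intuition numerically (a toy model with made-up rule-of-thumb numbers, not RTKLIB's actual algorithm): treat each epoch's relative-position error as noise whose sigma follows the commonly quoted "1 cm + 1 ppm of baseline" form, and ask how many averaged epochs it takes to reach a target precision.

```python
import math

def epoch_sigma_cm(baseline_km, base_cm=1.0, ppm=1.0):
    """Per-epoch error sigma: a fixed part plus a distance-dependent part.
    1 ppm of 1 km is 1 mm (0.1 cm). These coefficients are assumptions."""
    return base_cm + ppm * baseline_km * 0.1

def epochs_needed(baseline_km, target_cm=0.5):
    """Epochs to average so the mean error shrinks to the target,
    using the sigma/sqrt(n) behavior of an average of independent epochs."""
    sigma = epoch_sigma_cm(baseline_km)
    return math.ceil((sigma / target_cm) ** 2)

print(epochs_needed(0.5))   # short rover-to-base baseline: a handful of epochs
print(epochs_needed(30.0))  # long base-to-CORS baseline: an order of magnitude more
```

In reality the per-epoch errors are strongly time-correlated (the atmospheric error over a long baseline changes slowly), so real occupations have to stretch much further than the plain sqrt(n) arithmetic suggests, which is part of why hour-long logs are common for base points.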
To be clear, I'm not asking for a how-to guide or a set of instructions. There are great resources for that. My goal is to understand the background theory.
Like I said above, it depends on whether you are using PPK or RTK and on the proximity to the base being used. An RTK occupation to determine a local base point from a CORS should not take much longer than a rover shot off a local RTK base, but yes, that increased time is due to the longer baseline. If you are using PPK, it comes down to the accuracy you want on the points the rover is collecting. Normally you would occupy a point longer in PPK than in RTK, but also the absolute precision of a normal topo shot is not as critical as that of the point upon which your entire survey is based.
OK, thanks for that. So I guess my more focused question would be: why does PPK need longer occupation times than RTK, all else being equal (conditions, distance, etc.)? I thought they were performing the same kind of corrections, just one in real time and one after the fact. Or is it a completely different computational approach? Thanks.
Personally I don't think it should take that much longer if you are close enough to the base to RTK. I don't know anyone who PPKs when RTK is available, unless they are verifying that the RTK was true. RTK carries corrections the entire time, so the point should be precise for every shot that is averaged, whereas PPK could still be trying to initialize and build an RTK-like fixed condition. I think PPK is the preferred method over long distances, particularly when there is no site monumentation and you are trying to establish a globally accurate point. Then again, that has just been my personal experience. I usually have logging on whether or not I am running RTK, so I have the backup. At times I also toggle logging during RTK to isolate the shots and make the PPK easier to decipher: turn on logging, take the RTK shot, and when the observation is done, turn logging off and move to the next point. This is good for control and GCPs, but probably not worth the trouble elsewhere. I am sure someone will chime in who can answer the question to better satisfaction.
As far as static observations go, it's simply a matter of statistics and the averaging of epochs in the solution for each point. As posted in another thread, the more epochs of clean observations you have, the greater the strength of the solution for the point.
For example, submitting data to NGS for the current "GPS on Benchmarks" campaign requires a minimum of 4 hours of dual-frequency data. Submitting data to NGS for static processing (OPUS) requires a minimum of 2 hours. OPUS Rapid-Static is more relaxed, with a minimum of 15 minutes of data, but it is based on stricter requirements and on CORS availability. Also, PDOP (overall viewable satellite geometry) plays an important role in the solution of the point: lower PDOP = better geometry for the solution.
In both processors above, "clean," multipath-free data is required. Longer observation times with low PDOP values = the optimum solution for the point.
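The averaging effect described above can be illustrated with a quick simulation (the noise values are made up purely to show the trend): the spread of the averaged position shrinks roughly like 1/sqrt(N) as clean epochs are added.

```python
import random
import statistics

def spread_of_average(n_epochs, sigma_m=0.02, trials=2000, seed=42):
    """Simulate many occupations of n_epochs each and return the standard
    deviation of the per-occupation mean. The Gaussian, independent-epoch
    noise model is an assumption for illustration."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(0.0, sigma_m) for _ in range(n_epochs))
             for _ in range(trials)]
    return statistics.pstdev(means)

print(spread_of_average(10))   # spread of a 10-epoch average
print(spread_of_average(100))  # roughly sqrt(10) tighter with 10x the epochs
```

Note that this ignores geometry: a high PDOP inflates the sigma of every epoch, which is why a long occupation with poor geometry can still underperform a shorter one with good geometry.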
So it looked like they made DOP a culprit in the inability to achieve absolute results. Testing on different days had better or worse DOP values, and the generalized conclusion was that a longer observation was better, but a good DOP value was just as important. Does that make sense?
There was another paper, based on short baselines, that stated there was no significant increase in accuracy after 3 hours, but looking at the line graphs it seemed obvious to me that the flattening of the improvement curve actually started at about 5-10 hours. Right?
I guess one of the main questions that has arisen is: in order to achieve a result similar to a good RTK observation, how long does a logged observation for PPK need to be?
That's a good question… I guess for static purposes, as in boundary, photogrammetric control, and geodetic control, it's up to the user. Using RTK, via a local base or an RTN, I'm like you: I always record the raw data. Any time I locate a point of significance, I'll re-observe the point and inverse between the two observations. Usually I'm within 1-2 cm. If it's another day on the project, I'll also re-observe any points nearby for a check. For points of significance in the open with RTK, I usually observe no fewer than 240 epochs (1-second epochs). This is on baselines less than 1/2 mile with radio RTK. If it's a point of no significance (edge of pavement, etc.), I usually observe no less than 1 minute. All this assumes clean environments.
Total time on station is based on surrounding conditions, user confidence in the system being used, and, for me, verification of the located point. In my field software, I also monitor the PDOP values.
It would be super interesting to develop a multidimensional matrix of PDOP / occupation time / PPK stdev / RTK stdev… Any volunteers?
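A minimal sketch of what such a matrix could look like (the input format, bin edges, and field names are all assumptions, with the error values coming from misclosures against whatever known points you re-observe): bin repeated observations by PDOP and occupation time, then report the count and spread in each cell.

```python
from collections import defaultdict
import statistics

def build_matrix(shots, pdop_edges=(2.0, 4.0), time_edges=(60, 300)):
    """shots: iterable of (pdop, occupation_seconds, error_m) tuples,
    where error_m is the misclosure against a known point.
    Returns {(pdop_bin, time_bin): (count, stdev_or_None)}."""
    def bin_index(value, edges):
        # 0 = below the first edge, 1 = between edges, etc.
        return sum(value >= e for e in edges)
    cells = defaultdict(list)
    for pdop, occ, err in shots:
        cells[(bin_index(pdop, pdop_edges), bin_index(occ, time_edges))].append(err)
    return {cell: (len(errs), statistics.pstdev(errs) if len(errs) > 1 else None)
            for cell, errs in cells.items()}

# Example: two low-PDOP 30 s shots and two high-PDOP 10 min shots
matrix = build_matrix([(1.5, 30, 0.010), (1.5, 30, 0.020),
                       (5.0, 600, 0.005), (5.0, 600, 0.006)])
print(matrix)
```

Running a separate matrix per method (one for RTK shots, one for PPK) would keep the comparison between the two clean.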