RMS Values

I just completed a set of ground control grids for a LiDAR survey: six GCPs containing between 9 and 13 points each, captured with an RS2 rover using NTRIP from our own RS2 base locations. The client requires the error values for each point, which are recorded in the ReachView project; however, I’m a bit concerned about the Z value, which is always around 0.011m to 0.014m. The X and Y values range from 0.005m to 0.025m, so I would expect to see the Z at around double the lateral errors.

Of course I would be delighted if the Z errors were truly representative of the actual errors, but somehow it didn’t feel quite right! As a test I put the rover in the back of the van to lose RTK fix, then measured one of the points again once the fix was restored. The X and Y were indeed pretty much spot on, differing by 0.002m and 0.003m respectively, but the Z was 0.025m greater than the original measurement even though the Z RMS was 0.011m.

Doubling the Z RMS value seems to give a truer error estimate. Could there be something going on with the calculation method?

1 Like

We see the same with photogrammetry using a Yuneec H520E RTK, and it surprised me as well. We use four targets (more, depending on the shape of the site) to set a plane for the entire project and let the RTK go from there. We set a couple of checkpoints in the middle as a sanity check. We typically get 5-7cm horizontal and 3-5cm vertical on our DTMs. The DSM reports better, but that doesn’t mean much in our process. I think there might be some confusion between the GNSS values and the ground accuracy values in how the XY and Z relate.

We’ve done numerous projects setting/measuring photo targets down through the years using GNSS. We’ve always run terrestrial leveling on each project for the photo targets. Vertical errors seldom exceed 0.02’ (0.6 cm) between photo targets throughout the site, even on large projects. In most cases, we’ll use available NGS passive control marks if the vertical was determined by leveling. Sometimes, in areas without leveled control marks (benchmarks), we hold “fixed” either an NGS mark without leveled vertical (GNSS heights) or a photo target as the main control mark, then run a leveled loop to determine each photo target’s elevation.

Errors, or differences between GNSS and terrestrial leveling, vary depending on the delta heights of the photo targets and the size of the project site. Comparing ellipsoid and ground heights helps define the shape of the geoid surface to determine the true ground heights for the photo targets.

Some surveyors use only GNSS heights for the site and are always questioned by the photogrammetrist about large vertical errors and whether or not leveling was performed. This affects the accuracy of the surface determined by the photogrammetrist, and large errors occur in the imagery when the actual ground heights are not determined.

You will not have consistent heights between photo targets using GNSS unless you occupy each station for 4-5 days. The time spent occupying just one station by GNSS could be better used leveling the photo targets to determine the “true” ground heights.

1 Like

Hi Rory,

When you collect a point, you get the result ± RMS. This range should cover the true value. So when you compare two measurements, you need to take both RMS values into account.

You’ve said that the first RMS is about 0.011/0.014 m, while the second one is 0.011 m. That means their difference may vary by up to ± (0.014 + 0.011) = 0.025 m. So as I see it, in this case, 0.025 is within a tolerable range.
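To make that comparison concrete, here is a minimal Python sketch of the check. The RMS values are the ones quoted above; the plain sum is the combination described in this post, and the root-sum-square is shown as a common alternative for independent errors:

```python
import math

def combined_tolerance(rms1, rms2, method="sum"):
    """Combine two per-point RMS values into a tolerance for their difference.

    "sum" is the straightforward addition used above; "rss" (root-sum-square)
    is the combination usually applied to independent errors.
    """
    if method == "sum":
        return rms1 + rms2
    return math.sqrt(rms1 ** 2 + rms2 ** 2)

# RMS values quoted in the thread, in metres
tol_sum = combined_tolerance(0.014, 0.011)          # ~0.025 m
tol_rss = combined_tolerance(0.014, 0.011, "rss")   # ~0.018 m
print(round(tol_sum, 3), round(tol_rss, 3))
```

So the observed 0.025 m Z difference sits right at the edge of the summed tolerance, and just outside the tighter root-sum-square one.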

You are right that the Z value usually has a higher RMS than X and Y. However, to check if the real Z value is within the measurement ± RMS range, it’s better to conduct a test with a known point.

Also, I’d like to mention that we’ve improved error estimation in ReachView 3. So if you use Reach Panel to collect the points, I’d recommend trying ReachView 3 as well.

1 Like

Remember that what you see here is the spread of the points during your averaging: the precision of the point when collected, not its accuracy.
To get an idea of the accuracy you (sadly) have to do the math yourself, using the hardware specs + ppm + the spread during averaging.

It would be great if Emlid could add a mode where you could choose between a relative and an absolute mode for error display.

6 Likes

Thanks Julia. Yes, we now always use RV3 for this type of work as it does everything we need. We do have FieldGenius on a Windows tablet if we need to survey lines or footprints, but as most of our survey work is for control, RV3 is perfect: lightweight and fully integrated.

I echo what Christian says about an absolute mode for the error display.

2 Likes

Hi guys,

Thanks for your suggestion.

As Christian mentioned, at the moment, you indeed can see the precision of the point only. This value includes the receiver’s internal error estimates and how jumpy the position is.

So it’s an interesting idea to display the accuracy value as well. But how would you like to use it? And which parameters do you want to be considered in its calculation?

For NTRIP users this would be especially valuable.
With NTRIP it matters a lot whether you are 5 km or 50 km away from your NTRIP base (granted you are not working with a VRS or similar).

With a long baseline, your 30-second observation might have a 1 cm std dev, i.e. the spread of the point. That might be falsely interpreted as absolute accuracy, or as meaning the points are precise to 1 cm relative to each other.

The problem with long baselines is that the repeatability of measuring a given position gets lower and lower as the baseline grows.
So, as mentioned above, you might get a 1 cm std dev on a 30-second observation, and on the next one too; however, if you measure again a few hours or days later, the likelihood of hitting that position again within 1 cm is very low on long baselines.

So, in that scenario it would be more realistic to show the user the sum of errors:
stddev
+horizontal/vertical hardware error
+PPM
+spread of points during observation.

So yes, with this way of displaying the error it might not look as impressive, but it will be more realistic and in tune with the hardware specification provided in the manual.
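As a minimal sketch of that sum-of-errors idea (the 14 mm + 1 ppm figure below is only a hypothetical vertical spec for illustration, and a plain linear sum is deliberately pessimistic compared to a statistical combination):

```python
def worst_case_error(spec_mm, ppm, baseline_km, spread_mm):
    """Pessimistic linear sum of error sources, in millimetres.

    spec_mm     : fixed part of the hardware spec (hypothetical figure here)
    ppm         : parts-per-million term; 1 ppm adds 1 mm per km of baseline
    baseline_km : distance to the NTRIP base
    spread_mm   : observed spread of points during the averaged observation
    """
    return spec_mm + ppm * baseline_km + spread_mm

# Hypothetical vertical example: 14 mm + 1 ppm spec, 10 mm observed spread
print(worst_case_error(14, 1, baseline_km=5, spread_mm=10))   # 5 km baseline
print(worst_case_error(14, 1, baseline_km=50, spread_mm=10))  # 50 km baseline
```

With these made-up numbers, the same 10 mm spread turns into roughly 29 mm of budgeted error at 5 km but 74 mm at 50 km, which is exactly why the baseline matters for an "absolute" display.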

To some extent you guys are already doing some of this in RV3, as the error when collecting a point can never go under 1 cm. This is, I guess, to display the hardware error up front? That should now be extended to reflect all error sources.

For representation in RV3, I guess it could be represented as “Best case” (current) and “Worst case” (all errors taken into account)?

4 Likes

Hi Christian,

Hmm, I think that I got what you mean.

It’s an interesting request, and it’s worth considering. But first, I think it’s better to clarify with our devs the current accuracy calculation process. It’ll help us understand what we have now and what we can do.

So I’ll clarify it and get back to you for further discussion.

3 Likes

Looking forward to the verdict :wink: If I or anyone else can elaborate further, i.e. with examples, let us know.

2 Likes

Hi Christian,

I’ve clarified the current calculation process with our devs, and it’s just as we thought: the baseline indeed isn’t taken into account in the calculations. So it’s true that the value you see in ReachView 3 is precision only.

I agree that considering the baseline in the calculations could make the estimated value more realistic. However, that’s just one step along the way to absolute accuracy. This value should also consider parameters such as the accuracy of the base placement and the base coordinates themselves. Even parameters like temperature and pressure affect the accuracy in some way.

As I see it, all of these parameters can hardly be taken into account in the calculations. But considering just one of them - the baseline - can give the sense of knowing the absolute accuracy without actually knowing it. That’s why such a feature seems too deceptive.

On the contrary, manual calculation of absolute accuracy appears to be a fairer way. When you do it yourself, you can be 100% sure which parameters you take into consideration and how each of them affects the result.

1 Like

This topic was automatically closed 100 days after the last reply. New replies are no longer allowed.