Reach RS2 placing avg 1.6 cm RMS GCPs with a 130 km baseline

Just when you thought we wouldn’t remind you about the RS2 and why you need one (or two) in your life, there is more!

Today I teamed up with my “close-by” EUREF NTRIP service in Warnemünde, Germany, a “mere” 130 km away from my test ground.

The purpose this time was to test the relative accuracy of the RS2 over a baseline more than twice as long as the maximum recommended 60 km.

So the overall plan was the following:

  • Start the RS2 on the rover pole, verify the mobile connection to the NTRIP service, and start logging raw data + corrections
  • Start charging batteries
  • Place GCPs
  • Survey GCPs in ReachView
  • Fly mapping mission using Pix4D Capture and a DJI Phantom 4 Pro
  • Pack down
  • Get back and process using Agisoft Metashape

So, to begin with, this is what the mission looks like:


I used an excessive overlap of 85% on both front and side, with the camera pointing straight down.
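For anyone planning a similar flight, the 85% overlap translates into line spacing and trigger distance roughly as follows. This is a minimal sketch; the sensor and lens values are the nominal published Phantom 4 Pro specs, assumed here for illustration:

```python
# Sketch: photo footprint and spacing for 85% front/side overlap at 45 m AGL.
# Sensor/lens values are nominal DJI Phantom 4 Pro (1" sensor) specs, assumed.
SENSOR_W_MM, SENSOR_H_MM = 13.2, 8.8
FOCAL_MM = 8.8
ALT_M = 45.0
OVERLAP = 0.85

footprint_w = ALT_M * SENSOR_W_MM / FOCAL_MM   # across-track ground coverage (m)
footprint_h = ALT_M * SENSOR_H_MM / FOCAL_MM   # along-track ground coverage (m)

line_spacing = footprint_w * (1 - OVERLAP)     # distance between flight lines (m)
trigger_dist = footprint_h * (1 - OVERLAP)     # distance between exposures (m)

print(f"footprint: {footprint_w:.1f} m x {footprint_h:.1f} m")
print(f"line spacing: {line_spacing:.2f} m, trigger distance: {trigger_dist:.2f} m")
```

With these numbers the lines end up only about 10 m apart, which is why 85% counts as excessive overlap.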

Before I started, I turned on the RS2. It provided a fixed solution in roughly one minute. Pretty impressive given the long baseline, and the NTRIP service only providing GPS+GLO.
Turning on the unit this early in the process gives me the opportunity to use the “Static Start” method in RTKPOST, should I choose to post-process the data.

Next was surveying the GCPs. Quite uneventful: no broken fixes, business as usual, except for the 130 km baseline. 20-second collection time; judging by the RMS, I could easily have gone with 5 seconds.
Here are a few shots from the process:

After successfully surveying all the points, it was time to get airborne. The mission produced a total of 301 images.

So, back at the computer, I started the photogrammetry processing in Agisoft Metashape. After cleaning up the sparse cloud and adding the GCPs, this is how it looks:

General statistics:

And now, what really counts for the RS2: the GCP error! Average control point RMS of 0.0165 m and average check point RMS of 0.0172 m. Even with a 100 m baseline, that would have been acceptable to most!
Here are all the numbers:

Here is a more graphical view, at 100x error magnification:

All GCPs are pulled directly from ReachView, so it is all RTK data.
I might also postprocess at a later stage, when I get the time.

Thank you for taking the time to read through the post!


Impressive @wizprod. Thanks for sharing.

You are welcome!

I’m very confused by the title of this and what error you are actually measuring…

I don’t understand what your benchmark is - ie. what are you measuring against? Are you assuming that your photogrammetry is perfect and if there was 0 mm of error in the RS2, then your check points would yield a 0mm RMS error? This accuracy statement appears that it should be checked with a more accurate instrument like a total station rather than a derivative product like photogrammetry.

In addition, how are you getting a statistically significant sample? According to industry standards, 20 check points would be necessary to give a true RMS accuracy statement.

I’m not saying your results are bad; I’m just confused how your control points and check points are being accurately measured. In addition, it looks like the weight of the bipod would be pushing down your targets by a few mm, so it just looks like a place for more error that would be a significant part of your study. On a final note, it appears that you’re measuring Sigma error, not RMS error. RMS error includes biases that the collection here cannot identify.
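The RMS-vs-sigma distinction mentioned above can be illustrated numerically: RMSE folds any systematic bias together with the spread, since RMSE² = bias² + σ². A sketch with synthetic residuals (not the actual survey data):

```python
import math

# Sketch: RMSE vs sigma on synthetic residuals (metres), all biased positive.
# A collection that only measures spread (sigma) misses the systematic offset.
residuals = [0.012, 0.015, 0.011, 0.014, 0.013]

n = len(residuals)
bias = sum(residuals) / n                                  # mean error
sigma = math.sqrt(sum((r - bias) ** 2 for r in residuals) / n)  # spread only
rmse = math.sqrt(sum(r ** 2 for r in residuals) / n)       # bias + spread

print(f"bias={bias:.4f} m, sigma={sigma:.4f} m, rmse={rmse:.4f} m")
```

Here sigma is about 1.4 mm while the RMSE is about 13 mm; the two only coincide when the bias is zero.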

Thank you very much for your comments!

Before we go into too much detail: this is clearly not something that claims to be statistically perfect. It is more of a typical relative accuracy test, with more GCPs (control & check) per area than normal.

Assuming the photogrammetry is perfect (as you mention), or at least not a prominent source of error (using excessive overlap, flat scenery, sharp images from slow speed, and consistent exposure), the benchmark is the inter-point accuracy.
We are not talking absolute values or coordinates, but relative GCP vs GCP.

Having done the same test with the RS+ over a 100 m baseline at the same location, where the error was below 1 cm (more or less the limit of a typical GNSS receiver’s capability), I am fairly confident that the increased error here is due to the longer baseline, as that is the only significant change (excluding the change of equipment, obviously).
Had the baseline been shorter, the RMS would have been even smaller.

Absolutely true, but this was never meant to be a test with a scientific approach; as mentioned, it is a user test.

I have minimized (but not entirely eliminated) that source of error by spiking down the targets. I would say there was around 1-2 mm of play on some of the targets, none on others.

I will do that when I redo the test at some point, but I didn’t have a total station setup (that I trusted) at that time, and my skills for traversing with a total station are still being “perfected” (or, more precisely put, being learned by doing (and thus also failing miserably from time to time)).


Thank you very much for your explanations, they help me understand your process. I see what you did, however I would not trust the photogrammetric results for a few reasons.

At 45 m, your GSD is approximately 1.2 cm with a DJI P4P. Let’s be generous and say that the GCP selection is very accurate at 0.5 pixel. That would mean each point carries 0.6 cm of error from photogrammetric selection alone. More overlap does not increase the resolution or the certainty of the center of the target. Photogrammetrically speaking, we really can’t make a statement about accuracy below half of the pixel size/GSD, and the general rule of thumb is 2-3x GSD horizontal accuracy and 3-5x GSD vertical. 1.6 cm would be <2x GSD; that would be superb, but it would take a lot of validation.
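The ~1.2 cm GSD figure can be reproduced from the published P4P sensor specs. A sketch under those nominal values (actual GSD varies with exact altitude, lens, and terrain):

```python
# Sketch: nominal GSD for a DJI Phantom 4 Pro at 45 m AGL.
# 1"-sensor specs assumed: 13.2 mm sensor width, 5472 px image width, 8.8 mm lens.
SENSOR_W_MM = 13.2
IMAGE_W_PX = 5472
FOCAL_MM = 8.8
ALT_M = 45.0

# GSD = (sensor width * altitude) / (focal length * image width), in metres/pixel
gsd_m = (SENSOR_W_MM / 1000) * ALT_M / ((FOCAL_MM / 1000) * IMAGE_W_PX)
selection_err_m = 0.5 * gsd_m   # the 0.5-pixel target-picking error discussed above

print(f"GSD ~ {gsd_m * 100:.2f} cm/px, 0.5 px selection ~ {selection_err_m * 100:.2f} cm")
```

This lands at roughly 1.23 cm/px and 0.62 cm for a half-pixel pick, matching the figures quoted in the reply.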

All in all, I’m sure the whole system is somewhere in the 2-3 cm GNSS accuracy standard, but without measuring with an instrument with more resolution, I don’t really trust the 1.6 cm shown here. I’d love to see what you come up with using a total station!
