Can someone explain the RMS values during a survey? I come from the electrical engineering field, where RMS is the “truer” value of the effective waveform… not sure what it means here.

http://www.navipedia.net/index.php/Accuracy

Root Mean Square Error (rms): The square root of the average of the squared error. This measurement is an average, but assuming that the error follows a normal distribution (which is close but not exactly true) it will correspond to the 68th percentile in one-dimensional distributions (e.g. vertical error or timing error) and the 63rd percentile for two-dimensional distributions (e.g. horizontal error). For the horizontal error this measurement is also referred to as drms and can have variants such as 2rms or 2drms (2 times rms).
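Those percentile claims are easy to sanity-check numerically. A quick simulation sketch (my own, not from Navipedia), assuming zero-mean Gaussian errors:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D case (e.g. vertical error): fraction of |error| falling within the RMS
err_1d = rng.normal(0.0, 1.0, 100_000)
rms_1d = np.sqrt(np.mean(err_1d**2))
within_1d = np.mean(np.abs(err_1d) <= rms_1d)
print(f"1-D: {within_1d:.1%} of errors fall within the RMS")   # ~68%

# 2-D case (horizontal error): fraction of radial error within the DRMS
east, north = rng.normal(0.0, 1.0, (2, 100_000))
r = np.hypot(east, north)
drms = np.sqrt(np.mean(r**2))
within_2d = np.mean(r <= drms)
print(f"2-D: {within_2d:.1%} of errors fall within the DRMS")  # ~63%
```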

OK, @TB_RTK, but what is your “error” metric. What are you considering to be the expected value for your RMSE calculation?

As new positions are recorded for the Survey point, is error computed relative to the initial position? Or is error computed relative to a continuously updated average position at the Survey point? I believe it is the latter. If so, shouldn’t this technically be std, not RMS?

Also, to confirm, these metrics are only considering variance in apparent positions recorded at the Survey point, and are not including the std estimates (+/- values) listed under Position on the Status menu. The Survey error values we’re seeing seem way too small to be including these additional errors.

I haven’t been able to find detailed information about how these two metrics (Status error and Survey error) are calculated and updated, so if you know where to find that information in the doc/forum, please share. Thanks.

You are asking the right question to the wrong dude

I am in no way capable of going in depth and explaining detailed, complex math in a way that is understandable to everybody coming across this thread. I hope someone qualified can answer this.

@TB_RTK, thanks for taking a bullet for me

You are right, it is the latter. The difference from std is that we cannot compare our result with an actual position, so we have to compare it to our prediction, which is a continuously updated mean.
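To illustrate the point (my own sketch, not Emlid's actual survey code): when each fix is compared against the running mean, the resulting RMS is, by definition, the population standard deviation of the recorded fixes.

```python
import numpy as np

# Made-up fixes for one survey point (hypothetical values, for illustration)
positions = np.array([10.02, 9.98, 10.05, 9.97, 10.01, 10.03])

mean = positions.mean()                          # the updated "prediction"
rms_about_mean = np.sqrt(np.mean((positions - mean) ** 2))
print(rms_about_mean)                            # identical to positions.std()
```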

Yes, survey operates separately from the positioning engine. The values you see in the status tab are based on internal filter states of the engine at this exact moment. Survey averaging starts from scratch with every point.

I live to see another day

Thanks for the explanation @egor.fedorov. Would be great to incorporate this information in the documentation at some point.

FYI, I just wrote a little script to update survey point positions and errors using RTKLIB pos output: https://github.com/dshean/sfm_tools/blob/master/emlid_survey_update.py. Maybe useful for others. Will likely move to a separate repo for gnss tools at some point.

We just finished 3 weeks of tests with the ReachRS receivers. Really impressed so far. Will be playing with Reach + Sony NEX-5 setups in the next few weeks. Hope to find time to share some results on the forum, and/or encourage students to do so. Keep up the great work.

I agree, will do this in the near future.

We would love to hear about your experience here on the forum. Are you doing some kind of research? Or just testing the equipment for future needs? Complaints and feature requests are also welcome.

Hi Brian,

I did some research a few years ago on survey-grade GPS accuracy.

TB_RTK is right: DRMS (two-dimensional root mean square error) corresponds to 63.2 %.

It means that 63.2 % of points taken are within claimed accuracy.

Some manufacturers (e.g. Leica) use what they call RMS in their device specifications, but it seems to be in reality a one-sigma 2D error, which corresponds to 39.3 %.

So 60.7 % of measurements are *outside* claimed accuracy.

In this case, you have to multiply the RMS by about 3 to get a realistic idea of the accuracy.

Your python solution for survey point updates is intriguing! Great job - and thanks for sharing!

Mean Squared Error (MSE) is defined as the mean (average) of the squared differences between the actual and estimated values. In other words, MSE is the sum of the squared differences between the predicted and actual target values, divided by the number of data points. It is always non-negative, and values close to zero are better.

Root Mean Squared Error (RMSE) is the square root of the Mean Squared Error. Taking the square root brings the metric back into the same units as the target variable, which makes it easier to interpret when assessing a model's accuracy.

Root Mean Squared Error using sklearn

```
import numpy as np
import sklearn.metrics as metrics

# Example data: actual vs. predicted values
actual = np.array([56, 45, 68, 49, 26, 40, 52, 38, 30, 48])
predicted = np.array([58, 42, 65, 47, 29, 46, 50, 33, 31, 47])

# MSE from scikit-learn, then take the square root to get RMSE
mse_sk = metrics.mean_squared_error(actual, predicted)
rmse_sk = np.sqrt(mse_sk)
print("Root Mean Square Error :", rmse_sk)
```