Reach on Phantom 4

RTK is when your base is corrected, either by sitting on a known location or by receiving corrections in real time via CORS, and then transmits that correction to a rover in real time.

PPK is when the rover just records a log of its positions, and that log is post-processed later with corrections from the log of either a base that was on a known location or a CORS station covering the same time period that the rover was recording.

I’m not sure what Simon means when he says he hotspots the rover on the UAS to the base to get corrections before takeoff, unless of course the base is on a known location or receiving corrections from CORS, which would be typical for RTK.

So am I kinda doing both?

The base gathers data via CORS corrections for 30 minutes, but nothing is sent to the rover. I take the averaged location and enter it manually.

Then…

I send base corrections from my known location to the rover (the base no longer receives corrections via CORS since it’s on a known location).

Is this an ok procedure?

If you are receiving corrections in real time on the base but don’t send them to the rover, and then afterwards apply (post-process) the corrections to the rover’s positions, that is PPK.

If you are transmitting the corrections from the base to the rover as you move the rover around, so that the rover adjusts its position with the corrections in real time, that is RTK.

Dane, are you using your phone to connect a Reach module to a CORS network?

I use my iPad’s hotspot and connect my base to the hotspot for CORS corrections.

I then let my base sit for 30 minutes and gather data. I do not post-process this data; I use the Reach averaging to collect my known location.
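For reference, the same average can also be reproduced later from the raw log; here is a rough sketch (this assumes an RTKLIB-style .pos position log with the usual column order, and is not the Reach firmware’s own averaging):

```python
# Rough sketch, not the Reach firmware's averaging: average only FIX (Q=1) epochs
# from an RTKLIB-style .pos log. Column order (date time lat lon height Q ...) is assumed.
def average_fixed_position(pos_file):
    lats, lons, hgts = [], [], []
    with open(pos_file) as f:
        for line in f:
            if line.startswith("%") or not line.strip():
                continue                      # skip header/comment lines
            cols = line.split()
            lat, lon, hgt, q = float(cols[2]), float(cols[3]), float(cols[4]), int(cols[5])
            if q == 1:                        # keep fixed-ambiguity epochs only
                lats.append(lat); lons.append(lon); hgts.append(hgt)
    if not lats:
        raise ValueError("no FIX epochs in log")
    n = len(lats)
    return sum(lats) / n, sum(lons) / n, sum(hgts) / n, n

# lat, lon, hgt, n = average_fixed_position("base_30min.pos")
```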

I then turn off the hotspot and enter the “known location” manually.

Then I send base corrections to my rover as I walk and take GCPs.

That is an RTK workflow. If your rover is receiving and applying correction vectors as it is moved from point to point, it is RTK.

If you go back to the same place at a later time and set up your base on the same point you established earlier with the CORS connection, you can just send corrections to the rover without the CORS step, since you already know where the base is and can send corrections to the rover in real time. That is also RTK.

To get correction data, a CORS station just does this in reverse. It knows where it is, but compares that to what a GNSS receiver at the same location computes without that knowledge, and takes the difference. That is the correction vector it broadcasts.

I have gone to multiple local “known locations”, one of which was verified again just last month by a local gas company. I set my base on it, took a 30-minute “Fix” average (using CORS corrections) with the Reach, adjusted the ellipsoid height for my long/lat, and I was within 3/4" vertical.
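The check itself is just differencing the averaged coordinate against the published mark; a quick sketch of how that residual can be computed (the coordinates here are placeholders, not the actual mark):

```python
import math

# Quick sketch: residuals between a Reach-averaged position and a published mark.
# Uses a flat-earth approximation, which is fine for a cm-level check at zero distance.
def residuals_m(known, observed):
    """known / observed = (lat_deg, lon_deg, ellipsoidal_height_m); returns (dN, dE, dU) in metres."""
    lat0 = math.radians(known[0])
    dN = (observed[0] - known[0]) * 111_320.0                 # ~metres per degree of latitude
    dE = (observed[1] - known[1]) * 111_320.0 * math.cos(lat0)
    dU = observed[2] - known[2]
    return dN, dE, dU

# Placeholder numbers: ~1.9 cm vertical difference, i.e. about 3/4 of an inch
print(residuals_m((40.0000000, -83.0000000, 250.000), (40.0000001, -82.9999999, 250.019)))
```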


I added a paragraph to my last post.

When you use a CORS connection on a known point, you are just comparing your correction to the previous surveyor’s results. Unless you don’t trust the known point’s pedigree, you can forgo the CORS connection and just set up, enter the known location coordinates, and go. That is why having established known points is beneficial and time-saving; if you had to do the CORS dance every time, they wouldn’t save you any time.

I will add that if I am out in left field on any of this, those in the know please correct me.

I check the local known locations just to check the accuracy of the Reach.

But yes, in the field I always set a known location with my base first. Then I don’t have to keep streaming CORS corrections for the few hours I’m out in the field.

Some may think I waste time setting a known location at every project. More than anything, I do this for my records. I have a spreadsheet of all projects, with known locations, that I keep on my computer. I can then always go back and save the time in the future.

Sounds good. Here, it costs $1,900 for a single CORS license, but the logs are free, so it’s PPK for me.

I’m in Ohio; I literally just had to email the state and set up a username and password. It’s completely free to the public.

Great discussion, and most things are resolved.
To clarify why I do it the way I do.

You may have noticed that sometimes Reach just doesn’t want to get a FIX. The danger of post-processed kinematic (PPK) is that you collect the data during one of these periods, and no matter what you do in post-processing, it just doesn’t deliver. I try to avoid that by using real-time kinematic at the start. I figure that if I cannot get a FIX with the UAV on the ground in the field, then I am wasting my time flying a mission.

So…
I set up on a known reference point (or one I derive from a static position average, with or without corrections).
Once I have that position, I record it, set the base up as a wifi hotspot, and output corrections via TCP as a server on port 8100. I turn logging on.
I connect the rover/flyer to that wifi hotspot and set the correction input to 192.168.42.1, port 8100. I turn logging on.

[This is the simplest and easiest way to set up RTK, but it only works for 10-20 m. You could connect both to a wifi router and get up to 100 m from the router for each unit if you want.]
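Before relying on the ReachView status screen, you can also peek at the base’s TCP output to confirm corrections are actually flowing. A small sanity-check sketch (my own helper idea, not part of ReachView), using the address and port from the setup above:

```python
import socket

# Sanity check: connect to the base's correction output (TCP server mode on the base
# hotspot) and confirm RTCM3 data is arriving. 0xD3 is the RTCM3 frame preamble byte,
# so this is only a rough count of candidate frame starts, not a full parser.
BASE_ADDR = ("192.168.42.1", 8100)

with socket.create_connection(BASE_ADDR, timeout=5) as s:
    data = s.recv(4096)

frames = data.count(b"\xd3")
print(f"received {len(data)} bytes, ~{frames} candidate RTCM3 frame starts")
```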

Now that I have RTK, I can use ReachView to monitor the progression towards FIX. Once my ambiguity ratio climbs above 10, I am pretty sure I am in good shape and am happy to fly the mission. During the mission I know that I will lose the connection, but I am happy that things were going well, and there is a very good probability I can post-process the mission and get 98% FIX. I need to post-process the mission anyway to pull out the camera events from the Phantom 4.

On landing, if I wait 5 minutes, I will most likely get the flyer reconnected to the base and can see the status again, but this is not essential.

Hope this makes things clearer.

@DaneGer21
You are doing Real Time Kinematic using your base as a known reference point and transmitting corrections in real time.

Your procedure is good, as is the checking of the ‘known reference locations.’ Most reference marks have a defined accuracy, and in many cases a 30-minute CORS/Reach observation will be better than that accuracy. Here in Tasmania, many of the second-order marks are in locations with a poor sky view, so whilst you can verify their location, they make for very poor base station locations for RTK.

You could of course just provide the CORS corrections to the rover and get rid of the base, if it is a networked solution or you are close enough to the CORS. This would save you the 30 minutes, and you could verify the reference mark as just another survey point.

Simon

Hi Simon
How are you compensating for the offset from the receiver to the camera with regard to pitch and roll?
To break this down: let’s say there is 200 mm of height difference between the receiver and the camera when the aircraft is level. If the UAV is tipped forward at a 10-degree angle while flying, the vertical difference between the receiver and the camera becomes about 197 mm, while there is also a horizontal shift of about 35 mm. This would be even worse if the UAV was tipped sideways against a crosswind at the same time.
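For reference, that trigonometry in a couple of lines (just restating the numbers above):

```python
import math

# Restating the arithmetic above: a 200 mm antenna-to-camera lever arm tilted 10 degrees.
lever_mm, tilt_deg = 200.0, 10.0

vertical_mm = lever_mm * math.cos(math.radians(tilt_deg))    # ~197 mm remains vertical
horizontal_mm = lever_mm * math.sin(math.radians(tilt_deg))  # ~35 mm shifts horizontally

print(f"vertical: {vertical_mm:.1f} mm, horizontal shift: {horizontal_mm:.1f} mm")
```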
I also see from your photos that the receiver is slightly behind the camera. This can cause issues if you fly a grid in one direction, then turn around and fly back facing the other direction; the result is a horizontal error equal to the horizontal distance between the camera and the receiver.
I hope I’ve explained myself well enough and would be interested to get your thoughts on this. I’ve been flying UAVs for surveying for a number of years now and am a Photoscan user.
Cam

The antenna is slightly to the rear, which is about right for it to sit directly above the camera at a 5 m/s flight speed. The vertical offset is corrected for on that basis in my processing script.

For other speeds, Agisoft Photoscan allows a GPS/camera offset correction, and at 10 m/s this is about another 5 cm in the Y (along-track) direction. Photoscan allows for a constant X offset, not a wind-induced bias.

You are right, of course; to be perfect I should account for all pitch and roll explicitly, as I have the information. DJI’s exif metadata includes:
---- XMP-drone-dji ----
Absolute Altitude : +6.10
Relative Altitude : +60.10
Gimbal Roll Degree : +0.00
Gimbal Yaw Degree : -172.50
Gimbal Pitch Degree : -83.30
Flight Roll Degree : -7.50
Flight Yaw Degree : -169.80
Flight Pitch Degree : -18.00
Flight X Speed : -9.30
Flight Y Speed : -1.30
Flight Z Speed : +0.00
Cam Reverse : 0
Gimbal Reverse : 0

So all I have to do is convert all coordinates to UTM, apply convergence and magnetic variation (or confirm DJI is delivering true heading), calculate the offsets, and reconvert to lat/lon. I am converting to UTM anyway, as it is easier to display the data in 3D from a file with the same units in all three dimensions.
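A minimal sketch of that chain with pyproj (UTM zone 55S is assumed as my Tasmanian example; the lever-arm values and coordinates are placeholders, grid convergence is ignored for brevity, and this is not my actual processing script):

```python
import math
from pyproj import Transformer

# Sketch only: project to UTM, shift the antenna position forward along the flight yaw
# and down to the camera, then convert back to lat/lon. Grid convergence is ignored,
# so the yaw is treated as a grid bearing (close enough near the central meridian).
TO_UTM = Transformer.from_crs("EPSG:4326", "EPSG:32755", always_xy=True)   # WGS84 -> UTM 55S
TO_LLH = Transformer.from_crs("EPSG:32755", "EPSG:4326", always_xy=True)

def antenna_to_camera(lat, lon, hgt, yaw_deg, along_m, vertical_m):
    """along_m: antenna-to-camera offset along track (+forward); vertical_m: antenna height above camera."""
    e, n = TO_UTM.transform(lon, lat)
    yaw = math.radians(yaw_deg)
    e += along_m * math.sin(yaw)          # east component of the forward shift
    n += along_m * math.cos(yaw)          # north component of the forward shift
    lon2, lat2 = TO_LLH.transform(e, n)
    return lat2, lon2, hgt - vertical_m

# Placeholder example using the Flight Yaw value from the exif block above
print(antenna_to_camera(-42.0, 147.0, 60.1, -169.8, 0.05, 0.20))
```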

It’s on the to-do list, but I hadn’t flown (until the last mission) in winds strong enough to introduce a constant bias.

I have thought about a rover-only RTK setup, but I’ve been worried about the amount of data transfer and usage. I don’t have unlimited data on my phone plan. Maybe I’ll check my iPad and see what kind of usage it had this past month.

Full Photoscan report attached. Note the 15 cm offset from the single ground verification point. I would shift the final product to match this point, thus reducing the absolute errors further. Comments/discussion welcome.

report.pdf (1.6 MB)

Looks promising! Nice work.

Nice test. I need to read it more thoroughly later.
But one thing about the lens on the camera: did you ever find a distortion profile (not a software calibration profile) for it, or a test?

No, each camera is different, especially the principal point.
I used Photoscan Lens and a 40-inch UHD monitor to calibrate my lens.
