Reach on Phantom4

Ok thanks.
Was hoping for something like this


Hey, this may be a silly question, but is there no way to "hack" the DJI protocol and feed the GNSS from the Reach (RTK) straight into the A3? Even if it means using an Arduino or Raspberry Pi Zero to convert the NMEA into a DJI string?

This would mean the EXIF written at the source is RTK already, and the whole flight is RTK…

Even at 5 Hz that means up to 200 ms (plus transmission time) between the last Reach position and the actual position. At 10 metres per second that is 2 metres of error. You could do a forward-looking predictive filter that assumes linear movement, but why not just capture the timing and post-process? Post-processing is not hard.
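That predictive filter is only a few lines; here is a minimal sketch of the idea (my own illustration, nothing to do with the DJI or Reach firmware, and all names are made up):

```
# Sketch: forward-predict the camera position from the last two 5 Hz GNSS
# fixes, assuming roughly linear motion over the latency interval.

def predict_position(p_prev, p_last, dt_fix, latency):
    """Extrapolate the position 'latency' seconds past the last fix.

    p_prev, p_last: (east, north, up) tuples in metres from consecutive fixes
    dt_fix: time between the two fixes in seconds (0.2 s at 5 Hz)
    latency: time elapsed since the last fix (transmission + shutter delay)
    """
    velocity = tuple((b - a) / dt_fix for a, b in zip(p_prev, p_last))
    return tuple(p + v * latency for p, v in zip(p_last, velocity))

# At 10 m/s and 0.2 s latency the uncorrected error is ~2 m; the linear
# prediction removes most of it while the drone is flying straight.
print(predict_position((0.0, 0.0, 60.0), (2.0, 0.0, 60.0), 0.2, 0.2))
# -> (4.0, 0.0, 60.0)
```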

Good point I guess, but 10 m/s is quite a high speed, and couldn't you eliminate this by using hover and capture?
I am in no rush when acquiring the data if it means there is less post-processing…

If you are happy to hover and capture, then a solution already exists; no need to hack the drone.
Use Pix4DCapture in safe mode and it will stop for each shot. The Pix4D log has millisecond reference marks that are a little wobbly, but 'good enough'.
You can use these to extract the position from the real-time, or post-processed, Reach .POS file (.LLH when real time).
Message me if you are interested in a Python 2.7 script to do this.
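For anyone curious, the matching step boils down to something like the sketch below (not the actual script, just the idea, and it assumes the usual RTKLIB-style .pos/.LLH column layout of date, time, lat, lon, height, Q, …; adjust if your file differs):

```
from datetime import datetime

def load_pos(path):
    """Return a list of (datetime, lat, lon, height) from a .pos/.LLH file."""
    epochs = []
    with open(path) as f:
        for line in f:
            if line.startswith('%') or not line.strip():
                continue  # skip RTKLIB header/comment lines
            parts = line.split()
            t = datetime.strptime(parts[0] + ' ' + parts[1],
                                  '%Y/%m/%d %H:%M:%S.%f')
            epochs.append((t, float(parts[2]), float(parts[3]), float(parts[4])))
    return epochs

def nearest_epoch(epochs, capture_time):
    """Pick the solution epoch closest in time to a capture timestamp."""
    return min(epochs, key=lambda e: abs((e[0] - capture_time).total_seconds()))

# Usage: capture_time would come from the Pix4D log's millisecond marks.
# epochs = load_pos('rover_ppk.pos')
# nearest_epoch(epochs, datetime(2017, 5, 1, 12, 0, 3, 400000))
```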

Hi Simon,

Another question. By this process, we assume that the centre of the photo is the position of the drone at the time. What about any gimbal roll/pitch offset? Have you found this to be minimal?

Jim.

I apply the vertical offset between the camera and antenna in processing. I have positioned my antenna so that it is vertically above the camera at 5 m/s forward speed, so when hovering there is a 3-5 cm offset in Y (along the drone axis); the X offset is minimal. The information is available within the EXIF header to correct all the offsets; I just haven't got around to it, as I tend to fly grids at a given speed, which minimises their impact.
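To make the lever-arm idea concrete, a correction like this is just the antenna-to-camera offset rotated into the map frame by the drone heading. A rough sketch (offset values here are made up for illustration, not my actual setup):

```
import math

def antenna_to_camera(east, north, up, heading_deg,
                      along=0.0, across=0.0, vertical=-0.20):
    """Return the camera position given the antenna position (E, N, U, metres).

    heading_deg: drone heading in degrees, clockwise from north
    along:       camera offset forward of the antenna (negative = behind)
    across:      camera offset to the right of the antenna
    vertical:    camera offset above the antenna (negative = below)
    """
    h = math.radians(heading_deg)
    d_east = along * math.sin(h) + across * math.cos(h)
    d_north = along * math.cos(h) - across * math.sin(h)
    return east + d_east, north + d_north, up + vertical

# Example: camera 4 cm behind and 20 cm below the antenna, heading due east.
print(antenna_to_camera(500000.0, 4200000.0, 160.0, heading_deg=90.0,
                        along=-0.04, vertical=-0.20))
# -> (499999.96, 4200000.0, 159.8)
```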

Sorry, I meant the offset in the gimbal calibration. What if your gimbal is not looking/pointing straight down but is tilted, say, 2 degrees to the right? At 60 m altitude, your GPS point will be about 2 metres off from the centre point of your photo (60 × tan(2°), if I'm not mistaken).
How can we remove that error possibility?
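(A quick check of that figure, just restating the numbers in the question:)

```
import math

altitude = 60.0          # metres above ground
tilt_deg = 2.0           # constant gimbal pointing error
shift = altitude * math.tan(math.radians(tilt_deg))
print(round(shift, 2))   # -> 2.1 m on the ground
```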

Hi Jim, the critical factor is knowing where the camera is in 3D space when the image is captured, not so much the exact direction it is pointing.

Hi Jim,
It is a good question and Dave is right. SfM really concerns itself with reconciling the 3D positions of the cameras and the image matches. Often over flat terrain I will fly with the camera a deliberate 10 degrees off vertical to put 'tension' between the images that cannot be resolved by loosely adjusting the camera positions; it is the relative angular triangulation between points in images that is resolved in SfM. If a point is directly below a camera and you take two pictures of it, one straight down and one offset by 20 degrees, the relationship of the angles to another picture taken at another location doesn't change.

It becomes an issue if it is a constant offset irrespective of drone heading and speed; say you are doing the survey in a strong wind, then the drone will be tilted in the same real-world direction irrespective of course. This can lead to an offset over the whole image.
I counter this by placing the drone on a GCP and spending a few minutes acquiring a fix on that GCP before launching.

Looks to me like the P4 GPS is actually being covered. The legs hold the antenna, so I understand the shielding.
Is it interfering, or are you disregarding the DJI OSD coordinates altogether (as you're not needing them)?
I want to try this on the Inspire 1 Pro systems, but considering the 3D space as mentioned, it doesn't look feasible for REACH placement.
Is REACH RS the only answer here?

On the Inspire 1, the only simple option is to use Pix4DCapture in safe mode to ensure the vehicle is stationary(ish) for each shot and use the Pix4D log. I have seen a 3D mount by @David_Leslie that might work with an Inspire 1 and put the Reach antenna directly above the camera. On the Phantom, the Reach antenna is far enough above the P4 antenna (which is not high accuracy) that I get a 10-14 satellite fix and can use the DJI GPS for autopilot functions.

The Reach RS is HUGE compared to the bare-board Reach RTK; I'm not sure what you are proposing Reach RS as the answer to. If you are happy with PPK, then you can use two Reach RTKs to collect GCPs.


The nose cone of the Inspire 1 is empty, and my thought was that you could arrange placement of a REACH RTK module in there. This would be "close" to 3D alignment with the X5 camera.
I WAS originally thinking of one on the bird and one (not an RS) on the ground as a base, using a 3DR radio, cell internet/Wi-Fi, and a CORS connection to get real-time RTK.
This would be used as a hover solution, i.e., N, S, E, W, and a nadir picture, all with RTK accuracy.
What do you think? Dennis Baldwin on YouTube has videos of the start of such a project, but it is not yet completed.

The antenna should be mounted above the blades, with a ground plane to reduce multipath. The Reach unit itself can go anywhere.

Having installed a Reach module on my Inspire 1, and using Pix4DCapture in SAFE mode, what is the easiest and fastest way to edit the DJI image EXIF data with the post-processed Reach data?

I will be using Pix4Dmapper to process the images/job.

Thanks

@Simon_Allen, not sure I totally understand, but: did you write a script that looks at the timestamp of a photo, then looks up the position for that time in the PPK record, and finally updates the photo EXIF location to the PPK location based on the time?
Because this is something I was thinking about, not for drone camera use but for a more efficient and user-friendly workflow for surveying points out in the field, especially in cases where it's difficult to have a good enough connection between base and rover to make RTK feasible.

I was thinking about something like:

  1. switch on base & rover, take the rover with you
  2. at every survey point, put the rover GPS antenna in position for some time, then/meanwhile take a photo using a mobile phone
  3. when back at home, throw the base & rover log files as well as the photos into a folder and run a script that
    a) runs PPK, and
    b) finds the correct coordinate for each photo, based on the time when the photo was taken, and
    c) updates the EXIF info and/or produces a nice CSV file with file name and coordinates
    …
    or perhaps even, combines a) and b) and only does the PPK required to get the locations for the photos (and not for what could be hours of rambling around in between)

Of course this requires the GPS and mobile/camera clocks to be reasonably in sync, and/or a method of determining the offset between them.
For my use cases it would be acceptable to say that the rover has to be stationary for, say, 30 seconds at every point, which should make it a bit easier.
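To sketch what steps b) and c) could look like (not a finished tool: it assumes the exifread package for reading capture times, that the PPK result has already been loaded as a list of (time, lat, lon, height) tuples, e.g. with a loader like the one sketched earlier in the thread, and that the camera/GPS clock offset has been measured separately):

```
import csv
from datetime import datetime, timedelta
import exifread  # assumption: the exifread package is installed

def capture_time(photo_path, clock_offset=timedelta(0)):
    """Read the capture time from a photo's EXIF and shift it onto GPS time."""
    with open(photo_path, 'rb') as f:
        tags = exifread.process_file(f, details=False)
    t = datetime.strptime(str(tags['EXIF DateTimeOriginal']),
                          '%Y:%m:%d %H:%M:%S')
    return t + clock_offset

def write_point_csv(photo_paths, epochs, out_path, clock_offset):
    """Write one CSV row per photo: file name plus the nearest PPK coordinate."""
    with open(out_path, 'w', newline='') as out:
        writer = csv.writer(out)
        writer.writerow(['photo', 'lat', 'lon', 'height'])
        for p in photo_paths:
            t = capture_time(p, clock_offset)
            _, lat, lon, hgt = min(
                epochs, key=lambda e: abs((e[0] - t).total_seconds()))
            writer.writerow([p, lat, lon, hgt])
```

Writing the coordinates back into the EXIF itself would be an extra step on top of this (e.g. via exiftool), but the CSV already gives you the file-name/coordinate pairing.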

Is this (similar to) what you have developed?
If so then Iā€™d be very interested!

Thanks,

Tobi

a) you run the PPK
b) and c) get done by the script
Because you are not moving, the 2-3 second mismatch between the image EXIF time and real GPS time can be ignored (or corrected for roughly).
Simon

Anyone seen this???

I haven't translated it yet, but maybe this could be the "new" release they keep talking about?

Looks very interesting if it's true.

BUT, will the RTK be for flying accuracy like their other "RTK" kits? Or will it actually be used for geotagging?