How do Emlid receivers (or most GNSS receivers) determine True North when static, i.e. not moving, and in an autonomous solution?
For example, in Flow (or most survey software), the NORTH reference is normally UP on the display (unless you rotate it manually, of course). When measuring just one point statically, without moving at all, how does it know where geographic True North is so it can proceed to stake out a point?
When moving, I would assume the IMU (with accumulated drift) determines that, and/or a magnetometer referencing declination data for its location corrects from magnetic to True North. But what about when not moving?
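For what it's worth, the magnetic-to-true correction mentioned above is just an additive offset. A minimal sketch, assuming you already have a declination value for the receiver's location (e.g. looked up from the World Magnetic Model):

```python
def magnetic_to_true(magnetic_heading_deg, declination_deg):
    """Convert a magnetometer heading to a True North heading.

    declination_deg is positive when magnetic north lies east of
    true north at this location (WMM sign convention).
    """
    return (magnetic_heading_deg + declination_deg) % 360.0
```

So a magnetic heading of 350° with +15° of east declination would come out as a true heading of 5°.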
From what I understand, the RS3 with tilt compensation does NOT have a magnetometer (it uses a 6DoF IMU), unlike previous models (9DoF). That avoids magnetic interference from metal or magnetized objects so tilt compensation can work correctly.
Assuming no magnetometer, I thought things like handheld GPS units (or car navigation) averaged successive position samples/solutions and calculated a vector (bearing/orientation and speed) from there. I'm not sure whether that happens at the hardware level (e.g. in a u-blox chip) or at the software level, but I think that would be the way to do it.
Accuracy would obviously be low, but perhaps at the software level you keep a moving average over the last x seconds so that the "needle" isn't bouncing around?
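The approach described above (course over ground from successive fixes, smoothed over a short window) can be sketched roughly like this. This is only an illustration of the idea, not how any particular receiver firmware actually implements it; the window size and the great-circle bearing formula are my own choices:

```python
import math
from collections import deque

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from fix 1 to fix 2, in degrees from True North."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

class SmoothedHeading:
    """Course over ground averaged over the last few position fixes,
    so the displayed 'needle' doesn't bounce around between solutions."""

    def __init__(self, window=5):
        # Keep window+1 fixes so we have `window` consecutive segments.
        self.fixes = deque(maxlen=window + 1)

    def update(self, lat, lon):
        """Add a new fix; return the smoothed heading, or None until we have two fixes."""
        self.fixes.append((lat, lon))
        if len(self.fixes) < 2:
            return None
        # Average unit vectors, not raw angles, so headings near 0/360 don't wrap badly.
        sx = sy = 0.0
        pairs = list(self.fixes)
        for (la1, lo1), (la2, lo2) in zip(pairs, pairs[1:]):
            b = math.radians(bearing_deg(la1, lo1, la2, lo2))
            sx += math.cos(b)
            sy += math.sin(b)
        return math.degrees(math.atan2(sy, sx)) % 360.0
```

Feeding it a series of fixes drifting due east along the equator, for example, would converge on a heading of about 90°. Note this only works while moving: with the antenna stationary, successive fixes differ only by solution noise, so the derived "heading" is meaningless, which is exactly the limitation raised in the question.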
When stationary, we don’t account for true north in our internal calculations.
During a stakeout, the guidelines are generated from your current location (the rover) to the point you intend to stake out.
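In other words, the software only needs the bearing and distance between two known coordinates, which requires no sensor heading at all. A minimal sketch of that calculation, using a flat-earth (equirectangular) approximation that is more than adequate over typical stakeout distances (the function name and Earth-radius constant are my own, purely illustrative):

```python
import math

def stakeout_guide(rover_lat, rover_lon, target_lat, target_lon):
    """Return (bearing_deg, distance_m) from the rover's current fix
    to the stakeout point.

    Bearing is degrees clockwise from True North. Uses a flat-earth
    approximation, fine for short stakeout distances.
    """
    R = 6371000.0  # mean Earth radius, meters
    lat0 = math.radians((rover_lat + target_lat) / 2.0)
    # North and east offsets in meters from rover to target.
    dn = math.radians(target_lat - rover_lat) * R
    de = math.radians(target_lon - rover_lon) * R * math.cos(lat0)
    bearing = math.degrees(math.atan2(de, dn)) % 360.0
    return bearing, math.hypot(dn, de)
```

Both positions come from coordinates (the rover's fix and the stored point), so "which way is north" falls out of the math directly; the guideline on screen is just this vector drawn in a north-up frame.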
There is one such case: OPUS processing, which recommends rotating the receiver's NRP to face True North. In that scenario, facing the receiver in the same direction during all measurements compensates for some of the antenna phase center offset, which is rarely more than a couple of millimeters.