Raspberry Pi OSD using NAVIO+

Hi, my name is George and I would like to present my design thoughts for a new OSD (a revolutionary one too, I want to believe) using the Raspberry Pi and the NAVIO+ board.

The concept is very simple.
An OSD displays information on the screen in real time (or near real time). All info should be displayed on time (with the least delay/lag possible) and should be clearly visible under any circumstances (pointing at the sun is not a good shot for OSDs).

Using colors can also help draw the pilot's or FPVer's attention to warnings, or change the way the system interacts with the camera view.

But first let me tell you how I got this idea.
When I first read about the NAVIO+, the part that caught my attention was "…leaving spare CPU resources to use the Pi for other uses like map making, image analysis etc."
So this triggered me to start finding uses for the info gathered by the NAVIO+ and the spare Pi resources.

So here we are, and without further delay, below are the draft concept steps for the software OSD's functionality:

The Pi has a camera interface used by the Pi Cam. This interface and camera are proprietary to Broadcom, which makes the Pi's SoC, but they are very versatile. The Pi board also has a composite video (RCA) output for TV.

With only the above, we have the Pi Cam translated to an analogue signal ready to be fed to, and transmitted by, the RF AV transmitter.

Now we need to overlay the OSD info on the Pi Cam "stream", and this composite will come out of the analogue port of the Pi board. The info either already exists or can be calculated from the NAVIO+ data, and the Pi GPU will take care of the rest.
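As a rough sketch of how such a GPU overlay could look (hedged: this assumes the `picamera` Python library on a Raspberry Pi; the buffer-padding helper and resolution are my own illustration, not part of the original design):

```python
# Sketch of a software OSD overlay on the Pi Cam preview.
# Assumption: the "picamera" library on a Raspberry Pi. The padding
# helper below is plain Python and reflects picamera's documented rule
# that overlay buffers are padded to multiples of 32 (width) x 16 (height).

def padded_size(width, height):
    """Round a resolution up to picamera's overlay buffer alignment."""
    pad_w = (width + 31) // 32 * 32
    pad_h = (height + 15) // 16 * 16
    return pad_w, pad_h

def make_osd_buffer(width, height):
    """Build a fully transparent RGBA buffer sized for add_overlay()."""
    pad_w, pad_h = padded_size(width, height)
    return bytearray(pad_w * pad_h * 4)  # all zeros = transparent

if __name__ == "__main__":
    # Hardware-only part; runs on a Pi with a camera attached.
    import picamera
    with picamera.PiCamera(resolution=(1280, 720)) as camera:
        camera.start_preview()  # GPU renders the camera to the video out
        buf = make_osd_buffer(1280, 720)
        # ... draw OSD text/graphics into buf here ...
        camera.add_overlay(bytes(buf), size=(1280, 720),
                           layer=3, format='rgba')  # GPU composites on top
```

The pure helpers can be reused no matter how the OSD graphics are eventually drawn.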

Pros:
The OSD does not rely on an IC with limited or very complex capabilities. The OSD will be a pure software overlay, which means graphics, images, colors etc.
The OSD can provide real flight aids such as virtual landing runway patterns (a virtual air route to guide you through landing while checking speed, AOA and vectors) and an ILS-like approach.
-TO DO- more pros to come

Cons:
Currently I can't find any cons… if you can, please post it here so we can discuss it.
I will need any possible feedback in order to finalize the design as it will drive the entire software lifecycle.
This will also decide the future programmability of the software.

Regards G. aka “Thanatos”


Hi George!
I am interested in a software solution as well. Board-based OSDs have performed unreliably in my Pixhawk flights.

The HUD and moving map displayed by Mission Planner have been very helpful for me at times. As I watch the flight through a headset, it would be nice to have the moving map overlaying the fuselage when I look forward or down (toward the nose of the aircraft). That would allow obstruction-free viewing when looking around; then, when the camera points at the nose, a green-screen effect or a head-tracker position range would cause the Mission Planner display (laptop screen) to feed in.

It would be great to have this on the Pi and skip the laptop and Mission Planner altogether!

Regards,
Justin


At the basic level this has already been done with the RasPi camera streamed over WiFi combined with the APM HUD overlay. But I think you do have an interesting twist on this, talking about a low-latency analog link, using the RasPi to provide, in effect, an enhanced MinimOSD.

That in itself is very interesting and I’m still waiting to see (or just do myself) a good test and comparison between FPV over Wifi compared to “native” analog FPV. Because the biggest issue is latency.

Technically I think it’s better to add OSD to the Wifi/digital stream, because you can re-use one of the standard streaming protocol container formats to send the OSD graphics/text like subtitles on a DVD movie, then split and re-mix them later as you wish, like a pure data log next to live video.

With the analog solution all you are really doing is “looking” at the TV output of the RasPi which is displaying the camera output plus some graphics on top. If the latency is lower than Wifi this makes total sense. If not, it’s a waste of time unless you have a specific requirement for the FPV transmitter/frequencies.

I like that idea, but I wonder how much latency will be introduced by having the whole A2D -> D2A going on in the RasPi compared to a direct analog video signal with a dedicated OSD overlay chip (MinimOSD). I think it will be hard to get the same performance. Either way it’s well worth the effort to try and prove this concept!


While you are developing this virtual OSD, you can use the much less complex yet still quite useful option in Mission Planner to stream live video to your HUD display.

http://vps.oborne.me/gcs/Mjpeg%20Video%20Source%20with%20VLC.htm

I am getting virtually real-time response from mjpg-streamer; just set the stream as the source for the HUD and you are good to go.


p.s. It would be quite funny if the program crashes and you see the Linux command prompt as your UAV crashes :sunglasses: For robust usage it might also make sense to write a utility that sends a MAVLink failsafe mode switch when an important Linux process stops or fails to send some kind of heartbeat.
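A minimal sketch of such a watchdog (hedged: the heartbeat-file path and timeout are my own illustration, and the pymavlink mode-change calls are only hinted at in comments, not taken from this thread):

```python
# Watchdog sketch: detect when a monitored process stops "heartbeating".
# Assumption (illustrative, not from the original post): the OSD process
# touches a file every cycle; we check its modification time.
import os
import time

HEARTBEAT_FILE = "/tmp/osd_heartbeat"   # hypothetical path
TIMEOUT_S = 2.0

def is_stale(last_beat, now, timeout=TIMEOUT_S):
    """Pure decision logic: has the heartbeat been missing too long?"""
    return (now - last_beat) > timeout

def trigger_failsafe():
    # Hedged: with pymavlink one could request a mode change, e.g.:
    #   from pymavlink import mavutil
    #   link = mavutil.mavlink_connection('udpout:127.0.0.1:14550')
    #   link.wait_heartbeat()
    #   link.set_mode('RTL')
    print("FAILSAFE: OSD heartbeat lost")

def watch_once(now=None):
    """One check cycle; returns True if the process looks alive."""
    now = time.time() if now is None else now
    try:
        last = os.path.getmtime(HEARTBEAT_FILE)
    except OSError:
        last = 0.0  # file never written: treat as stale
    if is_stale(last, now):
        trigger_failsafe()
        return False
    return True
```

Keeping the staleness test pure makes the safety-critical decision easy to unit-test separately from the filesystem and MAVLink plumbing.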


Hi guys,

Thank you very much for your comments.

@CodeChief I am aware of the WiFi stream "concept", and latency is the issue. It is very difficult to overcome due to communication protocol dependencies. TCP/IP and UDP are too "noisy", and by noisy I mean they talk a lot on the network medium, not to mention the several ms of delay added by WiFi packetization and encryption.

The signal path from the camera to the analog output of the Pi is near real time (if not real time). It certainly has less latency than hardware OSD implementations, as their ICs use buffers to store the overlay and do the syncing.
Also put capacitance into the equation, and you will find that what you see in a traditional FPV setup is at least 10 frames behind (what you see is 400 ms in the past).
Using WiFi and the Raspberry Pi Cam, a video stream with 100 ms latency has been achieved… nearly 3 frames in the past.

Regarding the conversion delay for the Raspberry Pi Cam, I can only say that the camera is not an IP camera. The camera interface is an ultra-fast interface with a stream bandwidth of 2 Gbps over 15 pins, connected directly to the SoC. So no DAC (digital-to-analog conversion) sits between the camera and the SoC, and the SoC's onboard GPU drives the analogue RCA output directly. So any latency that could have existed… does not exist anymore. :wink:

Regarding the observation about software crashing on the Pi: real-time operating systems like those running with the NAVIO+ give you the opportunity to recover from any bad condition, as long as real-time software methodology is used. Not to mention that software which is designed correctly and manages memory with care will never crash due to a software fault or memory leak. Other causes of crashes apply equally to all OSDs, software or hardware. BTW, I haven't seen the ARM processor running a Linux/FreeBSD real-time OS on a Mirage 2000-5 crash the OSD because of data on the CompactFlash card that holds the mission data. :slight_smile:

@aquila j Thank you for the pointer. I am aware of the virtual OSD offered by Mission Planner. Let me give you some cons, always from my point of view.

  1. You need a stream of data to pass the values to the ground station, and there is latency added by the OS and the protocol used. Not much, but more than a pure analogue OSD.
  2. Values added to the OSD are stacked on the left side of the view, and personally I couldn't find a way to adjust the positions of labels and values.
  3. Adding video streaming to the WiFi network shared with the UAV causes some pain, as the network is flooded due to the priority of the video streaming packets. This delays the "real-time" value stream to the ground station that feeds the OSD (see point 1).

There is an alternative solution: feed the ground station with video and leave your WiFi free for flight data. But you need a separate link for the video.

In practice, this is what I want to achieve: use fewer links on the UAV and utilize 100% of the resources. That's what I do for a living. You can have just two links to your UAV, the RC transmitter and the video link, to handle all requirements and at great distance, including the telemetry data for your ground station.

I can suggest a better approach for your streaming solution if you want.

Regards G.

@Thanatos, latency is almost undetectable to me, but by all means tell me your better approach.

this is my setup in a nutshell:

I am currently using UDP for telemetry and mjpg-streamer via HTTP for video, which I route to the HUD.

I have a WRT54G (DD-WRT) with an 8 dBi patch antenna and a 2 W amp on the ground. In the air I have a 7 dBi whip with a 2 W amp attached to my WiFi USB dongle.

On a side note can you tell us what you are currently flying? What is your current streaming setup?

You can double-click the HUD cell/window in Mission Planner and it will pop out, and you can resize/position it as you wish, FYI.


Use an analogue RF AV transmitter on the UAV.
Get the signal on the ground via a receiver.
Plug the output of the receiver into a USB TV dongle with RCA input.
The input will be visible as a USB device in the ground station's AV source list.

Some comments on your RF setup. There is too much power on both ends of the WiFi link. If I were you, I would trade the amplifier power for a better antenna with higher gain. When you amplify a signal you also amplify the noise around it, so the S/N (signal-to-noise) ratio drops rapidly. The scale is logarithmic: a change of 3 dB means a doubling (or halving) of power, and amplification usually costs you noise margin. If you want to increase the transmission distance, choose a medium that is low in frequency and a modulation scheme with less attenuation over distance, like the UHF band with FM/FSK modulation.
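For reference, the decibel arithmetic behind these arguments can be sketched in a few lines (nothing assumed beyond the standard definition dB = 10·log10(P1/P0); the link-margin helper and its parameter names are my own illustration):

```python
# Decibel <-> power-ratio helpers (standard definitions).
import math

def db_to_power_ratio(db):
    """+3 dB ~ double power, +10 dB = 10x power."""
    return 10.0 ** (db / 10.0)

def power_ratio_to_db(ratio):
    return 10.0 * math.log10(ratio)

def link_margin_db(tx_power_dbm, tx_gain_dbi, path_loss_db,
                   rx_gain_dbi, rx_sensitivity_dbm):
    """Illustrative link-budget sum: gains/losses in dB simply add.
    Returns received power minus receiver sensitivity."""
    rx_power = tx_power_dbm + tx_gain_dbi - path_loss_db + rx_gain_dbi
    return rx_power - rx_sensitivity_dbm
```

For example, 20 dBm TX with 8 dBi and 7 dBi antennas, 100 dB path loss and a -90 dBm receiver leaves a 25 dB margin; swapping a 2 W amp for 3 dB more antenna gain changes the margin by the same amount without amplifying noise.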

UDP for data and HTTP for video is a bit of a paradox. UDP is a streaming protocol that does not guarantee delivery of packets like TCP does.

I would choose TCP for data, and UDP or a higher-level streaming protocol for video. Good protocols are those used by XBMC (Kodi) to stream live video… but I'm sorry, I don't remember their names.

My setup is mainly cheap: a Bix 3 with an EzUHF transmitter at 600 mW, and a 600 mW AV transmitter with receiver. Cloverleaf antennas on the AV set with 2-4 dB gain. The RC has the normal antenna on the transmitter, with 2 dB diversity antennas on the receiver.

Nothing serious, but I am starting to build up now. Unfortunately, due to capital controls here in Greece, ordering parts from the internet is still prohibited.

My calculations and tests showed at least 3 km of safe range with everything set up as it is. Tuning is definitely possible; I will start with the antennas to gain more signal and eliminate SWR as much as possible, and an antenna tracking system will make the range take off.

http://www.amazon.com/Micropac-Usb-avcpt-Sabrent-Creator-High-quality/dp/B002H3BSCM

This is the USB TV dongle I was telling you about.

For me, next is the purchase of a NAVIO to get on with the project.
After that comes the OSD software with data transmission over the video link.
There are several ways to get data over the video other than via a modem on the audio channel; there are several ways you can achieve this.

Regarding the ground station window, I meant that the values on the display are not placed where you like. Of course the main window is detachable from the ground station. Sorry if my English gave the argument another meaning.

@Thanatos, lol, thanks for the solutions to the problems I am NOT having.

No worries aquila j… YOU asked for them :slight_smile:
I only created this post for the community regarding the soft OSD.


@Maingear

Map overlay and/or mission map overlay on the OSD is a great idea.
The only "bad" thing about it is that we have to preload the Google Maps area onto the Pi before the flight.

Also, a minimum-info display for landings and a takeoff alert inhibit will be implemented.

I assume a transparent map display is preferable.

I will certainly dedicate a channel to this function in the current design.

I think augmented reality (i.e. map overlay and beyond) is going to be a very strong feature of both methods.

One last thing on the WiFi subject: the reason UDP is used makes total sense, because that is the MAVLink notification pattern anyway, e.g. heartbeats and message updates. It doesn't matter if you miss one, because you'll catch the next update, which arrives many times a second anyway. The broadcast goes outward from the drone to interested parties. Of course the commands going back TO the drone are targeted TCP messages, which could easily be configured to have absolute priority, and by default they seem to be dealt with pretty well already.
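The "miss one, catch the next" pattern for periodic status can be sketched as pure logic (hedged: the simple integer sequence numbers here are my own illustration; real MAVLink has its own per-message sequence field and framing):

```python
# Loss-tolerant consumer of periodic status updates: keep only the
# newest value, count drops for diagnostics. Illustrative only; not
# actual MAVLink parsing.

def consume_updates(received):
    """received: list of (seq, payload) datagrams, possibly with gaps
    or reordering. Returns (latest_payload, dropped_count)."""
    latest_seq = -1
    latest_payload = None
    seen = 0
    for seq, payload in received:
        seen += 1
        if seq > latest_seq:          # old/out-of-order packets ignored
            latest_seq = seq
            latest_payload = payload
    dropped = (latest_seq + 1) - seen if latest_seq >= 0 else 0
    return latest_payload, dropped
```

With updates arriving many times a second, dropping packet 2 out of 0-3 costs nothing: the consumer still ends up holding the newest state.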

Regardless of transmission style, if you create some useful code libraries (e.g. video overlay routines, an OSD formatting and positioning system/designer tool), please split them off a bit from your preferred analog video design; then they could be re-used for WiFi streaming too :wink:

I’d like to see a new standard arise, as MinimOSD is way too old-fashioned and limited. There are a few people playing around with vector-graphics-capable MinimOSD replacements, but they are still specific hardware solutions which are nowhere near manufacturing and probably won’t be for a long while. Making the transition to onboard computer-generated OSD makes sense (regardless of transmission method). Here are those links anyway; maybe you’ll find something useful in their source code when available:

Look at the recent posts of the last link for lots of other links to new OSDs… Seems to be a hot topic right now!

UDP is an old protocol, mainly for unreliable connections or data you don't care whether it reaches the destination. The heartbeat is a very serious packet, and believe me, you do care about receiving it.
What happens when you don't receive a heartbeat "many times a second anyway"? What is your TTL?

Good things in life and in software development are those that can withstand time. "It works so far" is never enough when you code.

So we are talking about latency, and we don't care if we fail to receive data such as the coordinates? Isn't that a failure in the system? CodeChief, I am convinced that you are overreacting about latency and a possible software crash (although that problem is solved), while not caring what might happen if you don't receive necessary data… a flip of death, maybe?

Redirecting the video (pure video, OSD or both) is a matter of one line of code. No worries about that.
This software will be based mainly on data input from the NAVIO+, but it will be able to receive data from other platforms with RPi-compatible interfaces and protocols… like the Pixhawk etc.

Thank you very much for the links.

I prefer another style for displaying the OSD and its changes on the screen. Vector is a bit old, and you always rely on specific hardware to implement it.

@Thanatos, CodeChief was telling you how UDP works in this scenario. Get it? It works! No need to fix it if it's not broken, lol.

Nobody is suggesting mission-critical data is ignored. You're using the same old technology with TCP, don't you know?! You're a bit too prescriptive in your responses; we are all experts in our own fields.

The benefit of UDP is ease of configuration and the ability to "tune in" from multiple ground stations (phone/tablet/etc.), like you can with FPV monitors and goggles. That's a bonus for the average guy who wants a flexible solution, and by no means wrong! The same guy doesn't care if the position/battery voltage update misses a tick or two in the GCS updates. What's important is that when a critical condition occurs, the APM still makes its own (onboard) decision to failsafe, and that the overall status is still within a few seconds so the pilot can decide to send an override command/control input.

Of course the video downlink and control uplink must be fast enough to react. That's why latency is an important part of the design choice. Even with analog FPV you'll see this discussed as one of (if not the) main performance characteristics when comparing products. Try FPV racing and you'll know what I mean :wink:

As you talk about processing all data immediately on the drone, what mission-critical data do you need on the GCS then (which is so bad to send via UDP)? In that case I actually think you need a different protocol than MAVLink, because MAVLink is a "fire-and-forget" model which sends the status at regular intervals anyway.

For more directed notification, with retries and perhaps guaranteed delivery, either TCP or, better, a message-queuing technology is of course required. I'm well experienced in that area; in fact I do it professionally!!! Something similar has already been done by a guy on this forum using a .NET Azure service. I will also be building something similar in the future. But for now MAVLink is THE protocol to quickly set up and interact with all "APM compatible" devices, good or bad as it is.

Regarding control/critical messages, I think none of us is so silly as to broadcast, or to choose a mechanism with no retries; i.e. we use direct RC transmitter control (including mode switch/mission commands/failsafe switch) or TCP. All standard stuff.

Anyway good luck with whatever you are building. It would be good if you could do a write-up/blog/video and share your results. Then we can see what is so different in your network/radio design.


@aquila j chill out… I am not going to touch your systems. Everyone has the right to build theirs as he/she likes.
BTW, I am not saying that I will use TCP or any other protocol that exists in the universe.

@CodeChief I am not going to start a conversation about the pros and cons of protocols… but MQ? What am I transferring… bank transactions? :smile:

To be honest, regarding FPV racing… I have only watched it live and never piloted such a bullet.

To cut a long story short: SoftOSD is not something to purchase, for obvious reasons:

  1. It is not for sale.
  2. It will be public.

I just asked for cons, and for features that the community would like to have and see in an OSD… for what is missing.
Leave the delays, latencies and transmission protocols to the design phase and the correction phase of the project.

Thank you.

I am currently between two choices regarding the frame within which the horizon gimbal will move:

The gimbals frame can be either circle or rectangle.

Circle frame:
While turning and diving or climbing, part of the gimbal is hidden due to the circularity of the frame.

Rectangle frame:
While turning and diving or climbing, the gimbal is not hidden, because the frame edges stay perpendicular to the gimbal.

In a real HUD, the frame of the horizon gimbal is circular due to the lens effect on the projection screen.
As a base idea I am using the Mirage 2000-5 HUD design, which also shows the engine throttle level on the edges of the horizon frame, with >| |< markers going up and down.

Any suggestions?
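The visibility difference between the two frame shapes can be sketched with simple point-in-frame tests (hedged: this is purely my illustration of the geometry being discussed; the sizes and function names are arbitrary, not from any OSD implementation):

```python
# Circle vs rectangle HUD frame: where does a pitch-ladder endpoint get
# clipped? Illustrative geometry only; coordinates are relative to the
# frame center, sizes are arbitrary.
import math

def visible_in_circle(x, y, radius):
    """A point survives clipping only inside the circle."""
    return math.hypot(x, y) <= radius

def visible_in_rect(x, y, half_w, half_h):
    """A rectangle keeps the full width even near the corners."""
    return abs(x) <= half_w and abs(y) <= half_h
```

For example, a ladder endpoint at (90, 90) during a banked climb is clipped by a circle of radius 100 (distance ≈ 127 from center) but remains visible in a 200x200 rectangle, which is exactly the trade-off between the two frames described above.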