Wanted to share my progress with Navio2, Reach and ROS!

A little background. A friend approached me about 6 months ago asking if I had any insight into how he might construct a "metal detecting" robot.

As a software developer (mostly web apps by day, microcontrollers and Cartesian robots by night), I thought this sounded like an amazing challenge!
He had procured two used wheelchairs and already had the metal detector gear. He brought them over; we tore down the chairs and assessed the situation.

Step 1, motor control.

We opted for a 25 A Sabertooth and were quickly able to get some basic movement with the motors and a potentiometer. Next, a simple 3-channel RC rig got it moving, and I had a ton of fun watching my kids chase the stripped-down base around.

Step 2, patrolling a field / recording data.

I was already familiar with ArduPilot and was pretty sure I just needed a basic flight controller paired with another Arduino for data logging. This gets you thinking about where the data is sent / received / processed! Routing everything through a simpler flight controller would have been more complex, and a Raspberry Pi was looking pretty good. My partner simultaneously stumbled upon some marketing info on Emlid's Reach products, and that pretty much sealed the deal on the Navio2. We figured the Raspberry Pi + Navio2 would become the motion-control part of the project, letting us run other robot apps on the Pi and later add Reach for more precision.

Step 3, recording / transmitting the data.

Being a full-stack web developer, my mind instantly went toward using a pub/sub or queue like RabbitMQ or MQTT. I decided on MQTT as it's specifically tailored toward telemetry data. I was easily able to construct a web app with React and Python: Python would receive MQTT packets and distribute WebSocket notifications to the frontend. Tying in Mapbox's API and Leaflet, I was able to plot my bot's travel and set marker points as messages were received.
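If you're curious what that bridge looked like, here's a minimal sketch using paho-mqtt and Flask-SocketIO. The topic name and payload shape are made up for illustration; this isn't my production code:

```python
# Minimal MQTT -> WebSocket relay sketch (topic name "rover/telemetry" is hypothetical).
import json

import paho.mqtt.client as mqtt
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def on_message(client, userdata, msg):
    # Re-broadcast each telemetry packet to every connected browser.
    payload = json.loads(msg.payload.decode("utf-8"))
    socketio.emit("telemetry", payload)

mqtt_client = mqtt.Client()
mqtt_client.on_message = on_message
mqtt_client.connect("localhost", 1883)
mqtt_client.subscribe("rover/telemetry")  # hypothetical topic
mqtt_client.loop_start()

if __name__ == "__main__":
    socketio.run(app, host="0.0.0.0", port=5000)
```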

It was all pretty brittle, though, and once I started adding ultrasonic distance sensors and integrating more sensor data, it was clear that the web wasn't the proper platform for that level of visualization.

Step 4, my searches pointed me straight at ROS!

I was definitely impressed by what I saw; it's an amazing pub/sub framework for robotics of all sorts, and to my amazement, the good folks at Emlid had it all there, ready and waiting for me to use. A bit of simple config and I had MAVROS (the MAVLink-to-ROS layer) talking to the bot. ROS works by sending messages to topics, and subscribers listen to those topics and react accordingly. I worked through several of the ROS examples and was able to read MAVLink messages and publish serial data from an Arduino. I used Python to create a ROS package that brings together several off-the-shelf plugins as well as my own publishers and subscribers.
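To give a flavor of the model, here's a minimal rospy subscriber (not from my repo; /mavros/state is the stock MAVROS topic):

```python
#!/usr/bin/env python
# Minimal ROS subscriber sketch: listen to MAVROS state messages.
import rospy
from mavros_msgs.msg import State

def on_state(msg):
    # Fires every time MAVROS publishes the FCU connection/mode state.
    rospy.loginfo("connected=%s mode=%s", msg.connected, msg.mode)

rospy.init_node("state_listener")
rospy.Subscriber("/mavros/state", State, on_state)
rospy.spin()
```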

Step 5, RViz

Next I had my eye on the visualization side of ROS. RViz can render a "URDF" file (an XML definition of the physical properties of your robot), so I assembled a series of links and joints describing my robot. My Arduino at this point was publishing rangefinder data, so I made sure the "frame IDs" matched the sensor locations defined in my URDF. After a few days, I had a 3D model of my bot on my screen, showing live rangefinder data! AMAZING!
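The key detail is that every sensor message carries a frame_id that has to match a link name in the URDF. Roughly like this ("sonar_front" is a hypothetical link name standing in for mine):

```python
#!/usr/bin/env python
# Sketch: publish an ultrasonic reading stamped with the URDF link it belongs to.
import rospy
from sensor_msgs.msg import Range

rospy.init_node("sonar_publisher")
pub = rospy.Publisher("/sonar/front", Range, queue_size=10)
rate = rospy.Rate(10)

while not rospy.is_shutdown():
    msg = Range()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "sonar_front"  # must match a link name in the URDF
    msg.radiation_type = Range.ULTRASOUND
    msg.field_of_view = 0.26  # ~15 degrees, typical for an HC-SR04
    msg.min_range = 0.02
    msg.max_range = 4.0
    msg.range = 1.0  # the real value would come from the sensor
    pub.publish(msg)
    rate.sleep()
```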

Step 6, RViz meets GPS

In the meantime, I took a bit and explored RTKLIB (which Reach uses under the hood) along with a cheap u-blox GPS module. This gave me a pretty good understanding of GPS / GNSS as well as RTK. At that point I was DROOLING to get a pair of Reach receivers, which I did. I coupled them with an OpenStreetMap plugin for RViz and can now see my bot over a map with ~1-2 inch accuracy!
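For anyone wondering about the plumbing: the general approach (not necessarily my exact setup) is to turn the receiver's NMEA stream into sensor_msgs/NavSatFix with a driver node such as nmea_navsat_driver, then point the map plugin at the fix topic. A trivial listener looks like:

```python
#!/usr/bin/env python
# Sketch: watch the GPS fix topic that the RViz map plugin also consumes.
import rospy
from sensor_msgs.msg import NavSatFix

def on_fix(msg):
    rospy.loginfo("lat=%.7f lon=%.7f status=%d",
                  msg.latitude, msg.longitude, msg.status.status)

rospy.init_node("fix_listener")
rospy.Subscriber("/fix", NavSatFix, on_fix)  # topic name depends on the driver
rospy.spin()
```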

Step 7, mapping 3D space!

The metal detector idea is neat, and I've proved it all out using a stepper to swing the arm and a USB audio interface on the Pi to read the frequencies from the detector. But I wanted more! I was quite happy to find that ROS plays very nicely with the Kinect (not the Kinect 2, though, because the Pi has no USB 3). With little more than some updates to my ROS module, I was able to get the Kinect to record depth maps of the world around my bot! This means that as the bot travels through the physical world, it's storing data about the things it "sees", which I can later use to introduce autonomy as I start to explore SLAM (simultaneous localization and mapping).
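Reading the depth stream is just another subscription once a Kinect driver (e.g. freenect_launch) is up. A rough sketch with cv_bridge; the topic name may differ per driver:

```python
#!/usr/bin/env python
# Sketch: pull depth frames from the Kinect driver.
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_depth(msg):
    # Convert the ROS image to a numpy array of per-pixel depths.
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
    rospy.loginfo("center depth: %s",
                  depth[depth.shape[0] // 2, depth.shape[1] // 2])

rospy.init_node("depth_listener")
rospy.Subscriber("/camera/depth/image_raw", Image, on_depth)
rospy.spin()
```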

My next steps aim to use all of these things to micro-map the world: taking geofence coordinates or other mission-planning data and allowing the robot to perform its sensor / mapping tasks with increased autonomy.

Show me the code!

The ROS / Arduino stuff can be found here: GitHub - billieblaze/ROS_Rover_addons: ROS addons and realtime web app for my Navio2 Based rover

/arduino = rosserial publisher for the ultrasonic rangefinders and a stepper motor
/launch = ROS launch files for the rover and ground station; pulls in the ROS dependencies
/scripts = Python scripts I've created for various functions of my bot

Some rough stuff for the web side that I haven’t touched in a while can be found here as well: GitHub - billieblaze/react-docker-kit: Example of using React and SocketIO with Flask and Celery workers in Docker containers

Some pics

Ground station case w/ Intel NUC and 10-inch screen, as well as video / networking / battery stuff

RViz displaying a depth map from the Xbox Kinect

MMM, brains… Navio2, Arduino + breakout board, stepper driver

Motor controller + AttoPilot + 12 V converter

Short video of the bot trying to hit some waypoints a few feet apart:

https://drive.google.com/file/d/0ByaOkVM8UM6tUVl0a3ZUajREbnc/view?usp=sharing

Thanks for checking it out! I’d love to talk more about it.


Wow, you did an amazing job building this bot! Found anything interesting below the ground? :slight_smile:

Haven’t taken it out much outside of my immediate area. I’ll let you guys know if I do! :slight_smile:

Hi Bill,
Thanks for your generous project share.
I have a project with almost the same setup, and you are much further ahead than me.
Can you share your hardware list, to make sure I haven't overlooked something?
Am I correct that you are using ArduRover for autonomous navigation?
Many thanks,
Sam

Correct, I plan on building autonomy through ROS / OpenCV.

Hardware list is as follows:

  • 2 × 12 V 35 Ah batteries
  • Sabertooth 25 A motor driver
  • 12 V regulator
  • Raspberry Pi / Navio2
  • AttoPilot battery monitor
  • Xbox Kinect
  • Arduino Nano
  • 3 × HC-SR04 distance sensors
  • 9-channel RC transmitter / receiver with SBus

That's everything bot / autonomy related.

Hi Bill

Thank you for your updates.

  • Do you know of any helpful websites on configuring the Sabertooth to work with ArduRover? I looked through the ArduRover docs and many other forums, but it's still not very clear.
  • I want manual and auto control for my rover (a track-based differential drive with a 2x32 Sabertooth and 2 × 12 V batteries). Do we need to configure the Sabertooth in Mission Planner or in ArduRover?
  • Is the Sabertooth an ESC? If it is, do we follow the Navio2 docs for the Sabertooth hardware setup?

Many thanks
Sam

Hi Bill,

Are you able to share how you are commanding the Navio2 via MAVROS?

What mode do you set ArduRover to? What version of ArduRover are you using? (There have been suggestions for me to go to version 3.2-rc1.)

How do you go about setting the mode?

Which MAVROS topic do you publish to in order to set waypoints / velocities?

Cheers

Esa,
Mavros is launched in “launch/rover.launch” in my github project. This is basically the same as what is discussed in “Using ROS” section of the Emlid wiki. MAVROS is launched and sets up a mav proxy between the bot and ROS. All typical parameter operations work the same as always w/ my ground station software (APM Planner or QGroundControl).

I'm using Emlid's Raspbian image as a starting point, so I think that's 3.1, maybe?

I haven't gotten to the point of publishing yet, but looking over the MAVROS docs (http://wiki.ros.org/mavros), it looks possible to load a NEW waypoint table, but not to add a single waypoint.

In my case, I expect the waypoints to be predefined and the bot to respond to changes in the map or environment. This will most likely happen by either publishing RC messages to the appropriate channel (see the sketch below) or finding another way to say "move over one foot".
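If I go the RC route, I'd expect it to look something like this (a guess, not working code; the channel count and semantics of /mavros/rc/override vary by mavros version):

```python
#!/usr/bin/env python
# Sketch: nudge the rover by overriding RC channels through MAVROS.
import rospy
from mavros_msgs.msg import OverrideRCIn

rospy.init_node("rc_nudge")
pub = rospy.Publisher("/mavros/rc/override", OverrideRCIn, queue_size=10)
rospy.sleep(1.0)  # give the publisher time to connect

msg = OverrideRCIn()
msg.channels = [OverrideRCIn.CHAN_NOCHANGE] * 8
msg.channels[0] = 1500  # CH1 (steering on my setup): centered
msg.channels[1] = 1600  # CH2 (throttle on my setup): gentle forward
pub.publish(msg)
```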

Here's the relevant bit from the MAVROS wiki:

6.2.1 waypoint: allows access to the FCU mission (waypoints).

Published topics:

  • ~mission/waypoints (mavros_msgs/WaypointList): the current waypoint table; updates on changes.

Services:

  • ~mission/pull (mavros_msgs/WaypointPull): request an update of the waypoint list.
  • ~mission/push (mavros_msgs/WaypointPush): send a new waypoint table.
  • ~mission/clear (mavros_msgs/WaypointClear): clear the mission on the FCU.
  • ~mission/set_current (mavros_msgs/WaypointSetCurrent): set the current seq. number in the list.
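Calling ~mission/push from Python might look roughly like this (an untested sketch; the coordinates are placeholders, and older mavros versions may not have a start_index field):

```python
#!/usr/bin/env python
# Sketch: replace the FCU's mission with a one-waypoint table via MAVROS.
import rospy
from mavros_msgs.msg import Waypoint
from mavros_msgs.srv import WaypointPush

rospy.init_node("mission_push")
rospy.wait_for_service("/mavros/mission/push")
push = rospy.ServiceProxy("/mavros/mission/push", WaypointPush)

wp = Waypoint()
wp.frame = Waypoint.FRAME_GLOBAL_REL_ALT
wp.command = 16          # MAV_CMD_NAV_WAYPOINT
wp.is_current = True
wp.autocontinue = True
wp.x_lat = 40.0000000    # placeholder coordinates
wp.y_long = -75.0000000
wp.z_alt = 0.0

resp = push(waypoints=[wp])
rospy.loginfo("pushed: %s (%d transferred)", resp.success, resp.wp_transfered)
```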

Hope that helps?

I was able to get the Sabertooth itself to work with a potentiometer at first (per its docs). Next I had to set the DIP switches. I'm doing mixing on my Sabertooth, so "skid steer" is OFF in the APM params. The exponential and response DIP switches seem to be a per-bot thing.

After that, it was a matter of finding the proper RC channels. On my RC, I use the right stick, so that wound up being CH1 + CH2. In APM Planner's params screen, I configured these to match up to throttle and roll, IIRC.

Configuring "in Mission Planner," as you put it, IS configuring ArduPilot. I did not have to reconfigure the firmware, if that's what you are asking.

The Sabertooth is essentially an ESC (it also has a digital input; not sure if that makes a difference). Hookup is pretty much the same: batteries into the Sabertooth, RC PWM from the Navio2 into the Sabertooth channel inputs, motors out. The params I touched are listed below.
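For reference, these are the params I believe are involved (Rover 3.1-era names, from memory; double-check them in your GCS before trusting them):

```
SKID_STEER_OUT = 0   # Sabertooth does the mixing, so ArduPilot outputs plain steer/throttle
SKID_STEER_IN  = 0   # RC sticks are normal steer/throttle, not left/right track inputs
RCMAP_ROLL     = 1   # steering on CH1
RCMAP_THROTTLE = 2   # throttle on CH2 (matches my right-stick setup)
```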

Hello,

I wanted to know how you managed to use RTK GPS with ROS.
Great project by the way…

Thanks.