DIY Self Driving - Part 6. Software in depth


Please Note:  Unlike most of my projects which are fully complete before they are published, this project is as much a build diary as anything else.  Whilst I am attempting to clearly document the processes, this is not a step-by-step guide, and later articles may well contradict earlier instructions.

If you decide you want to duplicate my build, please read through all the relevant articles before you start.

Until now this project has dealt with the mechanical and electrical components.  I have been putting off the software for a while, because many of the ideas are still forming in my mind.  But with everything else having reached a functional state, it is time to consider the overall software environment.

Software Goals

The overall goal is very simple: to control the steering and acceleration of the car so that it follows the road in front of it without colliding with other objects.

Vision

I think most will agree this is the most exciting part of the project.  It is probably also the most complex.  So I will start here, and save the boring stuff for later.

The vision component is tasked with the following:
  • Capture an image of the "road" ahead of the car
  • Stabilise and reduce noise in the image
  • Reduce the image content to find the road edge
  • Determine the steering input required to centre the car in the image
The capture component relies on the Video for Linux (V4L) libraries to handle the camera and import a video stream one frame at a time.  This is a stable and well-known interface, and very little configuration is required.
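
To give an idea of what the capture end looks like, here is a minimal sketch using OpenCV's VideoCapture over its V4L2 backend.  The device index and the loop body are placeholders; the real capture code lives in the videostream source.

    #include <opencv2/opencv.hpp>

    int main()
    {
        // Open the first camera through the V4L2 backend (device index 0 is an assumption)
        cv::VideoCapture cap(0, cv::CAP_V4L2);
        if (!cap.isOpened())
            return 1;                      // no camera found

        cv::Mat frame;
        while (cap.read(frame)) {          // pull the stream one frame at a time
            // ... hand the frame to the reduction pipeline described below ...
        }
        return 0;
    }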

Since our car has no damping at all, there is a good chance that images will be distorted by vibration.  This is as much a function of camera performance as anything.  If this is a problem for your car, you can use the OpenCV stabilisation APIs to generate a new image by comparing a set of previous frames.  The problem with this approach, however, is that it takes time, and when the car is on the move that is a luxury we simply don't have.

Reducing the image takes a number of steps:
  • Conversion to greyscale, which removes complex colour information
  • Histogram Equalisation.  This increases the contrast of the image, resulting in a greater difference between dark and light regions (explanation)
  • Smoothing via Gaussian Blur.  This removes detail around the edge of objects, giving overall clearer lines to follow (explanation)
  • Apply a threshold.  Every pixel with a value greater than the threshold of 125 is assigned a value of 255.  Essentially the image is reduced to two values (explanation)
  • Erode the image to make the remaining edges thicker and easier to find (explanation)
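To make those steps concrete, here is a rough sketch of how they map onto OpenCV calls.  The function name, blur kernel size and erode iterations are my illustrative choices rather than what the videostream code necessarily uses; the threshold of 125 matches the description above.

    #include <opencv2/opencv.hpp>

    // Reduce a captured frame to a two-level edge image (illustrative sketch)
    cv::Mat reduceFrame(const cv::Mat &frame)
    {
        cv::Mat grey, blurred, binary;
        cv::cvtColor(frame, grey, cv::COLOR_BGR2GRAY);                // drop the colour information
        cv::equalizeHist(grey, grey);                                 // stretch the contrast
        cv::GaussianBlur(grey, blurred, cv::Size(5, 5), 0);           // smooth away fine detail
        cv::threshold(blurred, binary, 125, 255, cv::THRESH_BINARY);  // everything above 125 becomes 255
        cv::erode(binary, binary, cv::Mat(), cv::Point(-1, -1), 2);   // thicken the remaining dark edges
        return binary;
    }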
Having converted the incoming frame to little more than a collection of edges, the software attempts to find the midpoint between vertical edges, and then calculates how far the centre point of the frame is from that midpoint.

Based on the centre offset value, the software determines whether left or right steering input is required, and the command is sent to the Arduino for execution.
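
The midpoint search and the steering decision boil down to something like the sketch below.  The row that is scanned, the deadband and the command letters are all assumptions for illustration, not values from the real code.

    #include <opencv2/opencv.hpp>

    // Scan one row of the reduced frame for edge pixels and decide which way to steer
    char steeringCommand(const cv::Mat &binary, int row, int deadband = 10)
    {
        int left = -1, right = -1;
        for (int x = 0; x < binary.cols; ++x) {
            if (binary.at<uchar>(row, x) == 0) {   // edge pixels are black after thresholding
                if (left < 0) left = x;            // first edge from the left
                right = x;                         // last edge seen so far
            }
        }
        if (left < 0) return 'c';                  // no edges found: hold course

        int midpoint = (left + right) / 2;
        int offset   = midpoint - binary.cols / 2; // distance of the road centre from the frame centre

        if (offset >  deadband) return 'r';        // road centre is to the right: steer right
        if (offset < -deadband) return 'l';        // road centre is to the left: steer left
        return 'c';                                // close enough: keep the steering centred
    }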

It is important to remember here that the camera needs to be located at the front, along the longitudinal centre of the car.  If the camera is significantly off-centre it will be necessary to offset the steering value to compensate.

All the code is written in C++ because it is fast and it is the language OpenCV itself is written in.  This cuts down the layers between us and the library functions and offers a small performance improvement over, say, Python (I like Python, by the way).  Another reason for using a compiled rather than an interpreted language is that a good portion of the error detection (type and other semantic errors included) happens at compile time, so at least those errors won't get loaded into the car.

When developing the code I found it really useful to be able to see what the software was working out, so I added some code to display the final image and provide an overlay on the original.  Since there is no monitor attached to the Raspberry Pi it is not practical to run it this way on the car, but the code will compile under Linux, Windows and macOS, so you can define _GUI in the Makefile and run it on a graphics-equipped machine.
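
The switch is nothing more than a compile-time guard around the display calls, along these lines (the function and window names are placeholders, not the actual videostream code):

    #include <opencv2/opencv.hpp>

    // Debug display, only active when _GUI is defined (e.g. -D_GUI in the Makefile)
    void showDebug(const cv::Mat &original, const cv::Mat &reduced)
    {
    #ifdef _GUI
        cv::imshow("original", original);   // the raw frame with the overlay drawn on it
        cv::imshow("reduced", reduced);     // the two-level edge image
        cv::waitKey(1);                     // let HighGUI service its event loop
    #else
        (void)original;                     // keep the compiler quiet when running headless
        (void)reduced;
    #endif
    }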

This code is contained under the "videostream" directory in the source.

Controls

Initially I wanted to run all the functions from the Raspberry Pi but I discovered in early testing that there were performance issues, particularly around controlling the steering.  I could have changed the Pi for an Odroid XU4 or similar, but in the end I decided that providing a dedicated controller for motion made more sense.  I settled on an Arduino Uno, because it has sufficient I/O and well-proven libraries for the devices I needed to control.

The Arduino is responsible for the following functions:
  • Motor speed and direction of rotation
  • Steering servo control
  • Ultrasonic ranging
The motor and servo functions are very simple and came straight from past robot projects.  They also have the advantage of being static from a software perspective: when the Arduino code sets an output or servo position, no refreshing is required.  The attached devices will remain signalled until the software changes the I/O pins.
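
A minimal sketch of that "set and forget" behaviour is shown below, using the standard Servo library.  The pin numbers are placeholders rather than the values in steering_hw.h.

    #include <Servo.h>

    const int MOTOR_PWM_PIN = 5;   // placeholder pins for illustration only
    const int MOTOR_DIR_PIN = 4;
    const int SERVO_PIN     = 9;

    Servo steering;

    void setup()
    {
        pinMode(MOTOR_PWM_PIN, OUTPUT);
        pinMode(MOTOR_DIR_PIN, OUTPUT);
        steering.attach(SERVO_PIN);
    }

    // Once set, the outputs stay put until the next call changes them
    void setDrive(int speed, bool forward)
    {
        digitalWrite(MOTOR_DIR_PIN, forward ? HIGH : LOW);
        analogWrite(MOTOR_PWM_PIN, speed);   // 0-255 duty cycle
    }

    void setSteering(int angle)
    {
        steering.write(angle);               // 0-180 degrees; the servo holds this position
    }

    void loop() {}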

Ultrasonic ranging operates only when commanded and uses the NewPing library.  Eventually I intend to get the videostream logic to spot obstacles, but in the interests of getting something moving I have left this task on the Arduino for now.  The logic for rangefinding is very simple.  If an object is closer than 15cm to the sensor, the Arduino raises its US_OUT pin (refer to steering_hw.h).  This serves as a trigger to the Raspberry Pi to send a "stop" command.
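
The ranging check itself is only a few lines with NewPing.  The pin assignments and maximum range below are illustrative; the real values are in steering_hw.h.

    #include <NewPing.h>

    const int TRIG_PIN   = 7;    // placeholder pins for illustration only
    const int ECHO_PIN   = 8;
    const int US_OUT_PIN = 12;   // raised when something is too close
    const int MAX_CM     = 200;  // don't look further than this
    const int STOP_CM    = 15;   // raise US_OUT inside this distance

    NewPing sonar(TRIG_PIN, ECHO_PIN, MAX_CM);

    void setup()
    {
        pinMode(US_OUT_PIN, OUTPUT);
    }

    // Called only when the Raspberry Pi has asked for a range check
    void checkRange()
    {
        unsigned int cm = sonar.ping_cm();   // 0 means no echo within MAX_CM
        digitalWrite(US_OUT_PIN, (cm > 0 && cm < STOP_CM) ? HIGH : LOW);
    }

    void loop() {}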

The software on the Arduino is a simple interpretive loop.  Using a serial port (at 115200bps) the Raspberry Pi sends commands to the Arduino for processing.  Each command is a single character and is decoded using a simple switch construct.
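
In outline the loop looks like the sketch below.  The command letters here are placeholders, not the actual command set.

    void setup()
    {
        Serial.begin(115200);                // matches the Raspberry Pi end of the link
    }

    void loop()
    {
        if (Serial.available() > 0) {
            char cmd = Serial.read();        // one character per command
            switch (cmd) {
                case 'f': /* drive forward  */ break;
                case 's': /* stop the motor */ break;
                case 'l': /* steer left     */ break;
                case 'r': /* steer right    */ break;
                default:  break;             // ignore anything unrecognised
            }
        }
    }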

Also built into the Arduino code is a "calibrate" routine for the steering.  When an input is taken low, the drive motor is disabled, and the steering is controlled from the keyboard of an attached computer using "." for right and "," for left.  Once the limits of travel have been determined, the values are stored in EEPROM on the Arduino.  This also helps us correct for any misalignment in the steering components.
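
The essence of the calibrate routine is below.  The save commands, EEPROM addresses and step size are assumptions; the real routine is in the Arduino source.

    #include <Servo.h>
    #include <EEPROM.h>

    const int SERVO_PIN  = 9;    // placeholder pin
    const int ADDR_LEFT  = 0;    // EEPROM address of the left limit (assumed)
    const int ADDR_RIGHT = 2;    // EEPROM address of the right limit (assumed)

    Servo steering;
    int position = 90;           // start near the centre

    void setup()
    {
        Serial.begin(115200);
        steering.attach(SERVO_PIN);
        steering.write(position);
    }

    void loop()
    {
        if (Serial.available() > 0) {
            switch (Serial.read()) {
                case ',': position -= 1; break;                    // nudge left
                case '.': position += 1; break;                    // nudge right
                case 'L': EEPROM.put(ADDR_LEFT, position); break;  // store the left limit
                case 'R': EEPROM.put(ADDR_RIGHT, position); break; // store the right limit
            }
            steering.write(position);
        }
    }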

You can (and I think should) test the software platform in a number of phases.  Because the physical control has been offloaded onto the Arduino it is easy enough to use the serial monitor function of the Arduino IDE to communicate.

Test each function by sending the relevant command over the serial monitor.  Remember that on a Mac or Linux machine (and probably Windows, I just haven't tried) you need to click the "Send" button after each command character.

Once you are happy that the controls work as expected, attach a USB-serial interface to the Raspberry Pi, monitor the commands coming out of it, and make sure they are what you expect.


If you are following my design there are a few important things to remember before you connect the Raspberry Pi to the Arduino:

  1. Disable the serial port as a console.
  2. Add your user (probably pi) to the "dialout" group.
  3. Ensure the camera is connected.
  4. Restart the Raspberry Pi.
On the Arduino side:
  1. Since the UNO has only a single serial port, do not connect the serial lines from the Raspberry Pi to the UNO's serial port while the USB cable is plugged in.  I don't think it will damage anything but you can be sure nothing is going to work right.



Further Reading
Part 1 - Introduction
Part 2 - Preparing the car
Part 3 - Wiring Harnesses
Part 4 - Steering
Part 5 - Sensors
Part 7 - Final assembly and testing


 
