Deep Racer - A first look

Today, Amazon Web Services (AWS) announced the AWS "Deep Racer" at its annual re:Invent conference in Las Vegas.  I was fortunate enough to get hold of one of the first batch of production units, and since most of you will not yet have seen one, I thought I would give you a rundown, as well as my first impressions.

What is it?
Deep Racer is a 1/18th scale AWD buggy with a cute body shape finished in black and silver-grey.  The body shape is quite interesting, but we'll come back to that later.  The chassis appears similar in design and components to many other low-cost remote control buggies.

You're no doubt interested in what's inside.  There is a single-board controller carrying an Intel Atom processor, 4GB RAM, 32GB storage, 802.11ac WiFi, a 1080p camera and battery packs.  It can also be used as a remote control vehicle through an app.

(updated November 29) The controller board looks a lot like an UP Board, but I can't confirm this without pulling the car apart completely.  The specs match the UP Board, as does the overall layout.  The one difference I have found so far is that the power connector is not the barrel socket of the UP.  Given the price point of this car, and that Intel exited the single-board computer market, it's almost certainly an OEM item.

In the launch presentation, it was mentioned that the storage is expandable (presumably using either USB or SD card), but the documentation makes no such statement.

I am confident, though prepared to be proven wrong, that it is not the same board as used in Deep Lens.  The specs alone don't match.  In the end, it doesn't really matter; it's simply an interesting technical challenge.

The controller runs Ubuntu Linux 16.04 and has the Intel OpenVINO tools installed.  The middleware solution is Robot Operating System (ROS).  At the moment, I don't have details on what, if any, customisation may have been done.

After evaluating a model, I discovered that the network is built with TensorFlow, but hard details about it are scarce at present.

AWS say that Deep Racer is a platform to allow more people to experiment with Machine Learning, especially Reinforcement Learning.

What it isn't
Here's where things actually get more interesting.  Straight out of its box, Deep Racer is not capable of much at all.  It needs to be trained first, and right now (at least) the only way you can do this is via the Deep Racer console.  This training leverages Amazon SageMaker and AWS RoboMaker (announced on Sunday night) to do the hard work for you.

Curiously, it's not a fully-fledged machine learning kit either.  You don't have access to the models themselves, nor can you run arbitrary code on the controller.  It was hinted that these limitations are likely to be removed sooner rather than later.

Getting Started
Right now, the services are in closed beta, and to be honest they are not quite ready for the big time.  That aside, let's walk through the process.

Firstly, you will need to pull down the documentation from GitHub, but be aware that by the time you can actually get one for yourself, the documentation and lab exercises will most likely have been changed and moved.

There is only one track available today for you to use, called "MGM Speedway", because that's where all the workshops took place.

You then need to create a "reward function", a fairly simple Python function in which you assign outcomes to the conditions the car finds itself in.  Having completed your function (or simply pasted in the example), you select "train", and SageMaker fires up in the background.  If you haven't worked in this space before, it might be a surprise that training time is usually measured in hours; the current documentation recommends a run of 3 hours to properly train your model.
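As a rough illustration of the idea, here is a minimal reward function that favours staying near the centre of the track.  Note that the parameter names (`track_width`, `distance_from_center`) are my own shorthand for the sort of state the console passes in, not necessarily the exact names it uses:

```python
def reward_function(params):
    # Hypothetical inputs -- the real console supplies its own parameter set.
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Reward shrinks linearly as the car drifts from the centre line,
    # and collapses to a token value once it reaches the track edge.
    half_width = track_width / 2.0
    if distance_from_center >= half_width:
        return 1e-3  # effectively off track
    return float(1.0 - distance_from_center / half_width)


print(reward_function({"track_width": 1.0, "distance_from_center": 0.0}))  # prints 1.0
```

Even something this crude is enough to get a model around the track; the interesting work is in shaping the reward to trade lap time against stability.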

The training display looks like this:

On the right is a video representation of the model output, and the chart is the historical training score.  In the current release, the video is buggy and often freezes up or times out.  I am sure this will get better when the services reach GA.

Once training is complete, or you stop it because you're impatient, you need to evaluate the model's performance against the simulated track.  The evaluation step attempts to drive the course and records the time to complete a lap.  My first attempt was 1:20 after a twenty-minute training run using the default reward function.  It was able to complete the circuit on two out of three passes.  A more advanced model got this down to 1:13, and it is still very basic.

Finally, you download the trained model into Deep Racer via a USB drive, and off you go.

Some additional trivia for you. 
There are two battery packs.  One powers the controller and is claimed to give a running time of up to four hours.  The other powers the motor, and its run time is in the order of 15-20 minutes.

The curious body shape is a result of getting the plastic body far enough away from the passive heatsink on the controller.  Early prototypes apparently could melt or distort the body.

AWS also announced the "Deep Racer League", which will conduct heats at various events around the world; a prototype league is being run at re:Invent this week.  Sadly, I was too late finishing to have my final times counted.

It's exciting to see AWS putting Deep Racer out there for the world to experiment with.  At US$399 ($299 if you pre-order) plus shipping and ongoing AWS fees for building and testing models, it's not the cheapest solution for learning, but they do make it very easy.  To be fair, you get a lot for your money, and I suspect AWS are selling this pretty much at cost, or at most a very slim margin.

I do hope that AWS relax the restrictions on sharing knowledge around Deep Racer in the future.  Being based on OpenVINO (and presumably OpenCV as well), it would be fascinating to add a Movidius NCS to the machine and see what improvement that makes.

If you want the cheapest way to get into autonomous vehicles, this is probably not it.  That said, AWS and Intel have done a lot to lower the barrier to entry into this world.

Auric - The Software Side

Auric is a combination of hardware and software.

Last time we looked at the hardware required, and in this installment we will consider the software components.

Like my previous self-driving car, Auric has two controllers: the main processor and the motion control processor.

Let's start with Motion Control because it is much simpler.   Motion Control is based on a NodeMCU (v1.0) and an associated L293D H-Bridge driver.  The software loop monitors the incoming serial port for a character stream from the main controller.

This character stream is in the form 'channel:value'.  Channel IDs are a single ASCII character (case-insensitive), and the value data is dependent on the channel.  For example, R:512 is interpreted as Right Motor with a value of 512.

It is the responsibility of the channel code to act and generate a response to the main processor.
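The protocol is simple enough to sketch.  This Python fragment is an illustration only (the actual firmware runs on the NodeMCU, and the function name is mine); it parses one command into a channel/value pair:

```python
def parse_command(line):
    """Parse a 'channel:value' command, e.g. 'R:512' -> ('R', '512')."""
    channel, _, value = line.strip().partition(":")
    if len(channel) != 1 or not value:
        raise ValueError(f"malformed command: {line!r}")
    # Channel IDs are case-insensitive, so normalise before dispatch.
    return channel.upper(), value


print(parse_command("r:512"))  # prints ('R', '512')
```

The value is left as a string here because, as noted above, its interpretation depends on the channel.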

The main processor, a FriendlyARM NanoPC-T4, is primarily tasked with navigation, as well as edge and object detection.  Edge detection takes a continual stream of still images (as opposed to a video stream).  Each image is processed to identify edge boundaries (such as a pathway or road), and this edge data is used to determine the relative location of the camera between the edges.

An important assumption at this stage is that the camera faces forward along the centre-line of the chassis.  Given the size of the chassis, any misalignment is most likely to be negligible.
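To illustrate the idea (this is my own minimal sketch, not Auric's actual pipeline), once the left and right edge columns have been found in an image, the camera's position between them reduces to simple arithmetic, relying on the centre-line assumption above:

```python
def lateral_offset(left_px, right_px, image_width):
    """Return the camera's offset from the lane centre, roughly in [-1, 1].

    0 means the camera (assumed to sit on the chassis centre line) is
    midway between the detected edges; -1/+1 means it is on an edge.
    """
    lane_centre = (left_px + right_px) / 2.0
    half_lane = (right_px - left_px) / 2.0
    camera_centre = image_width / 2.0
    return (camera_centre - lane_centre) / half_lane


print(lateral_offset(100, 540, 640))  # prints 0.0: perfectly centred
```

A steering controller then only has to drive this value back towards zero.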

New for this release is support for a compass/magnetometer module on the I2C bus.  This gives the software a heading value.  For the moment, this is reference data only, but in the future it will be used to determine an on/off-course condition.
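Turning the magnetometer's raw X/Y readings into a heading is straightforward.  A sketch, under the simplifying assumption that the module is held level (no tilt compensation or declination correction):

```python
import math


def heading_degrees(mag_x, mag_y):
    """Compass heading in degrees [0, 360), assuming the module is level."""
    return (math.degrees(math.atan2(mag_y, mag_x)) + 360.0) % 360.0


print(heading_degrees(0.0, 1.0))  # prints 90.0
```

On a real module you would also apply per-axis calibration offsets before this step, since nearby motors and wiring distort the field.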

A second process runs on the main processor to support the LIDAR unit.  At present, this is purely data gathering to help me understand the nature of the LIDAR data.  Because of the relatively high volume (720 range values per second), I am sending this data off-board via MQTT for storage, since it would rapidly exhaust the on-board storage.  I plan to write a follow-up article on interpreting the LIDAR point cloud data once I get a handle on it myself.
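The off-board capture can be sketched roughly as below.  The broker hostname and topic are placeholders of my own, and the publish step assumes the third-party paho-mqtt client; the JSON framing is just one obvious way to keep each sweep self-describing:

```python
import json
import time


def encode_scan(ranges):
    """Pack one LIDAR sweep (a list of range readings) as a JSON payload."""
    return json.dumps({"ts": time.time(), "ranges": list(ranges)})


def publish_scan(ranges, host="broker.local", topic="auric/lidar"):
    """Send one sweep off-board; host and topic here are placeholders."""
    import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt
    client = mqtt.Client()
    client.connect(host)
    client.publish(topic, encode_scan(ranges))
    client.disconnect()
```

A subscriber on the storage side simply appends each payload to a file, which keeps the car's own storage untouched.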

News Flash

Take a look at Auric's first day out on YouTube

Want to help?

If you would like to make a financial contribution to the project, please go to my Patreon Page at:

Wasting your and my time

I had a really interesting experience recently which I hope might enlighten others as much as it did me: I was approached (via LinkedIn) by ...