Self-driving GAZ66 Monster Truck 1/16

Habr, hello!

I want to tell you how I designed and built a self-driving car :)

I could explain it right away by dryly listing circuits and bash commands, but that would be boring. Instead I offer you a (hopefully) interesting story of how I personally walked this path and where it led me.

Wherever there was something worth photographing, there are photos. The software parts mostly go without them.

It really will be a story in narrative form, the way I would tell it over a cup of coffee - not just bash commands and Python scripts.

Let's start with photos and videos of the result, and then the whole story under the cut.




The story will follow this outline.


  • Why I wanted it
  • How a self-driving car works (top view)
  • Age 1 - A toy-store Gelandewagen + Raspberry Pi Zero W + camera
  • Age 2 - GAZ66 + NVIDIA Jetson Nano + Raspberry Pi camera
  • Age 3 - Remo Hobby SMAX
  • Age 4 - Connecting the SMAX and the GAZ66
  • Age 5 - Mounting the components on the monster truck
  • Age 6 - Installing Donkey Car and its surroundings
  • Age 7 - Track assembly, first runs
  • Age 8 - Driving with the joystick
  • Age 9 - Training the neuron
  • Age 10 - Everything works, finally!
  • What's next?
  • Battle challenge
  • Community
  • An SD card image of my car

Pour yourself some coffee - off we go!

Why I wanted it


It all started with my frustration that one big IT company in Russia makes very cool self-driving cars - which is incredibly cool - but I have no part in it :)

No, really, it is that cool - self-driving cars) A great fusion of mechanics and algorithms :)

The frustration lasted until I connected a few facts about myself in my head, namely:

  • I can write Python
  • I (roughly) understand how machine learning works
  • I am comfortable with the Linux console
  • I spent my childhood with a soldering iron
  • I have a whole box of DIY components (Raspberry Pi, Arduino, sensors, etc.)

When everything clicked in my head, I decided: a self-driving car (SDC) it shall be!

To begin with, I decided, it was worth figuring out how an SDC works - and that is the next section.

How a self-driving car works (top view)


For a car to drive itself, it needs four components: a chassis, sensors, a computer, and an algorithm.

Let's take them one by one.


Chassis

This is what actually drives: wheels, motors, and the battery that powers it all.

There are two rough categories of cars, as I came to call them: toy-store cars and hobby cars.

Don't even try flirting with toy-store cars - I tried, and it's a failure. Their downside is weak motors with no feedback. That means almost any household carpet is likely to stop them, and you cannot steer with any given precision.

Hobby-grade cars are what you need. They have powerful motors, good batteries, and a servo on the front wheels for steering. Consider this the entry threshold. The cheapest decent one I could find was the Remo Hobby SMAX.



Sensors

These collect information about the surrounding world and pass it to the computer for decision-making.

Basically, the gentleman's kit for an SDC is:

  • Camera. The foundation of SDC. It looks at the patch of space in front of the car and passes the image to the computer, which recognizes what is happening and decides what to do. I don't think I have seen an SDC implementation without a camera.
  • IMU sensor. A part that reports acceleration and tilt angle along the axes. It helps to understand where we are actually going and how our position has changed relative to the starting point. Used in almost all copters.
  • Lidar. A spinning laser rangefinder that measures distances to everything around and builds a map of obstacles. In the hobby world, a lidar from a robot vacuum costs about $75, while full-size SDCs carry Velodyne units around $4K. The expensive ones scan in 3D, the cheap ones only in 2D.
  • GPS. Gives absolute position outdoors; full-size SDCs use it for navigation, but indoors it is of little use.
  • Depth camera. A camera that, besides the picture, also measures distance to objects, giving a 3D view of the scene.



Computer

This receives values from the sensors, analyzes the situation, and sends control commands to the chassis.

In the world of embedded electronics, energy-efficient ARM processors (like in your phone) and single-board computers built on them rule the roost.

Today there are two most popular single-board options: the Raspberry Pi and the NVIDIA Jetson.
The Raspberry Pi has a lower price, a wide variety of projects, and a large community.

The NVIDIA board has a higher price and fewer projects, but more performance in machine learning tasks. It has 128 CUDA cores on board (like in your big NVIDIA graphics card), which are used to accelerate machine learning algorithms.

My collection holds three Raspberry Pis (Zero W, 3, 4) and an NVIDIA Jetson Nano. I decided to build the car, of course, on the Jetson.



Algorithm

This is what decides on actions based on the sensor readings. Usually a combination of computer vision and neural networks is used. In the most basic version, you drive the car yourself along some track markings, recording video of these runs together with the throttle/brake/steering values, and then train a neural network on this data so that it finds the dependence of the motor signals on the camera images. Put simply, the task is to recognize the markings and try to stay inside them.
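To make the "find the dependence of motor signals on pictures" idea concrete, here is a toy sketch. The real pipeline trains a convolutional network over full camera frames; this miniature instead fits steering to a single made-up "lane offset" feature by plain gradient descent, and illustrates only the supervised-learning idea:

```python
# Toy illustration: fit steering = w * feature + b by gradient descent.
# The real model is a convolutional network over camera frames; the
# single "lane offset" feature here is an invented stand-in.
def train(samples, epochs=500, lr=0.05):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y   # prediction error on one sample
            w -= lr * err * x       # gradient step for the weight
            b -= lr * err           # gradient step for the bias
    return w, b

# synthetic "recorded runs": steering is a linear function of the feature
samples = [(x / 10.0, 2.0 * (x / 10.0) - 1.0) for x in range(-10, 11)]
w, b = train(samples)
```

After training, `w` and `b` land close to the true values (2 and -1) used to generate the samples - the same way the real network recovers the mapping from pictures to controls.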

If you want to refresh how a neural network works, I suggest watching this video:


Here I described the simplest option, with only a camera and driving along markings. But there are options with more sensors and different logic - there will be a separate post about that.

At a very high level, that's it.

It only remains to:

  • assemble the chassis
  • attach the sensors
  • connect the computer
  • lay out the track markings
  • drive along them
  • train a neural network
  • let the car drive itself

Now that we've figured out what a self-driving car consists of, let's move on to the specific eras of my implementation.

Age 1 - A toy-store Gelandewagen + Raspberry Pi Zero W + camera


Yes, my very first approach was exactly that. It happened because there was a toy store near my house; I went in, liked a toy Gelandewagen, and bought it.

Okay, I thought, the Gelandewagen is here, now I need a computer and a sensor. Thought - done. I ordered an RPi Zero W and a camera for it. While waiting for them to arrive, I went out and bought a power bank for the job.
So, everything was in place, time to assemble. I found this project and decided to follow it.
I disassembled the Gelandewagen, pulled out its native brains, wired the motors to a motor controller, connected that in turn to the RPi, attached the camera, powered the whole thing from the power bank, and was satisfied.

Before moving on to autonomy, I decided to drive it from the console for fun, chasing the cat while streaming the camera image to my laptop.

And here a couple of failures awaited me.

First, the Raspberry Pi Zero W is very weak in terms of performance.

Second, the cross-country ability of a toy-store Gelandewagen is almost nonexistent - nearly any minimal obstacle stopped it.

It was already clear the project was stillborn, but out of interest I tried to build computer vision (OpenCV) for the Raspberry Pi Zero directly on the board. It took, no joke, more than a day, and became the last nail in the coffin of this SDC implementation.
It became clear that I needed to change both the computer, for more performance, and the chassis, for more cross-country ability.

It turned out pretty funny:



Age 2 - GAZ66 + NVIDIA Jetson Nano + Raspberry Pi camera


So, at this point it was clear that I needed a more capable car, preferably a truck, so I could put all the components in the bed. After studying one product-search service, I realized that a model of our native GAZ66 suited me - "shishiga", as people call it. Okay, I ordered it; while waiting, time to think about the computer. By then NVIDIA was just about to start selling its Jetson Nano, and I placed an order on the first day of sales.

The truck arrived while I was still waiting for the Jetson; I impatiently drove the shishiga around the house and gave rides to the kittens that the above-mentioned cat had given birth to. I can't say the kittens liked it - had to stop.

Meanwhile, the Jetson was still in transit, and I ordered a robot-vacuum lidar from China - I didn't know exactly how I would use it yet, but I knew that I wanted to.

One day a courier appeared at the office entrance and handed me a rather large box with the NVIDIA single-board computer. I signed the invoice and felt like a true enthusiast developer - a device bought on the first day of sales had come to me.

Time to assemble! But first, to disassemble, lol. I took the shishiga apart, threw out its native brains, lubricated the mechanisms, and started assembling around the computer.

I connected the camera, the motor controller, the steering motor, and the throttle/brake motor, ran the Python test scripts - and bummer again!

This time the story is this: the shishiga uses an ordinary motor for steering, not a servo. So it has no feedback. So I cannot control it precisely, which means it is not suitable for an SDC.

Okay, once again something had to be solved, something done. On to the next era.





Age 3 - Remo Hobby SMAX


Since by this point it was clear that the car needed to be not just capable off-road but also minimally decent in its components, the choice fell on shops for people whose hobby is RC.
Without further ado, I went to one such store and openly told them what I was building and what kind of car I needed. The seller kindly told me which car met my minimum requirements, and it was the Remo Hobby SMAX. I bought it.

I came home, took the shishiga, stripped everything off it, and sat down to wire it to the SMAX. And what do you think? Right - failure again!

Generally, RC cars are designed so that the motor connects to a motor controller, which in turn connects to the radio module that talks to the remote control. But the SMAX was designed with the motor controller and the radio module combined into one unit - I literally had no way to plug in to the motor controller in place of the radio module.

Okay, something had to be done again. I went back to the RC-car site and dug into the spare parts. After some poking around - hooray - I found a motor controller with a separate wire to the radio module.

I ordered it, received it, reassembled everything - it works, but only the steering. No throttle and no reverse! Why, damn it, I think, but I keep digging.

This time the catch was that for the SMAX motor to wake up, the transmitter must first send it a certain value (360) through the radio module. I didn't know that, and fed throttle/brake values directly. The motor did not react, on the reasonable logic that nobody had asked it to wake up.

At some point I sat down to brute-force literally every value in a row, hoping it would react to at least something.

First I stepped by 100 - missed it. Then by 50 - missed again. And when I got down to a step of 10, at 360 I heard the welcome squeal - hooray! It works!

I tested throttle / reverse / left / right from the console - everything works. This is fire; what a programmer I am =)
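By the way, that value of 360 maps nicely into code. Here is a sketch of how I think of the throttle-to-ticks mapping around it - note that FULL_FWD and FULL_REV are made-up placeholders; the real endpoints come out of calibration on your own motor controller:

```python
# Map a throttle in [-1.0, 1.0] to PCA9685 PWM ticks around the
# neutral/arming value. NEUTRAL=360 is the value the motor answered to
# in my sweep; FULL_FWD and FULL_REV are placeholders - calibrate yours.
NEUTRAL, FULL_FWD, FULL_REV = 360, 460, 260

def throttle_to_ticks(throttle):
    throttle = max(-1.0, min(1.0, throttle))  # clamp to valid range
    if throttle >= 0:
        return round(NEUTRAL + throttle * (FULL_FWD - NEUTRAL))
    return round(NEUTRAL + throttle * (NEUTRAL - FULL_REV))
```

With this mapping, a throttle of 0 sends the neutral value 360, so the motor stays armed but stopped, and ±1.0 hits the calibrated endpoints.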

It seemed time to assemble, but there was a problem - there was absolutely nowhere to put the components. RC cars are designed so that their body shell is a very nominal thing. Firstly, it is made of very thin plastic, and secondly, it is a jeep - there is simply nowhere to put everything.
At this point I decided to look up how others actually do it.

I found the Donkey Car project, which has everything you need to assemble your own SDC on a turnkey basis - both hardware examples and a software framework. It would seem: cool, take it and use it. But there are nuances:

  • they print the top of the car on a 3D printer, and the result resembles a car only very distantly. Ugly, in short; not aesthetic
  • their 3D models fit donor cars that are not sold here

OK, let's remember Donkey Car - we'll take their software framework later, but for now the hardware needs thinking about.

One day, looking around my apartment, I glanced at the disassembled shishiga, then at the SMAX without its top, and thought - hmm, they seem to be the same scale (1/16). I took the shishiga, took the SMAX, simply put one on top of the other - and indeed, it fits! And it looks cool! OK, it must be done! On to the next era.







Age 4 - Connecting the SMAX and the GAZ66


So, at the start of this era I had an internal goal: to attach the top of one car to the bottom of the other. Since my colleagues and I had chipped in for a 3D printer, of which I am a co-owner (a serious investor), the plan was to draw an adapter in a CAD program, print it, and thus join them.

I walked around with this idea for about two months, thinking I was just about to sit down and figure out CAD systems. Lol, no. Having admitted to myself that I did not want to figure out CAD systems, I started thinking about other options.

I went back to the toy store to look at construction sets - maybe they could help somehow. I bought a classic metal erector set, the same kind I had back in school (ah, old school).

I dragged it home, put the two halves of the car side by side, and started trying all kinds of parts against them. Sooner or later, some understanding began to emerge of how, at least in theory, this could be done.

So I started. I spent more than one day with a tiny wrench, nuts, and spatial thinking.

While joining them, I learned to drill plastic with a screwdriver, to carefully break off excess parts without damaging the body, and to lock nuts with other nuts (though without washers it's still so-so). In short, my shop teacher would be proud of me.

After about three reworks and three days, I saw before me this monster truck - the GAZ66 SMAX Edition by Beslan.

So, the hardware base seemed ready; on to the next era.





Age 5 - Mounting the components on the monster truck


At last:

  • I have a chassis with good components
  • I have an aesthetic and roomy top
  • throttle / brake / steering work properly on this chassis
  • the top and bottom are even joined together =)

It's time to mount components on this beauty.

Armed with a screwdriver as a drill for plastic, and double-sided tape as the universal mount for everything, I got down to business.

I made a mount for the camera on the cab that allows adjusting the camera angle, and ran a long ribbon cable from the camera to the Jetson, which in turn settled in the truck bed.

Besides the Jetson, the following tenants settled in the bed:

  • a power bank to power the computer (I sacrificed my main one, a good one with USB Power Delivery, so the Jetson doesn't fail on power)
  • a PCA9685 (PWM controller) for motor control
  • a battery to power the car's motor

Since by that point the project already counted as long-term, I decided not to mess with the lidar yet and to build an MVP on just the camera and the Donkey Car software.

For fun, I wired up the GAZ66's native headlights, to look nicer and drive more confidently in the dark.

Okay: the car turns on, the motors respond to commands from Python, the camera gives a picture, the headlights shine - everything is okay, time to install the software.





Age 6 - Installing Donkey Car and its surroundings


Fortunately, by the later stages I had found the Donkey Car project, and it made my life much easier, saving me from writing everything myself. Simply put, Donkey Car is a framework that already has everything you need for an SDC. They even have guides on installing the software. But, as is usual with open source, the guides are outdated and contradict each other in places.

Okay, time to figure it out. For the framework to work properly, the following libraries are needed:

  • Opencv
  • tensorflow-gpu (gpu is for the Jetson, since it has CUDA cores; for the RPi there is tensorflow-lite)
  • tensorrt (a library for accelerating neural network inference)
  • and everything that gets installed automatically from the requirements list

Let's start with OpenCV.

The Donkey Car guide says you need to build it from source, because there is no OpenCV in pip for ARM. I even did this and compiled OpenCV, but before installing I decided to check whether the system had an old version that needed removing. I started Python, imported cv2, asked for the version - and bam, it's current. A quick search revealed that in recent versions of Linux4Tegra (which runs on the Jetson), the folks at NVIDIA had started shipping OpenCV preinstalled. Cool, one less thing to do. Still, well done me for managing to compile it :)

Next, tensorflow-gpu.

The Donkey Car guide specifies, firstly, an outdated version branch (1.x), and secondly, not even the latest of the outdated versions. I decided not to listen and installed the latest current version (2.0).

The next step is tensorrt.

The guide for installing tensorrt on the Jetson lives on a separate wiki page, and it's clear its author never read the main guide =) The tensorrt guide reassigns environment variables, after which OpenCV stops working. I twisted it this way and that, rolled everything back, and decided to forget about virtual environments and environment variables - I installed everything straight into the main environment.

Satisfied with myself, I opened Python, imported cv2, tensorflow, and tensorrt in turn, then asked for their versions - all imported, all showed the latest versions. Cool!
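A small sketch of how that check can be scripted instead of importing by hand (the helper itself is my own, not part of any framework; the module names are just the ones I was checking):

```python
import importlib

def check_modules(names):
    """Return {module: version-or-'unknown'}; None means not importable."""
    found = {}
    for name in names:
        try:
            mod = importlib.import_module(name)
            found[name] = getattr(mod, "__version__", "unknown")
        except ImportError:
            found[name] = None
    return found

# e.g. check_modules(["cv2", "tensorflow", "tensorrt"])
```

Running it after each environment change immediately shows which library a botched install has broken, instead of finding out later inside Donkey Car.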

The installation process for Donkey Car itself is quite simple; I won't describe it, just read their guide. The only thing I'll note now: in the Donkey Car config you can raise the image resolution from 86x86 (meant for the RPi) to 224x224 for the Jetson (there is performance to spare, and accuracy will be higher).
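For reference, the change looks roughly like this in the config (key names as I remember them from the Donkey Car config of that time - check your own myconfig.py):

```python
# myconfig.py - raise camera resolution for the Jetson
# (the smaller defaults target the weaker Raspberry Pi).
# Key names approximate; verify against your Donkey Car version.
IMAGE_W = 224
IMAGE_H = 224
IMAGE_DEPTH = 3  # RGB channels
```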

So, everything is ready - time to launch and test!

The car really turns on and starts a web server on the IP that the router assigned it. You can really open it in a browser and drive with a virtual joystick while watching the camera feed.

I also had to calibrate the values fed to the PWM controller (PCA9685) to find full forward, full reverse, and maximum steering to each side.

Here, by the way, I discovered my motor was wired incorrectly - the car drove backward much more energetically than forward - so I experimentally found the right wires and swapped them. All the motor wires there are the same color, so there is no remembering how it was. But I connected it correctly, and put heat shrink on each wire so I could tell them apart later.

Cool, it's time to move on to preparing the track!

Age 7 - Track assembly, first runs


The Donkey Car algorithm is built around a supervised neural network. This means the camera feed is recorded, and next to each picture appears a JSON file containing the picture name, the throttle, the steering angle, and a timestamp. And to train the neuron, you need at least 5K of such picture + JSON pairs.
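For illustration, one such record looks roughly like this (field names are approximate, from memory - check the files in your own data folder):

```python
import json

# Roughly what the JSON half of a picture + JSON pair holds
# (field names approximate - check your own Donkey Car data folder):
record = {
    "cam/image_array": "1234_cam-image_array_.jpg",  # the paired picture
    "user/throttle": 0.42,    # throttle at the moment of the frame
    "user/angle": -0.11,      # steering at the moment of the frame
    "timestamp": "2019-10-05 21:14:03",
}
print(json.dumps(record, indent=2))
```

The network's job is then exactly to predict `user/throttle` and `user/angle` from the paired picture.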

The plan was to assemble the track at home - the apartment is big, plenty of room to turn around. But once I started laying it out, it became clear that driving through the whole apartment wouldn't work: the floors are different colors, the contrast varies, and the model might not cope.

Okay, I decided to fit it into one room. I bought 4 rolls of masking tape and taped out a track on the floor.

I put the car down, started it, drove off - and failed again. It turns out one room is too small, and my car simply didn't fit into the turns. More precisely, it fit, but at a speed that would later be embarrassing =)

Okay, a second iteration was needed, and a big room with it. The choice fell on the office - lots of space, uniform floors, open 24x7. The only problem is that the cleaners work at night and the track would have to be removed. That means everything must be done in one go: drive by hand as the teacher, train the model, upload it back into the car, and then drive without manual control.

Okay, day X: after an event about A/B experiments, it was decided to stay in the office and build the track.
The place was chosen, the tape was ready, the construction crew was in the game. Just an hour, and there was a great track in the office corridor.

I put the car down, turn it on, try to drive - hooray, it makes the turns, and I only had to limit the speed to 80%.



Age 8 - Driving with the joystick


So, I have a track, I have a car, and I need 5K picture + JSON pairs.

Empirically, I found that one lap of my track gives about 250 photo + JSON pairs, which means I need to drive at least 20 laps.

Preferably in a row. You can, of course, do it with breaks, but then the released throttle gets into the training data, and the model may start braking for no reason - and I don't want that.
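The lap math is trivial, but worth writing down:

```python
import math

# At least 5000 picture+json pairs are needed for training,
# and one lap of my track yields roughly 250 of them.
def laps_needed(target_pairs=5000, pairs_per_lap=250):
    return math.ceil(target_pairs / pairs_per_lap)

print(laps_needed())  # 20
```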

I started trying to drive 20 laps without a break, and that, I must say, is not an easy task.

The first difficulty: there was a hefty column in the middle of the route, and when the car drove behind it, the connection to the laptop I was driving from started to lag, and that small lag knocked me off the track.

Okay, so the control must come from a device I can carry with me as I walk behind the car. And that means driving from the phone's browser.

But there is also the joystick, which I hold with both hands - where do I put the phone? Mounting it on the car is not an option: it would slow the car down, and then later, without the phone, the car would go faster and might miss the corners from the extra acceleration.

Hmm, then the phone and the joystick need to be combined somehow. Okay, I have an e-reader, it's big enough to hold both - it will do. I took tape, stuck the phone to the reader, and the joystick just below it. I looked at this marvel and thought - what on earth are you? :-)

But it worked :) With this contraption I managed to drive 20 laps. Actually, even 25, because I got a taste for it somewhere around lap 15.

So, done - I have a dataset for training the neuron. Time to train!

Age 9 - Training the neuron


At this point I have a car, a track, a dataset - yes, I am one step from the result!
At home, a PC with an NVIDIA RTX 2070 was idling, on which I planned to train. Fortunately, thanks to my smart home I have an external IP, and all that was needed was to forward port 22 from the Internet to the PC. Luckily there were helpers who did this for me while I was at the office.

So, I ssh into the Ubuntu computer, mount the home folder over sshfs, and upload the files. It would seem only 40 megabytes, but it took about 30 minutes - because, as I understand it, there were a lot of small files.
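In hindsight, packing the thousands of small files into one archive before the transfer would have avoided the per-file overhead. A sketch (the helper and paths are my own, nothing framework-specific):

```python
import tarfile

# Pack a dataset of many small files into one archive before transfer;
# per-file overhead over sshfs/scp is what turns 40 MB into half an hour.
def pack(src_dir, out_path):
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(src_dir, arcname="dataset")
    return out_path
```

One `pack("tub_data", "tub_data.tar.gz")` and a single-file copy, then unpack on the training machine.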

The files are on the computer, tensorflow-gpu is installed, the Donkey Car software is installed - time to train.
I call the Donkey Car training script and point it at the dataset folders - off it goes.
While the neuron is training, nvtop (a GPU load monitor) shows 1406% utilization, and regular htop shows 100% CPU load on all 16 cores - it's cooking)

Some 20 minutes later, I have a trained model for driving the car. It would seem: take it and use it. But no :)

Remember, I wrote above about tensorrt, which optimizes neural network inference and runs it on the CUDA cores? Of course I want to run the model through it.

And that means I need to:

  • freeze the model (pack everything the model needs into one file)
  • convert the frozen result to the tensorrt format

I try to freeze the model by calling the Donkey Car script - failure. Meanwhile, night is approaching; soon the cleaners will dismantle my track. I must hurry.

A hypothesis was born that this was because I had installed a different tensorflow than Donkey Car expected. Okay, I remove tensorflow 2.0, install 1.15, try again - success, hooray!

Now the conversion - and another breakdown: command not found. Okay, I go searching for what's wrong. It turned out NVIDIA had marked this feature as deprecated and dropped support; now, they say, you must convert by hand. Fortunately, I found a git repo with a similar issue, where a user had located the actual Python script that does the model conversion.

I call the script from that location, and it indeed responds. But, it says, no Python 3 for you - bring Python 2.

Okay, I call Python 2. It tells me: I have no tensorflow. Fine, I ask it to install tensorflow-gpu 1.15, and it tells me there is no such version, only 1.14. Okay, I agree - let's risk it and keep the different versions in different Python environments. I installed tensorflow into Python 2, called the conversion - hooray, it worked!

Okay, I have models both for tensorrt and for plain tensorflow-gpu; I drop them onto the car.

I start the car with the tensorrt model - a huge error traceback. Time is pressing - OK, I'll try the plain model.

I start the plain one - an error again, but this time quite clear: your picture size is 224x224 while 86x86 is expected. Remember, somewhere far above, I wrote about editing the config to change the camera image resolution?

Well, I had fixed it on the car, but not on the host computer.

I go back to the host, fix the configs there, retrain, re-freeze, re-convert, and upload again.



I start the car with the tensorrt model, and ...

Age 10 - Everything works, finally!


Hooray! My car drove! By itself, without me. Very cool. I'm incredibly happy)


I spent almost a year on all this, and here we are)

What's next?


There are a number of plans for further development; from simple to complex:

  • Add the IMU sensor to the model, possibly increasing accuracy. For example, driving uphill requires more motor effort.
  • Move from driving on a marked track to simply driving around obstacles
  • Add the lidar and take readings from it

Battle challenge


If you, alone or with a group of friends, feel like racing, write to me and let's arrange a competition =)

Community


I have also put together a chat room for those interested, and I am preparing a channel. I am not sure Habr's rules allow posting it here, so I will send it by PM on request.

An SD card image of my car


Also on request, I will send you an img image of my car's SD card, if you want to build something on a similar base and don't want to bother with the setup.
