1. Purpose

 

Cherokey Robot

 

2. Components

 

3. Results

I'd like to introduce a robot called Heisenberg! He doesn't do much, other than driving around. He is, however, aware of his own health and the environment around him. He talks and understands speech. He knows where he is and where he wants to go, well, most of the time. He makes mistakes and bumps into things, but that's what makes him human... I mean, robot!

Here's one more video, where Heisenberg complains about low battery and tries to go back to base to recharge. Of course, he cannot recharge without human help yet, but I'm working on it...

 

4. Progress Report

4.1. Intro

My first attempt to make a self-navigating robot using stereo vision was not exactly successful. I still think the concept is definitely doable; unfortunately, my problem back then was the unsynchronized stereo camera (Minoru) that I was using. So, this time, I decided to overcome that limitation by building my own stereo camera module.

I discovered a suitable imaging sensor, the MT9V034: a large, very sensitive imaging array with a global shutter and hardware synchronization capability!

This project builds upon the Mobile Robotic Platform, which I created specifically for experimentation with autonomous navigation. The platform was modified somewhat to fix various hardware issues. It has also been equipped with a custom-made stereo camera as its primary sensor.

 

4.2. Hardware

 

4.2.1. Camera

Doing away with the Minoru webcam was not easy because it was just so damn cute! However, who can resist building a stereo camera module from scratch! :)

First, some specs:

  • Stereo baseline: 8 cm
  • Resolution: 720 x 480 (vertical can be reduced to save bandwidth)
  • Image format: raw, uncompressed, 10 bits per pixel (monochrome)
  • Interface: single USB 2.0 high speed port, CDC device profile
  • Power: through USB, 150 mA
  • Horizontal field of view: 105°
  • Speed: 30 FPS max (configurable)
  • Exposure: manual or automatic
  • Global shutter, exposure synchronized by hardware
  • Configurable number of lines to send to host

Design on GitHub here:

I discovered that the MT9V034 monochrome sensor is actually quite popular and easy to use. A camera module based on it provides a two-fold advantage:

  1. Each of the two cameras uses a global shutter, which means that all rows within a frame are exposed simultaneously. In a rolling shutter sensor, on the other hand, exposure and readout of each row happen in turn, as explained here. The resultant geometric distortions can break stereo disparity calculations when the robot is in motion or turning sharply; for example, at a 33 ms frame time, a robot turning at 90°/s sweeps about 3° between the first and last rows. This becomes especially critical when driving in sub-optimal indoor lighting conditions, when exposure time is longer.

    Rolling vs Global shutter

  2. The start of exposure is exactly synchronized by hardware between the two sensors. This is obviously just as important, for pretty much the same reasons as above.

Version D of the datasheet on the ON Semiconductor website actually does not provide register programming settings. Fortunately, an older version from Aptina, which does contain register descriptions, can easily be found online. Using this info, as well as several programming references, such as this and this, I was able to successfully configure and use this module.
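For reference, the sensor is configured through its two-wire, I2C-compatible serial interface, using 8-bit register addresses and 16-bit data words. Below is a minimal Arduino-style write helper; the 7-bit device address is just one of the strapping options, and the actual register values to write should be taken from the Aptina datasheet:

    #include <Wire.h>

    // One of the four possible 7-bit addresses, selected by the S_CTRL pins;
    // check the board's strapping before relying on this value.
    const uint8_t SENSOR_ADDR = 0x48;

    // Write a 16-bit value to an 8-bit register address, MSB first
    void sensorWriteReg(uint8_t reg, uint16_t val) {
      Wire.beginTransmission(SENSOR_ADDR);
      Wire.write(reg);
      Wire.write((uint8_t)(val >> 8));
      Wire.write((uint8_t)(val & 0xFF));
      Wire.endTransmission();
    }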

4.2.1.1. TeensyCam schematics

The four thick red lines on the schematic are due to post-design bug fixing. These connections are completed by soldering wires.

4.2.1.2. PCB design

Following some link on the Fritzing.org website, I discovered AISLER.net, a German prototype PCB maker. Apparently, they are able to manufacture excellent quality 2-layer PCBs for about 35 EUR, including a stencil (since then they have also added 4-layer capability). That was the reason I constrained myself to 2 layers for the TeensyCam board design. It wasn't easy to maintain signal integrity. My strategy was to route as many connections as possible on the Top layer and keep a more-or-less solid GND plane on the Bottom layer directly below, in an effort to control high frequency return currents. Having separate voltage regulators for each sensor also helps to minimize cross-coupled noise.

It was extremely difficult to solder the sensors to the board. I had a hard time trying to equalize the solder paste deposited on the pads through the stencil. Luckily, silk screen registration on the board was quite good, and I used that for alignment.

 

4.2.1.3. Microcontroller

I decided to use a Teensy 3.6 mainly because it is quite powerful and very easy to program using the Teensyduino extension to the Arduino IDE. Version 3.6 additionally has a high-speed USB port. Besides, by using a ready-made module such as the Teensy, I didn't have to bother with the details of implementing a microcontroller on the board.

There are two ways to read image data off of the MT9V034 sensor: parallel or serial (LVDS). The latter is not really an option unless the sensor is interfaced to an FPGA. Reading out 10-bit parallel data from both sensors requires a very fast 20-line pin read at the rate of SYSCLK, which is 13 MHz minimum; even a core overclocked to 240 MHz has only about 18 CPU cycles per pixel at that rate. This is challenging, even for the 180 MHz Teensy 3.6. The only chance to manage this would be 1) to use the sensors in Slave Mode and 2) to overclock the Teensy to 240 MHz, not to mention carefully monitoring the generated code at the assembly level.

Slave Mode specifically means that the exposure is controlled by providing pulses on the EXPOSURE and STFRM_OUT pins simultaneously to both sensors. After that, the captured images are stored in internal buffers, ready for readout at some later time by pulsing the STLN_OUT pins. It took many iterations of trial-and-error experimentation before I was able to achieve correct timing.
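To make the sequence concrete, here is a stripped-down sketch of the idea. The pin numbers are hypothetical and the pulse timing is a placeholder; as noted above, getting the real timing right took many iterations:

    // Each control pin is wired in parallel to both sensors,
    // so one pulse starts exposure on both at the same instant.
    const int PIN_EXPOSURE = 2;   // hypothetical pin assignment
    const int PIN_STFRM    = 3;   // hypothetical pin assignment
    const int PIN_STLN     = 4;   // hypothetical pin assignment

    void captureFrame(uint32_t exposureUs) {
      // Pulse EXPOSURE and STFRM_OUT together to start the exposure
      digitalWriteFast(PIN_EXPOSURE, HIGH);
      digitalWriteFast(PIN_STFRM, HIGH);
      delayMicroseconds(exposureUs);
      digitalWriteFast(PIN_EXPOSURE, LOW);
      digitalWriteFast(PIN_STFRM, LOW);
      // Both images are now latched in the sensors' internal buffers
    }

    void readFrame(uint16_t nLines) {
      for (uint16_t line = 0; line < nLines; line++) {
        // Pulse STLN_OUT to clock out the next line from each sensor
        digitalWriteFast(PIN_STLN, HIGH);
        digitalWriteFast(PIN_STLN, LOW);
        // ... sample the 20 parallel data lines here (see the sketch below)
      }
    }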

Another trick that helped me achieve the readout timing was to group the data lines from each sensor into the same ports on the MK66FX1M0 chip, such that the 10 bits from the left sensor are connected to PTC and the 10 bits from the right sensor are connected to PTD. This way, each sensor's data lines can be sampled simultaneously within a single MOV or LDR instruction.
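In Teensyduino terms, that single-instruction sampling boils down to reading each port's input data register. A minimal sketch, assuming the 10 data bits of each sensor land on the low bits of PTC and PTD (the actual bit mapping follows the schematic), with the synchronization to the pixel clock omitted:

    const uint16_t LINE_PIXELS = 720;       // line width from the specs above
    static uint16_t lineLeft[LINE_PIXELS];
    static uint16_t lineRight[LINE_PIXELS];

    void sampleLine() {
      for (uint16_t i = 0; i < LINE_PIXELS; i++) {
        // (wait for the pixel clock edge here - omitted for brevity)
        uint32_t c = GPIOC_PDIR;                // all left-sensor bits, one read
        uint32_t d = GPIOD_PDIR;                // all right-sensor bits, one read
        lineLeft[i]  = (uint16_t)(c & 0x3FF);   // keep the low 10 bits
        lineRight[i] = (uint16_t)(d & 0x3FF);
      }
    }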

During debugging, I also discovered that it was helpful to capture the LINE_VALID and FRAME_VALID signals together with the data bus from each sensor. That is the reason I later added four wires, marked on the schematic with thick red lines.

4.2.1.4. USB interface

I initially hoped to accomplish all of the above within the Arduino environment; unfortunately, at the time of this writing, device-mode high speed USB functionality is not available for the Teensy 3.6. Using the full speed mode (12 Mbps), I was only able to achieve about 1 - 2 FPS max. I mucked about for some time and gave up.

⇒ branch teensyduino

Next, I experimented with uTasker, as I had read good reviews about this project, specifically for its support of the KINETIS line of NXP processors, including the Cortex-M4 of the Teensy 3.6. Unfortunately, I found many things quite difficult to understand and configure. After a lot of experimentation, I was not able to achieve reliable USB-CDC operation in high speed mode.

⇒ branch uTasker

While playing with uTasker, I got to know the MCUXpresso environment from NXP. This led me to the idea of trying the native NXP SDK and, voilà, things started to work!

⇒ branch master

The reason I decided to stick with the USB CDC class, instead of a custom data class or perhaps the Video class, is that CDC is very easy to use and it is fast. I have also retained the original VID/PID configuration provided by the NXP SDK demo code.
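A nice side effect of CDC is that on the host the camera simply shows up as a serial port, so grabbing data needs nothing more than plain POSIX reads. A minimal sketch, where the device node and the frame size (720x480 pixels, two sensors, two bytes per pixel) are assumptions and the actual framing protocol lives in the firmware:

    #include <cstdint>
    #include <cstdio>
    #include <fcntl.h>
    #include <unistd.h>
    #include <vector>

    int main() {
      // The device node is an assumption; it may enumerate differently
      int fd = open("/dev/ttyACM0", O_RDONLY | O_NOCTTY);
      if (fd < 0) { perror("open"); return 1; }

      const size_t FRAME_BYTES = 720u * 480u * 2u * 2u;
      std::vector<uint8_t> frame(FRAME_BYTES);

      size_t got = 0;
      while (got < FRAME_BYTES) {   // a single read may return less
        ssize_t n = read(fd, frame.data() + got, FRAME_BYTES - got);
        if (n <= 0) { perror("read"); break; }
        got += (size_t)n;
      }
      printf("received %zu bytes\n", got);
      close(fd);
      return 0;
    }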

 

The micro-USB connector on the Teensy 3.6 module is only capable of full speed (12 Mbps) operation. In order to establish a high speed (480 Mbps) USB connection, one needs to solder a separate cable to the 5-pin "USB Host Port" on the Teensy. However, in order to enable high speed device mode operation, additional hacks are needed:

  1. Connect external power to the VUSB input, instead of the 5V pin on the host connector
  2. Connect 3.3V rail to pins USB1_VBUS and VREG_IN0 

Here's a diagram with highlighted connections:

 

4.2.2. Robotic Platform

There have been quite a few changes introduced to the platform. The main one is the replacement of the Raspberry Pi with a LattePanda single-board computer. This board (1st generation) is based on the Intel Cherry Trail Z8350 quad-core processor, which is significantly more powerful than the Raspberry Pi 3 that I used previously. In addition, the Intel architecture proved to be an advantage, since the ELAS dense stereo library that I use already contains optimizations based on Intel SSE instructions.

LattePanda 1st gen.

To cope with the additional power consumption, mainly due to the new computer board, I had to add one more AA battery. In addition, a P-channel MOSFET replaced one of the diodes in the battery connection circuit. This ensures very low drop-out and maximum efficiency when running on battery power. As soon as the external charger is plugged in, the MOSFET shuts off and disconnects the battery from the load. Power is then routed to the system through the diode.

I noticed that it is absolutely essential to provide a rock steady supply voltage to the LattePanda in order to prevent sporadic restarts. To this end, I added one more 5V switch-mode regulator (MP1584EN) to power the auxiliary circuitry. This way, the main 5V switcher, the S18V20F5, is dedicated to the LattePanda and is able to deliver up to 3A of peak current to it.

Communication with the slave microcontroller (Teensy 3.2) now runs over USB using the CDC profile. This was easy to implement using the existing MessageSerial module. A proportional-on-measurement control mode was added to the motor PID controllers, together with automatic motor shutdown if no commands are received for some time (a sketch of both features follows the firmware link below).

Slave microcontroller firmware is here: https://github.com/icboredman/cherokey_slave.git
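For illustration, here is a stripped-down sketch of those two features, assuming the popular Arduino PID_v1 library, whose P_ON_M option selects proportional-on-measurement. The pins, gains, timeout, and helper functions are placeholders; the real implementation is in the repository above:

    #include <PID_v1.h>

    const int MOTOR_PWM_PIN = 9;             // placeholder motor drive pin
    const uint32_t CMD_TIMEOUT_MS = 1000;    // placeholder watchdog timeout

    double input, output, setpoint;          // measured speed, PWM, target speed
    // P_ON_M applies the proportional term to the measurement instead of
    // the error, which avoids output spikes on setpoint changes
    PID motorPid(&input, &output, &setpoint, 2.0, 5.0, 0.1, P_ON_M, DIRECT);

    uint32_t lastCmdMs = 0;

    // Hypothetical stand-ins for the MessageSerial command path and encoders
    bool newCommandReceived() { return Serial.available() > 0; }
    double latestCommandSpeed() { return (double)Serial.read(); }
    double readWheelSpeed() { return 0.0; }  // replace with encoder readout

    void setup() {
      Serial.begin(115200);
      motorPid.SetMode(AUTOMATIC);
    }

    void loop() {
      if (newCommandReceived()) {
        setpoint = latestCommandSpeed();
        lastCmdMs = millis();
      }
      // Automatic motor shutdown: coast to a stop if commands stop coming
      if (millis() - lastCmdMs > CMD_TIMEOUT_MS)
        setpoint = 0;

      input = readWheelSpeed();
      motorPid.Compute();
      analogWrite(MOTOR_PWM_PIN, (int)output);
    }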

 

Complete Platform Schematics:

Robotic Platform Schematics

Another major addition to this robot is the ability to speak and understand speech! Well, sort of :) That's why there is a microphone, and an amplifier with a speaker, on the schematic diagram. The software components that enable this functionality are explained in the next sections.

I used a small condenser USB microphone and a 2W, 8 ohm speaker, 28 mm in diameter. The speaker is driven by a nice little class-D amplifier module, the Adafruit PAM8302A, with volume control.

USB microphone

Class-D Amplifier PAM8302A

Speaker 2W

Finally, I noticed that during peaks of activity the processor can get very hot, sometimes to the point of throttling. Obviously, the little heat sinks glued to the metal lid are not adequate to properly cool the device. So, I added a small fan. And what better way to control the fan than with the Arduino MCU embedded within the LattePanda board itself? The idea is to spin the fan only when needed, and only as fast as needed, to keep overall power efficiency.

The sketch for the embedded Leonardo is pretty simple. It receives a one-byte temperature reading over the serial connection and uses it to control the cooling fan through PWM on pin 9. The LED blinking rate serves as a visual indicator of the current temperature.

Arduino Leonardo sketch: fan_controller.ino
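The logic in that sketch boils down to something like the following; the temperature-to-PWM mapping and the blink timing here are illustrative placeholders:

    const int FAN_PWM_PIN = 9;      // fan drive, as described above
    const int LED_PIN = 13;         // built-in LED on the Leonardo

    uint8_t temperature = 0;        // last received temperature, in deg C

    void setup() {
      Serial.begin(9600);           // serial link from the LattePanda host
      pinMode(FAN_PWM_PIN, OUTPUT);
      pinMode(LED_PIN, OUTPUT);
    }

    void loop() {
      if (Serial.available() > 0)
        temperature = Serial.read();             // one byte per update

      // Illustrative mapping: fan off below 50 degC, full speed at 80 degC
      int pwm = map(constrain(temperature, 50, 80), 50, 80, 0, 255);
      analogWrite(FAN_PWM_PIN, pwm);

      // Blink faster as the temperature rises
      uint32_t halfPeriodMs = 1000 - 10 * (uint32_t)constrain(temperature, 0, 90);
      digitalWrite(LED_PIN, (millis() / halfPeriodMs) % 2);
    }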

Since I didn't have a suitable switch handy to drive the fan, I used a logic level shifter board for this purpose. It turned out to be very convenient to attach to the LattePanda using a few header sockets.

Fan and Driver

 

 

4.3. Software

4.3.1. Localization and Navigation

   under construction...

4.3.2. Speech

   under construction...

4.3.3. Cooling Fan

   under construction...

 


 
