Friday, April 30, 2010

RS-232 to TTL level shifter

While working on equipping Pokey with a Bluetooth modem (articles forthcoming), I found that I needed an RS-232 to TTL level shifter so I could safely hook up the modem to the PC for troubleshooting.  I probably should've saved the time and bought one from Sparkfun Electronics.  But it was fun to build.  Here's how.

A re-purposed computer mouse cable with PS/2 connector plus a DB9 adapter provided the connectivity to the computer on one end and the Bluetooth modem on the other. Some web searching uncovered a few suitable circuits.  I selected the SparkFun diagram (pdf), which I drew up in Eagle, then laid out a board to fit into one of the small enclosures I got from the TGIMBOEJ a while back (a Hammond 1551KBK). Here are the Eagle files for the project.

SparkFun Circuit Diagram in Eagle

POV-Ray Rendering With Eagle3D

Fabricating the PCB went easier than in prior projects. I used single-sided board this time and the transfer went really well too.  Instead of using just the sponge, I soaked the board in pure ferric chloride for about a half hour, then scrubbed with a sponge (and no water this time!). The copper came off lickety-split. The whole project took about 2 or 3 hours from inception to completed device. Not bad!

The etched and drilled level shifter PCB

The populated circuit board, enclosed, and wired

Completed RS-232/TTL Level Shifter

I'll admit that the circuit didn't work the first time I tried it, but a little bit of troubleshooting and rewiring had it working perfectly in no time!  What's nice is that I can now use this for all kinds of serial communications projects, and it's dandy for troubleshooting communications.

Tuesday, April 27, 2010

An AVR Programming Tip

I've encountered avr-gcc / Arduino IDE compiler errors like this a couple of times in the last few weeks:
error: variable or field 'delobj' declared void In function 'void camReadPicture(boolean, boolean)':
 In function 'void wireInit(char)':
 In function 'void recvByteHandler(int)':
 In function 'void sendByteHandler()':
Bad error line: -11
A negative line number?  One possibility: check your comments at the top of the file for extraneous characters ... here's what I found in a program I was tweaking. Note the letter 'd' that shouldn't be there.
d/**
 * @file       main.c
 * @brief      MAIN for the gameboy camera interface for AT90S4433
 *
 * @author     Laurent Saint-Marcel
 * @date       2005/07/05

Friday, April 23, 2010

GameBoy Camera Prototyping

Updated 9/9/2010: Source Code is now available on Google Code.

Holy TTL, Batman. My cobbled-together code and circuitry works! I just took my first Game Boy Camera picture.  Here are all the secrets I know of for interfacing a Game Boy Camera (Mitsubishi M64282FP) to a microcontroller.

First picture!

The actual scene

Summary Version

Take a Game Boy Camera, an Ardweeny running a tweaked version of the code here, an HP 1650A Logic Analyzer to get the timing right, and a Java Swing desktop application based on code here; after fixing some goofed-up wiring and timing, it works!  Some tweaking of the camera configuration and it now takes some nice shots, and the flame detection software does its job with real images, too!

Really Important Tips
  • Timing is key when interfacing with the M64282FP
  • But, you can also clock the M64282FP as slow as you need to 
  • Setting the bias (dc offset) voltage to 1.0V is mandatory (the chip outputs 2Vp-p)
  • Setting the black level offset correctly is important
  • The camera actually spits out 128x128 pixels, but the last 5 rows are junk
  • Setting the gain too high can cause odd pixel artifacts (MSB truncation?)

The Long Version

Game Boy Camera
First, I cut the wires off the 9-pin connector one by one, spliced them to longer wires, and attached each to a small breadboard with a 9-pin header so I could plug the camera into my protoboard.

Microcontroller
The Ardweeny from Solarbotics that I recently ordered and assembled lends itself well to rapid prototyping. It's Arduino-compatible, running an ATmega328P MCU.

The first step was getting the code put together and getting the timing signals right to activate the Game Boy Camera (Mitsubishi M64282FP image sensor chip aka "Artificial Retina").

I started with code here plus the datasheet. I copied the code into my Arduino IDE and tweaked it as necessary to get it to compile. Then tweaked some more to get the timing right. Along the way, I merged several functions so signal timing was more obvious to me as I read the source.

I ran the code, and... it didn't work. I wasn't getting any response from the image sensor... until I realized I'd crossed a couple of wires on the protoboard. Fixing that, the data came streaming through on the Arduino IDE Serial Monitor.  My Arduino code can be found here.

Mitsubishi M64282FP Timing
I've found two versions of the datasheet so far and the timing is a bit ambiguous, so let me provide the following hints; a short register-load sketch follows the list. If you're in the middle of working with one of these cameras, all this will mean something. Otherwise it won't...
  • RESET/XRST has to be low on the rising edge of XCK
  • Raise LOAD high as you clear the last bit of each register you send
  • START has to be high before raising XCK
  • Send START once
  • The camera won't pulse the START pin; the datasheet is confusing about this
  • READ goes high on rising XCK
  • Read VOUT analog values shortly after you set XCK low
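
To make the XCK/LOAD hints concrete, here's a minimal Arduino-style sketch of the register-load sequence as I understand it. The pin numbers and function names are mine -- match them to your own wiring -- and pinMode(..., OUTPUT) setup is assumed.

#define PIN_XCK  2   // hypothetical pin assignments -- use your own
#define PIN_SIN  3
#define PIN_LOAD 4

// Shift one bit into the sensor; SIN is sampled on the rising edge of XCK.
void xferBit(byte b)
{
  digitalWrite(PIN_XCK, LOW);
  digitalWrite(PIN_SIN, b ? HIGH : LOW);
  digitalWrite(PIN_XCK, HIGH);
}

// Send a 3-bit register address then 8 data bits, MSB first, raising
// LOAD while the last bit goes out, per the tips above.
void writeReg(byte addr, byte value)
{
  for (int i = 2; i >= 0; i--)
    xferBit((addr >> i) & 1);
  for (int i = 7; i >= 0; i--) {
    if (i == 0) digitalWrite(PIN_LOAD, HIGH); // raise LOAD on the last bit
    xferBit((value >> i) & 1);
  }
  digitalWrite(PIN_LOAD, LOW);
}
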
Logic Analyzer
In debugging and fixing the timing, the HP 1650A Logic Analyzer that I recently put in operation was absolutely invaluable. I can't imagine trying to debug the issues I encountered without a logic analyzer.

Ardweeny Under Test

Checking Signal Timing

PC Software
Next up, capture the serial data and display it as a picture on the screen. I started with code here and decided to take a dive into the NetBeans IDE. I like it so far. Lighter weight than Eclipse, more intuitive to use, and it has a really nice GUI designer built in. I found it rather familiar after having worked with Xcode while equipping Pokey with a Bluetooth modem (a series of articles coming soon).

I created a new project, designed a GUI from scratch using the IDE, then copied the relevant code into the appropriate spots. Did a few tweaks to get it to talk to the software on the Arduino.  Finally got an image to display on the screen--consisting only of lines and gibberish. Not the real picture. Crap!

The preliminary version of the M64282FP datasheet suggested the cause might be a timing issue when reading the analog pixel data. The datasheet I'd been using was ambiguous on that issue.

I tweaked the code to read Vout (analog) shortly after dropping XCK and... Shazam!  The image at the top of this article appeared.
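
In Arduino terms the fix amounted to something like this -- a sketch, not my exact code; PIN_VOUT is a hypothetical analog pin name, and PIN_XCK is as in the register sketch earlier:

#define PIN_VOUT A0  // hypothetical analog pin wired to the camera's VOUT

// Read one pixel: sample VOUT shortly after XCK falls, then clock again.
int readPixel()
{
  digitalWrite(PIN_XCK, LOW);
  int raw = analogRead(PIN_VOUT);  // sample while the clock is low
  digitalWrite(PIN_XCK, HIGH);     // rising edge advances to the next pixel
  return raw;                      // 10-bit value, 0..1023
}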

After the time put in bashing through, seeing that image was nothing short of miraculous!  The source code and NetBeans project files for the PC client are here.

Configuring the Camera
Getting that first readable image was great, but the second one sucked, with bizarre artifacts where bright spots should appear (see below).

There's no way my simple bright-spot detection algorithm could correctly handle this mess of pixels. I had to learn more about how the camera settings worked.

Artifacts from high gain and MSB truncation

To help with troubleshooting, I extended the functionality of the client significantly, providing a means of setting the relevant camera registers and displaying a histogram below the picture.

One last article I found on the camera held a revelation. The Vout voltage is 2 volts peak to peak!  So one has to configure the voltage offset register V for 1.0V, a value of 7 per the datasheet, to get positive signals that the ADC can handle. Doing so immediately yielded a better result.

Then I discovered that the bright artifacts appeared when the camera's gain was set above 0. It dawned on me that I was using a 10-bit ADC but passing an 8-bit value to the Java application; I was truncating the most significant bits, which mattered at higher gains with higher maximum voltages. That explained everything.

I found that you can either continue to use the lowest 8 bits and set the gain to 0, or rotate off the lowest two bits, increase the gain substantially, and possibly also tweak Vref and offset to maximize the dynamic range of the picture. Bottom line: just be careful of the resolution of your ADC and the data types (signed, unsigned, int, char, short) used to store the results.
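
In code, the two options look roughly like this (using the 10-bit value from the hypothetical readPixel() sketch earlier):

int raw = readPixel();  // 10-bit ADC value, 0..1023

// Option 1: keep only the low 8 bits and run the camera at gain 0.
byte pixel = raw & 0xFF;

// Option 2: rotate off the two low bits so the most significant bits
// survive the trip into a byte, then crank the gain (and tweak Vref
// and offset) to use the full range.
byte pixel2 = raw >> 2;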

The black level in the image is set by the offset register O in 32mV increments, plus or minus. When the offset was too low and the image underexposed, strange white pixel artifacts appeared where the darkest parts of the picture were supposed to be. Setting the black level a little higher solved the problem.  Apparently the "negative" voltage values were being converted to an unsigned value and became high-value pixels (white), which you can kind of see when you look at the histogram.

Offset Too Low + Underexposed

Using the histogram feature made it easy to quickly dial in a decent exposure. Ideally, software auto exposure would be great, but for the narrower purpose of finding the candle, manually calibrating the camera for competition conditions will probably be adequate.  Depends on how much time I have for refinement.

Correct Exposure... Finally!

So does it work?  Can the camera see a candle?  Does the flame detection software work?

Nothing like a blogging cliffhanger, huh?  Click here to find out what happened.

Updated 9/9/2010: Source Code is now available on Google Code.

Tuesday, April 20, 2010

Sparkfun Autonomous Vehicle Contest Photos

The Sparkfun Autonomous Vehicle Contest (AVC) was a riot.  Quite a crowd turned up, and the robots put on some amazing shows, ranging from agonizing failure to triumphant victory.

Full recap and great pics by Sparkfun here.

Great photos by felis (comment below) here.

Here are some pics I took.

A robot rounds the 3rd corner followed by "chase humans"


Heading right for the curb!

Curb detected, evasive action!

We arrived between the first and second heats.  Only a few land robots succeeded in the second heat. We couldn't stay for the aerial portion, but as we were leaving we saw a few robot airplanes fly fairly far astray.

Elmo was a favorite among the kids...

Under Elmo's seat 

SHARC's Kurt drives his robot to the starting line

A competitor prepares his robot for starting

Another of the successful robots approaches the finish

Friday, April 16, 2010

HP 1650A Logic Analyzer is Working

A bit more than a year ago, one of the generous members of SHARC gave me an old logic analyzer.  The Hewlett Packard 1650A didn't come with probes, but with some cash outlay and simple cable building I finally got the old machine back in business.

First, I needed a pod, which is a collection of probe wires bundled together to a common connector. I found one on eBay for a decent price and it came with a set of claw grabber clips, but I ordered another set of single claw clips that will work better for DIP chips.

 
Claw clip in upper left, DIP clips in foreground

When the pod arrived, I realized I also needed a female-female IDC ribbon cable to hook the pod to the back of the machine. Finding the original HP cable for anything less than an astronomical, outrageous price proved impossible, so I decided to build my own.  Easy enough.

 
Ribbon cable and IDC plugged into pod 1 (center)

The cable requires a 40-pin IDC connector on each end and 40-conductor ribbon cable. I chose twisted-pair ribbon cable similar to HP's, in hopes that this will increase noise immunity versus a plain flat ribbon. Technically, the pod only has 37 pins, but the connector width is designed for a 40-pin IDC. If you want to save a little time, here's a link to the list of parts on Mouser. Cost was just over $25.

Putting The Logic Analyzer to Use

So how do you use one of these things?  I found this Tektronix white paper, The XYZs of Logic Analyzers (pdf), to be a helpful, quick introduction. So I tried the thing out, with my still-in-progress Oscilloscope Calibrator project as the SUT (system under test).

Logic analyzers can display state information or timing information (some do both at once). After I selected the timing waveform display and told the analyzer to capture and overlay multiple samples, it quickly revealed that the clock signal coming from the 74HC04 shows some variance (the top waveform on the screen below).


Screen shot, multiple samples accumulated

The vertical lines with X and O (detail below) are markers used to measure timing. By moving these markers around on the screen, I could figure out that the clock signal was rising as much as 20ns early and falling as much as 20ns late as shown by the double vertical rise bars on the top signal, where multiple samples are overlaid.

The waveforms below the top waveform are the frequency-divided signals that will be the output of the calibrator (coming off the 74LS393 and 74LS162). Now, I'm not crazy about 40ns of clock jitter out of a 240ns clock period, but I think that overall accuracy of the frequency is good enough for now.

Screen detail, timing variance, and markers

I'll probably try to fix it anyway to see what I can learn about the cause of the problem. Maybe it's simply a question of replacing the resonator with a crystal, or addressing noise or capacitance issues -- I don't know.

Meanwhile, the logic analyzer works. So I can use it for testing and troubleshooting all sorts of digital signals. It's an old machine so I can't use it for modern high frequency CPUs but I would think it'd be more than fine for slow robot microcontrollers and certainly for slow serial and parallel communications between devices.

The only thing left was to fix the dim CRT.  Some tweaking of an internal calibration potentiometer, plus fixing the rear intensity control knob, and it's as bright as it will ever need to be.

Tuesday, April 13, 2010

Happy National Robotics Week

Happy National Robotics Week from the Bot Thoughts robots!

Left to right: Hexbug, unnamed UFO toy that avoids falling off the edge of tables, dancing robot in the background, Spunky the photopopper (article forthcoming), Pokey in the midst of his refit, Microrobot Cruiser, and unnamed cool retro robot toy.

I'm looking forward to the Sparkfun autonomous vehicle contest this weekend. Also the Denver Area Robotics Group is doing a show and tell at a local library for the kids. Should be fun (alas I can't make it).

What are you doing to celebrate National Robotics Week?

Friday, April 9, 2010

Candle Seeking Vision Software

Pokey, the firefighting robot, absolutely must find the candle this time!  Not like last time when he completely ignored the candle right in front of him. (sigh)

While waiting for a Game Boy camera to show up in my mailbox, I figured I better see how hard it would be to cook up some code that could reliably detect a candle at various distances.

So the next proverbial bite of the elephant was to do some code prototyping in an environment that's comfortable and easy.  To wit, C in Cygwin on my PC (yes despite all my posts referencing Macintosh, and a house full of them, I have--and use--a PC, too, because it was faster than my G4/450 and it cost $5).

Simulating Pictures
The Game Boy camera outputs 128 x 123 pixel, 8-bit grayscale images.  To simulate contest scenarios, I shot pics with my DSLR of a candle in various spots around the room, uploaded them, then batch converted the images using Irfanview to approximately 128x123 pixels, 8-bit greyscale, and saved as an easy-to-work-with Windows BMP (bitmap) file:

Greyscale 200x123 bitmap of candle

Reading a Bitmap File
Then I coded up a simple C program to reprint the BMP as ASCII art, to verify that I could access each and every pixel and its brightness value.  Of course, the aspect ratio is a little skewed but... clearly the program works!  (Click on the image for a much larger, clearer, and hopefully brighter version if you're skeptical.)  I will hereby confess that my C skills were pretty rusty.  How could I forget the proper way to malloc() a char ** type??  That's just sad.  Perl has made me soft and weak...
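
The heart of that program looks something like the following -- a stripped-down sketch that assumes an uncompressed, bottom-up, 8-bit greyscale BMP. The 240 cutoff for 'X' matches the text; the other thresholds are guesses.

#include <stdio.h>
#include <stdint.h>

/* Dump an uncompressed 8-bit greyscale BMP as ASCII art. */
int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    uint8_t hdr[54];
    if (!f || fread(hdr, 1, 54, f) != 54) return 1;

    /* Pixel data offset, width, and height -- all little-endian. */
    long offset = hdr[10] | hdr[11] << 8 | hdr[12] << 16 | hdr[13] << 24;
    long width  = hdr[18] | hdr[19] << 8 | hdr[20] << 16 | hdr[21] << 24;
    long height = hdr[22] | hdr[23] << 8 | hdr[24] << 16 | hdr[25] << 24;

    /* Rows are stored bottom-up, padded to 4-byte boundaries. */
    long stride = (width + 3) & ~3L;
    fseek(f, offset, SEEK_SET);

    for (long y = 0; y < height; y++) {
        for (long x = 0; x < stride; x++) {
            int p = fgetc(f);
            if (x >= width) continue;      /* skip row padding */
            putchar(p > 240 ? 'X' :        /* brightest: flame */
                    p > 180 ? '+' :        /* bright: candle   */
                    p > 100 ? '.' : ' ');  /* dim or dark      */
        }
        putchar('\n');
    }
    fclose(f);
    return 0;
}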

Converted to ASCII art

Notice in the detail shot below that the candle flame is, in fact, the brightest thing in the picture, represented by the character X (assigned to any pixel with a value greater than 240 out of 255); the next brightest things, like the white candle itself, are indicated by the character +. Cool!

Detail of candle; flame is brightest

So that tells me there is actually some hope of detecting bright spots in a snapshot image.  I didn't use any IR filtering, which "should" improve things even more by eliminating most everything in the image except the flame or reflected IR.

Some Difficult Scenarios
The test photo above represents an easy scenario.  I'll need to anticipate the possibility of multiple bright spots of different sizes: sun shining on a wall, or the reflection of the flame on the wall behind it.  The algorithm will have to key in on the brightest spots that have the size and/or proportions of a candle flame.

Candle flame and distant, sunlit door

If that happens, the robot will have to somehow evaluate each candidate candle flame: maybe with other sensors, maybe by going up closer and taking another 'look'. The robot also has to recognize flames of varying size, whether the variation is due to distance, drafts, length of candle wick, type of candle, or whatever else.

Candle flame and reflection off of HP LaserJet

Some Experiments
Now that I had the "lab" set up, it was time to experiment with some statistical analysis, perhaps try out some published algorithms for finding bright spots, or whatever else came to mind.

First, I plotted a histogram for each of the images. Roughly speaking, the bright pixels accounted for a pretty small percentage of the intensities represented in the images. My thinking is that histogram statistics might help to yield an optimal exposure so there's more work to do with that.  I'd rather wait on that until I have a better sense of what the camera sensor can do.
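
Tallying the histogram itself is trivial -- one bin per 8-bit intensity. A minimal sketch (names are mine):

/* Count pixels at each 8-bit intensity; bright pixels land in the
   high bins. */
void histogram(const unsigned char *pixels, int count, unsigned hist[256])
{
    for (int i = 0; i < 256; i++) hist[i] = 0;
    for (int i = 0; i < count; i++) hist[pixels[i]]++;
}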

Next, I tried simply projecting (summing) the bright spots vertically and horizontally. In the case of one bright candle object, this approach would yield a quick way to identify a bounding box around the object.
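
A sketch of the projection idea, with my own names and the same 240 brightness cutoff as before: sum the bright pixels into per-column and per-row counts, and the first and last nonzero entries of each bracket the object.

#define THRESH 240  /* brightness cutoff */

/* Project bright pixels onto the x and y axes. With a single bright
   object, the nonzero extents of colSum and rowSum give its bounding box. */
void project(const unsigned char *pix, int w, int h,
             unsigned *colSum, unsigned *rowSum)
{
    for (int x = 0; x < w; x++) colSum[x] = 0;
    for (int y = 0; y < h; y++) rowSum[y] = 0;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (pix[y * w + x] > THRESH) { colSum[x]++; rowSum[y]++; }
}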

Prototyping Flood-Fill
Then I decided to play around with multiple object detection. After some research, the flood-fill algorithm caught my fancy.  It was simple enough to play with and hopefully could be efficient enough to support analysis of multiple objects at a reasonable frame rate (10-30fps). Here's what I did.

The image from the camera will be coming in serially. Likewise, my simple C program reads the bitmap pixels sequentially.

Scenario 1
A two-dimensional array of unsigned integers represents each pixel's object assignment. (Inefficient, but quick to prototype). When the code encounters the first bright pixel (above a set threshold) after one or more dark pixels, it assigns that pixel to the next available object number (essentially, object_mask_array[x][y] = nextavailableobj). All subsequent, contiguous bright pixels are assigned that same object number. Like this.

.. .. 01 01 01 .. .. 02 02 .. XX XX XX

The ".." is a dark pixel. The XX hasn't been processed yet. Two objects identified so far, and the final three pixels will be assigned to 03.

Scenario 2
That's the simple scenario. But if there's a bright pixel above the current bright pixel, the two are contiguous.  So whatever object was previously assigned to the pixel above should be assigned to the current one.  The simplest scenario follows.

.. .. 01 01 01 .. .. 02 02 .. 03 03 03
.. .. .. XX XX .. .. .. .. .. .. .. ..

When the first XX is encountered, it is contiguous to the pixel above, assigned to 01.  So the current pixel is assigned to 01 also, as well as all subsequent, contiguous bright pixels, like this:

.. .. 01 01 01 .. .. 02 02 .. 03 03 03
.. .. .. 01 01 .. .. .. .. .. .. .. ..

Scenario 3
If the above pixels 'start' before the bottom set of pixels do, it's easy. A harder scenario, below, occurs when one has already assigned an object to a row of pixels, only to discover partway through that the line is contiguous with an object above.

.. .. 01 01 01 .. .. 02 02 .. 03 03 03
04 04 XX XX XX .. .. .. .. .. .. .. ..

The current pixel (leftmost XX) is contiguous with 01 above, but we've already assigned 04 to this object. Since I was only prototyping, my inefficient solution was simply to stop where I was and re-do the prior pixels.

.. .. 01 01 01 .. .. 02 02 .. 03 03 03
01 01 XX XX XX .. .. .. .. .. .. .. ..

And then I could continue assigning subsequent pixels to the 01 object.

.. .. 01 01 01 .. .. 02 02 .. 03 03 03
01 01 01 01 01 .. .. .. .. .. .. .. ..

Scenario 4
The hardest scenario, which I didn't address in my prototype code, was that of a pair of bunny ears. In other words, the object has two lumps at the top that are not contiguous themselves, but a subsequent row ties them both together. One has to go back and redo the object above.  Like this.

.. .. 01 01 01 .. .. 02 02 .. 03 03 03
01 01 01 01 01 01 01 XX XX .. .. .. ..

The 02 object has to be reassigned to the 01 object.  If it's just one row, that isn't even all that hard.  But what if it's several rows?  And what if some of those rows 'start' earlier than the ones below?  You can easily come up with additional tricky situations.

.. .. .. .. .. .. .. 01 01 .. .. .. ..
.. .. .. .. .. .. 01 01 01 .. .. .. ..
.. .. 02 02 02 .. .. 01 01 .. 03 03 03
02 02 02 02 02 02 02 XX XX .. .. .. ..

This complexity is an artifact of processing pixels on the fly -- versus reading everything first, and processing after.  I wanted to see if the former approach was even possible in case the vision system turns out to be memory constrained.
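
Here's a condensed sketch of the row-by-row labeling, covering scenarios 1 through 3 (the bunny-ears merge of scenario 4 is left unhandled, just as in my prototype). All it needs is the labels of the row above, which is the memory savings I was after. Names and types are mine.

#include <stdint.h>

#define WIDTH  128
#define THRESH 240  /* brightness cutoff */

/* Label one row of pixels given the object labels of the row above.
   Label 0 means dark/unassigned. */
void labelRow(const uint8_t *pix, const uint16_t *above,
              uint16_t *cur, uint16_t *nextObj)
{
    uint16_t run = 0;  /* object of the bright run in progress */
    int start = 0;     /* where that run began */

    for (int x = 0; x < WIDTH; x++) {
        if (pix[x] <= THRESH) {  /* a dark pixel ends any run */
            cur[x] = 0;
            run = 0;
            continue;
        }
        if (run == 0) {          /* scenarios 1 and 2: a run starts */
            run = above[x] ? above[x] : (*nextObj)++;
            start = x;
        } else if (above[x] && above[x] != run) {
            /* Scenario 3: partway through, the run turns out to touch
               an object above. Back up, relabel, and the old object
               can then be deleted. */
            for (int i = start; i < x; i++)
                cur[i] = above[x];
            run = above[x];
        }
        cur[x] = run;
    }
}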

Flood Fill Results
Once again, this was just a proof of concept to see if there was any chance in the world that I might be able to identify separate bright objects in an image. The experiments successfully showed that it's possible, even with a relatively simple algorithm.

Of course to do this 'for real' the algorithm would then have to keep track of the bounding box coordinates for each object and eventually some code would have to determine which objects were likely to be candle flames. All in due time.

A Difficult Scenario

At least for now, I can take a pretty tough scenario like the one above, with a candle in front of a sunlit door, and identify that the candle and the swath of sunlight are separate objects.  Click on the text image to see that the swath of light is assigned to object 05 and the candle flame to object 03.

The Algorithm Works!

My astute readers will no doubt notice that the lower left part of the swath of light is assigned to object 01.  The algorithm processes the bitmap pixels upside down, the order in which they're stored in the file. So it runs into the bunny ears scenario (4, above): it assigns the second bunny ear to 05, then assigns the line connecting 01 and 05, and all subsequent lines, to object 05, leaving the first bunny ear still assigned to object 01.

Bounding Box
Writing code to calculate the bounding box of each object was pretty straightforward.  The hard stuff was already completed (above).  A "C" struct represents an object and contains an "exists" flag to indicate if the object has been created or deleted, as well as bounding box coordinates for top, bottom, left and right.

One simple function adds a pixel to an object: if the pixel lies outside the bounding box, the box's coordinates are changed to encompass the new pixel.

A function to delete an object is called when encountering scenario 3 above: pixels that were originally assigned to a new object are later discovered to be connected to a second object. The new object can be discarded because all of its pixels have to be reassigned to the second object.
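
A sketch of the struct and the two helpers just described; field and function names are mine.

typedef struct {
    int exists;                    /* 0 = unused or deleted */
    int top, bottom, left, right;  /* bounding box, inclusive */
} Object;

/* Add a pixel to an object, growing the bounding box if the pixel
   lies outside it; the first pixel claims the slot. */
void addPixel(Object *o, int x, int y)
{
    if (!o->exists) {
        o->exists = 1;
        o->left = o->right = x;
        o->top = o->bottom = y;
        return;
    }
    if (x < o->left)   o->left   = x;
    if (x > o->right)  o->right  = x;
    if (y < o->top)    o->top    = y;
    if (y > o->bottom) o->bottom = y;
}

/* Delete an object whose pixels were reassigned (scenario 3). */
void deleteObject(Object *o)
{
    o->exists = 0;
}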

Finally, a print function displays info about each object, including calculating size, aspect ratio and midpoint, and then printing out the bitmap within the bounding box.  Here's the results from the simple test image:

-------- Candle006.bmp --------
Width: 185 Height: 123 Planes: 1
BitCount: 8 Colors: 256 SizeImage: 23124

Object 02
  Box: (97, 66) (100, 60)
  Size: (4, 7)
  Ratio: 57%
  Mid: (99, 63)

....02..
..0202..
020202..
..0202..
..0202..
02020202
02020202

Recall that the y coordinates are upside down due to the BMP file format. The midpoint coordinates are for pointing the robot at the flame.  The width-to-height proportion may help filter out non-flame objects.  From here, I can add any other info or calculations that are needed, like average intensity within the bounding box.
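
For the record, those stats fall straight out of the bounding box. Roughly, using the Object sketch above (the exact midpoint rounding is a matter of taste):

#include <stdio.h>

/* Print derived stats: size, width-to-height ratio, and midpoint. */
void printStats(const Object *o)
{
    int w     = o->right - o->left + 1;    /* e.g. 4 for the flame above */
    int h     = o->bottom - o->top + 1;    /* e.g. 7 */
    int ratio = 100 * w / h;               /* 57% -- flame-ish proportions */
    int midX  = (o->left + o->right) / 2;  /* where to point the robot */
    int midY  = (o->top + o->bottom) / 2;
    printf("Size: (%d, %d)  Ratio: %d%%  Mid: (%d, %d)\n",
           w, h, ratio, midX, midY);
}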

Also, I could add pixel coordinates to each object struct to enable recreation of the original image or the bright/dark processed image without having to store the entire bitmap in memory.

Whee!
Maybe it seems silly but I'm incredibly excited that I got all this working. The vision approach is starting to look pretty hopeful...

...notwithstanding the mountain of electronics interfacing work yet to do...

Monday, April 5, 2010

Sparkfun Autonomous Vehicle Contest

Sparkfun is hosting another Autonomous Vehicle Competition on April 17, during National Robotics Week.

2009 Sparkfun Autonomous Vehicle Competition
(Picture from Sparkfun website)

The mission: circumnavigate the Sparkfun building in Boulder, totally autonomously, either on land or by air (avoid the 'sea' -- the nearby lake). Fastest air and land times each win $300.

I've never been, but hopefully this year I'll be able to go check it out. And who knows, maybe next year I'll enter.

Sunday, April 4, 2010

Happy Easter!

Wishing you a Happy Easter and/or enjoyable Sunday (as applicable)! :) :) :)

Friday, April 2, 2010

Ardweeny

The Solarbotics Ardweeny

I've been wanting to play around with Arduino to see what all the fuss is about. Solarbotics sells an inexpensive Arduino-compatible backpack style kit called Ardweeny. It consists of a small PCB with headers that allow it to sit on top of the included ATmega328, giving it the smallest through-hole footprint possible.

Building it was simple thanks to very easy-to-follow, full-color, illustrated instructions. I also bought the Sparkfun 5V USB-to-TTL FTDI breakout board through Solarbotics, which allows for programming and powering the Ardweeny.  Right after I plugged in the USB serial board and powered up the Ardweeny, the LED started blinking. Apparently this Arduino started out life with an LED-blinking sketch?

The Ardweeny lives!

Now... what to do with it?  Naturally I started by writing a custom LED blinking sketch and downloading it to the chip.

But first, I had to download the Virtual COM Port (VCP) drivers from FTDI, unplug and replug the USB cable, and tell the Arduino IDE (which I'd downloaded previously) to use the correct COM port, after checking Device Manager to find the virtual port for USB.

The procedure is similar for Mac OS X; the Arduino IDE is cross-platform. Download FTDI's OS X VCP driver disk image, install the appropriate package, and when you plug in the programmer, look for a file to appear named /dev/tty.Sparkfun* -- for example, on my computer I saw: /dev/tty.SparkFun-BT-SerialPort-1

Here's the blinking LED code:

int ledPin = 13; // LED on digital pin 13

void setup()
{
  pinMode(ledPin, OUTPUT); // set pin as output
}

void loop()
{
  digitalWrite(ledPin, HIGH); // led on
  delay(1000);
  digitalWrite(ledPin, LOW); // led off
  delay(1000);
}

The next test was reading the analog value from a Sharp GP2D12 IR ranger sensor and blinking the LED at a rate proportional to the value. I modified the blink code above by adding a reference to the analog pin for the IR sensor and a variable to store the analog value.

int irPin = 5;
int val = 0;

Then I incorporated the sensor value into the blinking loop. Pretty simple.

void loop()
{
  val = analogRead(irPin); // get sensor value
  digitalWrite(ledPin, HIGH);
  delay(val); // use value for delay
  digitalWrite(ledPin, LOW);
  delay(val); // use value for delay
}

The final test was sending the value from the sensor directly to the computer via the FTDI serial port. (Note that this also requires a Serial.begin(9600) call in setup().)

void loop() {
  val = analogRead(irPin);
  Serial.println(val, DEC); // print value with newline
  delay(1000);
}

Hmm. That was all really easy--not entirely unlike using a Basic Stamp (but cheaper).  Of course, there's much more to explore.

Speaking of cheap, the Ardweeny was inspired by Kimio Kosaka's "One-Chip-Arduino" project. That's about as low budget as it gets...