Friday, February 17, 2012

AVC: Encoders, Quantization Error

As I continue working on various aspects of my Sparkfun AVC robot, I learn all kinds of interesting things, this time about quantization error, dither, and the exponential filter.

A recent test run provided the following speed data from the wheel encoders, which digitize position at regular intervals.


Here's a detail view of one of the plots.

Detail speed plot showing quantization error
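
For concreteness, here's roughly where those stair-steps come from: the encoder counts whole ticks, so speed computed from a tick delta can only land on discrete values. The encoder resolution, wheel size, and sample period below are made-up numbers, just for illustration.

    #include <stdio.h>

    /* Made-up numbers, for illustration only. */
    #define TICKS_PER_REV 128    /* encoder ticks per wheel revolution */
    #define WHEEL_CIRC_M  0.50   /* wheel circumference, meters */
    #define DT_S          0.05   /* sampling period, 50 ms */

    /* Speed estimated from the change in the integer tick count. */
    double speed_from_ticks(long ticks_now, long ticks_prev)
    {
        double dist_per_tick = WHEEL_CIRC_M / TICKS_PER_REV;
        return (double)(ticks_now - ticks_prev) * dist_per_tick / DT_S;
    }

    int main(void)
    {
        /* The tick delta is always a whole number, so speed comes in
           discrete steps of dist_per_tick / DT_S, about 0.078 m/s each. */
        for (long d = 0; d <= 4; d++)
            printf("delta = %ld ticks -> %.3f m/s\n", d, speed_from_ticks(d, 0));
        return 0;
    }
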
It seemed to me that a smoother curve might not be a bad idea. Maybe it doesn't matter, but I was curious to find out what, if anything, I could do. Here's what I learned...

First I tried filtering the data, but I still ended up with a bunch of spikes. Before I could filter out the noise, I figured I'd better understand the nature of the noise.

Filtered speed (green) shows spikes
Any time analog data is digitized, the real value is rounded up or down to the nearest digitally representable number. The difference between the real value and the digital one is quantization error.
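
A toy example of that, with an arbitrary step size q: round a smooth ramp to the nearest multiple of q and the error is never more than q/2, but it's completely determined by the input, which is why it shows up as stair-steps rather than random fuzz.

    #include <stdio.h>
    #include <math.h>

    /* Round x to the nearest multiple of the step size q. */
    double quantize(double x, double q)
    {
        return q * round(x / q);
    }

    int main(void)
    {
        double q = 0.1;  /* arbitrary step size */
        for (int i = 0; i <= 10; i++) {
            double x = 0.033 * i;  /* a smooth ramp */
            double xq = quantize(x, q);
            printf("real = %.3f  quantized = %.1f  error = %+.3f\n",
                   x, xq, x - xq);
        }
        return 0;
    }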

Picture of Pokey showing quantization error
Quantization error shows up in pictures where the original scene has a smooth gradient of color: after digitizing, the smooth gradient becomes wide, discrete bands of color. The picture above, of Pokey, was reduced to 16-bit color depth. Compare the picture below. It also uses 16-bit color. And dithering.


Dither adds random noise to each pixel, breaking up the visible effect of the quantization error. At least to our eyes. Here's the plot with some noise added.


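In code, dither is nothing more than adding random noise before quantizing. A minimal sketch (step size and noise range chosen arbitrarily): averaged over many samples, a value stuck between two steps comes out near its true value instead of always rounding the same way.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    double quantize(double x, double q)
    {
        return q * round(x / q);
    }

    /* Add uniform noise spanning one step, then quantize. The noise
       pushes the value across the rounding threshold part of the time,
       in proportion to where it sits between steps. */
    double dither_quantize(double x, double q)
    {
        double noise = ((double)rand() / RAND_MAX - 0.5) * q;
        return quantize(x + noise, q);
    }

    int main(void)
    {
        double q = 0.1, x = 0.13, sum = 0.0;
        int n = 100000;
        for (int i = 0; i < n; i++)
            sum += dither_quantize(x, q);
        printf("true = %.3f  plain = %.3f  dithered mean = %.3f\n",
               x, quantize(x, q), sum / n);
        return 0;
    }
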
That doesn't help the robot much, though: dither only looks smooth because our eyes and brains do quite a bit of noise filtering for free. The robot needs an explicit filter. One of the AVC entrants pointed me to the double exponential filter, basically a super fancy moving average. I opted to try a less fancy exponential filter (the green line below).

Dithered speed (blue) and exponential filtering (green)
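
The exponential filter itself is a one-liner. Alpha is a tuning knob (the value below is arbitrary): smaller alpha means smoother output but more lag.

    #include <stdio.h>

    /* Exponential filter: blend each new sample with the previous
       output. Equivalent to y += alpha * (x - y). */
    double exp_filter(double y_prev, double x, double alpha)
    {
        return alpha * x + (1.0 - alpha) * y_prev;
    }

    int main(void)
    {
        /* A stair-steppy "speed" signal like the encoder produces. */
        double raw[] = {0.000, 0.078, 0.000, 0.078, 0.156,
                        0.078, 0.156, 0.156, 0.234, 0.156};
        double y = raw[0];
        double alpha = 0.3;  /* arbitrary smoothing factor */
        for (int i = 0; i < 10; i++) {
            y = exp_filter(y, raw[i], alpha);
            printf("raw = %.3f  filtered = %.3f\n", raw[i], y);
        }
        return 0;
    }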

It's still not glass smooth, but it doesn't need to be. More smoothing means greater lag between the filtered signal and the real one. But clearly the filtered plot is much better; as with the picture above, the effective resolution of the signal is higher.

Is this filtering necessary? I don't know, but I'm considering some navigation calculations that depend on speed (and distance) to improve navigational accuracy, and it seems to me that improved resolution will reduce error in those calculations. If it's simple to implement in the real world, I may just go ahead and do it.

2 comments:

  1. If you need a faster response, you might also try a 2nd order filter, which uses the past *two* values rather than just the immediate past value. A 1st order filter will never overshoot, but will always lag. A 2nd order filter can react more quickly to changes (though you'll want your noise to be short spikes; you don't want to overreact to them), but it can overshoot. If you think of a step function input from 0 to 1, the first order filter will look like a simple low pass filter, ramping up to 1, whereas a 2nd order filter will initially overshoot the top of the input and have a damped oscillation down to the steady value of 1.

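     A minimal sketch of the 2nd-order behavior described above (the pole radius r and angle w below are arbitrary; together they set how hard the step response overshoots and rings):

         #include <stdio.h>
         #include <math.h>

         int main(void)
         {
             /* 2nd-order IIR filter: each output depends on the past *two*
                outputs. Poles at r*e^(+/-jw); g makes the DC gain exactly 1. */
             double r = 0.85, w = 0.35;    /* arbitrary tuning values */
             double a1 = 2.0 * r * cos(w);
             double a2 = -r * r;
             double g  = 1.0 - a1 - a2;
             double y1 = 0.0, y2 = 0.0;    /* the past two outputs */

             /* Step input 0 -> 1: the output climbs past 1.0, then settles
                back to 1 with a damped oscillation, as described above. */
             for (int n = 0; n < 40; n++) {
                 double y = a1 * y1 + a2 * y2 + g * 1.0;
                 printf("n = %2d  y = %.3f\n", n, y);
                 y2 = y1;
                 y1 = y;
             }
             return 0;
         }
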
  2. double exp filter will actually remove some of the lag because it takes into consideration the trend of the data. Instead of just being a moving avg, it makes a prediction based on the avg + the trend.

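     For reference, a minimal sketch of a double exponential (Holt-style) filter as described above: it smooths a level and a trend, and its output is level plus trend, i.e. a one-step prediction. Alpha and beta below are arbitrary smoothing factors.

         #include <stdio.h>

         int main(void)
         {
             double alpha = 0.3, beta = 0.1;  /* arbitrary smoothing factors */
             double level = 0.0, trend = 0.0;

             /* Feed it a steady ramp, like a robot accelerating smoothly. */
             for (int i = 1; i <= 20; i++) {
                 double x = 0.1 * i;
                 double prev_level = level;
                 /* Blend the new sample with the previous prediction. */
                 level = alpha * x + (1.0 - alpha) * (level + trend);
                 /* Update the trend from the change in level. */
                 trend = beta * (level - prev_level) + (1.0 - beta) * trend;
                 printf("x = %.2f  prediction = %.3f\n", x, level + trend);
             }
             return 0;
         }
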
