Friday, April 27, 2012

AVC: First Autonomous Runs

I've finished up the code required for Data Bus to run autonomously from waypoint to waypoint.

Unfortunately, the robot's performance is awful, resembling the circular meandering of a drunken butterfly.

I whipped up a Processing program to graphically display what the robot thought it was doing.

In the video capture, Data Bus is the green ellipse. The white line shows reported bearing to the next waypoint, depicted as a red circle. All other waypoints are blue.


It appears to me that the robot is over-correcting when trying to steer to the next waypoint. It's also possible that the steering control is lagging too far behind the position and bearing estimation.

At least now I have some idea of what is happening.

Current in-development version of the Processing program is here.

3 comments:

  1. I'll start by saying how awesome your blog is. I've been coming here for some time and have seldom seen any comments. Pretty odd if you ask me, given the quality of the stuff you put here. So please don't stop sharing your work like this.

    Now back to your post:

    The behavior of Data Bus reminds me of an unstable closed-loop system (which I guess it is). So yes, poor Data Bus is way oversteering (too much gain) on lagging data, and that's a recipe for disaster (ask any control engineer).

    I don't know exactly how the robot steers in reaction to its input, but you should definitely consider using a PID control algorithm. It's a pain to tune, but it's better than shooting arrows in the dark.

    ReplyDelete
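The PID suggestion above can be sketched roughly as follows. This is a minimal illustration, not the actual Data Bus code; the class and parameter names are hypothetical, and the gains are placeholders that would need tuning on the real vehicle. The one subtlety worth showing is wrapping the heading error so a 350° bearing and a 10° heading produce a small left turn rather than a near-full circle.

```python
class HeadingPID:
    """Minimal PID controller on heading error (hypothetical sketch)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    @staticmethod
    def wrap(angle):
        """Wrap an angle in degrees to [-180, 180)."""
        return (angle + 180.0) % 360.0 - 180.0

    def update(self, bearing_to_waypoint, heading):
        # Heading error, wrapped so we always turn the short way around.
        error = self.wrap(bearing_to_waypoint - heading)
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Placeholder gains; real values come from tuning on the vehicle.
pid = HeadingPID(kp=0.5, ki=0.0, kd=0.1, dt=0.05)
steer = pid.update(bearing_to_waypoint=350.0, heading=10.0)
# Bearing 350°, heading 10° is a -20° error: steer left, not the long way.
```

The derivative term is what would damp the over-correction described in the post, at the cost of amplifying sensor noise, which is part of why tuning is a pain.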
  2. I think your main problem is that your algorithm is TOO flexible.

    You don't need to be able to wildly change your heading and velocity... the course is fixed. It should be possible to PRE-DETERMINE the precise velocity and turn angle for the entire course ahead of time.

    By using waypoint navigation, you force yourself to have huge error correction capability at the waypoint changes, with the algorithm having no prior knowledge of the 'next' waypoint until it hits the current one.

    You don't want huge error correction. Instead, you want to use the error servo as an extra mechanism in the sensor fusion. Don't think of it in terms of steering the vehicle to point to a waypoint, but rather making small adjustments in the course you're keeping. Does a race car driver go waypoint to waypoint? No. They follow a set course around the track, correcting for deviation with minute corrections intended to follow the set course. They're far more concerned with their current velocity/direction vector than they are with their actual position.

    For example, if you're heading into turn 1, and the sensor data says you're 6 inches right of calculated and 6 inches forward of calculated, you need to slightly adjust your steering to the left and slightly adjust your velocity down. But at the next data point, the sensor data could tell you you're 12 inches right and 12 inches slow. Your next correction is a tiny bit the other way, but it's always centered on the precalculated perfect course around the track.

    The desired position is absolute, it's based on mathematics. But the actual position is subject to the whims of chance and sensor error. As such, the sensor derived position is always wrong. By knowing what your velocity/direction vector SHOULD be and only allowing small corrections from that, you add in an additional filter that compensates for the inherent errors in position.

    Think of sensor derived position as a scatter pattern of best guesses. The Kalman filter attempts to smooth that, but it's slow. Allowing only small deviations per cycle to the velocity/direction vector automatically compensates for the scatter pattern nature of the data.

    ReplyDelete
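The "small corrections around a precomputed course" idea above can be sketched like this. Again a hypothetical illustration, not Data Bus code: the function names, gain, and clamp limit are all made up for the example. The key point is that the per-cycle correction is clamped, so a noisy position fix can never command a wild swing away from the planned line.

```python
def clamp(x, lo, hi):
    """Limit x to the range [lo, hi]."""
    return max(lo, min(hi, x))

def corrected_heading(planned_heading, cross_track_error_in,
                      gain=0.5, max_correction_deg=5.0):
    """Bias the planned heading slightly back toward the planned line.

    cross_track_error_in: inches right (+) or left (-) of the
    precomputed course. The correction is clamped so sensor scatter
    only produces small deviations per cycle, as the comment suggests.
    """
    correction = clamp(-gain * cross_track_error_in,
                       -max_correction_deg, max_correction_deg)
    return planned_heading + correction

# 6 inches right of the line: a small left bias from the plan.
print(corrected_heading(90.0, 6.0))    # → 87.0
# 24 inches right: the correction saturates at 5 degrees.
print(corrected_heading(90.0, 24.0))   # → 85.0
```

The clamp acts as the extra filter the commenter describes: scattered position estimates only nudge the vector a few degrees per cycle, while the precomputed course supplies the trusted baseline.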
  3. @simbilou - thanks very much for the kind words! Also thanks for the thoughts about what's going on. I think you've hit the nail on the head. I'll post up some details tomorrow. :)

    ReplyDelete