Engineering Portfolio
This project was completed as part of the systems engineering course Controls for Autonomous Vehicles. The goal was to build a robot that could autonomously navigate a path using computer vision, IR sensors, and IMUs.
A car platform was provided with all the necessary hardware, including a Romi 32U4 control board, IR sensors, a camera, and an LED display, but no software libraries were included. Overall, I gained experience with control theory (Simulink, PID control, filtering, pure pursuit, dead reckoning, etc.) while also building on my software skills (Arduino, MATLAB).
Wall Following
The first challenge tackled in this course was getting the robot to follow a wall at a constant distance. PID control was used to achieve zero steady-state error, while IR sensors measured the distance to the wall.
I built a complete, closed-loop Simulink model to simulate the robot's motion. I experimented with different types of feedback, and the resulting plots are shown below, followed by a sketch of the control loop.
Proportional Feedback (Unstable)
Proportional + Derivative Feedback (Steady-State Error)
Proportional + Derivative + Integral Feedback
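As a rough illustration of the loop behind these plots, here is a minimal discrete PID sketch in MATLAB; the gains, control period, and the readIRDistance/setWheelSpeeds calls are hypothetical placeholders, not the actual course code.

```matlab
Kp = 2.0; Ki = 0.5; Kd = 0.8;   % illustrative gains, not the tuned values
d_ref = 0.20;                   % desired wall distance [m]
dt = 0.01;                      % control period [s]
v0 = 0.3;                       % nominal forward speed
N  = 1000;                      % number of control steps
err_int = 0; err_prev = 0;

for k = 1:N
    d = readIRDistance();                  % placeholder IR sensor read
    err = d_ref - d;
    err_int  = err_int + err*dt;           % integral term removes steady-state error
    err_der  = (err - err_prev)/dt;        % derivative term damps the oscillation
    err_prev = err;
    u = Kp*err + Ki*err_int + Kd*err_der;  % steering correction
    setWheelSpeeds(v0 - u, v0 + u);        % placeholder differential-drive command
end
```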
Pure Pursuit and Bezier Curves
Next, pure pursuit with Bezier curves was considered. Bezier curves have two major benefits for robotic path planning. First, they have few turning points and are therefore smoother. Second, the tangent vectors of the curve at the first and last control points lie along the edges of the Bezier control polygon, which makes it easy to control the robot's final position and orientation.
To the right is a MATLAB script that lets the user select any number of waypoints (which act as the first and last control points of each segment) for the robot to travel to. Once the waypoints have been chosen, two control points are calculated between each pair of waypoints (green dots). Then, a Bezier curve is drawn for each set of four control points (red line). Finally, the path of the robot is calculated using a Simulink model and overlaid (blue line).
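To make the construction concrete, the snippet below evaluates a single cubic Bezier segment between two waypoints; all control-point values are made up for illustration, not taken from the course script.

```matlab
P0 = [0; 0];     P3 = [1; 0.5];     % waypoints (first and last control points)
P1 = [0.3; 0.4]; P2 = [0.7; 0.1];   % interior control points (the green dots)

t = linspace(0, 1, 100);            % curve parameter
B = (1-t).^3 .* P0 + 3*(1-t).^2 .* t .* P1 ...
  + 3*(1-t) .* t.^2 .* P2 + t.^3 .* P3;   % 2x100 array of curve points

% The endpoint tangents lie along the control polygon:
%   B'(0) = 3*(P1 - P0) and B'(1) = 3*(P3 - P2),
% which is what lets the script fix the robot's final heading.
plot(B(1,:), B(2,:), 'r-'); hold on
plot([P0(1) P1(1) P2(1) P3(1)], [P0(2) P1(2) P2(2) P3(2)], 'go--')
```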
Dead Reckoning and Kalman Filter
Dead reckoning was implemented using wheel encoders to calculate both distance traveled and steering angle. The robot executes straight-line motion between successive waypoints. The velocity is preset by the user, while steering corrections are driven by feedback on the orientation error. Once the robot reaches the final waypoint, it revisits the waypoints in reverse order.
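A minimal sketch of the encoder-based pose update for a differential drive is shown below; the wheel radius, track width, and tick count are assumed placeholders rather than the actual Romi parameters.

```matlab
function [x, y, theta] = deadReckonStep(x, y, theta, dTicksL, dTicksR)
% One dead-reckoning update from incremental encoder ticks (a sketch).
R = 0.035;  L = 0.145;               % wheel radius / track width [m] (assumed)
ticksPerRev = 1440;                  % encoder resolution (assumed)

dsL = 2*pi*R * dTicksL/ticksPerRev;  % arc length traveled by the left wheel
dsR = 2*pi*R * dTicksR/ticksPerRev;  % arc length traveled by the right wheel

ds     = (dsR + dsL)/2;              % distance traveled by the robot center
dtheta = (dsR - dsL)/L;              % change in heading

theta = theta + dtheta;              % integrate the pose (simple Euler step)
x = x + ds*cos(theta);
y = y + ds*sin(theta);
end
```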
Dead reckoning is subject to cumulative error, so a Kalman filter is used as a correction method. The encoders give the robot's position with some accuracy. The orange cones (seen in the video above) act as fiducial markers: the robot knows where the cones are supposed to be in space, and the IR sensor measures the robot's actual position relative to them, again with some accuracy. The Kalman filter combines these two sources of information to produce a more accurate estimate of where the robot is. Finally, a correction step is applied so the robot remains on the correct path over time.
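The correction cycle amounts to one predict/update step of a standard linear Kalman filter. The toy models and numbers below are assumptions chosen only to make the snippet run, not the course's actual matrices.

```matlab
% Toy setup: state = [x; y], identity dynamics, one range-like measurement.
A = eye(2);  B = eye(2);  H = [1 0];   % placeholder system/measurement models
Q = 0.01*eye(2);  Rn = 0.05;           % process / sensor noise (assumed)
x_est = [0; 0];  P = eye(2);           % initial estimate and its covariance
u = [0.1; 0];                          % odometry increment this step (assumed)
z = 0.12;                              % IR reading off a cone (stand-in value)

% Predict with the dead-reckoning model:
x_pred = A*x_est + B*u;                % propagate the state estimate
P_pred = A*P*A' + Q;                   % uncertainty grows with motion

% Correct with the cone measurement:
K = P_pred*H' / (H*P_pred*H' + Rn);    % Kalman gain weighs the two sources
x_est = x_pred + K*(z - H*x_pred);     % blended, more accurate estimate
P = (eye(2) - K*H)*P_pred;             % uncertainty shrinks after the fix
```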
Computer Vision
To replace the IR sensors, computer vision was used to identify and locate the cones. Their positions were then determined with the pinhole camera model.
I wrote my computer vision script in MATLAB. First, I converted each image to HSV. Next, I translated the image to black and white using a custom HSV threshold. Then, I used edge detection to identify the largest zone. Finally, I took the left-most point of that zone as the corner and found the pixel coordinates of the center of the cone with simple algebra.
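These steps can be sketched with standard Image Processing Toolbox calls. The HSV bounds are stand-ins for the tuned orange-cone threshold, and bwareafilt stands in for the edge-detection/largest-zone step described above.

```matlab
img = imread('cone.jpg');            % stand-in test image
hsv = rgb2hsv(img);                  % step 1: convert to HSV

% Step 2: custom HSV threshold -> black-and-white mask (bounds assumed).
mask = hsv(:,:,1) < 0.08 & hsv(:,:,2) > 0.5 & hsv(:,:,3) > 0.3;

% Step 3: keep only the largest connected zone (the cone).
mask = bwareafilt(mask, 1);

% Step 4: the left-most pixel of the zone is taken as the corner.
[rows, cols] = find(mask);
[uCorner, i] = min(cols);
vCorner = rows(i);

stats  = regionprops(mask, 'Centroid');
center = stats(1).Centroid;          % pixel coordinates of the cone center
```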
At this point the pixel location of the cone, the height of the camera off the ground, and the focal length are all known, so the global coordinates of the cone can be calculated using the pinhole camera model.
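For a level, forward-looking camera the back-projection reduces to similar triangles, as in the sketch below; every number here is an assumed placeholder.

```matlab
f  = 600;            % focal length in pixels (assumed)
h  = 0.10;           % camera height above the ground [m] (assumed)
u0 = 320;  v0 = 240; % principal point at the image center (assumed)

u = 410;  v = 300;   % pixel of the cone's base from the vision script

% Similar triangles: a ground point v-v0 pixels below the horizon line
% sits at depth Z, and its lateral offset scales the same way.
Z = f*h/(v - v0);    % forward distance to the cone [m]
X = (u - u0)*Z/f;    % lateral offset of the cone [m]
fprintf('cone at %.2f m ahead, %.2f m to the side\n', Z, X);
```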
Unfortunately, due to course time constraints, the computer vision was never integrated into the vehicle controls.