(Note: the hyperlinks only work on the homepage of this GitHub repo; if you are viewing the “github.io” page, you can get back by clicking “View the Project on GitHub” at the top.)
Anaconda is used for managing my dependencies.
If you want to run the demo, you can simply run:
If you want to use the code to calibrate your own camera and test your own video, save the images into camera_cal and calibrate them with:
The code for this step is contained in calibration.py. It starts by preparing “object points”, which will be the (x, y, z) coordinates of the chessboard corners in the world. Here I am assuming the chessboard is fixed on the (x, y) plane at z=0, such that the object points are the same for each calibration image. Thus,
objp is just a replicated array of coordinates, and
objpoints will be appended with a copy of it every time I successfully detect all chessboard corners in a test image.
imgpoints will be appended with the (x, y) pixel position of each of the corners in the image plane with each successful chessboard detection.
I then used the output
objpoints and imgpoints to compute the camera calibration and distortion coefficients using the
cv2.calibrateCamera() function. I applied this distortion correction to the test image using the
cv2.undistort() function and obtained this result:
lane.py contains every step described below.
In this step,
cv2.undistort is used with the previous calibration results (see lane.py line 580). One example on a test image is as follows:
I used a combination of color and gradient thresholds to generate a binary image (see
lane.py line 187).
The image is first converted to HLS space, and I use the S channel to filter the image, which is more robust for locating yellow and white lanes under different lighting conditions.
Then a Sobel filter (along the x direction) and a gradient-direction filter are used to filter out most of the horizontal lines. Finally, to handle cases where more than two candidate lanes are found,
I assign twice the weight to the S channel filter compared to the gradient filter. As a result, the yellow lane should be more visible than the edge of the road. Here’s an example of my output for this step:
The code for my perspective transform is in lane.py (line 214). Firstly, I choose a
straight-line driving image and select 4 source points which form a trapezoid along the two lane lines; then I define another 4 destination points,
so that I could get a perspective transform with the
cv2.getPerspectiveTransform function (see the two images below). The hardcoded source points and destination points are
listed as follows.
This resulted in the following source and destination points:
| Source    | Destination |
|:---------:|:-----------:|
| 194, 719  | 290, 719    |
| 1117, 719 | 990, 719    |
| 705, 461  | 990, 0      |
| 575, 461  | 290, 0      |
I verified that my perspective transform was working as expected by drawing the
dst points onto a test image and its warped counterpart to verify that the lines appear parallel in the warped image.
As you may have noticed, the warped image is not in the right aspect ratio (the lanes look flattened), but this has no effect on the final augmented image as long as we use the matching inverse transform to warp back. If we want to see the image at a more realistic scale, we could select the destination points according to the actual ratio of lane width to length within the trapezoid. One example is as follows:
After applying calibration, thresholding, and a perspective transform to a road image, we should have a binary image where the lane lines stand out clearly. However, we still need to decide explicitly which pixels are part of the lines, and which of those belong to the left line versus the right line.
If the input is a single image or the first frame of the video, a function named
detector (line 548 in lane.py) is used. It processes the image as follows
I first take a histogram along all the columns in the lower half of the image. The two most prominent peaks in this histogram are good indicators of the x-position of the base of the lane lines, and I can use them as starting points for where to search for the lines. As a result, I get a figure similar to the following:
From those points, I use sliding windows placed around the line centers to find and follow the lines, starting at the bottom and searching up to the top of the frame. The idea is visualized in the following figure.
Finally, I fit my lane lines with a 2nd order polynomial using np.polyfit, as in lane.py line 293:
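The whole search can be sketched as below. This is an illustration of the idea, not a line-for-line copy of lane.py; the window count, margin, and minimum-pixel parameters are typical defaults I assume here:

```python
import numpy as np

def fit_lane_lines(binary_warped, nwindows=9, margin=100, minpix=50):
    """Sliding-window lane search on a warped binary image.

    Returns [left_fit, right_fit], each the coefficients of x = A*y^2 + B*y + C.
    """
    h, w = binary_warped.shape
    # Histogram of the lower half locates the base of each lane line.
    histogram = np.sum(binary_warped[h // 2:, :], axis=0)
    mid = w // 2
    bases = [np.argmax(histogram[:mid]), np.argmax(histogram[mid:]) + mid]

    nonzeroy, nonzerox = binary_warped.nonzero()
    win_h = h // nwindows
    fits = []
    for base in bases:
        x_cur, inds = base, []
        for win in range(nwindows):  # walk windows from bottom to top
            y_lo, y_hi = h - (win + 1) * win_h, h - win * win_h
            good = np.where((nonzeroy >= y_lo) & (nonzeroy < y_hi) &
                            (nonzerox >= x_cur - margin) &
                            (nonzerox < x_cur + margin))[0]
            inds.append(good)
            if len(good) > minpix:  # re-center the next window on the pixels
                x_cur = int(nonzerox[good].mean())
        inds = np.concatenate(inds)
        # Fit x as a 2nd order polynomial in y through the collected pixels.
        fits.append(np.polyfit(nonzeroy[inds], nonzerox[inds], 2))
    return fits
```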
For subsequent frames of the video, another function in lane.py is used, which searches around the fit from the previous frame.
The radius of curvature of the curve at a particular point is defined as the radius of the approximating circle. You can see the reference here.
A function in lane.py is implemented for this purpose.
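A minimal sketch of the curvature computation is below. The pixel-to-meter scales are typical values assumed for a 1280x720 bird's-eye view (30 m per 720 rows, 3.7 m per ~700 columns), not necessarily the ones used in lane.py:

```python
import numpy as np

# Assumed pixel-to-meter scales for the warped image.
YM_PER_PX = 30 / 720   # meters per pixel along y
XM_PER_PX = 3.7 / 700  # meters per pixel along x

def curvature_radius(fit_px, y_eval_px):
    """Radius of curvature in meters at y_eval_px, for a pixel-space fit
    x = A*y^2 + B*y + C. Uses R = (1 + (2Ay + B)^2)^(3/2) / |2A|."""
    A, B, _ = fit_px
    # Rescale the pixel-space coefficients into meter space.
    A_m = A * XM_PER_PX / YM_PER_PX ** 2
    B_m = B * XM_PER_PX / YM_PER_PX
    y_m = y_eval_px * YM_PER_PX
    return (1 + (2 * A_m * y_m + B_m) ** 2) ** 1.5 / abs(2 * A_m)
```

For a curve that is locally a circle of radius R, the parabola approximation near the vertex is x = y^2 / (2R), and the formula recovers R exactly there.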
We assume the camera is mounted roughly at the center of the front window, and we have computed the positions of the left and right lanes. So we can take the bottom positions of the left and right lanes and compare their midpoint with the middle of the image frame. We can also use prior knowledge of the width between the left and right lanes (e.g. 3.7 m in the US) to estimate the distance in real scale.
A function in lane.py is implemented for this purpose.
Finally, we warp the detected lane region back onto the original input frame. If the vehicle is driving too close to a lane edge, we
warn the driver by changing the overlay color from green to red (see the corresponding function in lane.py).
In order to be useful, driver assistance systems should achieve very low error rates. For a warning system like LDW, the false alarm rate should be very low, since frequent false alarms irritate drivers and lead to system rejection. The exact false alarm rate acceptable to drivers is still a subject of research. Also, road and lane detection must cope with a wide diversity of scenarios, and lane and road appearance can change dramatically, as the following figure shows.
The current implementation is purely based on a monocular camera with computer vision techniques; it involves a hard-coded perspective transform and many manually tuned parameters for lane-line filtering. This may be sufficient for good-condition driving (e.g. highways in daytime with clear lane marks), but it may fail in various hard scenarios. In the future, we could consider two approaches:
Reference:  Recent Progress in Road and Lane Detection - A survey