This project uses advanced computer-vision techniques to solve the lane-detection problem.
The steps of this project are the following:
Compute the camera calibration matrix and distortion coefficients given a set of chessboard images. The images for camera calibration are stored in the folder called
`camera_cal`. I then save the camera calibration matrix (`mtx`) and distortion coefficients (`dist`) to a pickle file:
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size,None,None)
(Undistortion results on the test images: test_1, test_2, test_3, test_4, test_5, test_6, solidWhiteRight, solidYellowLeft.)
Use color transforms, gradients, etc., to create a thresholded binary image.
The following filters are combined:
* X-axis gradient: `gradx = abs_sobel_thresh(indist, orient='x', sobel_kernel=ksize, thresh=(20, 250))`
* Y-axis gradient: `grady = abs_sobel_thresh(indist, orient='y', sobel_kernel=ksize, thresh=(50, 250))`
* Gradient magnitude filter: `mag_binary = utils.mag_thresh(indist, sobel_kernel=ksize, mag_thresh=(40, 250))`
* Gradient direction filter: `dir_binary = utils.dir_threshold(indist, sobel_kernel=ksize, thresh=(np.pi/6, np.pi/2))`
* HLS S-channel filter: `s_binary = utils.hls_select(indist, thresh=(100, 255))`
* Combine the filters: `combined[(s_binary == 255) | ((gradx == 1) & (grady == 1)) | ((mag_binary == 1) & (dir_binary == 1))] = 1`
Apply a perspective transform to rectify the binary image ("birds-eye view"). Here I use a masked region of interest, Canny edge detection and a Hough-line transform to find the corresponding points, then calculate the perspective transform matrix and its inverse.
`M = cv2.getPerspectiveTransform(src, dst)`
`Minv = cv2.getPerspectiveTransform(dst, src)`
(Figures: masked image area within the region of interest; src points and corresponding dst points.)
Detect lane pixels and fit to find lane boundary.
* First, compute a column histogram of the binary image to locate the start point of each line. (Figure: histogram and start points.)
* Secondly, use the histogram peaks to search for lane points row by row. (Figure: search-point results.)
* Thirdly, fit a polynomial to those pixel positions. (Figure: polynomial-fit results.)
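The steps above can be sketched on a synthetic warped binary image. The window search here is deliberately simplified to a single fixed margin around each histogram peak, rather than the full sliding-window loop:

```python
import numpy as np

# Synthetic birds-eye binary image with two straight lane lines
binary_warped = np.zeros((720, 1280), np.uint8)
binary_warped[:, 300:310] = 1   # left line
binary_warped[:, 950:960] = 1   # right line

# Step 1: start points from the column histogram of the lower half
histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
midpoint = histogram.shape[0] // 2
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint

# Step 2 (simplified): collect lane pixels within +/- margin of each base
margin = 100
nonzeroy, nonzerox = binary_warped.nonzero()
left_mask = np.abs(nonzerox - leftx_base) < margin
right_mask = np.abs(nonzerox - rightx_base) < margin

# Step 3: fit a second-order polynomial x = f(y) to each line's pixels
left_fit = np.polyfit(nonzeroy[left_mask], nonzerox[left_mask], 2)
right_fit = np.polyfit(nonzeroy[right_mask], nonzerox[right_mask], 2)
```

For these vertical lines the fits are essentially constants near the line centres, which is a quick sanity check that the fitting convention (x as a function of y) is the right way round for near-vertical lanes.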
Determine curvature of the lane and vehicle position with respect to center.
Fit new polynomials in world space (meters):
`left_fit_cr = np.polyfit(yvals*ym_per_pix, left_fitx*xm_per_pix, 2)`
`right_fit_cr = np.polyfit(yvals*ym_per_pix, right_fitx*xm_per_pix, 2)`
* Curvature:
`left_curverad = ((1 + (2*left_fit_cr[0]*y_eval + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])`
`right_curverad = ((1 + (2*right_fit_cr[0]*y_eval + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])`
curvature: 1961.09892301 m
* Center distance:
`car_cen = (start_left + start_right)/2.`
`center_distance = abs(warped.shape[1]/2. - car_cen)*xm_per_pix`
center_distance: 0.130588235294 m
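A runnable sketch of the curvature and offset computation on invented fit data. The meters-per-pixel constants are the commonly assumed values for this image geometry (30 m per 720 rows, 3.7 m per 700 columns), and the quadratic lane shapes are made up for the demo:

```python
import numpy as np

ym_per_pix = 30 / 720    # assumed meters per pixel, y direction
xm_per_pix = 3.7 / 700   # assumed meters per pixel, x direction

# Invented fitted pixel positions: a gently curving left/right lane pair
yvals = np.linspace(0, 719, 720)
left_fitx = 300 + 0.0002 * (720 - yvals) ** 2
right_fitx = 1000 + 0.0002 * (720 - yvals) ** 2

# Refit in world space (meters) before evaluating curvature
left_fit_cr = np.polyfit(yvals * ym_per_pix, left_fitx * xm_per_pix, 2)
right_fit_cr = np.polyfit(yvals * ym_per_pix, right_fitx * xm_per_pix, 2)

# Radius of curvature R = (1 + (2Ay + B)^2)^1.5 / |2A| at the image bottom
y_eval = np.max(yvals) * ym_per_pix
left_curverad = ((1 + (2 * left_fit_cr[0] * y_eval + left_fit_cr[1]) ** 2) ** 1.5
                 / np.absolute(2 * left_fit_cr[0]))
right_curverad = ((1 + (2 * right_fit_cr[0] * y_eval + right_fit_cr[1]) ** 2) ** 1.5
                  / np.absolute(2 * right_fit_cr[0]))

# Vehicle offset from lane centre, assuming the camera sits at image centre
start_left, start_right = left_fitx[-1], right_fitx[-1]
car_cen = (start_left + start_right) / 2.0
center_distance = abs(1280 / 2.0 - car_cen) * xm_per_pix
```

The refit in meters matters: applying the curvature formula to pixel-space coefficients would mix units and give a meaningless radius.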
Warp the detected lane boundaries back onto the original image using the pre-calculated inverse matrix:
`newwarp = cv2.warpPerspective(color_warp, Minv, (image.shape[1], image.shape[0]))`
Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
(Videos: standard output video; detailed description video.)
- In the first few frames of the video, the algorithm performs a search without prior assumptions about where the lines are. The start points are calculated over the whole image, followed by a row-by-row search with a large window. If points are missing in a row, the point from the previous row is used.
- Once a high-confidence detection is achieved (in my case, the first 10 frames, plus any frame whose line lies within the allowed threshold, are treated as robust), these frames are saved and updated in the lane class.
- The high-confidence information is then used for line detection in later frames:
- For the start point, I use the start point from the previous frame to narrow the window size. (Figures: start-point search in the first frames vs. later frames.) This not only improves accuracy (removing noisy information) but also increases search speed (less calculation). However, it also raises the probability of missing the start point, so when data are missing or not believable (the distance to the previous point is too large), I fall back to the previous frame's start point.
- For the row-by-row point search, I also use a smaller window; the reasons, advantages and risks are the same as above. So I check the points strictly and use prior information for inference: when points are missing, I predict them with the best fit computed from the robust frames; I then compare each detected or predicted point with the corresponding point from the previous frame and previous row, and if that check fails I fall back to the point from the previous frame or row. (Figure: row-point search in later frames.)
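The fallback logic can be condensed into a small tracking class. This `Line` class, its field names, and the jump threshold are hypothetical, a sketch of the averaging-and-sanity-check idea rather than the project's actual class:

```python
import numpy as np

class Line:
    """Keeps recent high-confidence polynomial fits and rejects outliers."""

    def __init__(self, max_history=10, max_jump=50):
        self.recent_fits = []        # fits from recent robust frames
        self.best_fit = None         # running average of recent_fits
        self.max_history = max_history
        self.max_jump = max_jump     # allowed base-point shift in pixels

    def update(self, new_fit, y_base=719):
        if new_fit is None:
            return self.best_fit     # detection failed: reuse previous fit
        if self.best_fit is not None:
            # Sanity check: the lane base must not jump too far per frame
            jump = abs(np.polyval(new_fit, y_base)
                       - np.polyval(self.best_fit, y_base))
            if jump > self.max_jump:
                return self.best_fit # not believable: keep previous fit
        self.recent_fits.append(new_fit)
        self.recent_fits = self.recent_fits[-self.max_history:]
        self.best_fit = np.mean(self.recent_fits, axis=0)
        return self.best_fit

line = Line()
fit1 = line.update(np.array([0.0, 0.0, 300.0]))   # accepted
fit2 = line.update(np.array([0.0, 0.0, 900.0]))   # implausible jump: rejected
```

Averaging over the last few robust fits smooths the overlay between frames, while the jump check is what lets a single bad frame be ignored instead of corrupting the average.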