When we drive, we use our eyes to decide where to go. The lines on the road that show us where the lanes are act as our constant reference for where to steer the vehicle. Naturally, one of the first things we would like to do in developing a self-driving car is to automatically detect lane lines using an algorithm.
In this project you will detect lane lines in images using Python and OpenCV. OpenCV stands for "Open Source Computer Vision"; it is a library that provides many useful tools for analyzing images.
To complete the project, two files will be submitted: a file containing the project code and a file containing a brief writeup explaining your solution. We have included template files for both the code and the writeup. The code file is called P1.ipynb and the writeup template is writeup_template.md.
To meet specifications in the project, take a look at the requirements in the project rubric.
If you have already installed the CarND Term1 Starter Kit you should be good to go! If not, you should install the starter kit to get started on this project.
Step 1: Set up the CarND Term1 Starter Kit if you haven't already.
Step 2: Open the code in a Jupyter Notebook
You will complete the project code in a Jupyter notebook. If you are unfamiliar with Jupyter Notebooks, check out Cyrille Rossant's Basics of Jupyter Notebook and Python to get started.
Jupyter is an IPython-based notebook environment where you can run blocks of code and see the results interactively. All the code for this project is contained in a Jupyter notebook. To start Jupyter in your browser, use a terminal to navigate to your project directory and then run the following command at the terminal prompt (be sure you have activated your Python 3 carnd-term1 environment as described in the CarND Term1 Starter Kit installation instructions!):
> jupyter notebook
A browser window will appear showing the contents of the current directory. Click on the file called "P1.ipynb". Another browser window will appear displaying the notebook. Follow the instructions in the notebook to complete the project.
- Detect lane lines in images
- Detect lane lines in videos
- Draw the detected lane lines onto the video output
The following is the test image; my objective is to find the lane lines in the image.
The project contains the following steps (a sketch of how they chain together follows the list):
- Color selection
- Region of interest (ROI) selection and marking the lanes in red
- Grayscaling
- Gaussian smoothing
- Canny edge detection
- Hough transform lane detection
- Drawing the lanes
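For orientation, below is a minimal sketch of how these steps could chain together into a single pipeline. It assumes the helper functions provided in the P1.ipynb template (grayscale, gaussian_blur, canny, region_of_interest, hough_lines, weighted_img), and the numeric parameters shown are illustrative placeholders; the values actually used are discussed step by step in the sections that follow.

```python
import numpy as np

# Sketch only: helper functions are assumed to come from the P1.ipynb template,
# and the numeric parameters are illustrative placeholders.
def process_image(image):
    # 1. Color selection: keep only near-white pixels
    color_select = np.copy(image)
    color_select[(image[:,:,0] < 200) |
                 (image[:,:,1] < 200) |
                 (image[:,:,2] < 200)] = [0, 0, 0]
    # 2. Region of interest in front of the car (image assumed to be 960x540)
    vertices = np.array([[(0, 539), (440, 330), (520, 330), (939, 539)]], dtype=np.int32)
    roi = region_of_interest(color_select, vertices)
    # 3-4. Grayscale and Gaussian smoothing
    blur_gray = gaussian_blur(grayscale(roi), 3)
    # 5. Canny edge detection (low and high thresholds)
    edges = canny(blur_gray, 50, 150)
    # 6. Hough transform: rho, theta, votes, min line length, max line gap
    line_img = hough_lines(edges, 1, np.pi / 180, 15, 40, 20)
    # 7. Overlay the detected lane lines on the original image
    return weighted_img(line_img, image)
```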
The first step is color selection. The lane lines are white, so we can roughly isolate the lanes by thresholding the R, G, and B values in the color space. The implementation is as follows:
#### First Step: Color selection
# Define color selection criteria (lane lines are close to white)
import numpy as np
import matplotlib.pyplot as plt
import cv2

# `image` is the test image loaded earlier in the notebook (read with matplotlib, so RGB)
color_select = np.copy(image)
red_threshold = 200
green_threshold = 200
blue_threshold = 200
rgb_threshold = [red_threshold, green_threshold, blue_threshold]
# Do a boolean OR with the "|" operator to identify
# pixels below any of the thresholds
thresholds = (image[:,:,0] < rgb_threshold[0]) \
           | (image[:,:,1] < rgb_threshold[1]) \
           | (image[:,:,2] < rgb_threshold[2])
# Black out every pixel that fails the threshold test
color_select[thresholds] = [0,0,0]
plt.imshow(color_select)
plt.show()
# matplotlib uses RGB channel order while cv2.imwrite expects BGR, so convert before saving
cv2.imwrite('test_images_output/color_selection.png', cv2.cvtColor(color_select, cv2.COLOR_RGB2BGR))
After the first step, we get the result shown below. From the result we can tell that the code keeps the white parts of the image, but it also lets through some noise (the white car).
The purpose of the next step is to reduce the computing resources needed: we focus only on the region of interest (ROI).
#### Second Step: ROI selection and marking out the lane in red
# Polygonal region of interest in front of the car (image assumed to be 960x540)
vertices = np.array([[(0,539),(440, 330), (520, 330), (939,539)]], dtype=np.int32)
line_image = np.copy(image)
# Mask out everything outside the polygon using the notebook's region_of_interest helper
roi_image = region_of_interest(color_select, vertices)
plt.imshow(roi_image)
plt.show()
# Convert from RGB to BGR before saving with OpenCV
cv2.imwrite('test_images_output/roi_image.png', cv2.cvtColor(roi_image, cv2.COLOR_RGB2BGR))
After this step, we can see that most of the noise has been filtered out, because we keep only the ROI and fill everything else with 0.
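The region_of_interest helper used above comes from the notebook template. A minimal sketch of how such a helper can be implemented, assuming nothing beyond OpenCV and NumPy, looks like this:

```python
import numpy as np
import cv2

def region_of_interest(img, vertices):
    """Keep only the part of the image inside the polygon defined by vertices;
    everything outside the polygon is set to black."""
    mask = np.zeros_like(img)
    # White fill color with as many channels as the input image
    if len(img.shape) > 2:
        ignore_mask_color = (255,) * img.shape[2]
    else:
        ignore_mask_color = 255
    # Fill the polygon, then zero out everything outside it
    cv2.fillPoly(mask, vertices, ignore_mask_color)
    return cv2.bitwise_and(img, mask)
```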
#### Third Step: Grayscaling
The purpose of this step is straightforward: Canny edge detection needs to run on a grayscale image.
# Convert the ROI image to a single-channel grayscale image using the notebook's grayscale helper
gray = grayscale(roi_image)
plt.imshow(gray, cmap='gray')
plt.show()
cv2.imwrite('test_images_output/gray.png', gray)
After this step the displayed result does not look very different, but the image has in fact been converted to a single-channel grayscale image.
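The grayscale helper is also part of the notebook template; in essence it is a thin wrapper around cv2.cvtColor (assuming the image was read with matplotlib and is therefore in RGB channel order):

```python
import cv2

def grayscale(img):
    """Convert an RGB image to a single-channel grayscale image."""
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
```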
For more details regarding Gaussian smoothing and Canny edge detection, please see the material from PyImageSearch here.
#### Fourth Step: Gaussian smoothing
# This step may be optional, since the Canny algorithm itself includes a smoothing stage,
# but an explicit Gaussian blur lets us control the kernel size
blur_gray = gaussian_blur(gray, 3)
# cmap='gray' is needed here; without it matplotlib applies its default colormap
# and the single-channel image is displayed in false colors
plt.imshow(blur_gray, cmap='gray')
plt.show()
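For completeness, here is a sketch of what the gaussian_blur helper amounts to: a thin wrapper around cv2.GaussianBlur with a square kernel.

```python
import cv2

def gaussian_blur(img, kernel_size):
    """Apply a Gaussian blur with a kernel_size x kernel_size kernel."""
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
```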