Lane Line Detection Based on OpenCV

Copyright notice: this is the blogger's original article, licensed under the CC 4.0 BY-SA agreement. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/weixin_40279184/article/details/102748068

Lane line detection is an image-processing technique applied to autonomous driving, and it has already made its way into some production cars; automatic lane keeping on highways is one such application.
I recently studied two OpenCV-based lane detection code bases; links first:
A. Udacity lane line detection code
B. Lane line detection based on the Hough transform and Kalman filtering

***A*** processes frames as follows (a minimal sketch of the calibration and birds-eye steps follows this list):

1. Use the provided set of chessboard images to compute the camera calibration matrix and distortion coefficients. This mainly accounts for lens distortion (fisheye-like cameras): the chessboard images yield the calibration matrix, which is then used to undistort the images taken by the camera.
2. Apply gradient thresholds, color thresholds, etc. to obtain a binary image that clearly captures the lane lines.
3. Apply a perspective transform to get a birds-eye view of the binary image. The main purpose of the perspective transform is to locate the starting points of the left and right lane lines: project the pixels of the warped image onto the x-axis, and the columns with the most pixels correspond to the bases of the left and right lane lines.
4. Detect the pixels belonging to the lane lines and use them to fit the lane boundaries.
5. Compute the lane curvature and the vehicle's position relative to the lane center.
6. Annotate the image with the lane area, the lane curvature, and the vehicle position.
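Below is a minimal sketch of steps 1 and 3, not A's exact code: calibrating from chessboard images, undistorting a frame, warping to a birds-eye view, and finding the lane-line bases from the column histogram. The 9x6 corner count, the helper names, and the hand-picked src/dst points are assumptions for illustration; A's repository uses its own parameters.

```python
import cv2
import numpy as np

def calibrate(calib_paths, nx=9, ny=6):
    # one 3D grid of chessboard corners (z = 0) reused for every calibration image
    objp = np.zeros((nx * ny, 3), np.float32)
    objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)
    objpoints, imgpoints, shape = [], [], None
    for path in calib_paths:
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        shape = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
        if found:
            objpoints.append(objp)
            imgpoints.append(corners)
    _, mtx, dist, _, _ = cv2.calibrateCamera(objpoints, imgpoints, shape, None, None)
    return mtx, dist  # camera matrix and distortion coefficients

def undistort(img, mtx, dist):
    return cv2.undistort(img, mtx, dist, None, mtx)  # correct a camera frame with the calibration result

def birds_eye(binary, src, dst):
    # src/dst: np.float32 arrays of four points (road trapezoid -> rectangle)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(binary, M, (binary.shape[1], binary.shape[0]))

def lane_bases(warped_binary):
    # project the lower half of the warped binary image onto the x-axis;
    # the left/right histogram peaks give the starting points of the lane lines
    histogram = np.sum(warped_binary[warped_binary.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    return np.argmax(histogram[:midpoint]), np.argmax(histogram[midpoint:]) + midpoint
```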

The processing details of each step in A, with before/after comparisons, are described very thoroughly in the README.md on GitHub. A's main drawback is that it is not yet real-time, and it is still hard to apply on road sections with many curves.
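As a side note on step 5, the curvature is usually obtained by fitting a second-order polynomial x = Ay^2 + By + C to each lane's pixels in the birds-eye view and plugging it into the standard radius-of-curvature formula. A minimal sketch, with illustrative meters-per-pixel scale factors rather than A's actual values:

```python
import numpy as np

def lane_curvature(ys, xs, y_eval, ym_per_pix=30 / 720, xm_per_pix=3.7 / 700):
    # ys, xs: pixel coordinates of one lane's detected pixels in the birds-eye image.
    # Fit x = A*y^2 + B*y + C in world units (meters); the per-pixel scales are placeholders.
    A, B, _ = np.polyfit(ys * ym_per_pix, xs * xm_per_pix, 2)
    y = y_eval * ym_per_pix
    # radius of curvature R = (1 + (2*A*y + B)^2)^(3/2) / |2*A|, evaluated near the vehicle
    return (1 + (2 * A * y + B) ** 2) ** 1.5 / abs(2 * A)
```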

***B*** detects lanes very well under night-time conditions and runs close to real time.

The steps are as follows:

1. Apply gamma correction to each video frame. The gamma correction here plays a role similar to histogram equalization, raising the contrast of dark night-time frames. The code is as follows:
import cv2
import numpy as np

def gamma_correction_auto(RGBimage, equalizeHist=False):  # 0.35
    originalFile = RGBimage.copy()
    red = RGBimage[:, :, 2]
    green = RGBimage[:, :, 1]
    blue = RGBimage[:, :, 0]
    forLuminance = cv2.cvtColor(originalFile, cv2.COLOR_BGR2YUV)
    Y = forLuminance[:, :, 0]
    totalPix = vidsize[0] * vidsize[1]  # vidsize is a module-level (width, height) of the video frames
    summ = np.sum(Y[:, :])
    Yaverage = np.divide(totalPix, summ)  # reciprocal of the frame's mean luminance
    # Yclipped = np.clip(Yaverage,0,1)
    epsilon = 1.19209e-007
    correct_param = np.divide(-0.3, np.log10([Yaverage + epsilon]))  # gamma exponent derived from the luminance
    correct_param = 0.7 - correct_param
    red = red / 255.0
    red = cv2.pow(red, correct_param)
    red = np.uint8(red * 255)
    if equalizeHist:
        red = cv2.equalizeHist(red)
    green = green / 255.0
    green = cv2.pow(green, correct_param)
    green = np.uint8(green * 255)
    if equalizeHist:
        green = cv2.equalizeHist(green)
    blue = blue / 255.0
    blue = cv2.pow(blue, correct_param)
    blue = np.uint8(blue * 255)
    if equalizeHist:
        blue = cv2.equalizeHist(blue)
    output = cv2.merge((blue, green, red))
    # print(correct_param)
    return output
2. Convert the image to HSV space and pick out the white and yellow regions:

   HSV range for yellow:
       min_val_y = np.array([15, 80, 190])
       max_val_y = np.array([30, 255, 255])
   HSV range for white:
       min_val_w = np.array([0, 0, 195])
       max_val_w = np.array([255, 80, 255])

The code is as follows:

def hsv_filter(image, min_val_y, max_val_y, min_val_w, max_val_w):  # lane markings are only yellow or white, so only these two colors are kept
    """
    A function returning a mask for pixels within min_val - max_val range
    Inputs:
    - image - a BGR image you want to apply function on
    - min_val_y - array of shape (3,) giving minimum HSV values for yellow color
    - max_val_y - array of shape (3,) giving maximum HSV values for yellow color
    - min_val_w - array of shape (3,) giving minimum HSV values for white color
    - max_val_w - array of shape (3,) giving maximum HSV values for white color
    Returns:
    - img_filtered - image of pixels being in given threshold
    """

    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    mask_yellow = cv2.inRange(hsv, min_val_y, max_val_y)  # yellow mask
    mask_white = cv2.inRange(hsv, min_val_w, max_val_w)   # white mask
    mask = cv2.bitwise_or(mask_yellow, mask_white)
    img_filtered = cv2.bitwise_and(image, image, mask=mask)
    return img_filtered
3. Use the Hough transform to detect straight lines. The Hough transform can detect lines and circles (see the references for the underlying theory); its computational cost is fairly low, so real-time performance is good, but road sections with many curves remain a challenge. The code below also filters the detected lines by angle: the Hough transform typically returns many lines, so those whose slope falls outside the expected range are discarded in order to pick out the left and right lane lines. The code is as follows:
import math

def hough_transform(original, gray_img, threshold, discard_horizontal=0.4):
    """
    A function fitting lines that intersect >=threshold white pixels
    Input:
    - original - image we want to draw lines on
    - gray_img - image with white/black pixels, e.g. a result of Canny Edge Detection
    - threshold - if a line intersects more than threshold white pixels, draw it
    - discard_horizontal - smallest absolute slope of a line that we still keep (near-horizontal lines are discarded)
    Return:
    - image_lines - result of applying the function
    - lines_ok - rho and theta
    """
    lines = cv2.HoughLines(gray_img, 0.5, np.pi / 360, threshold)
    image_lines = original
    lines_ok = []  # list of parameters of lines that we want to take into account (not horizontal)
    if lines is not None:
        for i in range(0, len(lines)):
            rho = lines[i][0][0]
            theta = lines[i][0][1]
            # discard horizontal lines
            m = -math.cos(theta) / (math.sin(theta) + 1e-10)  # adding some small value to avoid dividing by 0
            if abs(m) < discard_horizontal:
                continue
            else:
                a = math.cos(theta)
                b = math.sin(theta)
                x0 = a * rho
                y0 = b * rho
                pt1 = (int(x0 + 1000 * (-b)), int(y0 + 1000 * (a)))
                pt2 = (int(x0 - 1000 * (-b)), int(y0 - 1000 * (a)))
                cv2.line(image_lines, pt1, pt2, (0, 0, 255), 2, cv2.LINE_AA)
                lines_ok.append([rho, theta])
    lines_ok = np.array(lines_ok)
    return image_lines, lines_ok
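To see how these pieces fit together, here is a minimal per-frame sketch, not the repository's exact pipeline: gamma correction, HSV color filtering, Canny edge detection, then the Hough step. The Canny thresholds, the Hough threshold, and the `vidsize` value are assumptions for illustration.

```python
import cv2
import numpy as np

vidsize = (1280, 720)  # (width, height) global expected by gamma_correction_auto; adjust to your video

# HSV thresholds from step 2
min_val_y, max_val_y = np.array([15, 80, 190]), np.array([30, 255, 255])
min_val_w, max_val_w = np.array([0, 0, 195]), np.array([255, 80, 255])

def process_frame(frame):
    corrected = gamma_correction_auto(frame)                    # 1. lift the contrast of dark frames
    filtered = hsv_filter(corrected, min_val_y, max_val_y,
                          min_val_w, max_val_w)                 # 2. keep only yellow/white pixels
    gray = cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                            # 3. edge map fed to the Hough transform
    return hough_transform(frame.copy(), edges, threshold=100)  # (annotated frame, kept [rho, theta] lines)
```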
4. After the lines are detected, a Kalman filter is used to track the lane lines so that they do not jitter between frames. The Kalman filter is computationally cheap and widely used for target tracking (particle filters are another option worth looking into). The key point here is how to initialize the Kalman matrices. The Kalman filter boils down to five equations: two for prediction and three for the measurement update; they are listed below, followed by the code.
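For reference, these are the standard discrete Kalman filter equations the post refers to; the dimension comments in the docstring below correspond to them.

Prediction:

$$\hat{x}_k^- = A\,\hat{x}_{k-1}, \qquad P_k^- = A\,P_{k-1}\,A^T + Q$$

Update:

$$K_k = P_k^- H^T\,(H P_k^- H^T + R)^{-1}, \qquad \hat{x}_k = \hat{x}_k^- + K_k\,(z_k - H\,\hat{x}_k^-), \qquad P_k = (I - K_k H)\,P_k^-$$

In this tracker the state $\hat{x}$ is 16-dimensional (each measured coordinate plus its velocity) and the measurement $z_k$ is 8-dimensional (two lines, each described by two endpoints).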
from scipy.linalg import block_diag  # used to build the block-diagonal process noise covariance

class LaneTracker:
    def __init__(self, n_lanes, proc_noise_scale, meas_noise_scale, process_cov_parallel=0, proc_noise_type='white'):
        '''
        In this project: n_lanes=2, proc_noise_scale=0.1, meas_noise_scale=15.

        predict(): prior estimate from step k-1
            x_k^- = A x_{k-1}           # predicted state      (16x1) = (16x16)(16x1)
            P_k^- = A P_{k-1} A^T + Q   # predicted covariance (16x16)

        correct(): posterior estimate combining the measurement z_k with the prediction x_k^-
            K_k = P_k^- H^T (H P_k^- H^T + R)^-1   # Kalman gain          (16x8)
            x_k = x_k^- + K_k (z_k - H x_k^-)      # updated state        (16x1)
            P_k = (I - K_k H) P_k^-                # updated covariance   (16x16)

        x_k^-           prior state estimate at step k
        x_k, x_{k-1}    posterior state estimates
        P_k^-           prior error covariance at step k
        Q               process noise covariance: models the mismatch between the assumed
                        transition model and the real process, and is the key tuning knob
        K_k             Kalman gain
        z_k             measurement vector (8x1 here)
        H               measurement matrix (8x16 here)
        R               measurement noise covariance; a smaller R makes the filter trust
                        the measurements more and converge faster
        '''
        self.n_lanes = n_lanes                 # 2 lane lines
        self.meas_size = 4 * self.n_lanes      # 8: measurement dim, one line = (x1, y1, x2, y2)
        self.state_size = self.meas_size * 2   # 16: state dim, each measured value plus its velocity
        self.contr_size = 0
        self.kf = cv2.KalmanFilter(self.state_size, self.meas_size, self.contr_size)        # (16, 8, 0): state, measurement, control dims
        self.kf.transitionMatrix = np.eye(self.state_size, dtype=np.float32)                # (16,16) transition matrix A, initialized to identity
        self.kf.measurementMatrix = np.zeros((self.meas_size, self.state_size), np.float32) # (8,16) measurement matrix H
        for i in range(self.meas_size):
            self.kf.measurementMatrix[i, i*2] = 1  # measure the position entries, not the velocities
        if proc_noise_type == 'white':  # white-noise model, one 2x2 block per position/velocity pair
            block = np.matrix([[0.25, 0.5],
                               [0.5, 1.]], dtype=np.float32)
            self.kf.processNoiseCov = block_diag(*([block] * self.meas_size)) * proc_noise_scale  # process noise covariance Q
        if proc_noise_type == 'identity':
            self.kf.processNoiseCov = np.eye(self.state_size, dtype=np.float32) * proc_noise_scale  # (16,16)

        for i in range(0, self.meas_size, 2):
            for j in range(1, self.n_lanes):
                self.kf.processNoiseCov[i, i+(j*8)] = process_cov_parallel  # cross-lane coupling, 0 by default
                self.kf.processNoiseCov[i+(j*8), i] = process_cov_parallel
        self.kf.measurementNoiseCov = np.eye(self.meas_size, dtype=np.float32) * meas_noise_scale  # (8,8) measurement noise covariance R
        self.kf.errorCovPre = np.eye(self.state_size, dtype=np.float32)  # (16,16) prior error covariance P_k^-
        self.meas = np.zeros((self.meas_size, 1), np.float32)    # (8,1) measurement vector z_k
        self.state = np.zeros((self.state_size, 1), np.float32)  # (16,1) state vector
        self.first_detected = False
        
    def _update_dt(self, dt):  # refresh the dt entries of the transition matrix A (16,16)
        for i in range(0, self.state_size, 2):
            self.kf.transitionMatrix[i, i+1] = dt  # dt is the time elapsed since the last processed frame

    def _first_detect(self, lanes):  # initialize self.state from the first complete detection
        for l, i in zip(lanes, range(0, self.state_size, 8)):
            self.state[i:i+8:2, 0] = l  # fill the position entries, leave the velocities at 0
        self.kf.statePost = self.state
        self.first_detected = True

    def update(self, lanes):
        if self.first_detected:
            for l, i in zip(lanes, range(0, self.meas_size, 4)):
                if l is not None:
                    self.meas[i:i+4, 0] = l  # write this frame's measurement z_k
            self.kf.correct(self.meas)  # measurement update: computes K_k, x_k and P_k for the next prediction
        else:
            if lanes.count(None) == 0:
                self._first_detect(lanes)

    def predict(self, dt):
        if self.first_detected:
            self._update_dt(dt)
            state = self.kf.predict()  # time update: prior estimate x_k^-
            lanes = []
            for i in range(0, len(state), 8):
                # per-lane state layout is (x1, dx1, y1, dy1, x2, dx2, y2, dy2); keep the positions only
                lanes.append((state[i], state[i+2], state[i+4], state[i+6]))
            return lanes
        else:
            return None
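A minimal usage sketch, hypothetical glue code rather than the repository's own loop: each frame is run through the pipeline from step 3, two lines are picked from the Hough output, and the Kalman tracker's smoothed prediction is drawn. `pick_two_lanes`, the video file name, and the timing logic are assumptions here.

```python
import time
import cv2
import numpy as np

# pick_two_lanes() stands in for whatever logic selects one left and one right line
# (each as [x1, y1, x2, y2]) from lines_ok; entries may be None when a line is missing.
tracker = LaneTracker(n_lanes=2, proc_noise_scale=0.1, meas_noise_scale=15)

cap = cv2.VideoCapture('road.mp4')  # placeholder video file
last_time = time.time()
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    lined, lines_ok = process_frame(frame)        # gamma -> HSV filter -> Canny -> Hough (see above)
    lanes = pick_two_lanes(lines_ok)              # [left, right] measurements for the tracker
    now = time.time()
    predicted = tracker.predict(now - last_time)  # smoothed lanes from the Kalman prediction
    tracker.update(lanes)                         # measurement update with this frame's detections
    last_time = now
    if predicted is not None:
        for lane in predicted:
            x1, y1, x2, y2 = [int(v) for v in np.ravel(lane)]
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)
    cv2.imshow('lanes', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```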

You can run the GitHub code to see the overall results; both the night-time quality and the real-time performance are quite good.
A and B both implement lane line detection mainly with OpenCV. Next, the author plans to study feature-based deep learning approaches to lane detection.
