Person-Following Self-Driving Car Based on Sound and Video: smartcar Round Chassis, Arduino Nano Control

Bill of materials:

Raspberry Pi x1

Arduino Nano x1

L298N motor driver module x1

Left and right wheel motors, 1 each

7.2 V Li-ion battery (motor driver supply)

5 V 2 A power bank (Raspberry Pi supply)

Micro USB phone cable

Mini USB Type-B cable for the Nano

Quick-change connector x1

Dupont wires

Ultrasonic ranging module

USB camera

Round car chassis

Connect the corresponding pins with Dupont wires:

Arduino Nano    L298N motor driver

5       IN1

6       IN2

9       IN3

10      IN4

GND     GND

VIN     +5V

Arduino Nano    Ultrasonic module

5V      VCC

8       Echo (EchoPin, INPUT)

7       Trig (TrigPin, OUTPUT)

GND     GND

L298N motor driver module

With the Nano powered over USB alone, the available output current is relatively weak and may not be enough to drive the motors, so power the board independently through the driver module's 5V port:

Driver module 5V port    Nano

GND     GND

+5V     VIN

Output A    Left motor

OUT1    motor + (red wire)

OUT2    motor - (black wire)

Output B    Right motor

OUT3    motor + (red wire)

OUT4    motor - (black wire)

Main power    7.2V Li-ion battery

+12V    battery +

GND     battery -

2: Driving DC motors
Since this module contains two H-bridge channels, it can drive two motors at the same time; wire it as shown in the figure.
With ENA and ENB enabled,
PWM signals on IN1 and IN2 set the speed and direction of motor 1,
and PWM signals on IN3 and IN4 set the speed and direction of motor 2.
The signals are shown below.

DC motor truth table ("/" means don't care for that channel; the HIGH/LOW levels follow the standard L298N truth table):

Motor       Rotation   IN1    IN2    IN3    IN4    Speed A (ENA)   Speed B (ENB)
M1 (left)   forward    HIGH   LOW    /      /      PWM             /
M1 (left)   reverse    LOW    HIGH   /      /      PWM             /
M1 (left)   stop       LOW    LOW    /      /      /               /
M2 (right)  forward    /      /      HIGH   LOW    /               PWM
M2 (right)  reverse    /      /      LOW    HIGH   /               PWM
M2 (right)  stop       /      /      LOW    LOW    /               /

For speed control, the Arduino can also output a PWM waveform instead of a constant 5V level to regulate motor speed (not used in this project yet).
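If you do add speed control later, the digitalWrite calls become analogWrite with an 8-bit duty value (0 to 255); mapping a speed percentage to that range is simple arithmetic. A quick Python sketch of the mapping (the function name is illustrative):

```python
def percent_to_duty(percent):
    """Map a 0-100 speed percentage to Arduino's 8-bit analogWrite range (0-255)."""
    clamped = max(0.0, min(100.0, percent))
    return int(round(clamped / 100.0 * 255))

# e.g. percent_to_duty(100) gives full duty, 255
```

On the Arduino side the same value would simply be passed as analogWrite(pin, duty) on a PWM-capable pin.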

Code:

(This build uses a round chassis; pins 5 and 6 control the left motor's forward/reverse, and pins 9 and 10 control the right motor's forward/reverse.)

Pin   Forward   Reverse   Turn left   Turn right   Left only   Right only

5     HIGH      LOW       LOW         HIGH         LOW         HIGH

6     LOW       HIGH      HIGH       LOW          LOW         LOW

9     HIGH      LOW       HIGH       LOW          HIGH        LOW

10    LOW       HIGH      LOW        HIGH         LOW         LOW
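These motions are triggered from the Raspberry Pi by single-character serial commands, which the Nano sketch dispatches in a switch statement. A minimal Python sketch of that command map (the action names are illustrative):

```python
# Single-character commands the Pi writes to the Nano over serial.
# '0' has no explicit case in the sketch, so it falls through to the
# default branch, which stops the car.
COMMANDS = {
    '1': 'forward',
    '2': 'back',
    '3': 'turn_left',
    '4': 'turn_right',
    '7': 'only_left',
    '8': 'only_right',
    '9': 'stop',
    '6': 'measure',
}

def describe(cmd):
    """Return the action for a command byte; unknown bytes stop the car."""
    return COMMANDS.get(cmd, 'stop')
```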

Arduino Nano code:



int led = 13;

const int TrigPin = 7; // trigger pin: sends the ultrasonic pulse
const int EchoPin = 8; // echo pin: receives the reflected pulse
float cm;              // the measured distance is fractional, hence float

void setup() 
{
    Serial.begin(9600);
    pinMode(led, OUTPUT); 
    pinMode(9,OUTPUT);  // right motor +
    pinMode(10,OUTPUT); // right motor -
    pinMode(5,OUTPUT);  // left motor +
    pinMode(6,OUTPUT);  // left motor -

    pinMode(TrigPin, OUTPUT); 
    pinMode(EchoPin, INPUT);     
    
}
void loop()
{
    delay(100);               // wait for a second
    
    if (Serial.available()>0)
    {
      char cmd = Serial.read();
      Serial.print(cmd);   // echo the received command for debugging
      Serial.println();
      switch (cmd)
      {
        case '1':
            Forward();
            break;
        case '2':
            Back();
            break;
        case '3':
            Turn_left();
            break;
        case '4':
            Turn_right();
            break;
        case '7':
            Only_left();
            break;
        case '8':
            Only_right();
            break;
        case '9':
            Stop();
            break;
                         
        case '6':
            Measure();
            break;
        default:            // unknown bytes (including '0' from the Pi) stop the car
            Stop();
      }
    }
    //Measure();
    // Continuous ranging loop; keep it disabled while testing. case '6' performs a single measurement on demand.
}
void Forward()
{
    digitalWrite(9,HIGH);
    digitalWrite(10,LOW);
    digitalWrite(5,HIGH);
    digitalWrite(6,LOW);
}
void Back()
{
    digitalWrite(9,LOW);
    digitalWrite(10,HIGH);
    digitalWrite(5,LOW);
    digitalWrite(6,HIGH);
}
void Turn_right()
{
    digitalWrite(9,LOW);
    digitalWrite(10,HIGH);
    digitalWrite(5,HIGH);
    digitalWrite(6,LOW);
}
void Turn_left()
{
    digitalWrite(9,HIGH);
    digitalWrite(10,LOW);
    digitalWrite(5,LOW);
    digitalWrite(6,HIGH);
}

void Stop()
{
    digitalWrite(9,LOW);
    digitalWrite(10,LOW);
    digitalWrite(5,LOW);
    digitalWrite(6,LOW);
}

void Only_left()
{
    digitalWrite(9,HIGH);
    digitalWrite(10,LOW);
    digitalWrite(5,LOW);
    digitalWrite(6,LOW);
}

void Only_right()
{
    digitalWrite(9,LOW);
    digitalWrite(10,LOW);
    digitalWrite(5,HIGH);
    digitalWrite(6,LOW);
}

void Measure()
{

    digitalWrite(TrigPin, LOW);  // send a short LOW-HIGH-LOW pulse on TrigPin
    delayMicroseconds(2);        // delayMicroseconds is accurate at these short intervals
    digitalWrite(TrigPin, HIGH); 
    delayMicroseconds(10); 
    digitalWrite(TrigPin, LOW);  // this edge triggers the ultrasonic burst
  
    cm = pulseIn(EchoPin, HIGH) / 58.0; // convert the echo time to cm
    cm = (int(cm * 100.0)) / 100.0;     // keep two decimal places
    Serial.print("Distance:"); 
    Serial.print(cm); 
    Serial.println(";"); // ';' terminates the value so the Pi-side parser (split on ':' and ';') can extract it
    delay(50); 
}
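The division by 58 in Measure() comes from the speed of sound: at roughly 343 m/s, sound covers about 0.0343 cm per microsecond, and the echo time is a round trip, so distance is t_us * 0.0343 / 2, or about t_us / 58.3. A quick Python check of that approximation:

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at room temperature

def echo_us_to_cm(duration_us):
    """Convert an HC-SR04-style round-trip echo time (microseconds) to cm."""
    return duration_us * SPEED_OF_SOUND_CM_PER_US / 2.0

# 580 us of echo time is about 10 cm, in close agreement with 580 / 58
```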


Open the Arduino IDE and connect the Nano to the PC over USB;

Select the matching board and port;

Once it verifies without errors, click Upload;

Eject the device, power down, and wire everything up;

Physical wiring:

Test:

Connect the Nano to the Raspberry Pi over USB and boot the Pi;

Connect to the Pi over VNC and start the self-driving program:

cd /home/pi/Desktop/smart_car-master 
python smart_raspberry.py 

Ctrl+Z suspends the program; kill %1 terminates it;

If the LEDs are still lit, run

python pixels.py

then press Ctrl+C and the LEDs will go out;

Test:

The image provided by the original tutorial does not work on a Raspberry Pi 3B+; flash the image provided by ReSpeaker, install the necessary services, and the program runs normally:

Image used in this build:

Download link:

https://v2.fangcloud.com/share/7395fd138a1cab496fd4792fe5?folder_id=188000207913

Install python-opencv yourself (on Raspbian this is typically sudo apt-get install python-opencv);

Only two changes were made to the original source code for this build:

Importing Image directly fails; change it to import PIL.Image and it imports normally;

The Nano attaches over USB, so select USB0 (/dev/ttyUSB0);

Adjust these to match your own hardware;
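serial_thread in the script below expects distance lines of the form Distance:&lt;value&gt;; from the Nano and extracts the number by splitting on ':' and ';'. The same parsing logic, pulled out as a standalone function for clarity (the function name is illustrative):

```python
def parse_distance(line):
    """Extract the distance (cm) from a 'Distance:<value>;' serial line.

    Returns None for lines that don't match, mirroring how the serial
    thread swallows malformed reads."""
    line = line.split("\r\n")[0]
    if ":" in line and ";" in line:
        value = line.split(":")[1].split(";")[0]
        try:
            return float(value)
        except ValueError:
            return None
    return None
```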

Final code:

#!/usr/bin/env python
import socket
import sys
import threading
import random
import os
import time
import struct
import serial


import cv
import cv2
import PIL.Image, StringIO
import numpy as np


from voice_engine.source import Source
from voice_engine.channel_picker import ChannelPicker
from voice_engine.doa_respeaker_4mic_array import DOA
from pixels import pixels

import datetime



port_serial="/dev/ttyUSB0"
sl = serial.Serial(port_serial,9600)

HOST = "0.0.0.0"
PORT = 9004
SOCK_ADDR = (HOST, PORT)
exit_now = 0
distance = 0

moving = False
last_movement_timestamp = datetime.datetime.now()
doa_valid = True


def exit_signal_handle(sig,stack_frame):
    global exit_now
    print "EXIT sig"
    exit_now = 1

class serial_thread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
    def run(self):
        self.running = True
        while self.running:
            try:
                global distance
                data=sl.readline()
                #print "^^^^^^"
                #print data
                tmp = data.split("\r\n")[0]
                if (":" in tmp ) and (";" in tmp):
                    distance_str = (tmp.split(":")[1]).split(";")[0]
                    distance =  float(distance_str.decode("utf-8"))
                #print str(distance)
                #print "vvvvvv"
            except:
                print sys.exc_info()

    def stop(self):
        self.running = False

class doa_thread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
    def run(self):
        global moving,last_movement_timestamp,doa_valid
        src = Source(rate=16000, channels=4, frames_size=320)
        ch1 = ChannelPicker(channels=4, pick=1)
        doa = DOA(rate=16000)
        src.link(ch1)
        src.link(doa)
        src.recursive_start()



        self.running = True
        while self.running:
            try:
                time.sleep(1)
                current_timestamp = datetime.datetime.now()
                if doa_valid == True and ((current_timestamp - last_movement_timestamp).seconds > 2):
                    position, amplitute = doa.get_direction()
                    if amplitute > 2000:
                        pixels.wakeup(position)
                        print amplitute,position
                        if position > 0  and position < 180:
                            pivot_right()
                            time.sleep(position / 200.0)  # float division; Python 2's integer '/' would sleep 0 s
                            stop()
                        elif position >= 180 and position < 360:
                            pivot_left()
                            position = 360 - position
                            time.sleep(position / 200.0)
                            stop()
                        time.sleep(2)
                    else:
                        pixels.speak()
                else:
                    pixels.think()
            except:
                print sys.exc_info()
         
        src.recursive_stop()

    def stop(self):
        self.running = False

class movement_thread(threading.Thread):
    def __init__(self, movement_type, quantity):
        threading.Thread.__init__(self)
        self.movement_type = movement_type
        self.quantity = quantity
    def run(self):
        global moving, last_movement_timestamp
        
        current_timestamp = datetime.datetime.now()
        if (current_timestamp - last_movement_timestamp).total_seconds() > 0.5:  # .seconds is an int and would miss the 0.5 s threshold
            moving = True
            if self.movement_type == 1:
                forward()
                time.sleep(self.quantity/200)
                stop()
            elif self.movement_type == 2:
                reverse()
                time.sleep(self.quantity/200)
                stop()
            elif self.movement_type == 3:
                pivot_left()
                time.sleep(self.quantity/200)
                stop()
            elif self.movement_type == 4:
                pivot_right()
                time.sleep(self.quantity/200)
                stop()
            moving = False
            last_movement_timestamp = datetime.datetime.now()
            
    




def forward():
    string="1"
    sl.write(string)

def reverse():
    string="2"
    sl.write(string)

def pivot_left():
    string="3"
    sl.write(string)

def pivot_right():
    string="4"
    sl.write(string)

def measure():
    string="6"
    sl.write(string)

def stop():
    string="0"   # '0' has no explicit case in the Nano sketch; it hits the default branch, which stops
    sl.write(string)



def main():
    global moving,last_movement_timestamp,doa_valid,distance
    doa_th = doa_thread()
    doa_th.start()
    ser_th = serial_thread()
    ser_th.start()


    cap=cv2.VideoCapture(0)
    ret=cap.set(3,320)
    ret=cap.set(4,240)
    #resize = 2

    avg = None

    while exit_now == 0:
        ret, frame = cap.read()
        frame = cv2.flip(frame,-1)


        hsv = cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)
        lower_blue = np.array([100,50,50])
        upper_blue = np.array([140,255,255])
        mask = cv2.inRange(hsv,lower_blue,upper_blue)
        
        if avg is None:
            avg = mask.copy().astype("float")
            continue

        cv2.accumulateWeighted(mask,avg,0.5)
        
        thresh = cv2.threshold(mask,25,255,cv2.THRESH_BINARY)[1]
        kernel = np.ones((4,4),np.uint8)
        thresh = cv2.erode(thresh,kernel, iterations = 1)
        kernel = np.ones((13,13),np.uint8)
        thresh = cv2.dilate(thresh, kernel, iterations = 1)
        #cv2.imshow('thre',thresh)
        (cnts,_) = cv2.findContours(thresh.copy(),cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)

        max_area = 0
        rect_max = (0,0,0,0)
        for c in cnts:
            if cv2.contourArea(c) < 3000:
                continue
            
            if cv2.contourArea(c) > max_area:
                max_area = cv2.contourArea(c)
                rect_max = cv2.boundingRect(c)

        



        
        #gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        #gray = cv2.equalizeHist(gray)

        if max_area !=0:
            doa_valid = False
            (x,y,w,h) = rect_max
            #cv2.rectangle(frame,(x,y),(x+w,y+h),(0,0,255),2)
            x1 = x
            y1 = y
            x2 = x+w
            y2 = y+h

            

            x_mid = float((x1+x2)/2)
            y_mid = float((y1+y2)/2)

            print x_mid,y_mid,x1,x2
            if x_mid > (160 + 30) and np.abs(x2-x1) < 150 :
                mt = movement_thread(4,(np.abs(x_mid-160))/10)
                mt.start()       
            elif x_mid < (160 - 30) and np.abs(x2-x1) < 150 :
                mt = movement_thread(3,(np.abs(x_mid-160))/10)
                mt.start()
            else:
                print "DB for cam"
                #measure()

                print "first:",str(distance)
                
                #if distance > 450:
                    #print "not detected by sonar"
                    #mt = movement_thread(1,np.abs(80))
                    #mt.start()
                #else:
                if distance < 450:
                    #print "target detected by sonar:",str(distance)                    
                    if distance > 50:
                        print "too far, move forward"
                        mt = movement_thread(1,np.abs(distance-50)*1)
                        mt.start()
                    elif distance < 30:
                        print "too close, move backward"
                        mt = movement_thread(2,np.abs(distance-50)*1)
                        mt.start()
                    else:
                        print "DB for sonar"
                
                    
              
                

        else:
            doa_valid = True
         
        
        
    doa_th.stop()
    doa_th.join()
    ser_th.stop()
    ser_th.join()


if __name__ == "__main__":
    main()
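The camera-plus-sonar logic in main() is essentially a deadband controller: pivot until the target's horizontal center sits within 30 px of the 320-px frame's midline, then use the sonar to hold a 30 to 50 cm gap. A pure-Python sketch of that decision rule, using the thresholds from the code above (the function name is illustrative):

```python
def follow_action(x_mid, box_width, distance_cm,
                  frame_mid=160, deadband=30, near=30, far=50, sonar_max=450):
    """Decide the next movement from the target's position and sonar range."""
    if x_mid > frame_mid + deadband and box_width < 150:
        return 'pivot_right'
    if x_mid < frame_mid - deadband and box_width < 150:
        return 'pivot_left'
    if distance_cm < sonar_max:          # sonar sees something plausible
        if distance_cm > far:
            return 'forward'             # too far: close the gap
        if distance_cm < near:
            return 'reverse'             # too close: back off
    return 'hold'                        # centered and in range
```

The box_width check keeps the car from pivoting when the target is so close that its bounding box fills most of the frame.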


Reference links:

Original tutorial; its hardware-wiring section is not very detailed, and this tutorial fills in those gaps:

https://blog.csdn.net/shukebeta008?t=1

Person-Following Self-Driving Car Based on Sound and Video: smart_car voice/video control environment

https://blog.csdn.net/jacka654321/article/details/82493083

Person-Following Self-Driving Car Based on Sound and Video: smart_car (PC control)

https://blog.csdn.net/jacka654321/article/details/81951470

Person-Following Self-Driving Car Based on Sound and Video: smart_car PC-side Python environment setup

https://blog.csdn.net/jacka654321/article/details/82050298


Reposted from blog.csdn.net/jacka654321/article/details/82770365