Multi-Vehicle Following with Ackermann Cars Based on ROS and OpenCV in a Gazebo Environment
1. Create Resources
Before starting the experiment, you need to create the related lab resources.
- On the lab page, click Create Resources.
- (Optional) In the left-side navigation pane of the lab page, click Cloud Product Resources to view information about the resources used in this lab (for example, the IP address and sub-user information).
Note: Resource creation takes about 3 to 5 minutes (the time varies by resource type; resources such as ACK take longer to provision). After the resources are created, you can view their details in the Cloud Product Resources list, such as the sub-user name, sub-user password, AK ID, AK Secret, and the project name.
Billing of the lab environment starts as soon as resource creation begins, so it is recommended to first get familiar with the steps and goals of the experiment and create the resources only when you are ready to start.
Once the resources are created successfully, you can view the resource details and the RAM sub-account information in the resource card on the left.
2. Enter the Docker Image
Open the Firefox browser.
On the RAM user logon page, enter the sub-user password to log on to the account.
After logging on, choose Elastic Compute Service (ECS) from Products and Services on the left.
In the list of ECS instances, use the region selector at the lower left to locate the correct instance and click Connect.
Choose to connect remotely via VNC.
Select Reset VNC Password, set a new password, and then log on.
After completing these settings, you will reach the Ubuntu desktop.
Right-click to open a terminal and run sudo docker run -it --rm -p 6080:80 aliyun_demo to start the aliyun_demo image.
Once the container is running, open a browser and enter 127.0.0.1:6080 in the address bar to access the image's desktop environment.
3. Code Walkthrough
The car-following program uses the multicars scene, which is started by multicars.launch. Open the multicars.launch file in the /root/aliyun_demo/src/raceworld-master/launch folder.
<node name="controller_manager" pkg="controller_manager" type="spawner" respawn="false" output="screen" args="left_rear_wheel_velocity_controller right_rear_wheel_velocity_controller left_front_wheel_velocity_controller right_front_wheel_velocity_controller left_steering_hinge_position_controller right_steering_hinge_position_controller joint_state_controller" />
The launch file loads four groups, deepracer1 through deepracer4, and each group loads several controllers. Within each group, left_rear_wheel_velocity_controller, right_rear_wheel_velocity_controller, left_front_wheel_velocity_controller, right_front_wheel_velocity_controller, left_steering_hinge_position_controller, and right_steering_hinge_position_controller receive ROS messages and control the left/right rear wheel speeds, the left/right front wheel speeds, and the steering angles, respectively. As a quick check, a controller can also be driven directly, as shown in the sketch below.
Below the controller definitions, the launch file also loads a script named servo_commands.py.
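The sketch below publishes a std_msgs/Float64 directly on one controller's command topic to show how these controllers consume plain ROS messages. It assumes the group namespace prefixes each topic (so the left rear wheel of the first car listens on /deepracer1/left_rear_wheel_velocity_controller/command); the node name controller_probe is only an illustrative choice.

#!/usr/bin/env python
import rospy
from std_msgs.msg import Float64

# Assumed topic layout: <group namespace>/<controller name>/command
rospy.init_node('controller_probe', anonymous=True)
pub = rospy.Publisher('/deepracer1/left_rear_wheel_velocity_controller/command',
                      Float64, queue_size=1)
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    pub.publish(Float64(5.0))  # spin the left rear wheel at a fixed velocity
    rate.sleep()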
<node pkg="raceworld_master" type="servo_commands.py" name="servo_commands" output="screen">
Opening servo_commands.py, we can see that the code creates a node named servo_commands, which subscribes to the ackermann_cmd_mux/output topic under its robot namespace (/robot_name is deepracer1 through deepracer4).
def servo_commands():
    rospy.init_node('servo_commands', anonymous=True)
    robot_name = rospy.get_param('~robot_name')
    rospy.Subscriber("ackermann_cmd_mux/output", AckermannDriveStamped, set_throttle_steer)
    # spin() simply keeps python from exiting until this node is stopped
    rospy.spin()
After receiving a message, it applies the appropriate conversions and publishes the speed and steering values to the six controllers described above.
robot_name = "deepracer" def set_throttle_steer(data): global flag_move pub_vel_left_rear_wheel = rospy.Publisher("left_rear_wheel_velocity_controller/command", Float64, queue_size=1) pub_vel_right_rear_wheel = rospy.Publisher("right_rear_wheel_velocity_controller/command", Float64, queue_size=1) pub_vel_left_front_wheel = rospy.Publisher("left_front_wheel_velocity_controller/command", Float64, queue_size=1) pub_vel_right_front_wheel = rospy.Publisher("right_front_wheel_velocity_controller/command", Float64, queue_size=1) pub_pos_left_steering_hinge = rospy.Publisher("left_steering_hinge_position_controller/command", Float64, queue_size=1) pub_pos_right_steering_hinge = rospy.Publisher("right_steering_hinge_position_controller/command", Float64, queue_size=1) throttle = data.drive.speed*28 steer = data.drive.steering_angle pub_vel_left_rear_wheel.publish(throttle) pub_vel_right_rear_wheel.publish(throttle) pub_vel_left_front_wheel.publish(throttle) pub_vel_right_front_wheel.publish(throttle) pub_pos_left_steering_hinge.publish(steer) pub_pos_right_steering_hinge.publish(steer)
The following behavior itself is started by multi_follow1.launch, which runs three programs: lane_master.py, pure_master.cpp, and main_master.cpp.
<launch>
    <node pkg="raceworld_master" type="pure_master" name="pure_master" output="screen"/>
    <node pkg="raceworld_master" type="main_master" name="main_master" />
    <node pkg="raceworld_master" type="lane_master.py" name="lane_master" />
</launch>
main_master.cpp is fairly simple: it creates a publisher on the /status topic, fills in the leader and follower names, and publishes them.
status_publisher = mainNode.advertise<raceworld_master::status>("/status", 1000);
raceworld_master::status a_msg, b_msg;
a_msg.formation = 1;
a_msg.leader = "deepracer1";
a_msg.follower1 = "deepracer2";
a_msg.follower2 = "deepracer3";
a_msg.follower3 = "deepracer4";
a_msg.follower4 = "deepracer5";
b_msg.formation = 2;
b_msg.leader = "deepracer1";
b_msg.follower1 = "deepracer2";
b_msg.follower2 = "deepracer3";
b_msg.follower3 = "deepracer4";
b_msg.follower4 = "deepracer5";
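Any other node can read this formation assignment by subscribing to /status. The following is a minimal Python sketch, assuming the raceworld_master/status message (with the fields used above) is also generated for Python; the node name status_monitor is illustrative.

#!/usr/bin/env python
import rospy
from raceworld_master.msg import status  # custom message: formation, leader, follower1..follower4

def status_cb(msg):
    rospy.loginfo("formation %d: leader=%s followers=%s,%s,%s,%s",
                  msg.formation, msg.leader,
                  msg.follower1, msg.follower2, msg.follower3, msg.follower4)

rospy.init_node('status_monitor', anonymous=True)
rospy.Subscriber('/status', status, status_cb)
rospy.spin()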
Follower behavior is implemented in pure_master.cpp. It first subscribes to /status and to /deepracer1/base_pose_ground_truth through /deepracer4/base_pose_ground_truth to obtain the leader and follower names as well as the pose and heading of the four cars.
ros::Subscriber platoon_status = node.subscribe("/status", 1, statusCallBack);
ros::Subscriber pose1 = node.subscribe("/deepracer1/base_pose_ground_truth", 10, poseCallback1);
ros::Subscriber pose2 = node.subscribe("/deepracer2/base_pose_ground_truth", 10, poseCallback2);
ros::Subscriber pose3 = node.subscribe("/deepracer3/base_pose_ground_truth", 10, poseCallback3);
ros::Subscriber pose4 = node.subscribe("/deepracer4/base_pose_ground_truth", 10, poseCallback4);
void statusCallBack(const raceworld_master::status::ConstPtr & status_msg)
{
    leader_name = status_msg->leader;
    isformation = status_msg->formation;
    follower1 = status_msg->follower1;
    follower2 = status_msg->follower2;
    follower3 = status_msg->follower3;
    follower4 = status_msg->follower4;
}

void poseCallback1(const nav_msgs::Odometry::ConstPtr& msg)
{
    pose_flag = true;
    msg1.child_frame_id = msg->child_frame_id;
    msg1.header = msg->header;
    msg1.pose = msg->pose;
    msg1.twist = msg->twist;
}

void poseCallback2(const nav_msgs::Odometry::ConstPtr& msg)
{
    pose_flag = true;
    msg2.child_frame_id = msg->child_frame_id;
    msg2.header = msg->header;
    msg2.pose = msg->pose;
    msg2.twist = msg->twist;
}

void poseCallback3(const nav_msgs::Odometry::ConstPtr& msg)
{
    pose_flag = true;
    msg3.child_frame_id = msg->child_frame_id;
    msg3.header = msg->header;
    msg3.pose = msg->pose;
    msg3.twist = msg->twist;
}

void poseCallback4(const nav_msgs::Odometry::ConstPtr& msg)
{
    pose_flag = true;
    msg4.child_frame_id = msg->child_frame_id;
    msg4.header = msg->header;
    msg4.pose = msg->pose;
    msg4.twist = msg->twist;
}
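The pose messages carry the vehicle heading only as a quaternion, so the yaw angle needed for the follow computation has to be recovered from it. Below is a minimal Python sketch of that conversion for one car (the C++ code presumably does the equivalent with tf); the node name yaw_monitor is illustrative.

#!/usr/bin/env python
import rospy
from nav_msgs.msg import Odometry
from tf.transformations import euler_from_quaternion

def pose_cb(msg):
    q = msg.pose.pose.orientation
    # euler_from_quaternion returns (roll, pitch, yaw)
    _, _, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
    rospy.loginfo("x=%.2f y=%.2f yaw=%.2f",
                  msg.pose.pose.position.x, msg.pose.pose.position.y, yaw)

rospy.init_node('yaw_monitor', anonymous=True)
rospy.Subscriber('/deepracer1/base_pose_ground_truth', Odometry, pose_cb)
rospy.spin()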
Each follower uses the position of the vehicle ahead of it as its follow target.
double distance1 = sqrt((target_point1.pose.pose.position.x - f1msg.pose.pose.position.x) * (target_point1.pose.pose.position.x - f1msg.pose.pose.position.x)
                      + (target_point1.pose.pose.position.y - f1msg.pose.pose.position.y) * (target_point1.pose.pose.position.y - f1msg.pose.pose.position.y));
if(target_point1.pose.pose.position.x == 0 && target_point1.pose.pose.position.y == 0)
    target_point1 = ldmsg;
if(distance1 < thresh_distance){
    target_point1 = ldmsg_queue1->front();
    ldmsg_queue1->pop();
}
else if(distance1 > 5){
    delete ldmsg_queue1;
    ldmsg_queue1 = new std::queue<nav_msgs::Odometry>;
    target_point1 = ldmsg;
}

double distance2 = sqrt((target_point2.pose.pose.position.x - f2msg.pose.pose.position.x) * (target_point2.pose.pose.position.x - f2msg.pose.pose.position.x)
                      + (target_point2.pose.pose.position.y - f2msg.pose.pose.position.y) * (target_point2.pose.pose.position.y - f2msg.pose.pose.position.y));
if(target_point2.pose.pose.position.x == 0 && target_point2.pose.pose.position.y == 0)
    target_point2 = f1msg;
if(distance2 < thresh_distance){
    target_point2 = ldmsg_queue2->front();
    ldmsg_queue2->pop();
}
else if(distance2 > 5){
    delete ldmsg_queue2;
    ldmsg_queue2 = new std::queue<nav_msgs::Odometry>;
    target_point2 = f1msg;
}

double distance3 = sqrt((target_point3.pose.pose.position.x - f3msg.pose.pose.position.x) * (target_point3.pose.pose.position.x - f3msg.pose.pose.position.x)
                      + (target_point3.pose.pose.position.y - f3msg.pose.pose.position.y) * (target_point3.pose.pose.position.y - f3msg.pose.pose.position.y));
if(target_point3.pose.pose.position.x == 0 && target_point3.pose.pose.position.y == 0)
    target_point3 = f2msg;
if(distance3 < thresh_distance){
    target_point3 = ldmsg_queue3->front();
    ldmsg_queue3->pop();
}
else if(distance3 > 5){
    delete ldmsg_queue3;
    ldmsg_queue3 = new std::queue<nav_msgs::Odometry>;
    target_point3 = f2msg;
}

double distance4 = sqrt((target_point4.pose.pose.position.x - f4msg.pose.pose.position.x) * (target_point4.pose.pose.position.x - f4msg.pose.pose.position.x)
                      + (target_point4.pose.pose.position.y - f4msg.pose.pose.position.y) * (target_point4.pose.pose.position.y - f4msg.pose.pose.position.y));
if(target_point4.pose.pose.position.x == 0 && target_point4.pose.pose.position.y == 0)
    target_point4 = f3msg;
if(distance4 < thresh_distance){
    target_point4 = ldmsg_queue4->front();
    ldmsg_queue4->pop();
}
else if(distance4 > 5){
    delete ldmsg_queue4;
    ldmsg_queue4 = new std::queue<nav_msgs::Odometry>;
    target_point4 = f3msg;
}

follow(target_point1, target_point2, target_point3, target_point4);
After computing and setting the speed and steering angle, an AckermannDriveStamped message is published for each follower.
double r1 = sqrt(pow(ldmsg.pose.pose.position.x - f1msg.pose.pose.position.x, 2) + pow(ldmsg.pose.pose.position.y - f1msg.pose.pose.position.y, 2));
double r2 = sqrt(pow(f1msg.pose.pose.position.x - f2msg.pose.pose.position.x, 2) + pow(f1msg.pose.pose.position.y - f2msg.pose.pose.position.y, 2));
double r3 = sqrt(pow(f2msg.pose.pose.position.x - f3msg.pose.pose.position.x, 2) + pow(f2msg.pose.pose.position.y - f3msg.pose.pose.position.y, 2));
double r4 = sqrt(pow(f3msg.pose.pose.position.x - f4msg.pose.pose.position.x, 2) + pow(f3msg.pose.pose.position.y - f4msg.pose.pose.position.y, 2));
double k = 1.0;
if(r1 > 0.4){
    vel_msg1.drive.speed = 0.3 * r1;
}else{
    vel_msg1.drive.speed = 0.05;
}
if(r2 > 0.4){
    vel_msg2.drive.speed = 0.3 * r2;
}else{
    vel_msg2.drive.speed = 0.05;
}
if(r3 > 0.4){
    vel_msg3.drive.speed = 0.3 * r3;
}else{
    vel_msg3.drive.speed = 0.05;
}
if(r4 > 0.4){
    vel_msg4.drive.speed = 0.3 * r4;
}else{
    vel_msg4.drive.speed = 0.05;
}
vel_msg1.drive.steering_angle = k * theta1;
vel_msg2.drive.steering_angle = k * theta2;
vel_msg3.drive.steering_angle = k * theta3;
vel_msg4.drive.steering_angle = k * theta4;
slave_vel1.publish(vel_msg1);
slave_vel2.publish(vel_msg2);
slave_vel3.publish(vel_msg3);
slave_vel4.publish(vel_msg4);
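The steering terms theta1 through theta4 are computed elsewhere in pure_master.cpp and are not shown here. One common way to obtain such a term is the heading error between a follower and its target point; the Python sketch below only illustrates that idea and is an assumption about the actual implementation.

import math

def heading_error(follower_x, follower_y, follower_yaw, target_x, target_y):
    """Bearing to the target point minus the follower's own heading, wrapped to [-pi, pi]."""
    bearing = math.atan2(target_y - follower_y, target_x - follower_x)
    error = bearing - follower_yaw
    return math.atan2(math.sin(error), math.cos(error))  # normalize the angle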
Lane following for the leader is implemented in lane_master.py. The code first creates a node named lane and subscribes to /deepracer1/camera/zed_left/image_rect_color_left, which carries the images from the left camera mounted on the deepracer1 car in Gazebo. Each received camera image is converted into an OpenCV (cv2) image. The script also creates a publisher for the /deepracer1/ackermann_cmd_mux/output topic.
if __name__ == '__main__':
    try:
        print("exec!")
        rospy.init_node('lane', anonymous=True)
        rospy.Subscriber("/deepracer1/camera/zed_left/image_rect_color_left", Image, camera_callback)
        rospy.spin()
    except rospy.ROSInterruptException:
        pass
pub = rospy.Publisher("/deepracer1/ackermann_cmd_mux/output", AckermannDriveStamped, queue_size=1)
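The conversion from the ROS image message to a cv2 image mentioned above is typically done with cv_bridge. A minimal sketch of how the start of camera_callback might look under that assumption:

import cv2
from cv_bridge import CvBridge

bridge = CvBridge()

def camera_callback(data):
    # Convert the ROS image into a BGR OpenCV array before any processing
    img = bridge.imgmsg_to_cv2(data, desired_encoding='bgr8')
    # ... Gaussian blur, HSV conversion, erosion, thresholding, and the perspective warp follow here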
The captured image then goes through a series of operations: Gaussian blurring, conversion to HSV, erosion, conversion to a binary mask, a perspective (bird's-eye-view) transform, and grayscale thresholding.
# Gaussian blur
blur = cv2.GaussianBlur(img, (1, 1), 0)
# Convert to HSV
hsv_img = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
#cv2.namedWindow("hsv image",0)
#cv2.imshow('hsv image', hsv_img)
kernel = np.ones((3, 3), dtype=np.uint8)
# Erosion
erode_hsv = cv2.erode(hsv_img, kernel, 1)
#cv2.namedWindow("erode image",0)
#cv2.imshow('erode image', erode_hsv)
# Convert to a binary mask: colors inside the range become white, everything else black
inRange_hsv = cv2.inRange(erode_hsv, color_dist['blue']['Lower'], color_dist['blue']['Upper'])
#cv2.namedWindow("binary image",0)
#cv2.imshow('binary image', inRange_hsv)
gray_img = inRange_hsv
# Warp the image with the perspective transform matrix
gray_img = cv2.warpPerspective(gray_img, M, (640, 480), cv2.INTER_LINEAR)
#cv2.namedWindow("gray image",0)
#cv2.imshow('gray image', gray_img)
# Binarize with Otsu's threshold
ret, origin_thr = cv2.threshold(gray_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
#cv2.namedWindow("origin_thr image",0)
#cv2.imshow('origin_thr image', origin_thr)
The original image is shown below.
The image after erosion is shown below.
The image after the perspective transform is shown below.
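Two inputs used in the preprocessing above, the blue color range color_dist and the perspective matrix M, are defined elsewhere in lane_master.py. The sketch below shows how such values are typically constructed; the HSV bounds and corner points here are illustrative placeholders, not the ones used in the experiment.

import cv2
import numpy as np

# Illustrative HSV range for the blue lane markings (the actual bounds live in lane_master.py)
color_dist = {'blue': {'Lower': np.array([100, 80, 46]), 'Upper': np.array([124, 255, 255])}}

# Illustrative bird's-eye-view transform: map a trapezoid on the road to a 640x480 rectangle
src = np.float32([[100, 300], [540, 300], [640, 480], [0, 480]])
dst = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])
M = cv2.getPerspectiveTransform(src, dst)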
Next, the sliding-window parameters are set based on the image size.
# Use 7 sliding windows
nwindows = 7
# Height of each sliding window
window_height = int(binary_warped.shape[0] / nwindows)
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
lane_current = lane_base
# Half the width of a sliding window
margin = 100
minpix = 25
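lane_base, the column where the first window is centered, is computed elsewhere in the script. It is commonly taken as the peak of a column histogram over the lower half of the warped binary image; the helper below is a minimal sketch under that assumption.

import numpy as np

def find_lane_base(binary_warped):
    """Seed column for the sliding-window search: peak of the bottom-half column histogram."""
    histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
    return int(np.argmax(histogram))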
The sliding windows are then drawn on the perspective-transformed image.
for window in range(nwindows):
    # Compute the corners of this sliding window
    win_y_low = binary_warped.shape[0] - (window + 1) * window_height
    win_y_high = binary_warped.shape[0] - window * window_height
    win_x_low = lane_current - margin
    win_x_high = lane_current + margin
    good_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
                 (nonzerox >= win_x_low) & (nonzerox < win_x_high)).nonzero()[0]
    lane_inds.append(good_inds)
    img1 = cv2.rectangle(img1, (win_x_low, win_y_low), (win_x_high, win_y_high), (0, 255, 0), 3)
    if len(good_inds) > minpix:
        lane_current = int(np.mean(nonzerox[good_inds]))
    elif window >= 3:
        break
The lane line is then drawn, and after the centroid coordinates are computed, two circles are drawn for localization: the blue circle lies on the lane line, and the green circle is placed at the centroid coordinates.
for num in range(len(ploty) - 1):
    cv2.line(img1, (int(plotx[num]), int(ploty[num])), (int(plotx[num + 1]), int(ploty[num + 1])), (0, 0, 255), 8)
aP[0] = aimLaneP[0] - math.sin(theta) * (LorR) * roadWidth / 2
aP[1] = aimLaneP[1] - math.cos(theta) * (LorR) * roadWidth / 2
# Draw the blue circle on the lane line
img1 = cv2.circle(img1, (int(aimLaneP[0]), int(aimLaneP[1])), 10, (255, 0, 0), -1)
# Draw the green circle at the centroid
img1 = cv2.circle(img1, (int(aP[0]), int(aP[1])), 10, (0, 255, 0), -1)
The resulting image is shown below.
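plotx and ploty describe the fitted lane curve drawn above. They are typically obtained by fitting a second-order polynomial x = f(y) to the pixels collected by the sliding windows; the helper below is a sketch of that step, not the exact code from lane_master.py.

import numpy as np

def fit_lane(binary_warped, nonzerox, nonzeroy, lane_inds):
    """Fit x = a*y^2 + b*y + c to the lane pixels gathered by the sliding windows."""
    inds = np.concatenate(lane_inds)
    fit = np.polyfit(nonzeroy[inds], nonzerox[inds], 2)
    ploty = np.linspace(0, binary_warped.shape[0] - 1, binary_warped.shape[0])
    plotx = fit[0] * ploty ** 2 + fit[1] * ploty + fit[2]
    return plotx, ploty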
The real-world coordinates of the target point are then computed, and the steering angle is derived from the deviation.
# Compute the real-world coordinates of the target point
if lastP[0] > 0.001 and lastP[1] > 0.001:
    if (((aP[0] - lastP[0]) ** 2 + (aP[1] - lastP[1]) ** 2 > 2500) and Timer < 2):
        # To avoid being misled by pedestrians
        aP = lastP[:]
        Timer += 1
    else:
        Timer = 0
        lastP = aP[:]
steerAngle = k * math.atan(2 * I * aP[0] / (aP[0] * aP[0] + (aP[1] + D) * (aP[1] + D)))
#print("steerAngle=", steerAngle)
st = steerAngle * 4.0 / 3.1415
Finally, a new AckermannDriveStamped message is created; after assigning the speed and steering angle, it is published.
msg = AckermannDriveStamped()
msg.header.stamp = rospy.Time.now()
msg.header.frame_id = "base_link"
msg.drive.speed = 0.35
msg.drive.steering_angle = steerAngle * 0.95
print("Speed:", msg.drive.speed)
print("Steering_Angle:", msg.drive.steering_angle)
pub.publish(msg)
print("Message From lan.py Published\n")
4. Running the Experiment
Open a terminal and run the following commands to start the multicars scene.
cd aliyun_demo
source devel/setup.bash
roslaunch raceworld_master multicars.launch
The scene looks like the following.
Turn off shadows in the scene.
Open a new terminal and run the following commands to start multi_follow1.launch.
cd aliyun_demo
source devel/setup.bash
roslaunch raceworld multi_follow1.launch
You can then watch the multi-car following behavior in action.
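To confirm that the followers are actually receiving commands, the published AckermannDriveStamped messages can be inspected with a small monitor node run in another terminal inside the container. This sketch assumes the followers, like the leader, are commanded on their own ackermann_cmd_mux/output topics; the node name follow_monitor is illustrative.

#!/usr/bin/env python
import rospy
from ackermann_msgs.msg import AckermannDriveStamped

def cmd_cb(msg):
    rospy.loginfo("deepracer2 speed=%.2f steering=%.2f",
                  msg.drive.speed, msg.drive.steering_angle)

rospy.init_node('follow_monitor', anonymous=True)
rospy.Subscriber('/deepracer2/ackermann_cmd_mux/output', AckermannDriveStamped, cmd_cb)
rospy.spin()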
Experiment URL: https://developer.aliyun.com/adc/scenario/d0c3fc1fe6b94a309acf4420b048a760