ZED Camera
Getting the depth of the center point. RGB-to-depth registration is not handled here (for that, see the D415 section below).
```cpp
#include <iostream>
#include <fstream>
#include <sstream>
#include <algorithm>
#include <dirent.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <cv_bridge/cv_bridge.h>
#include <image_transport/image_transport.h>

void depthCallback(const sensor_msgs::Image::ConstPtr& msg)
{
  // Get a pointer to the depth values by casting the data
  // pointer to floating point
  float* depths = (float*)(&msg->data[0]);

  // Image coordinates of the center pixel
  int u = msg->width / 2;
  int v = msg->height / 2;

  // Linear index of the center pixel
  int centerIdx = u + msg->width * v;

  // Output the measurement
  ROS_INFO("Center distance : %g m", depths[centerIdx]);
}

static const std::string OPENCV_WINDOW = "Image window";

class ImageConverter
{
  ros::NodeHandle nh_;
  image_transport::ImageTransport it_;
  image_transport::Subscriber image_sub_;
  image_transport::Publisher image_pub_;

public:
  ImageConverter() : it_(nh_)
  {
    // Subscribe to the input video feed and publish the output video feed
    image_sub_ = it_.subscribe("/zed/right/image_raw_color", 1,
                               &ImageConverter::imageCb, this);
    image_pub_ = it_.advertise("/image_converter/output_video", 1);

    cv::namedWindow(OPENCV_WINDOW);
  }

  ~ImageConverter()
  {
    cv::destroyWindow(OPENCV_WINDOW);
  }

  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {
    cv_bridge::CvImagePtr cv_ptr;
    try
    {
      cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
    }
    catch (cv_bridge::Exception& e)
    {
      ROS_ERROR("cv_bridge exception: %s", e.what());
      return;
    }

    // Draw an example circle on the video stream
    if (cv_ptr->image.rows > 60 && cv_ptr->image.cols > 60)
      cv::circle(cv_ptr->image, cv::Point(50, 50), 10, CV_RGB(255, 0, 0));

    // Update the GUI window
    cv::imshow(OPENCV_WINDOW, cv_ptr->image);
    cv::waitKey(3);

    // Output the modified video stream
    image_pub_.publish(cv_ptr->toImageMsg());
  }
};

// Reference: blog.csdn.net/qq_29985391/article/details/80247724
int main(int argc, char** argv)
{
  ros::init(argc, argv, "zed_video_subscriber");

  /*
   * NodeHandle is the main access point to communications with the ROS system.
   * The first NodeHandle constructed will fully initialize this node, and the last
   * NodeHandle destructed will close down the node.
   */
  ros::NodeHandle n;
  ros::Subscriber subDepth = n.subscribe("/zed/depth/depth_registered", 10, depthCallback);

  // Only a single ImageConverter object needs to be declared
  ImageConverter ic;

  ros::spin();
  return 0;
}
```
The CMakeLists.txt is written as follows:

```cmake
cmake_minimum_required(VERSION 2.8.3)
project(zed_depth_sub_tutorial)

# If CMAKE_BUILD_TYPE is not specified, default to 'Release'
IF(NOT CMAKE_BUILD_TYPE)
  SET(CMAKE_BUILD_TYPE Release)
ENDIF(NOT CMAKE_BUILD_TYPE)

## Find catkin and any catkin packages
find_package(catkin REQUIRED COMPONENTS
  cv_bridge
  image_transport
  roscpp
  sensor_msgs
  std_msgs
  geometry_msgs
)
find_package(OpenCV REQUIRED)

if(NOT WIN32)
  ADD_DEFINITIONS("-std=c++0x -O3")
endif(NOT WIN32)

## Declare a catkin package
catkin_package()

include_directories(include ${catkin_INCLUDE_DIRS})
include_directories(${OpenCV_INCLUDE_DIRS})

## Build
add_executable(zed_depth_sub src/zed_depth_sub_tutorial.cpp)
target_link_libraries(zed_depth_sub
  ${OpenCV_LIBRARIES}
  ${catkin_LIBRARIES}
)
```
The package.xml file:

```xml
<package>
  <name>zed_depth_sub_tutorial</name>
  <version>2.7.0</version>
  <description>
    This package is a tutorial showing how to subscribe to ZED depth streams
  </description>
  <maintainer email="support@stereolabs.com">STEREOLABS</maintainer>
  <license>MIT</license>

  <url type="website">http://stereolabs.com/</url>
  <url type="repository">https://github.com/stereolabs/zed-ros-wrapper</url>
  <url type="bugtracker">https://github.com/stereolabs/zed-ros-wrapper/issues</url>

  <buildtool_depend>catkin</buildtool_depend>

  <build_depend>cv_bridge</build_depend>
  <build_depend>image_transport</build_depend>
  <build_depend>opencv2</build_depend>
  <build_depend>roscpp</build_depend>
  <build_depend>sensor_msgs</build_depend>
  <build_depend>std_msgs</build_depend>

  <run_depend>cv_bridge</run_depend>
  <run_depend>image_transport</run_depend>
  <run_depend>opencv2</run_depend>
  <run_depend>roscpp</run_depend>
  <run_depend>sensor_msgs</run_depend>
  <run_depend>std_msgs</run_depend>
</package>
```
If compilation fails and you are building from Qt, Qt reports an error without saying where it is. In that case, go to your workspace (mine is open_cv_ws) and run `catkin_make` there; the output will show that `format="2"` should be removed from the `<package>` tag. The reason is that a format-2 manifest uses `exec_depend` tags, while this one uses the format-1 `run_depend` tags. Remove the attribute and recompile, and the build succeeds.
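For reference, the difference between the two manifest formats looks like this (the `exec_depend` tag below follows the REP 140 format-2 convention; the dependency name is just an example):

```xml
<!-- Format 1 (what this tutorial uses): no format attribute, run_depend -->
<package>
  <run_depend>roscpp</run_depend>
</package>

<!-- Format 2: declares format="2" and uses exec_depend instead -->
<package format="2">
  <exec_depend>roscpp</exec_depend>
</package>
```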
Summary
The topic /zed/right/image_raw_color provides the ZED color image. The subscription is wrapped in a class, so the main program only needs to define a single object.
We also saw how to add dependencies: in CMakeLists.txt, add `find_package(catkin REQUIRED COMPONENTS cv_bridge image_transport roscpp sensor_msgs std_msgs geometry_msgs)` and `find_package(OpenCV REQUIRED)`.
Kinect v2
Saving image and depth-map sequences
ROS
Anyone doing robotics-related work will already be familiar with this part. The Robot Operating System (ROS) is very widely used, and many open-source libraries and packages are available for it. All of the mainstream sensors are supported in ROS, and Kinect v2 is no exception. ROS is installed on top of Ubuntu; I use Ubuntu 14.04 + ROS Indigo. For installing ROS, see the official guide — the steps are simple. Enter the following commands in order:
```bash
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net --recv-key 0xB01FA116
sudo apt-get update
sudo apt-get install ros-indigo-desktop-full
sudo rosdep init
rosdep update
echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc
source ~/.bashrc
sudo apt-get install python-rosinstall
```
Once these commands have finished, ROS is installed. Next, you still need to create your own workspace; run the following:

```bash
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
catkin_init_workspace
cd ..
catkin_make
```
Driver node configuration
Using Kinect v2 in ROS relies mainly on the iai_kinect2 package.
GitHub address: https://github.com/code-iai/iai_kinect2 .
First, its dependency libfreenect2 needs to be installed. The steps are as follows (these assume the system is Ubuntu 14.04 or newer; for other systems, see the upstream documentation):
libfreenect2
Download the libfreenect2 source:

```bash
git clone https://github.com/OpenKinect/libfreenect2.git
cd libfreenect2
```
Download the upgrade deb files:
cd depends; ./download_debs_trusty.sh
Install the build tools:
sudo apt-get install build-essential cmake pkg-config
Install libusb (version >= 1.0.20 required):
sudo dpkg -i debs/libusb*deb
Install TurboJPEG:
sudo apt-get install libturbojpeg libjpeg-turbo8-dev
Install OpenGL:

```bash
sudo dpkg -i debs/libglfw3*deb
sudo apt-get install -f
sudo apt-get install libgl1-mesa-dri-lts-vivid
```

"(If the last command conflicts with other packages, don't do it.)" — so says the upstream note. In my case the third command did indeed fail, so I simply skipped it.
----Install OpenCL (optional)
-----------Intel GPU:
```bash
sudo apt-add-repository ppa:floe/beignet
sudo apt-get update
sudo apt-get install beignet-dev
sudo dpkg -i debs/ocl-icd*deb
```
-----------AMD GPU: apt-get install opencl-headers
-----------Verify the installation: clinfo
----Install CUDA (optional, Nvidia only):
-----------(Ubuntu 14.04 only) Download cuda-repo-ubuntu1404…*.deb (“deb (network)”) from Nvidia website, follow their installation instructions, including apt-get install cuda which installs Nvidia graphics driver.
-----------(Jetson TK1) It is preloaded.
-----------(Nvidia/Intel dual GPUs) After apt-get install cuda, use sudo prime-select intel to use Intel GPU for desktop.
-----------(Other) Follow Nvidia website’s instructions.
----Install VAAPI (optional, Intel only): sudo dpkg -i debs/{libva,i965}*deb; sudo apt-get install -f
----Install OpenNI2 (optional)

```bash
sudo apt-add-repository ppa:deb-rob/ros-trusty && sudo apt-get update
sudo apt-get install libopenni2-dev
```
----Build
```bash
cd ..
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/freenect2 -DENABLE_CXX11=ON
make
make install
```

About the cmake command above: the first option specifies the install location. You can choose any location you like, but the standard paths are the example path above or /usr/local. The second option enables C++11 support.
----Set up the udev rules: sudo cp ../platform/linux/udev/90-kinect2.rules /etc/udev/rules.d/, then replug the Kinect2.
----Everything is in place; you can now try running the demo program: ./bin/Protonect. Barring surprises, you should see the following:
iai-kinect2
Download the project source from GitHub into the src folder of your workspace from the command line:

```bash
cd ~/catkin_ws/src/
git clone https://github.com/code-iai/iai_kinect2.git
cd iai_kinect2
rosdep install -r --from-paths .
cd ~/catkin_ws
catkin_make -DCMAKE_BUILD_TYPE="Release"
```
Regarding the last command above: if you installed libfreenect2 somewhere other than the two standard paths, you need to pass an extra argument pointing at the libfreenect2 location:
catkin_make -Dfreenect2_DIR=path_to_freenect2/lib/cmake/freenect2 -DCMAKE_BUILD_TYPE="Release"
When compilation finishes, if everything is OK, you will see output like the following:
Finally, the exciting moment: getting Kinect v2 data in ROS.
PS: Many people find that the command below fails to run; in most cases the cause is not having sourced the corresponding directory. Run source ~/catkin_ws/devel/setup.bash first. If you have already added it to your ~/.bashrc, you can skip this.
roslaunch kinect2_bridge kinect2_bridge.launch
Launching the kinect2-related nodes with roslaunch gives the result below. In another terminal, enter rostopic list to see the topics published by the node, or run rosrun kinect2_viewer kinect2_viewer sd cloud to open a viewer and inspect the data.
The result is shown in the figure below: