The goal of stereo calibration is to obtain the rotation matrix R and translation vector T between the two cameras, as well as each camera's rectification rotation matrix (Rl, Rr), projection matrix (Pl, Pr), and the disparity-to-depth mapping matrix Q. After stereo matching (BM, SGBM, GC, etc.) produces a disparity map, real-world 3D coordinates can then be computed from Q.
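As a quick reminder of the geometry behind Q (standard pinhole stereo results, stated here only for reference): with rectified cameras of focal length f (in pixels), baseline B, and disparity d, depth follows directly, and Q packages the whole 2D-plus-disparity to 3D mapping:

Z = \frac{f \, B}{d},
\qquad
Q \begin{bmatrix} x \\ y \\ d \\ 1 \end{bmatrix} = \begin{bmatrix} X \\ Y \\ Z \\ W \end{bmatrix},
\qquad
\left( \tfrac{X}{W},\ \tfrac{Y}{W},\ \tfrac{Z}{W} \right) \text{ is the 3D point.}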
As with the previous post, I later wrote up the theory behind this part in my reading notes. I have not fully understood the stereo matching part yet; I will fill that in later.
—— Jacob杨帮帮 11/2/2018
1. Calibrate each camera individually
See my other post, opencv3.3 单目摄像头的标定与矫正 - 简书 (calibration and rectification of a monocular camera with OpenCV 3.3). Calibrating each camera individually first and feeding the resulting intrinsic and distortion matrices into the stereo calibration gives much better results. You can also stereo-calibrate directly without this step, but that is not covered here.
//Intrinsic and distortion matrices from monocular calibration
cv::Mat cameraMatrixL = (cv::Mat_<double>(3, 3) <<
570.853,0,163.936,
0,565.62,142.756,
0,0,1);
cv::Mat distCoeffL = (cv::Mat_<double>(5, 1) <<-0.1464597668354846, -6.154543533838482,
-0.002589887217588616, 0.005985159261180101, 58.40123386205326);
cv::Mat cameraMatrixR = (cv::Mat_<double>(3, 3) <<
568.373,0,158.748,
0,562.243,114.268,
0,0,1);
cv::Mat distCoeffR = (cv::Mat_<double>(5, 1) << -0.2883413485650786, -1.10075802161073,
-0.00209556234492967, 0.007351217947355803, 6.544712063275942);
2. Stereo calibration
There is an official OpenCV sample for this. Perhaps my usage was at fault, but the results it gave me were unsatisfactory, and I also wanted to implement the procedure myself once. The code below mainly follows the official sample and various expert blogs.
- Find the chessboard corners and compute the object points
isFindL = cv::findChessboardCorners(imageL, boardSize, imageCornersL);
isFindR = cv::findChessboardCorners(imageR, boardSize, imageCornersR);
//If all corners were found in both images, this pair is usable
if (isFindL && isFindR)
{
/*
Size(5,5): half the side length of the search window
Size(-1,-1): half the size of the dead zone (none here)
TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 20, 0.1): iteration termination criteria
*/
cv::cornerSubPix(imageL, imageCornersL, cv::Size(5, 5), cv::Size(-1, -1),
cv::TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 20, 0.1));
cv::drawChessboardCorners(imageL, boardSize, imageCornersL, isFindL);
// cv::imshow("chessboardL", imageL);
imagePointL.push_back(imageCornersL);
cv::cornerSubPix(imageR, imageCornersR, cv::Size(5, 5), cv::Size(-1, -1),
cv::TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 20, 0.1));
cv::drawChessboardCorners(imageR, boardSize, imageCornersR, isFindR);
// cv::imshow("chessboardR", imageR);
imagePointR.push_back(imageCornersR);
goodFrameCount++;
std::cout << "The image" << goodFrameCount << " is good" << std::endl;
}
else
{
std::cout << "The image is bad please try again" << std::endl;
std::cout <<"left image " << isFindL << std::endl;
std::cout <<"right image " << isFindR << std::endl;
}
}
calRealPoint(objRealPoint, boardWidth, boardHeight, goodFrameCount, squareSize); //one set of object points per accepted image pair
std::cout << "calculate success" << std::endl;
Corner detection and sub-pixel refinement mirror the monocular case, so I won't go into detail; one optional speed tweak is sketched below.
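As an aside (an assumption of mine, not something this post's code uses): findChessboardCorners takes a flags argument, and adding CALIB_CB_FAST_CHECK lets it reject images without any visible chessboard quickly, which helps when scanning many frames:

isFindL = cv::findChessboardCorners(imageL, boardSize, imageCornersL,
    cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE //the defaults
    | cv::CALIB_CB_FAST_CHECK); //fast rejection of board-free images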
- Call stereoCalibrate to obtain the R and T matrices
double rms = cv::stereoCalibrate(objRealPoint, imagePointL, imagePointR,
cameraMatrixL, distCoeffL,
cameraMatrixR, distCoeffR,
cv::Size(imageWidth, imageHeight),
R, T, E, F,cv::CALIB_USE_INTRINSIC_GUESS,
cv::TermCriteria(cv::TermCriteria::COUNT
+ cv::TermCriteria::EPS, 100, 1e-5));
std::cout << "Stereo Calibration done with RMS error = " << rms << std::endl;
It is worth looking at the function prototype:
double stereoCalibrate( InputArrayOfArrays objectPoints,
InputArrayOfArrays imagePoints1,
InputArrayOfArrays imagePoints2,
InputOutputArray cameraMatrix1, InputOutputArray distCoeffs1,
InputOutputArray cameraMatrix2, InputOutputArray distCoeffs2,
Size imageSize, InputOutputArray R,
InputOutputArray T, OutputArray E, OutputArray F,
OutputArray perViewErrors, int flags = CALIB_FIX_INTRINSIC,
TermCriteria criteria = TermCriteria(TermCriteria::COUNT+
TermCriteria::EPS, 30, 1e-6) );
Parameters E and F are the essential matrix and the fundamental matrix; they are not used in this example, and an empty cv::Mat() can be passed. criteria is the termination criteria, and the default is usually fine. The flags parameter matters most; the source documents it as follows (my rough translation):
§ CV_CALIB_FIX_INTRINSIC: the intrinsic and distortion matrices are kept fixed, so only R, T, E, and F are estimated.
§ CV_CALIB_USE_INTRINSIC_GUESS: the intrinsic and distortion matrices start from user-supplied values and are refined during the iteration.
§ CV_CALIB_FIX_PRINCIPAL_POINT: fix the principal points during the optimization.
§ CV_CALIB_FIX_FOCAL_LENGTH: do not change the focal lengths during the iteration.
§ CV_CALIB_FIX_ASPECT_RATIO: keep the ratio of fx to fy fixed.
§ CV_CALIB_SAME_FOCAL_LENGTH: enforce the same focal length for both cameras.
§ CV_CALIB_ZERO_TANGENT_DIST: set the tangential distortion coefficients of each camera to zero and keep them fixed.
§ CV_CALIB_FIX_K1,...,CV_CALIB_FIX_K6: do not change the corresponding radial distortion coefficient during the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used; otherwise it is set to zero.
§ CV_CALIB_RATIONAL_MODEL: enable coefficients k4, k5, and k6. If this flag is not set, the function computes and returns only 5 distortion coefficients.
Since the cameras were already calibrated individually, CV_CALIB_USE_INTRINSIC_GUESS is the right choice here.
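The flags form a bitmask, so they can also be combined with bitwise OR. A hypothetical combination (not what this post uses) that starts from the supplied intrinsics but freezes the principal points and k3:

int flags = cv::CALIB_USE_INTRINSIC_GUESS
          | cv::CALIB_FIX_PRINCIPAL_POINT
          | cv::CALIB_FIX_K3;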
- Stereo-rectify the calibrated cameras
/*
Stereo rectification requires the two images to be coplanar and row-aligned, which makes stereo matching far more reliable.
To make them coplanar, both camera images are reprojected onto a common imaging plane, which requires a rotation matrix for each image plane.
stereoRectify computes exactly these rotations: Rl and Rr are the rectification rotations that row-align the left and right camera planes.
After rotating the left camera by Rl and the right camera by Rr, the two images are coplanar and row-aligned.
Pl and Pr are the projection matrices of the two cameras; they map 3D points to 2D image points: P*[X Y Z 1]' = [x y w].
Q is the reprojection matrix: it maps a point on the 2D image plane plus its disparity to a 3D point: Q*[x y d 1] = [X Y Z W], where d is the disparity between the left and right images.
*/
cv::stereoRectify(cameraMatrixL, distCoeffL, cameraMatrixR, distCoeffR, imageSize, R, T, Rl, Rr, Pl, Pr, Q,
cv::CALIB_ZERO_DISPARITY, -1, imageSize, &validROIL, &validROIR);
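For reference, the OpenCV documentation for stereoRectify gives Q the following form, where (cx, cy) is the principal point of the left rectified camera, f the rectified focal length, Tx the baseline, and cx' the principal point of the right camera (equal to cx when CALIB_ZERO_DISPARITY is set):

Q = \begin{bmatrix}
1 & 0 & 0 & -c_x \\
0 & 1 & 0 & -c_y \\
0 & 0 & 0 & f \\
0 & 0 & -1/T_x & (c_x - c_x')/T_x
\end{bmatrix}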
/*
Use the R and P computed by stereoRectify to build the remap tables mapx and mapy.
These tables are then passed to remap() to rectify the images so that both become coplanar and row-aligned.
The newCameraMatrix parameter of initUndistortRectifyMap() is the rectified camera matrix. In OpenCV the rectified camera matrix Mrect is returned embedded in the projection matrix P,
so we pass the projection matrix P here and the function reads the rectified camera matrix out of it.
*/
cv::Size newSize(static_cast<int>(imageL.cols*1.2), static_cast<int>(imageL.rows*1.2));
cv::initUndistortRectifyMap(cameraMatrixL, distCoeffL, Rl, Pl, newSize,
CV_32FC1, mapLx, mapLy);
cv::initUndistortRectifyMap(cameraMatrixR, distCoeffR, Rr, Pr, newSize,
CV_32FC1, mapRx, mapRy);
//rectify using the remap tables
cv::Mat rectifyImageL, rectifyImageR;
cv::remap(imageL, rectifyImageL, mapLx, mapLy, cv::INTER_LINEAR);
cv::remap(imageR, rectifyImageR, mapRx, mapRy, cv::INTER_LINEAR);
The prototype of stereoRectify is as follows:
void stereoRectify( InputArray cameraMatrix1, InputArray distCoeffs1,
InputArray cameraMatrix2, InputArray distCoeffs2,
Size imageSize, InputArray R, InputArray T,
OutputArray R1, OutputArray R2,
OutputArray P1, OutputArray P2,
OutputArray Q, int flags = CALIB_ZERO_DISPARITY,
double alpha = -1, Size newImageSize = Size(),
CV_OUT Rect* validPixROI1 = 0, CV_OUT Rect* validPixROI2 = 0 );
R1 and R2 are the rectification rotation matrices of the left and right cameras, and P1 and P2 their projection matrices.
alpha is the free scaling parameter. If it is negative or omitted, the function performs its default scaling. If it is 0, the rectified images are zoomed and shifted so that only valid pixels are visible (no black border); if it is 1, the whole original image is retained. Values between 0 and 1 give intermediate results.
flags can be 0 or CV_CALIB_ZERO_DISPARITY. With CV_CALIB_ZERO_DISPARITY the principal points of the two rectified images get the same pixel coordinates; otherwise the function may shift the images horizontally or vertically to maximize the useful image area.
newImageSize is the size of the rectified images, usually the same as the original.
validPixROI1 and validPixROI2 are optional output rectangles bounding the valid pixels of the rectified images; they are passed in above but not used further here.
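Though unused here, the valid ROIs make it easy to crop both rectified images to the region that is valid in both, so later stereo matching never sees the black warp borders. A minimal sketch, assuming the validROIL/validROIR filled in by stereoRectify above:

cv::Rect commonROI = validROIL & validROIR; //rectangle intersection
if (commonROI.area() > 0)
{
    cv::Mat croppedL = rectifyImageL(commonROI);
    cv::Mat croppedR = rectifyImageR(commonROI);
}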
The remaining functions were covered in the monocular calibration post. drawMatches can be used to display the rectified stereo pair side by side.
//display the rectified images with horizontal reference lines
for(int i=20; i<rectifyImageL.rows; i+=20)
{
cv::line(rectifyImageL,cv::Point(0,i),cv::Point(rectifyImageL.cols,i),
cv::Scalar(255,255,255));
cv::line(rectifyImageR,cv::Point(0,i),cv::Point(rectifyImageL.cols,i),
cv::Scalar(255,255,255));
}
cv::Mat imageMatches;
cv::drawMatches(rectifyImageL, std::vector<cv::KeyPoint>(), // 1st image
rectifyImageR, std::vector<cv::KeyPoint>(), // 2nd image
std::vector<cv::DMatch>(),
imageMatches, // the image produced
cv::Scalar(255, 255, 255),
cv::Scalar(255, 255, 255),
std::vector<char>(),
2);
cv::imshow("imageMatches", imageMatches);
3. Verifying the result with the SGBM algorithm
SGBM is a stereo matching algorithm with a good balance of accuracy and speed, which makes it a common choice in practice.
cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0,16,3);
int sgbmWinSize = 3;
int cn = imageL.channels();
int numberOfDisparities = ((imageSize.width/8) + 15) & -16;
sgbm->setPreFilterCap(63);
sgbm->setBlockSize(sgbmWinSize);
sgbm->setP1(8*cn*sgbmWinSize*sgbmWinSize);
sgbm->setP2(32*cn*sgbmWinSize*sgbmWinSize);
sgbm->setMinDisparity(0);
sgbm->setNumDisparities(numberOfDisparities);
sgbm->setUniquenessRatio(10);
sgbm->setSpeckleWindowSize(50);
sgbm->setSpeckleRange(32);
sgbm->setDisp12MaxDiff(1);
sgbm->setMode(cv::StereoSGBM::MODE_SGBM);
cv::Mat disp, disp8;
sgbm->compute(rectifyImageL, rectifyImageR, disp);
disp.convertTo(disp8, CV_8U, 255/(numberOfDisparities*16.));
cv::imshow("disparity8", disp8);
Converting disp like this yields a disparity map that can be displayed (SGBM outputs CV_16S values equal to 16 times the true disparity, hence the 16 in the scale factor).
cv::reprojectImageTo3D(disp, xyz, Q, true);
xyz = xyz * 16; // xyz = [X/W Y/W Z/W]; reprojectImageTo3D treats the input as raw disparities, but SGBM outputs 16*d, so the coordinates come out 16 times too small; multiply by 16 to recover the true values
cv::setMouseCallback("disparity8", onMouse, 0);
void onMouse(int event, int x, int y,int,void*)
{
cv::Point origin;
switch (event)
{
case cv::EVENT_LBUTTONDOWN: //left mouse button pressed
origin = cv::Point(x, y);
std::cout << origin << " in world coordinates is: " <<
xyz.at<cv::Vec3f>(origin) << std::endl;
break;
}
}
With this callback registered, clicking in the image window prints the real-world coordinates of the clicked pixel.
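An alternative sketch that avoids the xyz*16 correction above (my variation, not the author's code): undo the fixed-point scaling of the SGBM output first, so reprojectImageTo3D sees true disparities and returns coordinates directly in the calibration units (cm here):

cv::Mat dispF;
disp.convertTo(dispF, CV_32F, 1.0/16.0); //SGBM stores disparity as 16*d in CV_16S
cv::reprojectImageTo3D(dispF, xyz, Q, true);
//xyz now holds (X, Y, Z) per pixel in the same units as squareSize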
4. Results
The rectified stereo pair is shown below.
The disparity map is shown below.
The printed real-world coordinates are shown below. The two pomelos were about 60 cm away, so the result is fairly accurate.
5. Lessons learned
- Measure the chessboard square size accurately. It has a large effect on the reprojection matrix Q, and the accuracy of the final result largely hinges on it; note that the output coordinates are in the units of squareSize (cm here).
- Don't cut corners on stereo calibration either: use at least 15 image pairs (30 images).
- Make good use of the official samples; the OpenCV maintainers have done a solid job in this area.
The complete code follows.
/*
* m_stereo_match.cpp
* Created by 杨帮杰 on 7/30/18.
* Right to use this code in any way you want without warranty,
* support or any guarantee of it working
* E-mail: yangbangjie1998@qq.com
*/
#include <iostream>
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#define LEFT_CAM_PICS_FILE_PATH "F:/QtProjects/stero/stereo4/left" //file path prefix
#define RIGHT_CAM_PICS_FILE_PATH "F:/QtProjects/stero/stereo4/right"
#define PICS_NUMBER 34 //number of calibration images
const int imageWidth = 320; //camera resolution
const int imageHeight = 240;
const int boardWidth = 8; //number of inner corners per row
const int boardHeight = 6; //number of inner corners per column
const float squareSize = 1.33; //side length of a chessboard square, in cm
const cv::Size imageSize = cv::Size(imageWidth, imageHeight);
const cv::Size boardSize = cv::Size(boardWidth, boardHeight); //inner corner layout of the board
cv::Mat Rl, Rr, Pl, Pr, Q; //rectification rotations Rl/Rr, projection matrices Pl/Pr, reprojection matrix Q
cv::Mat mapLx, mapLy, mapRx, mapRy; //remap tables
cv::Rect validROIL, validROIR; //valid regions after rectification
cv::Mat xyz;
int goodFrameCount=0; //number of image pairs with detected corners
cv::Mat R, T, E, F; //R rotation, T translation, E essential matrix, F fundamental matrix
std::vector<cv::Mat> rvecs; //rotation vectors
std::vector<cv::Mat> tvecs; //translation vectors
std::vector<std::vector<cv::Point2f>> imagePointL; //corner coordinates of all left-camera images
std::vector<std::vector<cv::Point2f>> imagePointR; //corner coordinates of all right-camera images
std::vector<std::vector<cv::Point3f>> objRealPoint; //physical coordinates of the corners in each image
//Intrinsic and distortion matrices from monocular calibration
cv::Mat cameraMatrixL = (cv::Mat_<double>(3, 3) <<
570.853,0,163.936,
0,565.62,142.756,
0,0,1);
cv::Mat distCoeffL = (cv::Mat_<double>(5, 1) <<-0.1464597668354846, -6.154543533838482,
-0.002589887217588616, 0.005985159261180101, 58.40123386205326);
cv::Mat cameraMatrixR = (cv::Mat_<double>(3, 3) <<
568.373,0,158.748,
0,562.243,114.268,
0,0,1);
cv::Mat distCoeffR = (cv::Mat_<double>(5, 1) << -0.2883413485650786, -1.10075802161073,
-0.00209556234492967, 0.007351217947355803, 6.544712063275942);
/*Compute the real-world physical coordinates of the chessboard corners*/
void calRealPoint(std::vector<std::vector<cv::Point3f>>& obj, int boardwidth, int boardheight, int imgNumber, float squaresize)
{
std::vector<cv::Point3f> imgpoint;
for (int rowIndex = 0; rowIndex < boardheight; rowIndex++)
{
for (int colIndex = 0; colIndex < boardwidth; colIndex++)
{
imgpoint.push_back(cv::Point3f(rowIndex * squaresize, colIndex * squaresize, 0));
}
}
for (int imgIndex = 0; imgIndex < imgNumber; imgIndex++)
{
obj.push_back(imgpoint);
}
}
void onMouse(int event, int x, int y,int,void*)
{
cv::Point origin;
switch (event)
{
case cv::EVENT_LBUTTONDOWN: //left mouse button pressed
origin = cv::Point(x, y);
xyz.at<cv::Vec3f>(origin)[2] += 2; //empirical +2 cm depth offset; note it accumulates on repeated clicks of the same pixel
std::cout << origin << " in world coordinates is: " << xyz.at<cv::Vec3f>(origin) << std::endl;
break;
}
}
int main()
{
std::vector<std::string> filelistL;
std::vector<std::string> filelistR;
//build the left camera file list
for (int i=1; i<=PICS_NUMBER/2; i++) {
std::stringstream str;
str << LEFT_CAM_PICS_FILE_PATH << std::setw(2) << std::setfill('0') << i << ".png";
std::cout << str.str() << std::endl;
filelistL.push_back(str.str());
}
//build the right camera file list
for (int i=1; i<=PICS_NUMBER/2; i++) {
std::stringstream str;
str << RIGHT_CAM_PICS_FILE_PATH << std::setw(2) << std::setfill('0') << i << ".png";
std::cout << str.str() << std::endl;
filelistR.push_back(str.str());
}
cv::Mat imageL;
cv::Mat imageR;
for (int frameIndex = 0; frameIndex < PICS_NUMBER/2; frameIndex++)
{
std::vector<cv::Point2f> imageCornersL;
std::vector<cv::Point2f> imageCornersR;
/*read the left image*/
imageL = cv::imread(filelistL[frameIndex], 0);
/*read the right image*/
imageR = cv::imread(filelistR[frameIndex], 0);
bool isFindL, isFindR;
isFindL = cv::findChessboardCorners(imageL, boardSize, imageCornersL);
isFindR = cv::findChessboardCorners(imageR, boardSize, imageCornersR);
if (isFindL && isFindR) //the pair is usable only if all corners were found in both images
{
/*
Size(5,5): half the side length of the search window
Size(-1,-1): half the size of the dead zone (none here)
TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 20, 0.1): iteration termination criteria
*/
cv::cornerSubPix(imageL, imageCornersL, cv::Size(5, 5), cv::Size(-1, -1), cv::TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 20, 0.1));
cv::drawChessboardCorners(imageL, boardSize, imageCornersL, isFindL);
// cv::imshow("chessboardL", imageL);
imagePointL.push_back(imageCornersL);
cv::cornerSubPix(imageR, imageCornersR, cv::Size(5, 5), cv::Size(-1, -1), cv::TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 20, 0.1));
cv::drawChessboardCorners(imageR, boardSize, imageCornersR, isFindR);
// cv::imshow("chessboardR", imageR);
imagePointR.push_back(imageCornersR);
goodFrameCount++;
std::cout << "The image" << goodFrameCount << " is good" << std::endl;
}
else
{
std::cout << "The image is bad please try again" << std::endl;
std::cout <<"left image " << isFindL << std::endl;
std::cout <<"right image " << isFindR << std::endl;
}
}
calRealPoint(objRealPoint, boardWidth, boardHeight, goodFrameCount, squareSize); //one set of object points per accepted image pair
std::cout << "calculate success" << std::endl;
double rms = cv::stereoCalibrate(objRealPoint, imagePointL, imagePointR,
cameraMatrixL, distCoeffL,
cameraMatrixR, distCoeffR,
cv::Size(imageWidth, imageHeight),
R, T, E, F,cv::CALIB_USE_INTRINSIC_GUESS,
cv::TermCriteria(cv::TermCriteria::COUNT
+ cv::TermCriteria::EPS, 100, 1e-5));
std::cout << "Stereo Calibration done with RMS error = " << rms << std::endl;
/*
Stereo rectification requires the two images to be coplanar and row-aligned, which makes stereo matching far more reliable.
To make them coplanar, both camera images are reprojected onto a common imaging plane, which requires a rotation matrix for each image plane.
stereoRectify computes exactly these rotations: Rl and Rr are the rectification rotations that row-align the left and right camera planes.
After rotating the left camera by Rl and the right camera by Rr, the two images are coplanar and row-aligned.
Pl and Pr are the projection matrices of the two cameras; they map 3D points to 2D image points: P*[X Y Z 1]' = [x y w].
Q is the reprojection matrix: it maps a point on the 2D image plane plus its disparity to a 3D point: Q*[x y d 1] = [X Y Z W], where d is the disparity between the left and right images.
*/
//rectify the calibrated images
cv::stereoRectify(cameraMatrixL, distCoeffL, cameraMatrixR, distCoeffR, imageSize, R, T, Rl, Rr, Pl, Pr, Q,
cv::CALIB_ZERO_DISPARITY, -1, imageSize, &validROIL, &validROIR);
/*
Use the R and P computed by stereoRectify to build the remap tables mapx and mapy.
These tables are then passed to remap() to rectify the images so that both become coplanar and row-aligned.
The newCameraMatrix parameter of initUndistortRectifyMap() is the rectified camera matrix. In OpenCV the rectified camera matrix Mrect is returned embedded in the projection matrix P,
so we pass the projection matrix P here and the function reads the rectified camera matrix out of it.
*/
//build the rectification remap tables
cv::Size newSize(static_cast<int>(imageL.cols*1.2), static_cast<int>(imageL.rows*1.2));
cv::initUndistortRectifyMap(cameraMatrixL, distCoeffL, Rl, Pl, newSize,
CV_32FC1, mapLx, mapLy);
cv::initUndistortRectifyMap(cameraMatrixR, distCoeffR, Rr, Pr, newSize,
CV_32FC1, mapRx, mapRy);
std::cout << "---------------cameraMatrixL & distCoeffL ----------------- " << std::endl;
std::cout << "cameraMatrixL" << std::endl << cameraMatrixL <<std::endl;
std::cout << "distCoeffL" << std::endl << distCoeffL <<std::endl;
std::cout << "---------------cameraMatrixR & distCoeffR ----------------- " << std::endl;
std::cout << "cameraMatrixR" << std::endl << cameraMatrixR << std::endl;
std::cout << "distCoeffR " << std::endl << distCoeffR << std::endl;
std::cout << "---------------R & T ----------------- " << std::endl;
std::cout << "R " << std::endl << R <<std::endl;
std::cout << "T " << std::endl << T <<std::endl;
std::cout << "---------------Pl & Pr ----------------- " << std::endl;
std::cout << "Pl " << std::endl << Pl <<std::endl;
std::cout << "Pr " << std::endl << Pr <<std::endl;
std::cout << "---------------Rl & Rr ----------------- " << std::endl;
std::cout << "Rl " << std::endl << Rl <<std::endl;
std::cout << "Rr " << std::endl << Rr <<std::endl;
std::cout << "--------------- Q ----------------- " << std::endl;
std::cout << "Q " << std::endl << Q <<std::endl;
/************* Verify the result with the SGBM algorithm **************/
#if 0
imageL = cv::imread(filelistL[13],0);
imageR = cv::imread(filelistR[13],0);
#else
imageL = cv::imread("F:\\QtProjects\\stero\\youzi60l.png",0);
imageR = cv::imread("F:\\QtProjects\\stero\\youzi60r.png",0);
#endif
cv::Mat rectifyImageL, rectifyImageR;
cv::remap(imageL, rectifyImageL, mapLx, mapLy, cv::INTER_LINEAR);
cv::remap(imageR, rectifyImageR, mapRx, mapRy, cv::INTER_LINEAR);
// cv::imshow("rectifyImageL", rectifyImageL);
// cv::imshow("rectifyImageR", rectifyImageR);
cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0,16,3);
int sgbmWinSize = 3;
int cn = imageL.channels();
int numberOfDisparities = ((imageSize.width/8) + 15) & -16;
sgbm->setPreFilterCap(63);
sgbm->setBlockSize(sgbmWinSize);
sgbm->setP1(8*cn*sgbmWinSize*sgbmWinSize);
sgbm->setP2(32*cn*sgbmWinSize*sgbmWinSize);
sgbm->setMinDisparity(0);
sgbm->setNumDisparities(numberOfDisparities);
sgbm->setUniquenessRatio(10);
sgbm->setSpeckleWindowSize(50);
sgbm->setSpeckleRange(32);
sgbm->setDisp12MaxDiff(1);
sgbm->setMode(cv::StereoSGBM::MODE_SGBM);
cv::Mat disp, disp8;
sgbm->compute(rectifyImageL, rectifyImageR, disp);
disp.convertTo(disp8, CV_8U, 255/(numberOfDisparities*16.));
cv::imshow("disparity8", disp8);
cv::reprojectImageTo3D(disp, xyz, Q, true);
xyz = xyz * 16; // xyz = [X/W Y/W Z/W]; reprojectImageTo3D treats the input as raw disparities, but SGBM outputs 16*d, so the coordinates come out 16 times too small; multiply by 16 to recover the true values
cv::setMouseCallback("disparity8", onMouse, 0);
//display the rectified images with horizontal reference lines
for(int i=20; i<rectifyImageL.rows; i+=20)
{
cv::line(rectifyImageL,cv::Point(0,i),cv::Point(rectifyImageL.cols,i),cv::Scalar(255,255,255));
cv::line(rectifyImageR,cv::Point(0,i),cv::Point(rectifyImageL.cols,i),cv::Scalar(255,255,255));
}
cv::Mat imageMatches;
cv::drawMatches(rectifyImageL, std::vector<cv::KeyPoint>(), // 1st image
rectifyImageR, std::vector<cv::KeyPoint>(), // 2nd image
std::vector<cv::DMatch>(),
imageMatches, // the image produced
cv::Scalar(255, 255, 255),
cv::Scalar(255, 255, 255),
std::vector<char>(),
2);
cv::imshow("imageMatches", imageMatches);
cv::waitKey();
return 0;
}