[OpenCV Learning] Epipolar Geometry Constraints

Introduction:




2. RANSAC
RANSAC is a very simple algorithm

used to remove noisy samples from a sample set and keep the valid ones.
It works by random sampling and verification. The following is excerpted from Wikipedia; only the useful parts are quoted.

RANSAC
RANSAC is an abbreviation for "RANdom SAmple Consensus". It is an algorithm to estimate parameters
of a mathematical model from a set of observed data which contains outliers. The algorithm was first
published by Fischler and Bolles in 1981.
A basic assumption is that the data consists of "inliers", i.e., data
points which can be explained by some set of model parameters, and
"outliers" which are data points that do not fit the model. In addition
to this, the data points can be subject to noise. The outliers can
come, e.g., from extreme values of the noise or from erroneous
measurements or incorrect hypotheses about the interpretation of data.
RANSAC also assumes that, given a (usually small) set of inliers, there
exists a procedure which can estimate the parameters of a model that
optimally explains or fits this data.

Example
A
simple example is fitting of a 2D line to a set of observations. Assuming
that this set contains both inliers, i.e., points which approximately
can be fitted to a line, and outliers, points which cannot be fitted to
this line, a simple least squares method for line fitting will in
general produce a line with a bad fit to the inliers. The reason is
that it is optimally fitted to all points, including the outliers.
RANSAC, on the other hand, can produce a model which is only computed
from the inliers, provided that the probability of choosing only
inliers in the selection of data points is sufficiently high. There is
no guarantee for this situation, however, and there are a number of
algorithm parameters which must be carefully chosen to keep the level
of probability reasonably high.


Overview
The
input to the RANSAC algorithm is a set of observed data values, a
parameterized model which can explain or be fitted to the observations,
and some confidence parameters.
RANSAC achieves its goal by
iteratively selecting a random subset of the original data points.
These points are hypothetical inliers and this hypothesis is then
tested as follows. A model is fitted to the hypothetical inliers, that
is, all free parameters of the model are reconstructed from the point
set. All other data points are then tested against the fitted model,
that is, for every point of the remaining set, the algorithm determines
how well the point fits to the estimated model. If it fits well, that
point is also considered as a hypothetical inlier. If sufficiently many
points have been classified as hypothetical inliers relative to the
estimated model, then we have a model which is reasonably good.
However, it has only been estimated from the initial set of
hypothetical inliers, so we reestimate the model from the entire set of
hypothetical inliers. At the same time, we also estimate the
error of the inliers relative to the model.
This procedure is then
repeated a fixed number of times, each time producing either a model
which is rejected because too few points are classified as inliers or a
refined model together with a corresponding error measure. In the
latter case, we keep the refined model if its error is lower than the
last saved model.

Algorithm
The generic RANSAC algorithm works as follows:

input:
data - a set of observed data points
model - a model that can be fitted to data points
n - the minimum number of data values required to fit the model
k - the maximum number of iterations allowed in the algorithm
t - a threshold value for determining when a data point fits a model
d - the number of close data values required to assert that a model fits well to data
output:
bestfit - model parameters which best fit the data (or nil if no good model is found)
iterations := 0
bestfit := nil
besterr := infinity
while iterations < k
    maybeinliers := n randomly selected values from data
    maybemodel := model parameters fitted to maybeinliers
    alsoinliers := empty set
    for every point in data not in maybeinliers
        if point fits maybemodel with an error smaller than t
            add point to alsoinliers
    if the number of elements in alsoinliers is > d
        (this implies that we may have found a good model,
        now test how good it is)
        bettermodel := model parameters fitted to all points in maybeinliers and alsoinliers
        thiserr := a measure of how well bettermodel fits these points
        if thiserr < besterr
            bestfit := bettermodel
            besterr := thiserr
    increment iterations
return bestfit


While
the values of t and d have to be chosen to match the requirements of
the individual application and can be determined experimentally, the
interesting parameter of the RANSAC algorithm is k, which can be
calculated from theory.
To calculate the
parameter k, given the known probability w that any selected data value
is an inlier, the probability z of never drawing an all-inlier sample
in k iterations is used:

z = (1 - w^n)^k

which leads to

k = log(z) / log(1 - w^n)

To
gain additional confidence, the standard deviation or multiples thereof
can be added to k. The standard deviation of k is defined as

SD(k) = sqrt(1 - w^n) / w^n

A
common case is that w is not well known beforehand, but some rough
value can be given. If n data values are selected independently, the
probability that all n are inliers is w^n.


Advantages and disadvantages
An advantage
of RANSAC is its ability to do robust estimation of the model
parameters, i.e., it can estimate the parameters with a high degree of
accuracy even when outliers are present in the data set. A disadvantage
of RANSAC is that there is no upper bound on the time it takes to
compute these parameters. If an upper time bound is used, the solution
obtained may not be the most optimal one.
RANSAC can only estimate
one model for a particular data set. As for any one-model approach,
when two (or more) models exist, RANSAC may fail to find either one.

Applications
The
RANSAC algorithm is often used in computer vision, e.g., to
simultaneously solve the correspondence problem and estimate the
fundamental matrix related to a pair of stereo cameras.

References
M. A. Fischler and R. C. Bolles (June 1981). "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography". Comm. of the ACM 24: 381-395. doi:10.1145/358669.358692.
David A. Forsyth and Jean Ponce (2003). Computer Vision, A Modern Approach. Prentice Hall. ISBN 0-13-085198-1.
Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in Computer Vision, 2nd edition. Cambridge University Press.

3. The fundamental matrix:
The fundamental matrix relates the image coordinates of a point in the left image to the image coordinates of its corresponding point in the right image: corresponding points x and x' in homogeneous coordinates satisfy x'^T F x = 0. F is a 3x3 singular (rank-2) matrix that describes the epipolar geometry of a stereo image pair, and its computation relies on a set of corresponding points in the two images.

#include <iostream>
#include <cv.h>
#include <highgui.h>

//------------ Global point data ----------//
double first[12][2] =
{
{488.362, 169.911},
{449.488, 174.44},
{408.565, 179.669},
{364.512, 184.56},
{491.483, 122.366},
{451.512, 126.56},
{409.502, 130.342},
{365.5, 134},
{494.335, 74.544},
{453.5, 76.5},
{411.646, 79.5901},
{366.498, 81.6577}
};

double second[12][2] =
{
{526.605, 213.332},
{470.485, 207.632},
{417.5, 201},
{367.485, 195.632},
{530.673, 156.417},
{473.749, 151.39},
{419.503, 146.656},
{368.669, 142.565},
{534.632, 97.5152},
{475.84, 94.6777},
{421.16, 90.3223},
{368.5, 87.5}
};

//---- Helper: for the line a*x + b*y + c = 0, return y at a given x ---//
double GetYCoord(double x, double a, double b, double c)
{
return -(a*x + c)/b;
}


int main(int argc,char *argv[])
{
CvMat *firstM = cvCreateMat(12,2,CV_64FC1);
cvSetData(firstM,first,firstM->step);

CvMat *secondM = cvCreateMat(12,2,CV_64FC1);
cvSetData(secondM,second,secondM->step);

CvMat *FMat= cvCreateMat(3,3,CV_64FC1);

if(cvFindFundamentalMat(firstM,secondM,FMat,CV_FM_RANSAC,1.00,0.99) == 0){ // estimate the fundamental matrix F
std::cerr << "Can't Get F Mat\n";
return -1;
}


CvMat *lines = cvCreateMat(12,3,CV_64FC1);
cvComputeCorrespondEpilines(firstM,1,FMat,lines); // compute the epipolar lines in the second image


IplImage *imgB = cvLoadImage( "second.png", CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR);
IplImage *imgA = cvLoadImage( "first.png", CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR);
if(imgB == NULL || imgA == NULL){
std::cout<<"Can't Load Image.\n";
return -1;
}


// draw the epipolar line of the 12th point (row index 11) on the second image
cvLine(imgB,
cvPoint( // start point at x = 0
0, // x
cvRound( // y
GetYCoord(0,CV_MAT_ELEM(*lines,double,11,0),
CV_MAT_ELEM(*lines,double,11,1),CV_MAT_ELEM(*lines,double,11,2)))
),
cvPoint( // end point at x = image width
imgB->width, // x
cvRound( // y
GetYCoord(imgB->width,CV_MAT_ELEM(*lines,double,11,0),
CV_MAT_ELEM(*lines,double,11,1),
CV_MAT_ELEM(*lines,double,11,2)))
),
CV_RGB(255,0,0),5);

// mark the corresponding point (row index 11) on the first image;
// identical corners plus thickness 8 draw a small square marker
cvDrawRect(imgA,
cvPoint(cvRound(CV_MAT_ELEM(*firstM,double,11,0)),cvRound(CV_MAT_ELEM(*firstM,double,11,1))),
cvPoint(cvRound(CV_MAT_ELEM(*firstM,double,11,0)),cvRound(CV_MAT_ELEM(*firstM,double,11,1))),
CV_RGB(0,255,0),
8);



cvNamedWindow("second",CV_WINDOW_AUTOSIZE);
cvShowImage("second",imgB);


cvNamedWindow("first",CV_WINDOW_AUTOSIZE);
cvShowImage("first",imgA);

cvReleaseMat(&firstM);
cvReleaseMat(&secondM);
cvReleaseMat(&FMat);
cvReleaseImage( &imgA ); 
cvReleaseImage( &imgB);
cvReleaseMat(&lines);


cvWaitKey(0); 

cvDestroyAllWindows(); 


return EXIT_SUCCESS;
}




Results: (result images omitted)





This article is reposted from the gnuhpc cnblogs blog. Original link: http://www.cnblogs.com/gnuhpc/archive/2012/11/25/2787614.html. For reprinting, please contact the original author.