License plate recognition, ported to Android


   First, a rant: it took me a day and a half to finally get this working. I am a complete beginner at Android development, and at first I was dead set on reading the XML files from the jni directory, which it turns out simply cannot work. Fine. Then I learned that resource files are supposed to live under the assets directory, but files there get compressed and have to be accessed through an AssetManager. The SVM.xml and OCR.xml files trained earlier with OpenCV are not ordinary XML files, and parsing them by hand into OpenCV Mat objects would be far too much work. After some more thought I decided it was easier to put them on the sdcard; I pushed them over with DDMS. This whole exercise was mostly for learning anyway.
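For what it is worth, reading the XML straight out of assets/ is possible from native code: the NDK exposes the AssetManager, and cv::FileStorage can parse a memory buffer. Below is a hedged sketch of that alternative; it is not what this article does, and it assumes the native method is extended to also receive the AssetManager (getAssets() on the Java side) and that Android.mk additionally links -landroid:

#include <string>
#include <jni.h>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
#include <opencv2/core/core.hpp>

// Hypothetical helper: load an XML file packaged under assets/ and open it
// with an in-memory cv::FileStorage, avoiding the sdcard entirely.
static bool readXmlFromAssets(JNIEnv* env, jobject assetManager,
                              const char* name, cv::FileStorage& fs) {
    AAssetManager* mgr = AAssetManager_fromJava(env, assetManager);
    AAsset* asset = AAssetManager_open(mgr, name, AASSET_MODE_BUFFER);
    if (asset == NULL) return false;
    // Copy the (possibly compressed) asset into memory...
    const char* buf = (const char*)AAsset_getBuffer(asset);
    off_t len = AAsset_getLength(asset);
    std::string xml(buf, (size_t)len);
    AAsset_close(asset);
    // ...and let FileStorage parse it from that string instead of a file path.
    return fs.open(xml, cv::FileStorage::READ | cv::FileStorage::MEMORY);
}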


Disclaimers:

1. The car images used here still carry Spanish license plates. The biggest difference from Chinese plates is that they contain no Chinese characters: a Spanish plate uses the digits 0-9 and 20 English letters.

2. On the emulator it seems to run about as slowly as it did under VS2008, and misrecognition is possible; I have run into it myself.

3. For the underlying principles, see my earlier articles. Here I use the already trained SVM.xml and OCR.xml directly and give the complete recognition pipeline. The whole project will be uploaded to the CSDN download channel shortly.


Environment requirements:

Eclipse Juno

NDK r9

Android SDK 4.4 (API 19)

OpenCV 2.4.7 for Android

Cygwin


Preparation:

1. Import the Java library project under E:\OpenCV-2.4.7.1-android-sdk\sdk into your workspace; from now on, whenever the Java side calls OpenCV functions it needs this library.

2. Install the OpenCV Manager apk. At present every OpenCV program on Android has to rely on OpenCV Manager. Run the following in a DOS window:

adb install <OpenCV4Android SDK path>/apk/OpenCV_2.4.7_Manager_2.14_armv7a-neon.apk

Starting the project:

1. Create a new Android Application project named CarPlate; right-click the project, open Properties, and tick the OpenCV library as a dependency.

2. Copy a car photo into any directory under drawable, then write the layout file activity_main.xml:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
   	android:orientation="vertical"
    tools:context=".MainActivity" >

    <TextView
        android:id="@+id/myshow"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Detection result...." />
	<Button   
        android:id="@+id/btn_plate"  
        android:layout_width="fill_parent"  
        android:layout_height="wrap_content"  
        android:text="Plate detection"
        android:onClick="click"
        /> 
    <ImageView  
        android:id="@+id/image_view"  
        android:layout_width="wrap_content"  
        android:layout_height="wrap_content"  
        android:contentDescription="@string/str_proc"/>   
</LinearLayout>

3. Create a CarPlateDetection class and declare the native method that will serve as the entry point into the C code:

package com.example.carplate;


public class CarPlateDetection {
	public static native String ImageProc(int[] pixels, int w, int h,String path);
}

4. In a DOS window, use the javah tool to generate the C header automatically: change into the CarPlate project's bin\classes directory and run:

javah com.example.carplate.CarPlateDetection
Afterwards a com_example_carplate_CarPlateDetection.h file will appear in the classes directory.
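For reference, the header javah generates for the class above should look roughly like this (a sketch; the actual file is produced by the tool):

#include <jni.h>
#ifndef _Included_com_example_carplate_CarPlateDetection
#define _Included_com_example_carplate_CarPlateDetection
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     com_example_carplate_CarPlateDetection
 * Method:    ImageProc
 * Signature: ([IIILjava/lang/String;)Ljava/lang/String;
 */
JNIEXPORT jstring JNICALL Java_com_example_carplate_CarPlateDetection_ImageProc
  (JNIEnv *, jclass, jintArray, jint, jint, jstring);
#ifdef __cplusplus
}
#endif
#endif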


5. Create a jni folder and copy the com_example_carplate_CarPlateDetection.h file into it. Then write Android.mk:

LOCAL_PATH := $(call my-dir)  
include $(CLEAR_VARS)  
include E:/OpenCV-2.4.7.1-android-sdk/sdk/native/jni/OpenCV.mk  
LOCAL_SRC_FILES  := ImageProc.cpp  
LOCAL_SRC_FILES  += Plate_Recognition.cpp
LOCAL_SRC_FILES  += Plate_Segment.cpp
LOCAL_SRC_FILES  += Plate.cpp
LOCAL_C_INCLUDES += $(LOCAL_PATH)
LOCAL_MODULE     := imageproc  
LOCAL_LDLIBS += -llog 
include $(BUILD_SHARED_LIBRARY)  


6. Modify AndroidManifest.xml to add the sdcard permissions (even for read-only access, add them!):

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.MOUNT_UNMOUNT_FILESYSTEMS"/> 

7. Back in MainActivity, write the main Java-side code:

package com.example.carplate;

import java.io.File;

import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.*;

import android.os.Bundle;
import android.os.Environment;
import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.view.Menu;
import android.view.View;
import android.widget.ImageView;
import android.widget.TextView;

public class MainActivity extends Activity {
	private ImageView imageView = null;  
	private Bitmap bmp = null;  
	private TextView m_text = null;
	private String path = null; // sdcard root directory
	@Override
	protected void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);
		setContentView(R.layout.activity_main);
		imageView = (ImageView) findViewById(R.id.image_view);  
		m_text = (TextView) findViewById(R.id.myshow);
	    // load the full car image and display it
		 bmp = BitmapFactory.decodeResource(getResources(), R.drawable.test2);  
	     imageView.setImageBitmap(bmp);
	     path = Environment.getExternalStorageDirectory().getAbsolutePath();// get the sdcard root directory
	     System.out.println(path);
	}

	// Callback invoked once the OpenCV library has been loaded and initialized successfully; here we load our native library
    private BaseLoaderCallback  mLoaderCallback = new BaseLoaderCallback(this) {  
       @Override  
       public void onManagerConnected(int status) {  
           switch (status) {  
               case LoaderCallbackInterface.SUCCESS:{  
                   System.loadLibrary("imageproc");  
               } break;  
               default:{  
                   super.onManagerConnected(status);  
               } break;  
           }  
       }  
   };  
   
   public void click(View view){
	   System.out.println("entering the jni");
	   int w = bmp.getWidth();
	   int h = bmp.getHeight();
	   int[] pixels = new int[w * h];
	   String result=null;
	   bmp.getPixels(pixels, 0, w, 0, 0, w, h);
	  // System.out.println(Environment.getExternalStorageState());
	   result=CarPlateDetection.ImageProc(pixels, w, h,path);
	   System.out.println(result);
	   m_text.setText(result);   
   }
   
	@Override
	protected void onResume() {
		// TODO Auto-generated method stub
		super.onResume();
		  // Load and initialize the OpenCV library asynchronously through the OpenCV engine service,
       // i.e. the OpenCV Manager apk found in the apk directory of the OpenCV Android SDK
       OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_3, this, mLoaderCallback);  
	}
}

8. Now for the main C part. The header and source files are listed below (these files also go in the jni directory):

Plate.h: [the plate class: it holds the plate data structure and the function that puts the recognized plate characters into left-to-right order]

#ifndef Plate_h
#define Plate_h

#include <string>
#include <vector>

#include <cv.h>
#include <highgui.h>
#include <cvaux.h>

using namespace std;
using namespace cv;

class Plate{
    public:
        Plate();
        Plate(Mat img, Rect pos);
        string str();
        Rect position;
        Mat plateImg;
        vector<char> chars;
        vector<Rect> charsPos;        
};

#endif

Plate.cpp:

#include "Plate.h"

Plate::Plate(){
}

Plate::Plate(Mat img, Rect pos){
    plateImg=img;
    position=pos;
}

string Plate::str(){
    string result="";
    //Order numbers
    vector<int> orderIndex;
    vector<int> xpositions;
    for(int i=0; i< charsPos.size(); i++){
        orderIndex.push_back(i);
        xpositions.push_back(charsPos[i].x);
    }
    float min=xpositions[0];
	int minIdx=0;
    for(int i=0; i< xpositions.size(); i++){
        min=xpositions[i];
        minIdx=i;
        for(int j=i; j<xpositions.size(); j++){
            if(xpositions[j]<min){
                min=xpositions[j];
                minIdx=j;
            }
        }
        int aux_i=orderIndex[i];
        int aux_min=orderIndex[minIdx];
        orderIndex[i]=aux_min;
        orderIndex[minIdx]=aux_i;
        
        float aux_xi=xpositions[i];
        float aux_xmin=xpositions[minIdx];
        xpositions[i]=aux_xmin;
        xpositions[minIdx]=aux_xi;
    }
    for(int i=0; i<orderIndex.size(); i++){
        result=result+chars[orderIndex[i]];
    }
    return result;
}
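For reference, the same left-to-right ordering can be written more compactly by sorting an index vector with std::sort. The sketch below is my own illustration, equivalent in behavior to str() above, and is not part of the original project:

#include <algorithm>
#include <string>
#include <vector>
#include <opencv2/core/core.hpp>

// Comparator: order character indices by the x coordinate of their bounding boxes.
struct CharByX {
    const std::vector<cv::Rect>* pos;
    bool operator()(int a, int b) const { return (*pos)[a].x < (*pos)[b].x; }
};

// Returns the plate string with its characters ordered from left to right.
std::string orderedPlate(const std::vector<char>& chars,
                         const std::vector<cv::Rect>& charsPos) {
    std::vector<int> idx;
    for (size_t i = 0; i < charsPos.size(); ++i) idx.push_back((int)i);
    CharByX cmp; cmp.pos = &charsPos;
    std::sort(idx.begin(), idx.end(), cmp);
    std::string result;
    for (size_t i = 0; i < idx.size(); ++i) result += chars[idx[i]];
    return result;
}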

Plate_Segment.h: [purpose: segment the license plate region out of a photo of a car]

#ifndef seg_h
#define seg_h

#include<iostream>
#include <cv.h>
#include <highgui.h>
#include <cvaux.h>
#include "Plate.h"

using namespace std;
using namespace cv;

bool verifySizes(RotatedRect mr);
Mat histeq(Mat in);
vector<Plate> segment(Mat input);

#endif

Plate_Segment.cpp:

#include "Plate_Segment.h"

//Validate the minimum-area rotated rect returned by minAreaRect using its aspect ratio and area
bool verifySizes(RotatedRect mr)
{
	float error=0.4;
	//Spain car plate size: 52x11 aspect 4,7272
	float aspect=4.7272;
	//Set a min and max area. All other patchs are discarded
	int min= 15*aspect*15; // minimum area
	int max= 125*aspect*125; // maximum area
	//Get only patchs that match to a respect ratio.
	float rmin= aspect-aspect*error;
	float rmax= aspect+aspect*error;

	int area= mr.size.height * mr.size.width;
	float r= (float)mr.size.width / (float)mr.size.height;
	if(r<1)
		r= (float)mr.size.height / (float)mr.size.width;

	if(( area < min || area > max ) || ( r < rmin || r > rmax )){
		return false;
	}else{
		return true;
	}
}

Mat histeq(Mat in)
{
	Mat out(in.size(), in.type());
	if(in.channels()==3){
		Mat hsv;
		vector<Mat> hsvSplit;
		cvtColor(in, hsv, CV_BGR2HSV);
		split(hsv, hsvSplit);
		equalizeHist(hsvSplit[2], hsvSplit[2]);
		merge(hsvSplit, hsv);
		cvtColor(hsv, out, CV_HSV2BGR);
	}else if(in.channels()==1){
		equalizeHist(in, out);
	}

	return out;
}

vector<Plate> segment(Mat input){
	vector<Plate> output;

	//apply a Gaussian blur of 5 x 5 and remove noise
	Mat img_gray;
	cvtColor(input, img_gray, CV_BGR2GRAY);
	blur(img_gray, img_gray, Size(5,5));    

	//Find vertical edges. Car plates have a high density of vertical lines
	Mat img_sobel;
	Sobel(img_gray, img_sobel, CV_8U, 1, 0, 3, 1, 0, BORDER_DEFAULT);//xorder=1,yorder=0,kernelsize=3

	//apply a threshold filter to obtain a binary image through Otsu's method
	Mat img_threshold;
	threshold(img_sobel, img_threshold, 0, 255, CV_THRESH_OTSU+CV_THRESH_BINARY);

	//Morphologic close operation: remove blank spaces and connect regions that have a high number of edges
	Mat element = getStructuringElement(MORPH_RECT, Size(17, 3) );
	morphologyEx(img_threshold, img_threshold, CV_MOP_CLOSE, element);

	//Find contours of possible plates
	vector< vector< Point> > contours;
	findContours(img_threshold,
		contours, // a vector of contours
		CV_RETR_EXTERNAL, // retrieve only the external contours
		CV_CHAIN_APPROX_NONE); // all pixels of each contour

	//Start iterating over each contour found
	vector<vector<Point> >::iterator itc= contours.begin();
	vector<RotatedRect> rects;

	//Remove patches that are not inside the limits of aspect ratio and area.
	while (itc!=contours.end()) {
		//Create bounding rect of object
		RotatedRect mr= minAreaRect(Mat(*itc));
		if( !verifySizes(mr)){
			itc= contours.erase(itc);
		}else{
			++itc;
			rects.push_back(mr);
		}
	}

	cv::Mat result;
	input.copyTo(result);

	for(int i=0; i< rects.size(); i++)
	{
		//get the min size between width and height
		float minSize=(rects[i].size.width < rects[i].size.height)?rects[i].size.width:rects[i].size.height;
		minSize=minSize-minSize*0.5;
		//initialize rand and get 5 points around center for floodfill algorithm
		srand ( time(NULL) );
		//Initialize floodfill parameters and variables
		Mat mask;
		mask.create(input.rows + 2, input.cols + 2, CV_8UC1);
		mask= Scalar::all(0);
		int loDiff = 30;
		int upDiff = 30;
		int connectivity = 4;
		int newMaskVal = 255;
		int NumSeeds = 10;
		Rect ccomp;
		int flags = connectivity + (newMaskVal << 8 ) + CV_FLOODFILL_FIXED_RANGE + CV_FLOODFILL_MASK_ONLY;
		for(int j=0; j<NumSeeds; j++){
			Point seed;
			seed.x=rects[i].center.x+rand()%(int)minSize-(minSize/2);
			seed.y=rects[i].center.y+rand()%(int)minSize-(minSize/2);
			int area = floodFill(input, mask, seed, Scalar(255,0,0), &ccomp, Scalar(loDiff, loDiff, loDiff), Scalar(upDiff, upDiff, upDiff), flags);
		}

		//Check new floodfill mask match for a correct patch.
		//Get all points detected for get Minimal rotated Rect
		vector<Point> pointsInterest;
		Mat_<uchar>::iterator itMask= mask.begin<uchar>();
		Mat_<uchar>::iterator end= mask.end<uchar>();
		for( ; itMask!=end; ++itMask)
			if(*itMask==255)
				pointsInterest.push_back(itMask.pos());

		RotatedRect minRect = minAreaRect(pointsInterest);

		if(verifySizes(minRect)){
			// rotated rectangle drawing 
			Point2f rect_points[4]; minRect.points( rect_points );   

			//Get rotation matrix
			float r= (float)minRect.size.width / (float)minRect.size.height;
			float angle=minRect.angle;    
			if(r<1)
				angle=90+angle;
			Mat rotmat= getRotationMatrix2D(minRect.center, angle,1);

			//Create and rotate image
			Mat img_rotated;
			warpAffine(input, img_rotated, rotmat, input.size(), CV_INTER_CUBIC);

			//Crop image
			Size rect_size=minRect.size;
			if(r < 1)
				swap(rect_size.width, rect_size.height);
			Mat img_crop;
			getRectSubPix(img_rotated, rect_size, minRect.center, img_crop);

			Mat resultResized;
			resultResized.create(33,144, CV_8UC3);
			resize(img_crop, resultResized, resultResized.size(), 0, 0, INTER_CUBIC);
			//Equalize croped image
			Mat grayResult;
			cvtColor(resultResized, grayResult, CV_BGR2GRAY); 
			blur(grayResult, grayResult, Size(3,3));
			grayResult=histeq(grayResult);
			output.push_back(Plate(grayResult,minRect.boundingRect()));
		}
	}
	return output;
}
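The segmentation code above has no Android dependencies, so it can be sanity-checked on the desktop before going through the whole NDK/emulator cycle. A small hypothetical test driver (my addition, not part of the Android project; the image name is just an example):

#include <cstdio>
#include <vector>
#include <cv.h>
#include <highgui.h>
#include "Plate_Segment.h"

int main() {
    // Load a test car photo and run the plate segmentation on it.
    cv::Mat car = cv::imread("test2.jpg");
    if (car.empty()) return -1;
    std::vector<Plate> candidates = segment(car);
    // Write every candidate (a 144x33 equalized gray patch) to disk for inspection.
    for (size_t i = 0; i < candidates.size(); ++i) {
        char name[32];
        std::sprintf(name, "plate_%d.jpg", (int)i);
        cv::imwrite(name, candidates[i].plateImg);
    }
    return 0;
}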

Plate_Recognition.h: [recognize the individual characters on a plate image]

#ifndef rec_h
#define rec_h
#include <cv.h>
#include <highgui.h>
#include <cvaux.h>
#include <ml.h>

#include <iostream>
#include <vector>
#define HORIZONTAL    1
#define VERTICAL    0

using namespace std;
using namespace cv;

bool verifySizes(Mat r);
Mat preprocessChar(Mat in);
Mat ProjectedHistogram(Mat img, int t);
Mat features(Mat in, int sizeData);
int classify(Mat f,CvANN_MLP *ann);
void train(Mat TrainData, Mat classes,CvANN_MLP *ann,int nlayers);
#endif

Plate_Recognition.cpp:

#include "Plate_Recognition.h"

const int numCharacters=30;

bool verifySizes(Mat r){
	//Char sizes 45x77
	float aspect=45.0f/77.0f;
	float charAspect= (float)r.cols/(float)r.rows;
	float error=0.35;
	float minHeight=15;
	float maxHeight=28;
	//We have a different aspect ratio for number 1, and it can be ~0.2
	float minAspect=0.2;
	float maxAspect=aspect+aspect*error;
	//area of pixels
	float area=countNonZero(r);
	//bb area
	float bbArea=r.cols*r.rows;
	// percentage of non-zero pixels in the bounding-box area
	float percPixels=area/bbArea;

	if(percPixels < 0.8 && charAspect > minAspect && charAspect < maxAspect && r.rows >= minHeight && r.rows < maxHeight)
		return true;
	else
		return false;

}

Mat preprocessChar(Mat in){
	//Remap image
	int h=in.rows;
	int w=in.cols;
	int charSize=20;	// normalize every character to the same size
	Mat transformMat=Mat::eye(2,3,CV_32F);
	int m=max(w,h);
	transformMat.at<float>(0,2)=m/2 - w/2;
	transformMat.at<float>(1,2)=m/2 - h/2;

	Mat warpImage(m,m, in.type());
	warpAffine(in, warpImage, transformMat, warpImage.size(), INTER_LINEAR, BORDER_CONSTANT, Scalar(0) );

	Mat out;
	resize(warpImage, out, Size(charSize, charSize) ); 

	return out;
}

//create the accumulation histogram; img is a binary image, t selects horizontal or vertical
Mat ProjectedHistogram(Mat img, int t)
{
	int sz=(t)?img.rows:img.cols;
	Mat mhist=Mat::zeros(1,sz,CV_32F);

	for(int j=0; j<sz; j++){
		Mat data=(t)?img.row(j):img.col(j);
		mhist.at<float>(j)=countNonZero(data);	// count the non-zero elements in this row or column and store the count in mhist
	}

	//Normalize histogram
	double min, max;
	minMaxLoc(mhist, &min, &max);

	if(max>0)
		mhist.convertTo(mhist,-1 , 1.0f/max, 0);// normalize the histogram by its maximum value

	return mhist;
}

Mat features(Mat in, int sizeData){
	//Histogram features
	Mat vhist=ProjectedHistogram(in,VERTICAL);
	Mat hhist=ProjectedHistogram(in,HORIZONTAL);

	//Low data feature
	Mat lowData;
	resize(in, lowData, Size(sizeData, sizeData) );

	//Last 10 is the number of moments components
	int numCols=vhist.cols+hhist.cols+lowData.cols*lowData.cols;

	Mat out=Mat::zeros(1,numCols,CV_32F);
	//Assign values to the feature vector: the ANN sample feature is the vector formed by the horizontal and vertical histograms plus the low-resolution image
	int j=0;
	for(int i=0; i<vhist.cols; i++)
	{
		out.at<float>(j)=vhist.at<float>(i);
		j++;
	}
	for(int i=0; i<hhist.cols; i++)
	{
		out.at<float>(j)=hhist.at<float>(i);
		j++;
	}
	for(int x=0; x<lowData.cols; x++)
	{
		for(int y=0; y<lowData.rows; y++){
			out.at<float>(j)=(float)lowData.at<unsigned char>(x,y);
			j++;
		}
	}

	return out;
}


int classify(Mat f,CvANN_MLP *ann){
	int result=-1;
	Mat output(1, 30, CV_32FC1); // Spanish plates use only 30 distinct characters
	(*ann).predict(f, output);
	Point maxLoc;
	double maxVal;
	minMaxLoc(output, 0, &maxVal, 0, &maxLoc);
	return maxLoc.x;
}

void train(Mat TrainData, Mat classes,CvANN_MLP *ann,int nlayers){
	Mat layers(1,3,CV_32SC1);
	layers.at<int>(0)= TrainData.cols;
	layers.at<int>(1)= nlayers;
	layers.at<int>(2)= 30;
	(*ann).create(layers, CvANN_MLP::SIGMOID_SYM, 1, 1);

	//Prepare trainClases
	//Create a mat with n trained data by m classes
	Mat trainClasses;
	trainClasses.create( TrainData.rows, 30, CV_32FC1 );
	for( int i = 0; i <  trainClasses.rows; i++ )
	{
		for( int k = 0; k < trainClasses.cols; k++ )
		{
			//If class of data i is same than a k class
			if( k == classes.at<int>(i) )
				trainClasses.at<float>(i,k) = 1;
			else
				trainClasses.at<float>(i,k) = 0;
		}
	}
	Mat weights( 1, TrainData.rows, CV_32FC1, Scalar::all(1) );

	//Learn classifier
	(*ann).train( TrainData, trainClasses, weights );
}

Then write our ImageProc.cpp (I have hard-coded the sdcard paths here; adjust them for your own setup):
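Before the listing, a small note: the paths below are hard-coded to /storage/sdcard/. Since the Java side already passes the sdcard root in the last argument, one way to avoid the hard-coding is sketched here (my addition, in the spirit of the commented-out jstring2str helper in the listing; not what the project currently does):

#include <string>
#include <jni.h>

// Hypothetical helper: build "<sdcard root>/<file>" from the jstring passed in from Java.
static std::string pathFromJava(JNIEnv* env, jstring dir, const char* file) {
    const char* cdir = env->GetStringUTFChars(dir, 0);
    std::string full = std::string(cdir) + "/" + file;
    env->ReleaseStringUTFChars(dir, cdir);
    return full;
}

// Inside ImageProc one could then write, for example:
//   fs.open(pathFromJava(env, dir, "SVM.xml"), FileStorage::READ);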

#include<com_example_carplate_CarPlateDetection.h>
#include "Plate.h"
#include "Plate_Segment.h"
#include "Plate_Recognition.h"
#include <android/log.h>
#define LOG_TAG "System.out"
#define  LOGI(...)  __android_log_print(ANDROID_LOG_INFO,LOG_TAG,__VA_ARGS__)
#define  LOGD(...)  __android_log_print(ANDROID_LOG_DEBUG,LOG_TAG,__VA_ARGS__)
#define  LOGE(...)  __android_log_print(ANDROID_LOG_ERROR,LOG_TAG,__VA_ARGS__)

/*char* jstring2str(JNIEnv* env, jstring jstr)
{
    char*   rtn   =   NULL;
    jclass   clsstring   =   env->FindClass("java/lang/String");
    jstring   strencode   =   env->NewStringUTF("GB2312");
    jmethodID   mid   =   env->GetMethodID(clsstring,   "getBytes",   "(Ljava/lang/String;)[B");
    jbyteArray   barr=   (jbyteArray)env->CallObjectMethod(jstr,mid,strencode);
    jsize   alen   =   env->GetArrayLength(barr);
    jbyte*   ba   =   env->GetByteArrayElements(barr,JNI_FALSE);
    if(alen   >   0)
    {
        rtn   =   (char*)malloc(alen+1);
        memcpy(rtn,ba,alen);
        rtn[alen]=0;
    }
    env->ReleaseByteArrayElements(barr,ba,0);
    return  rtn;
}*/

JNIEXPORT jstring JNICALL Java_com_example_carplate_CarPlateDetection_ImageProc
  (JNIEnv *env, jclass obj, jintArray buf, jint w, jint h,jstring dir){
	jint *cbuf;
    cbuf = env->GetIntArrayElements(buf, false);
    //char* path = jstring2str(env,dir);

    Size size;
    size.width = w;
    size.height = h;

    Mat imageData,input;
    imageData = Mat(size, CV_8UC4, (unsigned char*)cbuf);
    input = Mat(size, CV_8UC3);
    cvtColor(imageData,input,CV_BGRA2BGR);

	vector<Plate> posible_regions = segment(input);

	const char strCharacters[] = {'0','1','2','3','4','5','6','7','8','9','B', 'C', 'D', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'N', 'P', 'R', 'S', 'T', 'V', 'W', 'X', 'Y', 'Z'};
	CvANN_MLP ann;
	//Run the SVM on each candidate region to keep only valid plates. Read the trained data from file storage.
	FileStorage fs;
	//strcat(path,"/SVM.xml");
	fs.open("/storage/sdcard/SVM.xml", FileStorage::READ);
	Mat SVM_TrainingData;
	Mat SVM_Classes;
	fs["TrainingData"] >> SVM_TrainingData;
	fs["classes"] >> SVM_Classes;
	if(fs.isOpened())
		LOGD("read success!");

	//Set SVM params
	LOGD("size:%d",SVM_TrainingData.rows);
	SVM_TrainingData.convertTo(SVM_TrainingData, CV_32FC1);
	SVM_Classes.convertTo(SVM_Classes, CV_32FC1);
	CvSVMParams SVM_params;
	SVM_params.svm_type = CvSVM::C_SVC;
	SVM_params.kernel_type = CvSVM::LINEAR; //CvSVM::LINEAR;
	SVM_params.degree = 0;
	SVM_params.gamma = 1;
	SVM_params.coef0 = 0;
	SVM_params.C = 1;
	SVM_params.nu = 0;
	SVM_params.p = 0;
	SVM_params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 1000, 0.01);
	LOGD("Everything is ready");
	//Train SVM
	LOGD("START TO ENTER SVM PREDICT");
	CvSVM svmClassifier(SVM_TrainingData, SVM_Classes, Mat(), Mat(), SVM_params);
	//For each possible plate, classify with svm if it's a plate or no
	vector<Plate> plates;
	for(int i=0; i< posible_regions.size(); i++)
	{
		Mat img=posible_regions[i].plateImg;
		Mat p= img.reshape(1, 1);
		p.convertTo(p, CV_32FC1);

		int response = (int)svmClassifier.predict( p );
		if(response==1)
			plates.push_back(posible_regions[i]);
	}
	LOGD("SVM PREDICT FINISH");
	fs.release();
	//Read file storage.
	FileStorage fs2;
	fs2.open("/storage/sdcard/OCR.xml", FileStorage::READ);
	Mat TrainingData;
	Mat Classes;
	fs2["TrainingDataF15"] >> TrainingData;
	fs2["classes"] >> Classes;
	LOGD("size:%d",TrainingData.rows);
	LOGD("START TO TRAIN MLP");
	//train the neural network
	train(TrainingData, Classes,&ann,10);
	LOGD("FINISH TRAIN MLP");
	Mat inputs=plates[0].plateImg;
	Plate mplate;
	//process the plate image and collect each character patch
	//Threshold input image
	Mat img_threshold;
	threshold(inputs, img_threshold, 60, 255, CV_THRESH_BINARY_INV);

	Mat img_contours;
	img_threshold.copyTo(img_contours);
	//Find contours of possibles characters
	vector< vector< Point> > contours;
	findContours(img_contours,
		contours, // a vector of contours
		CV_RETR_EXTERNAL, // retrieve the external contours
		CV_CHAIN_APPROX_NONE); // all pixels of each contours
	//Start to iterate to each contour founded
	vector<vector<Point> >::iterator itc= contours.begin();
	LOGD("Before extracting hist and low-resolution image");
	//Remove patches that are not inside the limits of aspect ratio and area.
	while (itc!=contours.end()) {

		//Create bounding rect of object
		Rect mr= boundingRect(Mat(*itc));
		//Crop image
		Mat auxRoi(img_threshold, mr);
		if(verifySizes(auxRoi)){
			auxRoi=preprocessChar(auxRoi);
			LOGD("FINISH extracting features");
			//extract histogram features for each small patch
			Mat f=features(auxRoi,15);
			//For each segment feature Classify
			LOGD("START TO CLASSIFY IN MLP");
			int character=classify(f,&ann);
			mplate.chars.push_back(strCharacters[character]);
			LOGD("FINISH CLASSIFY");
			mplate.charsPos.push_back(mr);
			//printf("%c ",strCharacters[character]);
		}
		++itc;
	}
	fs2.release();
	string licensePlate=mplate.str();
	//const char *result;
	//result=licensePlate.c_str();
	env->ReleaseIntArrayElements(buf, cbuf, 0);

	return env->NewStringUTF(licensePlate.c_str());
}
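One caveat about the listing above: plates[0] is read without checking whether the SVM accepted any region, so a photo with no detectable plate will crash the native code. A minimal guard (my addition, not in the original project), placed right after the "SVM PREDICT FINISH" log line, could look like this:

	// Hypothetical guard: bail out cleanly when no region survived the SVM.
	if (plates.empty()) {
		env->ReleaseIntArrayElements(buf, cbuf, 0);
		return env->NewStringUTF("no plate detected");
	}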

9. Finally, cross-compile with Cygwin:

Open Cygwin and enter:

cd /cygdrive/e/workspace/CarPlate

ndk-build

Remember to press F5 to refresh and to clean the project; at that point there will be a libimageproc.so file under the libs directory.

10. Push the files onto the sdcard through DDMS:

Start the emulator and open the DDMS perspective:


If you can get into a screen like the one below (otherwise click the small triangle on the left pane and choose reset adb):


Click the second button in the top-right corner of the right pane:


Navigate to the storage/sdcard directory and add the previously trained SVM.xml and OCR.xml (the file names must match the paths hard-coded in ImageProc.cpp).

If Cygwin reported no errors, run our Android application.

Result screenshots:





Notes:

1. If you want to play with Chinese plates, use the approach from my previous two articles: sort the training images into classes by hand (no cropping needed, just picking) and run the training program to produce the corresponding XML files.

2. My handling of paths and resource placement here is far from ideal, but I cannot think of anything better for now.


Full project download: http://download.csdn.net/detail/jinshengtao/6828651

Its assets folder contains the trained svm.xml and ocr.xml; copy them onto the sdcard (rename them to SVM.xml / OCR.xml to match the paths hard-coded in ImageProc.cpp).
