Machine Learning in Action (kNN)

Overview
The k-nearest neighbors (kNN) algorithm classifies a sample by measuring the distances between feature values.
- Pros: high accuracy, insensitive to outliers, no assumptions about the input data
- Cons: high computational complexity, high space complexity
- Applicable data: numeric and nominal values
How it works
We have a dataset, known as the training set, in which every sample carries a label; that is, we know which class each sample in the set belongs to. Given a new, unlabeled sample, we compare each of its features with the corresponding features of the samples in the training set and select the k most similar (nearest) samples, where k is typically an integer no larger than 20. The class that occurs most often among those k samples is returned as the prediction. Similarity here is measured with Euclidean distance; the classify0 function below implements the whole procedure:
```python
import numpy as np
import operator

def classify0(inX, dataSet, labels, k):
    # Number of training samples
    dataSetSize = np.size(dataSet, axis=0)
    # Broadcast inX against every training sample and compute squared
    # Euclidean distances (the square root is unnecessary, since it
    # does not change the ranking)
    diffMat = np.tile(inX, (dataSetSize, 1)) - dataSet
    sqDiffMat = diffMat**2
    sqDistances = np.sum(sqDiffMat, axis=1)
    # Indices of the training samples, sorted by distance
    sortedIndicies = np.argsort(sqDistances)
    # Tally the labels of the k nearest neighbors
    classCount = {}
    for i in range(k):
        voteILabel = labels[sortedIndicies[i]]
        classCount[voteILabel] = classCount.get(voteILabel, 0) + 1
    # Return the label with the most votes
    sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]
```
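A quick sanity check on a toy dataset (the points and labels below are made up for illustration):

```python
group = np.array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
labels = ['A', 'A', 'B', 'B']
print(classify0(np.array([0.0, 0.2]), group, labels, 3))  # prints 'B'
print(classify0(np.array([1.0, 1.2]), group, labels, 3))  # prints 'A'
```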
Improving matches on a dating site with the k-nearest neighbors algorithm
First, parse the raw text file into a feature matrix and a label vector:

```python
import numpy as np
import operator
import matplotlib
import matplotlib.pyplot as plt

def file2matrix(filename):
    # Parse a tab-separated file with three numeric features per line,
    # followed by an integer class label in the last column
    fr = open(filename)
    arrayOLines = fr.readlines()
    numberOfLines = len(arrayOLines)
    returnMat = np.zeros((numberOfLines, 3))
    classLabelVector = []
    index = 0
    for line in arrayOLines:
        line = line.strip()
        listFromLine = line.split('\t')
        returnMat[index, :] = listFromLine[0:3]
        classLabelVector.append(int(listFromLine[-1]))
        index += 1
    return returnMat, classLabelVector
```
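The matplotlib imports are not used by file2matrix itself; presumably they are there to inspect the data. A minimal sketch (assuming datingTestSet2.txt sits in the working directory, with columns ordered as in classifyPerson below: flier miles, gaming time, ice cream):

```python
datingDataMat, datingLabels = file2matrix('datingTestSet2.txt')
fig = plt.figure()
ax = fig.add_subplot(111)
# Scale marker size and color by class label so the three classes stand apart
ax.scatter(datingDataMat[:, 1], datingDataMat[:, 2],
           15.0 * np.array(datingLabels), 15.0 * np.array(datingLabels))
ax.set_xlabel('percentage of time spent playing video games')
ax.set_ylabel('liters of ice cream consumed per year')
plt.show()
```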
Next, normalize each feature to a common scale:

```python
def autoNorm(dataSet):
    # Rescale each feature column to [0, 1]:
    # newValue = (oldValue - min) / (max - min)
    minVals = np.min(dataSet, axis=0)
    maxVals = np.max(dataSet, axis=0)
    ranges = maxVals - minVals
    m = np.size(dataSet, axis=0)
    normDataSet = dataSet - np.tile(minVals, (m, 1))
    normDataSet = normDataSet / np.tile(ranges, (m, 1))
    return normDataSet, ranges, minVals
```
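Normalization matters here because frequent flier miles run into the tens of thousands while the other two features stay in single digits, so without rescaling the miles would dominate the Euclidean distance. A quick check with illustrative values:

```python
sample = np.array([[40920.0, 8.3, 0.95],
                   [14488.0, 7.2, 1.67],
                   [26052.0, 1.4, 0.80]])
normed, ranges, minVals = autoNorm(sample)
print(normed)   # every column now lies in [0, 1]
print(ranges)   # per-column max - min, used to normalize new inputs
```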
Test function for the dating-site classifier. The first 10% of the samples are held out as the test set; the remaining 90% serve as the training set.

```python
def datingClassTest():
    hoRatio = 0.10
    datingDataMat, datingLabels = file2matrix('datingTestSet2.txt')
    normMat, ranges, minVals = autoNorm(datingDataMat)
    m = np.size(normMat, axis=0)
    numTestVecs = int(hoRatio * m)
    errorCount = 0.0
    for i in range(numTestVecs):
        # Classify each held-out sample against the remaining 90%
        classifierResult = classify0(normMat[i, :], normMat[numTestVecs:m, :],
                                     datingLabels[numTestVecs:m], 3)
        print("The classifier came back with:%d,the real answer is:%d" % (classifierResult, datingLabels[i]))
        if classifierResult != datingLabels[i]:
            errorCount += 1.0
    print("the total error rate is:%f" % (errorCount / float(numTestVecs)))
```
```python
def classifyPerson():
    resultList = ["not at all", "in small doses", "in large doses"]
    percentTats = float(input("percentage of time spent playing video games?"))
    ffMiles = float(input("frequent flier miles earned per year?"))
    iceCream = float(input("liters of ice cream consumed per year?"))
    datingDataMat, datingLabels = file2matrix("datingTestSet2.txt")
    normMat, ranges, minVals = autoNorm(datingDataMat)
    inArr = np.array([ffMiles, percentTats, iceCream])
    # Normalize the input with the training set's minima and ranges,
    # and classify it against the normalized training data
    classifierResult = classify0((inArr - minVals) / ranges, normMat, datingLabels, 3)
    print("You will probably like this person:", resultList[classifierResult - 1])
```
Results (excerpt) from datingClassTest() and classifyPerson():
```
The classifier came back with:3,the real answer is:3
The classifier came back with:2,the real answer is:2
The classifier came back with:1,the real answer is:1
The classifier came back with:1,the real answer is:1
The classifier came back with:1,the real answer is:1
The classifier came back with:1,the real answer is:1
The classifier came back with:3,the real answer is:3
The classifier came back with:1,the real answer is:1
The classifier came back with:3,the real answer is:3
The classifier came back with:3,the real answer is:3
The classifier came back with:2,the real answer is:2
The classifier came back with:1,the real answer is:1
The classifier came back with:3,the real answer is:1
the total error rate is:0.050000
percentage of time spent playing video games?10
frequent flier miles earned per year?10000
liters of ice cream consumed per year?0.5
You will probably like this person: in small doses
```
Handwritten digit recognition
Each digit is stored as a 32x32 text image of 0s and 1s; first convert it to a vector:

```python
import numpy as np
import os
import operator

def img2vector(filename):
    # Flatten a 32x32 text image of '0'/'1' characters
    # into a single 1x1024 row vector
    returnVect = np.zeros((1, 1024))
    fr = open(filename)
    for i in range(32):
        lineStr = fr.readline()
        for j in range(32):
            returnVect[0, 32*i+j] = int(lineStr[j])
    return returnVect
```
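A quick check (the file name here is hypothetical; real files follow the `<digit>_<index>.txt` naming used below):

```python
testVector = img2vector('testDigits/0_13.txt')
print(testVector[0, 0:32])   # the first row of the 32x32 image
```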
```python
def handwritingClassTest():
    # Build the training matrix; the true digit is encoded in each
    # file name, which has the form "<digit>_<index>.txt"
    hwLabels = []
    trainingFileList = os.listdir('trainingDigits')
    m = len(trainingFileList)
    trainingMat = np.zeros((m, 1024))
    for i in range(m):
        fileNameStr = trainingFileList[i]
        fileStr = fileNameStr.split('.')[0]
        classNumStr = int(fileStr.split('_')[0])
        hwLabels.append(classNumStr)
        trainingMat[i, :] = img2vector('trainingDigits/' + fileNameStr)
    # Classify every test image and count misclassifications
    testFileList = os.listdir('testDigits')
    errorCount = 0.0
    mTest = len(testFileList)
    for i in range(mTest):
        fileNameStr = testFileList[i]
        fileStr = fileNameStr.split('.')[0]
        classNumStr = int(fileStr.split('_')[0])
        vectorUnderTest = img2vector('testDigits/%s' % fileNameStr)
        prediction = classify0(vectorUnderTest, trainingMat, hwLabels, 3)
        print("the classifier came back with: %d,the real answer is:%d" % (prediction, classNumStr))
        if classNumStr != prediction:
            errorCount += 1
    print("\nthe total number of errors is:%d" % errorCount)
    print("\nthe total error rate is:%f" % (errorCount / float(mTest)))
```
Results (excerpt) from handwritingClassTest():
```
the classifier came back with: 9,the real answer is:9
the classifier came back with: 9,the real answer is:9
the classifier came back with: 9,the real answer is:9
the classifier came back with: 9,the real answer is:9
the classifier came back with: 9,the real answer is:9
the classifier came back with: 9,the real answer is:9

the total number of errors is:10

the total error rate is:0.010571
```
In practice this algorithm has high time and space complexity, so it does not run efficiently. In addition, about 2 MB of storage must be set aside for the training vectors. To cut both the storage and the computation cost, a kd-tree can be introduced, which saves a large amount of distance computation.
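As a sketch of that idea, SciPy's cKDTree (an extra dependency, not used elsewhere in this post) can replace the exhaustive scan in classify0 with an indexed nearest-neighbor query:

```python
import numpy as np
from scipy.spatial import cKDTree
from collections import Counter

def classify_kdtree(inX, dataSet, labels, k):
    # In real use, build the tree once and reuse it across queries
    tree = cKDTree(dataSet)
    _, idx = tree.query(inX, k=k)  # indices of the k nearest training samples
    votes = Counter(labels[i] for i in np.atleast_1d(idx))
    return votes.most_common(1)[0][0]
```

Note that kd-trees pay off mainly in low dimensions; for the 1024-dimensional digit vectors above, the gain over a brute-force scan is limited.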
Summary
kNN is the simplest effective algorithm for classifying data. It is an instance-based learning method, so training samples close to the actual data must be available when the algorithm is used. kNN has to keep the entire dataset around: if the training set is large, it consumes a great deal of storage, and because a distance must be computed to every sample in the set, it can also be very slow in practice.
Another drawback of kNN is that it reveals nothing about the underlying structure of the data, so we cannot know what an average or a typical sample of each class looks like.
Original article: http://www.bieryun.com/2427.html