Support Vector Machine (SVM)

Introduction: the support vector machine (SVM) algorithm

Google Colab notebook (optional)

from google.colab import drive
drive.mount("/content/drive")
Mounted at /content/drive

SMO: an efficient optimization algorithm

import random

def loadDataSet(fileName):
  # Parse a tab-separated file: two feature values followed by a class label (+1/-1) per line.
  dataMat = []
  labelMat = []
  fr = open(fileName)
  for line in fr.readlines():
    lineArr = line.strip().split('\t')
    dataMat.append([float(lineArr[0]), float(lineArr[1])])
    labelMat.append(float(lineArr[2]))
  return dataMat, labelMat

def selectJrand(i, m):
  # Pick a random index j in [0, m) that differs from i.
  j = i
  while j == i:
    j = int(random.uniform(0, m))
  return j

def clipAlpha(aj, H, L):
  # Clip aj so it stays inside the box constraint [L, H].
  if aj > H:
    aj = H
  if L > aj:
    aj = L
  return aj
dataArr, labelArr = loadDataSet('/content/drive/MyDrive/Colab Notebooks/MachineLearning/《机器学习实战》/支持向量机/支持向量机/testSet.txt')
labelArr
[-1.0,
 -1.0,
 1.0,
 -1.0,
 1.0,
 1.0,
 1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 1.0,
 -1.0,
 1.0,
 1.0,
 -1.0,
 1.0,
 -1.0,
 -1.0,
 -1.0,
 1.0,
 -1.0,
 -1.0,
 1.0,
 1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 1.0,
 1.0,
 1.0,
 1.0,
 -1.0,
 1.0,
 -1.0,
 -1.0,
 1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 1.0,
 1.0,
 1.0,
 1.0,
 1.0,
 -1.0,
 1.0,
 1.0,
 -1.0,
 -1.0,
 1.0,
 1.0,
 -1.0,
 1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 1.0,
 -1.0,
 1.0,
 -1.0,
 -1.0,
 1.0,
 1.0,
 1.0,
 -1.0,
 1.0,
 1.0,
 -1.0,
 -1.0,
 1.0,
 -1.0,
 1.0,
 1.0,
 1.0,
 1.0,
 1.0,
 1.0,
 1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 1.0,
 -1.0,
 1.0,
 1.0,
 1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0]
from numpy import *
def smoSimple(dataMatIn, classLabels, C, toler, maxIter):
    dataMatrix = mat(dataMatIn); labelMat = mat(classLabels).transpose()
    b = 0; m,n = shape(dataMatrix)
    alphas = mat(zeros((m,1)))
    iter = 0
    while (iter < maxIter):
        alphaPairsChanged = 0
        for i in range(m):
            fXi = float(multiply(alphas,labelMat).T*(dataMatrix*dataMatrix[i,:].T)) + b
            Ei = fXi - float(labelMat[i])#if checks if an example violates KKT conditions
            if ((labelMat[i]*Ei < -toler) and (alphas[i] < C)) or ((labelMat[i]*Ei > toler) and (alphas[i] > 0)):
                j = selectJrand(i,m)
                fXj = float(multiply(alphas,labelMat).T*(dataMatrix*dataMatrix[j,:].T)) + b
                Ej = fXj - float(labelMat[j])
                alphaIold = alphas[i].copy(); alphaJold = alphas[j].copy();
                if (labelMat[i] != labelMat[j]):
                    L = max(0, alphas[j] - alphas[i])
                    H = min(C, C + alphas[j] - alphas[i])
                else:
                    L = max(0, alphas[j] + alphas[i] - C)
                    H = min(C, alphas[j] + alphas[i])
                if L==H:
                  print("L==H")
                  continue
                eta = 2.0 * dataMatrix[i,:]*dataMatrix[j,:].T - dataMatrix[i,:]*dataMatrix[i,:].T - dataMatrix[j,:]*dataMatrix[j,:].T
                if eta >= 0:
                  print("eta>=0")
                  continue
                alphas[j] -= labelMat[j]*(Ei - Ej)/eta
                alphas[j] = clipAlpha(alphas[j],H,L)
                if (abs(alphas[j] - alphaJold) < 0.00001):
                  print("j not moving enough")
                  continue
                alphas[i] += labelMat[j]*labelMat[i]*(alphaJold - alphas[j])#update i by the same amount as j
                                                                            #the update is in the opposite direction
                b1 = b - Ei- labelMat[i]*(alphas[i]-alphaIold)*dataMatrix[i,:]*dataMatrix[i,:].T - labelMat[j]*(alphas[j]-alphaJold)*dataMatrix[i,:]*dataMatrix[j,:].T
                b2 = b - Ej- labelMat[i]*(alphas[i]-alphaIold)*dataMatrix[i,:]*dataMatrix[j,:].T - labelMat[j]*(alphas[j]-alphaJold)*dataMatrix[j,:]*dataMatrix[j,:].T
                if (0 < alphas[i]) and (C > alphas[i]):
                  b = b1
                elif (0 < alphas[j]) and (C > alphas[j]):
                  b = b2
                else:
                  b = (b1 + b2)/2.0
                alphaPairsChanged += 1
                print("iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
        if (alphaPairsChanged == 0):
          iter += 1
        else: iter = 0
        print("iteration number: %d" % iter)
    return b,alphas

This is a simplified version of the SMO (Sequential Minimal Optimization) algorithm, used to train a support vector machine.

Input parameters:

  • dataMatIn: the feature matrix of the input data
  • classLabels: the class labels of the input data
  • C: the soft-margin constant; it controls how heavily margin violations are penalized in the objective function
  • toler: the numerical tolerance used when checking whether a sample violates the KKT conditions
  • maxIter: the maximum number of iterations

Outputs:

  • b: the bias (intercept) term of the classifier
  • alphas: the Lagrange multipliers; the non-zero entries correspond to the support vectors

Main steps of the algorithm:

  1. Initialize the working variables: the size of the data matrix, the vector of Lagrange multipliers, the bias b, and so on.
  2. Iterate until maxIter consecutive passes over the data produce no updates (the counter iter is reset to 0 whenever any alpha pair changes).
  3. For each sample, compute its predicted value and error, and check whether it violates the KKT conditions (the KKT conditions characterize the solution of the SVM optimization problem).
  4. If the KKT conditions are violated, randomly select a second sample and compute its predicted value and error.
  5. From the two class labels, compute the bounds L and H that constrain the new value of the second multiplier.
  6. Compute eta; if eta >= 0, no progress can be made with this pair, so skip it and move on to the next sample.
  7. Update alphas[j] and clip it to the interval [L, H].
  8. If alphas[j] did not move by a meaningful amount, skip this pair and move on to the next sample.
  9. Update alphas[i] by the same amount in the opposite direction, then recompute the bias b from the old and new alpha values and the corresponding samples (the update formulas are written out below).
  10. Count how many alpha pairs changed during the pass; whether any pair changed decides whether the iteration counter advances or is reset.
  11. Return the final bias b and the matrix of multipliers alphas.

Note: selectJrand() randomly selects the index of the second multiplier, and clipAlpha() clips a multiplier to its allowed range.
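
For reference, the updates in steps 6–9 are the standard SMO formulas. With the linear kernel used by smoSimple, where K(x_i, x_j) = x_i^T x_j, they read:

\eta = 2\,x_i^\top x_j - x_i^\top x_i - x_j^\top x_j

\alpha_j \leftarrow \mathrm{clip}\!\left(\alpha_j - \frac{y_j\,(E_i - E_j)}{\eta},\ L,\ H\right)

\alpha_i \leftarrow \alpha_i + y_i\,y_j\,(\alpha_j^{\mathrm{old}} - \alpha_j^{\mathrm{new}})

b_1 = b - E_i - y_i(\alpha_i^{\mathrm{new}} - \alpha_i^{\mathrm{old}})\,x_i^\top x_i - y_j(\alpha_j^{\mathrm{new}} - \alpha_j^{\mathrm{old}})\,x_i^\top x_j

b_2 = b - E_j - y_i(\alpha_i^{\mathrm{new}} - \alpha_i^{\mathrm{old}})\,x_i^\top x_j - y_j(\alpha_j^{\mathrm{new}} - \alpha_j^{\mathrm{old}})\,x_j^\top x_j

b is then set to b_1 if 0 < alphas[i] < C, to b_2 if 0 < alphas[j] < C, and to their average otherwise.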

b, alphas = smoSimple(dataArr, labelArr, 0.6, 0.001, 40)
<ipython-input-10-609e212d7149>:9: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
  fXi = float(multiply(alphas,labelMat).T*(dataMatrix*dataMatrix[i,:].T)) + b
<ipython-input-10-609e212d7149>:10: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
  Ei = fXi - float(labelMat[i])#if checks if an example violates KKT conditions
<ipython-input-10-609e212d7149>:13: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
  fXj = float(multiply(alphas,labelMat).T*(dataMatrix*dataMatrix[j,:].T)) + b
<ipython-input-10-609e212d7149>:14: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
  Ej = fXj - float(labelMat[j])


iter: 0 i:0, pairs changed 1
L==H
j not moving enough
L==H
L==H
L==H
L==H
L==H
……
j not moving enough
j not moving enough
iteration number: 40
b
matrix([[-3.82396091]])
alphas[alphas>0]
matrix([[0.09439001, 0.26843195, 0.0348491 , 0.32797286]])
shape(alphas[alphas>0])
(1, 4)
for i in range(100):
  if alphas[i] > 0:
    print(dataArr[i], labelArr[i])
[4.658191, 3.507396] -1.0
[3.457096, -0.082216] -1.0
[5.286862, -2.358286] 1.0
[6.080573, 0.418886] 1.0
import matplotlib.pyplot as plt
dataArr, labelArr = loadDataSet('/content/drive/MyDrive/Colab Notebooks/MachineLearning/《机器学习实战》/支持向量机/支持向量机/testSet.txt')
x = array(dataArr)[:, 0]
y = array(dataArr)[:, 1]
fig = plt.figure()
plt.scatter(x, y)
for i in range(100):
  if alphas[i] > 0:
    plt.scatter(dataArr[i][0], dataArr[i][1], color='red', s=20)
plt.show()
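
With b and alphas from smoSimple in hand, the weight vector w = Σ αᵢ·yᵢ·xᵢ can be recovered and used for prediction. A minimal sketch (it inlines the same computation as the calcWs helper defined later in this notebook; index 0 is just an arbitrary training point):

ws = zeros((2, 1))
for i in range(100):
  ws += multiply(alphas[i] * labelArr[i], mat(dataArr)[i, :].T)  # w = sum over i of alpha_i * y_i * x_i
print(ws)
# a point is classified by the sign of w·x + b; for training point 0 this should agree with labelArr[0] (-1.0)
print(mat(dataArr)[0, :] * ws + b, labelArr[0])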

def kernelTrans(X, A, kTup): #calc the kernel or transform data to a higher dimensional space
    m,n = shape(X)
    K = mat(zeros((m,1)))
    if kTup[0]=='lin': K = X * A.T   #linear kernel
    elif kTup[0]=='rbf':
        for j in range(m):
            deltaRow = X[j,:] - A
            K[j] = deltaRow*deltaRow.T
        K = exp(K/(-1*kTup[1]**2)) #divide in NumPy is element-wise not matrix like Matlab
    else: raise NameError('Houston We Have a Problem -- \
    That Kernel is not recognized')
    return K

This function evaluates a kernel between every row of a data set and one reference sample, which is how the data is implicitly mapped into a higher-dimensional space. Its inputs are a data matrix X, a reference sample A, and a tuple kTup giving the kernel type and its parameter.

First, the function reads the number of rows m of X and creates an m×1 column vector K of zeros.

It then branches on the kernel type. If kTup[0] is 'lin', the linear kernel is used: K is simply X * A.T, the inner product of every row of X with A.

If kTup[0] is 'rbf', the radial basis function (RBF) kernel is used. The loop stores the squared Euclidean distance between each row of X and A in K, and the function then applies K = exp(K / (-σ²)), i.e. K[j] = exp(-‖x_j − A‖² / σ²), where σ is kTup[1]. (The division here is element-wise, unlike MATLAB's matrix division.)

If the kernel type is neither 'lin' nor 'rbf', a NameError is raised because the kernel is not recognized.

Finally, the function returns the computed vector K.
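
A quick way to sanity-check the 'rbf' branch is to evaluate it on a tiny, made-up matrix and compare against a direct NumPy computation (Xtoy and the σ = 1.3 value below are illustrative choices, not taken from the original data):

Xtoy = mat([[1.0, 0.0],
            [0.0, 1.0],
            [1.0, 1.0]])
Krbf = kernelTrans(Xtoy, Xtoy[0, :], ('rbf', 1.3))  # kernel value of every row against row 0
# direct check: exp(-||x_j - x_0||^2 / sigma^2), the same formula kernelTrans applies
Kcheck = exp(-sum(power(array(Xtoy - Xtoy[0, :]), 2), axis=1) / 1.3 ** 2)
print(Krbf.T)
print(Kcheck)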

class optStruct:
    def __init__(self,dataMatIn, classLabels, C, toler, kTup):  # Initialize the structure with the parameters
        self.X = dataMatIn
        self.labelMat = classLabels
        self.C = C
        self.tol = toler
        self.m = shape(dataMatIn)[0]
        self.alphas = mat(zeros((self.m,1)))
        self.b = 0
        self.eCache = mat(zeros((self.m,2))) #first column is valid flag
        self.K = mat(zeros((self.m,self.m)))
        for i in range(self.m):
            self.K[:,i] = kernelTrans(self.X, self.X[i,:], kTup)

This code defines a class named optStruct that bundles the variables used by the full Platt SMO algorithm.

The constructor __init__ takes five parameters: dataMatIn, classLabels, C, toler and kTup.

  • dataMatIn: the data matrix
  • classLabels: the class labels
  • C: the soft-margin constant that weights the penalty term in the objective function
  • toler: the numerical tolerance used in the KKT checks
  • kTup: a tuple describing the kernel type and its parameter

The constructor stores these parameters in member variables.

self.alphas is an m×1 matrix holding the Lagrange multipliers.

self.b is the bias term of the classifier.

self.eCache is an m×2 error cache; the first column is a validity flag and the second column holds the cached error value.

self.K is an m×m matrix holding the kernel values between every pair of samples. It is filled by a loop over i from 0 to m−1: each iteration passes row i of self.X to kernelTrans and stores the resulting column in column i of self.K.
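
A small sketch of how the structure is built and what the kernel matrix looks like (the ('rbf', 1.3) kernel is an illustrative choice; C=0.6 and toler=0.001 mirror the values used with smoSimple above):

oS = optStruct(mat(dataArr), mat(labelArr).transpose(), 0.6, 0.001, ('rbf', 1.3))
print(shape(oS.K))             # (100, 100): one kernel value for every pair of samples
print(allclose(oS.K, oS.K.T))  # True: the kernel matrix is symmetric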

def calcEk(oS, k):
    fXk = float(multiply(oS.alphas,oS.labelMat).T*oS.K[:,k] + oS.b)
    Ek = fXk - float(oS.labelMat[k])
    return Ek
def selectJ(i, oS, Ei):         #this is the second-choice heuristic, and calcs Ej
    maxK = -1; maxDeltaE = 0; Ej = 0
    oS.eCache[i] = [1,Ei]  #set valid #choose the alpha that gives the maximum delta E
    validEcacheList = nonzero(oS.eCache[:,0].A)[0]
    if (len(validEcacheList)) > 1:
        for k in validEcacheList:   #loop through valid Ecache values and find the one that maximizes delta E
            if k == i: continue #don't calc for i, waste of time
            Ek = calcEk(oS, k)
            deltaE = abs(Ei - Ek)
            if (deltaE > maxDeltaE):
                maxK = k; maxDeltaE = deltaE; Ej = Ek
        return maxK, Ej
    else:   #in this case (first time around) we don't have any valid eCache values
        j = selectJrand(i, oS.m)
        Ej = calcEk(oS, j)
    return j, Ej
def updateEk(oS, k):#after any alpha has changed update the new value in the cache
    Ek = calcEk(oS, k)
    oS.eCache[k] = [1,Ek]
def innerL(i, oS):
    Ei = calcEk(oS, i)
    if ((oS.labelMat[i]*Ei < -oS.tol) and (oS.alphas[i] < oS.C)) or ((oS.labelMat[i]*Ei > oS.tol) and (oS.alphas[i] > 0)):
        j,Ej = selectJ(i, oS, Ei) #this has been changed from selectJrand
        alphaIold = oS.alphas[i].copy(); alphaJold = oS.alphas[j].copy();
        if (oS.labelMat[i] != oS.labelMat[j]):
            L = max(0, oS.alphas[j] - oS.alphas[i])
            H = min(oS.C, oS.C + oS.alphas[j] - oS.alphas[i])
        else:
            L = max(0, oS.alphas[j] + oS.alphas[i] - oS.C)
            H = min(oS.C, oS.alphas[j] + oS.alphas[i])
        if L==H:
          print("L==H")
          return 0
        eta = 2.0 * oS.K[i,j] - oS.K[i,i] - oS.K[j,j] #changed for kernel
        if eta >= 0:
          print("eta>=0")
          return 0
        oS.alphas[j] -= oS.labelMat[j]*(Ei - Ej)/eta
        oS.alphas[j] = clipAlpha(oS.alphas[j],H,L)
        updateEk(oS, j) #added this for the Ecache
        if (abs(oS.alphas[j] - alphaJold) < 0.00001):
          print("j not moving enough")
          return 0
        oS.alphas[i] += oS.labelMat[j]*oS.labelMat[i]*(alphaJold - oS.alphas[j])#update i by the same amount as j
        updateEk(oS, i) #added this for the Ecache                    #the update is in the opposite direction
        b1 = oS.b - Ei- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.K[i,i] - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.K[i,j]
        b2 = oS.b - Ej- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.K[i,j]- oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.K[j,j]
        if (0 < oS.alphas[i]) and (oS.C > oS.alphas[i]): oS.b = b1
        elif (0 < oS.alphas[j]) and (oS.C > oS.alphas[j]): oS.b = b2
        else: oS.b = (b1 + b2)/2.0
        return 1
    else: return 0
def smoP(dataMatIn, classLabels, C, toler, maxIter,kTup=('lin', 0)):    #full Platt SMO
    oS = optStruct(mat(dataMatIn),mat(classLabels).transpose(),C,toler, kTup)
    iter = 0
    entireSet = True; alphaPairsChanged = 0
    while (iter < maxIter) and ((alphaPairsChanged > 0) or (entireSet)):
        alphaPairsChanged = 0
        if entireSet:   #go over all
            for i in range(oS.m):
                alphaPairsChanged += innerL(i,oS)
                print("fullSet, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
            iter += 1
        else:#go over non-bound (railed) alphas
            nonBoundIs = nonzero((oS.alphas.A > 0) * (oS.alphas.A < C))[0]
            for i in nonBoundIs:
                alphaPairsChanged += innerL(i,oS)
                print("non-bound, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
            iter += 1
        if entireSet: entireSet = False #toggle entire set loop
        elif (alphaPairsChanged == 0): entireSet = True
        print("iteration number: %d" % iter)
    return oS.b,oS.alphas
import matplotlib.pyplot as plt
dataArr, labelArr = loadDataSet('/content/drive/MyDrive/Colab Notebooks/MachineLearning/《机器学习实战》/支持向量机/支持向量机/testSet.txt')
b, alphas = smoP(dataArr, labelArr, 0.6, 0.001, 40)
x = array(dataArr)[:, 0]
y = array(dataArr)[:, 1]
fig = plt.figure()
plt.scatter(x, y)
for i in range(100):
  if alphas[i] > 0:
    plt.scatter(dataArr[i][0], dataArr[i][1], color='red', s=20)
plt.show()
<ipython-input-48-c1e41c4ea928>:2: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
  fXk = float(multiply(oS.alphas,oS.labelMat).T*oS.K[:,k] + oS.b)
<ipython-input-48-c1e41c4ea928>:3: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
  Ek = fXk - float(oS.labelMat[k])


fullSet, iter: 0 i:0, pairs changed 1
fullSet, iter: 0 i:1, pairs changed 1
fullSet, iter: 0 i:2, pairs changed 2
fullSet, iter: 0 i:3, pairs changed 2
fullSet, iter: 0 i:4, pairs changed 3
fullSet, iter: 0 i:5, pairs changed 4
fullSet, iter: 0 i:6, pairs changed 4
fullSet, iter: 0 i:7, pairs changed 4
j not moving enough
fullSet, iter: 0 i:8, pairs changed 4
fullSet, iter: 0 i:9, pairs changed 4
j not moving enough
fullSet, iter: 0 i:10, pairs changed 4
fullSet, iter: 0 i:11, pairs changed 4
fullSet, iter: 0 i:12, pairs changed 4
fullSet, iter: 0 i:13, pairs changed 4
fullSet, iter: 0 i:14, pairs changed 4
fullSet, iter: 0 i:15, pairs changed 4
fullSet, iter: 0 i:16, pairs changed 4
fullSet, iter: 0 i:17, pairs changed 5
fullSet, iter: 0 i:18, pairs changed 6
fullSet, iter: 0 i:19, pairs changed 6
j not moving enough
fullSet, iter: 0 i:20, pairs changed 6
j not moving enough
fullSet, iter: 0 i:21, pairs changed 6
fullSet, iter: 0 i:22, pairs changed 6
fullSet, iter: 0 i:23, pairs changed 7
fullSet, iter: 0 i:24, pairs changed 7
j not moving enough
fullSet, iter: 0 i:25, pairs changed 7
L==H
fullSet, iter: 0 i:26, pairs changed 7
fullSet, iter: 0 i:27, pairs changed 7
fullSet, iter: 0 i:28, pairs changed 7
L==H
fullSet, iter: 0 i:29, pairs changed 7
fullSet, iter: 0 i:30, pairs changed 7
fullSet, iter: 0 i:31, pairs changed 7
fullSet, iter: 0 i:32, pairs changed 7
fullSet, iter: 0 i:33, pairs changed 7
fullSet, iter: 0 i:34, pairs changed 7
fullSet, iter: 0 i:35, pairs changed 7
fullSet, iter: 0 i:36, pairs changed 7
fullSet, iter: 0 i:37, pairs changed 7
fullSet, iter: 0 i:38, pairs changed 7
j not moving enough
fullSet, iter: 0 i:39, pairs changed 7
fullSet, iter: 0 i:40, pairs changed 7
fullSet, iter: 0 i:41, pairs changed 7
fullSet, iter: 0 i:42, pairs changed 7
fullSet, iter: 0 i:43, pairs changed 7
fullSet, iter: 0 i:44, pairs changed 7
fullSet, iter: 0 i:45, pairs changed 7
L==H
fullSet, iter: 0 i:46, pairs changed 7
fullSet, iter: 0 i:47, pairs changed 7
fullSet, iter: 0 i:48, pairs changed 7
fullSet, iter: 0 i:49, pairs changed 7
fullSet, iter: 0 i:50, pairs changed 7
fullSet, iter: 0 i:51, pairs changed 7
L==H
fullSet, iter: 0 i:52, pairs changed 7
fullSet, iter: 0 i:53, pairs changed 7
L==H
fullSet, iter: 0 i:54, pairs changed 7
L==H
fullSet, iter: 0 i:55, pairs changed 7
fullSet, iter: 0 i:56, pairs changed 7
L==H
fullSet, iter: 0 i:57, pairs changed 7
fullSet, iter: 0 i:58, pairs changed 7
fullSet, iter: 0 i:59, pairs changed 7
fullSet, iter: 0 i:60, pairs changed 7
fullSet, iter: 0 i:61, pairs changed 7
L==H
fullSet, iter: 0 i:62, pairs changed 7
fullSet, iter: 0 i:63, pairs changed 7
fullSet, iter: 0 i:64, pairs changed 7
fullSet, iter: 0 i:65, pairs changed 7
fullSet, iter: 0 i:66, pairs changed 7
fullSet, iter: 0 i:67, pairs changed 7
fullSet, iter: 0 i:68, pairs changed 7
L==H
fullSet, iter: 0 i:69, pairs changed 7
fullSet, iter: 0 i:70, pairs changed 7
fullSet, iter: 0 i:71, pairs changed 7
fullSet, iter: 0 i:72, pairs changed 7
fullSet, iter: 0 i:73, pairs changed 7
fullSet, iter: 0 i:74, pairs changed 7
fullSet, iter: 0 i:75, pairs changed 7
fullSet, iter: 0 i:76, pairs changed 7
fullSet, iter: 0 i:77, pairs changed 7
fullSet, iter: 0 i:78, pairs changed 7
L==H
fullSet, iter: 0 i:79, pairs changed 7
fullSet, iter: 0 i:80, pairs changed 7
fullSet, iter: 0 i:81, pairs changed 7
L==H
fullSet, iter: 0 i:82, pairs changed 7
fullSet, iter: 0 i:83, pairs changed 7
fullSet, iter: 0 i:84, pairs changed 7
fullSet, iter: 0 i:85, pairs changed 7
fullSet, iter: 0 i:86, pairs changed 7
fullSet, iter: 0 i:87, pairs changed 7
fullSet, iter: 0 i:88, pairs changed 7
fullSet, iter: 0 i:89, pairs changed 7
fullSet, iter: 0 i:90, pairs changed 7
fullSet, iter: 0 i:91, pairs changed 7
fullSet, iter: 0 i:92, pairs changed 7
fullSet, iter: 0 i:93, pairs changed 7
fullSet, iter: 0 i:94, pairs changed 7
fullSet, iter: 0 i:95, pairs changed 7
fullSet, iter: 0 i:96, pairs changed 7
fullSet, iter: 0 i:97, pairs changed 7
fullSet, iter: 0 i:98, pairs changed 7
fullSet, iter: 0 i:99, pairs changed 7
iteration number: 1
j not moving enough
non-bound, iter: 1 i:0, pairs changed 0
non-bound, iter: 1 i:4, pairs changed 1
non-bound, iter: 1 i:5, pairs changed 2
j not moving enough
non-bound, iter: 1 i:17, pairs changed 2
non-bound, iter: 1 i:18, pairs changed 3
non-bound, iter: 1 i:23, pairs changed 4
iteration number: 2
j not moving enough
non-bound, iter: 2 i:0, pairs changed 0
j not moving enough
non-bound, iter: 2 i:5, pairs changed 0
j not moving enough
non-bound, iter: 2 i:17, pairs changed 0
non-bound, iter: 2 i:23, pairs changed 0
j not moving enough
non-bound, iter: 2 i:52, pairs changed 0
non-bound, iter: 2 i:55, pairs changed 0
iteration number: 3
j not moving enough
fullSet, iter: 3 i:0, pairs changed 0
fullSet, iter: 3 i:1, pairs changed 0
fullSet, iter: 3 i:2, pairs changed 0
fullSet, iter: 3 i:3, pairs changed 0
fullSet, iter: 3 i:4, pairs changed 0
j not moving enough
fullSet, iter: 3 i:5, pairs changed 0
fullSet, iter: 3 i:6, pairs changed 0
fullSet, iter: 3 i:7, pairs changed 0
fullSet, iter: 3 i:8, pairs changed 0
fullSet, iter: 3 i:9, pairs changed 0
fullSet, iter: 3 i:10, pairs changed 0
fullSet, iter: 3 i:11, pairs changed 0
fullSet, iter: 3 i:12, pairs changed 0
fullSet, iter: 3 i:13, pairs changed 0
fullSet, iter: 3 i:14, pairs changed 0
fullSet, iter: 3 i:15, pairs changed 0
fullSet, iter: 3 i:16, pairs changed 0
j not moving enough
fullSet, iter: 3 i:17, pairs changed 0
fullSet, iter: 3 i:18, pairs changed 0
fullSet, iter: 3 i:19, pairs changed 0
fullSet, iter: 3 i:20, pairs changed 0
fullSet, iter: 3 i:21, pairs changed 0
fullSet, iter: 3 i:22, pairs changed 0
fullSet, iter: 3 i:23, pairs changed 0
fullSet, iter: 3 i:24, pairs changed 0
fullSet, iter: 3 i:25, pairs changed 0
fullSet, iter: 3 i:26, pairs changed 0
fullSet, iter: 3 i:27, pairs changed 0
fullSet, iter: 3 i:28, pairs changed 0
j not moving enough
fullSet, iter: 3 i:29, pairs changed 0
fullSet, iter: 3 i:30, pairs changed 0
fullSet, iter: 3 i:31, pairs changed 0
fullSet, iter: 3 i:32, pairs changed 0
fullSet, iter: 3 i:33, pairs changed 0
fullSet, iter: 3 i:34, pairs changed 0
fullSet, iter: 3 i:35, pairs changed 0
fullSet, iter: 3 i:36, pairs changed 0
fullSet, iter: 3 i:37, pairs changed 0
fullSet, iter: 3 i:38, pairs changed 0
fullSet, iter: 3 i:39, pairs changed 0
fullSet, iter: 3 i:40, pairs changed 0
fullSet, iter: 3 i:41, pairs changed 0
fullSet, iter: 3 i:42, pairs changed 0
fullSet, iter: 3 i:43, pairs changed 0
fullSet, iter: 3 i:44, pairs changed 0
fullSet, iter: 3 i:45, pairs changed 0
fullSet, iter: 3 i:46, pairs changed 0
fullSet, iter: 3 i:47, pairs changed 0
fullSet, iter: 3 i:48, pairs changed 0
fullSet, iter: 3 i:49, pairs changed 0
fullSet, iter: 3 i:50, pairs changed 0
fullSet, iter: 3 i:51, pairs changed 0
j not moving enough
fullSet, iter: 3 i:52, pairs changed 0
fullSet, iter: 3 i:53, pairs changed 0
L==H
fullSet, iter: 3 i:54, pairs changed 0
fullSet, iter: 3 i:55, pairs changed 0
fullSet, iter: 3 i:56, pairs changed 0
fullSet, iter: 3 i:57, pairs changed 0
fullSet, iter: 3 i:58, pairs changed 0
fullSet, iter: 3 i:59, pairs changed 0
fullSet, iter: 3 i:60, pairs changed 0
fullSet, iter: 3 i:61, pairs changed 0
fullSet, iter: 3 i:62, pairs changed 0
fullSet, iter: 3 i:63, pairs changed 0
fullSet, iter: 3 i:64, pairs changed 0
fullSet, iter: 3 i:65, pairs changed 0
fullSet, iter: 3 i:66, pairs changed 0
fullSet, iter: 3 i:67, pairs changed 0
fullSet, iter: 3 i:68, pairs changed 0
fullSet, iter: 3 i:69, pairs changed 0
fullSet, iter: 3 i:70, pairs changed 0
fullSet, iter: 3 i:71, pairs changed 0
fullSet, iter: 3 i:72, pairs changed 0
fullSet, iter: 3 i:73, pairs changed 0
fullSet, iter: 3 i:74, pairs changed 0
fullSet, iter: 3 i:75, pairs changed 0
fullSet, iter: 3 i:76, pairs changed 0
fullSet, iter: 3 i:77, pairs changed 0
fullSet, iter: 3 i:78, pairs changed 0
fullSet, iter: 3 i:79, pairs changed 0
fullSet, iter: 3 i:80, pairs changed 0
fullSet, iter: 3 i:81, pairs changed 0
fullSet, iter: 3 i:82, pairs changed 0
fullSet, iter: 3 i:83, pairs changed 0
fullSet, iter: 3 i:84, pairs changed 0
fullSet, iter: 3 i:85, pairs changed 0
fullSet, iter: 3 i:86, pairs changed 0
fullSet, iter: 3 i:87, pairs changed 0
fullSet, iter: 3 i:88, pairs changed 0
fullSet, iter: 3 i:89, pairs changed 0
fullSet, iter: 3 i:90, pairs changed 0
fullSet, iter: 3 i:91, pairs changed 0
fullSet, iter: 3 i:92, pairs changed 0
fullSet, iter: 3 i:93, pairs changed 0
fullSet, iter: 3 i:94, pairs changed 0
fullSet, iter: 3 i:95, pairs changed 0
fullSet, iter: 3 i:96, pairs changed 0
fullSet, iter: 3 i:97, pairs changed 0
fullSet, iter: 3 i:98, pairs changed 0
fullSet, iter: 3 i:99, pairs changed 0
iteration number: 4

def calcWs(alphas,dataArr,classLabels):
    X = mat(dataArr); labelMat = mat(classLabels).transpose()
    m,n = shape(X)
    w = zeros((n,1))
    for i in range(m):
        w += multiply(alphas[i]*labelMat[i],X[i,:].T)
    return w
def testRbf(k1=1.3):
    dataArr,labelArr = loadDataSet('testSetRBF.txt')
    b,alphas = smoP(dataArr, labelArr, 200, 0.0001, 10000, ('rbf', k1)) #C=200 important
    datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
    svInd=nonzero(alphas.A>0)[0]
    sVs=datMat[svInd] #get matrix of only support vectors
    labelSV = labelMat[svInd];
    print("there are %d Support Vectors" % shape(sVs)[0])
    m,n = shape(datMat)
    errorCount = 0
    for i in range(m):
        kernelEval = kernelTrans(sVs,datMat[i,:],('rbf', k1))
        predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
        if sign(predict)!=sign(labelArr[i]): errorCount += 1
    print("the training error rate is: %f" % (float(errorCount)/m))
    dataArr,labelArr = loadDataSet('testSetRBF2.txt')
    errorCount = 0
    datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
    m,n = shape(datMat)
    for i in range(m):
        kernelEval = kernelTrans(sVs,datMat[i,:],('rbf', k1))
        predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
        if sign(predict)!=sign(labelArr[i]): errorCount += 1
    print("the test error rate is: %f" % (float(errorCount)/m))
def img2vector(filename):
    returnVect = zeros((1,1024))
    fr = open(filename)
    for i in range(32):
        lineStr = fr.readline()
        for j in range(32):
            returnVect[0,32*i+j] = int(lineStr[j])
    return returnVect
def loadImages(dirName):
    from os import listdir
    hwLabels = []
    trainingFileList = listdir(dirName)           #load the training set
    m = len(trainingFileList)
    trainingMat = zeros((m,1024))
    for i in range(m):
        fileNameStr = trainingFileList[i]
        fileStr = fileNameStr.split('.')[0]     #take off .txt
        classNumStr = int(fileStr.split('_')[0])
        if classNumStr == 9: hwLabels.append(-1)
        else: hwLabels.append(1)
        trainingMat[i,:] = img2vector('%s/%s' % (dirName, fileNameStr))
    return trainingMat, hwLabels
def testDigits(kTup=('rbf', 10)):
    dataArr,labelArr = loadImages('trainingDigits')
    b,alphas = smoP(dataArr, labelArr, 200, 0.0001, 10000, kTup)
    datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
    svInd=nonzero(alphas.A>0)[0]
    sVs=datMat[svInd]
    labelSV = labelMat[svInd];
    print("there are %d Support Vectors" % shape(sVs)[0])
    m,n = shape(datMat)
    errorCount = 0
    for i in range(m):
        kernelEval = kernelTrans(sVs,datMat[i,:],kTup)
        predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
        if sign(predict)!=sign(labelArr[i]): errorCount += 1
    print("the training error rate is: %f" % (float(errorCount)/m))
    dataArr,labelArr = loadImages('testDigits')
    errorCount = 0
    datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
    m,n = shape(datMat)
    for i in range(m):
        kernelEval = kernelTrans(sVs,datMat[i,:],kTup)
        predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
        if sign(predict)!=sign(labelArr[i]): errorCount += 1
    print("the test error rate is: %f" % (float(errorCount)/m))
class optStructK:
    def __init__(self,dataMatIn, classLabels, C, toler):  # Initialize the structure with the parameters
        self.X = dataMatIn
        self.labelMat = classLabels
        self.C = C
        self.tol = toler
        self.m = shape(dataMatIn)[0]
        self.alphas = mat(zeros((self.m,1)))
        self.b = 0
        self.eCache = mat(zeros((self.m,2))) #first column is valid flag

def calcEkK(oS, k):
    fXk = float(multiply(oS.alphas,oS.labelMat).T*(oS.X*oS.X[k,:].T)) + oS.b
    Ek = fXk - float(oS.labelMat[k])
    return Ek

def selectJK(i, oS, Ei):         #this is the second-choice heuristic, and calcs Ej
    maxK = -1; maxDeltaE = 0; Ej = 0
    oS.eCache[i] = [1,Ei]  #set valid #choose the alpha that gives the maximum delta E
    validEcacheList = nonzero(oS.eCache[:,0].A)[0]
    if (len(validEcacheList)) > 1:
        for k in validEcacheList:   #loop through valid Ecache values and find the one that maximizes delta E
            if k == i: continue #don't calc for i, waste of time
            Ek = calcEkK(oS, k)
            deltaE = abs(Ei - Ek)
            if (deltaE > maxDeltaE):
                maxK = k; maxDeltaE = deltaE; Ej = Ek
        return maxK, Ej
    else:   #in this case (first time around) we don't have any valid eCache values
        j = selectJrand(i, oS.m)
        Ej = calcEkK(oS, j)
    return j, Ej

def updateEkK(oS, k):#after any alpha has changed update the new value in the cache
    Ek = calcEkK(oS, k)
    oS.eCache[k] = [1,Ek]

def innerLK(i, oS):
    Ei = calcEkK(oS, i)
    if ((oS.labelMat[i]*Ei < -oS.tol) and (oS.alphas[i] < oS.C)) or ((oS.labelMat[i]*Ei > oS.tol) and (oS.alphas[i] > 0)):
        j,Ej = selectJK(i, oS, Ei) #this has been changed from selectJrand
        alphaIold = oS.alphas[i].copy(); alphaJold = oS.alphas[j].copy();
        if (oS.labelMat[i] != oS.labelMat[j]):
            L = max(0, oS.alphas[j] - oS.alphas[i])
            H = min(oS.C, oS.C + oS.alphas[j] - oS.alphas[i])
        else:
            L = max(0, oS.alphas[j] + oS.alphas[i] - oS.C)
            H = min(oS.C, oS.alphas[j] + oS.alphas[i])
        if L==H:
          print("L==H")
          return 0
        eta = 2.0 * oS.X[i,:]*oS.X[j,:].T - oS.X[i,:]*oS.X[i,:].T - oS.X[j,:]*oS.X[j,:].T
        if eta >= 0:
          print("eta>=0")
          return 0
        oS.alphas[j] -= oS.labelMat[j]*(Ei - Ej)/eta
        oS.alphas[j] = clipAlpha(oS.alphas[j],H,L)
        updateEkK(oS, j) #added this for the Ecache
        if (abs(oS.alphas[j] - alphaJold) < 0.00001):
          print("j not moving enough")
          return 0
        oS.alphas[i] += oS.labelMat[j]*oS.labelMat[i]*(alphaJold - oS.alphas[j])#update i by the same amount as j
        updateEkK(oS, i) #added this for the Ecache                    #the update is in the opposite direction
        b1 = oS.b - Ei- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.X[i,:]*oS.X[i,:].T - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.X[i,:]*oS.X[j,:].T
        b2 = oS.b - Ej- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.X[i,:]*oS.X[j,:].T - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.X[j,:]*oS.X[j,:].T
        if (0 < oS.alphas[i]) and (oS.C > oS.alphas[i]): oS.b = b1
        elif (0 < oS.alphas[j]) and (oS.C > oS.alphas[j]): oS.b = b2
        else: oS.b = (b1 + b2)/2.0
        return 1
    else: return 0

def smoPK(dataMatIn, classLabels, C, toler, maxIter):    #full Platt SMO
    oS = optStructK(mat(dataMatIn),mat(classLabels).transpose(),C,toler)
    iter = 0
    entireSet = True; alphaPairsChanged = 0
    while (iter < maxIter) and ((alphaPairsChanged > 0) or (entireSet)):
        alphaPairsChanged = 0
        if entireSet:   #go over all
            for i in range(oS.m):
                alphaPairsChanged += innerLK(i,oS)
                print("fullSet, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
            iter += 1
        else:#go over non-bound (railed) alphas
            nonBoundIs = nonzero((oS.alphas.A > 0) * (oS.alphas.A < C))[0]
            for i in nonBoundIs:
                alphaPairsChanged += innerLK(i,oS)
                print("non-bound, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
            iter += 1
        if entireSet: entireSet = False #toggle entire set loop
        elif (alphaPairsChanged == 0): entireSet = True
        print("iteration number: %d" % iter)
    return oS.b,oS.alphas
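
calcWs is defined above but never called in the notebook. A minimal sketch of using it, together with the b and alphas returned by smoP on the testSet data, to draw the separating line w·x + b = 0 over the scatter plot:

ws = calcWs(alphas, dataArr, labelArr)
x0 = array(dataArr)[:, 0]
xs = arange(x0.min(), x0.max(), 0.1)
# on the boundary w[0]*x + w[1]*y + b = 0, so y = (-b - w[0]*x) / w[1]
ys = array((-b - ws[0][0] * xs) / ws[1][0]).flatten()
plt.scatter(x0, array(dataArr)[:, 1])
plt.plot(xs, ys, color='green')
plt.show()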