Logistic Regression
Keywords: logistic function, maximum likelihood estimation, gradient descent, stochastic gradient descent (SGD)
1. The Principle of Logistic Regression
The main idea of classification with logistic regression is to fit a regression formula to the decision boundary based on the available data, and then use that boundary to classify. The word "regression" here comes from "best fit": the goal is to find the best-fitting set of parameters.
Training the classifier therefore means searching for the best-fitting parameters with an optimization algorithm. The following sections introduce the mathematical principles behind this binary-output classifier.
Logistic Regression and Linear Regression are built on similar principles, and the process can be summarized as follows:
(1) Find a suitable prediction function, usually written as h. This is the classification function we are looking for; it predicts the label of an input. This step is crucial and requires some understanding or analysis of the data, so that we know (or can guess) the rough form of the prediction function, e.g. whether it is linear or nonlinear.
(2) Construct a Cost function (loss function) that measures the deviation between the predicted output (h) and the true class label (y) of a training sample; it can be the difference (h - y) or some other form. Aggregating the losses over all training samples, either summed or averaged, gives the function J(θ), which measures the overall deviation between predictions and true labels.
(3) Clearly, the smaller the value of J(θ), the more accurate the prediction function h. So this step amounts to finding the minimum of J(θ). There are different ways to minimize a function; Logistic Regression uses gradient descent.
1) Constructing the prediction function
Although its name contains "regression", Logistic Regression is actually a classification method for two-class problems (i.e. the output takes only two values). First we need a prediction function h whose output can represent the two classes, so the logistic function (also called the sigmoid function) is used:

g(z) = \frac{1}{1 + e^{-z}}
The graph of this function is an S-shaped curve that maps any real-valued input into the interval (0, 1).
The prediction function can then be written as:

h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}

The value of h_\theta(x) is interpreted as the probability that the input x belongs to class 1, so that P(y=1 \mid x;\theta) = h_\theta(x) and P(y=0 \mid x;\theta) = 1 - h_\theta(x).
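To make this concrete, here is a minimal NumPy sketch of the sigmoid and the prediction function; the helper names sigmoid and hypothesis are illustrative, not part of any library.

import numpy as np

def sigmoid(z):
    # logistic (sigmoid) function g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, x):
    # prediction function h_theta(x) = g(theta^T x), read as P(y=1 | x; theta)
    return sigmoid(np.dot(x, theta))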
2) Constructing the cost function
The Cost function and the function J(θ) are derived from maximum likelihood estimation.
The probability that a single sample takes its true label, i.e. its likelihood, can be written as:

P(y \mid x; \theta) = h_\theta(x)^{y} \, (1 - h_\theta(x))^{1 - y}
The probability that all samples take their true labels, i.e. the likelihood function, is:

L(\theta) = \prod_{i=1}^{m} P(y^{(i)} \mid x^{(i)}; \theta) = \prod_{i=1}^{m} h_\theta(x^{(i)})^{y^{(i)}} \, (1 - h_\theta(x^{(i)}))^{1 - y^{(i)}}
The log-likelihood function is:

l(\theta) = \log L(\theta) = \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log \left(1 - h_\theta(x^{(i)})\right) \right]
Maximum likelihood estimation looks for the θ that maximizes l(θ); this could be solved directly with gradient ascent, and the resulting θ is the best parameter we want. To phrase it as a minimization problem instead, the cost function is taken as J(\theta) = -\frac{1}{m} l(\theta), so that minimizing J(θ) is equivalent to maximizing l(θ).
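The log-likelihood and the corresponding cost J(θ) translate directly into code. The sketch below reuses the sigmoid helper from the previous snippet; the clipping constant is only an implementation choice to avoid log(0), not part of the derivation.

def log_likelihood(theta, X, y):
    # l(theta) = sum_i [ y_i * log h(x_i) + (1 - y_i) * log(1 - h(x_i)) ]
    h = np.clip(sigmoid(X @ theta), 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))

def cost_J(theta, X, y):
    # J(theta) = -(1/m) * l(theta); minimizing J maximizes the likelihood
    return -log_likelihood(theta, X, y) / len(y)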
3) Finding the minimum of J(θ) with gradient descent
The minimum of J(θ) can be found with gradient descent, which gives the following update rule for θ:

\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)
where α is the learning rate (step size). Computing the partial derivative gives:

\frac{\partial}{\partial \theta_j} J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}
The derivation uses the following identity, the derivative of the sigmoid function:

g'(z) = g(z) \, \left(1 - g(z)\right)
Therefore, the update rule for θ can be written as:

\theta_j := \theta_j - \frac{\alpha}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}
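The update rule above maps almost line for line onto code. The following is a minimal batch gradient descent sketch (the function name and hyperparameter defaults are illustrative), again reusing the sigmoid helper:

def gradient_descent(X, y, alpha=0.1, n_iters=1000):
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iters):
        h = sigmoid(X @ theta)          # h_theta(x) for every sample
        grad = X.T @ (h - y) / m        # (1/m) * sum_i (h_i - y_i) * x_ij
        theta -= alpha * grad           # theta_j := theta_j - alpha * dJ/dtheta_j
    return theta

Stochastic gradient descent (SGD), mentioned in the keywords, applies the same update per sample (or per mini-batch) rather than over the whole training set, which scales better to large data.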
2. Hands-on Practice

The constructor signature of sklearn's LogisticRegression is:
sklearn.linear_model.LogisticRegression(penalty='l2', dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='liblinear', max_iter=100, multi_class='ovr', verbose=0, warm_start=False, n_jobs=1)
Choosing the solver parameter (a short usage sketch follows the list):
- "liblinear": small datasets
- "lbfgs", "sag" or "newton-cg": larger datasets and multiclass problems
- "sag": very large datasets
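As a small illustration of the solver choice (the parameter values here are only examples):

from sklearn.linear_model import LogisticRegression

# small dataset: liblinear (the default solver in older sklearn versions)
clf_small = LogisticRegression(solver='liblinear')

# larger datasets / multiclass problems: lbfgs, newton-cg or sag
clf_large = LogisticRegression(solver='lbfgs', max_iter=200)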
2.1 Classifying the handwritten digits dataset
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
digits = datasets.load_digits()
digits
The output (abridged) is a dict-like Bunch with the keys 'DESCR', 'data', 'images', 'target' and 'target_names': 'data' is an array of flattened 8x8 images whose integer pixel values lie in the range 0..16, 'images' holds the same samples in 8x8 form, 'target' contains the digit labels, 'target_names' is array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), and 'DESCR' is the UCI description of the Optical Recognition of Handwritten Digits Data Set.
data = digits.data
target = digits.target
img = digits.images
data.shape
(1797, 64)
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(1,1))
plt.imshow(img[1000], cmap="gray")
target[1000]
1
# Split into training and test sets
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(data, target, test_size=0.02)
# Create a logistic regression model
# This implementation does not apply gradient descent here; it uses a conventional solver for the maximum likelihood problem
lgr = LogisticRegression()
lgr.fit(x_train,y_train)
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1, penalty='l2', random_state=None, solver='liblinear', tol=0.0001, verbose=0, warm_start=False)
lgr.predict(x_test)
array([4, 9, 6, 7, 1, 3, 5, 4, 5, 3, 0, 2, 1, 7, 7, 1, 3, 3, 0, 5, 2, 0, 2, 9, 7, 0, 6, 5, 4, 6, 3, 6, 1, 6, 4, 8])
y_test
array([4, 9, 6, 7, 1, 3, 5, 4, 5, 3, 0, 2, 1, 7, 7, 1, 3, 3, 0, 5, 2, 0, 2, 9, 7, 0, 6, 5, 4, 6, 3, 6, 1, 6, 4, 8])
lgr.score(x_test, y_test)
1.0
2.2 Generating a dataset with make_blobs and classifying it
The make_blobs function generates datasets for clustering: it produces a dataset together with the corresponding labels.
# Create 150 samples in 3 classes
data, target = datasets.make_blobs(n_features=2, n_samples=150, centers=3, random_state=0)
data.shape
(150, 2)
plt.scatter(data[:,0],data[:,1],c=target)
import numpy as np
x = np.linspace(data[:,0].min(), data[:,0].max(), 300)
y = np.linspace(data[:,1].min(), data[:,1].max(), 500)
xx, yy = np.meshgrid(x, y)
x_test = np.c_[xx.ravel(), yy.ravel()]
x_test.shape
(150000, 2)
Region partition by logistic regression
lgr = LogisticRegression()
lgr.fit(data,target)
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1, penalty='l2', random_state=None, solver='liblinear', tol=0.0001, verbose=0, warm_start=False)
y_ = lgr.predict(x_test)
plt.scatter(x_test[:,0], x_test[:,1], c=y_, cmap="rainbow")
plt.scatter(data[:,0], data[:,1], c=target)
Region partition by the k-nearest neighbors model
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(data,target)
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=1, n_neighbors=5, p=2, weights='uniform')
y_ = knn.predict(x_test)
plt.scatter(x_test[:,0], x_test[:,1], c=y_, cmap="rainbow")
plt.scatter(data[:,0], data[:,1], c=target)