[Python Machine Learning] Lab 06: Bayesian Inference (1)

Naive Bayes Inference

Basic Concepts of Bayesian Inference

1. The naive Bayes method is a typical generative learning method. A generative method learns the joint probability distribution from the training data,

$$P(X,Y) = P(Y)\,P(X \mid Y),$$

and then predicts the class that maximizes the posterior probability; under the naive conditional-independence assumption the classifier is

$$y = \arg\max_{c_k} P(Y=c_k) \prod_{j} P\left(X^{(j)}=x^{(j)} \mid Y=c_k\right).$$

Maximizing the posterior probability is equivalent to minimizing the expected risk under the 0-1 loss function.
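To see why in one step: under the 0-1 loss $L(Y,f(X)) = \mathbb{1}(f(X) \neq Y)$, the conditional risk of predicting class $c$ at a point $x$ is

$$R(c \mid x) = \sum_{k} \mathbb{1}(c_k \neq c)\,P(Y=c_k \mid X=x) = 1 - P(Y=c \mid X=x),$$

so minimizing the expected risk pointwise is exactly choosing the class with the largest posterior probability.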

Models:

  • Gaussian model
  • Multinomial model
  • Bernoulli model

A minimal scikit-learn sketch of these three variants follows.
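Before implementing Gaussian naive Bayes by hand, here is a minimal sketch using scikit-learn's built-in estimators (the split here is illustrative only; the lab's own split appears in Section 2):

#A minimal sketch: the three naive Bayes variants in scikit-learn.
#GaussianNB suits continuous features, MultinomialNB count-like features,
#and BernoulliNB binary features (it binarizes inputs at 0.0 by default).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
for Model in (GaussianNB, MultinomialNB, BernoulliNB):
    clf = Model().fit(X_train, y_train)
    print(Model.__name__, clf.score(X_test, y_test))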

1 Data loading: fetching the dataset and visualization

from sklearn.datasets import load_iris
import pandas as pd
import numpy as np
#Load the iris dataset
iris=load_iris()
#Extract the features X and the corresponding labels y
X=iris["data"]
y=iris["target"]
iris
{'data': array([[5.1, 3.5, 1.4, 0.2],
        [4.9, 3. , 1.4, 0.2],
        [4.7, 3.2, 1.3, 0.2],
        [4.6, 3.1, 1.5, 0.2],
        [5. , 3.6, 1.4, 0.2],
        [5.4, 3.9, 1.7, 0.4],
        [4.6, 3.4, 1.4, 0.3],
        [5. , 3.4, 1.5, 0.2],
        [4.4, 2.9, 1.4, 0.2],
        [4.9, 3.1, 1.5, 0.1],
        [5.4, 3.7, 1.5, 0.2],
        [4.8, 3.4, 1.6, 0.2],
        [4.8, 3. , 1.4, 0.1],
        [4.3, 3. , 1.1, 0.1],
        [5.8, 4. , 1.2, 0.2],
        [5.7, 4.4, 1.5, 0.4],
        [5.4, 3.9, 1.3, 0.4],
        [5.1, 3.5, 1.4, 0.3],
        [5.7, 3.8, 1.7, 0.3],
        [5.1, 3.8, 1.5, 0.3],
        [5.4, 3.4, 1.7, 0.2],
        [5.1, 3.7, 1.5, 0.4],
        [4.6, 3.6, 1. , 0.2],
        [5.1, 3.3, 1.7, 0.5],
        [4.8, 3.4, 1.9, 0.2],
        [5. , 3. , 1.6, 0.2],
        [5. , 3.4, 1.6, 0.4],
        [5.2, 3.5, 1.5, 0.2],
        [5.2, 3.4, 1.4, 0.2],
        [4.7, 3.2, 1.6, 0.2],
        [4.8, 3.1, 1.6, 0.2],
        [5.4, 3.4, 1.5, 0.4],
        [5.2, 4.1, 1.5, 0.1],
        [5.5, 4.2, 1.4, 0.2],
        [4.9, 3.1, 1.5, 0.2],
        [5. , 3.2, 1.2, 0.2],
        [5.5, 3.5, 1.3, 0.2],
        [4.9, 3.6, 1.4, 0.1],
        [4.4, 3. , 1.3, 0.2],
        [5.1, 3.4, 1.5, 0.2],
        [5. , 3.5, 1.3, 0.3],
        [4.5, 2.3, 1.3, 0.3],
        [4.4, 3.2, 1.3, 0.2],
        [5. , 3.5, 1.6, 0.6],
        [5.1, 3.8, 1.9, 0.4],
        [4.8, 3. , 1.4, 0.3],
        [5.1, 3.8, 1.6, 0.2],
        [4.6, 3.2, 1.4, 0.2],
        [5.3, 3.7, 1.5, 0.2],
        [5. , 3.3, 1.4, 0.2],
        [7. , 3.2, 4.7, 1.4],
        [6.4, 3.2, 4.5, 1.5],
        [6.9, 3.1, 4.9, 1.5],
        [5.5, 2.3, 4. , 1.3],
        [6.5, 2.8, 4.6, 1.5],
        [5.7, 2.8, 4.5, 1.3],
        [6.3, 3.3, 4.7, 1.6],
        [4.9, 2.4, 3.3, 1. ],
        [6.6, 2.9, 4.6, 1.3],
        [5.2, 2.7, 3.9, 1.4],
        [5. , 2. , 3.5, 1. ],
        [5.9, 3. , 4.2, 1.5],
        [6. , 2.2, 4. , 1. ],
        [6.1, 2.9, 4.7, 1.4],
        [5.6, 2.9, 3.6, 1.3],
        [6.7, 3.1, 4.4, 1.4],
        [5.6, 3. , 4.5, 1.5],
        [5.8, 2.7, 4.1, 1. ],
        [6.2, 2.2, 4.5, 1.5],
        [5.6, 2.5, 3.9, 1.1],
        [5.9, 3.2, 4.8, 1.8],
        [6.1, 2.8, 4. , 1.3],
        [6.3, 2.5, 4.9, 1.5],
        [6.1, 2.8, 4.7, 1.2],
        [6.4, 2.9, 4.3, 1.3],
        [6.6, 3. , 4.4, 1.4],
        [6.8, 2.8, 4.8, 1.4],
        [6.7, 3. , 5. , 1.7],
        [6. , 2.9, 4.5, 1.5],
        [5.7, 2.6, 3.5, 1. ],
        [5.5, 2.4, 3.8, 1.1],
        [5.5, 2.4, 3.7, 1. ],
        [5.8, 2.7, 3.9, 1.2],
        [6. , 2.7, 5.1, 1.6],
        [5.4, 3. , 4.5, 1.5],
        [6. , 3.4, 4.5, 1.6],
        [6.7, 3.1, 4.7, 1.5],
        [6.3, 2.3, 4.4, 1.3],
        [5.6, 3. , 4.1, 1.3],
        [5.5, 2.5, 4. , 1.3],
        [5.5, 2.6, 4.4, 1.2],
        [6.1, 3. , 4.6, 1.4],
        [5.8, 2.6, 4. , 1.2],
        [5. , 2.3, 3.3, 1. ],
        [5.6, 2.7, 4.2, 1.3],
        [5.7, 3. , 4.2, 1.2],
        [5.7, 2.9, 4.2, 1.3],
        [6.2, 2.9, 4.3, 1.3],
        [5.1, 2.5, 3. , 1.1],
        [5.7, 2.8, 4.1, 1.3],
        [6.3, 3.3, 6. , 2.5],
        [5.8, 2.7, 5.1, 1.9],
        [7.1, 3. , 5.9, 2.1],
        [6.3, 2.9, 5.6, 1.8],
        [6.5, 3. , 5.8, 2.2],
        [7.6, 3. , 6.6, 2.1],
        [4.9, 2.5, 4.5, 1.7],
        [7.3, 2.9, 6.3, 1.8],
        [6.7, 2.5, 5.8, 1.8],
        [7.2, 3.6, 6.1, 2.5],
        [6.5, 3.2, 5.1, 2. ],
        [6.4, 2.7, 5.3, 1.9],
        [6.8, 3. , 5.5, 2.1],
        [5.7, 2.5, 5. , 2. ],
        [5.8, 2.8, 5.1, 2.4],
        [6.4, 3.2, 5.3, 2.3],
        [6.5, 3. , 5.5, 1.8],
        [7.7, 3.8, 6.7, 2.2],
        [7.7, 2.6, 6.9, 2.3],
        [6. , 2.2, 5. , 1.5],
        [6.9, 3.2, 5.7, 2.3],
        [5.6, 2.8, 4.9, 2. ],
        [7.7, 2.8, 6.7, 2. ],
        [6.3, 2.7, 4.9, 1.8],
        [6.7, 3.3, 5.7, 2.1],
        [7.2, 3.2, 6. , 1.8],
        [6.2, 2.8, 4.8, 1.8],
        [6.1, 3. , 4.9, 1.8],
        [6.4, 2.8, 5.6, 2.1],
        [7.2, 3. , 5.8, 1.6],
        [7.4, 2.8, 6.1, 1.9],
        [7.9, 3.8, 6.4, 2. ],
        [6.4, 2.8, 5.6, 2.2],
        [6.3, 2.8, 5.1, 1.5],
        [6.1, 2.6, 5.6, 1.4],
        [7.7, 3. , 6.1, 2.3],
        [6.3, 3.4, 5.6, 2.4],
        [6.4, 3.1, 5.5, 1.8],
        [6. , 3. , 4.8, 1.8],
        [6.9, 3.1, 5.4, 2.1],
        [6.7, 3.1, 5.6, 2.4],
        [6.9, 3.1, 5.1, 2.3],
        [5.8, 2.7, 5.1, 1.9],
        [6.8, 3.2, 5.9, 2.3],
        [6.7, 3.3, 5.7, 2.5],
        [6.7, 3. , 5.2, 2.3],
        [6.3, 2.5, 5. , 1.9],
        [6.5, 3. , 5.2, 2. ],
        [6.2, 3.4, 5.4, 2.3],
        [5.9, 3. , 5.1, 1.8]]),
 'target': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
        2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
        2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]),
 'frame': None,
 'target_names': array(['setosa', 'versicolor', 'virginica'], dtype='<U10'),
 'DESCR': '.. _iris_dataset:\n\nIris plants dataset\n--------------------\n\n**Data Set Characteristics:**\n\n    :Number of Instances: 150 (50 in each of three classes)\n    :Number of Attributes: 4 numeric, predictive attributes and the class\n    :Attribute Information:\n        - sepal length in cm\n        - sepal width in cm\n        - petal length in cm\n        - petal width in cm\n        - class:\n                - Iris-Setosa\n                - Iris-Versicolour\n                - Iris-Virginica\n                \n    :Summary Statistics:\n\n    ============== ==== ==== ======= ===== ====================\n                    Min  Max   Mean    SD   Class Correlation\n    ============== ==== ==== ======= ===== ====================\n    sepal length:   4.3  7.9   5.84   0.83    0.7826\n    sepal width:    2.0  4.4   3.05   0.43   -0.4194\n    petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)\n    petal width:    0.1  2.5   1.20   0.76    0.9565  (high!)\n    ============== ==== ==== ======= ===== ====================\n\n    :Missing Attribute Values: None\n    :Class Distribution: 33.3% for each of 3 classes.\n    :Creator: R.A. Fisher\n    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)\n    :Date: July, 1988\n\nThe famous Iris database, first used by Sir R.A. Fisher. The dataset is taken\nfrom Fisher\'s paper. Note that it\'s the same as in R, but not as in the UCI\nMachine Learning Repository, which has two wrong data points.\n\nThis is perhaps the best known database to be found in the\npattern recognition literature.  Fisher\'s paper is a classic in the field and\nis referenced frequently to this day.  (See Duda & Hart, for example.)  The\ndata set contains 3 classes of 50 instances each, where each class refers to a\ntype of iris plant.  One class is linearly separable from the other 2; the\nlatter are NOT linearly separable from each other.\n\n.. topic:: References\n\n   - Fisher, R.A. "The use of multiple measurements in taxonomic problems"\n     Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to\n     Mathematical Statistics" (John Wiley, NY, 1950).\n   - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.\n     (Q327.D83) John Wiley & Sons.  ISBN 0-471-22361-1.  See page 218.\n   - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System\n     Structure and Classification Rule for Recognition in Partially Exposed\n     Environments".  IEEE Transactions on Pattern Analysis and Machine\n     Intelligence, Vol. PAMI-2, No. 1, 67-71.\n   - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule".  IEEE Transactions\n     on Information Theory, May 1972, 431-433.\n   - See also: 1988 MLC Proceedings, 54-64.  Cheeseman et al"s AUTOCLASS II\n     conceptual clustering system finds 3 classes in the data.\n   - Many, many more ...',
 'feature_names': ['sepal length (cm)',
  'sepal width (cm)',
  'petal length (cm)',
  'petal width (cm)'],
 'filename': 'iris.csv',
 'data_module': 'sklearn.datasets.data'}
#Check the shapes of X and y
X.shape,y.shape
((150, 4), (150,))
#Reshape y into a two-dimensional array
y=y.reshape((150,-1))
y.shape
(150, 1)
#Visualize the data via a DataFrame
df=pd.DataFrame(np.hstack([X,y]),columns=iris.feature_names+["target"])
df
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target
0 5.1 3.5 1.4 0.2 0.0
1 4.9 3.0 1.4 0.2 0.0
2 4.7 3.2 1.3 0.2 0.0
3 4.6 3.1 1.5 0.2 0.0
4 5.0 3.6 1.4 0.2 0.0
... ... ... ... ... ...
145 6.7 3.0 5.2 2.3 2.0
146 6.3 2.5 5.0 1.9 2.0
147 6.5 3.0 5.2 2.0 2.0
148 6.2 3.4 5.4 2.3 2.0
149 5.9 3.0 5.1 1.8 2.0

150 rows × 5 columns

#Cast the target column to integer
df["target"]=df["target"].astype("int")
df
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target
0 5.1 3.5 1.4 0.2 0
1 4.9 3.0 1.4 0.2 0
2 4.7 3.2 1.3 0.2 0
3 4.6 3.1 1.5 0.2 0
4 5.0 3.6 1.4 0.2 0
... ... ... ... ... ...
145 6.7 3.0 5.2 2.3 2
146 6.3 2.5 5.0 1.9 2
147 6.5 3.0 5.2 2.0 2
148 6.2 3.4 5.4 2.3 2
149 5.9 3.0 5.1 1.8 2


150 rows × 5 columns

2 Data loading: splitting into training and test sets

#Split the data into training and test sets; only the first 100 samples (classes 0 and 1) are used, so this is a binary problem
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X[:100],y[:100],test_size=0.25,random_state=0)
X_train.shape,X_test.shape,y_train.shape,y_test.shape
((75, 4), (25, 4), (75, 1), (25, 1))


3 Data loading: preparing the data for each class

y_train
array([[0],
       [0],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [0],
       [0],
       [1],
       [1],
       [1],
       [0],
       [1],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [1],
       [1],
       [0],
       [0],
       [0],
       [1],
       [0],
       [0],
       [0],
       [1],
       [0],
       [0],
       [1],
       [1],
       [1],
       [1],
       [0],
       [1],
       [0],
       [1],
       [0],
       [0],
       [0],
       [1],
       [1],
       [1],
       [0],
       [1],
       [1],
       [1],
       [0],
       [0],
       [1],
       [0],
       [0],
       [1],
       [1],
       [0],
       [1],
       [1],
       [1],
       [0],
       [0],
       [1],
       [0],
       [1],
       [1],
       [1],
       [0],
       [0]])
#Find the indices whose label is 0
np.where(y_train==0)
(array([ 0,  1, 10, 11, 15, 17, 18, 19, 20, 21, 22, 23, 24, 27, 28, 29, 31,
        32, 33, 35, 36, 41, 43, 45, 46, 47, 51, 55, 56, 58, 59, 62, 66, 67,
        69, 73, 74], dtype=int64),
 array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int64))
np.where(y_train==1)
(array([ 2,  3,  4,  5,  6,  7,  8,  9, 12, 13, 14, 16, 25, 26, 30, 34, 37,
        38, 39, 40, 42, 44, 48, 49, 50, 52, 53, 54, 57, 60, 61, 63, 64, 65,
        68, 70, 71, 72], dtype=int64),
 array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int64))
#Build a dictionary mapping each label to its row indices; this lets us later compute the mean and variance separately for each class
dic={}
for i in [0,1]:
    dic[i]=np.where(y_train==i)
dic
{0: (array([ 0,  1, 10, 11, 15, 17, 18, 19, 20, 21, 22, 23, 24, 27, 28, 29, 31,
         32, 33, 35, 36, 41, 43, 45, 46, 47, 51, 55, 56, 58, 59, 62, 66, 67,
         69, 73, 74], dtype=int64),
  array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int64)),
 1: (array([ 2,  3,  4,  5,  6,  7,  8,  9, 12, 13, 14, 16, 25, 26, 30, 34, 37,
         38, 39, 40, 42, 44, 48, 49, 50, 52, 53, 54, 57, 60, 61, 63, 64, 65,
         68, 70, 71, 72], dtype=int64),
  array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int64))}

4 Defining the mean and variance of the data

#Compute the mean and variance along each feature (column); the resulting mean and variance vectors have one element per feature
#X is an array of samples, one row per sample
def u_sigma(X):
    u=np.mean(X,axis=0)
    sigma=np.var(X,axis=0)
    return u,sigma
#A list of two tuples: the first holds the mean and variance for class 0, the second for class 1
lst=[]
for key,value in dic.items():
    lst.append(u_sigma(X_train[value[0]]))
lst
[(array([5.06486486, 3.45135135, 1.47297297, 0.24054054]),
  array([0.11200877, 0.14195763, 0.02197224, 0.00889701])),
 (array([5.92368421, 2.78684211, 4.26578947, 1.33947368]),
  array([0.27496537, 0.09956371, 0.23646122, 0.04081025]))]
#Unpack the tuples to check the results
u_0,sigma_0=lst[0]
u_1,sigma_1=lst[1]
u_0,sigma_0,u_1,sigma_1
(array([5.06486486, 3.45135135, 1.47297297, 0.24054054]),
 array([0.11200877, 0.14195763, 0.02197224, 0.00889701]),
 array([5.92368421, 2.78684211, 4.26578947, 1.33947368]),
 array([0.27496537, 0.09956371, 0.23646122, 0.04081025]))

5 Defining the probability density function

GaussianNB (Gaussian naive Bayes): the likelihood of each feature is assumed to be Gaussian.

Probability density function:

$$P\left(x^{(j)} \mid y=c_k\right) = \frac{1}{\sqrt{2\pi\sigma_{k,j}^{2}}} \exp\left(-\frac{\left(x^{(j)}-\mu_{k,j}\right)^{2}}{2\sigma_{k,j}^{2}}\right)$$
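The section title calls for a definition in code, so here is a minimal sketch; the name gaussian_pdf is chosen here (it is not part of the original lab), and sigma is the per-feature variance as returned by u_sigma from Section 4:

#A minimal sketch of the Gaussian density above; gaussian_pdf is our own name.
#x, u and sigma are arrays with one entry per feature; sigma holds variances.
def gaussian_pdf(x, u, sigma):
    return np.exp(-(x - u) ** 2 / (2 * sigma)) / np.sqrt(2 * np.pi * sigma)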


6 Computing the mean and variance for each class

#Compute the mean and variance for class 0 (setosa)
u_0,sigma_0=u_sigma(X_train[dic[0][0],:])
u_0,sigma_0
(array([5.06486486, 3.45135135, 1.47297297, 0.24054054]),
 array([0.11200877, 0.14195763, 0.02197224, 0.00889701]))
#Compute the mean and variance for class 1 (versicolor)
u_1,sigma_1=u_sigma(X_train[dic[1][0],:])
u_1,sigma_1
(array([5.92368421, 2.78684211, 4.26578947, 1.33947368]),
 array([0.27496537, 0.09956371, 0.23646122, 0.04081025]))

7 Defining the prior probability of each class

len(dic[0][0]),len(dic[1][0])
(37, 38)
dic[0][0]
array([ 0,  1, 10, 11, 15, 17, 18, 19, 20, 21, 22, 23, 24, 27, 28, 29, 31,
       32, 33, 35, 36, 41, 43, 45, 46, 47, 51, 55, 56, 58, 59, 62, 66, 67,
       69, 73, 74], dtype=int64)
#Compute the prior probability of each class (its relative frequency in the training set)
lst_pri=[]
for i in [0,1]:
    lst_pri.append(len(dic[i][0]))
lst_pri=[item/sum(lst_pri) for item in lst_pri]
lst_pri
[0.49333333333333335, 0.5066666666666667]
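With the per-class means, variances, and priors in hand, a natural closing step (not part of the original lab) is to combine them into a classifier: score each sample by its prior times the product of its per-feature Gaussian likelihoods, and predict the higher-scoring class. A minimal sketch, reusing the gaussian_pdf helper sketched in Section 5:

#A minimal sketch of the full Gaussian naive Bayes prediction step,
#assuming the gaussian_pdf helper from Section 5.
def predict(X):
    params={0:(u_0,sigma_0),1:(u_1,sigma_1)}
    preds=[]
    for x in X:
        #naive independence: multiply per-feature likelihoods, weight by the prior
        scores=[lst_pri[k]*np.prod(gaussian_pdf(x,*params[k])) for k in (0,1)]
        preds.append(int(np.argmax(scores)))
    return np.array(preds)
y_pred=predict(X_test)
print("accuracy:",np.mean(y_pred==y_test.ravel()))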

