ML kNNC: Classification Prediction on the Iris Dataset Using the kNN Algorithm (PCA Processing + 3D Scatter Plot Visualization)

Summary: Using the kNN algorithm to perform classification prediction on the iris dataset, with PCA processing and 3D scatter plot visualization.


Contents

Classification Prediction on the Iris Dataset Using kNN (PCA Processing + 3D Scatter Plot Visualization)

Design Approach

Output

Core Code



Related Articles

ML kNNC: Classification Prediction on the Iris Dataset Using kNN (PCA Processing + 3D Scatter Plot Visualization)

ML kNNC: Classification Prediction on the Iris Dataset Using kNN (PCA Processing + 3D Scatter Plot Visualization): Implementation


 

Classification Prediction on the Iris Dataset Using kNN (PCA Processing + 3D Scatter Plot Visualization)

Design Approach

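The design-flow diagrams from the original post are not reproduced in this extract. As a stand-in, below is a minimal sketch of the pipeline they outline, assuming a local CSV whose column names match the output in the next section (the file name iris.csv is a placeholder): load the data, project it onto three principal components, draw a 3D scatter plot, then score kNN by cross-validation.

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection on older matplotlib
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    # Load the iris data (file name is hypothetical; columns as in the output below)
    df = pd.read_csv('iris.csv')
    X = df[['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width']].values
    y = df['type'].values

    # Project the 4 features onto 3 principal components for 3D visualization
    X_pca = PCA(n_components=3).fit_transform(X)

    # 3D scatter plot, one point cloud per class
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    for label in np.unique(y):
        m = (y == label)
        ax.scatter(X_pca[m, 0], X_pca[m, 1], X_pca[m, 2], label=label)
    ax.legend()
    plt.show()

    # Cross-validated kNN accuracy on the original features and the PCA projection
    knn = KNeighborsClassifier(n_neighbors=5)
    print(cross_val_score(knn, X, y, cv=5))      # 5 folds, original data
    print(cross_val_score(knn, X_pca, y, cv=3))  # 3 folds, PCA-processed data

The two cross_val_score calls use 5 and 3 folds to mirror the five and three scores printed in the output below.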

Output

(The original post shows 3D scatter plots of the PCA-projected iris data here.)

(149, 5)
   5.1  3.5  1.4  0.2  Iris-setosa
0  4.9  3.0  1.4  0.2  Iris-setosa
1  4.7  3.2  1.3  0.2  Iris-setosa
2  4.6  3.1  1.5  0.2  Iris-setosa
3  5.0  3.6  1.4  0.2  Iris-setosa
4  5.4  3.9  1.7  0.4  Iris-setosa
(149, 5)
    Sepal_Length  Sepal_Width  Petal_Length  Petal_Width            type
0           4.5          2.3           1.3          0.3     Iris-setosa
1           6.3          2.5           5.0          1.9  Iris-virginica
2           5.1          3.4           1.5          0.2     Iris-setosa
3           6.3          3.3           6.0          2.5  Iris-virginica
4           6.8          3.2           5.9          2.3  Iris-virginica
Split point: 29
label_classes: ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
kNNDIY model prediction, on the original data: 0.95
kNN model prediction, on the original data:  [0.96666667 1.         0.93333333 1.         0.93103448]
kNN model prediction, after PCA processing:  [1.         0.96       0.95918367]
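The "kNNDIY" score above comes from a hand-rolled classifier whose source is not included in this excerpt. A minimal sketch of such a classifier, assuming Euclidean distance and unweighted majority voting (the function name is hypothetical):

    import numpy as np
    from collections import Counter

    def knn_diy_predict(X_train, y_train, X_test, k=5):
        # For each test point: compute distances to all training points,
        # take the k nearest, and predict the majority label among them.
        preds = []
        for x in X_test:
            dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
            nearest = np.argsort(dists)[:k]               # indices of the k closest
            preds.append(Counter(y_train[nearest]).most_common(1)[0][0])
        return np.array(preds)

With a hold-out split such as the one implied by "Split point: 29", the reported 0.95 would then be the fraction of correct predictions on the held-out rows.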


Core Code

# KNeighborsClassifier, found at: sklearn.neighbors._classification

class KNeighborsClassifier(NeighborsBase, KNeighborsMixin,
                           SupervisedIntegerMixin, ClassifierMixin):
    """Classifier implementing the k-nearest neighbors vote.

    Read more in the :ref:`User Guide <classification>`.

    Parameters
    ----------
    n_neighbors : int, default=5
        Number of neighbors to use by default for :meth:`kneighbors` queries.

    weights : {'uniform', 'distance'} or callable, default='uniform'
        weight function used in prediction.  Possible values:

        - 'uniform' : uniform weights.  All points in each neighborhood
          are weighted equally.
        - 'distance' : weight points by the inverse of their distance.
          in this case, closer neighbors of a query point will have a
          greater influence than neighbors which are further away.
        - [callable] : a user-defined function which accepts an
          array of distances, and returns an array of the same shape
          containing the weights.

    algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, default='auto'
        Algorithm used to compute the nearest neighbors:

        - 'ball_tree' will use :class:`BallTree`
        - 'kd_tree' will use :class:`KDTree`
        - 'brute' will use a brute-force search.
        - 'auto' will attempt to decide the most appropriate algorithm
          based on the values passed to :meth:`fit` method.

        Note: fitting on sparse input will override the setting of
        this parameter, using brute force.

    leaf_size : int, default=30
        Leaf size passed to BallTree or KDTree.  This can affect the
        speed of the construction and query, as well as the memory
        required to store the tree.  The optimal value depends on the
        nature of the problem.

    p : int, default=2
        Power parameter for the Minkowski metric. When p = 1, this is
        equivalent to using manhattan_distance (l1), and euclidean_distance
        (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.

    metric : str or callable, default='minkowski'
        the distance metric to use for the tree.  The default metric is
        minkowski, and with p=2 is equivalent to the standard Euclidean
        metric. See the documentation of :class:`DistanceMetric` for a
        list of available metrics.
        If metric is "precomputed", X is assumed to be a distance matrix and
        must be square during fit. X may be a :term:`sparse graph`,
        in which case only "nonzero" elements may be considered neighbors.

    metric_params : dict, default=None
        Additional keyword arguments for the metric function.

    n_jobs : int, default=None
        The number of parallel jobs to run for neighbors search.
        ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
        ``-1`` means using all processors. See :term:`Glossary <n_jobs>`
        for more details.
        Doesn't affect :meth:`fit` method.

    Attributes
    ----------
    classes_ : array of shape (n_classes,)
        Class labels known to the classifier

    effective_metric_ : str or callable
        The distance metric used. It will be same as the `metric` parameter
        or a synonym of it, e.g. 'euclidean' if the `metric` parameter set to
        'minkowski' and `p` parameter set to 2.

    effective_metric_params_ : dict
        Additional keyword arguments for the metric function. For most metrics
        will be same with `metric_params` parameter, but may also contain the
        `p` parameter value if the `effective_metric_` attribute is set to
        'minkowski'.

    outputs_2d_ : bool
        False when `y`'s shape is (n_samples, ) or (n_samples, 1) during fit
        otherwise True.

    Examples
    --------
    >>> X = [[0], [1], [2], [3]]
    >>> y = [0, 0, 1, 1]
    >>> from sklearn.neighbors import KNeighborsClassifier
    >>> neigh = KNeighborsClassifier(n_neighbors=3)
    >>> neigh.fit(X, y)
    KNeighborsClassifier(...)
    >>> print(neigh.predict([[1.1]]))
    [0]
    >>> print(neigh.predict_proba([[0.9]]))
    [[0.66666667 0.33333333]]

    See also
    --------
    RadiusNeighborsClassifier
    KNeighborsRegressor
    RadiusNeighborsRegressor
    NearestNeighbors

    Notes
    -----
    See :ref:`Nearest Neighbors <neighbors>` in the online documentation
    for a discussion of the choice of ``algorithm`` and ``leaf_size``.

    .. warning::

       Regarding the Nearest Neighbors algorithms, if it is found that two
       neighbors, neighbor `k+1` and `k`, have identical distances
       but different labels, the results will depend on the ordering of the
       training data.

    https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm
    """
    @_deprecate_positional_args
    def __init__(self, n_neighbors=5, *,
                 weights='uniform', algorithm='auto', leaf_size=30,
                 p=2, metric='minkowski', metric_params=None, n_jobs=None,
                 **kwargs):
        super().__init__(n_neighbors=n_neighbors, algorithm=algorithm,
                         leaf_size=leaf_size, metric=metric, p=p,
                         metric_params=metric_params, n_jobs=n_jobs, **kwargs)
        self.weights = _check_weights(weights)

    def predict(self, X):
        """Predict the class labels for the provided data.

        Parameters
        ----------
        X : array-like of shape (n_queries, n_features), \
                or (n_queries, n_indexed) if metric == 'precomputed'
            Test samples.

        Returns
        -------
        y : ndarray of shape (n_queries,) or (n_queries, n_outputs)
            Class labels for each data sample.
        """
        X = check_array(X, accept_sparse='csr')

        neigh_dist, neigh_ind = self.kneighbors(X)
        classes_ = self.classes_
        _y = self._y
        if not self.outputs_2d_:
            _y = self._y.reshape((-1, 1))
            classes_ = [self.classes_]

        n_outputs = len(classes_)
        n_queries = _num_samples(X)
        weights = _get_weights(neigh_dist, self.weights)

        y_pred = np.empty((n_queries, n_outputs), dtype=classes_[0].dtype)
        for k, classes_k in enumerate(classes_):
            if weights is None:
                mode, _ = stats.mode(_y[neigh_ind, k], axis=1)
            else:
                mode, _ = weighted_mode(_y[neigh_ind, k], weights, axis=1)

            mode = np.asarray(mode.ravel(), dtype=np.intp)
            y_pred[:, k] = classes_k.take(mode)

        if not self.outputs_2d_:
            y_pred = y_pred.ravel()

        return y_pred

    def predict_proba(self, X):
        """Return probability estimates for the test data X.

        Parameters
        ----------
        X : array-like of shape (n_queries, n_features), \
                or (n_queries, n_indexed) if metric == 'precomputed'
            Test samples.

        Returns
        -------
        p : ndarray of shape (n_queries, n_classes), or a list of n_outputs
            of such arrays if n_outputs > 1.
            The class probabilities of the input samples. Classes are ordered
            by lexicographic order.
        """
        X = check_array(X, accept_sparse='csr')

        neigh_dist, neigh_ind = self.kneighbors(X)

        classes_ = self.classes_
        _y = self._y
        if not self.outputs_2d_:
            _y = self._y.reshape((-1, 1))
            classes_ = [self.classes_]

        n_queries = _num_samples(X)

        weights = _get_weights(neigh_dist, self.weights)
        if weights is None:
            weights = np.ones_like(neigh_ind)

        all_rows = np.arange(X.shape[0])
        probabilities = []
        for k, classes_k in enumerate(classes_):
            pred_labels = _y[:, k][neigh_ind]
            proba_k = np.zeros((n_queries, classes_k.size))

            # a simple ':' index doesn't work right
            for i, idx in enumerate(pred_labels.T):  # loop is O(n_neighbors)
                proba_k[all_rows, idx] += weights[:, i]

            # normalize 'votes' into real [0,1] probabilities
            normalizer = proba_k.sum(axis=1)[:, np.newaxis]
            normalizer[normalizer == 0.0] = 1.0
            proba_k /= normalizer

            probabilities.append(proba_k)

        if not self.outputs_2d_:
            probabilities = probabilities[0]

        return probabilities
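As a quick usage check of the weights parameter documented above, the following sketch compares uniform and distance weighting on scikit-learn's built-in iris data (exact scores depend on the scikit-learn version):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    for w in ('uniform', 'distance'):
        knn = KNeighborsClassifier(n_neighbors=5, weights=w)
        # Mean 5-fold accuracy; 'distance' gives closer neighbors larger votes
        print(w, cross_val_score(knn, X, y, cv=5).mean())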

 

