ML kNNC: Classification prediction on the iris dataset with the kNN algorithm (PCA processing + 3D scatter plot visualization)

Summary: Classification prediction on the iris dataset with the kNN algorithm (PCA processing + 3D scatter plot visualization).


Contents

Classification prediction on the iris dataset with the kNN algorithm (PCA processing + 3D scatter plot visualization)

Design approach

Output

Core code
Related articles

ML kNNC: Classification prediction on the iris dataset with the kNN algorithm (PCA processing + 3D scatter plot visualization)

ML kNNC: Classification prediction on the iris dataset with the kNN algorithm (PCA processing + 3D scatter plot visualization): implementation


 

Classification prediction on the iris dataset with the kNN algorithm (PCA processing + 3D scatter plot visualization)

Design approach
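The design follows the title directly: load the iris samples, reduce the four features to three principal components with PCA, visualize the components in a 3D scatter plot, and fit a kNN classifier. Below is a minimal sketch of that pipeline under standard scikit-learn and matplotlib APIs; the original script reads a CSV file whose path is not shown here, so load_iris is used as a self-contained stand-in.

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (needed on older matplotlib)
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for the post's CSV data: 150 samples, 4 features, 3 classes
X, y = load_iris(return_X_y=True)

# PCA: project the 4 original features onto 3 principal components
X_pca = PCA(n_components=3).fit_transform(X)

# 3D scatter plot of the principal components, colored by class
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X_pca[:, 0], X_pca[:, 1], X_pca[:, 2], c=y)
ax.set_xlabel('PC1')
ax.set_ylabel('PC2')
ax.set_zlabel('PC3')
plt.show()

# kNN classification on the PCA-reduced features
X_train, X_test, y_train, y_test = train_test_split(
    X_pca, y, test_size=0.2, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print('holdout accuracy:', knn.score(X_test, y_test))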

Output

(149, 5)
   5.1  3.5  1.4  0.2  Iris-setosa
0  4.9  3.0  1.4  0.2  Iris-setosa
1  4.7  3.2  1.3  0.2  Iris-setosa
2  4.6  3.1  1.5  0.2  Iris-setosa
3  5.0  3.6  1.4  0.2  Iris-setosa
4  5.4  3.9  1.7  0.4  Iris-setosa
(149, 5)
    Sepal_Length  Sepal_Width  Petal_Length  Petal_Width            type
0           4.5          2.3           1.3          0.3     Iris-setosa
1           6.3          2.5           5.0          1.9  Iris-virginica
2           5.1          3.4           1.5          0.2     Iris-setosa
3           6.3          3.3           6.0          2.5  Iris-virginica
4           6.8          3.2           5.9          2.3  Iris-virginica
Split point: 29
label_classes: ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
kNNDIY model prediction, on the original data: 0.95
kNN model prediction, on the original data: [0.96666667 1.         0.93333333 1.         0.93103448]
kNN model prediction, after PCA on the original data: [1.         0.96       0.95918367]
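The 0.95 score comes from a hand-written kNN ("kNNDIY") evaluated on a holdout split; its source is not reproduced here, and the "Split point: 29" line suggests the first 29 shuffled rows serve as the test set. A minimal stand-in for such a DIY classifier, assuming Euclidean distance and an unweighted majority vote (the split below mirrors the printed split point, not shown code):

from collections import Counter

import numpy as np
from sklearn.datasets import load_iris

def knn_predict(X_train, y_train, X_test, k=5):
    """Predict each test point by majority vote among its k nearest neighbors."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distances
        nearest = y_train[np.argsort(dists)[:k]]     # labels of the k closest points
        preds.append(Counter(nearest).most_common(1)[0][0])
    return np.array(preds)

# Shuffle, hold out the first 29 rows as the test set, train on the rest
X, y = load_iris(return_X_y=True)
order = np.random.default_rng(0).permutation(len(X))
X, y = X[order], y[order]
split = 29
accuracy = (knn_predict(X[split:], y[split:], X[:split]) == y[:split]).mean()
print('DIY kNN holdout accuracy:', accuracy)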


Core code

class KNeighborsClassifier, found at: sklearn.neighbors._classification

# Imports as used by sklearn/neighbors/_classification.py (paths relative
# to the sklearn package; abridged to what this excerpt needs):
import numpy as np
from scipy import stats

from ..utils.extmath import weighted_mode
from ..utils.validation import _num_samples, _deprecate_positional_args
from ..utils import check_array
from ..base import ClassifierMixin
from ._base import NeighborsBase, KNeighborsMixin, SupervisedIntegerMixin
from ._base import _check_weights, _get_weights


class KNeighborsClassifier(NeighborsBase, KNeighborsMixin,
                           SupervisedIntegerMixin, ClassifierMixin):
    """Classifier implementing the k-nearest neighbors vote.

    Read more in the :ref:`User Guide <classification>`.

    Parameters
    ----------
    n_neighbors : int, default=5
        Number of neighbors to use by default for :meth:`kneighbors` queries.

    weights : {'uniform', 'distance'} or callable, default='uniform'
        Weight function used in prediction.  Possible values:

        - 'uniform' : uniform weights.  All points in each neighborhood
          are weighted equally.
        - 'distance' : weight points by the inverse of their distance.
          In this case, closer neighbors of a query point will have a
          greater influence than neighbors which are further away.
        - [callable] : a user-defined function which accepts an
          array of distances, and returns an array of the same shape
          containing the weights.

    algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, default='auto'
        Algorithm used to compute the nearest neighbors:

        - 'ball_tree' will use :class:`BallTree`
        - 'kd_tree' will use :class:`KDTree`
        - 'brute' will use a brute-force search.
        - 'auto' will attempt to decide the most appropriate algorithm
          based on the values passed to :meth:`fit` method.

        Note: fitting on sparse input will override the setting of
        this parameter, using brute force.

    leaf_size : int, default=30
        Leaf size passed to BallTree or KDTree.  This can affect the
        speed of the construction and query, as well as the memory
        required to store the tree.  The optimal value depends on the
        nature of the problem.

    p : int, default=2
        Power parameter for the Minkowski metric. When p = 1, this is
        equivalent to using manhattan_distance (l1), and euclidean_distance
        (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.

    metric : str or callable, default='minkowski'
        The distance metric to use for the tree.  The default metric is
        minkowski, and with p=2 is equivalent to the standard Euclidean
        metric. See the documentation of :class:`DistanceMetric` for a
        list of available metrics.
        If metric is "precomputed", X is assumed to be a distance matrix and
        must be square during fit. X may be a :term:`sparse graph`,
        in which case only "nonzero" elements may be considered neighbors.

    metric_params : dict, default=None
        Additional keyword arguments for the metric function.

    n_jobs : int, default=None
        The number of parallel jobs to run for neighbors search.
        ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
        ``-1`` means using all processors. See :term:`Glossary <n_jobs>`
        for more details.
        Doesn't affect :meth:`fit` method.

    Attributes
    ----------
    classes_ : array of shape (n_classes,)
        Class labels known to the classifier.

    effective_metric_ : str or callable
        The distance metric used. It will be same as the `metric` parameter
        or a synonym of it, e.g. 'euclidean' if the `metric` parameter set to
        'minkowski' and `p` parameter set to 2.

    effective_metric_params_ : dict
        Additional keyword arguments for the metric function. For most
        metrics will be same with `metric_params` parameter, but may also
        contain the `p` parameter value if the `effective_metric_` attribute
        is set to 'minkowski'.

    outputs_2d_ : bool
        False when `y`'s shape is (n_samples,) or (n_samples, 1) during fit,
        otherwise True.

    Examples
    --------
    >>> X = [[0], [1], [2], [3]]
    >>> y = [0, 0, 1, 1]
    >>> from sklearn.neighbors import KNeighborsClassifier
    >>> neigh = KNeighborsClassifier(n_neighbors=3)
    >>> neigh.fit(X, y)
    KNeighborsClassifier(...)
    >>> print(neigh.predict([[1.1]]))
    [0]
    >>> print(neigh.predict_proba([[0.9]]))
    [[0.66666667 0.33333333]]

    See also
    --------
    RadiusNeighborsClassifier
    KNeighborsRegressor
    RadiusNeighborsRegressor
    NearestNeighbors

    Notes
    -----
    See :ref:`Nearest Neighbors <neighbors>` in the online documentation
    for a discussion of the choice of ``algorithm`` and ``leaf_size``.

    .. warning::

       Regarding the Nearest Neighbors algorithms, if it is found that two
       neighbors, neighbor `k+1` and `k`, have identical distances
       but different labels, the results will depend on the ordering of the
       training data.

    https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm
    """

    @_deprecate_positional_args
    def __init__(self, n_neighbors=5, *,
                 weights='uniform', algorithm='auto', leaf_size=30,
                 p=2, metric='minkowski', metric_params=None, n_jobs=None,
                 **kwargs):
        super().__init__(n_neighbors=n_neighbors, algorithm=algorithm,
                         leaf_size=leaf_size, metric=metric, p=p,
                         metric_params=metric_params, n_jobs=n_jobs, **kwargs)
        self.weights = _check_weights(weights)

    def predict(self, X):
        """Predict the class labels for the provided data.

        Parameters
        ----------
        X : array-like of shape (n_queries, n_features), \
                or (n_queries, n_indexed) if metric == 'precomputed'
            Test samples.

        Returns
        -------
        y : ndarray of shape (n_queries,) or (n_queries, n_outputs)
            Class labels for each data sample.
        """
        X = check_array(X, accept_sparse='csr')

        # distances to, and indices of, the k nearest training points
        neigh_dist, neigh_ind = self.kneighbors(X)
        classes_ = self.classes_
        _y = self._y
        if not self.outputs_2d_:
            # single-output case: treat y as one column of labels
            _y = self._y.reshape((-1, 1))
            classes_ = [self.classes_]

        n_outputs = len(classes_)
        n_queries = _num_samples(X)
        weights = _get_weights(neigh_dist, self.weights)

        y_pred = np.empty((n_queries, n_outputs), dtype=classes_[0].dtype)
        for k, classes_k in enumerate(classes_):
            if weights is None:
                # unweighted vote: most frequent label among the neighbors
                mode, _ = stats.mode(_y[neigh_ind, k], axis=1)
            else:
                # weighted vote, e.g. by inverse distance
                mode, _ = weighted_mode(_y[neigh_ind, k], weights, axis=1)

            mode = np.asarray(mode.ravel(), dtype=np.intp)
            y_pred[:, k] = classes_k.take(mode)

        if not self.outputs_2d_:
            y_pred = y_pred.ravel()

        return y_pred

    def predict_proba(self, X):
        """Return probability estimates for the test data X.

        Parameters
        ----------
        X : array-like of shape (n_queries, n_features), \
                or (n_queries, n_indexed) if metric == 'precomputed'
            Test samples.

        Returns
        -------
        p : ndarray of shape (n_queries, n_classes), or a list of n_outputs
            of such arrays if n_outputs > 1.
            The class probabilities of the input samples. Classes are ordered
            by lexicographic order.
        """
        X = check_array(X, accept_sparse='csr')

        neigh_dist, neigh_ind = self.kneighbors(X)

        classes_ = self.classes_
        _y = self._y
        if not self.outputs_2d_:
            _y = self._y.reshape((-1, 1))
            classes_ = [self.classes_]

        n_queries = _num_samples(X)

        weights = _get_weights(neigh_dist, self.weights)
        if weights is None:
            # uniform weights: every neighbor contributes one vote
            weights = np.ones_like(neigh_ind)

        all_rows = np.arange(X.shape[0])
        probabilities = []
        for k, classes_k in enumerate(classes_):
            pred_labels = _y[:, k][neigh_ind]
            proba_k = np.zeros((n_queries, classes_k.size))

            # a simple ':' index doesn't work right
            for i, idx in enumerate(pred_labels.T):  # loop is O(n_neighbors)
                proba_k[all_rows, idx] += weights[:, i]

            # normalize 'votes' into real [0, 1] probabilities
            normalizer = proba_k.sum(axis=1)[:, np.newaxis]
            normalizer[normalizer == 0.0] = 1.0
            proba_k /= normalizer

            probabilities.append(proba_k)

        if not self.outputs_2d_:
            probabilities = probabilities[0]

        return probabilities
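The two cross-validation lines in the output can be reproduced in the same spirit: score a KNeighborsClassifier on the raw features, then again after PCA. A short sketch follows; the fold counts (5 folds on the raw data, 3 after PCA) are inferred from the number of scores printed, and load_iris again stands in for the post's CSV data.

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier()  # n_neighbors=5 by default

# kNN on the raw 4-feature data: one accuracy per fold (5 folds -> 5 scores)
print('raw features:', cross_val_score(knn, X, y, cv=5))

# kNN after reducing to 3 principal components (3 folds -> 3 scores)
X_pca = PCA(n_components=3).fit_transform(X)
print('after PCA:   ', cross_val_score(knn, X_pca, y, cv=3))

Reducing iris from four features to three principal components keeps almost all of the variance, which is why the PCA scores stay close to the raw-feature scores while also enabling the 3D visualization.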

 

