ML kNNC: Classification Prediction on the Iris Dataset with the kNN Algorithm (PCA Preprocessing + 3D Scatter-Plot Visualization)

Summary: classification prediction on the iris dataset with the kNN algorithm, using PCA preprocessing and 3D scatter-plot visualization.


Contents

Classification prediction on the iris dataset with kNN (PCA preprocessing + 3D scatter-plot visualization)

Design Approach

Output

Core Code


 

 

 

Related Articles

ML kNNC: Classification prediction on the iris dataset with the kNN algorithm (PCA preprocessing + 3D scatter-plot visualization)

ML kNNC: Classification prediction on the iris dataset with the kNN algorithm (PCA preprocessing + 3D scatter-plot visualization) — implementation


 

Classification prediction on the iris dataset with kNN (PCA preprocessing + 3D scatter-plot visualization)

Design Approach
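Based on the title and the output in the next section, the overall flow is: read the iris data, project the four features onto three principal components with PCA, visualize the projection as a 3D scatter plot, and fit kNN classifiers on both the original and the PCA-transformed features. Below is a minimal sketch of the loading, PCA, and 3D-visualization steps; the file name 'iris.data' and the plotting details are assumptions rather than the author's original code (the column names follow the output shown below).

import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection)
from sklearn.decomposition import PCA

# Column names taken from the printed output below; 'iris.data' is a hypothetical local path.
cols = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width', 'type']
df = pd.read_csv('iris.data', names=cols)
X, y = df[cols[:4]].values, df['type'].values

# Project the 4-dimensional feature space onto 3 principal components.
X_pca = PCA(n_components=3).fit_transform(X)

# 3D scatter plot, one color per class.
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for label in sorted(set(y)):
    mask = (y == label)
    ax.scatter(X_pca[mask, 0], X_pca[mask, 1], X_pca[mask, 2], label=label)
ax.set_xlabel('PC1'); ax.set_ylabel('PC2'); ax.set_zlabel('PC3')
ax.legend()
plt.show()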

 

 

 

Output

The standard iris dataset:

(149, 5)
     5.1  3.5  1.4  0.2  Iris-setosa
0    4.9  3.0  1.4  0.2  Iris-setosa
1    4.7  3.2  1.3  0.2  Iris-setosa
2    4.6  3.1  1.5  0.2  Iris-setosa
3    5.0  3.6  1.4  0.2  Iris-setosa
4    5.4  3.9  1.7  0.4  Iris-setosa
(149, 5)
   Sepal_Length  Sepal_Width  Petal_Length  Petal_Width            type
0           4.5          2.3           1.3          0.3     Iris-setosa
1           6.3          2.5           5.0          1.9  Iris-virginica
2           5.1          3.4           1.5          0.2     Iris-setosa
3           6.3          3.3           6.0          2.5  Iris-virginica
4           6.8          3.2           5.9          2.3  Iris-virginica
Split point: 29
label_classes: ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
kNNDIY model, accuracy on the original data: 0.95
kNN model, prediction on the original data: [0.96666667 1.         0.93333333 1.         0.93103448]
kNN model, prediction on the PCA-transformed data: [1.         0.96       0.95918367]
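The last three lines suggest three evaluations: a hand-written kNN ("kNNDIY") scored once on a held-out portion of the data, and scikit-learn's KNeighborsClassifier cross-validated on the original features and on the PCA-reduced features. Below is a sketch of how numbers of this shape could be produced; it is not the author's code, and the neighbor count, test split, and fold counts (5 and 3, chosen only to match the lengths of the score arrays above) are assumptions.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier

def knn_diy_predict(X_train, y_train, X_test, k=5):
    """Minimal hand-written kNN: majority vote among the k nearest training points."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every training point
        nearest = y_train[np.argsort(dists)[:k]]      # labels of the k closest points
        values, counts = np.unique(nearest, return_counts=True)
        preds.append(values[np.argmax(counts)])       # most frequent label wins
    return np.array(preds)

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# "kNNDIY"-style single accuracy on a held-out split.
acc_diy = np.mean(knn_diy_predict(X_train, y_train, X_test) == y_test)

# scikit-learn kNN, cross-validated on the raw and on the PCA-transformed features.
knn = KNeighborsClassifier(n_neighbors=5)
scores_raw = cross_val_score(knn, X, y, cv=5)
scores_pca = cross_val_score(knn, PCA(n_components=3).fit_transform(X), y, cv=3)

print('kNNDIY model, accuracy on the original data:', acc_diy)
print('kNN model, prediction on the original data:', scores_raw)
print('kNN model, prediction on the PCA-transformed data:', scores_pca)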

 

 

 

Core Code

# class KNeighborsClassifier, found at: sklearn.neighbors._classification

class KNeighborsClassifier(NeighborsBase, KNeighborsMixin,
                           SupervisedIntegerMixin, ClassifierMixin):
    """Classifier implementing the k-nearest neighbors vote.

    Read more in the :ref:`User Guide <classification>`.

    Parameters
    ----------
    n_neighbors : int, default=5
        Number of neighbors to use by default for :meth:`kneighbors` queries.

    weights : {'uniform', 'distance'} or callable, default='uniform'
        weight function used in prediction.  Possible values:

        - 'uniform' : uniform weights.  All points in each neighborhood
          are weighted equally.
        - 'distance' : weight points by the inverse of their distance.
          in this case, closer neighbors of a query point will have a
          greater influence than neighbors which are further away.
        - [callable] : a user-defined function which accepts an
          array of distances, and returns an array of the same shape
          containing the weights.

    algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, default='auto'
        Algorithm used to compute the nearest neighbors:

        - 'ball_tree' will use :class:`BallTree`
        - 'kd_tree' will use :class:`KDTree`
        - 'brute' will use a brute-force search.
        - 'auto' will attempt to decide the most appropriate algorithm
          based on the values passed to :meth:`fit` method.

        Note: fitting on sparse input will override the setting of
        this parameter, using brute force.

    leaf_size : int, default=30
        Leaf size passed to BallTree or KDTree.  This can affect the
        speed of the construction and query, as well as the memory
        required to store the tree.  The optimal value depends on the
        nature of the problem.

    p : int, default=2
        Power parameter for the Minkowski metric. When p = 1, this is
        equivalent to using manhattan_distance (l1), and euclidean_distance
        (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.

    metric : str or callable, default='minkowski'
        the distance metric to use for the tree.  The default metric is
        minkowski, and with p=2 is equivalent to the standard Euclidean
        metric. See the documentation of :class:`DistanceMetric` for a
        list of available metrics.
        If metric is "precomputed", X is assumed to be a distance matrix and
        must be square during fit. X may be a :term:`sparse graph`,
        in which case only "nonzero" elements may be considered neighbors.

    metric_params : dict, default=None
        Additional keyword arguments for the metric function.

    n_jobs : int, default=None
        The number of parallel jobs to run for neighbors search.
        ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
        ``-1`` means using all processors. See :term:`Glossary <n_jobs>`
        for more details.
        Doesn't affect :meth:`fit` method.

    Attributes
    ----------
    classes_ : array of shape (n_classes,)
        Class labels known to the classifier

    effective_metric_ : str or callable
        The distance metric used. It will be same as the `metric` parameter
        or a synonym of it, e.g. 'euclidean' if the `metric` parameter set to
        'minkowski' and `p` parameter set to 2.

    effective_metric_params_ : dict
        Additional keyword arguments for the metric function. For most metrics
        will be same with `metric_params` parameter, but may also contain the
        `p` parameter value if the `effective_metric_` attribute is set to
        'minkowski'.

    outputs_2d_ : bool
        False when `y`'s shape is (n_samples, ) or (n_samples, 1) during fit
        otherwise True.

    Examples
    --------
    >>> X = [[0], [1], [2], [3]]
    >>> y = [0, 0, 1, 1]
    >>> from sklearn.neighbors import KNeighborsClassifier
    >>> neigh = KNeighborsClassifier(n_neighbors=3)
    >>> neigh.fit(X, y)
    KNeighborsClassifier(...)
    >>> print(neigh.predict([[1.1]]))
    [0]
    >>> print(neigh.predict_proba([[0.9]]))
    [[0.66666667 0.33333333]]

    See also
    --------
    RadiusNeighborsClassifier
    KNeighborsRegressor
    RadiusNeighborsRegressor
    NearestNeighbors

    Notes
    -----
    See :ref:`Nearest Neighbors <neighbors>` in the online documentation
    for a discussion of the choice of ``algorithm`` and ``leaf_size``.

    .. warning::

       Regarding the Nearest Neighbors algorithms, if it is found that two
       neighbors, neighbor `k+1` and `k`, have identical distances
       but different labels, the results will depend on the ordering of the
       training data.

    https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm
    """

    @_deprecate_positional_args
    def __init__(self, n_neighbors=5,
                 *, weights='uniform', algorithm='auto', leaf_size=30,
                 p=2, metric='minkowski', metric_params=None, n_jobs=None,
                 **kwargs):
        super().__init__(n_neighbors=n_neighbors, algorithm=algorithm,
                         leaf_size=leaf_size, metric=metric, p=p,
                         metric_params=metric_params, n_jobs=n_jobs, **kwargs)
        self.weights = _check_weights(weights)

    def predict(self, X):
        """Predict the class labels for the provided data.

        Parameters
        ----------
        X : array-like of shape (n_queries, n_features), \
                or (n_queries, n_indexed) if metric == 'precomputed'
            Test samples.

        Returns
        -------
        y : ndarray of shape (n_queries,) or (n_queries, n_outputs)
            Class labels for each data sample.
        """
        X = check_array(X, accept_sparse='csr')

        # distances and indices of the k nearest training samples for each query
        neigh_dist, neigh_ind = self.kneighbors(X)
        classes_ = self.classes_
        _y = self._y
        if not self.outputs_2d_:
            _y = self._y.reshape((-1, 1))
            classes_ = [self.classes_]

        n_outputs = len(classes_)
        n_queries = _num_samples(X)
        weights = _get_weights(neigh_dist, self.weights)

        y_pred = np.empty((n_queries, n_outputs), dtype=classes_[0].dtype)
        for k, classes_k in enumerate(classes_):
            # (possibly weighted) majority vote among the k nearest neighbors
            if weights is None:
                mode, _ = stats.mode(_y[neigh_ind, k], axis=1)
            else:
                mode, _ = weighted_mode(_y[neigh_ind, k], weights, axis=1)

            mode = np.asarray(mode.ravel(), dtype=np.intp)
            y_pred[:, k] = classes_k.take(mode)

        if not self.outputs_2d_:
            y_pred = y_pred.ravel()

        return y_pred

    def predict_proba(self, X):
        """Return probability estimates for the test data X.

        Parameters
        ----------
        X : array-like of shape (n_queries, n_features), \
                or (n_queries, n_indexed) if metric == 'precomputed'
            Test samples.

        Returns
        -------
        p : ndarray of shape (n_queries, n_classes), or a list of n_outputs
            of such arrays if n_outputs > 1.
            The class probabilities of the input samples. Classes are ordered
            by lexicographic order.
        """
        X = check_array(X, accept_sparse='csr')

        neigh_dist, neigh_ind = self.kneighbors(X)

        classes_ = self.classes_
        _y = self._y
        if not self.outputs_2d_:
            _y = self._y.reshape((-1, 1))
            classes_ = [self.classes_]

        n_queries = _num_samples(X)

        weights = _get_weights(neigh_dist, self.weights)
        if weights is None:
            weights = np.ones_like(neigh_ind)

        all_rows = np.arange(X.shape[0])
        probabilities = []
        for k, classes_k in enumerate(classes_):
            pred_labels = _y[:, k][neigh_ind]
            proba_k = np.zeros((n_queries, classes_k.size))

            # a simple ':' index doesn't work right
            for i, idx in enumerate(pred_labels.T):  # loop is O(n_neighbors)
                proba_k[all_rows, idx] += weights[:, i]

            # normalize 'votes' into real [0, 1] probabilities
            normalizer = proba_k.sum(axis=1)[:, np.newaxis]
            normalizer[normalizer == 0.0] = 1.0
            proba_k /= normalizer

            probabilities.append(proba_k)

        if not self.outputs_2d_:
            probabilities = probabilities[0]

        return probabilities
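The docstring above already shows basic fit/predict usage. In the setting of this article the classifier would typically sit behind a PCA step; a brief usage sketch follows (the Pipeline, the 3-component PCA, and the 5-neighbor, distance-weighted setting are illustrative assumptions, not the author's configuration).

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# PCA down to 3 components, followed by a distance-weighted 5-NN vote.
model = make_pipeline(PCA(n_components=3),
                      KNeighborsClassifier(n_neighbors=5, weights='distance'))
model.fit(X_train, y_train)
print('test accuracy:', model.score(X_test, y_test))
print('class probabilities of the first test sample:', model.predict_proba(X_test[:1]))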


【9月更文挑战第2天】无论是支持向量机还是K最近邻算法,都是机器学习中非常重要的分类算法。它们在R语言中的实现相对简单,但各有其优缺点和适用场景。在实际应用中,应根据数据的特性、任务的需求以及计算资源的限制来选择合适的算法。通过不断地实践和探索,我们可以更好地掌握这些算法并应用到实际的数据分析和机器学习任务中。