
ML / NB: Classifying and evaluating the fetch_20newsgroups dataset (20-class news text) with the naive Bayes (NB) algorithm (CountVectorizer, stop words not removed)


Output

(Screenshots of the run output, not reproduced here.)

Design approach

(Flowchart of the design, not reproduced here.)
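The design can be sketched end to end as follows: load the 20 newsgroups corpus, split it, turn raw text into word-count vectors with CountVectorizer (stop words deliberately not removed, as in the title), fit a multinomial naive Bayes model, then evaluate. This is a minimal sketch; the split ratio and random_state are illustrative choices, not taken from the original post.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

# Download/load all 18000+ newsgroup posts across 20 categories.
news = fetch_20newsgroups(subset='all')
X_train, X_test, y_train, y_test = train_test_split(
    news.data, news.target, test_size=0.25, random_state=33)

# Bag-of-words counts; no stop_words argument, so stop words are kept.
vec = CountVectorizer()
X_train_vec = vec.fit_transform(X_train)
X_test_vec = vec.transform(X_test)   # reuse the training vocabulary

mnb = MultinomialNB()                # alpha=1.0 Laplace smoothing by default
mnb.fit(X_train_vec, y_train)
y_pred = mnb.predict(X_test_vec)

print('Accuracy:', mnb.score(X_test_vec, y_test))
print(classification_report(y_test, y_pred, target_names=news.target_names))
```

Note that `transform` (not `fit_transform`) is used on the test set, so test documents are encoded against the vocabulary learned from the training data only.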

Core code

class MultinomialNB, found in sklearn.naive_bayes:

# Helper imports used by this excerpt inside sklearn's own module
# (BaseDiscreteNB is an internal base class of sklearn.naive_bayes):
import numpy as np
from scipy.sparse import issparse
from sklearn.utils.extmath import safe_sparse_dot
from sklearn.utils.validation import check_array, check_is_fitted

class MultinomialNB(BaseDiscreteNB):
    """
    Naive Bayes classifier for multinomial models

    The multinomial Naive Bayes classifier is suitable for classification with
    discrete features (e.g., word counts for text classification). The
    multinomial distribution normally requires integer feature counts. However,
    in practice, fractional counts such as tf-idf may also work.

    Read more in the :ref:`User Guide <multinomial_naive_bayes>`.

    Parameters
    ----------
    alpha : float, optional (default=1.0)
        Additive (Laplace/Lidstone) smoothing parameter
        (0 for no smoothing).

    fit_prior : boolean, optional (default=True)
        Whether to learn class prior probabilities or not.
        If false, a uniform prior will be used.

    class_prior : array-like, size (n_classes,), optional (default=None)
        Prior probabilities of the classes. If specified the priors are not
        adjusted according to the data.

    Attributes
    ----------
    class_log_prior_ : array, shape (n_classes,)
        Smoothed empirical log probability for each class.

    intercept_ : property
        Mirrors ``class_log_prior_`` for interpreting MultinomialNB
        as a linear model.

    feature_log_prob_ : array, shape (n_classes, n_features)
        Empirical log probability of features
        given a class, ``P(x_i|y)``.

    coef_ : property
        Mirrors ``feature_log_prob_`` for interpreting MultinomialNB
        as a linear model.

    class_count_ : array, shape (n_classes,)
        Number of samples encountered for each class during fitting. This
        value is weighted by the sample weight when provided.

    feature_count_ : array, shape (n_classes, n_features)
        Number of samples encountered for each (class, feature)
        during fitting. This value is weighted by the sample weight when
        provided.

    Examples
    --------
    >>> import numpy as np
    >>> X = np.random.randint(5, size=(6, 100))
    >>> y = np.array([1, 2, 3, 4, 5, 6])
    >>> from sklearn.naive_bayes import MultinomialNB
    >>> clf = MultinomialNB()
    >>> clf.fit(X, y)
    MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
    >>> print(clf.predict(X[2:3]))
    [3]

    Notes
    -----
    For the rationale behind the names `coef_` and `intercept_`, i.e.
    naive Bayes as a linear classifier, see J. Rennie et al. (2003),
    Tackling the poor assumptions of naive Bayes text classifiers, ICML.

    References
    ----------
    C.D. Manning, P. Raghavan and H. Schuetze (2008). Introduction to
    Information Retrieval. Cambridge University Press, pp. 234-265.
    http://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html
    """
    def __init__(self, alpha=1.0, fit_prior=True, class_prior=None):
        self.alpha = alpha
        self.fit_prior = fit_prior
        self.class_prior = class_prior

    def _count(self, X, Y):
        """Count and smooth feature occurrences."""
        if np.any((X.data if issparse(X) else X) < 0):
            raise ValueError("Input X must be non-negative")
        self.feature_count_ += safe_sparse_dot(Y.T, X)
        self.class_count_ += Y.sum(axis=0)

    def _update_feature_log_prob(self, alpha):
        """Apply smoothing to raw counts and recompute log probabilities"""
        smoothed_fc = self.feature_count_ + alpha
        smoothed_cc = smoothed_fc.sum(axis=1)
        self.feature_log_prob_ = (np.log(smoothed_fc) -
                                  np.log(smoothed_cc.reshape(-1, 1)))

    def _joint_log_likelihood(self, X):
        """Calculate the posterior log probability of the samples X"""
        check_is_fitted(self, "classes_")
        X = check_array(X, accept_sparse='csr')
        return (safe_sparse_dot(X, self.feature_log_prob_.T) +
                self.class_log_prior_)
