estimator.score()
Accuracy: the percentage of predictions that are correct.
Confusion matrix terminology (T = True, F = False, P = Positive, N = Negative):

                    Predicted Condition
                    Positive    Negative
True Condition Pos     TP          FN
               Neg     FP          TN
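The four cells can be computed directly with sklearn's `confusion_matrix`; a minimal sketch on hypothetical toy labels (1 = positive, 0 = negative):

```python
from sklearn.metrics import confusion_matrix

# Hypothetical toy labels for illustration
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# sklearn orders rows/columns by sorted label value, so for labels {0, 1}
# the flattened matrix is (tn, fp, fn, tp); rows = true, columns = predicted
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp, fp, fn, tn)  # 3 1 1 3
```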
Precision
Of the samples predicted positive, the proportion that are truly positive (how exact the positive predictions are).
Recall
Of the truly positive samples, the proportion predicted positive (how complete the coverage is; measures the model's ability to find positive samples).
F1-score: a measure of the model's robustness (the harmonic mean of precision and recall)
F1 = (2TP) / (2TP + FN + FP)
   = (2 x Precision x Recall) / (Precision + Recall)
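A quick numeric check, using hypothetical counts, that the two F1 forms above agree:

```python
# Hypothetical confusion-matrix counts for illustration
tp, fp, fn = 3, 1, 1

precision = tp / (tp + fp)                            # 0.75
recall = tp / (tp + fn)                               # 0.75
f1_a = 2 * tp / (2 * tp + fn + fp)                    # count form
f1_b = 2 * precision * recall / (precision + recall)  # precision/recall form
print(f1_a, f1_b)  # 0.75 0.75
```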
Code example
```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report
import ssl

# Bypass SSL verification so the dataset can be downloaded if it is not cached
ssl._create_default_https_context = ssl._create_unverified_context
data = fetch_20newsgroups(subset="all")

# Split the data
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.33, random_state=42
)

# Feature extraction: weight each article's words against the training vocabulary
tfidf = TfidfVectorizer()
X_train = tfidf.fit_transform(X_train)
print(tfidf.get_feature_names_out())  # get_feature_names() was removed in newer sklearn
X_test = tfidf.transform(X_test)

# Naive Bayes prediction; alpha is the Laplace smoothing coefficient
mlt = MultinomialNB(alpha=1.0)
mlt.fit(X_train, y_train)
y_predict = mlt.predict(X_test)
score = mlt.score(X_test, y_test)
print("score: {}".format(score))  # score: 0.83

# Classification report
print(classification_report(y_test, y_predict, target_names=data.target_names))
"""
                         precision  recall  f1-score  support
alt.atheism                   0.86    0.71      0.78      260
comp.graphics                 0.86    0.77      0.81      321
comp.os.ms-windows.misc       0.82    0.83      0.82      314
...
avg / total                   0.87    0.83      0.83     6220
"""
```
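On the `alpha` parameter: MultinomialNB estimates each word's probability within a class as (count + alpha) / (total + alpha x n_features), i.e. Laplace smoothing when alpha = 1 keeps unseen words from getting zero probability. A minimal sketch on hypothetical toy counts verifying this against the fitted model:

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

# Hypothetical word-count matrix: 2 documents of one class, 3 vocabulary words
X = np.array([[2, 1, 0],
              [1, 0, 0]])
y = [0, 0]

clf = MultinomialNB(alpha=1.0)
clf.fit(X, y)

# Manual Laplace-smoothed estimate: (count + alpha) / (total + alpha * n_features)
counts = X.sum(axis=0)  # per-word counts: [3, 1, 0]
probs = (counts + 1.0) / (counts.sum() + 1.0 * X.shape[1])  # [4/7, 2/7, 1/7]

# feature_log_prob_ stores the log of exactly these smoothed probabilities
print(np.allclose(np.exp(clf.feature_log_prob_[0]), probs))  # True
```

Note that the word with zero count still receives probability 1/7 rather than 0, which is why smoothing matters for sparse text features.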