Introduction to Random Forests
Random forest is a widely used machine learning method. It is built on decision trees: the trees are constructed in a randomized fashion, and the trees in the forest are independent of one another. Once the forest has been built, whenever a new input sample arrives, every decision tree in the forest judges it separately to decide which class the sample should belong to (for classification); the class that receives the most votes becomes the prediction for that sample. Random forests can also be used for unsupervised tasks such as clustering and outlier detection.
A decision tree is a tree structure (binary or otherwise). Each internal node represents a test on a feature, each branch represents an outcome of that test over some range of values, and each leaf node stores a class. Making a decision with a decision tree means starting at the root, testing the corresponding feature of the item to be classified, following the branch that matches its value, and repeating until a leaf is reached; the class stored at that leaf is the decision. For details, see our earlier post.
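To make these two ideas concrete before the full implementation later in this post, here is a minimal, self-contained sketch (all names, thresholds, and data are illustrative, not from any library): a tree routes a sample from the root to a leaf through feature tests, and the forest takes a majority vote over its trees.

import collections

class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None, label=None):
        self.feature, self.threshold = feature, threshold   # test at an internal node
        self.left, self.right = left, right                 # child subtrees
        self.label = label                                  # class stored at a leaf

def tree_predict(node, sample):
    """Walk from the root to a leaf, following the feature tests."""
    if node.label is not None:                  # reached a leaf: return its class
        return node.label
    if sample[node.feature] <= node.threshold:
        return tree_predict(node.left, sample)
    return tree_predict(node.right, sample)

def forest_predict(trees, sample):
    """Each tree votes; the most frequent class wins."""
    votes = [tree_predict(t, sample) for t in trees]
    return collections.Counter(votes).most_common(1)[0][0]

# A toy forest of three decision stumps on feature index 0
stump1 = Node(feature=0, threshold=1.5, left=Node(label="A"), right=Node(label="B"))
stump2 = Node(feature=0, threshold=2.5, left=Node(label="A"), right=Node(label="B"))
stump3 = Node(feature=0, threshold=3.5, left=Node(label="A"), right=Node(label="B"))
print(forest_predict([stump1, stump2, stump3], sample=[2.0]))  # votes B, A, A -> "A"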
Algorithm Workflow
Each tree in the random forest is grown according to the following rules:
- 1. If the training set contains N samples, then for each tree, draw N training samples from the training set at random with replacement; these form that tree's training set.
PS: Note what this implies: every tree's training set is different, and each contains duplicated training samples.
- 2. If each sample has M features, fix a constant m << M. At every split, randomly select a subset of m of the M features, then use some criterion (e.g., information gain) to pick 1 of these m features as the splitting feature for that node.
- 3. Grow each tree as far as possible, with no pruning.
- 4. Build a large number of decision trees by repeating steps 1-3; together they constitute the random forest (a short numpy sketch of the two sampling steps follows this list).
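The two random steps above take only a few lines of numpy to sketch; the toy sizes below are chosen purely for demonstration.

import numpy as np

rng = np.random.default_rng(0)
N, M, m = 8, 5, 2          # toy sizes: N samples, M features, m = subset size (m << M in practice)
X = rng.normal(size=(N, M))

# Step 1: bootstrap sample -- draw N row indices with replacement.
# On average roughly 36.8% of rows are left out (the "out-of-bag" samples).
boot_idx = rng.integers(0, N, size=N)
X_boot = X[boot_idx]

# Step 2: at each split, draw m of the M features without replacement
# and search for the best split only among those features.
feat_idx = rng.choice(M, size=m, replace=False)
candidate_features = X_boot[:, feat_idx]
print(boot_idx, feat_idx)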
The "random" in random forest, mentioned at the start, refers to these two sources of randomness, and both are critical to the forest's classification performance: thanks to them, random forests are not prone to overfitting and are highly robust to noise (for example, insensitive to missing values).
Advantages of Random Forests
- It can handle very high-dimensional data (many features) without explicit feature selection, because feature subsets are chosen at random.
- While the forest is being built, an unbiased estimate of the generalization error is obtained, so the model generalizes well (see the OOB sketch after this list).
- Training is fast and easy to parallelize, since the trees are grown independently of one another.
- During training it can detect interactions between features.
- For imbalanced datasets, it can balance the error.
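The unbiased estimate mentioned in the second point comes from out-of-bag (OOB) samples: bootstrap sampling leaves each tree without roughly one third of the rows, and those held-out rows can be used to score that tree. scikit-learn exposes this directly; a minimal sketch, with dataset and hyperparameters chosen only for illustration:

from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

X, y = load_wine(return_X_y=True)

# oob_score=True asks the forest to evaluate each sample using only the trees
# that did not see it during bootstrap sampling.
clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
clf.fit(X, y)
print("OOB accuracy estimate:", clf.oob_score_)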
Random forests (RF) are mainly used for regression and classification, and they have broad application prospects, from marketing to healthcare and insurance: they can be used to model marketing simulations and to track customer acquisition, retention, and churn, as well as to predict disease risk and patient susceptibility. In recent competitions at home and abroad, including the 2013 Baidu campus movie recommendation contest, the 2014 Alibaba Tianchi big data competition, and Kaggle data science competitions, a considerable share of entrants used random forests, which deliver a large improvement in accuracy without a significant increase in computational cost.
Code Implementation
A random forest classifier implemented from scratch:
import collections
import math
import random

import numpy as np
import pandas as pd
from joblib import Parallel, delayed  # sklearn.externals.joblib was removed from modern scikit-learn


class Tree(object):
    """A single decision tree (binary tree of nodes)."""
    def __init__(self):
        self.split_feature = None
        self.split_value = None
        self.leaf_value = None
        self.tree_left = None
        self.tree_right = None

    def calc_predict_value(self, dataset):
        """Recursively descend the tree to find the leaf a sample belongs to."""
        if self.leaf_value is not None:
            return self.leaf_value
        elif dataset[self.split_feature] <= self.split_value:
            return self.tree_left.calc_predict_value(dataset)
        else:
            return self.tree_right.calc_predict_value(dataset)

    def describe_tree(self):
        """Print the tree as a JSON-like string for easy inspection of its structure."""
        if not self.tree_left and not self.tree_right:
            leaf_info = "{leaf_value:" + str(self.leaf_value) + "}"
            return leaf_info
        left_info = self.tree_left.describe_tree()
        right_info = self.tree_right.describe_tree()
        tree_structure = "{split_feature:" + str(self.split_feature) + \
                         ",split_value:" + str(self.split_value) + \
                         ",left_tree:" + left_info + \
                         ",right_tree:" + right_info + "}"
        return tree_structure


class RandomForestClassifier(object):
    def __init__(self, n_estimators=10, max_depth=-1, min_samples_split=2, min_samples_leaf=1,
                 min_split_gain=0.0, colsample_bytree=None, subsample=0.8, random_state=None):
        """
        Random forest parameters
        ----------
        n_estimators:      number of trees
        max_depth:         tree depth; -1 means unlimited
        min_samples_split: minimum number of samples required to split a node;
                           below this value the node stops splitting
        min_samples_leaf:  minimum number of samples in a leaf; smaller leaves are merged
        min_split_gain:    minimum gain required to split; below this value the node stops splitting
        colsample_bytree:  column sampling scheme, one of [sqrt, log2];
                           sqrt selects sqrt(n_features) features at random,
                           log2 selects log(n_features) features at random;
                           any other value disables column sampling
        subsample:         row sampling ratio
        random_state:      random seed; once set, the n_estimators sample sets stay the same
                           across runs, making experiments reproducible
        """
        self.n_estimators = n_estimators
        self.max_depth = max_depth if max_depth != -1 else float('inf')
        self.min_samples_split = min_samples_split
        self.min_samples_leaf = min_samples_leaf
        self.min_split_gain = min_split_gain
        self.colsample_bytree = colsample_bytree
        self.subsample = subsample
        self.random_state = random_state
        self.trees = None
        self.feature_importances_ = dict()

    def fit(self, dataset, targets):
        """Entry point for model training."""
        assert len(targets.unique()) == 2, "There must be two classes for targets!"
        targets = targets.to_frame(name='label')

        if self.random_state:
            random.seed(self.random_state)
        random_state_stages = random.sample(range(self.n_estimators), self.n_estimators)

        # Two column-sampling schemes
        if self.colsample_bytree == "sqrt":
            self.colsample_bytree = int(len(dataset.columns) ** 0.5)
        elif self.colsample_bytree == "log2":
            self.colsample_bytree = int(math.log(len(dataset.columns)))
        else:
            self.colsample_bytree = len(dataset.columns)

        # Build the decision trees in parallel
        self.trees = Parallel(n_jobs=-1, verbose=0, backend="threading")(
            delayed(self._parallel_build_trees)(dataset, targets, random_state)
            for random_state in random_state_stages)

    def _parallel_build_trees(self, dataset, targets, random_state):
        """Bootstrap-sample (with replacement) a training set and build one decision tree."""
        subcol_index = random.sample(dataset.columns.tolist(), self.colsample_bytree)
        dataset_stage = dataset.sample(n=int(self.subsample * len(dataset)), replace=True,
                                       random_state=random_state).reset_index(drop=True)
        dataset_stage = dataset_stage.loc[:, subcol_index]
        targets_stage = targets.sample(n=int(self.subsample * len(dataset)), replace=True,
                                       random_state=random_state).reset_index(drop=True)

        tree = self._build_single_tree(dataset_stage, targets_stage, depth=0)
        print(tree.describe_tree())
        return tree

    def _build_single_tree(self, dataset, targets, depth):
        """Recursively build a decision tree."""
        # If all samples in the node share one class, or the node has fewer samples than
        # min_samples_split, stop splitting and take the most frequent class.
        if len(targets['label'].unique()) <= 1 or len(dataset) <= self.min_samples_split:
            tree = Tree()
            tree.leaf_value = self.calc_leaf_value(targets['label'])
            return tree

        if depth < self.max_depth:
            best_split_feature, best_split_value, best_split_gain = self.choose_best_feature(dataset, targets)
            left_dataset, right_dataset, left_targets, right_targets = \
                self.split_dataset(dataset, targets, best_split_feature, best_split_value)

            tree = Tree()
            # If either child would have fewer samples than min_samples_leaf,
            # the parent node stops splitting.
            if len(left_dataset) <= self.min_samples_leaf or \
                    len(right_dataset) <= self.min_samples_leaf or \
                    best_split_gain <= self.min_split_gain:
                tree.leaf_value = self.calc_leaf_value(targets['label'])
                return tree
            else:
                # Each time a feature is used for a split, its importance increases by 1
                self.feature_importances_[best_split_feature] = \
                    self.feature_importances_.get(best_split_feature, 0) + 1

                tree.split_feature = best_split_feature
                tree.split_value = best_split_value
                tree.tree_left = self._build_single_tree(left_dataset, left_targets, depth + 1)
                tree.tree_right = self._build_single_tree(right_dataset, right_targets, depth + 1)
                return tree
        # Stop splitting once the tree exceeds the preset depth
        else:
            tree = Tree()
            tree.leaf_value = self.calc_leaf_value(targets['label'])
            return tree

    def choose_best_feature(self, dataset, targets):
        """Find the best partition of the data: the optimal split feature, threshold, and gain."""
        best_split_gain = 1  # the weighted Gini of a binary split is below 1, so 1 is a safe "worst" start
        best_split_feature = None
        best_split_value = None

        for feature in dataset.columns:
            if len(dataset[feature].unique()) <= 100:
                unique_values = sorted(dataset[feature].unique().tolist())
            # If the feature takes too many distinct values, use 100 percentiles as candidate thresholds
            else:
                unique_values = np.unique([np.percentile(dataset[feature], x)
                                           for x in np.linspace(0, 100, 100)])

            # Evaluate the split gain for each candidate threshold and keep the best one
            for split_value in unique_values:
                left_targets = targets[dataset[feature] <= split_value]
                right_targets = targets[dataset[feature] > split_value]
                split_gain = self.calc_gini(left_targets['label'], right_targets['label'])

                if split_gain < best_split_gain:
                    best_split_feature = feature
                    best_split_value = split_value
                    best_split_gain = split_gain
        return best_split_feature, best_split_value, best_split_gain

    @staticmethod
    def calc_leaf_value(targets):
        """Use the most frequent class among the node's samples as the leaf value."""
        label_counts = collections.Counter(targets)
        major_label = max(zip(label_counts.values(), label_counts.keys()))
        return major_label[1]

    @staticmethod
    def calc_gini(left_targets, right_targets):
        """The classification tree uses the Gini index to select the best split point."""
        split_gain = 0
        for targets in [left_targets, right_targets]:
            gini = 1
            # Count the samples of each class, then compute the Gini index
            label_counts = collections.Counter(targets)
            for key in label_counts:
                prob = label_counts[key] * 1.0 / len(targets)
                gini -= prob ** 2
            split_gain += len(targets) * 1.0 / (len(left_targets) + len(right_targets)) * gini
        return split_gain

    @staticmethod
    def split_dataset(dataset, targets, split_feature, split_value):
        """Split the samples by feature and threshold: left <= threshold, right > threshold."""
        left_dataset = dataset[dataset[split_feature] <= split_value]
        left_targets = targets[dataset[split_feature] <= split_value]
        right_dataset = dataset[dataset[split_feature] > split_value]
        right_targets = targets[dataset[split_feature] > split_value]
        return left_dataset, right_dataset, left_targets, right_targets

    def predict(self, dataset):
        """Predict the class of each input sample."""
        res = []
        for _, row in dataset.iterrows():
            pred_list = []
            # Collect every tree's prediction and take the most frequent class as the result
            for tree in self.trees:
                pred_list.append(tree.calc_predict_value(row))

            pred_label_counts = collections.Counter(pred_list)
            pred_label = max(zip(pred_label_counts.values(), pred_label_counts.keys()))
            res.append(pred_label[1])
        return np.array(res)


if __name__ == '__main__':
    df = pd.read_csv("source/wine.txt")
    df = df[df['label'].isin([1, 2])].sample(frac=1, random_state=66).reset_index(drop=True)
    clf = RandomForestClassifier(n_estimators=5,
                                 max_depth=5,
                                 min_samples_split=6,
                                 min_samples_leaf=2,
                                 min_split_gain=0.0,
                                 colsample_bytree="sqrt",
                                 subsample=0.8,
                                 random_state=66)
    train_count = int(0.7 * len(df))
    feature_list = ["Alcohol", "Malic acid", "Ash", "Alcalinity of ash", "Magnesium",
                    "Total phenols", "Flavanoids", "Nonflavanoid phenols", "Proanthocyanins",
                    "Color intensity", "Hue", "OD280/OD315 of diluted wines", "Proline"]
    clf.fit(df.loc[:train_count, feature_list], df.loc[:train_count, 'label'])

    from sklearn import metrics
    print(metrics.accuracy_score(df.loc[:train_count, 'label'],
                                 clf.predict(df.loc[:train_count, feature_list])))
    print(metrics.accuracy_score(df.loc[train_count:, 'label'],
                                 clf.predict(df.loc[train_count:, feature_list])))
Nowadays, most random forest implementations simply call the API in sklearn directly:
from sklearn.ensemble import RandomForestClassifier
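For example, the wine experiment above can be reproduced in a few lines. This is a sketch under our own choices of split and hyperparameters; where sklearn has an equivalent, the settings mirror the hand-rolled version (sklearn's max_features plays the role of colsample_bytree).

from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn import metrics

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=66)

# Hyperparameters chosen to match the from-scratch classifier above
clf = RandomForestClassifier(n_estimators=5, max_depth=5, min_samples_split=6,
                             min_samples_leaf=2, max_features="sqrt", random_state=66)
clf.fit(X_train, y_train)
print(metrics.accuracy_score(y_test, clf.predict(X_test)))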