2.3.6 Visualizing Variable Distributions

Visualizing the distribution of a single variable

```python
plt.figure(figsize=(8, 8))
# Top 20 value counts of employmentLength, keeping missing values visible.
emp_counts = data_train["employmentLength"].value_counts(dropna=False)[:20]
# Pass x/y as keywords; bare positional arguments are deprecated in recent seaborn.
sns.barplot(x=emp_counts.values, y=emp_counts.index)
plt.show()
```
[Figure: horizontal bar chart of the top 20 employmentLength value counts]
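The same check works for any categorical column. A minimal sketch using pandas' built-in plotting on the grade column (our variation, not part of the original walkthrough):

```python
# Horizontal bar chart of grade frequencies, including missing values.
data_train["grade"].value_counts(dropna=False).plot(kind="barh", figsize=(8, 8))
plt.title("Value counts of grade")
plt.show()
```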
Visualizing the distribution of a feature x for different values of y

- First, look at how the categorical variables are distributed across the two y values
```python
train_loan_fr = data_train.loc[data_train['isDefault'] == 1]
train_loan_nofr = data_train.loc[data_train['isDefault'] == 0]
```
```python
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15, 8))
train_loan_fr.groupby('grade')['grade'].count() \
    .plot(kind='barh', ax=ax1, title='Count of grade fraud')
train_loan_nofr.groupby('grade')['grade'].count() \
    .plot(kind='barh', ax=ax2, title='Count of grade non-fraud')
train_loan_fr.groupby('employmentLength')['employmentLength'].count() \
    .plot(kind='barh', ax=ax3, title='Count of employmentLength fraud')
train_loan_nofr.groupby('employmentLength')['employmentLength'].count() \
    .plot(kind='barh', ax=ax4, title='Count of employmentLength non-fraud')
plt.show()
```
[Figure: 2×2 horizontal bar charts of grade and employmentLength counts, fraud vs. non-fraud]
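The same fraud/non-fraud counts can also be read from a single table; a minimal pd.crosstab sketch:

```python
# Rows: grade; columns: isDefault (0 = non-fraud, 1 = fraud).
pd.crosstab(data_train['grade'], data_train['isDefault'])
```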
- Next, look at how the continuous variables are distributed across the two y values
```python
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))
data_train.loc[data_train['isDefault'] == 1]['loanAmnt'] \
    .apply(np.log) \
    .plot(kind='hist', bins=100, title='Log Loan Amt - Fraud',
          color='r', xlim=(-3, 10), ax=ax1)
data_train.loc[data_train['isDefault'] == 0]['loanAmnt'] \
    .apply(np.log) \
    .plot(kind='hist', bins=100, title='Log Loan Amt - Not Fraud',
          color='b', xlim=(-3, 10), ax=ax2)
plt.show()
```
[Figure: side-by-side histograms of log(loanAmnt), fraud vs. not fraud]
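An alternative view overlays the two log-amount distributions as densities, which makes differences in shape easier to compare. A sketch of ours, assuming numpy and seaborn are already imported as np and sns:

```python
# Density estimates of log(loanAmnt) for each target class on one axis.
sns.kdeplot(np.log(data_train.loc[data_train['isDefault'] == 1, 'loanAmnt']), label='fraud')
sns.kdeplot(np.log(data_train.loc[data_train['isDefault'] == 0, 'loanAmnt']), label='not fraud')
plt.legend()
plt.show()
```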
```python
total = len(data_train)
total_amt = data_train.groupby(['isDefault'])['loanAmnt'].sum().sum()
plt.figure(figsize=(12, 5))
plt.subplot(121)  # 1 row, 2 columns of subplots; the final 1 selects the first plot.
# Count of each class of the isDefault feature in data_train.
plot_tr = sns.countplot(x='isDefault', data=data_train)
plot_tr.set_title("Fraud Loan Distribution \n 0: good user | 1: bad user", fontsize=14)
plot_tr.set_xlabel("Is fraud by count", fontsize=16)
plot_tr.set_ylabel('Count', fontsize=16)
for p in plot_tr.patches:
    height = p.get_height()
    plot_tr.text(p.get_x() + p.get_width() / 2., height + 3,
                 '{:1.2f}%'.format(height / total * 100),
                 ha="center", fontsize=15)

percent_amt = data_train.groupby(['isDefault'])['loanAmnt'].sum()
percent_amt = percent_amt.reset_index()
plt.subplot(122)
plot_tr_2 = sns.barplot(x='isDefault', y='loanAmnt', dodge=True, data=percent_amt)
plot_tr_2.set_title("Total Amount in loanAmnt \n 0: good user | 1: bad user", fontsize=14)
plot_tr_2.set_xlabel("Is fraud by percent", fontsize=16)
plot_tr_2.set_ylabel('Total Loan Amount Scalar', fontsize=16)
for p in plot_tr_2.patches:
    height = p.get_height()
    plot_tr_2.text(p.get_x() + p.get_width() / 2., height + 3,
                   '{:1.2f}%'.format(height / total_amt * 100),
                   ha="center", fontsize=15)
```
[Figure: class counts and total loanAmnt by isDefault, bars annotated with percentages]
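The bar-annotation loop appears twice above and is easy to factor out. A sketch (the annotate_percent name is ours):

```python
def annotate_percent(ax, total, fontsize=15):
    """Label each bar with its height as a percentage of `total`."""
    for p in ax.patches:
        height = p.get_height()
        ax.text(p.get_x() + p.get_width() / 2., height + 3,
                '{:1.2f}%'.format(height / total * 100),
                ha="center", fontsize=fontsize)

# Usage: annotate_percent(plot_tr, total); annotate_percent(plot_tr_2, total_amt)
```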
2.3.7 Processing and Inspecting Datetime Features
```python
import datetime

# Convert to datetime; issueDateDT is the number of days between issueDate
# and the earliest date in the dataset (2007-06-01).
data_train['issueDate'] = pd.to_datetime(data_train['issueDate'], format='%Y-%m-%d')
startdate = datetime.datetime.strptime('2007-06-01', '%Y-%m-%d')
data_train['issueDateDT'] = data_train['issueDate'].apply(lambda x: x - startdate).dt.days
```
```python
# Apply the same conversion to the test set.
data_test_a['issueDate'] = pd.to_datetime(data_test_a['issueDate'], format='%Y-%m-%d')
startdate = datetime.datetime.strptime('2007-06-01', '%Y-%m-%d')
data_test_a['issueDateDT'] = data_test_a['issueDate'].apply(lambda x: x - startdate).dt.days
```
```python
plt.hist(data_train['issueDateDT'], label='train')
plt.hist(data_test_a['issueDateDT'], label='test')
plt.legend()
plt.title('Distribution of issueDateDT dates')
# The train and test issueDateDT ranges overlap, so a time-based split
# would not be a sensible validation strategy here.
```
[Figure: overlaid histograms of issueDateDT for train and test]
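To quantify the overlap rather than eyeball it, a quick range check (our addition):

```python
# Compare the date-offset ranges of train and test directly.
print(data_train['issueDateDT'].agg(['min', 'max']))
print(data_test_a['issueDateDT'].agg(['min', 'max']))
```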
2.3.8 Pivot Tables Help Us Understand the Data Better
```python
# Pivot table: the index can have multiple levels, `columns` is optional, and
# the aggregation function `aggfunc` is applied to the column(s) in `values`.
pivot = pd.pivot_table(data_train, index=['grade'], columns=['issueDateDT'],
                       values=['loanAmnt'], aggfunc=np.sum)
```
```python
pivot
```
Sum of loanAmnt by grade (rows) and issueDateDT (columns), middle columns elided:

| grade | 0 | 30 | 61 | 92 | 122 | 153 | 183 | 214 | 245 | 274 | ... | 3926 | 3957 | 3987 | 4018 | 4048 | 4079 | 4110 | 4140 | 4171 | 4201 |
|-------|---|----|----|----|-----|-----|-----|-----|-----|-----|-----|------|------|------|------|------|------|------|------|------|------|
| A | NaN | 53650.0 | 42000.0 | 19500.0 | 34425.0 | 63950.0 | 43500.0 | 168825.0 | 85600.0 | 101825.0 | ... | 13093850.0 | 11757325.0 | 11945975.0 | 9144000.0 | 7977650.0 | 6888900.0 | 5109800.0 | 3919275.0 | 2694025.0 | 2245625.0 |
| B | NaN | 13000.0 | 24000.0 | 32125.0 | 7025.0 | 95750.0 | 164300.0 | 303175.0 | 434425.0 | 538450.0 | ... | 16863100.0 | 17275175.0 | 16217500.0 | 11431350.0 | 8967750.0 | 7572725.0 | 4884600.0 | 4329400.0 | 3922575.0 | 3257100.0 |
| C | NaN | 68750.0 | 8175.0 | 10000.0 | 61800.0 | 52550.0 | 175375.0 | 151100.0 | 243725.0 | 393150.0 | ... | 17502375.0 | 17471500.0 | 16111225.0 | 11973675.0 | 10184450.0 | 7765000.0 | 5354450.0 | 4552600.0 | 2870050.0 | 2246250.0 |
| D | NaN | NaN | 5500.0 | 2850.0 | 28625.0 | NaN | 167975.0 | 171325.0 | 192900.0 | 269325.0 | ... | 11403075.0 | 10964150.0 | 10747675.0 | 7082050.0 | 7189625.0 | 5195700.0 | 3455175.0 | 3038500.0 | 2452375.0 | 1771750.0 |
| E | 7500.0 | NaN | 10000.0 | NaN | 17975.0 | 1500.0 | 94375.0 | 116450.0 | 42000.0 | 139775.0 | ... | 3983050.0 | 3410125.0 | 3107150.0 | 2341825.0 | 2225675.0 | 1643675.0 | 1091025.0 | 1131625.0 | 883950.0 | 802425.0 |
| F | NaN | NaN | 31250.0 | 2125.0 | NaN | NaN | NaN | 49000.0 | 27000.0 | 43000.0 | ... | 1074175.0 | 868925.0 | 761675.0 | 685325.0 | 665750.0 | 685200.0 | 316700.0 | 315075.0 | 72300.0 | NaN |
| G | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 24625.0 | NaN | NaN | ... | 56100.0 | 243275.0 | 224825.0 | 64050.0 | 198575.0 | 245825.0 | 53125.0 | 23750.0 | 25100.0 | 1000.0 |

7 rows × 139 columns
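A NaN here simply means no loans of that grade were issued in that window. If zeros are more convenient downstream, pivot_table accepts a fill_value argument; a minimal sketch:

```python
# Same pivot, but empty grade/date cells become 0 instead of NaN.
pivot_filled = pd.pivot_table(data_train, index=['grade'], columns=['issueDateDT'],
                              values=['loanAmnt'], aggfunc=np.sum, fill_value=0)
```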
2.3.9 Generating a Data Report with pandas_profiling
```python
import pandas_profiling

pfr = pandas_profiling.ProfileReport(data_train)
pfr.to_file("./example.html")
```
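If the full report is slow on a large training set, newer pandas_profiling releases offer a minimal mode that skips the most expensive computations. A sketch, assuming your installed version supports the minimal flag (introduced around v2.4):

```python
# Minimal mode disables costly correlation/interaction sections (version-dependent).
pfr = pandas_profiling.ProfileReport(data_train, minimal=True)
pfr.to_file("./example_minimal.html")
```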
2.4 Summary
Exploratory data analysis is the stage where we first get to know the data and prepare for feature engineering; in many cases the features surfaced during EDA can even be used directly as rules, which shows how important this stage is. The main work here is to build an overall picture of the data using simple statistics, analyze the relationships among variables of each type, and visualize them with suitable plots for direct inspection. We hope this section helps beginners, and we welcome suggestions from learners on its shortcomings.