Using a pandas DataFrame as machine learning training data => just use iloc or as_matrix directly

Overview:

A scikit-learn estimator accepts a DataFrame sliced with iloc directly; alternatively, as_matrix() (deprecated in later pandas in favor of .values / .to_numpy()) flattens the frame to a NumPy array first. Both routes appear in the code below.
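As a minimal sketch of the idea (the toy frame and column names here are made up for illustration; they are not from the KDD99 data):

import pandas as pd

df = pd.DataFrame({"f1": [1, 2, 3], "f2": [4, 5, 6], "label": [0, 1, 0]})

# Route 1: slice features and label out with iloc; scikit-learn accepts
# the resulting DataFrame/Series directly.
x = df.iloc[:, 0:2]
y = df.iloc[:, 2]

# Route 2: convert to a NumPy array first and slice that. as_matrix()
# did this in older pandas; it was deprecated and later removed in
# favor of .values / .to_numpy().
m = df.values
x2 = m[:, 0:2]
y2 = m[:, 2]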

Sample records, from the KDD99 data source:

0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00,normal.
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00,normal.
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00,normal.
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00,snmpgetattack.
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.01,0.00,0.00,0.00,0.00,0.00,snmpgetattack.
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,255,1.00,0.00,0.01,0.00,0.00,0.00,0.00,0.00,snmpgetattack.
0,udp,domain_u,SF,29,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0.00,0.00,0.00,0.00,0.50,1.00,0.00,10,3,0.30,0.30,0.30,0.00,0.00,0.00,0.00,0.00,normal.
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,253,0.99,0.01,0.00,0.00,0.00,0.00,0.00,0.00,normal.
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00,snmpgetattack.
0,tcp,http,SF,223,185,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,4,4,0.00,0.00,0.00,0.00,1.00,0.00,0.00,71,255,1.00,0.00,0.01,0.01,0.00,0.00,0.00,0.00,normal.
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00,snmpgetattack.
0,tcp,http,SF,230,260,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,19,0.00,0.00,0.00,0.00,1.00,0.00,0.11,3,255,1.00,0.00,0.33,0.07,0.33,0.00,0.00,0.00,normal.
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.01,0.00,0.00,0.00,0.00,0.00,normal.
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,252,0.99,0.01,0.00,0.00,0.00,0.00,0.00,0.00,snmpgetattack.
1,tcp,smtp,SF,3170,329,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,2,0.00,0.00,0.00,0.00,1.00,0.00,1.00,54,39,0.72,0.11,0.02,0.00,0.02,0.00,0.09,0.13,normal.
0,tcp,http,SF,297,13787,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,177,255,1.00,0.00,0.01,0.01,0.00,0.00,0.00,0.00,normal. 
0,tcp,http,SF,291,3542,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,12,12,0.00,0.00,0.00,0.00,1.00,0.00,0.00,187,255,1.00,0.00,0.01,0.01,0.00,0.00,0.00,0.00,normal.
0,tcp,http,SF,295,753,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,21,22,0.00,0.00,0.00,0.00,1.00,0.00,0.09,196,255,1.00,0.00,0.01,0.01,0.00,0.00,0.00,0.00,normal. 
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.01,0.00,0.00,0.00,0.00,0.00,snmpgetattack.
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00,snmpgetattack.
0,tcp,http,SF,268,9235,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,5,5,0.00,0.00,0.00,0.00,1.00,0.00,0.00,58,255,1.00,0.00,0.02,0.05,0.00,0.00,0.00,0.00,normal.
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,253,0.99,0.01,0.00,0.00,0.00,0.00,0.00,0.00,snmpgetattack.
0,tcp,http,SF,223,185,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,3,3,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,255,1.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,http,SF,227,8841,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,13,13,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,255,1.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,http,SF,222,19564,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,22,23,0.00,0.00,0.00,0.00,1.00,0.00,0.09,255,255,1.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,ftp_data,SF,740,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,77,33,0.34,0.08,0.34,0.06,0.00,0.00,0.00,0.00,normal.
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,ftp_data,SF,35195,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,10,10,0.00,0.00,0.00,0.00,1.00,0.00,0.00,92,44,0.43,0.07,0.43,0.05,0.00,0.00,0.00,0.00,normal.
0,tcp,ftp_data,SF,8325,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,20,20,0.00,0.00,0.00,0.00,1.00,0.00,0.00,103,54,0.49,0.06,0.49,0.04,0.00,0.00,0.00,0.00,normal.
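Each record is 41 comma-separated features followed by a label string (normal., snmpgetattack., ...). To sanity-check the layout, a small sketch that parses two of the lines above from an in-memory string:

import io
import pandas as pd

sample = """0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00,normal.
0,udp,private,SF,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00,snmpgetattack.
"""

df = pd.read_csv(io.StringIO(sample), header=None)
print(df.shape)        # (2, 42): 41 features + 1 label column
print(df.iloc[:, -1])  # the label strings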

Code:

# -*- coding:utf-8 -*-

import pandas as pd
from sklearn import preprocessing
from sklearn import cross_validation
from sklearn import tree
import pydotplus

def label(x):
    if x == "normal.":
        return 0
    else:
        return 1
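# Note: a vectorized equivalent of apply(label) (a sketch, not in the
# original post) would be:
#   data.loc[:, col_cnt-1] = (data.loc[:, col_cnt-1] != "normal.").astype(int)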

if __name__ == '__main__':
    data = pd.read_csv('../data/kddcup99/corrected', sep=",", header=None)
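    # 'corrected' is the labeled KDD99 test file; it has no header row,
    # so pandas labels the 42 columns with the integers 0..41.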
    print data.columns
    print data.iloc[0,0], data.iloc[0,1]
    print len(data)
    col_cnt = len(data.columns)

    normal = data.loc[data.loc[:, col_cnt-1] == "normal.", :]
    print "normal len:", len(normal)
    guess = data.loc[data.loc[:, col_cnt-1] == "guess_passwd.", :]
    print "guess len:", len(guess)

    data = pd.concat([normal, guess])
    print len(data)

    le = preprocessing.LabelEncoder()
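    # LabelEncoder maps each distinct string in a column to an integer
    # (e.g. udp/tcp/icmp to some 0/1/2 coding), so the protocol, service
    # and flag columns become numeric features a tree can split on.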
    for i in range(col_cnt-1): 
        if isinstance(data.iloc[0,i], str):
            print "tranform string column only:", i
            data.loc[:,i] = le.fit_transform(data.loc[:,i])
    data.loc[:,col_cnt-1] = data.loc[:,col_cnt-1].apply(label)
    print data.iloc[0,0], data.iloc[0,1]
    x = data.iloc[:, range(col_cnt-1)]
    #x = data.iloc[:, [0,4,5,6,7,8,22,23,24,25,26,27,28,29,30]]
    y = data.iloc[:, col_cnt-1]
  
    ''' also OK:
    data = data.as_matrix()
    x = data[:, range(col_cnt-1)]
    y = data[:, col_cnt-1]
    '''
    print "x=>"
    print x.iloc[0:3, :]
    print "y=>"
    print y[-3:]
    #v=load_kdd99("../data/kddcup99/corrected")
    #x,y=get_guess_passwdandNormal(v)
    clf = tree.DecisionTreeClassifier()
    clf = clf.fit(x, y)
    print clf
    print cross_validation.cross_val_score(clf, x, y, n_jobs=-1, cv=10)
    dot_data = tree.export_graphviz(clf, out_file=None)
    graph = pydotplus.graph_from_dot_data(dot_data)
    graph.write_pdf("../photo/6/iris-dt.pdf")

Results:

Int64Index([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
            17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
            34, 35, 36, 37, 38, 39, 40, 41],
           dtype='int64')
0 udp
311029
normal len: 60593
guess len: 4367
64960
transform string column only: 1
transform string column only: 2
transform string column only: 3
0 2
x=>
   0   1   2   3    4    5   6   7   8   9  ...    31   32   33    34   35  \
0   0   2  15   7  105  146   0   0   0   0 ...   255  254  1.0  0.01  0.0   
1   0   2  15   7  105  146   0   0   0   0 ...   255  254  1.0  0.01  0.0   
2   0   2  15   7  105  146   0   0   0   0 ...   255  254  1.0  0.01  0.0   

    36   37   38   39   40  
0  0.0  0.0  0.0  0.0  0.0  
1  0.0  0.0  0.0  0.0  0.0  
2  0.0  0.0  0.0  0.0  0.0  

[3 rows x 41 columns]
y=>
142098    1
142099    1
142101    1
Name: 41, dtype: int64
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=None,
            splitter='best')
[ 0.9561336   0.99892258  0.99938433  0.99984606  0.99984606  0.99969212
  1.          0.99984604  0.99969207  1.        ]
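Each of the ten numbers above is the accuracy on one cross-validation fold. A single summary figure is often handier (a small sketch over the scores above, not part of the original output):

import numpy as np

scores = np.array([0.9561336, 0.99892258, 0.99938433, 0.99984606, 0.99984606,
                   0.99969212, 1.0, 0.99984604, 0.99969207, 1.0])
print(scores.mean(), scores.std())  # mean accuracy is roughly 0.995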

This article is reposted from the cnblogs blog of 张昺华-sky. Original link: http://www.cnblogs.com/bonelee/p/7808478.html. For reprinting, please contact the original author.


