1.5 DBSCAN
1.5.1 Principle
DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a density-based spatial clustering algorithm that tolerates noise.
Dense regions of points form the clusters, and sparse regions act as the boundaries between them. The number of clusters does not need to be specified in advance.
Workflow

```
while (there are still unvisited points):
    pick an arbitrary unvisited point p
    find all points within distance eps of p
    if (the number of such points < min_samples):
        mark p as noise; it does not belong to any cluster
    else:
        mark p as a core sample (core point) and assign it a new cluster label
        for (each neighbor of p within distance eps):
            if (the neighbor has no cluster label yet):
                give it the cluster label just created
            if (the neighbor is itself a core sample):
                visit its neighbors in turn
```

A runnable sketch of this procedure is given after the terminology list below.
Terminology

- Core point
- Points within distance eps of a core point (boundary points)
- Noise
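To make the workflow and the three kinds of points concrete, here is a minimal sketch in plain NumPy. The toy points, the function names simple_dbscan and region_query, and the parameter values are all made up for illustration; this is not scikit-learn's implementation.

```python
import numpy as np

def region_query(X, i, eps):
    # indices of all points within distance eps of point i (including i itself)
    return np.where(np.linalg.norm(X - X[i], axis=1) <= eps)[0]

def simple_dbscan(X, eps=0.5, min_samples=2):
    labels = np.full(len(X), -1)           # -1 = noise until assigned to a cluster
    visited = np.zeros(len(X), dtype=bool)
    cluster = -1
    for i in range(len(X)):
        if visited[i]:
            continue
        visited[i] = True
        neighbors = region_query(X, i, eps)
        if len(neighbors) < min_samples:   # too few neighbors: leave marked as noise
            continue
        cluster += 1                       # i is a core point: open a new cluster
        labels[i] = cluster
        queue = list(neighbors)
        while queue:                       # grow the cluster outward through core points
            j = queue.pop()
            if labels[j] == -1:            # not assigned to any cluster yet
                labels[j] = cluster
            if not visited[j]:
                visited[j] = True
                j_neighbors = region_query(X, j, eps)
                if len(j_neighbors) >= min_samples:   # j is also a core point
                    queue.extend(j_neighbors)
    return labels

# three dense points near the origin, three near (5, 5), one isolated point
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5], [10, 10]], dtype=float)
print(simple_dbscan(X, eps=1.5, min_samples=2))   # expected: two clusters and one noise point, e.g. [0 0 0 1 1 1 -1]
```

Points that keep the label -1 are noise; points that have at least min_samples neighbors within eps are core points; the remaining labeled points are boundary points.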
1.5.2 Class Parameters, Attributes, and Methods
Class

```python
class sklearn.cluster.DBSCAN(eps=0.5, *, min_samples=5, metric='euclidean', metric_params=None, algorithm='auto', leaf_size=30, p=None, n_jobs=None)
```
Attributes

| Attribute | Type | Description |
| --- | --- | --- |
| core_sample_indices_ | ndarray of shape (n_core_samples,) | Indices of the core samples |
| components_ | ndarray of shape (n_core_samples, n_features) | Copy of each core sample found by training |
| labels_ | ndarray of shape (n_samples,) | Cluster label of each point in the dataset given to fit(); noisy samples are labeled -1 |
Methods

| Method | Description |
| --- | --- |
| fit(X[, y, sample_weight]) | Perform DBSCAN clustering from a feature array or distance matrix |
| fit_predict(X[, y, sample_weight]) | Perform DBSCAN clustering from a feature array or distance matrix and return cluster labels |
| get_params([deep]) | Get the parameters of this estimator |
| set_params(**params) | Set the parameters of this estimator |
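Putting the constructor, attributes, and methods together, the short sketch below fits a DBSCAN model on some make_blobs data and reads back the attributes listed above; the dataset and parameter values are arbitrary choices for illustration.

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=100, centers=3, random_state=1)

db = DBSCAN(eps=1.0, min_samples=5)
labels = db.fit_predict(X)              # fit(X) followed by reading labels_ gives the same result

print(labels[:10])                      # cluster label per sample; -1 marks noise
print(db.core_sample_indices_[:10])     # indices of the core samples
print(db.components_.shape)             # one copy (row) per core sample
print(db.get_params()["eps"])           # get_params()/set_params() read or change the settings
```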
1.5.3 DBSCAN Analysis of make_blobs Data
```python
def dbscan_for_blobs():
    myutil = util()
    epss = [0.5, 2, 0.5]
    min_sampless = [5, 5, 20]
    for (eps, min_samples) in zip(epss, min_sampless):
        db = DBSCAN(eps=eps, min_samples=min_samples)
        blobs = make_blobs(random_state=1, centers=1)
        X = blobs[0]
        clusters = db.fit_predict(X)
        title = "eps=" + str(eps) + ",min_samples=" + str(min_samples)
        myutil.draw_scatter_for_Clustering(X, "", clusters, title, "DBSN")
```
eps specifies how far apart samples can be while still being grouped into the same cluster (default eps=0.5); the larger eps is, the more ground each cluster covers, so increasing eps makes the clusters larger.

min_samples is the minimum number of points that must lie within eps of a point for it to become a core point. The larger min_samples is, the fewer core points there are and the more points end up labeled as noise; the smaller min_samples is, the more core points there are and the less noise. The default is min_samples=5.
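To see these two effects in numbers rather than only in the plot below, the following sketch counts clusters and noise points for a few settings on a make_blobs dataset; the data and the parameter grid are illustrative assumptions, so the exact counts will vary.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

X, _ = make_blobs(random_state=1)
for eps in [0.5, 1.0, 2.0]:
    for min_samples in [2, 5, 20]:
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
        n_clusters = len(set(labels) - {-1})    # the noise label -1 is not a cluster
        n_noise = int(np.sum(labels == -1))
        print(f"eps={eps}, min_samples={min_samples}: "
              f"{n_clusters} clusters, {n_noise} noise points")
```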
```python
# Plot the DBSCAN cluster assignments for different eps and min_samples values
mglearn.plots.plot_dbscan()
plt.show()
```
Output

```
min_samples: 2 eps: 1.000000 cluster: [-1 0 0 -1 0 -1 1 1 0 1 -1 -1]
min_samples: 2 eps: 1.500000 cluster: [0 1 1 1 1 0 2 2 1 2 2 0]
min_samples: 2 eps: 2.000000 cluster: [0 1 1 1 1 0 0 0 1 0 0 0]
min_samples: 2 eps: 3.000000 cluster: [0 0 0 0 0 0 0 0 0 0 0 0]
min_samples: 3 eps: 1.000000 cluster: [-1 0 0 -1 0 -1 1 1 0 1 -1 -1]
min_samples: 3 eps: 1.500000 cluster: [0 1 1 1 1 0 2 2 1 2 2 0]
min_samples: 3 eps: 2.000000 cluster: [0 1 1 1 1 0 0 0 1 0 0 0]
min_samples: 3 eps: 3.000000 cluster: [0 0 0 0 0 0 0 0 0 0 0 0]
min_samples: 5 eps: 1.000000 cluster: [-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1]
min_samples: 5 eps: 1.500000 cluster: [-1 0 0 0 0 -1 -1 -1 0 -1 -1 -1]
min_samples: 5 eps: 2.000000 cluster: [-1 0 0 0 0 -1 -1 -1 0 -1 -1 -1]
min_samples: 5 eps: 3.000000 cluster: [0 0 0 0 0 0 0 0 0 0 0 0]
```
1.5.4 DBSCAN Analysis of the Iris Data
```python
def dbscan_for_iris():
    myutil = util()
    X, y = datasets.load_iris().data, datasets.load_iris().target
    # note: min_samples is normally an integer; with 0.5, every point counts as a core point
    dbscan = DBSCAN(min_samples=0.5, eps=1)
    dbscan.fit(X)
    result = dbscan.fit_predict(X)
    title = "iris"
    myutil.draw_scatter_for_Clustering(X, y, result, title, "DBSN")
```
Output

```
Cluster labels assigned to the original iris dataset:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]
iris DBSN training cluster labels:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]
```
1.5.5 DBSCAN Analysis of the Wine Data
```python
def dbscan_for_wine():
    myutil = util()
    X, y = datasets.load_wine().data, datasets.load_wine().target
    dbscan = DBSCAN(min_samples=0.5, eps=50)
    dbscan.fit(X)
    result = dbscan.fit_predict(X)
    title = "wine"
    myutil.draw_scatter_for_Clustering(X, y, result, title, "DBSN")
```
Output

```
Cluster labels assigned to the original wine dataset:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]
wine DBSN training cluster labels:
[0 0 0 1 2 1 3 3 0 0 1 3 3 0 1 3 3 0 4 2 2 2 0 0 2 2 0 3 2 0 3 1 0 3 0 2 2 0 0 2 2 0 0 2 2 0 0 0 0 3 0 3 0 5 0 0 0 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 6 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]
```
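The 13 wine features are on very different numeric scales, which is why eps has to be as large as 50 here and the resulting clusters are still fragmented. As in the two-moons example in 1.5.7, it is worth standardizing the features before clustering; below is a minimal sketch, with eps and min_samples chosen as illustrative guesses rather than tuned values.

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler

X = load_wine().data
X_scaled = StandardScaler().fit_transform(X)     # every feature: mean 0, variance 1
labels = DBSCAN(eps=2.3, min_samples=5).fit_predict(X_scaled)
print("labels found:", sorted(set(labels)))      # -1 marks noise points
```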
1.5.6 DBSCAN Analysis of the Breast Cancer Data
```python
def dbscan_for_breast_cancer():
    myutil = util()
    X, y = datasets.load_breast_cancer().data, datasets.load_breast_cancer().target
    dbscan = DBSCAN(min_samples=0.5, eps=100)
    dbscan.fit(X)
    result = dbscan.fit_predict(X)
    title = "breast cancer"
    myutil.draw_scatter_for_Clustering(X, y, result, title, "DBSN")
```
Output

```
Cluster labels assigned to the original breast cancer dataset:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0…1 1 1 1 1 1 1 0 0 0 0 0 0 1]
breast cancer DBSN training cluster labels:
[ 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 1 1 1 3 …1 15 1 1 1 1 1 1 7 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1…1 1 1 1 1 1 1 1 1 1 1 1 23 1 1 1 1]
```
1.5.7 DBSCAN Analysis of the Two-Moons Data
```python
# Two moons
def dbscan_for_two_moon():
    myutil = util()
    X, y = datasets.make_moons(n_samples=200, noise=0.05, random_state=0)
    scaler = StandardScaler()
    scaler.fit(X)
    X_scaled = scaler.transform(X)
    # Print the shape of the scaled data: (200, 2), i.e. 200 samples with 2 features
    print("Shape of the processed data:", X_scaled.shape)
    dbscan = DBSCAN()
    result = dbscan.fit_predict(X_scaled)
    title = "two moons"
    # Plot the cluster assignments
    myutil.draw_scatter_for_Clustering(X, y, result, title, "DBSCAN")
```
Output

```
Shape of the processed data: (200, 2)
Cluster labels assigned to the original two-moons dataset:
[0 1 1 0 1 1 0 1 0 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 0 1 1 1 1 0 0 0 1 1 0 1 1 0 0 1 1 0 0 1 1 0 0 0 1 1 0 1 1 0 1 0 0 1 0 0 1 0 1 0 1 0 0 1 0 0 1 0 1 1 1 0 1 0 0 1 1 0 1 1 1 0 0 0 1 1 0 0 1 0 1 1 1 1 0 1 1 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 1 1 0 0 0 1 0 1 0 0 1 1 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 0 0 1 1 0 1 1 1 0 0 1 0 1 1 0 0 1 1 0 1 1 1 0 1 1 1 0 0 0 0 1 1 1 0 0 0 1 0 1 1 1 0 0 1 0 0 0 0 0 0 1 0 1 1 0 1]
two moons DBSCAN training cluster labels:
[0 1 1 0 1 1 0 1 0 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 0 1 1 1 1 0 0 0 1 1 0 1 1 0 0 1 1 0 0 1 1 0 0 0 1 1 0 1 1 0 1 0 0 1 0 0 1 0 1 0 1 0 0 1 0 0 1 0 1 1 1 0 1 0 0 1 1 0 1 1 1 0 0 0 1 1 0 0 1 0 1 1 1 1 0 1 1 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 1 1 0 0 0 1 0 1 0 0 1 1 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 0 0 1 1 0 1 1 1 0 0 1 0 1 1 0 0 1 1 0 1 1 1 0 1 1 1 0 0 0 0 1 1 1 0 0 0 1 0 1 1 1 0 0 1 0 0 0 0 0 0 1 0 1 1 0 1]
```