I have a long audio file created by concatenating a 3-minute WAV 30 times; the entire recording is a single person speaking. When I run inference with the code below, modelscope-funasr's speaker diarization reports over a hundred speakers. Why does this happen?
from funasr import AutoModel

model = AutoModel(
    model="/workspace/model/download/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch",
    model_revision="v2.0.4",
    vad_model="/workspace/model/download/speech_fsmn_vad_zh-cn-16k-common-pytorch",
    vad_model_revision="v2.0.4",
    punc_model="/workspace/model/download/punc_ct-transformer_zh-cn-common-vocab272727-pytorch",
    punc_model_revision="v2.0.4",
    spk_model="/workspace/model/download/speech_campplus_sv_zh-cn_16k-common",
    spk_model_revision="v2.0.2",
)
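For context, here is a minimal sketch of how the speaker count can be tallied from a FunASR result. The `sentence_info` list with per-sentence `spk` labels reflects the output shape FunASR produces when `spk_model` is enabled, but treat the exact keys and the `model.generate(...)` call as assumptions about your setup rather than verified against your version:

```python
# Sketch: count the distinct speaker labels in one FunASR result dict.
# Assumption: with spk_model enabled, each result carries a
# "sentence_info" list whose entries have an integer "spk" label.

def count_speakers(result):
    """Return the number of distinct speaker labels in a result dict."""
    return len({seg["spk"] for seg in result.get("sentence_info", [])})

# In practice the result would come from something like:
#   res = model.generate(input="long_audio.wav")
#   n = count_speakers(res[0])
# Here a hypothetical mock result illustrates the expected shape:
mock_result = {
    "sentence_info": [
        {"text": "sentence one", "spk": 0},
        {"text": "sentence two", "spk": 1},
        {"text": "sentence three", "spk": 0},
    ]
}

print(count_speakers(mock_result))  # → 2
```

If this count is far above 1 on single-speaker audio, the clustering step is splitting one voice into many labels, which is the symptom being asked about.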