Interspeech (Top Speech Conference) Paper Walkthrough | Audio Tagging with Compact Feedforward Sequential Memory Network and Audio-to-Audio Ratio Based Data Augmentation

Summary: Interspeech is the world's largest and most comprehensive top-tier conference in the speech field. This article covers the accepted paper by Zhiying Huang, Shiliang Zhang, and Ming Lei.

In 2019, the 20th annual conference of the International Speech Communication Association, INTERSPEECH, will be held in Graz, Austria from September 15 to 19. Interspeech is the world's largest and most comprehensive top-tier conference in the speech field; nearly 2,000 practitioners and researchers from industry and academia will take part in keynote talks, tutorials, paper presentations, and the main-conference exhibition. Alibaba has eight papers accepted this year. This article covers the paper by Zhiying Huang, Shiliang Zhang, and Ming Lei, "Audio Tagging with Compact Feedforward Sequential Memory Network and Audio-to-Audio Ratio Based Data Augmentation".

Download the paper

Paper Walkthrough

Audio tagging is one of the tasks in audio scene and event analysis; its goal is to determine which sound events are present in an audio clip. In recent years, convolutional neural networks (CNNs) have shown particularly strong performance on audio tagging. However, the high model complexity of CNNs makes them hard to deploy in real products. In addition, some specific domains are low-resource, and in those cases audio tagging performance cannot be guaranteed.

In this INTERSPEECH 2019 work, we apply the compact Feedforward Sequential Memory Network (cFSMN) to the audio tagging task to address the problem of high model complexity. We also propose a data augmentation method based on the audio-to-audio ratio (AAR) to improve audio tagging performance in low-resource settings.

Audio tagging with cFSMN: the corresponding model structure is shown in Figure 1. The model input is the acoustic features of an audio clip, and the output is the probability of each sound event. The model stacks a cFSMN with a deep neural network (DNN), i.e., a hybrid cFSMN-DNN model.
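As a rough illustration of this structure, the sketch below builds a minimal cFSMN-DNN tagger in PyTorch. The layer sizes, memory orders (look-back/look-ahead), number of layers, and temporal pooling are illustrative assumptions rather than the configuration used in the paper; the memory block is approximated here as a depthwise 1-D convolution over the low-rank projection with a skip connection.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CFSMNLayer(nn.Module):
    """One cFSMN layer (illustrative): affine hidden transform, low-rank
    projection, and a memory block realized as a depthwise 1-D convolution
    over a fixed look-back/look-ahead window, added back to the projection."""
    def __init__(self, in_dim, hidden_dim, proj_dim, lookback=10, lookahead=2):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hidden_dim)
        self.proj = nn.Linear(hidden_dim, proj_dim, bias=False)
        self.lookback, self.lookahead = lookback, lookahead
        self.memory = nn.Conv1d(proj_dim, proj_dim,
                                kernel_size=lookback + lookahead + 1,
                                groups=proj_dim, bias=False)

    def forward(self, x):                        # x: (batch, time, in_dim)
        p = self.proj(torch.relu(self.hidden(x)))       # (batch, time, proj_dim)
        pt = p.transpose(1, 2)                           # (batch, proj_dim, time)
        pt = F.pad(pt, (self.lookback, self.lookahead))  # frame t sees [t-N1, t+N2]
        mem = self.memory(pt).transpose(1, 2)            # (batch, time, proj_dim)
        return p + mem                                   # skip connection

class CFSMNDNNTagger(nn.Module):
    """Hypothetical cFSMN-DNN audio tagger: stacked cFSMN layers, DNN layers,
    average pooling over time, and per-event sigmoid outputs."""
    def __init__(self, feat_dim=64, num_events=527):
        super().__init__()
        self.cfsmn = nn.Sequential(CFSMNLayer(feat_dim, 512, 128),
                                   CFSMNLayer(128, 512, 128))
        self.dnn = nn.Sequential(nn.Linear(128, 512), nn.ReLU(),
                                 nn.Linear(512, num_events))

    def forward(self, feats):                    # feats: (batch, time, feat_dim)
        logits = self.dnn(self.cfsmn(feats)).mean(dim=1)  # pool over time
        return torch.sigmoid(logits)             # probability of each sound event

# Usage sketch: a batch of 8 clips, 100 frames of 64-dim features each.
probs = CFSMNDNNTagger()(torch.randn(8, 100, 64))
print(probs.shape)  # torch.Size([8, 527])
```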

AAR-based data augmentation: the pipeline is shown in Figure 2. First, two audio clips, A and B, are randomly drawn from the existing training set. Then, the energy of B is adjusted according to a chosen AAR to obtain B'. Finally, A and B' are superimposed at the signal level, and the newly generated clip A_B' is added as new training data.
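A minimal sketch of this augmentation is shown below. It assumes the AAR is the energy ratio of A to the rescaled B' expressed in dB, that the two clips have the same length, and that the AAR is sampled from a small symmetric range; the exact definition and sampling range used in the paper may differ.

```python
import numpy as np

def mix_at_aar(clip_a, clip_b, aar_db):
    """Superimpose clip B onto clip A after rescaling B so that the energy
    ratio E(A) / E(B') equals the requested AAR (assumed here to be in dB)."""
    e_a = float(np.mean(clip_a ** 2)) + 1e-12     # energy of A
    e_b = float(np.mean(clip_b ** 2)) + 1e-12     # energy of B
    target_e_b = e_a / (10.0 ** (aar_db / 10.0))  # desired energy of B'
    clip_b_scaled = clip_b * np.sqrt(target_e_b / e_b)   # B'
    return clip_a + clip_b_scaled                 # new clip A_B'

# Usage sketch: draw two clips and a random AAR (stand-ins for real training data).
rng = np.random.default_rng(0)
clip_a = rng.standard_normal(16000).astype(np.float32)
clip_b = rng.standard_normal(16000).astype(np.float32)
augmented = mix_at_aar(clip_a, clip_b, aar_db=rng.uniform(-5.0, 5.0))
```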

Figure 1. Audio tagging with cFSMN

Figure 2. AAR-based data augmentation

Table 1. Performance comparison of different methods

Performance: Table 1 lists the performance of the different methods. AlexNet(BN) is a robust CNN system and is the best-performing baseline. On the same training set, the cFSMN-based method achieves performance comparable to AlexNet(BN) while using only about 1/30 of its parameters (1.9M). Adding the AAR-based data augmentation on top of cFSMN improves performance further, reaching an AUC of 0.932. On the same training and test sets, this is the best result among published work.
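For reference, AUC for multi-label audio tagging is commonly computed per event class and then averaged; the toy example below shows that computation with scikit-learn (the arrays are made up purely for illustration and do not reproduce the numbers in Table 1).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy multi-label example: 4 clips, 3 sound-event classes.
y_true = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 1, 0],
                   [0, 0, 1]])
y_score = np.array([[0.9, 0.2, 0.1],
                    [0.3, 0.8, 0.7],
                    [0.6, 0.9, 0.2],
                    [0.1, 0.3, 0.8]])
# AUC averaged over event classes (macro average).
print(roc_auc_score(y_true, y_score, average="macro"))
```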

Summary: Our work is the first to apply cFSMN to the audio tagging task, achieving performance comparable to AlexNet(BN). We also propose the AAR-based data augmentation method to further improve audio tagging performance. As a next step, taking the dependencies between sound events into account, we will explore the distribution of different sound events to obtain better results.

Paper Abstract

Audio tagging aims to identify the presence or absence of audio events in the audio clip. Recently, a lot of researchers have paid attention to explore different model structures to improve the performance of audio tagging. Convolutional neural network (CNN) is the most popular choice among a wide variety of model structures, and it's successfully applied to audio events prediction task. However, the model complexity of CNN is relatively high, which is not efficient enough to ship in real product. In this paper, compact Feedforward Sequential Memory Network (cFSMN) is proposed for audio tagging task. Experimental results show that cFSMN-based system yields a comparable performance with the CNN-based system. Meanwhile, an audio-to-audio ratio (AAR) based data augmentation method is proposed to further improve the classifier performance. Finally, with raw waveforms of the balanced training set of Audio Set which is a published standard database, our system can achieve a state-of-the-art performance with AUC being 0.932. Moreover, cFSMN-based model has only 1.9 million parameters, which is only about 1/30 of the CNN-based model.
Index Terms: Audio Set, audio tagging, compact feedforward sequential memory network, audio-to-audio ratio, data augmentation

Compiled by the Alibaba Cloud Developer Community
