Impact of Mixup Hyperparameter Tunning on Deep Learning-based Systems for Acoustic Scene Classification

Zhor Diffallah, H. Ykhlef, Hafida Bouarfa, F. Ykhlef

2021 International Conference on Recent Advances in Mathematics and Informatics (ICRAMI), published 2021-09-21
DOI: 10.1109/ICRAMI52622.2021.9585948
Citations: 2

Abstract
Acoustic scene classification (ASC) refers to the identification of the environment in which audio excerpts have been recorded; it associates a semantic label with each audio recording. The task has recently drawn considerable attention as devices such as smartphones, autonomous robots, and security systems acquire the ability to perceive sound. State-of-the-art acoustic scene classification relies heavily on deep neural network models. However, the complexity of these models makes them prone to overfitting, and the most widely used approach to mitigate this is data augmentation. In this paper, we design and analyze the behavior of multiple deep learning-based acoustic scene classification systems, built on two deep convolutional neural network architectures with different characteristics. This work also explores in depth the use of the Mixup data augmentation method and the effects of varying its hyperparameter. The obtained results indicate that proper tuning of the Mixup hyperparameter significantly improves classification performance, taking into account the network architecture employed.
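
For readers unfamiliar with the method: in its standard formulation, Mixup generates virtual training examples as convex combinations of pairs of inputs and their one-hot labels, with the mixing weight drawn from a Beta(alpha, alpha) distribution; alpha is the hyperparameter whose tuning the paper studies. The following is a minimal NumPy sketch of that standard formulation, not the authors' implementation; the function name, batch shapes, and alpha values are illustrative assumptions.

import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Standard Mixup: blend a batch with a shuffled copy of itself.

    x: (batch, ...) feature array, e.g. log-mel spectrograms.
    y: (batch, num_classes) one-hot label array.
    alpha: Beta-distribution hyperparameter controlling mixing strength
           (alpha -> 0 effectively recovers the original, unmixed batch).
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)              # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))            # random pairing of examples
    x_mix = lam * x + (1.0 - lam) * x[perm]   # convex combination of inputs
    y_mix = lam * y + (1.0 - lam) * y[perm]   # same combination of labels
    return x_mix, y_mix

# Illustrative usage: mix a batch of 8 spectrograms over 10 scene classes.
x = np.random.rand(8, 40, 128).astype(np.float32)
y = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, size=8)]
x_mix, y_mix = mixup_batch(x, y, alpha=0.4)

Smaller alpha values concentrate the Beta distribution near 0 and 1 (weak mixing), while larger values push the mixing weight toward 0.5 (strong mixing); this trade-off is what hyperparameter tuning of Mixup explores.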