Eleni Tsalera, A. Papadakis, M. Samarakou, I. Voyiatzis
{"title":"现实条件下基于cnn的声流分割与分类","authors":"Eleni Tsalera, A. Papadakis, M. Samarakou, I. Voyiatzis","doi":"10.1145/3575879.3576020","DOIUrl":null,"url":null,"abstract":"Audio datasets support the training and validation of Machine Learning algorithms in audio classification problems. Such datasets include different, arbitrarily chosen audio classes. We initially investigate a unifying approach, based on the mapping of audio classes according to the Audioset ontology. Using the ESC-10 audio dataset, a tree-like representation of its classes is created. In addition, we employ an audio similarity calculation tool based on the values of extracted features (spectrum centroid, the spectrum flux and the spectral roll-off). This way the audio classes are connected both semantically and in feature-based manner. Employing the same dataset, ESC-10, we perform sound classification using CNN-based algorithms, after transforming the sound excerpts into images (based on their Mel spectrograms). The YAMNet and VGGish networks are used for audio classification and the accuracy reaches 90%. We extend the classification algorithm with segmentation logic, so that it can be applied into more complex sound excerpts, where multiple sound types are included in a sequential and/or overlapping manner. Quantitative metrics are defined on the behavior of the combined segmentation and segmentation functionality, including two key parameters for the merging operation, the minimum duration of the identified sounds and the intervals. The qualitative metrics are related to the number of sound identification events for a concatenated sound excerpt of the dataset and per each sound class. This way the segmentation logic can operate in a fine- and coarse-grained manner while the dataset and the individual sound classes are characterized in terms of clearness and distinguishability.","PeriodicalId":164036,"journal":{"name":"Proceedings of the 26th Pan-Hellenic Conference on Informatics","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CNN-based Segmentation and Classification of Sound Streams under realistic conditions\",\"authors\":\"Eleni Tsalera, A. Papadakis, M. Samarakou, I. Voyiatzis\",\"doi\":\"10.1145/3575879.3576020\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Audio datasets support the training and validation of Machine Learning algorithms in audio classification problems. Such datasets include different, arbitrarily chosen audio classes. We initially investigate a unifying approach, based on the mapping of audio classes according to the Audioset ontology. Using the ESC-10 audio dataset, a tree-like representation of its classes is created. In addition, we employ an audio similarity calculation tool based on the values of extracted features (spectrum centroid, the spectrum flux and the spectral roll-off). This way the audio classes are connected both semantically and in feature-based manner. Employing the same dataset, ESC-10, we perform sound classification using CNN-based algorithms, after transforming the sound excerpts into images (based on their Mel spectrograms). The YAMNet and VGGish networks are used for audio classification and the accuracy reaches 90%. 
We extend the classification algorithm with segmentation logic, so that it can be applied into more complex sound excerpts, where multiple sound types are included in a sequential and/or overlapping manner. Quantitative metrics are defined on the behavior of the combined segmentation and segmentation functionality, including two key parameters for the merging operation, the minimum duration of the identified sounds and the intervals. The qualitative metrics are related to the number of sound identification events for a concatenated sound excerpt of the dataset and per each sound class. This way the segmentation logic can operate in a fine- and coarse-grained manner while the dataset and the individual sound classes are characterized in terms of clearness and distinguishability.\",\"PeriodicalId\":164036,\"journal\":{\"name\":\"Proceedings of the 26th Pan-Hellenic Conference on Informatics\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 26th Pan-Hellenic Conference on Informatics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3575879.3576020\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 26th Pan-Hellenic Conference on Informatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3575879.3576020","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
CNN-based Segmentation and Classification of Sound Streams under realistic conditions
Audio datasets support the training and validation of machine learning algorithms for audio classification problems. Such datasets include different, arbitrarily chosen audio classes. We first investigate a unifying approach based on mapping audio classes onto the AudioSet ontology. Using the ESC-10 audio dataset, a tree-like representation of its classes is created. In addition, we employ an audio similarity calculation based on the values of extracted features (spectral centroid, spectral flux, and spectral roll-off). In this way the audio classes are connected both semantically and in a feature-based manner. Employing the same dataset, ESC-10, we perform sound classification with CNN-based algorithms after transforming the sound excerpts into images (their Mel spectrograms). The YAMNet and VGGish networks are used for audio classification, and the accuracy reaches 90%. We extend the classification algorithm with segmentation logic so that it can be applied to more complex sound excerpts, where multiple sound types occur sequentially and/or overlap. Quantitative metrics are defined for the behavior of the combined segmentation and classification functionality, including two key parameters of the merging operation: the minimum duration of identified sounds and the interval between them. The qualitative metrics relate to the number of sound identification events for a concatenated sound excerpt of the dataset and for each sound class. In this way the segmentation logic can operate in both a fine- and a coarse-grained manner, while the dataset and the individual sound classes are characterized in terms of clarity and distinguishability.
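The feature-based similarity step lends itself to a short illustration. Below is a minimal sketch, assuming librosa for feature extraction and a normalized Euclidean distance over mean feature values as the similarity measure; the paper does not name its tool or distance metric, and the file names are hypothetical.

```python
import numpy as np
import librosa

def spectral_features(path, sr=22050):
    """Return mean spectral centroid, flux, and roll-off for one clip."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    S = np.abs(librosa.stft(y))
    centroid = librosa.feature.spectral_centroid(S=S, sr=sr).mean()
    rolloff = librosa.feature.spectral_rolloff(S=S, sr=sr).mean()
    # Spectral flux: frame-to-frame change of the magnitude spectrum
    # (librosa has no dedicated flux function, so compute it directly).
    flux = np.sqrt((np.diff(S, axis=1) ** 2).sum(axis=0)).mean()
    return np.array([centroid, flux, rolloff])

def similarity(path_a, path_b):
    """Smaller distance = more similar clips (illustrative metric only)."""
    fa, fb = spectral_features(path_a), spectral_features(path_b)
    # Rescale each feature so no single one dominates the distance.
    scale = np.maximum(np.abs(fa), np.abs(fb)) + 1e-9
    return float(np.linalg.norm((fa - fb) / scale))

# Hypothetical clip names, e.g. two ESC-10 excerpts:
# print(similarity("dog_bark.wav", "rain.wav"))
```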
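For the classification step, the following sketch runs the publicly released YAMNet from TensorFlow Hub on a mono 16 kHz waveform. The public model computes its Mel spectrogram internally and predicts the 521 AudioSet classes; the paper's retraining and mapping onto the ten ESC-10 classes is not reproduced here, and the input file name is hypothetical.

```python
import numpy as np
import tensorflow_hub as hub
import librosa

# Public YAMNet model; it expects mono float32 audio sampled at 16 kHz.
yamnet = hub.load("https://tfhub.dev/google/yamnet/1")

waveform, _ = librosa.load("siren.wav", sr=16000, mono=True)  # hypothetical file
scores, embeddings, mel_spectrogram = yamnet(waveform.astype(np.float32))

# scores has shape (frames, 521): per-frame probabilities over AudioSet
# classes. Averaging over frames gives a clip-level prediction.
clip_scores = scores.numpy().mean(axis=0)
print("top AudioSet class index:", int(clip_scores.argmax()))
```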
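Finally, the merging operation over per-window predictions can be sketched with the two key parameters the abstract names: a minimum event duration and an interval (gap) across which same-class detections are joined. The abstract does not give the exact rules, so the post-processing logic, parameter names, and defaults below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Event:
    label: str
    start: float  # seconds
    end: float    # seconds

def merge_windows(labels, hop=0.5, min_duration=1.0, max_gap=0.5):
    """Post-process per-window class labels into merged sound events.

    Parameter names and defaults are illustrative, not the paper's.
    Smaller min_duration/max_gap values yield fine-grained segmentation,
    larger values a coarser one.
    """
    # 1. Fuse runs of identical consecutive window labels into raw events
    #    (windows are contiguous, hop seconds apart).
    events = []
    for i, lab in enumerate(labels):
        t = i * hop
        if events and events[-1].label == lab:
            events[-1].end = t + hop
        else:
            events.append(Event(lab, t, t + hop))
    # 2. Discard events shorter than the minimum duration,
    #    e.g. a single misclassified window inside a longer sound.
    events = [e for e in events if e.end - e.start >= min_duration]
    # 3. Join same-class events separated by a gap of at most max_gap seconds.
    merged = []
    for e in events:
        if merged and merged[-1].label == e.label and e.start - merged[-1].end <= max_gap:
            merged[-1].end = e.end
        else:
            merged.append(e)
    return merged

# Six windows of 0.5 s each; the stray "rain" window is absorbed.
print(merge_windows(["dog", "dog", "rain", "dog", "dog", "dog"]))
# -> [Event(label='dog', start=0.0, end=3.0)]
```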