Frequency Embedded Regularization Network for Continuous Music Emotion Recognition
Meixian Zhang, Yonghua Zhu, Ning Ge, Yunwen Zhu, Tianyu Feng, Wenjun Zhang
2021 IEEE International Conference on Progress in Informatics and Computing (PIC), published 2021-12-17
DOI: 10.1109/PIC53636.2021.9687003
Citations: 1
Abstract
Music emotion recognition (MER) has attracted much interest over the past decades for efficient music information organization and retrieval. Although deep learning has been applied to this field to avoid the complexity of hand-crafted feature engineering, preserving the original information within music pieces during processing remains a challenge. In this paper, we propose a novel method named Frequency Embedded Regularization Network (FERN) for continuous MER to overcome this issue. Specifically, we apply a regularized ResNet to automatically extract features from spectrograms with embedded frequency channels. The receptive fields in the deep architecture are adjusted by modifying the kernel size so that the original information is preserved completely. Furthermore, Long Short-Term Memory (LSTM) is employed to learn the sequential relationships among the extracted contextual features. We conduct experiments on the benchmark dataset 1000 Songs. The experimental results show that our method is superior to most of the compared methods in terms of extracting salient features and capturing the distribution of emotions within music pieces.
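The abstract does not specify how the "embedded frequency channels" are constructed, but one plausible reading is a coordinate-style channel that encodes each spectrogram bin's frequency position, stacked alongside the magnitudes before the convolutional network. The sketch below is a minimal, hypothetical illustration of that idea in plain Python (the function name and the normalization to [0, 1] are assumptions, not the paper's actual design):

```python
def embed_frequency_channel(spec):
    """Stack a normalized frequency-index channel onto a spectrogram.

    spec: a spectrogram as a list of `freq_bins` rows, each a list of
          `time_frames` magnitude values.
    Returns a 2-channel tensor [spec, freq_channel], where the second
    channel holds each bin's frequency position scaled to [0, 1],
    repeated across all time frames.

    NOTE: hypothetical sketch of a coordinate-style frequency embedding;
    the paper does not publish its exact construction.
    """
    freq_bins = len(spec)
    time_frames = len(spec[0])
    freq_channel = [
        [i / (freq_bins - 1)] * time_frames  # constant along time
        for i in range(freq_bins)
    ]
    return [spec, freq_channel]


# Example: 4 frequency bins x 3 time frames of dummy magnitudes.
spec = [[0.1, 0.2, 0.3],
        [0.4, 0.5, 0.6],
        [0.7, 0.8, 0.9],
        [1.0, 1.1, 1.2]]
channels = embed_frequency_channel(spec)
print(len(channels), len(channels[1]), channels[1][0], channels[1][-1])
```

Because the extra channel makes absolute frequency position visible to every convolution, a CNN could in principle distinguish, say, a low-frequency bass pattern from the same pattern transposed upward, which is one way to keep "original information" that plain weight-shared convolutions discard.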