Mood-Enhancing Music Recommendation System based on Audio Signals and Emotions
V. Mounika, Y. Charitha
2023 International Conference on Inventive Computation Technologies (ICICT), published 2023-04-26
DOI: 10.1109/ICICT57646.2023.10134211
Citations: 1
Abstract
Identifying emotion in speech is a significant topic in the field of human-computer interaction, and many strategies for recognizing emotions in human speech have been introduced by various researchers; one aim of such models is to identify sounds in audio files. Alongside gender recognition and mood-dependent YouTube video playback, the proposed system features speech emotion detection, which listens for emotions such as happiness, anger, and sadness in audio cues. This output is sent as input to YouTube, which plays a song suited to the user's mood, helping the person's mood stabilize quickly. Using a CNN-based feature extraction approach, the feature vector is processed with NumPy, and the audio classification is carried out on MFCC features. This study mainly uses two datasets, RAVDESS and SAVEE, on which a deep model was trained. Google Colab is the platform used for code execution.
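The abstract mentions MFCC features computed with NumPy as the input to the CNN classifier. The paper does not give its exact extraction parameters, so the following is only a minimal from-scratch sketch of how MFCCs are typically computed (frame, window, power spectrum, mel filterbank, log, DCT); the sample rate, frame size, and coefficient counts below are assumptions, not the authors' settings.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Illustrative MFCC extraction (parameters are assumed, not from the paper)."""
    # Frame the signal and apply a Hann window to each frame
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Build a triangular mel filterbank
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then a DCT-II to decorrelate -> MFCCs
    logmel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1) / (2 * n_mels)))
    return logmel @ dct.T  # shape: (n_frames, n_mfcc)

# Example on a synthetic 1-second 440 Hz tone (stand-in for a RAVDESS/SAVEE clip)
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc(sig)
print(feats.shape)
```

The resulting per-frame coefficient matrix is the kind of feature map that could then be fed to a CNN emotion classifier, as the paper describes.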