{"title":"基于情绪的预测音乐","authors":"Ganesh B. Regulwar, Nikhila Kathirisetty","doi":"10.32628/ijsrset2411310","DOIUrl":null,"url":null,"abstract":"It is often difficult for a person to choose which mu- sic to listen to from a vast array of available options. Relatively, this paper focuses on building an efficient music recommendation system based on the user’s mood which determines the emotion of user using Facial Recognition technique. The model is build using the transfer learning approach for which MobileNet model and Cascade classifier are used. Analyzing the user’s face expression might help you better comprehend their current emotional or mental condition. Music and video are one area where there is a lot of potential to present clients with a variety of options depending on their interests and data. More than 60% of users anticipate that the number of songs in their music collection will grow to the point where they will be unable to find the song they need to play at some point in the future. The user would save time by not having to search for or look up tunes. The image of the user is captured using a webcam. Then, depending on the user’s mood, an appropriate song from the user’s playlist or a movie is shown.","PeriodicalId":14228,"journal":{"name":"International Journal of Scientific Research in Science, Engineering and Technology","volume":"121 45","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Predictive Music Based on Mood\",\"authors\":\"Ganesh B. Regulwar, Nikhila Kathirisetty\",\"doi\":\"10.32628/ijsrset2411310\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"It is often difficult for a person to choose which mu- sic to listen to from a vast array of available options. Relatively, this paper focuses on building an efficient music recommendation system based on the user’s mood which determines the emotion of user using Facial Recognition technique. The model is build using the transfer learning approach for which MobileNet model and Cascade classifier are used. Analyzing the user’s face expression might help you better comprehend their current emotional or mental condition. Music and video are one area where there is a lot of potential to present clients with a variety of options depending on their interests and data. More than 60% of users anticipate that the number of songs in their music collection will grow to the point where they will be unable to find the song they need to play at some point in the future. The user would save time by not having to search for or look up tunes. The image of the user is captured using a webcam. 
Then, depending on the user’s mood, an appropriate song from the user’s playlist or a movie is shown.\",\"PeriodicalId\":14228,\"journal\":{\"name\":\"International Journal of Scientific Research in Science, Engineering and Technology\",\"volume\":\"121 45\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Scientific Research in Science, Engineering and Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.32628/ijsrset2411310\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Scientific Research in Science, Engineering and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.32628/ijsrset2411310","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
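The pipeline described in the abstract (webcam capture, cascade-classifier face detection, MobileNet-based emotion classification, mood-to-playlist mapping) can be illustrated with a minimal Python sketch. This is not the authors' implementation: the model file name, the emotion label set, and the playlist mapping below are illustrative assumptions, and only OpenCV's bundled Haar cascade and a generic Keras model loader are used.

```python
# Minimal sketch of a mood-based music recommender, assuming OpenCV for face
# detection and a Keras MobileNet fine-tuned for emotion classification.
# "mobilenet_emotion.h5", EMOTIONS, and PLAYLISTS are hypothetical placeholders.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprise"]  # assumed label order
PLAYLISTS = {
    "angry": "soothing_playlist.m3u",
    "happy": "upbeat_playlist.m3u",
    "neutral": "mixed_playlist.m3u",
    "sad": "calm_playlist.m3u",
    "surprise": "party_playlist.m3u",
}

# Haar cascade face detector shipped with OpenCV.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
# Hypothetical MobileNet fine-tuned for facial-emotion classification.
emotion_model = load_model("mobilenet_emotion.h5")


def recommend_from_webcam():
    """Capture one frame, detect a face, classify the mood, pick a playlist."""
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None

    x, y, w, h = faces[0]                                   # use the first face found
    face = cv2.resize(frame[y:y + h, x:x + w], (224, 224))  # MobileNet input size
    face = face.astype("float32") / 255.0                   # assumed preprocessing
    probs = emotion_model.predict(face[np.newaxis, ...])[0]
    mood = EMOTIONS[int(np.argmax(probs))]
    return mood, PLAYLISTS[mood]


if __name__ == "__main__":
    print(recommend_from_webcam())
```

In this sketch the transfer-learning step is assumed to have already happened offline (a MobileNet backbone fine-tuned on a facial-emotion dataset and saved to disk); at runtime the system only runs detection, classification, and a simple lookup from mood to playlist.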