{"title":"用深度神经网络和不协和音测量音乐美学","authors":"Razvan Paroiu, Stefan Trausan-Matu","doi":"10.3390/info14070358","DOIUrl":null,"url":null,"abstract":"In this paper, a new method that computes the aesthetics of a melody fragment is proposed, starting from dissonances. While music generated with artificial intelligence applications may be produced considerably more quickly than human-composed music, it has the drawback of not being appreciated like a human composition, being many times perceived by humans as artificial. For achieving supervised machine learning objectives of improving the quality of the great number of generated melodies, it is a challenge to ask humans to grade them. Therefore, it would be preferable if the aesthetics of artificial-intelligence-generated music is calculated by an algorithm. The proposed method in this paper is based on a neural network and a mathematical formula, which has been developed with the help of a study in which 108 students evaluated the aesthetics of several melodies. For evaluation, numerical values generated by this method were compared with ratings provided by human listeners from a second study in which 30 students participated and scores were generated by an existing different method developed by psychologists and three other methods developed by musicians. Our method achieved a Pearson correlation of 0.49 with human aesthetic scores, which is a much better result than other methods obtained. Additionally, our method made a distinction between human-composed melodies and artificial-intelligence-generated scores in the same way that human listeners did.","PeriodicalId":13622,"journal":{"name":"Inf. Comput.","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Measurement of Music Aesthetics Using Deep Neural Networks and Dissonances\",\"authors\":\"Razvan Paroiu, Stefan Trausan-Matu\",\"doi\":\"10.3390/info14070358\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, a new method that computes the aesthetics of a melody fragment is proposed, starting from dissonances. While music generated with artificial intelligence applications may be produced considerably more quickly than human-composed music, it has the drawback of not being appreciated like a human composition, being many times perceived by humans as artificial. For achieving supervised machine learning objectives of improving the quality of the great number of generated melodies, it is a challenge to ask humans to grade them. Therefore, it would be preferable if the aesthetics of artificial-intelligence-generated music is calculated by an algorithm. The proposed method in this paper is based on a neural network and a mathematical formula, which has been developed with the help of a study in which 108 students evaluated the aesthetics of several melodies. For evaluation, numerical values generated by this method were compared with ratings provided by human listeners from a second study in which 30 students participated and scores were generated by an existing different method developed by psychologists and three other methods developed by musicians. Our method achieved a Pearson correlation of 0.49 with human aesthetic scores, which is a much better result than other methods obtained. 
Additionally, our method made a distinction between human-composed melodies and artificial-intelligence-generated scores in the same way that human listeners did.\",\"PeriodicalId\":13622,\"journal\":{\"name\":\"Inf. Comput.\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Inf. Comput.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3390/info14070358\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Inf. Comput.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/info14070358","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Measurement of Music Aesthetics Using Deep Neural Networks and Dissonances
In this paper, a new method for computing the aesthetics of a melody fragment is proposed, starting from dissonances. While music generated by artificial intelligence applications can be produced considerably more quickly than human-composed music, it has the drawback of not being appreciated like a human composition, as it is often perceived by listeners as artificial. For supervised machine learning aimed at improving the quality of the large number of generated melodies, asking humans to grade them all is impractical. It would therefore be preferable if the aesthetics of artificial-intelligence-generated music were calculated by an algorithm. The method proposed in this paper is based on a neural network and a mathematical formula, developed with the help of a study in which 108 students evaluated the aesthetics of several melodies. For evaluation, the numerical values generated by this method were compared with ratings provided by human listeners in a second study, in which 30 students participated, as well as with scores generated by an existing method developed by psychologists and by three other methods developed by musicians. Our method achieved a Pearson correlation of 0.49 with the human aesthetic scores, a much better result than those obtained by the other methods. Additionally, our method distinguished between human-composed melodies and artificial-intelligence-generated scores in the same way that human listeners did.
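The abstract does not spell out the model's internals, but the shape of the evaluation is straightforward to illustrate. Below is a minimal, hypothetical sketch in Python, not the authors' method: it scores a melody by the fraction of melodic intervals that fall into classically dissonant interval classes, then measures agreement with listener ratings using the same Pearson correlation statistic the paper reports. The interval set, the melodies, and the human ratings are all invented for illustration.

```python
# Toy sketch only: the paper's actual model (a neural network plus a fitted
# formula trained on 108 student ratings) is not reproduced here. This version
# scores a melody by the fraction of consecutive-note intervals that are
# classically dissonant, then checks agreement with (hypothetical) human
# ratings via Pearson correlation, as in the paper's evaluation.

from scipy.stats import pearsonr

# Interval classes (in semitones, mod 12) treated here as sharp dissonances:
# minor second (1), tritone (6), minor seventh (10), major seventh (11).
# This classification is an illustrative assumption, not the paper's.
DISSONANT_INTERVALS = {1, 6, 10, 11}

def dissonance_score(midi_pitches):
    """Fraction of melodic steps that form a dissonant interval class."""
    if len(midi_pitches) < 2:
        return 0.0
    steps = [abs(b - a) % 12 for a, b in zip(midi_pitches, midi_pitches[1:])]
    return sum(s in DISSONANT_INTERVALS for s in steps) / len(steps)

# Invented data: five short melodies (MIDI pitch numbers) and made-up
# 1-5 aesthetic ratings standing in for listener judgments.
melodies = [
    [60, 62, 64, 65, 67],  # mostly stepwise
    [60, 61, 67, 66, 72],  # semitones and tritones
    [60, 64, 67, 72, 76],  # arpeggiated triad, consonant
    [60, 66, 61, 67, 62],  # tritone-heavy
    [60, 62, 60, 59, 60],  # narrow, stepwise
]
human_ratings = [4.1, 2.3, 4.5, 1.8, 3.9]

model_scores = [dissonance_score(m) for m in melodies]
r, p = pearsonr(model_scores, human_ratings)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```

With data like the above, a negative correlation would indicate that listeners rate more dissonant fragments lower; the paper's contribution is a learned scoring function whose output correlates positively (r = 0.49) with human aesthetic judgments.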