{"title":"音乐诱发的自传体记忆分析方法比较","authors":"Amy M. Belfi, Elena Bai, Ava Stroud","doi":"10.1525/mp.2020.37.5.392","DOIUrl":null,"url":null,"abstract":"The study of music-evoked autobiographical memories (MEAMs) has grown substantially in recent years. Prior work has used various methods to compare MEAMs to memories evoked by other cues (e.g., images, words). Here, we sought to identify which methods could distinguish between MEAMs and picture-evoked memories. Participants (N = 18) listened to popular music and viewed pictures of famous persons, and described any autobiographical memories evoked by the stimuli. Memories were scored using the Autobiographical Interview (AI; Levine, Svoboda, Hay, Winocur, & Moscovitch, 2002), Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2015), and Evaluative Lexicon (EL; Rocklage & Fazio, 2018). We trained three logistic regression models (one for each scoring method) to differentiate between memories evoked by music and faces. Models trained on LIWC and AI data exhibited significantly above chance accuracy when classifying whether a memory was evoked by a face or a song. The EL, which focuses on the affective nature of a text, failed to predict whether memories were evoked by music or faces. This demonstrates that various memory scoring techniques provide complementary information about cued autobiographical memories, and suggests that MEAMs differ from memories evoked by pictures in some aspects (e.g., perceptual and episodic content) but not others (e.g., emotional content).","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":null,"pages":null},"PeriodicalIF":1.3000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1525/mp.2020.37.5.392","citationCount":"10","resultStr":"{\"title\":\"Comparing Methods for Analyzing Music-Evoked Autobiographical Memories\",\"authors\":\"Amy M. 
Belfi, Elena Bai, Ava Stroud\",\"doi\":\"10.1525/mp.2020.37.5.392\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The study of music-evoked autobiographical memories (MEAMs) has grown substantially in recent years. Prior work has used various methods to compare MEAMs to memories evoked by other cues (e.g., images, words). Here, we sought to identify which methods could distinguish between MEAMs and picture-evoked memories. Participants (N = 18) listened to popular music and viewed pictures of famous persons, and described any autobiographical memories evoked by the stimuli. Memories were scored using the Autobiographical Interview (AI; Levine, Svoboda, Hay, Winocur, & Moscovitch, 2002), Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2015), and Evaluative Lexicon (EL; Rocklage & Fazio, 2018). We trained three logistic regression models (one for each scoring method) to differentiate between memories evoked by music and faces. Models trained on LIWC and AI data exhibited significantly above chance accuracy when classifying whether a memory was evoked by a face or a song. The EL, which focuses on the affective nature of a text, failed to predict whether memories were evoked by music or faces. 
This demonstrates that various memory scoring techniques provide complementary information about cued autobiographical memories, and suggests that MEAMs differ from memories evoked by pictures in some aspects (e.g., perceptual and episodic content) but not others (e.g., emotional content).\",\"PeriodicalId\":47786,\"journal\":{\"name\":\"Music Perception\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.3000,\"publicationDate\":\"2020-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1525/mp.2020.37.5.392\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Music Perception\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1525/mp.2020.37.5.392\",\"RegionNum\":2,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"MUSIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Music Perception","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1525/mp.2020.37.5.392","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"MUSIC","Score":null,"Total":0}
Comparing Methods for Analyzing Music-Evoked Autobiographical Memories
The study of music-evoked autobiographical memories (MEAMs) has grown substantially in recent years. Prior work has used various methods to compare MEAMs to memories evoked by other cues (e.g., images, words). Here, we sought to identify which methods could distinguish between MEAMs and picture-evoked memories. Participants (N = 18) listened to popular music and viewed pictures of famous persons, and described any autobiographical memories evoked by the stimuli. Memories were scored using the Autobiographical Interview (AI; Levine, Svoboda, Hay, Winocur, & Moscovitch, 2002), Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2015), and Evaluative Lexicon (EL; Rocklage & Fazio, 2018). We trained three logistic regression models (one for each scoring method) to differentiate between memories evoked by music and faces. Models trained on LIWC and AI data exhibited significantly above chance accuracy when classifying whether a memory was evoked by a face or a song. The EL, which focuses on the affective nature of a text, failed to predict whether memories were evoked by music or faces. This demonstrates that various memory scoring techniques provide complementary information about cued autobiographical memories, and suggests that MEAMs differ from memories evoked by pictures in some aspects (e.g., perceptual and episodic content) but not others (e.g., emotional content).
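The classification approach described above can be illustrated with a minimal sketch. This is not the authors' code: the features, data, and labels below are synthetic stand-ins for the per-memory scores a method like LIWC or the AI would produce, and the toy labeling rule is purely hypothetical. The sketch shows the general shape of the analysis: fit a logistic regression on scored memory descriptions and check whether classification accuracy exceeds the 0.5 chance level.

```python
import math
import random

random.seed(0)

# Synthetic stand-in data: 40 "memories", each scored on 3 hypothetical
# features (e.g., LIWC category proportions). Real input would be the
# per-memory scores from one scoring method.
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(40)]
# Toy label rule (1 = music-evoked, 0 = face-evoked); real labels come
# from the cue type that elicited each memory.
y = [1.0 if xi[0] + 0.5 * xi[1] > 0 else 0.0 for xi in X]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, steps=500):
    """Fit logistic-regression weights by batch gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(steps):
        grad_w = [0.0] * len(w)
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the logistic loss
            for j, xj in enumerate(xi):
                grad_w[j] += err * xj
            grad_b += err
        w = [wj - lr * gwj / n for wj, gwj in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

w, b = train_logreg(X, y)
pred = [
    1.0 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) > 0.5 else 0.0
    for xi in X
]
accuracy = sum(p == t for p, t in zip(pred, y)) / len(y)
```

In the study, one such model was trained per scoring method, and the comparison of interest was which methods' features supported above-chance accuracy (the published analysis would also use held-out evaluation rather than training accuracy as shown here).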
Journal Introduction:
Music Perception charts the ongoing scholarly discussion and study of musical phenomena. Publishing original empirical and theoretical papers, methodological articles, and critical reviews from renowned scientists and musicians, Music Perception is a repository of insightful research. The broad range of disciplines covered in the journal includes:
•Psychology
•Psychophysics
•Linguistics
•Neurology
•Neurophysiology
•Artificial intelligence
•Computer technology
•Physical and architectural acoustics
•Music theory