Jana Holsanova
Journal of Visual Literacy, 39(1), 132–148. Published 2020-10-01.
DOI: 10.1080/1051144X.2020.1826219
Uncovering scientific and multimodal literacy through audio description
Abstract: Today’s scientific texts are complex and multimodal. With new technology, the number of images is increasing, as are their diversity and complexity. Interacting with complex texts and visualizations thus becomes a challenge. How can we help readers and learners achieve multimodal literacy? We use data from the audio description of a popular science journal, together with think-aloud protocols, to uncover the knowledge and competences necessary for reading and understanding multimodal scientific texts. Four issues of the printed journal were analyzed. The aural version of the journal was compared with the printed version to show how the semiotic interplay was presented to users, and additional meaning-making activities were identified from the think-aloud protocols. As a result, we could reveal how the audio describer combined the contents of the available resources, made judgements about relevant information, determined ways of verbalizing visual information, used conceptual knowledge, filled in gaps in the interplay of the resources, and reordered information for optimal flow and understanding. We argue that the meaning-making activities identified through audio description and think-aloud protocols can be incorporated into instruction in educational contexts and can thereby improve readers’ competences for reading and understanding multimodal scientific texts.