
Proceedings of the 3rd International workshop on Digital Libraries for Musicology: Latest Publications

Representing and Linking Music Performance Data with Score Information
J. Devaney, Hubert Léveillé Gauvin
This paper argues for the need to develop a representation for music performance data that is linked with corresponding score information at the note, beat, and measure levels. Building on the results of a survey of music scholars about their music performance data encoding needs, we propose best practices for encoding perceptually relevant descriptors of the timing, pitch, loudness, and timbral aspects of performance. We are specifically interested in using descriptors that are sufficiently generalized that multiple performances of the same piece can be directly compared with one another. This paper also proposes a specific representation for encoding performance data and presents prototypes of this representation in both Humdrum and Music Encoding Initiative (MEI) formats.
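As a thumbnail of what such a linked representation involves, here is a minimal Python sketch pairing note-level performance descriptors with score anchors at the note, beat, and measure levels; every field name is an illustrative assumption, not the authors' Humdrum or MEI schema.

```python
# Hypothetical sketch of a performance record linked to score position.
# Field names are illustrative, not the authors' actual encoding.
from dataclasses import dataclass

@dataclass
class ScoreAnchor:
    note_id: str      # e.g. an MEI @xml:id for the note
    measure: int      # measure number in the score
    beat: float       # metrical position within the measure

@dataclass
class PerformedNote:
    anchor: ScoreAnchor
    onset_s: float            # performed onset time (seconds)
    duration_s: float         # performed duration (seconds)
    f0_hz: float              # estimated fundamental frequency
    loudness_db: float        # perceptual loudness estimate
    spectral_centroid: float  # a simple timbral descriptor

# Two performances of the same piece share ScoreAnchor values, so their
# descriptors can be compared note by note.
perf_a = PerformedNote(ScoreAnchor("n1", 1, 1.0), 0.02, 0.48, 440.1, -22.5, 1800.0)
perf_b = PerformedNote(ScoreAnchor("n1", 1, 1.0), 0.05, 0.52, 441.0, -20.1, 1950.0)
print(perf_b.onset_s - perf_a.onset_s)  # timing difference for the same score note
```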
{"title":"Representing and Linking Music Performance Data with Score Information","authors":"J. Devaney, Hubert Léveillé Gauvin","doi":"10.1145/2970044.2970052","DOIUrl":"https://doi.org/10.1145/2970044.2970052","url":null,"abstract":"This paper argues for the need to develop a representation for music performance data that is linked with corresponding score information at the note, beat, and measure levels. Building on the results of a survey of music scholars about their music performance data encoding needs, we propose best-practices for encoding perceptually relevant descriptors of the timing, pitch, loudness, and timbral aspects of performance. We are specifically interested in using descriptors that are sufficiently generalized that multiple performances of the same piece can be directly compared with one another. This paper also proposes a specific representation for encoding performance data and presents prototypes of this representation in both Humdrum and Music Encoding Initiative (MEI) formats.","PeriodicalId":422109,"journal":{"name":"Proceedings of the 3rd International workshop on Digital Libraries for Musicology","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114693568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Mining metadata from the web for AcousticBrainz
Alastair Porter, D. Bogdanov, Xavier Serra
Semantic annotations of music collections in digital libraries are important for organization and navigation of the collection. These annotations and their associated metadata are useful in many Music Information Retrieval tasks, and related fields in musicology. Music collections used in research are growing in size, and therefore it is useful to use semi-automatic means to obtain such annotations. We present software tools for mining metadata from the web for the purpose of annotating music collections. These tools expand on data present in the AcousticBrainz database, which contains software-generated analysis of music audio files. Using this tool we gather metadata and semantic information from a variety of sources including both community-based services such as MusicBrainz, Last.fm, and Discogs, and commercial databases including iTunes and AllMusic. The tool can be easily expanded to collect data from a new source, and is automatically updated when new items are added to AcousticBrainz. We extract genre annotations for recordings in AcousticBrainz using our tool and study the agreement between folksonomies and expert sources. We discuss the results and explore possibilities for future work.
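As a rough illustration of this kind of pipeline, the sketch below pulls complementary metadata for one recording from the public MusicBrainz and AcousticBrainz web APIs, keyed by a shared MusicBrainz ID; the aggregation logic is a stand-in for the paper's actual tool, and the MBID is a placeholder.

```python
# Minimal sketch: fetch editorial metadata (MusicBrainz) and software-generated
# analysis (AcousticBrainz) for the same recording, joined on its MBID.
import requests

MBID = "b10bbbfc-cf9e-42e0-be17-e2c3e1d2600d"  # placeholder; substitute a real MBID

def musicbrainz_recording(mbid: str) -> dict:
    # MusicBrainz web service, JSON format (a User-Agent header is required)
    url = f"https://musicbrainz.org/ws/2/recording/{mbid}"
    resp = requests.get(url, params={"fmt": "json"},
                        headers={"User-Agent": "metadata-miner-demo/0.1"})
    resp.raise_for_status()
    return resp.json()

def acousticbrainz_lowlevel(mbid: str) -> dict:
    # AcousticBrainz low-level analysis for the same MBID
    url = f"https://acousticbrainz.org/api/v1/{mbid}/low-level"
    resp = requests.get(url)
    resp.raise_for_status()
    return resp.json()

meta = musicbrainz_recording(MBID)
analysis = acousticbrainz_lowlevel(MBID)
print(meta.get("title"), analysis.get("metadata", {}).get("audio_properties"))
```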
{"title":"Mining metadata from the web for AcousticBrainz","authors":"Alastair Porter, D. Bogdanov, Xavier Serra","doi":"10.1145/2970044.2970048","DOIUrl":"https://doi.org/10.1145/2970044.2970048","url":null,"abstract":"Semantic annotations of music collections in digital libraries are important for organization and navigation of the collection. These annotations and their associated metadata are useful in many Music Information Retrieval tasks, and related fields in musicology. Music collections used in research are growing in size, and therefore it is useful to use semi-automatic means to obtain such annotations. We present software tools for mining metadata from the web for the purpose of annotating music collections. These tools expand on data present in the AcousticBrainz database, which contains software-generated analysis of music audio files. Using this tool we gather metadata and semantic information from a variety of sources including both community-based services such as MusicBrainz, Last.fm, and Discogs, and commercial databases including Itunes and AllMusic. The tool can be easily expanded to collect data from a new source, and is automatically updated when new items are added to AcousticBrainz. We extract genre annotations for recordings in AcousticBrainz using our tool and study the agreement between folksonomies and expert sources. We discuss the results and explore possibilities for future work.","PeriodicalId":422109,"journal":{"name":"Proceedings of the 3rd International workshop on Digital Libraries for Musicology","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122730185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
In Collaboration with In Concert: Reflecting a Digital Library as Linked Data for Performance Ephemera
Terhi Nurmikko-Fuller, A. Dix, David M. Weigl, Kevin R. Page
Diverse datasets in the area of Digital Musicology expose complementary information describing works, composers, performers, and wider historical and cultural contexts. Interlinking across such datasets enables new digital methods of scholarly investigation. Such bridging presents challenges when working with legacy tabular or relational datasets that do not natively facilitate linking and referencing to and from external sources. Here, we present pragmatic approaches to turning such legacy datasets into linked data. InConcert is a research collaboration exemplifying these approaches. In this paper, we describe and build on this resource, which is comprised of distinct digital libraries focusing on performance data and on concert ephemera. These datasets were merged with each other and opened up for enrichment from other sources on the Web via conversion to RDF. We outline the main features of the constituent datasets, describe conversion workflows, and perform a comparative analysis. Our findings provide practical recommendations for future efforts focused on exposing legacy datasets as linked data.
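To make the conversion step concrete, here is a minimal sketch, assuming the rdflib library, of lifting one row of a legacy tabular dataset into RDF; the namespace and property names are placeholders, not the project's actual vocabulary.

```python
# Sketch: one CSV row of concert ephemera becomes RDF triples.
import csv
import io
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/inconcert/")  # hypothetical namespace

rows = csv.DictReader(io.StringIO(
    "event_id,title,venue\n"
    "e42,Philharmonic Concert,St James's Hall\n"
))

g = Graph()
g.bind("ex", EX)
for row in rows:
    event = EX[row["event_id"]]
    g.add((event, RDF.type, EX.ConcertEvent))
    g.add((event, RDFS.label, Literal(row["title"])))
    g.add((event, EX.venue, Literal(row["venue"])))

print(g.serialize(format="turtle"))
```

Once in RDF, the event URI can be linked to external sources (performers, works, places) by adding further triples, which is what enables the cross-dataset enrichment the paper describes.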
{"title":"In Collaboration with In Concert: Reflecting a Digital Library as Linked Data for Performance Ephemera","authors":"Terhi Nurmikko-Fuller, A. Dix, David M. Weigl, Kevin R. Page","doi":"10.1145/2970044.2970049","DOIUrl":"https://doi.org/10.1145/2970044.2970049","url":null,"abstract":"Diverse datasets in the area of Digital Musicology expose complementary information describing works, composers, performers, and wider historical and cultural contexts. Interlinking across such datasets enables new digital methods of scholarly investigation. Such bridging presents challenges when working with legacy tabular or relational datasets that do not natively facilitate linking and referencing to and from external sources. Here, we present pragmatic approaches in turning such legacy datasets into linked data. InConcert is a research collaboration exemplifying these approaches. In this paper, we describe and build on this resource, which is comprised of distinct digital libraries focusing on performance data and on concert ephemera. These datasets were merged with each other and opened up for enrichment from other sources on the Web via conversion to RDF. We outline the main features of the constituent datasets, describe conversion workflows, and perform a comparative analysis. Our findings provide practical recommendations for future efforts focused on exposing legacy datasets as linked data.","PeriodicalId":422109,"journal":{"name":"Proceedings of the 3rd International workshop on Digital Libraries for Musicology","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127491637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
MORTY: A Toolbox for Mode Recognition and Tonic Identification
Altug Karakurt, Sertan Sentürk, Xavier Serra
In the general sense, mode defines the melodic framework and tonic acts as the reference tuning pitch for the melody in the performances of many music cultures. The mode and tonic information of the audio recordings is essential for many music information retrieval tasks such as automatic transcription, tuning analysis and music similarity. In this paper we present MORTY, an open source toolbox for mode recognition and tonic identification. The toolbox implements generalized variants of two state-of-the-art methods based on pitch distribution analysis. The algorithms are designed in a generic manner such that they can be easily optimized according to the culture-specific aspects of the studied music tradition. We test the generalized methodology systematically on the largest mode recognition dataset curated for Ottoman-Turkish makam music so far, which is composed of 1000 recordings in 50 modes. We obtained 95.8%, 71.8% and 63.6% accuracy in tonic identification, mode recognition and joint mode and tonic estimation tasks, respectively. We additionally present recent experiments on Carnatic and Hindustani music in comparison with several methodologies recently proposed for raga/raag recognition. We prioritized the reproducibility of our work and provide all of our data, code and results publicly. Hence we hope that our toolbox would be used as a benchmark for future methodologies proposed for mode recognition and tonic identification, especially for music traditions in which these computational tasks have not been addressed yet.
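The core idea of pitch-distribution analysis can be shown in a few lines: build a pitch-class histogram from a pitch track and match it against a mode template at every circular rotation. The toy sketch below illustrates only that idea; MORTY's published implementation is considerably more elaborate.

```python
# Toy pitch-distribution tonic estimation: histogram + circular template match.
import numpy as np

def pitch_class_histogram(f0_hz: np.ndarray, bins_per_octave: int = 12) -> np.ndarray:
    cents = 1200.0 * np.log2(f0_hz / 440.0)  # pitch in cents relative to A4
    bin_width = 1200.0 / bins_per_octave
    pc = np.mod(np.round(cents / bin_width), bins_per_octave)
    hist = np.bincount(pc.astype(int), minlength=bins_per_octave)
    return hist / hist.sum()

def estimate_tonic(hist: np.ndarray, template: np.ndarray) -> int:
    # The best circular shift of the template gives the candidate tonic bin.
    dists = [np.linalg.norm(np.roll(template, k) - hist) for k in range(len(hist))]
    return int(np.argmin(dists))

f0 = np.random.uniform(200, 800, 5000)  # stand-in for a real extracted pitch track
template = np.eye(12)[0] * 0.3 + np.full(12, 0.7 / 12)  # crude mode template
print(estimate_tonic(pitch_class_histogram(f0), template))
```

Finer bin resolutions than 12 per octave, as a culture-specific parameter, are what let the same machinery handle makam and raga tunings.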
{"title":"MORTY: A Toolbox for Mode Recognition and Tonic Identification","authors":"Altug Karakurt, Sertan Sentürk, Xavier Serra","doi":"10.1145/2970044.2970054","DOIUrl":"https://doi.org/10.1145/2970044.2970054","url":null,"abstract":"In the general sense, mode defines the melodic framework and tonic acts as the reference tuning pitch for the melody in the performances of many music cultures. The mode and tonic information of the audio recordings is essential for many music information retrieval tasks such as automatic transcription, tuning analysis and music similarity. In this paper we present MORTY, an open source toolbox for mode recognition and tonic identification. The toolbox implements generalized variants of two state-of-the-art methods based on pitch distribution analysis. The algorithms are designed in a generic manner such that they can be easily optimized according to the culture-specific aspects of the studied music tradition. We test the generalized methodology systematically on the largest mode recognition dataset curated for Ottoman-Turkish makam music so far, which is composed of 1000 recordings in 50 modes. We obtained 95.8%, 71.8% and 63.6% accuracy in tonic identification, mode recognition and joint mode and tonic estimation tasks, respectively. We additionally present recent experiments on Carnatic and Hindustani music in comparison with several methodologies recently proposed for raga/raag recognition. We prioritized the reproducibility of our work and provide all of our data, code and results publicly. Hence we hope that our toolbox would be used as a benchmark for future methodologies proposed for mode recognition and tonic identification, especially for music traditions in which these computational tasks have not been addressed yet.","PeriodicalId":422109,"journal":{"name":"Proceedings of the 3rd International workshop on Digital Libraries for Musicology","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122662381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
A standard format proposal for hierarchical analyses and representations
D. Rizo, A. Marsden
In the realm of digital musicology, standardization efforts to date have mostly concentrated on the representation of music. Analyses of music are increasingly being generated or communicated by digital means. We demonstrate that the same arguments for the desirability of standardization in the representation of music apply also to the representation of analyses of music: proper preservation, sharing of data, and facilitation of digital processing. We concentrate here on analyses which can be described as hierarchical and show that this covers a broad range of existing analytical formats. We propose an extension of MEI (Music Encoding Initiative) to allow the encoding of analyses unambiguously associated with and aligned to a representation of the music analysed, making use of existing mechanisms within MEI's parent TEI (Text Encoding Initiative) for the representation of trees and graphs.
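TEI's tree mechanisms include embedded-tree elements such as <eTree> and <eLeaf>. As a hedged illustration of the kind of serialization being proposed, the sketch below emits a toy reduction tree in that style with Python's standard library; the element and attribute choices are assumptions for illustration, not the paper's actual MEI extension.

```python
# Serialize a toy hierarchical analysis as nested eTree/eLeaf elements.
import xml.etree.ElementTree as ET

def tree_node(label: str, children=None) -> ET.Element:
    # Internal nodes become <eTree>, terminal nodes become <eLeaf>.
    el = ET.Element("eTree" if children else "eLeaf")
    el.set("n", label)  # "n" carries the analytical label here
    for child in (children or []):
        el.append(child)
    return el

# A toy prolongational reduction: a head note elaborated by a neighbour.
analysis = tree_node("head:C5", [
    tree_node("neighbour:B4", [tree_node("B4")]),
    tree_node("C5"),
])
ET.dump(analysis)  # prints the XML encoding of the tree
```

Aligning such a tree to the music itself would then be a matter of pointing each node's label at the @xml:id of a note in the encoded score.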
{"title":"A standard format proposal for hierarchical analyses and representations","authors":"D. Rizo, A. Marsden","doi":"10.1145/2970044.2970046","DOIUrl":"https://doi.org/10.1145/2970044.2970046","url":null,"abstract":"In the realm of digital musicology, standardizations efforts to date have mostly concentrated on the representation of music. Analyses of music are increasingly being generated or communicated by digital means. We demonstrate that the same arguments for the desirability of standardization in the representation of music apply also to the representation of analyses of music: proper preservation, sharing of data, and facilitation of digital processing. We concentrate here on analyses which can be described as hierarchical and show that this covers a broad range of existing analytical formats. We propose an extension of MEI (Music Encoding Initiative) to allow the encoding of analyses unambiguously associated with and aligned to a representation of the music analysed, making use of existing mechanisms within MEI's parent TEI (Text Encoding Initiative) for the representation of trees and graphs.","PeriodicalId":422109,"journal":{"name":"Proceedings of the 3rd International workshop on Digital Libraries for Musicology","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128003892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Document Analysis for Music Scores via Machine Learning
Jorge Calvo-Zaragoza, Gabriel Vigliensoni, Ichiro Fujinaga
Content within musical documents not only contains musical notation but can also include text, ornaments, annotations, and editorial data. Before any attempt at automatic recognition of elements in these layers, it is necessary to perform a document analysis process to detect and classify each of its constituent parts. The obstacle for this analysis is the high heterogeneity amongst collections, which makes it difficult to propose methods that can be generalizable to a broader range of sources. In this paper we propose a data-driven document analysis framework based on machine learning, which focuses on classifying regions of interest at pixel level. The main advantage of this approach is that it can be exploited regardless of the type of document provided, as long as training data is available. Our preliminary experimentation includes a set of specific tasks that can be performed on music such as the detection of staff lines, isolation of music symbols, and the layering of the document into its elemental parts.
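The pixel-level classification idea can be sketched with generic tools: describe each pixel by a small window of surrounding grey values and train a supervised classifier to assign it a layer label. The sketch below uses synthetic data and scikit-learn as stand-ins; the paper's framework learns from real annotated pages, and the feature and model choices here are assumptions.

```python
# Per-pixel document-layer classification with window features (synthetic demo).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(img: np.ndarray, y: int, x: int, r: int = 3) -> np.ndarray:
    # Flatten the (2r+1)x(2r+1) neighbourhood around the pixel into a feature vector.
    return img[y - r:y + r + 1, x - r:x + r + 1].ravel()

rng = np.random.default_rng(0)
img = rng.random((100, 100))                        # stand-in for a scanned score page
coords = [(y, x) for y in range(3, 97) for x in range(3, 97)][:2000]
X = np.array([window_features(img, y, x) for y, x in coords])
y_labels = rng.integers(0, 4, size=len(X))          # 4 synthetic classes:
                                                    # staff / symbol / text / background
clf = RandomForestClassifier(n_estimators=50).fit(X, y_labels)
print(clf.predict(X[:5]))                           # per-pixel layer predictions
```

The appeal of the data-driven framing is visible even in this toy: nothing in the code is specific to one collection, only the training labels are.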
{"title":"Document Analysis for Music Scores via Machine Learning","authors":"Jorge Calvo-Zaragoza, Gabriel Vigliensoni, Ichiro Fujinaga","doi":"10.1145/2970044.2970047","DOIUrl":"https://doi.org/10.1145/2970044.2970047","url":null,"abstract":"Content within musical documents not only contains musical notation but can also include text, ornaments, annotations, and editorial data. Before any attempt at automatic recognition of elements in these layers, it is necessary to perform a document analysis process to detect and classify each of its constituent parts. The obstacle for this analysis is the high heterogeneity amongst collections, which makes it difficult to propose methods that can be generalizable to a broader range of sources. In this paper we propose a data-driven document analysis framework based on machine learning, which focuses on classifying regions of interest at pixel level. The main advantage of this approach is that it can be exploited regardless of the type of document provided, as long as training data is available. Our preliminary experimentation includes a set of specific tasks that can be performed on music such as the detection of staff lines, isolation of music symbols, and the layering of the document into its elemental parts.","PeriodicalId":422109,"journal":{"name":"Proceedings of the 3rd International workshop on Digital Libraries for Musicology","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125433060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Exploring J-DISC: Some Preliminary Analyses
Yun Hao, Kahyun Choi, J. S. Downie
J-DISC, a specialized digital library for information about jazz recording sessions that includes rich structured and searchable metadata, has the potential for supporting a wide range of studies on jazz, especially the musicological work of those interested in the social network aspects of jazz creation and production. This paper provides an overview of the entire J-DISC dataset. It also presents some exemplar analyses across this dataset to better illustrate the kinds of uses that musicologists could make of this collection. Our illustrative analyses include both informetric and network analyses of the entire J-DISC data, which comprises data on 2,711 unique recording sessions associated with 3,744 distinct artists, including such influential jazz figures as Dizzy Gillespie, Don Byas, Charlie Parker, John Coltrane, and Kenny Dorham. Our analyses also show that around 60% of the recording sessions included in J-DISC were recorded in New York City, Englewood Cliffs (NJ), Los Angeles (CA), and Paris between 1923 and 2011. Furthermore, our analyses show that the top venues captured in the J-DISC data include Rudy Van Gelder Studio, Birdland, and Reeves Sound Studios. The potential research uses of the J-DISC data in both the DL (Digital Libraries) and MIR (Music Information Retrieval) domains are also briefly discussed.
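A minimal sketch of the network-analysis side, assuming the networkx library: treat each session as a clique of its participants, accumulate edge weights for repeated collaborations, and rank artists by degree. The session records below are invented for illustration; J-DISC holds the real data.

```python
# Build an artist collaboration graph from session records and rank by degree.
from itertools import combinations
import networkx as nx

sessions = [
    {"artists": ["Dizzy Gillespie", "Don Byas", "Charlie Parker"]},
    {"artists": ["John Coltrane", "Kenny Dorham"]},
    {"artists": ["Charlie Parker", "Kenny Dorham"]},
]

G = nx.Graph()
for session in sessions:
    for a, b in combinations(session["artists"], 2):
        # accumulate a weight for repeated collaborations between the same pair
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

print(sorted(G.degree, key=lambda kv: kv[1], reverse=True))
```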
{"title":"Exploring J-DISC: Some Preliminary Analyses","authors":"Yun Hao, Kahyun Choi, J. S. Downie","doi":"10.1145/2970044.2970050","DOIUrl":"https://doi.org/10.1145/2970044.2970050","url":null,"abstract":"J-DISC, a specialized digital library for information about jazz recording sessions that includes rich structured and searchable metadata, has the potential for supporting a wide range of studies on jazz, especially the musicological work of those interested in the social network aspects of jazz creation and production. This paper provides an overview of the entire J-DISC dataset. It also presents some exemplar analyses across this dataset to better illustrate the kinds of uses that musicologists could make of this collection. Our illustrative analyses include both informetric and network analyses of the entire J-DISC data which comprises data on 2,711 unique recording sessions associated with 3,744 distinct artists including such influential jazz figures as Dizzy Gillespie, Don Byas, Charlie Parker, John Coltrane and Kenny Dorham, etc. Our analyses also show that around 60% of the recording sessions included in J-DISC were recorded in New York City, Englewood Cliffs (NJ), Los Angeles (CA) and Paris during the year of 1923 to 2011. Furthermore, our analyses of the J-DISC data show the top venues captured in the J-DISC data include Rudy Van Gelder Studio, Birdland and Reeves Sound Studios. The potential research uses of the J-DISC data in both the DL (Digital Libraries) and MIR (Music Information Retrieval) domains are also briefly discussed.","PeriodicalId":422109,"journal":{"name":"Proceedings of the 3rd International workshop on Digital Libraries for Musicology","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132601530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Data Generation and Multi-Modal Analysis for Recorded Operatic Performance
Joshua Neumann
Commercial recordings of live opera performance are only sporadically available, mostly due to various legal protections held by opera houses. The resulting onsite, archive-only access inhibits analysis of the creative process in "live" environments. Based on a technique I developed for generating performance data from copyright-protected archival recordings, this paper presents a means of interrogating the creative practice in individual operatic performances and across the corpus of a recorded performance history. My analysis uses "In questa Reggia" from Giacomo Puccini's Turandot as performed at New York's Metropolitan Opera. The first part of my analysis builds on tempo mapping developed by the Centre for the History and Analysis of Recorded Music. Given the natural relationship in which performances of the same work exist, statistical and network analyses of the data extracted from a corpus of performances offer ways to contextualize and understand how performances create a tradition to which, and through which, they relate in varying degrees.
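The arithmetic at the heart of tempo mapping is simple: given beat onset times extracted from a recording, local tempo is the reciprocal of each inter-beat interval. The sketch below shows that step with invented timestamps; it is a generic illustration, not the author's pipeline.

```python
# Local tempo from beat onsets: 60 seconds divided by each inter-beat interval.
import numpy as np

beat_times = np.array([0.00, 0.52, 1.06, 1.57, 2.12, 2.70])  # seconds (invented)
ibi = np.diff(beat_times)           # inter-beat intervals
tempo_bpm = 60.0 / ibi              # local tempo per beat, in BPM

for t, bpm in zip(beat_times[1:], tempo_bpm):
    print(f"beat at {t:5.2f}s -> {bpm:5.1f} BPM")
```

Plotting such a tempo curve for many performances of the same aria is what makes statistical comparison across a performance history possible.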
{"title":"Data Generation and Multi-Modal Analysis for Recorded Operatic Performance","authors":"Joshua Neumann","doi":"10.1145/2970044.2970045","DOIUrl":"https://doi.org/10.1145/2970044.2970045","url":null,"abstract":"Commercial recordings of live opera performance are only sporadically available, mostly due to various legal protections held by opera houses. The resulting onsite, archive-only access for them inhibits analysis of the creative process in \"live\" environments. Based on a technique I developed for generating performance data from copyright protected archival recordings, this paper presents a means of interrogating the creative practice in individual operatic performances and across the corpus of a recorded performance history. My analysis uses \"In questa Reggia\" from Giacomo Puccini's Turandot as performed at New York's Metropolitan Opera. The first part of my analysis builds on tempo mapping developed by the Centre for the History and Analysis of Recorded Music. Given the natural relationship in which performances of the same work exist, statistical and network analyses of the data extracted from a corpus of performances offer ways to contextualize and understand how performances create a tradition to which and through which they relate to varying degrees.","PeriodicalId":422109,"journal":{"name":"Proceedings of the 3rd International workshop on Digital Libraries for Musicology","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126661058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
The Music Addressability API: A draft specification for addressing portions of music notation on the web
Raffaele Viglianti
This paper describes an Application Programming Interface (API) for addressing music notation on the web regardless of the format in which it is stored. This API was created as a method for addressing and extracting specific portions of music notation published in machine-readable formats on the web. Music notation, like text, can be "addressed" in new ways in a digital environment, allowing scholars to identify and name structures of various kinds, thus raising such questions as how can one virtually "circle" some music notation? How can a machine interpret this "circling" to select and retrieve the relevant music notation? The API was evaluated by: 1) creating an implementation of the API for documents in the Music Encoding Initiative (MEI) format; and by 2) remodelling a dataset of music analysis statements from the Du Chemin: Lost Voices project (Haverford College) by using the API to connect the analytical statements with the portion of notation they refer to. Building this corpus has demonstrated that the Music Addressability API is capable of modelling complex analytical statements containing references to music notation.
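To give a feel for the addressing problem, the sketch below parses a selector of the general shape such an API implies, naming measures, staves, and beats. The grammar used here (e.g. "1-2/1,2/@1-3") is an assumption for illustration only, not the published draft specification.

```python
# Parse a hypothetical measure/staff/beat selector into a structured selection.
from dataclasses import dataclass

@dataclass
class Selection:
    measures: list
    staves: list
    beats: str  # beat expression kept verbatim in this sketch

def parse_ranges(token: str) -> list:
    # "1-2,4" -> [1, 2, 4]
    out = []
    for part in token.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            out.extend(range(int(lo), int(hi) + 1))
        else:
            out.append(int(part))
    return out

def parse_selector(expr: str) -> Selection:
    measures, staves, beats = expr.split("/")
    return Selection(parse_ranges(measures), parse_ranges(staves), beats)

print(parse_selector("1-2/1,2/@1-3"))
# -> Selection(measures=[1, 2], staves=[1, 2], beats='@1-3')
```

An MEI implementation would then resolve such a selection to the matching elements of the encoded score, which is step 1) of the evaluation described above.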
{"title":"The Music Addressability API: A draft specification for addressing portions of music notation on the web","authors":"Raffaele Viglianti","doi":"10.1145/2970044.2970056","DOIUrl":"https://doi.org/10.1145/2970044.2970056","url":null,"abstract":"This paper describes an Application Programming Interface (API) for addressing music notation on the web regardless of the format in which it is stored. This API was created as a method for addressing and extracting specific portions of music notation published in machine-readable formats on the web. Music notation, like text, can be \"addressed\" in new ways in a digital environment, allowing scholars to identify and name structures of various kinds, thus raising such questions as how can one virtually \"circle\" some music notation? How can a machine interpret this \"circling\" to select and retrieve the relevant music notation? The API was evaluated by: 1) creating an implementation of the API for documents in the Music Encoding Initiative (MEI) format; and by 2) remodelling a dataset of music analysis statements from the Du Chemin: Lost Voices project (Haverford College) by using the API to connect the analytical statements with the portion of notaiton they refer to. Building this corpus has demonstrated that the Music Addressability API is capable of modelling complex analytical statements containing references to music notation.","PeriodicalId":422109,"journal":{"name":"Proceedings of the 3rd International workshop on Digital Libraries for Musicology","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132317061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Approaches to handwritten conductor annotation extraction in musical scores
Eamonn Bell, L. Pugin
Conductor copies of musical scores are typically rich in handwritten annotations. Ongoing archival efforts to digitize orchestral conductors' scores have made scanned copies of hundreds of these annotated scores available in digital formats. The extraction of handwritten annotations from digitized printed documents is a difficult task for computer vision, with most approaches focusing on the extraction of handwritten text. However, conductors' annotation practices provide us with at least two affordances, which make the task more tractable in the musical domain. First, many conductors opt to mark their scores using colored pencils, which contrast with the black and white print of sheet music. Consequently, we show promising results when using color separation techniques alone to recover handwritten annotations from conductors' scores. We also compare annotated scores to unannotated copies and use a printed sheet music comparison tool to recover handwritten annotations as additions to the clean copy. We then investigate the use of both of these techniques in a combined method, which improves the results of the color separation technique. These techniques are demonstrated using a sample of orchestral scores annotated by professional conductors of the New York Philharmonic. Handwritten annotation extraction in musical scores has applications to the systematic investigation of score annotation practices by performers, annotator attribution, and to the interactive presentation of annotated scores, which we briefly discuss.
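The color-separation intuition translates directly into code: colored-pencil marks differ from black print mainly in saturation, so a simple HSV threshold recovers much of the handwritten layer. A minimal sketch with Pillow and NumPy follows; the threshold values are illustrative assumptions, and the paper additionally combines this with comparison against an unannotated copy.

```python
# Isolate saturated (colored-pencil) pixels from a scanned score page.
import numpy as np
from PIL import Image

def colored_annotation_mask(path: str, sat_min: int = 60, val_min: int = 60) -> np.ndarray:
    hsv = np.asarray(Image.open(path).convert("HSV"))
    s, v = hsv[..., 1], hsv[..., 2]
    # Saturated and not-too-dark pixels are likely colored pencil, not black ink.
    return (s > sat_min) & (v > val_min)

# Usage: mask = colored_annotation_mask("conductor_score_page.png")
# np.argwhere(mask) then yields pixel coordinates of the annotation layer.
```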
{"title":"Approaches to handwritten conductor annotation extraction in musical scores","authors":"Eamonn Bell, L. Pugin","doi":"10.1145/2970044.2970053","DOIUrl":"https://doi.org/10.1145/2970044.2970053","url":null,"abstract":"Conductor copies of musical scores are typically rich in handwritten annotations. Ongoing archival efforts to digitize orchestral conductors' scores have made scanned copies of hundreds of these annotated scores available in digital formats. The extraction of handwritten annotations from digitized printed documents is a difficult task for computer vision, with most approaches focusing on the extraction of handwritten text. However, conductors' annotation practices provide us with at least two affordances, which make the task more tractable in the musical domain. First, many conductors opt to mark their scores using colored pencils, which contrast with the black and white print of sheet music. Consequently, we show promising results when using color separation techniques alone to recover handwritten annotations from conductors' scores. We also compare annotated scores to unannotated copies and use a printed sheet music comparison tool to recover handwritten annotations as additions to the clean copy. We then investigate the use of both of these techniques in a combined method, which improves the results of the color separation technique. These techniques are demonstrated using a sample of orchestral scores annotated by professional conductors of the New York Philharmonic. Handwritten annotation extraction in musical scores has applications to the systematic investigation of score annotation practices by performers, annotator attribution, and to the interactive presentation of annotated scores, which we briefly discuss.","PeriodicalId":422109,"journal":{"name":"Proceedings of the 3rd International workshop on Digital Libraries for Musicology","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114463480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2