
2019 International Workshop on Multilayer Music Representation and Processing (MMRP): Latest Publications

On the Use of U-Net for Dominant Melody Estimation in Polyphonic Music
Guillaume Doras, P. Esling, G. Peeters
Estimation of the dominant melody in polyphonic music remains a difficult task, even though promising breakthroughs have been made recently with the introduction of the Harmonic CQT and the use of fully convolutional networks. In this paper, we build upon this idea and describe how U-Net - a neural network originally designed for medical image segmentation - can be used to estimate the dominant melody in polyphonic audio. In particular, we propose an original layer-by-layer sequential training method, and show that this method, used along with careful training data conditioning, improves the results compared to plain convolutional networks.
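The paper does not include code; the sketch below is only a rough illustration of the kind of architecture it describes, assuming PyTorch and an HCQT input of shape (batch, harmonics, frequency bins, time frames). Layer sizes are invented, and the paper's layer-by-layer sequential training schedule and data conditioning are not reproduced here.

```python
# Minimal, illustrative U-Net for dominant-melody salience estimation.
# Assumptions (not from the paper): PyTorch, an HCQT input of shape
# (batch, n_harmonics, n_bins, n_frames), and a two-level encoder/decoder.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class MelodyUNet(nn.Module):
    def __init__(self, n_harmonics: int = 6):
        super().__init__()
        self.enc1 = conv_block(n_harmonics, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, kernel_size=1)  # per-bin salience

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1)).squeeze(1)  # (batch, bins, frames)


# Usage: the dominant melody is read as the argmax of the salience map over
# frequency bins, frame by frame.
hcqt = torch.randn(1, 6, 360, 256)      # fake HCQT: 6 harmonics, 360 bins, 256 frames
salience = MelodyUNet()(hcqt)           # (1, 360, 256)
melody_bins = salience.argmax(dim=1)    # (1, 256) dominant bin per frame
```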
{"title":"On the Use of U-Net for Dominant Melody Estimation in Polyphonic Music","authors":"Guillaume Doras, P. Esling, G. Peeters","doi":"10.1109/MMRP.2019.8665373","DOIUrl":"https://doi.org/10.1109/MMRP.2019.8665373","url":null,"abstract":"Estimation of dominant melody in polyphonic music remains a difficult task, even though promising breakthroughs have been done recently with the introduction of the Harmonic CQT and the use of fully convolutional networks. In this paper, we build upon this idea and describe how U-Net- a neural network originally designed for medical image segmentation - can be used to estimate the dominant melody in polyphonic audio. We propose in particular the use of an original layer-by-layer sequential training method, and show that this method used along with careful training data conditioning improve the results compared to plain convolutional networks.","PeriodicalId":441469,"journal":{"name":"2019 International Workshop on Multilayer Music Representation and Processing (MMRP)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126673666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Workshop Organization MMRP 2019
{"title":"Workshop Organization MMRP 2019","authors":"","doi":"10.1109/mmrp.2019.00006","DOIUrl":"https://doi.org/10.1109/mmrp.2019.00006","url":null,"abstract":"","PeriodicalId":441469,"journal":{"name":"2019 International Workshop on Multilayer Music Representation and Processing (MMRP)","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117267264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Requirements for a File Format for Smart Musical Instruments
L. Turchet, P. Kudumakis
Smart musical instruments are an emerging category of musical instruments characterized by sensors, actuators, wireless connectivity, and embedded intelligence. To date, a topic that has received remarkably little attention in smart musical instrument research is that of defining an interoperable file format for the exchange of content produced by this class of instruments. In this paper we present a preliminary investigation into the design of a format that is specific to smart musical instruments but at the same time enables interoperability with other devices. We adopted a participatory design methodology consisting of a set of interviews with studio producers. The purpose of these interviews was to identify a set of use cases for a format encoding the data generated by smart musical instruments, with the end goal of gathering requirements for its design.
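The paper gathers requirements rather than specifying a format, so the structure below is purely hypothetical: a sketch of the kind of container such a format might describe (referenced audio plus time-stamped sensor streams with metadata), written as a Python data model for illustration only. None of the field names come from the paper or from any existing standard.

```python
# Hypothetical sketch of a container for smart-musical-instrument content.
# All field names are invented for illustration; they are not taken from the
# paper or from any existing standard.
from dataclasses import dataclass, field
from typing import List
import json


@dataclass
class SensorStream:
    sensor_id: str            # e.g. "accelerometer-left-hand"
    unit: str                 # physical unit of the samples
    sample_rate_hz: float
    samples: List[float] = field(default_factory=list)


@dataclass
class SmartInstrumentSession:
    instrument: str                                         # human-readable name
    created_utc: str                                        # ISO 8601 timestamp
    audio_files: List[str] = field(default_factory=list)    # referenced audio stems
    sensors: List[SensorStream] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the session manifest so other devices can read it."""
        return json.dumps(self, default=lambda o: o.__dict__, indent=2)


session = SmartInstrumentSession(
    instrument="smart guitar (example)",
    created_utc="2019-01-01T12:00:00Z",
    audio_files=["take1_direct.wav"],
    sensors=[SensorStream("accelerometer", "m/s^2", 100.0, [0.0, 0.12, 0.31])],
)
print(session.to_json())
```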
{"title":"Requirements for a File Format for Smart Musical Instruments","authors":"L. Turchet, P. Kudumakis","doi":"10.1109/MMRP.2019.8665380","DOIUrl":"https://doi.org/10.1109/MMRP.2019.8665380","url":null,"abstract":"Smart musical instruments are an emerging category of musical instruments characterized by sensors, actuators, wireless connectivity, and embedded intelligence. To date, a topic that has received remarkably little attention in smart musical instruments research is that of defining an interoperable file format for the exchange of content produced by this class of instruments. In this paper we preliminary investigate the design of a format specific to smart musical instruments but that at the same time enables interoperability with other devices. We adopted a participatory design methodology consisting of a set of interviews with studio producers. The purpose of such interviews was that of identifying a set of use cases for a format encoding data generated by smart musical instruments, with the end goal of gathering requirements for its design.","PeriodicalId":441469,"journal":{"name":"2019 International Workshop on Multilayer Music Representation and Processing (MMRP)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127590520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Chroma Interval Content as a Key-Independent Harmonic Progression Feature
M. Queiroz, Rodrigo Borges
This paper introduces a novel chroma-based harmonic feature called Chroma Interval Content (CIC), which extends Directional Interval Content (DIC) vectors to audio data. This feature represents key-independent harmonic progressions, but unlike the Dynamic Chroma feature vector it represents pitch-class energy motions based on a symbolic voice-leading approach, and can be computed more efficiently (in time $\mathcal{O}(N\log N)$ as opposed to $\mathcal{O}(N^{2})$). We present theoretical properties of Chroma Interval Content vectors and explore the expressive power of CIC both in representing isolated chord progressions, establishing links to its symbolic counterpart DIC, as well as in specific harmony-related MIR tasks, such as key-independent search for chord progressions and classification of music datasets according to harmonic diversity.
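The exact definition of CIC is given in the paper; the sketch below is only a loose, hypothetical illustration of the general idea of a key-independent interval feature over chroma. It accumulates, for each pair of successive chroma frames, a 12-bin circular cross-correlation, so energy is indexed by pitch-class interval rather than absolute pitch class and the result is transposition-invariant. This is an assumption-driven approximation, not the authors' algorithm, and it does not reproduce the $\mathcal{O}(N\log N)$ construction described above.

```python
# Illustrative (not the paper's) key-independent interval feature over chroma:
# for successive chroma frames, accumulate energy per pitch-class interval via
# a 12-bin circular cross-correlation, computed here with FFTs.
import numpy as np


def interval_content(chroma: np.ndarray) -> np.ndarray:
    """chroma: (n_frames, 12) nonnegative energies -> (n_frames - 1, 12).

    Entry [t, k] aggregates energy moving from pitch class p at frame t to
    pitch class (p + k) mod 12 at frame t + 1, so the profile is invariant
    to transposition of the whole excerpt.
    """
    prev, nxt = chroma[:-1], chroma[1:]
    # Circular cross-correlation along the 12 pitch-class bins.
    corr = np.fft.ifft(
        np.conj(np.fft.fft(prev, axis=1)) * np.fft.fft(nxt, axis=1), axis=1
    ).real
    return corr


rng = np.random.default_rng(0)
chroma = rng.random((8, 12))            # fake chromagram, 8 frames
cic_like = interval_content(chroma)     # (7, 12) interval profile per step
print(cic_like.shape)
```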
{"title":"Chroma Interval Content as a Key-Independent Harmonic Progression Feature","authors":"M. Queiroz, Rodrigo Borges","doi":"10.1109/MMRP.2019.8665369","DOIUrl":"https://doi.org/10.1109/MMRP.2019.8665369","url":null,"abstract":"This paper introduces a novel chroma-based harmonic feature called Chroma Interval Content (CIC), which extends Directional Interval Content (DIC) vectors to audio data. This feature represents key-independent harmonic progressions, but unlike the Dynamic Chroma feature vector it represents pitch-class energy motions based on a symbolic voice-leading approach, and can be computed more efficiently (in time $mathcal{O}(Nlog N)$ as opposed to $mathcal{O}(N^{2}))$. We present theoretical properties of Chroma Interval Content vectors and explore the expressive power of CIC both in representing isolated chord progressions, establishing links to its symbolic counterpart DIC, as well as in specific harmony-related MIR tasks, such as key-independent search for chord progressions and classification of music datasets according to harmonic diversity.","PeriodicalId":441469,"journal":{"name":"2019 International Workshop on Multilayer Music Representation and Processing (MMRP)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126526922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Improving Singing Voice Separation Using Attribute-Aware Deep Network
R. Swaminathan, Alexander Lerch
Singing Voice Separation (SVS) attempts to separate the predominant singing voice from a polyphonic musical mixture. In this paper, we investigate the effect of introducing attribute-specific information, namely frame-level vocal activity information, as an augmented feature input to a Deep Neural Network performing the separation. Our study considers two types of inputs, i.e., a ground-truth-based ‘oracle’ input and labels extracted by a state-of-the-art model for singing voice activity detection in polyphonic music. We show that a separation network informed of vocal activity learns to differentiate between vocal and non-vocal regions. Such a network thus reduces interference and artifacts better than a network that is agnostic to this side information. Results on the MIR1K dataset show that informing the separation network of vocal activity improves the separation results consistently across all the measures used to evaluate the separation quality.
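As a hedged sketch of the general idea of attribute-aware input (not the authors' architecture), the snippet below broadcasts a frame-level vocal-activity indicator over frequency and concatenates it with a magnitude spectrogram as a second input channel to a small mask-estimation network. Shapes, layer sizes, and names are assumptions made for illustration.

```python
# Illustrative attribute-aware input for singing-voice separation
# (assumed shapes and layers; not the architecture from the paper).
import torch
import torch.nn as nn


class MaskEstimator(nn.Module):
    """Predicts a vocal soft mask from a spectrogram plus a vocal-activity channel."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),  # 2 channels: |STFT|, activity
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, mag: torch.Tensor, vocal_activity: torch.Tensor) -> torch.Tensor:
        # mag: (batch, n_bins, n_frames); vocal_activity: (batch, n_frames) in [0, 1]
        activity_map = vocal_activity[:, None, :].expand_as(mag)  # broadcast over bins
        x = torch.stack([mag, activity_map], dim=1)               # (batch, 2, bins, frames)
        return self.net(x).squeeze(1)                             # soft mask, same shape as mag


mag = torch.rand(1, 513, 200)                      # fake mixture magnitude spectrogram
activity = (torch.rand(1, 200) > 0.5).float()      # frame-level vocal activity labels
vocal_mag = MaskEstimator()(mag, activity) * mag   # masked vocal magnitude estimate
```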
{"title":"Improving Singing Voice Separation Using Attribute-Aware Deep Network","authors":"R. Swaminathan, Alexander Lerch","doi":"10.1109/MMRP.2019.8665379","DOIUrl":"https://doi.org/10.1109/MMRP.2019.8665379","url":null,"abstract":"Singing Voice Separation (SVS) attempts to separate the predominant singing voice from a polyphonic musical mixture. In this paper, we investigate the effect of introducing attribute-specific information, namely, the frame level vocal activity information as an augmented feature input to a Deep Neural Network performing the separation. Our study considers two types of inputs, i.e, a ground-truth based ‘oracle’ input and labels extracted by a state-of-the-art model for singing voice activity detection in polyphonic music. We show that the separation network informed of vocal activity learns to differentiate between vocal and nonvocal regions. Such a network thus reduces interference and artifacts better compared to the network agnostic to this side information. Results on the MIR1K dataset show that informing the separation network of vocal activity improves the separation results consistently across all the measures used to evaluate the separation quality.","PeriodicalId":441469,"journal":{"name":"2019 International Workshop on Multilayer Music Representation and Processing (MMRP)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131811416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Reviewers MMRP 2019
A. Baratè, I. Barbancho
Federico Avanzini, University of Milan, Italy
Adriano Baratè, University of Milan, Italy
Isabel Barbancho, Universidad de Malaga, Spain
Emilios Cambouropoulos, Aristotle University of Thessaloniki, Greece
Michael Cohen, University of Aizu, Japan
Shlomo Dubnov, University of California San Diego, USA
Douglas Keislar, Computer Music Journal, USA
Luca Andrea Ludovico, University of Milan, Italy
Alan Marsden, Lancaster University, United Kingdom
Davide Andrea Mauro, Marshall University, USA
Stavros Ntalampiras, University of Milan, Italy
Stephen Travis Pope, HeavenEverywhere Media, Birdentifier LLC, USA
Giorgio Presti, University of Milan, Italy
Curtis Roads, University of California Santa Barbara, USA
Antonio Rodà, University of Padova, Italy
Perry Roland, Music Encoding Initiative
Stefania Serafin, Aalborg University, Denmark
Federico Simonetta, University of Milan, Italy
Bob Sturm, Royal Institute of Technology KTH, Sweden
{"title":"Reviewers MMRP 2019","authors":"A. Baratè, I. Barbancho","doi":"10.1109/mmrp.2019.00008","DOIUrl":"https://doi.org/10.1109/mmrp.2019.00008","url":null,"abstract":"Federico Avanzini, University of Milan, Italy Adriano Baratè, University of Milan, Italy Isabel Barbancho, Universidad de Malaga, Spain Emilios Cambouropoulos, Aristotle University of Thessaloniki, Greece Michael Cohen, University of Aizu, Japan Shlomo Dubnov, University of California San Diego, USA Douglas Keislar, Computer Music Journal, USA Luca Andrea Ludovico, University of Milan, Italy Alan Marsden, Lancaster University, United Kingdom Davide Andrea Mauro, Marshall University, USA Stavros Ntalampiras, University of Milan, Italy Stephen Travis Pope, HeavenEverywhere Media, Birdentifier LLC, USA Giorgio Presti, University of Milan, Italy Curtis Roads, University of California Santa Barbara, USA Antonio Rodà, University of Padova, Italy Perry Roland, Music Encoding Initiative Stefania Serafin, Aalborg University, Denmark Federico Simonetta, University of Milan, Italy Bob Sturm, Royal Institute of Technology KTH, Sweden","PeriodicalId":441469,"journal":{"name":"2019 International Workshop on Multilayer Music Representation and Processing (MMRP)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114429290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
State of the Art and Perspectives in Multi-Layer Formats for Music Representation
A. Baratè, G. Haus, L. A. Ludovico
This paper aims to provide an analytical comparison among the most relevant representation formats that support multi-layer descriptions of music content, namely IEEE 1599, Music Encoding Initiative, and MusicXML/MNX. After reviewing the technical characteristics of these formats and highlighting their similarities and differences, we try to shed light on their future, so as to understand the current trends in the digital representation of music and multimedia.
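By way of illustration only, the sketch below builds a toy XML document in which one logical note is cross-referenced by separate notation and audio layers, which is the multi-layer idea that IEEE 1599, MEI, and MusicXML/MNX realize in different ways. The element names are invented for this sketch and do not follow any of those schemas.

```python
# Toy multi-layer description: one logical note cross-referenced by a
# notation layer and an audio layer. Element names are invented for this
# sketch and do not follow the IEEE 1599, MEI, or MusicXML/MNX schemas.
import xml.etree.ElementTree as ET

root = ET.Element("music_document")

logic = ET.SubElement(root, "logic_layer")
ET.SubElement(logic, "note", {"id": "n1", "pitch": "C4", "duration": "quarter"})

notation = ET.SubElement(root, "notation_layer")
ET.SubElement(notation, "glyph", {"ref": "n1", "staff": "1", "x": "12.5", "y": "3.0"})

audio = ET.SubElement(root, "audio_layer")
ET.SubElement(audio, "event", {"ref": "n1", "file": "take1.wav",
                               "onset_s": "0.00", "offset_s": "0.48"})

# Every layer points back to the same logical note through the shared "ref"
# identifier, so score, rendering, and recording stay synchronized.
print(ET.tostring(root, encoding="unicode"))
```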
{"title":"State of the Art and Perspectives in Multi-Layer Formats for Music Representation","authors":"A. Baratè, G. Haus, L. A. Ludovico","doi":"10.1109/MMRP.2019.8665381","DOIUrl":"https://doi.org/10.1109/MMRP.2019.8665381","url":null,"abstract":"This paper aims to provide an analytical comparison among the most relevant representation formats that support multi-layer descriptions of music content, namely IEEE 1599, Music Encoding Initiative, and MusicXML/MNX. After remarking the technical characteristics of such formats and highlighting their similarities and differences, we will try to shed light on their future, so as to understand the current trends in digital representation of music and multimedia.","PeriodicalId":441469,"journal":{"name":"2019 International Workshop on Multilayer Music Representation and Processing (MMRP)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114180046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
A Multilayered Approach to Automatic Music Generation and Expressive Performance
Filippo Carnovalini, A. Rodà
When analyzing scores, musicologists often use multilayered representations to describe different importance levels of notes and chords, according to hierarchical musical structures. These structures are believed to represent the composer's mental representation as well as the listeners' perception of the piece. Thus, in the context of automated music generation, this kind of information can be of great use to model both the composition itself and its expressive performance. In this paper, one computational method for performing this kind of analysis is described. Its implementation is then used to generate short musical phrases according to a hierarchical structure that is also used to model the performance of these melodies.
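The paper's own generation and performance model is not reproduced here; the snippet below is only a toy sketch of the hierarchical idea, in which structural tones are chosen first, passing notes are then inserted between them, and each note's level in the hierarchy also drives a simple expressive velocity. The scale, the elaboration rule, and the velocity mapping are invented for illustration.

```python
# Toy hierarchical phrase generator (illustrative only, not the paper's method):
# level 0 picks structural pitches, level 1 fills passing notes, and the level
# of each note also drives a simple expressive dynamic.
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI pitches, one octave


def generate_phrase(n_structural: int = 4, seed: int = 0):
    rng = random.Random(seed)
    # Level 0: structural tones (start and end on the tonic for closure).
    structural = [60] + [rng.choice(C_MAJOR) for _ in range(n_structural - 2)] + [72]

    phrase = []   # (pitch, level) pairs
    for a, b in zip(structural, structural[1:]):
        phrase.append((a, 0))
        # Level 1: insert one scale tone lying between the two structural pitches.
        between = [p for p in C_MAJOR if min(a, b) < p < max(a, b)]
        if between:
            phrase.append((rng.choice(between), 1))
    phrase.append((structural[-1], 0))

    # Expressive layer: structural notes are played louder than ornaments.
    return [(pitch, 96 if level == 0 else 64) for pitch, level in phrase]


for pitch, velocity in generate_phrase():
    print(f"pitch={pitch:3d}  velocity={velocity}")
```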
{"title":"A Multilayered Approach to Automatic Music Generation and Expressive Performance","authors":"Filippo Carnovalini, A. Rodà","doi":"10.1109/MMRP.2019.8665367","DOIUrl":"https://doi.org/10.1109/MMRP.2019.8665367","url":null,"abstract":"When analyzing scores, musicologists often use multilayered representations to describe different importance levels of notes and chords, according to hierarchical musical structures. These structures are believed to represent the composer's mental representation as well as the listeners' perception of the piece. Thus, in the context of automated music generation, this kind of information can be of great use to model both the composition itself and its expressive performance. In this paper one computational method to perform this kind of analysis is described. Its implementation is then used to generate short musical phrases according to a hierarchical structure that is also used to model the performance of these melodies.","PeriodicalId":441469,"journal":{"name":"2019 International Workshop on Multilayer Music Representation and Processing (MMRP)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131923117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
Workshop Support MMRP 2019
{"title":"Workshop Support MMRP 2019","authors":"","doi":"10.1109/mmrp.2019.00009","DOIUrl":"https://doi.org/10.1109/mmrp.2019.00009","url":null,"abstract":"","PeriodicalId":441469,"journal":{"name":"2019 International Workshop on Multilayer Music Representation and Processing (MMRP)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129632956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Scientific Committee MMRP 2019
I. Barbancho
Gérard Assayag, IRCAM Research Lab, France
Isabel Barbancho, Universidad de Málaga, Spain
Emilios Cambouropoulos, Aristotle University of Thessaloniki, Greece
Antonio Camurri, University of Genoa, Italy
Michael Cohen, University of Aizu, Japan
Shlomo Dubnov, University of California San Diego, USA
Goffredo Haus, University of Milan, Italy
Douglas Keislar, Computer Music Journal, MIT Press, USA
Marc Leman, Ghent University, Belgium
Alan Marsden, Lancaster University, United Kingdom
Davide Andrea Mauro, Marshall University, USA
Stavros Ntalampiras, University of Milan, Italy
Stephen Trevis Pope, HeavenEverywhere, CA, USA
Curtis Roads, University of California Santa Barbara, USA
Perry Roland, Music Encoding Initiative
Stefania Serafin, Aalborg University Copenhagen, Denmark
Bob Sturm, Royal Institute of Technology KTH, Sweden
{"title":"Scientific Committee MMRP 2019","authors":"I. Barbancho","doi":"10.1109/mmrp.2019.00007","DOIUrl":"https://doi.org/10.1109/mmrp.2019.00007","url":null,"abstract":"Gérard Assayag, IRCAM Research Lab, France Isabel Barbancho, Universidad de Málaga, Spain Emilios Cambouropoulos, Aristotle University of Thessaloniki, Greece Antonio Camurri, University of Genoa, Italy Michael Cohen, University of Aizu, Japan Shlomo Dubnov, University of California San Diego, USA Goffredo Haus, University of Milan, Italy Douglas Keislar, Computer Music Journal, MIT Press, USA Marc Leman, Ghent University, Belgium Alan Marsden, Lancaster University, United Kingdom Davide Andrea Mauro, Marshall University, USA Stavros Ntalampiras, University of Milan, Italy Stephen Trevis Pope, HeavenEverywhere, CA, USA Curtis Roads, University of California Santa Barbara, USA Perry Roland, Music Encoding Initiative Stefania Serafin, Aalborg University Copenhagen, Denmark Bob Sturm, Royal Institute of Technology KTH, Sweden","PeriodicalId":441469,"journal":{"name":"2019 International Workshop on Multilayer Music Representation and Processing (MMRP)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132408792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0