The concept of rāga in Carnatic music is based on an ordered set of notes in an octave. Historically, rāgas are broadly classified into two sets, namely Janaka (root/parent) and Janya (derived/offspring) rāgas, and every janya rāga is held to derive from a unique parent. We examine this classification critically and attempt to provide a quantitative basis for it by defining a ‘distance’ between rāgas: each rāga is represented by a pitch histogram vector in a 12-dimensional space, and the shortest distance to a janaka rāga identifies the parentage. To test for consensus, several distance metrics are applied in this multi-dimensional space. Using a standard data set (refer to section 4.4), we carry out the distance analysis on entire compositions and then refine it using only the passages of compositions that contain all the features of the rāga. We also perform an independent analysis comparing motif sequences across rāgas. We find that while the conventional classification (refer to section 3) is fairly robust, there are exceptions, especially among pentatonic rāgas, and that these exceptions are actively debated in the public domain. Since the quantitative methods do not converge on a single answer, we conclude that while a rāga belongs to a family, it does not necessarily belong to a unique parent.
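As a concrete illustration of the histogram-based distance comparison described above, the following minimal sketch compares a hypothetical janya rāga histogram against two candidate parent rāgas under three common metrics (Euclidean, Manhattan, cosine). The histogram values and the particular metrics are illustrative assumptions, not the data set or the exact metric choices of the paper.

```python
# Minimal sketch: comparing 12-dimensional pitch-class histograms of ragas
# under several distance metrics. The histograms below are invented values,
# not data from the paper; the metrics shown are common choices and may
# differ from the ones used in the study.
import numpy as np

def normalise(hist):
    """Scale a raw pitch-class count vector so its entries sum to 1."""
    hist = np.asarray(hist, dtype=float)
    return hist / hist.sum()

def euclidean(p, q):
    return np.linalg.norm(p - q)

def manhattan(p, q):
    return np.abs(p - q).sum()

def cosine_distance(p, q):
    return 1.0 - np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))

# Hypothetical 12-bin pitch-class histograms (C, C#, ..., B) for a janya raga
# and two candidate parent (janaka) ragas.
janya = normalise([120, 0, 90, 0, 80, 60, 0, 110, 0, 70, 0, 40])
parent_a = normalise([100, 0, 85, 0, 75, 65, 0, 105, 0, 60, 20, 35])
parent_b = normalise([90, 30, 60, 0, 70, 50, 25, 95, 0, 55, 0, 45])

for name, metric in [("euclidean", euclidean),
                     ("manhattan", manhattan),
                     ("cosine", cosine_distance)]:
    d_a = metric(janya, parent_a)
    d_b = metric(janya, parent_b)
    closer = "parent_a" if d_a < d_b else "parent_b"
    print(f"{name}: d(janya, parent_a)={d_a:.3f}, "
          f"d(janya, parent_b)={d_b:.3f} -> {closer}")
```

Agreement among the metrics on the nearest janaka rāga would correspond to the ‘consensus’ the abstract refers to; disagreement flags the contested cases.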
This paper redefines the performance practice of street barrel organs by transgressing their conventional physical gestures and moving the instruments beyond their role as mere music-reproduction machines. We propose a new understanding of these instruments by drawing a parallel with how turntables came to be considered musical instruments through hand manipulation. Collaborating with Chilean organilleros, we experimented with the notion of ‘physical gesture transgression’ and explored creating new sounds through various body actions. We provide a list of ‘transgressive gestures,’ propose expansions through instrument preparation or additional gestures, and show how these new gestures can be annotated alongside traditional notation for other musical instruments.
Many approaches to music generation have been presented recently. While stylistic music generation using deep learning techniques has become the mainstream, these models still struggle to generate music with high musicality, multiple levels of musical structure, and controllability. In addition, application scenarios such as music therapy require imitating a specific musical style from a few given examples, rather than capturing the overall genre style of a large data corpus. To address these requirements, which challenge current deep learning methods, we propose a statistical machine learning model that captures and imitates the structure, melody, chord, and bass style of a given example seed song. An evaluation using 10 pop songs shows that our new representations and methods can create high-quality stylistic music that is similar to a given input song. We also discuss potential uses of our approach in music evaluation and music therapy.
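To illustrate the general idea of example-based statistical style imitation (as opposed to training on a large corpus), the sketch below fits a first-order Markov chain over melody pitches to a single seed melody and samples a new melody from it. This is a deliberately simplified stand-in for the proposed model, which also captures structure, chords, and bass; the seed melody, function names, and parameters are invented for illustration.

```python
# Toy example of learning a melodic style from one seed song: a first-order
# Markov chain over pitches, estimated from the seed and then sampled.
# All data here is hypothetical and not drawn from the paper's evaluation set.
import random
from collections import defaultdict

def learn_transitions(melody):
    """Collect pitch-to-pitch transitions observed in the seed melody."""
    transitions = defaultdict(list)
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new melody that follows the seed song's transition statistics."""
    rng = random.Random(seed)
    note = start
    out = [note]
    for _ in range(length - 1):
        choices = transitions.get(note)
        if not choices:          # dead end: restart from the opening note
            note = start
            choices = transitions[note]
        note = rng.choice(choices)
        out.append(note)
    return out

# Hypothetical seed melody as MIDI pitch numbers.
seed_melody = [60, 62, 64, 65, 64, 62, 60, 67, 65, 64, 62, 60]
model = learn_transitions(seed_melody)
print(generate(model, start=60, length=16))
```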