-
Lartillot, Olivier
(2024).
Introduction to the MiningSuite toolbox.
-
Lartillot, Olivier
(2024).
AI tools for the management, transcription and analysis of music archives.
Abstract
I present a series of tools developed in collaboration with the National Library of Norway. AudioSegmentor automatically splits tape recordings into individual pieces of music. This tool simplified the digitisation of the Norwegian Folk Music Collection. We use advanced deep learning methods to create a groundbreaking automatic music transcription system, MusScribe, first fine-tuned for the Hardanger fiddle and now made available to music archive professionals for a broad range of music. I also discuss our ongoing progress in the automated musicological analysis of folk music pieces and of comprehensive collections.
-
Ziegler, Michelle; Sudo, Marina; Akkermann, Miriam & Lartillot, Olivier
(2024).
Towards Collaborative Analysis: Kaija Saariaho’s IO.
-
Lartillot, Olivier
(2024).
Successes and challenges of computational approaches for audio and music analysis and for predicting music-evoked emotion.
Abstract
Background
Decades of research in computational sound and music analysis have led to a large range of analysis tools offering rich and diverse descriptions of music, although much of music's subtlety remains out of reach. These descriptors are used to build computational models that predict perceived or induced emotion directly from music. Although such models can account for a significant share of the variability in experimentally measured emotions (Panda et al., 2023), further progress seems hard to achieve, probably because of the subtlety both of music and of the mechanisms by which it evokes emotion.
Aims
A broad yet concise panorama of computational research in sound and music analysis, as well as in emotion prediction from music, is presented. Core challenges are highlighted and prospective ways forward are suggested.
Main contribution
For each musical dimension (dynamics, timbre, rhythm, tonality and mode, motifs, phrasing, structure and form), a concise overview of the state of the art is given, highlighting strengths and challenges and indicating how particular sound and music features have been found to correlate with rated emotions. The various strategies for modelling emotional reactions to audio and musical features are presented and discussed.
One common general analytical approach carries out a broad and approximate analysis of the audio recording based on simple mathematical models, describing individual audio or musical characteristics numerically. It is suggested that such a loose approach may tend to drift away from commonly understood musical processes and to generate artefacts. This vindicates a more traditional musicological approach, focused on the score or on approximations of it (obtained through automated transcription if necessary) and on reconstructing the types of representations traditionally studied in musicology. I also argue for the need to closely reflect the way humans listen to and understand music, inspired by a cognitive perspective. Guided by these insights, I sketch the idea of a complex system made of interdependent modules, founded on sequential pattern inference and on activation scores not based on statistical sampling.
I also suggest perspectives for improving the computational prediction of emotions evoked by music.
Discussion and conclusion
Further improvements of computational music analysis methods, as well as emotion prediction, seem to call for a change of modelling paradigm.
References
R. Panda, R. Malheiro, and R. Paiva, "Audio Features for Music Emotion Recognition: A Survey", IEEE Transactions on Affective Computing, vol. 14, no. 1, pp. 68–88, 2023.
-
Christodoulou, Anna-Maria; Dutta, Sagar; Lartillot, Olivier; Glette, Kyrre & Jensenius, Alexander Refsum
(2024).
Exploring Convolutional Neural Network Models for Multimodal Classification of Expressive Piano Performance.
-
Monstad, Lars Løberg & Lartillot, Olivier
(2024).
muScribe: a new transcription service for music professionals.
-
Johansson, Mats Sigvard & Lartillot, Olivier
(2024).
Automated transcription of Hardanger fiddle music: Tracking the beats.
-
Monstad, Lars Løberg & Lartillot, Olivier
(2024).
Automated transcription of Hardanger fiddle music: Detecting the notes.
-
Thedens, Hans-Hinrich & Lartillot, Olivier
(2024).
The Norwegian Catalogue of Folk Music Online.
-
Lartillot, Olivier
(2024).
Overview of the MIRAGE project.
-
Lartillot, Olivier
(2024).
MIRAGE Closing Seminar: Digitisation and computer-aided music analysis of folk music.
Abstract
One aim of the MIRAGE project is to conceive new technologies that allow us to better access, understand and appreciate music, with a particular focus on Norwegian folk music. This seminar presents what has been achieved during the four years of the project, leading in particular to the digital version of the Norwegian Catalogue of Folk Music. We are also designing tools to automatically transcribe audio recordings of folk music. More advanced musicological applications are discussed as well. To conclude, we introduce the new spinoff project, called muScribe, aimed at developing transcription services for a broad range of music beyond folk music, initially tailored to professional organisations such as archives, publishers and producers.
-
Lartillot, Olivier
(2024).
Musicological and Technological Perspectives on Computational Analysis of Electroacoustic Music.
In Jensenius, Alexander Refsum (Ed.),
Sonic Design: Explorations Between Art and Science.
Springer Nature.
ISBN 978-3-031-57892-2.
pp. 271–297.
doi:
https://doi.org/10.1007/978-3-031-57892-2_15.
Abstract
Analysing electroacoustic music remains challenging, leaving this artistic treasure somewhat out of reach of mainstream musicology and many music lovers. This chapter examines electroacoustic music analysis, covering musicological investigations and aspirations as well as technological challenges and potentials. The aim is to develop new technologies that overcome the current limitations. The compositional and musicological foundations of electroacoustic music analysis are traced back to Pierre Schaeffer's Traité des objets musicaux. The chapter presents an overview of core analytical principles underpinning more recent musicological approaches, including R. Murray Schafer's soundscape analysis, Denis Smalley's spectromorphology, and Lasse Thoresen's graphical formalisation. The state of the art in computational analysis of electroacoustic music is then compiled and organised along broad themes, from detecting sound objects to estimating dynamics, facture and grain, mass, motions, space, timbre and rhythm. Finally, I sketch the principles of what could become a Toolbox des objets sonores.
-
Lartillot, Olivier
(2024).
Real-time MIRAGE visualisation of Bartók's first quartet, first movement.
-
Lartillot, Olivier
(2024).
Harmonizing Tradition with Technology: Enhancing Norwegian Folk Music through Computational Innovation.
Abstract
My work involves developing computational tools to safeguard and elevate the cultural significance of music repertoires, with a focus on a cooperative project with the National Library of Norway around its collection of Norwegian folk music. Our first phase centred on transforming unstructured audio tapes into a systematic dataset of melodies, while ensuring access and longevity through efficient data management and linking with other catalogues.
Our core activity involves transcribing audio recordings into scores, comparing the traditional manual method with our modern attempts at automation. By providing detailed performance notation, close alignment between scores and audio recordings will improve comprehension and overall accessibility, and enable a more advanced structuring of the collection.
Challenges arose when incorporating this music into the International Inventory of Musical Sources (RISM) database, because the 'incipit' concept fits poorly with genres like Hardanger fiddle folk music. We suggest innovative generalisations of this concept. Moreover, we are creating techniques to digitally dissect the musical corpus, aiming to extract the key features of each tune. This initiative not only serves as an alternative to incipits but also provides novel metadata formats, increasing usability and connectivity within the collection and with other databases.
-
Monstad, Lars Løberg & Lartillot, Olivier
(2023).
Automatic Transcription Of Multi-Instrumental Songs: Integrating Demixing, Harmonic Dilated Convolution, And Joint Beat Tracking.
Abstract
In the rapidly expanding field of music information retrieval (MIR), automatic transcription remains one of the most sought-after capabilities, especially for songs that employ multiple instruments. Musscribe is a state-of-the-art transcription tool that addresses this challenge by integrating three distinct methodologies: demixing, harmonic dilated convolution, and joint beat tracking. Demixing isolates the individual instruments within a song by separating overlapping audio sources, ensuring that each instrument is transcribed distinctly. Beat tracking is run as a parallel process to extract joint beat and downbeat estimations. These processes result in an output MIDI file, which is then quantized using information derived from the beat tracking (a simplified sketch of this quantization step follows this entry). This method paves the way for more accurate and sophisticated analyses, bridging the gap between human and machine understanding of music. Together, these methodologies allow us to produce transcriptions that are not only accurate but also highly representative of the original compositions. Preliminary tests and evaluations showcase its potential for transcribing complex musical pieces with high fidelity, outperforming many contemporary tools on the market. This innovative approach has implications not only for music transcription but also for broader applications in audio analysis, remixing, and digital music production. The model has been instrumental in accelerating the composition process for several Norwegian television shows, and its efficacy can be observed in the Netflix series "A Storm for Christmas", where renowned composer Peter Baden harnessed the tool to enhance his workflow, demonstrating the demand for innovative tools like this in the professional music industry.
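As a rough illustration of the quantization step mentioned above, here is a minimal Python sketch; the function and parameter names are hypothetical and this is not Musscribe's actual code, just the general idea of snapping transcribed note onsets to a grid interpolated between tracked beats:

```python
import numpy as np

def quantize_onsets(onsets_sec, beat_times_sec, subdivisions=4):
    """Snap note onsets to the nearest subdivision of the beat grid.

    onsets_sec: note onset times (seconds) from the transcription stage.
    beat_times_sec: beat positions (seconds) from the beat tracker.
    subdivisions: grid points per beat (4 = sixteenth notes).
    """
    beat_times_sec = np.asarray(beat_times_sec, dtype=float)
    # Build the full grid by linear interpolation between consecutive beats.
    grid = []
    for b0, b1 in zip(beat_times_sec[:-1], beat_times_sec[1:]):
        grid.extend(np.linspace(b0, b1, subdivisions, endpoint=False))
    grid.append(beat_times_sec[-1])
    grid = np.asarray(grid)
    # Snap each onset to its nearest grid point.
    idx = np.abs(grid[None, :] - np.asarray(onsets_sec)[:, None]).argmin(axis=1)
    return grid[idx]

# Example: beats at 0.5 s intervals (120 BPM), slightly loose playing.
print(quantize_onsets([0.02, 0.61, 1.13], [0.0, 0.5, 1.0, 1.5]))
# -> [0.    0.625 1.125]
```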
-
Maidhof, Clemens; Agres, Kat; Fachner, Jörg & Lartillot, Olivier
(2023).
Intra- and inter-brain coupling during music therapy.
-
Wosch, Thomas; Vobig, Bastian; Lartillot, Olivier & Christodoulou, Anna-Maria
(2023).
HIGH-M (Human Interaction assessment and Generative segmentation in Health and Music).
-
Lartillot, Olivier
(2023).
Music Therapy Toolbox, and prospects.
-
Lartillot, Olivier; Swarbrick, Dana; Upham, Finn & Cancino-Chacón, Carlos Eduardo
(2023).
Video visualization of a string quartet performance of a Bach Fugue: Design and subjective evaluation.
-
Bishop, Laura; Høffding, Simon; Laeng, Bruno & Lartillot, Olivier
(2023).
Mental effort and expressive interaction in expert and student string quartet performance.
-
Lartillot, Olivier; Thedens, Hans-Hinrich; Mjelva, Olav Luksengård; Elovsson, Anders; Monstad, Lars Løberg & Johansson, Mats Sigvard
[View all 8 authors of this article]
(2023).
Norwegian Folk Music & Computational Analysis.
Abstract
As a prelude to Norway's Constitution Day, this special event celebrated the Norwegian folk music tradition, showcasing our new online archive and demonstrating the richness of Hardanger fiddle music through live performance. One aim of the project is to conceive new technologies that allow better access to, and understanding and appreciation of, Norwegian folk music.
At this event, we introduced a new online version of the Norwegian Folk Music Archive and discussed the underlying theoretical and technical challenges. A live concert/workshop, with the participation of Olav Luksengård Mjelva, offered a lively introduction to Hardanger fiddle music and its elaborate rhythm. The interests and challenges of automated transcription and analysis were discussed, accompanied by the public release of our new software Annotemus.
The symposium was organised in the context of the MIRAGE project (RITMO, in collaboration with the National Library of Norway's Digital Humanities Laboratory).
-
Lartillot, Olivier & Monstad, Lars Løberg
(2023).
Computational music analysis: Significance, challenges, and our proposed approach.
Abstract
Music is something that almost all of us appreciate, yet it remains a hidden and enigmatic concept for many. Music notation, in the form of scores, facilitates practice and enhances the understanding of the richness of musical works. However, producing a score from a music performance (a task called music transcription) is tedious and demanding, and requires considerable proficiency; hence the interest in computational automation. But music is not just notes: it is also melody, rhythm, themes, timbre, and very subtle aspects such as form. While many of us may not be consciously familiar with these concepts, they still have a subconscious influence on our aesthetic experience. Interestingly, the more we consciously understand the underlying language of music, the more we tend to appreciate and enjoy it. There is therefore value in creating computational tools that can automate and enhance these types of analyses.
The presenters' past work resulted in the creation of the Matlab MIRtoolbox, which measures a broad range of musical characteristics directly from audio through signal processing techniques. Currently, the MIRAGE project prioritises music transcription (with a particular focus on Norwegian folk music), blending neural-network-based deep learning with conventional rule-based models. Through this project, the presenters highlight the importance of acknowledging the interconnectedness of all musical elements. They have also crafted animated visualisations to make analyses more accessible to the general public, and aim to make music transcription technology publicly available, with support from the UiO Growth House.
-
Lartillot, Olivier
(2023).
Dynamic Visualisation of Fugue Analysis, Demonstrated in a Live Concert by the Danish String Quartet.
-
Lartillot, Olivier
(2023).
Towards a comprehensive model for computational music transcription and analysis: a necessary dialog between machine learning and rule-based design?
-
Lartillot, Olivier & Monstad, Lars Løberg
(2023).
MIRAGE - A Comprehensive AI-Based System for Advanced Music Analysis.
-
Lartillot, Olivier
(2023).
Computational audio and musical features extraction: from MIRtoolbox to the MiningSuite.
-
Bishop, Laura; Høffding, Simon; Lartillot, Olivier Serge Gabriel & Laeng, Bruno
(2023).
Mental effort and expressive interaction in expert and student string quartet performance.
-
Christodoulou, Anna-Maria; Lartillot, Olivier & Anagnostopoulou, Christina
(2023).
Computational Analysis of Greek Folk Music of the Aegean.
-
Lartillot, Olivier
(2023).
Towards a Comprehensive Modelling Framework for Computational Music Transcription/Analysis.
Abstract
Computational music analysis, still in its infancy and lacking overarching reliable tools, can at the same time be seen as a promising approach to fulfilling core epistemological needs. Analysis in the audio domain, although it approaches music in its entirety, is doomed to superficiality if it does not fully embrace the underlying symbolic system; this requires a complete automated transcription, with metrical, modal/harmonic, voicing and formal structures scaffolded on top of the layers of elementary events (such as notes). Automated transcription makes it possible to overcome the polarity between sound and music notation, providing an interfacing semiotic system that combines the advantages of both domains and surpasses the limitations of traditional approaches based on graphic representations. Deep learning and signal processing approaches to the discretisation of the continuous signal are compared and discussed. A multi-dimensional music transcription and analysis framework (the two tasks being deeply intertwined) must take into account the far-reaching interdependencies between dimensions, for instance between motivic and metrical analysis. We propose an attempt to build such a comprehensive framework, founded on general musical and cognitive principles, building music analysis capabilities through a combination of simple and general operators. The validity of the analyses is addressed in close discussion with music experts. The potential capability to produce valid analyses for a very large corpus of music would make such a complex system a relevant blueprint for cognitive modelling of music understanding. We try to address a large diversity of music cultures and their specific challenges: among others, maqam modes (with Mondher Ayari), Norwegian Hardanger fiddle rhythm (with Mats Johansson and Hans-Hinrich Thedens), djembe drumming from Mali (with Rainer Polak) and electroacoustic music (towards a Toolbox des objets musicaux, with Rolf Inge Godøy). We aim to make the framework fully transparent, collaborative and open.
-
Lartillot, Olivier; Elovsson, Anders; Johansson, Mats Sigvard; Thedens, Hans-Hinrich & Monstad, Lars Alfred Løberg
(2022).
Segmentation, Transcription, Analysis and Visualisation of the Norwegian Folk Music Archive.
Abstract
We present an ongoing project dedicated to transforming a collection of field recordings of Norwegian folk music, established in the 1960s, into an easily accessible online catalogue augmented with advanced music technology and computational musicology tools. We focus in particular on a major highlight of this collection: Hardanger fiddle music. The studied corpus was available as a series of 600 tape recordings, each containing up to 2 hours of audio, associated with metadata indicating the approximate positions of pieces of music. We first need to retrieve the individual recording associated with each tune, combining an automated pre-segmentation based on sound classification and audio analysis (a simplified sketch follows this entry) with a subsequent manual verification and fine-tuning of the temporal positions, using a custom-built user interface.
Note detection is carried out by a deep learning method. To adapt the model to Hardanger fiddle music, musicians were asked to record themselves and annotate all played notes, using a dedicated interface. Data augmentation techniques have been designed to accelerate the process, in particular using alignment of varied performances of the same tunes. The transcription also requires the reconstruction of the metrical structure, which is particularly challenging in this style of music. We have collected ground-truth data for this task as well, and are designing a computational model.
The next step consists in carrying out detailed musical analysis of the transcriptions, in order to reveal in particular intertextuality within the corpus. A final direction of research aims at designing tools to visualise each tune and the whole catalogue, both for musicologists and for the general public.
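The pre-segmentation idea referenced above can be sketched in a few lines. This is a hypothetical simplification (function names and thresholds are mine, not the project's), assuming a classifier has already produced a per-frame music-vs-other activation curve:

```python
import numpy as np

def presegment(activation, frame_rate=100, threshold=0.5, min_gap=2.0, min_len=10.0):
    """Turn a per-frame 'music' activation curve into coarse segments.

    activation: music-vs-speech/silence scores in [0, 1], one per frame
                (here assumed to come from a sound classifier).
    min_gap:    gaps shorter than this (seconds) are bridged;
    min_len:    segments shorter than this (seconds) are discarded.
    Returns a list of (start, end) times in seconds.
    """
    mask = np.concatenate(([0], (np.asarray(activation) >= threshold).astype(int), [0]))
    d = np.diff(mask)
    starts = np.flatnonzero(d == 1) / frame_rate
    ends = np.flatnonzero(d == -1) / frame_rate
    merged = []
    for s, e in zip(starts, ends):
        if merged and s - merged[-1][1] < min_gap:
            merged[-1][1] = e          # bridge a short pause within one tune
        else:
            merged.append([s, e])
    return [(s, e) for s, e in merged if e - s >= min_len]

# Toy curve: two 'tunes' separated by 5 s of speech, at 100 frames/s.
act = np.concatenate([np.ones(1500), np.zeros(500), np.ones(2000)])
print(presegment(act))   # [(0.0, 15.0), (20.0, 40.0)]
```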
-
Lartillot, Olivier; Godøy, Rolf Inge & Christodoulou, Anna-Maria
(2022).
Computational detection and characterisation of sonic shapes: Towards a Toolbox des objets sonores.
Abstract
Computational detection and analysis of sound objects is of high importance both for musicology and for sound design. Yet Music Information Retrieval technologies have so far mostly focused on transcribing music into notes in the classical sense, whereas we are interested in detecting sound objects and their feature categories, as suggested by Pierre Schaeffer's typology and morphology of sound objects (1966), which reflect basic sound-producing action types. We propose a signal-processing approach to segmentation based on tracking salient characteristics over time and, dually, on Gestalt-based segmentation decisions triggered by changes. Tracking of pitched sound relies on partial tracking (a simplified sketch follows this entry), whereas the analysis of noisy sound requires tracking larger frequency bands that may vary over time. The resulting sound objects are then described according to Schaeffer's typology and morphology, expressed first in the form of numerical descriptors, each related to one type of the typology (percussive/sustained/iterative, stable/moving pitch vs. unclear pitch) or morphology (such as grain). This multidimensional feature representation is further divided into discrete categories related to the different classes of sounds. The typological and morphological categorisation is driven by the theoretical and experimental framework of morphodynamical theory. We first experiment on isolated sounds from the Solfège des objets sonores, which features a large variety of sound sources, before considering more complex configurations featuring successions of sound objects without silence, or simultaneous sound objects. Analytical results are visualised as graphical representations, aimed at both musicological and pedagogical purposes. This will be applied to the graphical description of, and browsing within, large music catalogues. The application of these analytical descriptions to music creation is also investigated.
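To convey what partial tracking involves, here is a minimal greedy sketch in Python; it is a toy stand-in under my own assumptions, far simpler than the approach described above:

```python
import numpy as np

def track_partials(peak_freqs, max_jump=1.0):
    """Greedy frame-to-frame partial tracking over spectral peaks.

    peak_freqs: list over frames; each entry is an array of peak
                frequencies (Hz) found in that frame's spectrum.
    max_jump:   maximum frequency deviation between frames, in
                semitones, for a peak to continue an active track.
    Returns a list of tracks, each a list of (frame_index, freq) pairs.
    """
    tracks, active = [], []   # active: indices into `tracks`
    for t, freqs in enumerate(peak_freqs):
        used, still_active = set(), []
        for ti in active:
            last_f = tracks[ti][-1][1]
            # Find the unused peak closest in log-frequency.
            best, best_dist = None, max_jump
            for j, f in enumerate(freqs):
                if j in used:
                    continue
                dist = abs(12 * np.log2(f / last_f))  # semitones
                if dist < best_dist:
                    best, best_dist = j, dist
            if best is not None:
                tracks[ti].append((t, freqs[best]))
                used.add(best)
                still_active.append(ti)
        # Unmatched peaks start new tracks.
        for j, f in enumerate(freqs):
            if j not in used:
                tracks.append([(t, f)])
                still_active.append(len(tracks) - 1)
        active = still_active
    return tracks

# Toy example: one stable partial at ~440 Hz plus an upward glide.
frames = [np.array([440.0, 600.0]), np.array([441.0, 620.0]), np.array([439.5, 640.0])]
for tr in track_partials(frames):
    print(tr)
```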
-
Elovsson, Anders & Lartillot, Olivier
(2021).
HF1: Hardanger fiddle dataset.
Abstract
HF1 is a Hardanger fiddle dataset with polyphonic performances spanning five different emotional expressions. The onsets and offsets, together with an associated pitch, were human-annotated for each note in each performance by the fiddle players themselves. The dataset is around 43 minutes long and consists of 19 734 notes of Hardanger fiddle music, recorded in stereo.
-
Tidemann, Aleksander; Lartillot, Olivier & Johansson, Mats Sigvard
(2021).
Towards New Analysis And Visualization Software For Studying Performance Patterns in Hardanger Fiddle Music.
Abstract
Analyzing musical performances is a challenging and emerging field of computational music research, aiming to reveal performance patterns and link them to musical contexts. Computational research on Hardanger fiddle performances remains modest in extent. The MIRAGE research project is currently contributing to this scientific body, developing advanced MIR frameworks that build on recent musicological research. This paper presents the development and evaluation of two Max/MSP/Jitter software applications for music analysis and data visualization, built in collaboration with MIRAGE, that integrate contemporary research perspectives on the complex rhythmical structuring of springar performances, investigating how user-friendly computational tools for exploring performance patterns in Hardanger fiddle music can be designed.
Based on a small questionnaire and a few operational tests, the study shows an interest in more effective software tools capable of revealing complex interrelations between musical dimensions in Hardanger fiddle performances. Additionally, the study highlights design considerations for tools aiming to increase the availability of computational music research in the field of musicology, such as cross-compatibility and integrated features that actively facilitate nuanced interpretation processes.
-
Elovsson, Anders & Lartillot, Olivier
(2021).
A Hardanger Fiddle Dataset with Performances Spanning Emotional Expressions and Annotations Aligned using Image Registration.
Abstract
This paper presents a Hardanger fiddle dataset “HF1” with polyphonic performances spanning five different emotional expressions: normal, angry, sad, happy, and tender. The performances thus cover the four quadrants of the activity/valence-space. The onsets and offsets, together with an associated pitch, were human-annotated for each note in each performance by the fiddle players themselves. First, they annotated the normal version. These annotations were then transferred to the expressive performances using music alignment and finally human-verified. Two separate music alignment methods based on image registration were developed for this purpose; a B-spline implementation that produces a continuous temporal transformation curve and a Demons algorithm that produces displacement matrices for time and pitch that also account for local timing variations across the pitch range. Both methods start from an “Onsetgram” of onset salience across pitch and time and perform the alignment task accurately. Various settings of the Demons algorithm were further evaluated in an ablation study. The final dataset is around 43 minutes long and consists of 19 734 notes of Hardanger fiddle music, recorded in stereo. The dataset and source code are available online. The dataset will be used in MIR research for tasks involving polyphonic transcription, score alignment, beat tracking, downbeat tracking, tempo estimation, and classification of emotional expressions.
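For readers unfamiliar with image-registration-based alignment, the following sketch applies SimpleITK's general-purpose Demons filter to two toy onset-salience matrices. It is only an assumption-laden illustration of the idea, not the paper's implementation (which adds B-spline variants and time/pitch-specific handling):

```python
import numpy as np
import SimpleITK as sitk

# Toy "Onsetgrams": onset salience over (pitch, time) for a reference
# performance and an expressive one whose timing deviates locally.
ref = np.zeros((32, 128), dtype=np.float32)
expr = np.zeros((32, 128), dtype=np.float32)
for k, (t_ref, t_expr) in enumerate([(10, 12), (40, 45), (70, 73), (100, 106)]):
    ref[8 + 2 * k, t_ref] = 1.0
    expr[8 + 2 * k, t_expr] = 1.0

# Smooth so the registration has gradients to follow.
fixed = sitk.SmoothingRecursiveGaussian(sitk.GetImageFromArray(ref), 2.0)
moving = sitk.SmoothingRecursiveGaussian(sitk.GetImageFromArray(expr), 2.0)

demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(200)
demons.SetStandardDeviations(4.0)   # regularisation of the field
field = demons.Execute(fixed, moving)

# The displacement field gives, per (pitch, time) cell, how far the
# expressive performance is shifted; component 0 is the time (column) axis.
disp = sitk.GetArrayFromImage(field)  # shape: (32, 128, 2)
print("mean |time displacement|:", np.abs(disp[..., 0]).mean())
```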
-
Tidemann, Aleksander & Lartillot, Olivier
(2021).
Interactive tools for exploring performance patterns in Hardanger fiddle music.
-
Dalgard, Joachim; Lartillot, Olivier; Vuoskoski, Jonna Katariina & Guldbrandsen, Erling Eliseus
(2021).
Absorption - Somewhere between the heart and the brain.
-
Bruford, Fred & Lartillot, Olivier
(2020).
Multidimensional similarity modelling of complex drum loops using the GrooveToolbox.
Abstract
The GrooveToolbox is a new Python toolbox implementing various algorithms, new and pre-existing, for the analysis and comparison of symbolic drum loops, including rhythm features, similarity metrics and microtiming features. As part of the GrooveToolbox we introduce two new metrics of rhythm similarity and four features for describing the significant properties of microtiming deviations in drum loops. Based on a two-part perceptual evaluation, we show these four new microtiming features can each correlate to similarity perception, and be used with rhythm similarity metrics to improve personalized similarity models for drum loops. A new measure of structural rhythmic similarity is also shown to correlate more strongly to similarity perception of drum loops than the more commonly used Hamming distance. These results point to the potential application of the GrooveToolbox and its new features in drum loop analysis for intelligent music production tools. The GrooveToolbox may be found at: https://github.com/fredbru/GrooveToolbox
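As background for the Hamming comparison mentioned above, a drum loop can be represented as a binary steps-by-instruments matrix, with microtiming stored as per-step offsets. The following generic sketch (hypothetical names, not GrooveToolbox's actual API) computes a Hamming distance and one simple swing-like microtiming descriptor:

```python
import numpy as np

# A loop: 16 steps x 3 parts (kick, snare, hi-hat); 1 = onset on that step.
loop_a = np.zeros((16, 3), dtype=int)
loop_b = np.zeros((16, 3), dtype=int)
loop_a[[0, 8], 0] = 1; loop_a[[4, 12], 1] = 1; loop_a[::2, 2] = 1
loop_b[[0, 8], 0] = 1; loop_b[[4, 13], 1] = 1; loop_b[::2, 2] = 1

def hamming(a, b):
    """Surface-level rhythm distance: number of step/instrument mismatches."""
    return int(np.sum(a != b))

def swing_feature(microtiming, steps_per_beat=4):
    """A simple microtiming descriptor: mean offset (in fractions of a step)
    at the off-beat eighth-note positions; positive = played late."""
    offbeats = np.arange(2, microtiming.shape[0], steps_per_beat)
    return float(np.nanmean(microtiming[offbeats]))

timing_b = np.full(16, np.nan)    # offsets where loop_b has hi-hat onsets
timing_b[::2] = 0.0
timing_b[2::4] = 0.12             # off-beat hats pushed late: swing feel

print("Hamming distance:", hamming(loop_a, loop_b))   # 2 (snare moved one step)
print("Swing amount:", swing_feature(timing_b))       # 0.12
```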
-
Lartillot, Olivier & Toiviainen, Petri
(2020).
Read about the Matlab MIRtoolbox.
Young Acousticians Network (YAN) Newsletter.
s. 4–10.
Abstract
MIRtoolbox is a Matlab toolbox dedicated to the analysis of music and sound from audio recordings and to the extraction of musical features such as tonality, rhythm, or structures. It has also been used for non-musical applications, such as in Non-Destructive Testing, and with non-audio signals. In this issue of the newsletter, the YAN discusses the MIRtoolbox with Olivier Lartillot (RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Norway) and Petri Toiviainen (University of Jyväskylä, Finland).
You can also check out the MIRtoolbox website at:
shorturl.at/oA038
-
Lartillot, Olivier & Toiviainen, Petri
(2020).
Read about the Matlab MIRtoolbox.
[Internett].
Young Acousticians Network Newsletter.
Abstract
MIRtoolbox is a Matlab toolbox dedicated to the analysis of music and sound from audio recordings and to the extraction of musical features such as tonality, rhythm, or structures. It has also been used for non-musical applications, such as in Non-Destructive Testing, and with non-audio signals. In this issue of the newsletter, the YAN discusses the MIRtoolbox with Olivier Lartillot (RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Norway) and Petri Toiviainen (University of Jyväskylä, Finland).
You can also check out the MIRtoolbox website at:
shorturl.at/oA038
-
Lartillot, Olivier & Bruford, Fred
(2020).
Bistate reduction and comparison of drum patterns.
Abstract
This paper develops the hypothesis that symbolic drum patterns can be represented in a reduced form as a simple oscillation between two states: a Low state (commonly associated with kick drum events) and a High state (often associated with either snare drum or hi-hat). An onset time and an accent time are associated with each state. The systematic inference of the reduced form is formalized. This enables the specification of a rhythmic structural similarity measure on drum patterns, where reduced patterns are compared through alignment. The two-state representation allows alignment at a low computational cost, once the complex topological formalization is fully taken into account. A comparison with the Hamming distance, as well as with similarity ratings collected from listeners on a drum loop dataset, indicates that the bistate reduction conveys subtle aspects that go beyond surface-level comparison of rhythmic textures. (A simplified sketch of the reduction follows this entry.)
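To make the idea concrete, here is a minimal sketch; the names and simplifications are mine, not the paper's formalization, which additionally aligns onset and accent times:

```python
# Reduce drum events to a Low/High oscillation and compare two
# reductions by edit distance (only conveys the gist of the paper).

def bistate_reduce(events):
    """events: list of (time, instrument), instrument in
    {'kick', 'snare', 'hihat'}. Kick -> 'L'; snare/hihat -> 'H'.
    Consecutive identical states are merged into one."""
    states = []
    for _, instr in sorted(events):
        s = 'L' if instr == 'kick' else 'H'
        if not states or states[-1] != s:
            states.append(s)
    return ''.join(states)

def edit_distance(a, b):
    """Standard Levenshtein distance between two state strings."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[len(b)]

rock = [(0, 'kick'), (1, 'hihat'), (2, 'snare'), (3, 'hihat'),
        (4, 'kick'), (5, 'hihat'), (6, 'snare'), (7, 'hihat')]
four = [(0, 'kick'), (1, 'hihat'), (2, 'kick'), (3, 'hihat')]
print(bistate_reduce(rock), bistate_reduce(four))  # both reduce to LHLH
print(edit_distance(bistate_reduce(rock), bistate_reduce(four)))  # 0
```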
-
Lartillot, Olivier; Cancino-Chacón, Carlos & Brazier, Charles
(2020).
Real-Time Visualisation Of Fugue Played By A String Quartet.
Abstract
We present a new system for real-time visualisation of music performance, focused for the moment on a fugue played by a string quartet. The basic principle is to offer a visual guide to better understand music using strategies that should be as engaging, accessible and effective as possible. The pitch curves related to the separate voices are drawn on a space whose temporal axis is normalised with respect to metrical positions, and aligned vertically with respect to their thematic and motivic classification. Aspects related to tonality are represented as well. We describe the underlying technologies we have developed and the technical setting. In particular, the rhythmical and structural representation of the piece relies on real-time polyphonic audio-to-score alignment using online dynamic time warping. The visualisation will be presented at a concert of the Danish String Quartet, performing the last piece of The Art of Fugue by Johann Sebastian Bach.
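The score-following at the heart of this system relies on online dynamic time warping. The following plain (offline) DTW sketch conveys the core recurrence under toy assumptions; the concert system works incrementally with a bounded search window:

```python
import numpy as np

def dtw(cost):
    """Accumulate a pairwise cost matrix (score frames x audio frames)
    and backtrack the optimal alignment path."""
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],      # insertion
                                                 acc[i, j - 1],      # deletion
                                                 acc[i - 1, j - 1])  # match
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]

# Toy example: chroma-like features for a 'score' and a slower 'performance'.
score = np.eye(4)
perf = np.repeat(np.eye(4), 2, axis=0)   # each event lasts twice as long
cost = 1.0 - score @ perf.T              # cosine-style mismatch cost
print(dtw(cost))                         # each score frame maps to two audio frames
```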
-
Lartillot, Olivier
(2019).
A comprehensive framework for computational music analysis.
Abstract
During this presentation, Dr. Olivier Lartillot will give an overview of MIRtoolbox - a Matlab application enabling the extraction of a large range of audio and musical descriptions from recordings. MIRtoolbox is designed to be easy to use both for teaching at any level and for advanced research in musicology, signal analysis and music cognition. One initial aim of MIRtoolbox was to study the relationship between musical features and emotions evoked by music. MIRtoolbox focuses on signal-processing-based approaches that offer limited understanding of music. Lartillot is also developing computational methods for the analysis of notated music, starting from motivic analysis and aiming at building a comprehensive framework where audio and score are combined together.
-
C?mara, Guilherme Schmidt; Nymoen, Kristian; Lartillot, Olivier & Danielsen, Anne
(2019).
Timing is Everything... Or is it? Part I: Effects of Instructed Timing and Reference on Guitar and Bass Sound in Groove Performance.
-
Lartillot, Olivier
(2019).
Computational analysis of tempo and metre: from signal processing to cognitive musicology.
Abstract
Computational models for the analysis of tempo and metre and for the tracking of beats have made significant progress during the last decades. I first present a concise overview of the state of the art. Until recently, classical approaches were based on signal processing, with the integration of heuristics grounded in assumptions about music perception and cognition. The standard approach is first to detect percussive events through the construction of an accentuation curve, followed by periodicity detection and then the construction and tracking of metre (a minimal sketch of the periodicity-detection step follows this abstract). Because rhythmic emphasis can develop on various metrical levels across time, it is necessary to track the metrical structure on multiple levels. I show the benefit of such detailed analysis using a model I have developed, which obtained one of the highest scores in the MIREX tempo estimation competition.
New approaches based on deep learning have achieved impressive progress and have largely surpassed signal-processing-based approaches (including mine) in the recent yearly editions of MIREX. One limitation of these approaches, at least at their current stage, is that they appear as black boxes able to imitate a particular behaviour for which they were trained on particular examples. As such, they offer little insight into the cognitive mechanisms underlying the perception of metre.
I will discuss the limitations of signal processing approaches and highlight the complexity of musical structure. Pulsation in music is not always expressed through a periodic repetition of percussive events, but may emerge from a subtle propagation of motivic or harmonic structures. I present an approach under development that models the different components of music analysis and combines them, further extending Lerdahl and Jackendoff's vision. Motivic repetition, which plays a core role, is also one of the dimensions that is most difficult to model and automate.
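The accentuation-curve-plus-periodicity pipeline sketched above can be illustrated in a few lines. This is a minimal, assumption-laden Python example (not the MIREX submission itself), where tempo is read off the autocorrelation of an onset-strength curve:

```python
import numpy as np

def estimate_tempo(accentuation, sr=100, bpm_range=(40, 200)):
    """Estimate tempo from an accentuation (onset-strength) curve via
    autocorrelation: the lag with maximal correlation inside the
    plausible beat-period range is taken as the beat period."""
    x = accentuation - accentuation.mean()
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]  # lags >= 0
    lo = int(sr * 60 / bpm_range[1])                   # shortest period
    hi = int(sr * 60 / bpm_range[0])                   # longest period
    lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * sr / lag

# Toy accentuation curve: impulses every 0.5 s (120 BPM) at 100 frames/s.
curve = np.zeros(1000)
curve[::50] = 1.0
print(estimate_tempo(curve))   # ~120 BPM
```

Tracking a complete metrical structure, as in the model above, goes further: it inspects several autocorrelation peaks jointly (beat, half-beat, bar) rather than committing to the single strongest lag.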
-
Sørbø, Solveig Isis; Bentham, John; Watson, Pia; Lartillot, Olivier; Gonzalez Sanchez, Victor Evaristo & González, María Isabel
(2019).
Aether Trouble.
Abstract
Music video for "Aether Trouble" by PYSJ.
With:
Pia Watson
Víctor González (RITMO*)
Torgeir Koppang (PYSJ)
Solveig Sørbø (PYSJ)
Cinematography:
John Bentham
Camera Operators:
Morten Malerstuen (camera and additional cinematography)
Kristoffer Haugen
Lighting:
Nicholas Blakstad Andresen
Morten Malerstuen
Breath data collection (using FLOW* by SweetZpot):
Víctor González, RITMO*
Visualization of breath data and audio:
MIRAGE* and MIRtoolbox
by Olivier Lartillot, RITMO*
Editing:
María Isabel González
Story / Concept:
Solveig Sørbø
Olivier Lartillot
Pia Watson
Directed by:
John Bentham
Solveig Sørbø
Consultant:
Nicholas Blakstad Andresen
Produced by:
Solveig Sørbø
Extras:
The dogs Vips and Willy
Special thanks:
Sagar Sen
Naín Mendoza Fonseca
Turid Svensøy
SweetZpot
RITMO*
Workaway.info
Filter Musikk
Thanks to F21 for letting us use their studio
MUSIC:
Find the audio track on your platform of choice: https://fanlink.to/aether
Aether Trouble by PYSJ
Written by: Solveig Sørbø
Performed by PYSJ
Solveig Sørbø
Torgeir Koppang
Andreas R?dland Haga
Stig Frogner
Mixing:
Stig Frogner
Mastering:
Bjørn Engelmann / The Cutting Room
Production:
Solveig Sørbø
* RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo.
This work was partially supported by the Research Council of Norway through its Centres of Excellence scheme, project number 262762.
* Learn more about the visualization technologies on the MIRAGE project website: http://bit.ly/MirageProject
* Data was collected using FLOW sensors from SweetZpot. For more information about this technology see: https://www.sweetzpot.com/flow
-
Haugen, Mari Romarheim; Johansson, Mats Sigvard & Lartillot, Olivier
(2019).
Investigating rhythm production and perception in traditional Scandinavian dance music in non-isochronous meter: A case study of Norwegian telespringar.
-
Lartillot, Olivier
(2019).
MiningSuite: A Comprehensive Matlab Framework for Signal, Audio and Music Analysis, Articulating Audio and Symbolic Approaches.
-
Lartillot, Olivier & Grandjean, Didier
(2019).
Tempo and Metrical Analysis by Tracking Multiple Metrical Levels Using Autocorrelation.
-
Lartillot, Olivier
(2018).
The MiningSuite 0.10 (first beta version).
Abstract
The MiningSuite is a free open-source and comprehensive Matlab framework for the analysis of signals, audio recordings, music recordings and music scores (and soon video, and more to come later) under a common modular framework.
The MiningSuite adds a syntactic layer on top of Matlab, so that advanced operations can be specified using a simple and adaptive syntax. This makes the Matlab environment very easy to use for beginners, and at the same time allows power users to design complex workflows in a modular and concise way through a simple assemblage of operators featuring a large set of options.
The MiningSuite is an extension of MIRtoolbox, a Matlab toolbox that has become a reference tool in the Music Information Retrieval (MIR) research and academic community.
-
Lartillot, Olivier
(2018).
MIRtoolbox 1.7.1.
Abstract
MIRtoolbox offers an integrated set of functions written in Matlab, dedicated to the extraction from audio files of musical features such as tonality, rhythm, structures, etc. The objective is to offer an overview of computational approaches in the area of Music Information Retrieval. The design is based on a modular framework: the different algorithms are decomposed into stages, formalized using a minimal set of elementary mechanisms. These building blocks form the basic vocabulary of the toolbox, which can then be freely articulated in new, original ways. These elementary mechanisms integrate all the variants proposed by alternative approaches, including new strategies we have developed, which users can select and parametrize. This synthetic digest of feature extraction tools capitalises on the originality offered by all the alternative strategies. In addition to the basic computational processes, the toolbox also includes higher-level musical feature extraction tools, whose alternative strategies, and their multiple combinations, can be selected by the user.
The choice of an object-oriented design allows great flexibility with respect to the syntax: the tools are combined to form a set of methods that correspond to basic processes (spectrum, autocorrelation, frame decomposition, etc.) and musical features. These methods can adapt to a wide range of input objects. For instance, the autocorrelation method behaves differently on an audio signal than on an envelope, and can adapt to frame decompositions.
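This adaptive-operator design can be illustrated outside Matlab. The following hypothetical Python analogue (my own sketch, not MIRtoolbox's API) shows one autocor operator whose behaviour depends on whether it receives a raw audio signal or an amplitude envelope:

```python
from dataclasses import dataclass
from functools import singledispatch
import numpy as np

@dataclass
class Audio:      # raw waveform
    data: np.ndarray
    sr: int

@dataclass
class Envelope:   # amplitude envelope, much lower sampling rate
    data: np.ndarray
    sr: int

def _normalized_autocor(sig):
    sig = sig - sig.mean()
    ac = np.correlate(sig, sig, mode='full')[len(sig) - 1:]
    return ac / ac[0] if ac[0] else ac

@singledispatch
def autocor(x):
    raise TypeError(f"no autocor defined for {type(x).__name__}")

@autocor.register
def _(x: Audio):
    # On raw audio, keep lags in pitch-relevant periods (50-2000 Hz).
    ac = _normalized_autocor(x.data)
    return ac[x.sr // 2000: x.sr // 50]

@autocor.register
def _(x: Envelope):
    # On an envelope, keep lags in beat-relevant periods (40-200 BPM).
    ac = _normalized_autocor(x.data)
    return ac[int(x.sr * 60 / 200): int(x.sr * 60 / 40)]

# A 2 Hz half-wave-rectified envelope at 100 frames/s: beat period 50 frames.
env = Envelope(np.clip(np.sin(2 * np.pi * 2 * np.arange(500) / 100), 0, None), 100)
print("beat-lag peak at", np.argmax(autocor(env)) + int(100 * 60 / 200), "frames")
```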
-
Lartillot, Olivier & Jensenius, Alexander Refsum
(2018).
SoundTracer.
Abstract
Discover traditional Norwegian music by drawing gestures with your iPhone or iPad.
First draw a gesture by moving your iPhone or iPad up and down, to indicate an ascending or descending melody. A simple sound is played while you move your device: its pitch rises when you move up and falls when you move down. You can also indicate the beginning of a new note by abruptly moving the phone up or down (a blue vertical line indicates the note position). It is also possible to indicate a particular note without specifying its pitch by performing a quick movement down and up (a red vertical line indicates the note position).
Once you have finished your gesture (by tapping once), the piece of music containing the melodic gesture closest to yours is presented. The title and genre of that piece of music are indicated, as well as the performer and the district in Norway where it comes from. A melodic curve of the specific melodic gesture from the song is displayed below your own gesture. The matching between your gesture and the melody is based on a temporal alignment of their contours: the ascending parts of your gesture are aligned with the ascending parts of the melody, and likewise for the descending parts (a simplified sketch of such contour matching follows this entry). The temporal note locations you have specified (blue and red vertical lines) are also tentatively aligned with corresponding note changes in the melody. Some parts may be skipped; these are shown in red in the curves. When the music is played, a cursor moves through both curves so that you can follow the music while looking at the gestures.
SoundTracer uses Apple's ARKit technology, solely in order to precisely track the location of your iPhone or iPad. For that reason, there is no actual Augmented Reality, and the camera is solely used to improve the precision of the tracking, without displaying the actual frames of the camera.
For the moment, the catalogue of music is composed of around 50 pieces, and only the initial part of the melody is considered. The catalogue includes a few tunes played on the traditional Norwegian Hardanger fiddle, and songs sung a cappella. SoundTracer will be progressively improved with more music.
SoundTracer is an innovation project led by Alexander Refsum Jensenius, supported by the University of Oslo and developed in collaboration with the National Library of Norway. More information here: https://www.hf.uio.no/ritmo/english/projects/all/soundtracer/
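The contour matching described above might be sketched as follows. This is a deliberately crude stand-in (my own function names and scoring), since the app also performs temporal alignment and allows parts to be skipped:

```python
import numpy as np

def contour_segments(pitch):
    """Summarise a pitch curve as alternating (direction, extent) segments,
    direction +1 for ascending and -1 for descending."""
    segments = []
    for step in np.sign(np.diff(pitch)):
        if step == 0:
            continue
        if segments and segments[-1][0] == step:
            segments[-1][1] += 1
        else:
            segments.append([int(step), 1])
    return segments

def contour_distance(a, b):
    """Crude contour dissimilarity: compare segments pairwise and penalise
    direction mismatches and extent differences."""
    sa, sb = contour_segments(a), contour_segments(b)
    score = abs(len(sa) - len(sb)) * 2.0
    for (da, ea), (db, eb) in zip(sa, sb):
        score += (da != db) * 2.0 + abs(ea - eb) / max(ea, eb)
    return score

gesture = np.array([0, 1, 2, 3, 2, 1, 2, 3, 4])      # up, down, up
tune_a = np.array([60, 62, 64, 62, 60, 62, 64, 65])  # similar shape
tune_b = np.array([64, 62, 60, 62, 64, 62, 60])      # inverted shape
print(contour_distance(gesture, tune_a))  # small: good match
print(contour_distance(gesture, tune_b))  # large: poor match
```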
-
Lartillot, Olivier; Nymoen, Kristian & Danielsen, Anne
(2018).
Prediction of P-centers from audio recordings.
-
Lartillot, Olivier
(2018).
MoCap Toolbox in the MiningSuite.
-
Lartillot, Olivier
(2018).
Computational sound/music/gesture analysis and application to gesture-based query in music catalogue.
Abstract
In the first part of this talk, I will give a short and broad overview of the MiningSuite, a Matlab toolbox that combines sound, music and gesture analysis. I will give a quick tour of the various types of sound and music analyses that can be carried out using the toolbox, covering a large range of musical dimensions such as timbre, rhythm, harmony or structure.
The MiningSuite, integrating previous toolboxes such as MIRtoolbox and MIDI toolbox, can be used both for the analysis of audio recordings and of “symbolic” representations such as MIDI files. I will also present the current integration of motion capture and gesture analysis (from the MoCap Toolbox), as well as other sensor data such as breathing. The benefit of articulating these different types of analyses into a single framework will be demonstrated.
In the second part, I will present a project aimed at automatically extracting melodic gestures from a catalogue of folk music recordings held by the National Library of Norway. While melodic lines can easily be extracted from a cappella songs, the task is more challenging for other types of music, such as Hardanger fiddle music. In such cases, we need to automatically transcribe the recordings and track melodic voices throughout the counterpoint of each composition. I will also present an iPhone app that lets you draw a gesture in the air with the phone and find pieces of music from the catalogue that are characterised by a similar musical gesture.
-
Lartillot, Olivier
(2018).
Computer-based musicological analysis of audio recordings and digitised scores.
Abstract
This presentation offers a panorama of the various computer-based music analysis methods designed and/or developed by O. Lartillot, covering both the analysis of audio recordings and of 'symbolic' representations (i.e., digitised scores or MIDI sequences). I will also briefly explain the underlying musicological and epistemological issues. The MIRtoolbox toolbox produces a fairly broad range of audio and musical descriptions of sound files. It has been used, for example, to study timbre in non-European music, in collaboration with Stéphanie Weisser. I have also designed new techniques for melodic and motivic analysis of symbolic representations, with applications notably to the analysis of traditional Arabic and Turkish music.
-
Lartillot, Olivier
(2018).
The MiningSuite - a Matlab toolbox for signal, audio and music analysis.
Abstract
This course gives an introduction to the analysis of signals, audio recordings, music recordings and music scores with the MiningSuite, a free open-source Matlab toolbox.
The MiningSuite covers these different types of analysis (with video and motion trajectories to come) under a common modular framework. It adds a syntactic layer on top of Matlab, so that advanced operations can be specified using a simple and adaptive syntax. This makes the Matlab environment very easy to use for beginners, and at the same time allows power users to design complex workflows in a modular and concise way through a simple assemblage of operators featuring a large set of options.
The MiningSuite is an extension of MIRtoolbox, a Matlab toolbox that has become a reference tool in the Music Information Retrieval (MIR) research and academic community.
In this course, you will first learn the basic principles of the MiningSuite’s syntax, which enables you to quickly design analytic processes, from simple operators to complex pipelines. You will get a broad overview of the analytic tools available, from basic signal processing to auditory modelling to music analysis. All concepts will be explained from the ground up, so that no prior expertise in signal processing or in music analysis is required.
-
Christodoulou, Anna-Maria; Anagnostopoulou, Christina & Lartillot, Olivier
(2022).
Computational Analysis of Greek folk music of the Aegean islands.
National and Kapodistrian University of Athens.
-
Lartillot, Olivier
(2018).
mirtempo 1.8: Tempo Estimation By Tracking a Complete Metrical Structure.
Music Information Retrieval Evaluation eXchange (MIREX).
Full text in research archive
Abstract
This paper describes the two variants OL1 and OL2 of the model submitted to the MIREX 2018 tempo estimation task, and compares them with my previous submission for MIREX 2013.