Carpentier G.,IRCAM |
Assayag G.,IRCAM |
Journal of Heuristics | Year: 2010
In this paper, a computational approach to musical orchestration is presented. We consider orchestration as the search for relevant sound combinations within large databases of instrument samples, and propose two cooperating metaheuristics to solve this problem. Orchestration is seen here as a particular case of finding optimal constrained multisets over a large ensemble with respect to several objectives. We suggest a generic and easily extensible formalization of orchestration as a constrained multiobjective search towards a target timbre, in which several perceptual dimensions are jointly optimized. We introduce Orchidée, a time-efficient evolutionary orchestration algorithm that allows the discovery of optimal solutions and favors the exploration of non-intuitive sound mixtures. We also define a formal framework for the specification of global constraints and introduce the innovative CDCSolver repair metaheuristic, which steers the search towards regions fulfilling a set of musically related requirements. An evaluation of our approach on a wide set of real orchestration problems is also provided. © Springer Science+Business Media, LLC 2009.
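The multiobjective view described in this abstract can be illustrated with a minimal Pareto-dominance sketch: candidate sound mixtures are compared on several perceptual distances to the target timbre, and only non-dominated candidates survive. The objective values below are hypothetical illustrations, not Orchidée's actual features or API.

```python
def dominates(a, b):
    """True if candidate `a` Pareto-dominates `b`: no worse on every
    objective (lower distance is better) and strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the non-dominated candidates (the Pareto front)."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Each candidate mixture is scored on, e.g., (brightness distance,
# roughness distance) to the target timbre -- hypothetical numbers.
candidates = [(0.2, 0.9), (0.4, 0.4), (0.9, 0.1), (0.5, 0.5)]
front = pareto_front(candidates)
print(front)  # → [(0.2, 0.9), (0.4, 0.4), (0.9, 0.1)]
```

Here (0.5, 0.5) is dominated by (0.4, 0.4) and is pruned; an evolutionary search of the kind the paper describes would iterate variation and selection over such fronts.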
International Computer Music Conference, ICMC 2010 | Year: 2010
This paper presents Business Model, a recent piece for violin, cello and live electronics. It focuses on the compositional elements and live-electronic techniques employed. Interpretation and live-interaction issues with respect to the written parts are discussed, and the main ideas and underlying concepts of the piece are presented.
Veaux C.,Ircam |
Veaux C.,University of Edinburgh |
Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH | Year: 2011
Intonation is one of the most important factors of speech expressivity. This paper presents a conversion method for F0 contours. The F0 segments are represented with discrete cosine transform (DCT) coefficients at the syllable level. Multi-level dynamic features are added to model the temporal correlation between syllables and to constrain the F0 contour at the phrase level. Gaussian mixture models (GMM) are used to map the prosodic features between neutral and expressive speech, and the converted F0 contour is generated under the dynamic feature constraints. Experimental evaluation using a database of acted emotional speech shows the effectiveness of the proposed F0 model and conversion method. Copyright © 2011 ISCA.
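The syllable-level DCT parameterization at the heart of this abstract can be sketched in a few lines of pure Python: a per-syllable F0 segment is transformed with a DCT-II, and keeping only the first few coefficients yields a smooth low-order approximation of the contour. The F0 values are hypothetical, and the paper's full features (multi-level dynamics, GMM mapping) are not shown.

```python
import math

def dct(x):
    """Unnormalized DCT-II of a sequence x."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X, n_coeffs=None):
    """Inverse DCT-II, optionally keeping only the first n_coeffs
    coefficients (a smooth, low-order approximation of the contour)."""
    N = len(X)
    K = N if n_coeffs is None else n_coeffs
    return [X[0] / N + (2.0 / N) * sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                                       for k in range(1, K))
            for n in range(N)]

# Hypothetical per-syllable F0 samples in Hz.
f0 = [120.0, 135.0, 150.0, 148.0, 130.0, 118.0]
coeffs = dct(f0)
smooth = idct(coeffs, n_coeffs=3)  # 3 coefficients: smooth syllable shape
full = idct(coeffs)                # all coefficients: exact reconstruction
print([round(v, 1) for v in full])  # recovers the original contour
```

A GMM-based conversion would then map the low-order coefficient vectors from neutral to expressive speech before resynthesizing the contour.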
Proceedings - 40th International Computer Music Conference, ICMC 2014 and 11th Sound and Music Computing Conference, SMC 2014 - Music Technology Meets Philosophy: From Digital Echos to Virtual Ethos | Year: 2014
In this paper I present some aesthetic and technical aspects of my work on the real-time composition of sound environments (soundscapes and vocalscapes) through two recent works. "Geografia Sonora" is a sound and video installation on the theme of the Mediterranean sea: a navigation through an archipelago of "sound islands" of singing/speaking voices, sound signals, and natural and mechanical sounds. "Vocalscapes on Walt Whitman" is an electroacoustic composition exploring the idea of "poetry as vocalscape" and as a "geography" of voices and performances, based on the recordings of fifteen talkers. Both works were composed and spatialized in real time by a "sound navigation map", a virtual score built within Max/MSP, the Spatialisateur and Antescofo. Through these two works I show: 1) by which means a vast body of sound material can be organized and processed/composed automatically so as to produce a sound environment in real time through a coherent open virtual score; and 2) how such a sound environment may be seen simultaneously as a sound composition, as the trace of a shared experience, as the record of poetry and vocal performance, or as the soundmark of a community and of a land. Copyright: © 2014 Georgia Spiropoulos.
Proceedings of the 8th Sound and Music Computing Conference, SMC 2011 | Year: 2011
This paper presents the research and development realized for an artistic project called Luna Park. This work is deeply connected, at various levels, to the paradigm of concatenative synthesis, both in its form and in the processes it employs. Within a real-time programming environment, synthesis engines and prosodic transformations are manipulated, controlled and activated by gesture, via accelerometers built for the piece. This paper describes the sensors, the real-time audio engines, and the mapping that connects these two parts. The world premiere of Luna Park took place in Paris, in IRCAM's Espace de Projection, on June 10th, 2011, during the AGORA festival. © 2011 Grégory Beller et al.
Proceedings of the 2013 ICMC Conference: International Developments in Electroacoustics | Year: 2013
In this article, some advanced methods for sound hybridization by means of the theory of sound types will be shown. After a short presentation of the theory, a formal definition will be given. A framework implementing the theory will be presented in detail, with an emphasis on the modules aimed at sound transformation. Hybridization is achieved by means of two orthogonal processes called type matching (aimed at timbral transformation) and probability merging (aimed at imitating the temporal morphology). Finally, some relevant artistic applications of the proposed methods will be discussed.
Proceedings of the 9th International Conference on Digital Audio Effects, DAFx 2006 | Year: 2013
In this paper, we propose a system for the automatic estimation of the key of a music track using hidden Markov models. The front end of the system performs transient/noise reduction and tuning estimation, and then represents the track as a succession of chroma vectors over time. The characteristics of the major and minor modes are learned by training two hidden Markov models on a labeled database. Twenty-four hidden Markov models corresponding to the various keys are then derived from the two trained models. The key of a music track is estimated by computing the likelihood of its chroma sequence given each HMM. The system is evaluated with positive results on a database of European baroque, classical and romantic music. We compare the results with those obtained using a cognitive-based approach, and we also compare the chroma key profiles learned from the database to the cognitive-based ones.
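The decision rule in this abstract, scoring a chroma sequence against all 24 candidate keys and picking the best, can be illustrated with the cognitive-based baseline the paper compares against: correlating an averaged chroma vector with rotated Krumhansl-Kessler profiles. This is a stand-in for the paper's HMM likelihoods, and the chroma vector below is hypothetical.

```python
import math

# Krumhansl-Kessler key profiles (the "cognitive-based" baseline; the
# paper itself learns its profiles with HMMs on a labeled database).
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def correlate(u, v):
    """Pearson correlation between two equal-length vectors."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v))
    return num / den

def estimate_key(mean_chroma):
    """Score all 24 keys by correlating the averaged chroma vector with
    each rotated profile, and return the best-scoring key."""
    best = None
    for tonic in range(12):
        for mode, profile in (("major", MAJOR), ("minor", MINOR)):
            rotated = profile[-tonic:] + profile[:-tonic]  # shift tonic
            score = correlate(mean_chroma, rotated)
            if best is None or score > best[0]:
                best = (score, NOTES[tonic] + " " + mode)
    return best[1]

# Hypothetical averaged chroma strongly weighting C, E, G (a C major triad).
chroma = [1.0, 0.1, 0.2, 0.1, 0.8, 0.3, 0.1, 0.9, 0.1, 0.2, 0.1, 0.2]
print(estimate_key(chroma))  # → C major
```

The HMM version replaces the correlation score with the likelihood of the whole chroma sequence under each key model, which additionally captures chord-transition dynamics.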
ACM International Conference Proceeding Series | Year: 2015
The Spatial Sampler and the Sound Space are new musical instruments arising from the Synekine Project. This project involves performative and scientific research into the creation of new ways to express ourselves. Just as a sampler is an empty box to be filled with sound files, the Spatial Sampler turns the physical space around the performer into a key-area in which to place and play his own voice samples. With the Sound Space, a performer literally spreads his voice around him through gesture, creating an entire sound scene while playing with it. This gestural association of time and space also makes the two instruments suitable for dancers and body/movement artists, and for various other applications.
Proceedings of the 11th International Society for Music Information Retrieval Conference, ISMIR 2010 | Year: 2010
Computational models of beat tracking of musical audio have been well explored; however, such systems often make "octave errors", identifying the beat period at double or half the rate actually present in the music. A method is described to detect whether octave errors have occurred in beat tracking. Following an initial beat tracking estimation, a feature vector of metrical profile separated by spectral subbands is computed. A measure of sub-beat quaver (1/8th-note) alternation is used to compare half-time and double-time interpretations against the initial beat estimate and to indicate a likely octave error. This error estimate can then be used to re-estimate the beat rate. The performance of the approach is evaluated against the RWC database, showing successful identification of octave errors for an existing beat tracker. Using the octave error detector together with the existing beat tracking model improved beat tracking, reducing octave errors to 43% of the previous error rate. © 2010 International Society for Music Information Retrieval.
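The octave-error check in this abstract can be caricatured in a few lines: sample an onset-strength envelope at quaver positions for the estimated beat period and for its half- and double-time alternatives, and prefer the period whose quavers alternate strong/weak most clearly. The envelope, the frame-based periods, and the contrast score are simplified hypothetical stand-ins for the paper's subband metrical profiles.

```python
def quaver_alternation(envelope, beat_period):
    """Mean strong/weak contrast between on-beat and off-beat quavers,
    with the quaver taken as half the candidate beat period (in frames)."""
    half = beat_period // 2
    on = envelope[::beat_period]
    off = envelope[half::beat_period]
    n = min(len(on), len(off))
    return sum(on[i] - off[i] for i in range(n)) / n

def check_octave(envelope, beat_period):
    """Re-estimate the beat period among {half, same, double} by picking
    the candidate with the clearest quaver alternation."""
    candidates = [beat_period // 2, beat_period, beat_period * 2]
    return max(candidates, key=lambda p: quaver_alternation(envelope, p))

# Synthetic onset-strength envelope: a strong beat every 4 frames with a
# weaker quaver in between, so the true beat period is 4 frames.
pattern = [1.0, 0.0, 0.3, 0.0]
envelope = pattern * 8
print(check_octave(envelope, 8))  # → 4 (double-time estimate corrected)
```

In the paper, the comparison is made on subband metrical profiles rather than a raw envelope, but the principle of arbitrating between tempo octaves via quaver alternation is the same.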
ACM International Conference Proceeding Series | Year: 2014
The Synekine project involves performative and scientific research into the creation of new ways to express ourselves. The neologism "Synekinesia" reflects our capacity to associate two or several sensorimotor modalities. In the Synekine project, the performers develop a fusional language involving vocal gesture, hand gesture and movement in space. An interactive environment made of sensors and other human-computer interfaces augments this language.