Spiropoulos G., IRCAM
Proceedings - 40th International Computer Music Conference, ICMC 2014 and 11th Sound and Music Computing Conference, SMC 2014 - Music Technology Meets Philosophy: From Digital Echos to Virtual Ethos | Year: 2014

In this paper I will present some aesthetic and technical aspects of my work on the real-time composition of sound environments (soundscapes and vocalscapes) through two recent works: "Geografia Sonora", a sound and video installation on the theme of the Mediterranean sea, a navigation through an archipelago of "sound islands" of singing/speaking voices, sound signals, and natural and mechanical sounds; and "Vocalscapes on Walt Whitman", an electroacoustic composition exploring the idea of "poetry as vocalscape" and as a "geography" of voices and performances, based on the recordings of fifteen speakers. The works have been composed and spatialized in real time by a "sound navigation map", a virtual score within Max/MSP, the Spatialisateur and Antescofo. Through these two works I will show: 1) by which means a vast body of sound material can be organized and processed/composed automatically so as to generate a sound environment in real time through a coherent open virtual score; 2) how such a sound environment may be seen simultaneously as a sound composition, as the trace of a shared experience, as a record of poetry and vocal performance, or as the soundmark of a community and of a land. Copyright: © 2014 Georgia Spiropoulos.
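A minimal sketch of the distance-based mixing idea behind such a "sound navigation map": each "sound island" occupies a position in a virtual archipelago, and the mix follows the listener's position on the map. All names, the gain law, and the `SoundIsland` structure are illustrative assumptions, not the composer's actual Max/MSP patch.

```python
# Hypothetical "sound navigation map": each sound island has a position in
# the virtual archipelago, and its level decays with the listener's
# distance beyond the island's radius. The gain law is an assumption.
import math
from dataclasses import dataclass

@dataclass
class SoundIsland:
    name: str       # e.g. a singing voice, a sound signal, sea sounds
    x: float
    y: float
    radius: float   # distance at which the island is heard at full level

def island_gains(islands, lx, ly):
    """Return a linear mix weight per island for listener position (lx, ly)."""
    gains = {}
    for isl in islands:
        d = math.hypot(isl.x - lx, isl.y - ly)
        gains[isl.name] = min(1.0, isl.radius / d) if d > 0 else 1.0
    return gains

islands = [SoundIsland("voices", 0.0, 0.0, 2.0),
           SoundIsland("signals", 5.0, 1.0, 1.0)]
print(island_gains(islands, 1.0, 0.5))  # mix weights at this map position
```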


Beller G., IRCAM
Proceedings of the 8th Sound and Music Computing Conference, SMC 2011 | Year: 2011

This paper presents the research and development carried out for an artistic project called Luna Park. The work is deeply connected, at various levels, to the paradigm of concatenative synthesis, both in its form and in the processes it employs. Within a real-time programming environment, synthesis engines and prosodic transformations are manipulated, controlled and activated by gesture, via accelerometers built for the piece. The paper describes the sensors, the real-time audio engines, and the mapping that connects these two parts. The world premiere of Luna Park took place in Paris, in IRCAM's Espace de Projection, on June 10th, 2011, during the AGORA festival. © 2011 Grégory Beller et al.
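A hedged sketch of the kind of mapping layer the abstract describes, between accelerometer frames and a synthesis engine: a sudden jerk fires a discrete trigger (e.g. advancing to the next concatenated unit) while smoothed energy drives a continuous prosodic parameter. The thresholds, class name, and parameter targets are assumptions, not the piece's actual mapping.

```python
# Hypothetical gesture-to-synthesis mapping: jerk above a threshold emits a
# trigger; a one-pole low-pass of the acceleration magnitude provides a
# continuous control value. All constants are illustrative assumptions.
import math

class GestureMapper:
    def __init__(self, trigger_threshold=2.5, smoothing=0.9):
        self.threshold = trigger_threshold  # jerk level that fires a trigger
        self.smoothing = smoothing          # low-pass coefficient in [0, 1)
        self.energy = 0.0
        self.prev = (0.0, 0.0, 0.0)

    def update(self, ax, ay, az):
        """Feed one accelerometer frame; return a (trigger, control) pair."""
        jerk = math.dist((ax, ay, az), self.prev)  # frame-to-frame change
        self.prev = (ax, ay, az)
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        self.energy = self.smoothing * self.energy + (1 - self.smoothing) * mag
        return jerk > self.threshold, self.energy

mapper = GestureMapper()
trigger, control = mapper.update(0.1, 2.8, 9.8)
# trigger -> advance to the next concatenated unit
# control -> e.g. a prosodic transformation parameter
```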


Cella C.E., IRCAM
Proceedings of the 2013 ICMC Conference: International Developments in Electroacoustics | Year: 2013

In this article, some advanced methods for sound hybridization by means of the theory of sound-types will be shown. After a short presentation of the theory, a formal definition will be given. A framework implementing the theory will then be presented in detail, with an emphasis on the modules aimed at sound transformation. Hybridization is achieved by means of two orthogonal processes called type matching (aimed at timbral transformation) and probability merging (aimed at the imitation of temporal morphology). Finally, some relevant artistic applications of the proposed methods will be discussed.
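An illustrative sketch of the two orthogonal processes, under the assumption that each "sound type" can be represented as a spectral-feature centroid and that temporal morphology is captured by a Markov transition matrix over types. This is one reading of the abstract, not the actual sound-types framework.

```python
# Sketch of the two hybridization processes, under assumed representations:
# types as feature centroids, temporal morphology as a transition matrix.
import numpy as np

def type_matching(src_types, tgt_types):
    """Map each source type to its nearest target type (timbral transform).

    src_types: (n, d) array of source centroids; tgt_types: (m, d) array.
    """
    mapping = {}
    for i, s in enumerate(src_types):
        dists = np.linalg.norm(tgt_types - s, axis=1)
        mapping[i] = int(np.argmin(dists))
    return mapping

def probability_merging(trans_a, trans_b, alpha=0.5):
    """Blend two transition matrices (imitation of temporal morphology)."""
    merged = alpha * trans_a + (1 - alpha) * trans_b
    return merged / merged.sum(axis=1, keepdims=True)  # re-normalize rows
```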


Peeters G., IRCAM
Proceedings of the 9th International Conference on Digital Audio Effects, DAFx 2006 | Year: 2006

In this paper, we propose a system for the automatic estimation of the key of a music track using hidden Markov models. The front-end of the system performs transient/noise reduction and tuning estimation, and then represents the track as a succession of chroma vectors over time. The characteristics of the major and minor modes are learned by training two hidden Markov models on a labeled database. Twenty-four hidden Markov models corresponding to the various keys are then derived from the two trained models. The key of a music track is estimated by computing the likelihood of its chroma sequence given each HMM. The system is evaluated on a database of European baroque, classical and romantic music, with positive results. We compare the results with those obtained using a cognitive-based approach, and we also compare the chroma-key profiles learned from the database to the cognitive-based ones.
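A minimal sketch of this scheme using `hmmlearn` (an assumption: the paper's exact model topology, training procedure, and front-end differ, and chroma extraction is not shown). One HMM is trained per mode on labeled chroma sequences transposed to a reference tonic; the 24 key models are then derived by circularly shifting the Gaussian means along the 12 chroma bins. Spherical covariances are used here so that only the means need rotating.

```python
# Sketch of HMM-based key estimation: two mode models, 24 derived key
# models via chroma rotation, argmax of log-likelihood. Hyperparameters
# (state count, iterations) are illustrative assumptions.
import copy
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_mode_model(chroma_seqs, n_states=3):
    """Fit one HMM on chroma sequences, all transposed to a reference tonic."""
    X = np.vstack(chroma_seqs)                # stacked (T_i, 12) sequences
    lengths = [len(s) for s in chroma_seqs]
    model = GaussianHMM(n_components=n_states, covariance_type="spherical",
                        n_iter=50)
    model.fit(X, lengths)
    return model

def derive_key_models(mode_model):
    """Derive 12 transposed models by rotating the chroma means."""
    models = []
    for k in range(12):
        m = copy.deepcopy(mode_model)
        m.means_ = np.roll(mode_model.means_, k, axis=1)
        models.append(m)
    return models

def estimate_key(chroma, major_model, minor_model):
    """Return the (pitch_class, mode) maximizing the chroma log-likelihood."""
    candidates = [(k, "major", m) for k, m in
                  enumerate(derive_key_models(major_model))]
    candidates += [(k, "minor", m) for k, m in
                   enumerate(derive_key_models(minor_model))]
    best = max(candidates, key=lambda c: c[2].score(chroma))
    return best[0], best[1]
```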


Beller G., IRCAM
ACM International Conference Proceeding Series | Year: 2015

The Spatial Sampler and the Sound Space are new musical instruments developed within the Synekine Project. This project combines performative and scientific research on the creation of new ways of expressing ourselves. Just as a sampler is an empty box to be filled with sound files, the Spatial Sampler turns the physical space around the performer into a key area in which to place and play his own voice samples. With the Sound Space, a performer literally spreads his voice around him through gesture, creating an entire sound scene while playing with it. This gestural association of time and space also makes the two instruments suitable for dancers and body/movement artists, and for various other applications.
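A hypothetical sketch of the Spatial Sampler idea: the space around the performer is divided into angular zones, each acting as a sample slot, and a pointing gesture selects the zone to record into or play back. The zone count, the gesture input, and the record/play API are all assumptions, not the instrument's actual implementation.

```python
# Sketch of a spatial sampler: quantize a performer-relative pointing
# direction into angular zones, each holding one voice sample.
import math

class SpatialSampler:
    def __init__(self, n_zones=8):
        self.n_zones = n_zones
        self.slots = [None] * n_zones  # one sample buffer per zone

    def zone_of(self, dx, dy):
        """Quantize a pointing direction (relative to the performer)."""
        azimuth = math.atan2(dy, dx) % (2 * math.pi)
        return int(azimuth / (2 * math.pi) * self.n_zones) % self.n_zones

    def record(self, dx, dy, sample):
        self.slots[self.zone_of(dx, dy)] = sample

    def play(self, dx, dy):
        return self.slots[self.zone_of(dx, dy)]

sampler = SpatialSampler()
sampler.record(1.0, 0.0, "vocal_phrase_1")  # place a sample to the front
print(sampler.play(1.0, 0.1))               # pointing nearby retrieves it
```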
