

Zurich, Switzerland

Pollow M.,RWTH Aachen | Nguyen K.-V.,French National Center for Scientific Research | Nguyen K.-V.,Sonic Emotion | Warusfel O.,French National Center for Scientific Research | And 4 more authors.
Acta Acustica united with Acustica | Year: 2012

The head-related transfer function (HRTF) characterizes the transfer path of sound from a certain position in space to the ear of a listener. Traditionally, HRTFs have been measured for relatively distant sources at discrete positions on a surrounding sphere. It is well known from the literature that the HRTF changes significantly for sources in the proximal region, i.e., at distances less than one meter from the head. In practice, the measurement of near-field HRTFs requires a significant number of measurement points at different distances from the head; it may thus not be feasible for many projects. An essential question is whether HRTFs can be calculated for any arbitrary position in space from measurements on a single radius. HRTF data can be represented in the spherical wave spectral domain, thus turning the range extrapolation problem into an acoustic radiation problem. The range extrapolation is then calculated using the corresponding wave propagation terms. This paper presents a comprehensive study of the spherical acoustic method for calculating HRTFs at arbitrary field points. Near-field HRTFs are calculated for two dummy heads and then compared to other range extrapolation methods known from the literature. Particular attention is paid to the effect of incomplete spherical data sets on the reproduction accuracy, and different regularization methods are discussed. The results are compared to near-field measurements for these dummy heads and numerical boundary-element method (BEM) simulations thereof. The spherical acoustic method shows good agreement with measurement data, thus providing an efficient method for near-field binaural synthesis using already existing HRTF data sets. © S. Hirzel Verlag · EAA.
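The range extrapolation step described above can be sketched numerically. In the spherical wave spectral domain, an order-n coefficient measured at radius r_meas is mapped to radius r_target by a ratio of spherical Hankel functions (shown here for the exterior radiation problem with an assumed e^{+iωt} convention; function and variable names are illustrative, not from the paper). The growth of the high-order terms at low frequency also illustrates why the regularization discussed in the paper is needed:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def spherical_hankel2(n, x):
    # Spherical Hankel function of the second kind: h_n^(2)(x) = j_n(x) - i*y_n(x)
    return spherical_jn(n, x) - 1j * spherical_yn(n, x)

def range_term(n, k, r_meas, r_target):
    # Propagation term mapping the order-n spherical wave spectrum coefficient
    # from the measurement radius to the target radius (exterior problem)
    return spherical_hankel2(n, k * r_target) / spherical_hankel2(n, k * r_meas)

c = 343.0                 # speed of sound [m/s]
f = 200.0                 # frequency [Hz]
k = 2.0 * np.pi * f / c   # wavenumber [1/m]

# Extrapolate from a 2 m measurement radius into the proximal region (0.5 m)
terms = [range_term(n, k, 2.0, 0.5) for n in range(10)]
```

At order 0 the term magnitude reduces to the simple distance ratio (here 2.0/0.5 = 4), while high orders blow up at low frequencies, which is where regularization becomes essential.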

Corteel E.,Room Acoustics Team | Corteel E.,Sonic Emotion
Proceedings of the 9th International Conference on Digital Audio Effects, DAFx 2006 | Year: 2006

Wave Field Synthesis (WFS) is a physics-based sound reproduction technique. It relies on linear arrays of regularly spaced omnidirectional loudspeakers. A fundamental limitation of WFS is that the synthesis remains correct only up to a corner frequency referred to as the spatial aliasing frequency. This paper addresses irregular spacing of loudspeaker arrays for WFS. Adapted driving functions are defined, and new formulations of the spatial aliasing frequency are proposed. It is shown that the use of logarithmically spaced loudspeaker arrays can significantly increase the spatial aliasing frequency for non-focused virtual sources.
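For regularly spaced arrays, the classical worst-case spatial aliasing frequency from the WFS literature depends on the loudspeaker spacing and the maximum angle of incidence of the reproduced wavefront. The refined formulations proposed in the paper are not reproduced here; this is only a sketch of the classical bound (the function name and default angle are assumptions):

```python
import math

def aliasing_frequency(spacing_m, c=343.0, theta_max_deg=90.0):
    # Classical worst-case spatial aliasing frequency for a regularly spaced
    # linear loudspeaker array: f_al = c / (2 * dx * sin(theta_max)).
    # theta_max is the maximum incidence angle of the synthesized wavefront.
    return c / (2.0 * spacing_m * math.sin(math.radians(theta_max_deg)))

# A typical 15 cm spacing limits correct synthesis to roughly 1.1 kHz
# in the worst case (grazing incidence)
f_al = aliasing_frequency(0.15)
```

Halving the spacing doubles this bound, which is why irregular (e.g., logarithmic) spacing strategies that raise the effective aliasing frequency without adding loudspeakers are attractive.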

Rébillat M.,CNRS LIMSI | Rébillat M.,Ecole Polytechnique - Palaiseau | Hennequin R.,Orange S.A. | Corteel E.,Sonic Emotion | Katz B.F.G.,CNRS LIMSI
Journal of Sound and Vibration | Year: 2011

In a number of vibration applications, systems under study are slightly nonlinear. It is thus of great importance to have a way to model and to measure these nonlinearities in the frequency range of use. A cascade of Hammerstein models conveniently allows one to describe a large class of nonlinearities. A simple method based on a phase property of exponential sine sweeps is proposed to identify the structural elements of such a model from only one measured response of the system. Mathematical foundations and practical implementation of the method are discussed. The method is afterwards validated on simulated and real systems. Vibrating devices such as acoustical transducers are well approximated by cascades of Hammerstein models. The harmonic distortion generated by those transducers can be predicted by the model over the entire audio frequency range for any desired input amplitude. Agreement with more time-consuming classical distortion measurement methods was found to be good. © 2010 Elsevier Ltd. All rights reserved.
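The phase property exploited by the method is that of the exponential (logarithmic) sine sweep, in which the instantaneous frequency grows exponentially from f1 to f2; after deconvolution, the response of each harmonic order then appears at a fixed, order-dependent time advance, which is what allows the Hammerstein branches to be separated from a single measurement. A minimal sketch of the excitation signal (variable names are illustrative; the identification itself is not reproduced here):

```python
import numpy as np

def exponential_sweep(f1, f2, duration, fs):
    # Exponential sine sweep: the instantaneous frequency rises from f1 at
    # t = 0 to f2 at t = duration. Phase: phi(t) = 2*pi*f1*L*(exp(t/L) - 1),
    # with L = duration / ln(f2/f1), so d(phi)/dt / (2*pi) = f1 * exp(t/L).
    t = np.arange(int(duration * fs)) / fs
    L = duration / np.log(f2 / f1)
    return np.sin(2.0 * np.pi * f1 * L * (np.exp(t / L) - 1.0))

# 5 s sweep from 20 Hz to 20 kHz at a 48 kHz sampling rate
sweep = exponential_sweep(20.0, 20000.0, 5.0, 48000)
```

Convolving the measured system response with the time-reversed, amplitude-compensated sweep (the inverse filter) yields impulse responses for each harmonic order, separated in time.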

Andre C.R.,University of Liege | Andre C.R.,French National Center for Scientific Research | Corteel E.,Sonic Emotion | Embrechts J.-J.,University of Liege | And 2 more authors.
International Journal of Human Computer Studies | Year: 2014

While 3D cinema is becoming increasingly established, little effort has been focused on the general problem of producing a 3D sound scene spatially coherent with the visual content of a stereoscopic-3D (s-3D) movie. The perceptual relevance of such spatial audiovisual coherence is of significant interest. In this paper, a subjective experiment is carried out in which an angular error between an s-3D video and a spatially accurate sound reproduced through Wave Field Synthesis (WFS) is simulated. The psychometric curve is measured with the method of constant stimuli, and the threshold for bimodal integration is estimated. The impact of the presence of background noise is also investigated. A comparison is made between the case without any background noise and the case with an SNR of 4 dBA. Estimates of the thresholds and the slopes, as well as their confidence intervals, are obtained for each level of background noise. When background noise was present, the point of subjective equality (PSE) was higher (19.4° instead of 18.3°) and the slope was steeper (-0.077 instead of -0.062 per degree). Because of the overlap between the confidence intervals, however, it was not possible to statistically differentiate between the two levels of noise. The implications for the sound reproduction in a cinema theater are discussed. © 2013 Elsevier Ltd.
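The PSE and slope estimates above come from fitting a psychometric function to the proportions of "coherent" responses collected with the method of constant stimuli. A minimal sketch of such a fit, using a logistic psychometric function and entirely illustrative response data (the numbers below are NOT from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, pse, slope):
    # Logistic psychometric function: probability of judging audio and video
    # as coherent, as a function of the angular error x (degrees). 'pse' is
    # the point of subjective equality (p = 0.5); a negative slope means
    # coherence judgements decrease as the angular error grows.
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Hypothetical response proportions at the tested angular errors (illustrative)
angles = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
p_coherent = np.array([0.98, 0.95, 0.85, 0.65, 0.45, 0.20, 0.08])

(pse, slope), _ = curve_fit(psychometric, angles, p_coherent, p0=(18.0, -0.1))
```

In practice the paper additionally reports confidence intervals on both parameters, which is what allowed (or here, prevented) a statistical distinction between the two noise conditions.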

Sonic Emotion | Date: 2013-09-25

A method of playing back a multichannel audio signal via a playback device comprising a plurality of loudspeakers arranged at fixed locations on the device, these loudspeakers defining a spatial window for sound playback relative to a reference spatial position. For at least one sound object extracted from the signal, the method estimates whether the object is diffuse or localized and estimates its position relative to the window. The audio signal is then played back via the loudspeakers of the device, during which a playback treatment is applied to each sound object to be played back via at least one loudspeaker of the device. This treatment depends on the diffuse or localized nature of the object and on its position relative to the window, and includes creating at least one virtual source outside the window from the loudspeakers of the device when the object is estimated to be diffuse or positioned outside the window.
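The core routing decision the abstract describes can be sketched as follows. All names, the window parameterization, and the threshold are assumptions for illustration; the patent does not specify this representation:

```python
from dataclasses import dataclass

@dataclass
class SoundObject:
    # Illustrative sound object carrying the two properties the abstract
    # mentions: whether it is diffuse, and its azimuth relative to the
    # centre of the spatial window (in degrees; names are assumptions).
    diffuse: bool
    azimuth_deg: float

def needs_virtual_source(obj, window_half_width_deg=30.0):
    # Sketch of the decision rule: a virtual source outside the spatial
    # window is created when the object is diffuse OR lies outside the window.
    outside = abs(obj.azimuth_deg) > window_half_width_deg
    return obj.diffuse or outside
```

For example, a localized object at 45° azimuth falls outside a ±30° window and would be routed to a virtual source, while a localized object at 10° would be played back within the window directly.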
