Papadopoulos H., University of Victoria | Peeters G., Sound Analysis Synthesis Team
IEEE Transactions on Audio, Speech and Language Processing

In this paper, we present a method for estimating the progression of musical key from an audio signal. We address the problem of local key finding by investigating the possible combination and extension of different previously proposed approaches for global key estimation. In this work, key progression is estimated from the chord progression. Specifically, we introduce key dependency on the harmonic and the metrical structures. A contribution of our work is that we address the problem of finding an analysis window length for local key estimation that is adapted to the intrinsic music content of the analyzed piece by introducing information related to the metrical structure in our model. Key estimation is not performed on empirically chosen segments but on segments that are expressed in relationship with the tempo period. We evaluate and analyze our results on two databases of different styles. We systematically analyze the influence of various parameters to determine factors important to our model, we study the relationships between the various musical attributes that are taken into account in our work, and we provide case study examples. © 2011 IEEE.
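As a toy illustration of windowed local key estimation (not the paper's actual model, which couples key, chords, and metrical structure), the sketch below correlates per-window pitch-class histograms with rotated Krumhansl-Kessler key profiles. The fixed beat-level window length here is our simplification; the paper's contribution is precisely to adapt that window to the metrical structure.

```python
# Hedged sketch: template-based local key finding over beat-level
# pitch-class histograms. The profiles are the standard
# Krumhansl-Kessler major/minor key profiles.

MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def correlate(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def estimate_key(pc_hist):
    """Return the best-matching key label for a 12-bin pitch-class histogram."""
    best = None
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            # Rotate the profile so its tonic bin lands on `tonic`.
            rotated = profile[-tonic:] + profile[:-tonic] if tonic else profile
            score = correlate(pc_hist, rotated)
            if best is None or score > best[0]:
                best = (score, f"{NOTE_NAMES[tonic]} {mode}")
    return best[1]

def local_keys(pc_frames, window=8):
    """Estimate one key per non-overlapping window of beat-level frames."""
    keys = []
    for start in range(0, len(pc_frames) - window + 1, window):
        # Sum the per-beat histograms inside the window.
        acc = [sum(f[i] for f in pc_frames[start:start + window]) for i in range(12)]
        keys.append(estimate_key(acc))
    return keys
```

A histogram peaked on C, G, and E, for example, correlates best with the C major profile, so `estimate_key` labels the window "C major".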

Giannoulis D., Queen Mary, University of London | Stowell D., Queen Mary, University of London | Benetos E., City University London | Rossignol M., Sound Analysis Synthesis Team | And 2 more authors.
European Signal Processing Conference

An increasing number of researchers work in computational auditory scene analysis (CASA). However, a set of tasks, each with a well-defined evaluation framework and commonly used datasets, does not yet exist. Thus, it is difficult for results and algorithms to be compared fairly, which hinders research in the field. In this paper we introduce a newly launched public evaluation challenge dealing with two closely related tasks of the field: acoustic scene classification and event detection. We give an overview of the tasks involved; describe the processes of creating the dataset; and define the evaluation metrics. Finally, illustrations of results for both tasks using baseline methods applied to this dataset are presented, accompanied by open-source code. © 2013 EURASIP.
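As a rough illustration of the kind of metrics such a challenge defines (the challenge's exact metric definitions may differ, and all names below are ours), the sketch computes classification accuracy for acoustic scene classification and a frame-based F-measure for event detection:

```python
# Hedged sketch of two common evaluation metrics for these tasks.

def scene_accuracy(references, estimates):
    """Fraction of test recordings whose scene label was predicted correctly."""
    correct = sum(r == e for r, e in zip(references, estimates))
    return correct / len(references)

def frame_f_measure(ref_frames, est_frames):
    """F-measure over per-frame sets of active event labels.

    ref_frames / est_frames: lists (one entry per analysis frame) of
    sets of event labels active in that frame.
    """
    tp = fp = fn = 0
    for ref, est in zip(ref_frames, est_frames):
        tp += len(ref & est)   # events correctly marked active
        fp += len(est - ref)   # events wrongly marked active
        fn += len(ref - est)   # events missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Frame-based scoring sidesteps the question of how to match estimated event boundaries to reference ones, which is why event detection is often reported both frame-wise and event-wise.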

Papadopoulos H., Sound Analysis Synthesis Team | Peeters G., Sound Analysis Synthesis Team
IEEE Transactions on Audio, Speech and Language Processing

We present a new technique for joint estimation of the chord progression and the downbeats from an audio file. Musical signals are highly structured in terms of harmony and rhythm. In this paper, we intend to show that integrating knowledge of mutual dependencies between chords and metric structure allows us to enhance the estimation of these musical attributes. For this, we propose a specific topology of hidden Markov models that enables modelling chord dependence on metric structure. This model allows us to consider pieces with complex metric structures such as beat addition, beat deletion or changes in the meter. The model is evaluated on a large set of popular music songs from the Beatles that present various metric structures. We compare a semi-automatic model in which the beat positions are annotated, with a fully automatic model in which a beat tracker is used as a front-end of the system. The results show that the downbeat positions of a music piece can be estimated in terms of its harmonic structure and that conversely the chord progression estimation benefits from considering the interaction between the metric and the harmonic structures. © 2010 IEEE.
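The idea of a coupled chord/metre model can be caricatured as Viterbi decoding over joint (chord, beat-position) states, as in the hedged sketch below. The chord vocabulary, the deterministic beat counter, and all transition probabilities are illustrative assumptions of ours, not the paper's trained model; the key ingredient shared with the paper is that chord changes are made more likely on the downbeat.

```python
# Hedged sketch: Viterbi over joint (chord, beat-position) states.
# Beat position advances deterministically; chord changes are more
# probable at beat position 0 (the downbeat).
import math

CHORDS = ["C", "F", "G"]
BEATS_PER_BAR = 4

def states():
    return [(c, b) for c in CHORDS for b in range(BEATS_PER_BAR)]

def trans_logp(prev, cur):
    (pc, pb), (cc, cb) = prev, cur
    if cb != (pb + 1) % BEATS_PER_BAR:   # beat counter must advance
        return -math.inf
    if cc == pc:                          # chord held
        p = 0.6 if cb == 0 else 0.9
    else:                                 # chord change, likelier on downbeat
        p = (0.4 if cb == 0 else 0.1) / (len(CHORDS) - 1)
    return math.log(p)

def viterbi(obs_logp, init_logp):
    """obs_logp: list over time of dicts state -> observation log-likelihood."""
    S = states()
    best = {s: init_logp.get(s, -math.inf) + obs_logp[0][s] for s in S}
    back = []
    for t in range(1, len(obs_logp)):
        new, ptr = {}, {}
        for s in S:
            prev = max(S, key=lambda p: best[p] + trans_logp(p, s))
            new[s] = best[prev] + trans_logp(prev, s) + obs_logp[t][s]
            ptr[s] = prev
        back.append(ptr)
        best = new
    path = [max(S, key=lambda s: best[s])]   # backtrack from the best end state
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

With observations strongly favouring C for four frames and then F for four, the decoder places the C-to-F change exactly on the second downbeat, illustrating how harmony and metre constrain each other.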
