The Royal Academy of Music

www.ram.ac.uk
Aarhus, Denmark

The Royal Academy of Music is a conservatoire in London, England, and a constituent college of the University of London. Founded in 1822, it is Britain's oldest degree-granting music school; it received a Royal Charter in 1830 and is a registered charity under English law. (Wikipedia)


News Article | April 17, 2017
Site: www.prweb.com

St. Mark’s Episcopal Church is proud to announce it will hold the Alamo Baroque Festival, one of San Antonio’s official Tricentennial celebrations, January 8-13, 2018. The Alamo Baroque Festival presents historically informed performances of music from the 17th and 18th centuries by internationally recognized artists.

“Music written hundreds of years ago can sound as fresh and alive today as when it was first heard,” said Dr. Joseph Causby, Organist/Choirmaster at St. Mark’s Episcopal Church. “Recorders, baroque violins and trumpets, harpsichords, and viola da gamba excite the senses now as they did centuries ago.”

The Alamo Baroque Festival will increase public awareness of the richness and variety of classical music before 1750, create opportunities for the performance of this repertoire, and educate musicians of all backgrounds, ages, and abilities in the techniques appropriate to early music. “We have the best of the best performing,” adds Causby. “The public performances, master classes, and private instruction are free and open to all.”

Who: St. Mark’s Episcopal Church, The Royal Academy of Music (London), Children’s Chorus of San Antonio (http://www.ChildrensChorusSA.org), Youth Orchestras of San Antonio (http://www.YOSA.org), Trinity University, and The University of Texas at San Antonio.

St. Mark’s Episcopal Church promotes sacred music, bringing guest musicians into the community and underwriting special musical events. Now in its 25th year of bringing music to the wider San Antonio community, St. Mark's brings together its choirs with partner organizations to offer free concerts for the benefit of all. http://www.stmarks-sa.org

St. Mark's Core Vocation: Feeding San Antonio with the Bread of Life


Alluri V., University of Jyväskylä | Toiviainen P., University of Jyväskylä | Lund T.E., Aarhus University Hospital | Lund T.E., The Royal Academy of Music | And 10 more authors.
NeuroImage | Year: 2013

We aimed to predict the temporal evolution of brain activity in naturalistic music listening conditions using a combination of neuroimaging and acoustic feature extraction. Participants were scanned using functional Magnetic Resonance Imaging (fMRI) while listening to two musical medleys, including pieces from various genres with and without lyrics. Regression models were built to predict voxel-wise brain activations and were then tested in a cross-validation setting to evaluate their robustness across stimuli. To further assess the generalizability of the models, we extended the cross-validation procedure by including another dataset, which comprised continuous fMRI responses of musically trained participants to an Argentinean tango. Individual models for the two musical medleys revealed that activations in several areas of the brain belonging to the auditory, limbic, and motor regions could be predicted. Notably, activations in the medial orbitofrontal region and the anterior cingulate cortex, relevant for self-referential appraisal and aesthetic judgments, could be predicted successfully. Cross-validation across musical stimuli and participant pools helped identify a region of the right superior temporal gyrus, encompassing the planum polare and Heschl's gyrus, as the core structure processing complex acoustic features of musical pieces from various genres, with or without lyrics. Models based on purely instrumental music were able to predict activation in the bilateral auditory cortices, parietal, somatosensory, and left hemispheric primary and supplementary motor areas. The presence of lyrics, on the other hand, weakened the prediction of activations in the left superior temporal gyrus. Our results suggest spontaneous emotion-related processing during naturalistic listening to music and provide supportive evidence for the hemispheric specialization for categorical sounds with realistic stimuli. We hereby introduce a powerful means to predict brain responses to music, speech, or soundscapes across a large variety of contexts. © 2013 Elsevier Inc.
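The cross-validated, voxel-wise modelling described in this abstract can be illustrated with a short sketch. This is a minimal illustration rather than the authors' pipeline: it assumes acoustic feature matrices and preprocessed BOLD time series are already extracted and temporally aligned, and it uses ridge regression as a generic stand-in for the regression models the abstract mentions. All names and data are illustrative.

```python
# Sketch of voxel-wise encoding with cross-stimulus validation.
import numpy as np
from sklearn.linear_model import Ridge

def fit_encoding_models(features_train, bold_train):
    """Fit one regularized linear model per voxel.

    features_train : (n_timepoints, n_features) acoustic features
    bold_train     : (n_timepoints, n_voxels) BOLD time series
    """
    model = Ridge(alpha=1.0)
    model.fit(features_train, bold_train)  # sklearn fits all voxels jointly
    return model

def cross_stimulus_score(model, features_test, bold_test):
    """Correlate predicted and observed BOLD per voxel on a held-out stimulus."""
    pred = model.predict(features_test)
    pred_z = (pred - pred.mean(0)) / (pred.std(0) + 1e-12)
    bold_z = (bold_test - bold_test.mean(0)) / (bold_test.std(0) + 1e-12)
    return (pred_z * bold_z).mean(axis=0)  # Pearson r per voxel

# Example with random stand-in data (train on medley 1, test on medley 2):
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(200, 25)), rng.normal(size=(180, 25))
Y1, Y2 = rng.normal(size=(200, 1000)), rng.normal(size=(180, 1000))
model = fit_encoding_models(X1, Y1)
print("mean voxel-wise prediction r:", cross_stimulus_score(model, X2, Y2).mean())
```

Swapping the train and test stimuli, as the study's cross-validation does, simply means calling the same two functions with the roles of the medleys reversed.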


Marin M.M., University of Vienna | Thompson W.F., Macquarie University | Gingras B., University of Innsbruck | Stewart L., Goldsmiths, University of London | Stewart L., The Royal Academy of Music
Neuropsychologia | Year: 2015

Congenital amusia is a neurodevelopmental disorder characterized by impaired pitch processing. Although pitch simultaneities are among the fundamental building blocks of Western tonal music, affective responses to simultaneities such as isolated dyads varying in consonance/dissonance or chords varying in major/minor quality have rarely been studied in amusic individuals. Thirteen amusics and thirteen matched controls enculturated to Western tonal music provided pleasantness ratings of sine-tone dyads and complex-tone dyads in piano timbre as well as perceived happiness/sadness ratings of sine-tone triads and complex-tone triads in piano timbre. Acoustical analyses of roughness and harmonicity were conducted to determine whether similar acoustic information contributed to these evaluations in amusics and controls. Amusic individuals' pleasantness ratings indicated sensitivity to consonance and dissonance for complex-tone (piano timbre) dyads and, to a lesser degree, sine-tone dyads, whereas controls showed sensitivity when listening to both tone types. Furthermore, amusic individuals showed some sensitivity to the happiness-major association in the complex-tone condition, but not in the sine-tone condition. Controls rated major chords as happier than minor chords in both tone types. Linear regression analyses revealed that affective ratings of dyads and triads by amusic individuals were predicted by roughness but not harmonicity, whereas affective ratings by controls were predicted by both roughness and harmonicity. We discuss affective sensitivity in congenital amusia in view of theories of affective responses to isolated chords in Western listeners. © 2015 The Authors.
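The regression analysis this abstract describes, predicting affective ratings from roughness and harmonicity, can be sketched in a few lines. The arrays below are random placeholders, not the study's measurements; the point is only the shape of the analysis.

```python
# Sketch: linear regression of affective ratings on acoustic predictors.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_dyads = 24
# Hypothetical per-stimulus acoustic measures (random stand-ins):
roughness = rng.uniform(0, 1, n_dyads)
harmonicity = rng.uniform(0, 1, n_dyads)
# Hypothetical pleasantness ratings built from both predictors plus noise:
ratings = 5 - 3 * roughness + 2 * harmonicity + rng.normal(0, 0.5, n_dyads)

X = np.column_stack([roughness, harmonicity])
model = LinearRegression().fit(X, ratings)
print("beta(roughness)   =", model.coef_[0])
print("beta(harmonicity) =", model.coef_[1])
print("R^2 =", model.score(X, ratings))
# In the study's terms: controls' ratings loaded on both predictors,
# whereas amusics' ratings were predicted by roughness alone.
```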


Gebauer L., University of Aarhus | Skewes J., University of Aarhus | Westphael G., University of Aarhus | Heaton P., Goldsmiths, University of London | And 2 more authors.
Frontiers in Neuroscience | Year: 2014

Music is a potent source for eliciting emotions, but not everybody experiences emotions in the same way. Individuals with autism spectrum disorder (ASD) show difficulties with social and emotional cognition. Impairments in emotion recognition are widely studied in ASD and have been associated with atypical brain activation in response to emotional expressions in faces and speech. Whether these impairments and atypical brain responses generalize to other domains, such as emotional processing of music, is less clear. Using functional magnetic resonance imaging, we investigated neural correlates of emotion recognition in music in high-functioning adults with ASD and neurotypical adults. Both groups engaged similar neural networks during processing of emotional music, and individuals with ASD rated emotional music comparably to the group of neurotypical individuals. However, in the ASD group, increased activity in response to happy compared to sad music was observed in dorsolateral prefrontal regions and in the rolandic operculum/insula, and we propose that this reflects increased cognitive processing and physiological arousal in response to emotional musical stimuli in this group. © 2014 Gebauer, Skewes, Westphael, Heaton and Vuust.


Toiviainen P., University of Jyväskylä | Alluri V., University of Jyväskylä | Brattico E., University of Jyväskylä | Brattico E., Aalto University | And 4 more authors.
NeuroImage | Year: 2014

We investigated neural correlates of musical feature processing with a decoding approach. To this end, we used a method that combines computational extraction of musical features with regularized multiple regression (LASSO). Optimal model parameters were determined by maximizing the decoding accuracy using a leave-one-out cross-validation scheme. The method was applied to functional magnetic resonance imaging (fMRI) data that were collected using a naturalistic paradigm, in which participants' brain responses were recorded while they were continuously listening to pieces of real music. The dependent variables comprised musical feature time series that were computationally extracted from the stimulus. We expected timbral features to obtain a higher prediction accuracy than rhythmic and tonal ones. Moreover, we expected the areas significantly contributing to the decoding models to be consistent with areas of significant activation observed in previous research using a naturalistic paradigm with fMRI. Of the six musical features considered, five could be significantly predicted for the majority of participants. The areas significantly contributing to the optimal decoding models agreed to a great extent with results obtained in previous studies. In particular, areas in the superior temporal gyrus, Heschl's gyrus, Rolandic operculum, and cerebellum contributed to the decoding of timbral features. For the decoding of the rhythmic feature, we found the bilateral superior temporal gyrus, right Heschl's gyrus, and hippocampus to contribute most. The tonal feature, however, could not be significantly predicted, suggesting a higher inter-participant variability in its neural processing. A subsequent classification experiment revealed that segments of the stimulus could be classified from the fMRI data with significant accuracy. The present findings provide compelling evidence for the involvement of the auditory cortex, the cerebellum and the hippocampus in the processing of musical features during continuous listening to music. © 2013 Elsevier Inc.
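A compact sketch of the decoding scheme this abstract describes: LASSO regression from voxel activity to a computationally extracted musical feature time series, with the regularization strength chosen by leave-one-out cross-validation over held-out units. The shapes, group structure, and random data below are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: LASSO decoding of a musical feature from fMRI data,
# with the penalty chosen by leave-one-group-out cross-validation.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneGroupOut, GridSearchCV

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 200, 500
bold = rng.normal(size=(n_timepoints, n_voxels))         # X: voxel time series
feature = rng.normal(size=n_timepoints)                  # y: e.g. a timbral feature
segments = np.repeat(np.arange(10), n_timepoints // 10)  # held-out stimulus segments

search = GridSearchCV(
    Lasso(max_iter=10000),
    param_grid={"alpha": np.logspace(-3, 1, 9)},
    cv=LeaveOneGroupOut(),
    scoring="r2",
)
search.fit(bold, feature, groups=segments)
print("best alpha:", search.best_params_["alpha"])
print("cross-validated R^2:", search.best_score_)  # near zero on random data
# Voxels with non-zero LASSO weights are the ones "contributing to the
# decoding model" in the sense sketched here.
```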


Brattico P., The Royal Academy of Music | Brattico E., The Royal Academy of Music | Vuust P., The Royal Academy of Music
Frontiers in Neuroscience | Year: 2017

A well-known tradition in the study of visual aesthetics holds that the experience of visual beauty is grounded in global computational or statistical properties of the stimulus, for example, a scale-invariant Fourier spectrum or self-similarity. Some approaches rely on neural mechanisms, such as efficient computation, processing fluency, or the responsiveness of the cells in the primary visual cortex. These proposals are united by the fact that the contributing factors are hypothesized to be global (i.e., they concern the percept as a whole), formal or non-conceptual (i.e., they concern form instead of content), computational and/or statistical, and based on relatively low-level sensory properties. Here we consider how the study of aesthetic responses to music could benefit from the same approach. Thus, along with local features such as pitch, tuning, consonance/dissonance, harmony, timbre, or beat, global sonic properties could also be viewed as contributing toward an aesthetic musical experience. Several such properties are discussed and their neural implementation is reviewed in light of recent advances in neuroaesthetics. © 2017 Brattico, Brattico and Vuust.
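One of the global statistical properties mentioned in this abstract, a scale-invariant (roughly 1/f) Fourier spectrum, can be estimated with a few lines of code. This is a minimal sketch under simple assumptions (a mono signal, a straight-line fit in log-log space), not a method taken from the paper.

```python
# Sketch: estimate the spectral slope of a signal; a slope near -1
# indicates an approximately scale-invariant (1/f) amplitude spectrum.
import numpy as np

def spectral_slope(signal, sr):
    """Fit a line to log-amplitude vs log-frequency of the FFT spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    mask = freqs > 0  # drop the DC bin before taking logs
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask] + 1e-12), 1)
    return slope

# Example: pink-ish noise built from 1/f amplitudes and random phases.
rng = np.random.default_rng(0)
n, sr = 2**14, 22050
freqs = np.fft.rfftfreq(n, d=1.0 / sr)
amps = np.zeros_like(freqs)
amps[1:] = 1.0 / freqs[1:]
phases = rng.uniform(0, 2 * np.pi, len(freqs))
pink = np.fft.irfft(amps * np.exp(1j * phases), n)
print("estimated spectral slope:", spectral_slope(pink, sr))  # ~ -1
```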


Hooper M., The Royal Academy of Music
Leonardo | Year: 2013

Christopher Redgate is developing a new oboe for the 21st century and is working with composers to develop new music for the instrument. This article addresses the early stages of his collaboration with Sam Hayden. It demonstrates some of the coordination problems at the inception of a project that includes an instrument with a long history. The article sets out some of the ongoing concerns in documenting this work, arguing that the oboe itself is one of the principal forces shaping a collaboration that is future-focused. © 2013 ISAST.


Konvalinka I., Aarhus University Hospital | Vuust P., Aarhus University Hospital | Vuust P., The Royal Academy of Music | Roepstorff A., Aarhus University Hospital | And 3 more authors.
Quarterly Journal of Experimental Psychology | Year: 2010

To study the mechanisms of coordination that are fundamental to successful interactions, we carried out a joint finger tapping experiment in which pairs of participants were asked to maintain a given beat while synchronizing to an auditory signal coming from the other person or the computer. When both were hearing each other, the pair became a coupled, mutually and continuously adaptive unit of two "hyper-followers", with their intertap intervals (ITIs) oscillating in opposite directions on a tap-to-tap basis. There was thus no evidence for the emergence of a leader-follower strategy. We also found that dyads were equally good at synchronizing with the irregular but responsive other as with the predictable, unresponsive computer. However, they performed worse when the "other" was both irregular and unresponsive. We thus propose that interpersonal coordination is facilitated by the mutual abilities to (a) predict the other's subsequent action and (b) adapt accordingly on a millisecond timescale. © 2010 The Experimental Psychology Society.
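The intertap-interval measure at the heart of this analysis is simple to compute. The sketch below uses hypothetical tap times and a generic lagged correlation; it illustrates the measure, not the study's statistics. A negative correlation at lag 0 would indicate the two interval series moving in opposite directions on the same tap, the oscillating "hyper-follower" pattern the abstract describes.

```python
# Sketch: intertap intervals (ITIs) for two tappers and their lagged correlations.
import numpy as np

def itis(tap_times):
    """Intertap intervals: differences between consecutive tap onsets."""
    return np.diff(tap_times)

def lagged_corr(a, b, lag):
    """Pearson correlation between a[t] and b[t + lag]."""
    if lag > 0:
        a, b = a[:-lag], b[lag:]
    elif lag < 0:
        a, b = a[-lag:], b[:lag]
    return np.corrcoef(a, b)[0, 1]

# Hypothetical tap times for two people aiming at a 0.5 s beat:
rng = np.random.default_rng(0)
beat = 0.5
taps_a = np.cumsum(beat + rng.normal(0, 0.02, 60))
taps_b = np.cumsum(beat + rng.normal(0, 0.02, 60))

iti_a, iti_b = itis(taps_a), itis(taps_b)
for lag in (-1, 0, 1):
    print(f"lag {lag:+d}: r = {lagged_corr(iti_a, iti_b, lag):.3f}")
```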


News Article | December 9, 2016
Site: www.eurekalert.org

When standing in a crowd at a concert, clapping hands along with the music on stage, it may be that people with higher levels of oxytocin are better synchronised with the beat of the music than those with lower levels of oxytocin.

A new study from the Center for Music in the Brain (MIB), Aarhus University/The Royal Academy of Music, Denmark, published in Scientific Reports on 8 December 2016, shows that participants receiving oxytocin, a hormone known to promote social bonding, are more synchronised when finger-tapping together than participants receiving placebo. This effect was observed when pairs of participants, placed in separate rooms, tapped together in a leader/follower relationship.

When people synchronise their movements, for example by walking in time, clapping or making music, they seem to like each other more and report feeling greater affiliation with each other. Oxytocin is a naturally occurring hormone that has been shown to promote social interaction, such as cooperation and affiliation. However, until now it has been unclear whether the social effect of oxytocin is a direct one, or whether oxytocin primarily affects synchronisation and only secondarily social behaviours.

The researchers tested these questions by measuring whether increased levels of oxytocin affected how pairs of participants synchronised to a steady beat. One group of pairs received oxytocin through a nasal spray, and another group received a placebo, also through a nasal spray. The results indicate that oxytocin indeed affects synchronisation between participants, but it did not influence how much tappers liked their tapping partners. Followers in the oxytocin group were less variable in their tapping to the beat, suggesting that they were better at predicting the taps of their leaders. Oxytocin's social effect may thus be explained by its role in facilitating prediction in interaction, even in the absence of subjectively experienced social affiliation.

The ability to synchronise to a musical beat is largely a human skill. The study contributes to our understanding of how this form of human behaviour is affected by socio-biological factors, such as oxytocin and leader-follower relationships. It also highlights how music creates and maintains social cohesion from an evolutionary perspective. The study was performed in collaboration with Assoc. Prof. Ivana Konvalinka (DTU).

* The Danish National Research Foundation's Center for Music in the Brain (MIB) is an interdisciplinary research center addressing the dual questions of how music is processed in the brain and how this can inform our understanding of fundamental principles behind brain processing in general. The center employs state-of-the-art scanning methods (MR, fMRI, MEG, EEG, PET) and behavioral measures. MIB is a collaboration between Aarhus University (AU) and The Royal Academy of Music (RAMA), located at AU.
With a strong foundation in music practice and theory at the highest level, and a focus on clinical application of music, MIB combines neuroscientific, musicological and psychological research in music perception, action, emotion and learning, with the potential to test the most prominent theories of brain function, and to influence the way we play, teach, use, and listen to music. http://dg.
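The group comparison in the article above (followers on oxytocin tapping less variably than followers on placebo) boils down to comparing one variability score per participant across two groups. Here is a minimal sketch with hypothetical numbers, using the standard deviation of tap asynchronies as the variability measure and an independent-samples t-test as a generic stand-in for the study's statistics.

```python
# Sketch: comparing follower tapping variability between two groups.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Hypothetical per-follower variability scores (SD of tap asynchronies, ms):
oxytocin_group = rng.normal(18, 4, 20)
placebo_group = rng.normal(24, 4, 20)

t, p = ttest_ind(oxytocin_group, placebo_group)
print(f"t = {t:.2f}, p = {p:.4f}")
# Lower variability in the oxytocin group would be consistent with
# followers predicting their leaders' taps more accurately.
```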


News Article | November 3, 2016
Site: www.eurekalert.org

WASHINGTON - If you've found yourself singing along to Lady Gaga's "Bad Romance" hours after you switched the radio off, you are not alone. Certain songs do tend to stick in our heads more than others for some very specific reasons, according to research published by the American Psychological Association.

These songs, often called earworms, are usually faster, with a fairly generic and easy-to-remember melody, but with some particular intervals, such as leaps or repetitions, that set them apart from the average pop song, according to the first large-scale study of earworms. The article appears online in the APA journal Psychology of Aesthetics, Creativity and the Arts. In addition to "Bad Romance," examples of common earworms named in the study include "Don't Stop Believing" by Journey and, perhaps not surprisingly, "Can't Get You Out Of My Head" by Kylie Minogue.

"These musically sticky songs seem to have quite a fast tempo along with a common melodic shape and unusual intervals or repetitions like we can hear in the opening riff of 'Smoke On The Water' by Deep Purple or in the chorus of 'Bad Romance,'" said the study's lead author, Kelly Jakubowski, PhD, of Durham University. She conducted the study while at Goldsmiths, University of London.

Earworms are also, unsurprisingly, more likely to get more radio time and be featured at the top of the charts. However, there has previously been limited evidence about what makes such songs catchy regardless of popularity or how often people may have heard them. "Our findings show that you can, to some extent, predict which songs are going to get stuck in people's heads based on the song's melodic content. This could help aspiring song-writers or advertisers write a jingle everyone will remember for days or months afterwards," said Jakubowski.

The study found that the tunes most likely to get stuck in people's heads were those with more common global melodic contours, meaning they have overall melodic shapes commonly found in Western pop music. For example, one of the most common contour patterns is heard in "Twinkle, Twinkle Little Star," where the first phrase rises in pitch and the second falls. Numerous other nursery tunes follow the same pattern, making them easy for young children to remember, according to the authors. The opening riff of "Moves Like Jagger" by Maroon 5, one of the top named earworm tunes in the study, also follows this common contour pattern of rising then falling in pitch.

In addition to a common melodic shape, the other crucial ingredient in the earworm formula is an unusual interval structure, such as unexpected leaps or more repeated notes than you would expect to hear in the average pop song. The instrumental interlude of "My Sharona" by the Knack and "In The Mood" by Glenn Miller both have this unusual interval structure.

The researchers asked 3,000 people to name their most frequent earworm tunes and compared these to tunes that had never been named as earworms in the database but were a match in terms of popularity and how recently they had been in the United Kingdom music charts. The melodic features of the earworm and non-earworm tunes were then analyzed and compared. Songs were limited to popular genres, such as pop, rock, rap and rhythm and blues. The data for the study were collected from 2010 to 2013.
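The two melodic features the article highlights, a common rise-then-fall contour and an unusual rate of note repetition, are easy to compute from a pitch sequence. The sketch below is an illustrative toy, not the study's feature set: the phrase split and feature definitions are assumptions, with pitches given as MIDI note numbers.

```python
# Sketch: two simple melodic features of the kind compared in earworm research.
import numpy as np

def rise_fall_contour(pitches):
    """True if the first half rises overall and the second half falls."""
    mid = len(pitches) // 2
    first, second = pitches[:mid], pitches[mid:]
    return first[-1] > first[0] and second[-1] < second[0]

def repeated_note_rate(pitches):
    """Fraction of melodic intervals that are repetitions (interval of 0)."""
    return np.mean(np.diff(pitches) == 0)

# "Twinkle, Twinkle Little Star" opening (C C G G A A G | F F E E D D C):
twinkle = np.array([60, 60, 67, 67, 69, 69, 67, 65, 65, 64, 64, 62, 62, 60])
print("rise-then-fall contour:", rise_fall_contour(twinkle))
print("repeated-note rate:", repeated_note_rate(twinkle))
```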
Studies of earworms can help to understand how brain networks involved in perception, emotions, memory and spontaneous thoughts behave in different people, the authors said. Jakubowski also offered tips for how to get rid of an earworm.

Article: "Dissecting an earworm: Melodic features and song popularity predict involuntary musical imagery," by Kelly Jakubowski, PhD, Goldsmiths, University of London; Sebastian Finkel, MS, University of Tübingen; Lauren Stewart, PhD, Goldsmiths, University of London and Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Denmark; and Daniel Müllensiefen, PhD, Goldsmiths, University of London. Psychology of Aesthetics, Creativity and the Arts, published online Nov. 3, 2016. Full text of the article is available from the APA Public Affairs Office and at http://www. .

The American Psychological Association, in Washington, D.C., is the largest scientific and professional organization representing psychology in the United States. APA's membership includes more than 117,500 researchers, educators, clinicians, consultants and students. Through its divisions in 54 subfields of psychology and affiliations with 60 state, territorial and Canadian provincial associations, APA works to advance the creation, communication and application of psychological knowledge to benefit society and improve people's lives. http://www.
