Zurich, Switzerland

Degara N.,Fraunhofer Institute for Integrated Circuits | Kuppanda T.,Fraunhofer Institute for Integrated Circuits | Neate T.,University of Swansea | Yang J.,University of York | Torres A.,Zurich University of the Arts
2014 IEEE VR Workshop: Sonic Interaction in Virtual Environments, SIVE 2014 | Year: 2015

The use of sonification for navigation, localization and obstacle avoidance is considered to be one of the most important tasks in auditory display research for its potential application to navigation systems in vehicles and smartphones, assistive technology and other eyes-free applications. The aim of this technology is to deliver location-based information to support navigation through sound. In this paper a comparison of two sonification methods for navigation and obstacle avoidance is presented. These methods were initially developed during a sonification hack day that was run during the Interactive Sonification (ISon) workshop 2013. In order to allow the formal comparison of methods, we followed a reproducible sonification approach using a set of guidelines provided by SonEX (Sonification Evaluation eXchange). SonEX is a community-based environment that enables the definition and evaluation of standardized tasks, supporting open science standards and reproducible research. In order to allow for reproducible research, the system has been made publicly available. © 2014 IEEE.
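The abstract does not specify the two sonification methods themselves. As a purely illustrative parameter-mapping sketch (the function name, ranges and mapping shape are invented, not the authors' designs), a distance-to-pitch mapping of the kind used for obstacle sonification could look like this:

```python
import math

def distance_to_pitch(distance_m, d_min=0.5, d_max=10.0,
                      f_near=880.0, f_far=220.0):
    """Map obstacle distance to a beep frequency: closer obstacles sound higher.
    All ranges are illustrative defaults, not values from the paper."""
    d = min(max(distance_m, d_min), d_max)      # clamp to the mapped range
    t = (d - d_min) / (d_max - d_min)           # 0 (near) .. 1 (far)
    # exponential interpolation keeps equal distance ratios at equal pitch intervals
    return f_near * (f_far / f_near) ** t

if __name__ == "__main__":
    for d in (0.5, 2.0, 5.0, 10.0):
        print(f"{d:5.1f} m -> {distance_to_pitch(d):6.1f} Hz")
```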


Gotz U.,Zurich University of the Arts | Kocher M.,Zurich University of the Arts | Bauer R.,Zurich University of the Arts | Muller C.,Human Interface Design | Meilick B.,Erding
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2016

In this paper we address the challenges in designing game-based training software for use in a neurocognitive context. The project “Hotel Plastisse” is a cooperative software development, designed by researchers of the Specialization in Game Design at the Zurich University of the Arts (ZHdK) and of the International Normal Aging and Plasticity Imaging Center (INAPIC) at the University of Zurich. The aim of the project was to create a game-based software tool for a training study, which allows the comparison of single-domain and multidomain cognitive training of elderly people. As the demand for game-based training software in scientific or therapeutic contexts is expected to grow in the future, we would like to share our insights into the development of a game that follows game design principles while at the same time generating scientifically valuable data. © Springer International Publishing Switzerland 2016.


Grant
Agency: Cordis | Branch: H2020 | Program: RIA | Phase: REFLECTIVE-2-2015 | Award Amount: 2.71M | Year: 2016

Transmitting Contentious Cultural Heritages with the Arts: From Intervention to Co-Production (TRACES) aims to provide new directions for cultural heritage institutions to contribute productively to evolving European identity and reflexive Europeanization. To do so, it deploys an innovative ethnographic/artistic approach, focused on a wide range of types of contentious heritage. Attention to contentious heritage is crucial as it is especially likely to raise barriers to inclusivity and convivial relations, as well as to be difficult to transmit to the public. Transmitted effectively, however, it is potentially especially productive in raising critical reflection and contributing to reflexive Europeanization, in which European identity is shaped by self-awareness and on-going critical reflection. Through rigorous and creative in-depth artistic/ethnographic research, TRACES will provide a systematic analysis of the challenges and opportunities raised by transmitting contentious, awkward and difficult pasts. It will do so by setting up Creative Co-Productions (CCPs) in which artists, researchers, heritage agencies and stakeholders work together in longer-term engagements to collaboratively research selected cases of contentious heritage and develop new participatory public interfaces. These will be documented and analysed, including through educational research. These interfaces, which include online as well as physical exhibitions and other formats, are part of the significant output planned for TRACES, along with academic publications and a novel reflective Contentious Heritage Manual that will synthesise results to provide directions for future practical reflexive transmission of cultural heritage in Europe. TRACES brings together a multi-disciplinary team of established and emerging scholars, combining high-level expertise, relevant experience and creative energy to provide a rigorous and innovative approach to the transmission of European cultural heritage.


Del Piccolo A.,University of Venice | Delle Monache S.,IUAV University of Venice | Rocchesso D.,IUAV University of Venice | Papetti S.,Zurich University of the Arts | Mauro D.A.,IUAV University of Venice
Proceedings of the 12th International Conference in Sound and Music Computing, SMC 2015 | Year: 2015

A surface can be harsh and raspy, or smooth and silky, and everything in between. We are used to sensing these features with our fingertips as well as with our eyes and ears: the exploration of a surface is a multisensory experience. Tools, too, are often employed in the interaction with surfaces, since they augment our manipulation capabilities. "Sketch-a-Scratch" is a tool for the multisensory exploration and sketching of surface textures. The user's actions drive a physical sound model of real materials' response to interactions such as scraping, rubbing or rolling. Moreover, different input signals can be converted into 2D visual surface profiles, making it possible to experience them visually, aurally and haptically. © 2015 A. Del Piccolo.


Schacher J.C.,Zurich University of the Arts | Jarvelainen H.,Zurich University of the Arts | Strinning C.,Zurich University of the Arts | Neff P.,University of Zürich
Proceedings of the 12th International Conference in Sound and Music Computing, SMC 2015 | Year: 2015

What are the effects of a musician's movement on the affective impact of experiencing a music performance? How can perceptual, sub-personal and cognitive aspects of music be investigated through experimental processes? This article describes the development of a mixed methods approach that tries to tackle such questions by blending quantitative and qualitative methods with observations and interpretations. Basing the core questions on terms and concepts obtained through a wide survey of literature on musical gesture and movement analysis, the iterative, cyclical advance and extension of a series of experiments is shown, and preliminary conclusions are drawn from data and information collected in a pilot study. With the choice of particular canonical pieces from contemporary music, a multi-perspective field of questioning is opened up that provides ample materials and challenges for a process of converging, intertwining and cross-discipline methods development. The resulting interpretation points to a significant affective impact of movement in music, yet these insights remain subjective and demand that further and deeper investigations be carried out. © 2015 Jan C. Schacher et al.


Schacher J.C.,Zurich University of the Arts | Bisig D.,Zurich University of the Arts
ACM International Conference Proceeding Series | Year: 2014

How does bodily awareness relate to instrumental action, how is movement perceived and what role does it play in music performance with technology? This article investigates such questions in the light of the application of concepts and models for machine-based gesture recognition. The problematic relationship between paradigms of control and the notion of 'inter-action' in the technically mediated art-form of electronic music is revealed. The concepts used when applying gesture recognition likewise display a problematically limited scope. The challenge in using these advanced algorithms is to detect and translate salient and expressive aspects of a movement-based music performance. Acknowledging this as a goal exposes the need for a unified high-level descriptive system for expressive movements or gestures in music and dance performance.


Candia V.,Collegium Helveticum | Candia V.,Zurich University of the Arts | Deprez P.,Collegium Helveticum | Deprez P.,Zurich University of the Arts | And 2 more authors.
PLoS ONE | Year: 2015

We investigated, in a university student population, spontaneous (non-speeded) fast and slow number-to-line mapping responses using non-symbolic (dots) and symbolic (words) stimuli. Seeking less conventionalized responses, we used anchors 0-130, rather than the standard 0-100. Slow responses to both types of stimuli only produced linear mappings with no evidence of non-linear compression. In contrast, fast responses revealed distinct patterns of non-linear compression for dots and words. A predicted logarithmic compression was observed in fast responses to dots in the 0-130 range, but not in the reduced 0-100 range, indicating compression in proximity of the upper anchor 130, not the standard 100. Moreover, fast responses to words revealed an unexpected significant negative compression in the reduced 0-100 range, but not in the 0-130 range, indicating compression in proximity to the lower anchor 0. Results show that fast responses help reveal the fundamentally distinct nature of symbolic and non-symbolic quantity representation. Whole number words, being intrinsically mediated by cultural phenomena such as language and education, emphasize the invariance of magnitude between them (essential for linear mappings) and therefore, unlike non-symbolic (psychophysical) stimuli, yield spatial mappings that do not seem to be influenced by the Weber-Fechner law of psychophysics. However, high levels of education (when combined with an absence of standard upper anchors) may lead fast responses to overestimate magnitude invariance on the lower end of word numerals. © 2015 Candia et al.


Schacher J.C.,Zurich University of the Arts | Schacher J.C.,Royal Conservatoire of Scotland | Neff P.,Zurich University of the Arts | Neff P.,University of Zürich
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2016

Skill development, the stabilisation of expertise through practice, and processes of bodily as well as neural sharing in the context of gesture-based electronic music performance are the topic of this article. The key questions centre around the affective, embodied but also neurological aspects of these processes. The types of awareness on a corporeal level and the neural processes that occur within the musician and the listener-viewer are investigated, since in music performance the perceptions of musician and audience depend on shared embodiment and cognitive processes. The aim is to show that 'enactive', embodied concepts merely provide a perspective on the same complex matter that differs from the one the cognitive neurosciences propose. A concrete musical piece is used as an example that shows a gestural practice using sensor-based instruments and digital sound processing in order to expose the critical relationships between musician, instrument, technology and the audience. The insights arising from blending the two complementary perspectives in this context can be productive both for artistic practice and for systematic research in music. © Springer International Publishing Switzerland 2016.


Pfaff S.,Zurich University of the Arts | Lervik O.,Zurich University of the Arts
2015 IEEE 2nd VR Workshop on Sonic Interactions for Virtual Environments, SIVE 2015 - Proceedings | Year: 2015

In this study we explored whether a virtual 3D environment in combination with various game mechanics can function as a creative playground for collaborative music making in a virtual space. The virtual environment is created using Unity3D, which communicates with any OSC-enabled music software or programming environment, for example Max. Up to six players were presented with a virtual environment consisting of three interlinked mini games enabling them to create and control various parameters of the sound. Furthermore, it was possible to change the placement of the sound sources within the virtual space and to add reverb. This was reflected by changing the real-space position of the sounds provided by a surround installation of eight loudspeakers using Ambisonics surround panning. During preliminary tests we observed a quick appropriation of the provided surroundings, with players reporting that they felt truly creative, resulting in unexpected musical situations. Additionally, the participants distributed themselves amongst the instruments quickly and non-verbally, so that spectators could spontaneously enter and contribute to the experience. We conclude that collaborative music making in a virtual space, especially in combination with modern game mechanics, might change not only how we create but also how we perceive music in a modern era. © 2015 IEEE.
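As a hedged illustration of the game-to-sound communication path described here (the address, port and parameter name are hypothetical, and the hand-rolled OSC 1.0 encoder below is a minimal sketch, not the project's code), a game process could send a control value to any OSC-enabled receiver such as Max like so:

```python
import socket
import struct

def _osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, as OSC 1.0 requires."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *values: float) -> bytes:
    """Encode a minimal OSC message carrying only float arguments."""
    msg = _osc_pad(address.encode())
    msg += _osc_pad(("," + "f" * len(values)).encode())   # type tag string
    for v in values:
        msg += struct.pack(">f", v)                       # 32-bit big-endian float
    return msg

# Hypothetical address and parameter; a matching receiver must listen on this port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/minigame/1/reverb", 0.35), ("127.0.0.1", 9000))
```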


Rocchesso D.,IUAV University of Venice | Delle Monache S.,IUAV University of Venice | Papetti S.,Zurich University of the Arts
International Journal of Human Computer Studies | Year: 2015

A tool for the multisensory stylus-based exploration of virtual textures was used to investigate how different feedback modalities (static or dynamically deformed images, vibration, sound) affect exploratory gestures. To this end, we ran an experiment where participants had to steer a path with the stylus through a curved corridor on the surface of a graphic tablet/display, and we measured steering time, dispersion of trajectories, and applied force. Despite the variety of subjective impressions elicited by the different feedback conditions, we found that only nonvisual feedback induced significant variations in trajectories and an increase in movement time. In a post-experiment, using a paper-and-wood physical realization of the same texture, we recorded a variety of gestural behaviors markedly different from those found with the virtual texture. With the physical setup, movement time was shorter and texture-dependent lateral accelerations could be observed. This work highlights the limits of multisensory pseudo-haptic techniques in the exploration of surface textures. © 2015 The Authors.


Fricker S.,Blekinge Institute of Technology | Schumacher S.,Zurich University of the Arts
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

[Context and motivation] Requirements catalogues for software release planning are often not complete and homogeneous. Current release planning approaches, however, assume such commitment to detail - at least implicitly. [Question/problem] We evaluate how to relax these expectations, while at the same time reducing release planning effort and increasing decision-making flexibility. [Principal ideas/results] Feature trees capture AND, OR, and REQUIRES relationships between requirements. Such requirements structuring can be used to hide incompleteness and to support abstraction. [Contribution] The paper describes how to utilize feature trees for planning the releases of an evolving software solution and evaluates the effects of the approach on effort, decision-making, and trust with an industrial case. © 2012 Springer-Verlag.
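The paper's own notation for feature trees is not reproduced in the abstract; as an illustrative sketch only (the class layout, feature names and the completeness check are assumptions, not the authors' model), AND, OR and REQUIRES structuring of requirements could be encoded like this:

```python
from dataclasses import dataclass, field
from typing import List

AND, OR = "AND", "OR"

@dataclass
class Feature:
    name: str
    kind: str = AND                      # how the children of this node relate
    children: List["Feature"] = field(default_factory=list)
    requires: List[str] = field(default_factory=list)   # cross-tree REQUIRES links

def planned(root: Feature, selected: set) -> bool:
    """Check a release selection: selected AND nodes need all their children,
    OR nodes need at least one, and every REQUIRES target must be selected."""
    def ok(node: Feature) -> bool:
        if node.name not in selected:
            return True                  # an unselected subtree may stay incomplete
        if any(r not in selected for r in node.requires):
            return False
        picked = [c for c in node.children if c.name in selected]
        if node.kind == AND and len(picked) != len(node.children):
            return False
        if node.kind == OR and node.children and not picked:
            return False
        return all(ok(c) for c in picked)
    return ok(root)

reporting = Feature("reporting", kind=OR, children=[Feature("pdf"), Feature("csv")])
export = Feature("export", requires=["reporting"])
product = Feature("product", children=[reporting, export])
print(planned(product, {"product", "reporting", "pdf", "export"}))   # True
```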


Studer R.,University of Geneva | Danuser B.,University of Geneva | Hildebrandt H.,Zurich University of the Arts | Arial M.,University of Geneva | Gomez P.,University of Geneva
Journal of Psychosomatic Research | Year: 2011

Objective: Despite the importance of respiration and hyperventilation in anxiety disorders, research on breathing disturbances associated with hyperventilation is rare in the field of music performance anxiety (MPA, also known as stage fright). The only comparable study in this area reported a positive correlation between negative feelings of MPA and hyperventilation complaints during performance. The goals of this study were (a) to extend these previous findings to the period before performance, (b) to test whether a positive correlation also exists between hyperventilation complaints and the experience of stage fright as a problem, (c) to investigate instrument-specific symptom reporting, and (d) to confirm gender differences in negative feelings of MPA and hyperventilation complaints reported in other studies. Methods: We assessed 169 university students of classical music with a questionnaire comprising: the State-Trait Anxiety Inventory for negative feelings of MPA, the Nijmegen Questionnaire for hyperventilation complaints, and a single item for the experience of stage fright as a problem. Results: We found a significant positive correlation between hyperventilation complaints and negative feelings of MPA before performance and a significant positive correlation between hyperventilation complaints and the experience of stage fright as a problem. Wind musicians/singers reported a significantly higher frequency of respiratory symptoms than other musicians. Furthermore, women scored significantly higher on hyperventilation complaints and negative feelings of MPA. Conclusion: These results further the findings of previous reports by suggesting that breathing disturbances associated with hyperventilation may play a role in MPA prior to going on stage. Experimental studies are needed to confirm whether hyperventilation complaints associated with negative feelings of MPA manifest themselves at the physiological level. © 2010 Elsevier Inc.


Papetti S.,Zurich University of the Arts
Proceedings of the 9th Sound and Music Computing Conference, SMC 2012 | Year: 2012

The ICST DSP library is a compact collection of C++ routines with a focus on rapid development of audio processing and analysis applications. Unlike other similar libraries it offers a set of technical computing tools as well as speed-optimized industrial-grade DSP algorithms, which allow one to prototype, test and implement real-time applications without the need to switch development environments. The package does not rely on any third-party libraries, supports multiple platforms and is released under the FreeBSD license. © 2012 Stefano Papetti et al.


Bisig D.,Zurich University of the Arts | Kocher P.,Zurich University of the Arts
ICMC 2012: Non-Cochlear Sound - Proceedings of the International Computer Music Conference 2012 | Year: 2012

A new version of our swarm simulation environment for musical and artistic applications is presented. The main improvements of this version concern the integration of an OSC-based communication protocol and the addition of a graphical user interface. These extensions offer a variety of approaches for the configuration, manipulation and application of simulated swarms in real time. We hope that these improvements open up the application of swarm simulations to a wider audience of artists and musicians.
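The simulation internals are not given in the abstract. The following is a bare-bones flocking sketch (cohesion and separation only, with invented parameter values, and not taken from the described environment) of the kind of per-agent update such a system computes before streaming agent states out, e.g. over OSC:

```python
import random

N, DT = 20, 0.05
pos = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(N)]
vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def step():
    cx = sum(p[0] for p in pos) / N
    cy = sum(p[1] for p in pos) / N
    for i in range(N):
        ax = 0.5 * (cx - pos[i][0])             # cohesion: steer toward the centroid
        ay = 0.5 * (cy - pos[i][1])
        for j in range(N):                      # separation: push away from close neighbours
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            if i != j and dx * dx + dy * dy < 0.01:
                ax += 2.0 * dx
                ay += 2.0 * dy
        vel[i][0] += DT * ax
        vel[i][1] += DT * ay
        pos[i][0] += DT * vel[i][0]
        pos[i][1] += DT * vel[i][1]

for _ in range(100):
    step()
# each agent's position could then be sent out as a control stream for synthesis
```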


Schacher J.C.,Zurich University of the Arts | Miyama C.,University of Cologne | Lossius T.,Electronic Arts
Proceedings - 40th International Computer Music Conference, ICMC 2014 and 11th Sound and Music Computing Conference, SMC 2014 - Music Technology Meets Philosophy: From Digital Echos to Virtual Ethos | Year: 2014

The development of SpatDIF, the Spatial Sound Description Interchange Format, continues with the implementation of concrete software tools. In order to make SpatDIF usable in audio workflows, two types of code implementations are developed. The first is the C/C++ software library 'libspatdif', whose purpose is to provide a reference implementation of SpatDIF. The class structure of this library and its main components embodies the principles derived from the concepts and specification of SpatDIF. The second type of tool consists of specific implementations in audio programming environments, which demonstrate the methods and best-use practices for working with SpatDIF. Two practical scenarios demonstrate the use of an external in MaxMSP and Pure Data as well as the implementation of the same example in a C++ environment. A short-term goal is the complete implementation of the existing specification within the library. A long-term perspective is to develop additional extensions that will further increase the utility of the SpatDIF format. Copyright: © 2014 Jan C. Schacher et al.
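Independently of libspatdif, a minimal sketch of what a SpatDIF-style scene description amounts to in practice is shown below; the address pattern and the JSON container are approximations for illustration only, not the normative specification or the library's API.

```python
import json

def source_position(name, x, y, z, time=None):
    """Build one descriptor entry in a SpatDIF-like address/value form.
    The address pattern is an approximation, not the published spec."""
    entry = {"address": f"/spatdif/source/{name}/position", "value": [x, y, z]}
    if time is not None:
        entry["time"] = time
    return entry

scene = [
    source_position("violin", 1.0, 2.0, 0.0, time=0.0),
    source_position("violin", 0.0, 2.5, 0.0, time=1.5),
]
print(json.dumps(scene, indent=2))   # hand this to a renderer or store it to disk
```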


Schacher J.C.,Zurich University of the Arts
ACM International Conference Proceeding Series | Year: 2015

This article addresses the intersection of technical, analytical and artistic approaches to perceiving and measuring musical movement. The point of view taken is situated between the development and application of technological tools, the design and running of exploratory experiments, and the musical performance moment, where perception of the body and its movements constitutes an integral part of the experience. Through a use-case that is shared with other artists and researchers, a wide range of necessary developments, both conceptual and in software, is shown. The tools and the methods generated are juxtaposed with the realisation that movement analysis is merely one possible usage of acquired data. Artistic translations provide alternate ways of generating meaning from movement data, in particular when translating musical actions into pieces that span multiple modalities. With the proposed multi-perspective methodology, ways and means are sketched out that address the inherent multiplicity of domains involved in music performance and perception.


Hildebrandt H.,Zurich University of the Arts | Hildebrandt H.,University of Basel | Hildebrandt H.,University of Zürich | Nubling M.,Empirical | And 2 more authors.
Medical Problems of Performing Artists | Year: 2012

Background: Public opinion associates music performance with pleasure, relaxation, and entertainment. Nevertheless, several studies have shown that professional musicians and music students are often affected by work-related burdens. These are closely related to stress and anxiety. Objective: To scrutinize specific health strains and work attitudes of music students during their freshman year of high-level education. Methods: One hundred five students at three Swiss music universities took part in a longitudinal study using standardized assessment questionnaires. Before and after their first study year, custom-made questionnaires designed to fit the particular work environment of musicians were used together with already validated instruments. Results: Fatigue, depression, and stage fright increased significantly. Conclusions: Our results indicate that more study is needed and that attempts should be made to minimize the stress level, improve the students' ability to cope with stress, and otherwise reduce their risk for injury. This appears particularly important considering the long-term negative effects of stressors on individuals' health as revealed by modern research.


Fontana F.,University of Udine | Jarvelainen H.,Zurich University of the Arts | Papetti S.,Zurich University of the Arts | Avanzini F.,University of Padua | And 2 more authors.
Proceedings - 40th International Computer Music Conference, ICMC 2014 and 11th Sound and Music Computing Conference, SMC 2014 - Music Technology Meets Philosophy: From Digital Echos to Virtual Ethos | Year: 2014

An experiment has been conducted, measuring pianists' sensitivity to piano key vibrations at the fingers while playing an upright or a grand Yamaha Disklavier piano. Each trial consisted of playing loud and long A notes across the whole keyboard, with vibrations either present or absent through setting the Disklavier pianos to normal or quiet mode. Sound feedback was always provided by a MIDI-controlled piano synthesizer via isolating ear/headphones, which masked the acoustic sound in normal mode. In partial disagreement with the existing literature, our results suggest that significant vibrotactile cues are produced in the lower range of the piano keyboard, with a perceptual cut-off around the middle octave. Possible psychophysical mechanisms supporting the existence of such cues are additionally discussed. © 2014 Federico Fontana et al.


Martin A.L.,Zurich University of the Arts | Gotz U.,Zurich University of the Arts | Bauer R.,Zurich University of the Arts
Conference Proceedings - 2014 IEEE Games, Media, Entertainment Conference, IEEE GEM 2014 | Year: 2015

"IMIC" (Innovative Movement Therapy in Childhood) is a translational research and development project that is specialized in creating a motivating locomotion rehabilitation setting for children with neurological disorders and cognitive limitations in an interdisciplinary context. "IMIC" was founded in 2010 by an interdisciplinary cooperation of movement scientists, neurologists and neuropsychologists (Zurich University Children's Hospital / Rehab Research Group, Rehabilitation Centre Affoltern a. Albis), game designers (ZHdK Zurich University of the Arts / Specialization in Game Design) and specialists in sensory-motor robotics (ETH Zurich / Sensory-Motor Systems Lab; ETH Zurich and Zurich University / Institute for Neuroinformatics; ETH Zurich / Rehabilitation Engeneering Lab). The project focuses on the expansion of pédiatrie robot-Assisted rehabilitation, which is achieved through the flexible connection of specifically designed RehabGame scenarios to various therapy devices for upper and lower extremities. The project's key development, the middleware "RehabConnex", allows the combination of several therapy robots or input devices, or respectively a combination of both to the "IMIC"-RehabGames. "IMIC" makes use of the rehabilitation robots Lokomat® (Hocoma) and ChARMin (ETH Zurich) to function as multimodal "game controllers", which translate the patient's physical input into game control parameters. Beside clinical research questions addressing the impact of the developments, "IMIC" focuses on design aspects and technological research and development: Which game-conceptual, audio-visual and technological features must a RehabGame provide in order to maximally motivate young patients in their active participation in therapy as well as to support the therapists? © 2014 IEEE.


Unemi T.,Soka University | Bisig D.,Zurich University of the Arts
Conference on Human Factors in Computing Systems - Proceedings | Year: 2015

The authors' latest artwork, entitled Visual Liquidizer or Virtual Merge, is a new form of audio-visual interactive installation that displays deformed dynamic images of visitors as if their bodies had become liquidized, scattered, and mixed. It is intended to provide a virtual experience of deeper contact with other people. The basic idea was inspired by the science-fiction novel Wetware by R. Rucker. The authors developed an algorithm using two types of swarm simulations, ANT and BOIDS, in order to realize the deformation of living fragments. Sound effects are generated synchronously with the visuals by mixing sampled sounds of water flows with the visitors' voices. It achieves quick responses for real-time interaction by utilizing parallel processing on the CPU and GPU.


Friedrichs D.,University of Zürich | Maurer D.,Zurich University of the Arts | Dellwo V.,University of Zürich
Journal of the Acoustical Society of America | Year: 2015

In a between-subject perception task, listeners either identified full words or vowels isolated from these words at F0s between 220 and 880 Hz. They received two written words as response options (minimal pair with the stimulus vowel in contrastive position). Listeners' sensitivity (A′) was extremely high in both conditions at all F0s, showing that the phonological function of vowels can also be maintained at high F0s. This indicates that vowel sounds may carry strong acoustic cues departing from common formant frequencies at high F0s and that listeners do not rely on consonantal context phenomena for their identification performance. © 2015 Acoustical Society of America.


Schacher J.C.,Zurich University of the Arts
ACM International Conference Proceeding Series | Year: 2016

Relating movement to sound in an artistic context demands an understanding of the foundations of perception in both domains and the elaboration of techniques that effectively create a link, by technical means, from body to sound. This article explores the strategies necessary in interactive dance work to successfully link movement to sound processes. This is done by reducing the dimensions of the observed elements to the fundamentals and at the same time identifying target dimensions that allow the recreation of an equivalent expression. A categorisation helps to elucidate those elements and characteristics that can be applied, and looks at how they are perceived by the audience. The asymmetry that arises when using technical links to generate sound in interactive dance poses the question of dependency and exposes the limits and challenges of using technology in this performing arts practice.


Neukom M.,Zurich University of the Arts
Proceedings of the 8th Sound and Music Computing Conference, SMC 2011 | Year: 2011

The synchronization of natural and technical periodic processes can be simulated with self-sustained oscillators. Under certain conditions, these oscillators adjust their frequency and their phase to a master oscillator or to other self-sustained oscillators. These processes can be used in sound synthesis for the tuning of non-linear oscillators, for the adjustment of the pitches of other oscillators, for the synchronization of periodic changes of any sound parameters and for the synchronization of rhythms. This paper gives a short introduction to the theory of synchronization [1, 2, 3, 4, 5], shows how to implement the differential equations which describe the self-sustained oscillators and gives some examples of musical applications. The examples are programmed as mxj~ externals for MaxMSP. The Java code samples are taken from the perform routine of these externals. The externals and Max patches can be downloaded from http://www.icst.net/downloads. © 2011 Martin Neukom.
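As a minimal illustration of the underlying mechanism, the sketch below integrates an Adler-type phase oscillator coupled to a master with Euler steps; the frequencies and coupling strength are demonstration values, and this is Python rather than the paper's Java mxj~ code.

```python
import math

# dphi/dt = omega + eps * sin(phi_master - phi): the slave adjusts its phase
# (and effective frequency) toward the master when the coupling is strong enough.
SR = 1000.0                                   # integration steps per second
omega_m = 2 * math.pi * 2.0                   # master: 2 Hz
omega_s = 2 * math.pi * 2.3                   # slave starts detuned at 2.3 Hz
eps = 4.0                                     # coupling strength (rad/s)
phi_m = phi_s = 0.0

for _ in range(int(5 * SR)):                  # five simulated seconds
    phi_m += omega_m / SR
    phi_s += (omega_s + eps * math.sin(phi_m - phi_s)) / SR

# with eps > |omega_s - omega_m| the slave locks to the master at a constant phase lead
print("phase difference:", (phi_s - phi_m) % (2 * math.pi))
```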


Studer R.K.,University of Geneva | Danuser B.,University of Geneva | Hildebrandt H.,Zurich University of the Arts | Arial M.,University of Geneva | And 2 more authors.
Psychosomatic Medicine | Year: 2012

OBJECTIVES AND METHODS: Self-report studies have shown an association between music performance anxiety (MPA) and hyperventilation complaints. However, hyperventilation was never assessed physiologically in MPA. This study investigated the self-reported affective experience, self-reported physiological symptoms, and cardiorespiratory variables including partial pressure of end-tidal CO2 (PETCO2), which is an indicator for hyperventilation, in 67 music students before a private and a public performance. The response coherence between these response domains was also investigated. RESULTS: From the private to the public session, the intensity of all self-report variables increased (all p values <.001). As predicted, the higher the musician's usual MPA level, the larger were these increases (p values <.10). With the exception of PETCO2, the main cardiorespiratory variables also increased from the private to the public session (p values <.05). These increases were not modulated by the usual MPA level (p values >.10). PETCO2 showed a unique response pattern reflected by an MPA-by-session interaction (p <.01): it increased from the private to the public session for musicians with low MPA levels and decreased for musicians with high MPA levels. Self-reported physiological symptoms were related to the self-reported affective experience (p values <.05) rather than to physiological measures (p values >.17). CONCLUSIONS: These findings show for the first time how respiration is stimulated before a public performance in music students with different MPA levels. The hypothesis of a hyperventilation tendency in high-performance-anxious musicians is supported. The response coherence between physiological symptoms and physiological activation is weak. Copyright © 2012 by the American Psychosomatic Society.


Pyun R.,ETH Zurich | Kim Y.,ETH Zurich | Wespe P.,ETH Zurich | Gassert R.,ETH Zurich | Schneller S.,Zurich University of the Arts
IEEE International Conference on Rehabilitation Robotics | Year: 2013

The white cane is a widely used mobility aid that helps visually impaired people navigate their surroundings. While it reliably and intuitively extends the detection range of ground-level obstacles and drop-offs to about 1.2 m, it lacks the ability to detect trunk- and head-level obstacles. Electronic Travel Aids (ETAs) have been proposed to overcome these limitations, but have found minimal adoption due to shortcomings such as low information content and low reliability. Although existing ETAs extend the sensing range beyond that of the conventional white cane, most of them do not detect head-level obstacles and drop-offs, nor can they identify the vertical extent of obstacles. Furthermore, some ETAs work independently of the white cane, and thus reliable detection of surface textures and drop-offs is not provided. This paper introduces a novel ETA, the Advanced Augmented White Cane, which detects obstacles at four vertical levels and provides multi-sensory feedback. We evaluated the device with five blindfolded subjects through reaction time measurements following the detection of an obstacle, as well as through the reliability of drop-off detection. The results showed that our aid could help the user successfully detect an obstacle and identify its height, with an average reaction time of 410 msec. Drop-offs were reliably detected with an intraclass correlation > 0.95. This work is a first step towards a low-cost ETA to complement the functionality of the conventional white cane. © 2013 IEEE.


Fricker S.,Blekinge Institute of Technology | Schumacher S.,Zurich University of the Arts
Lecture Notes in Business Information Processing | Year: 2011

A release plan defines the short-term evolution of a software product in terms of development project scope. In practice, release planning is often based on just fragmentarily defined requirements. Current release planning approaches, however, assume that a requirements catalogue is available in the form of a complete flat list of requirements. This very early commitment to detail reduces the flexibility of a product manager when planning product development. This paper explores how variability modeling, a software product line technique, can be used to plan, communicate, and track the evolution of a single software product. Variability modeling can reduce the number of decisions required for release planning and reduce the information needed for communicating with stakeholders. An industrial case motivates and exemplifies the approach. © 2011 Springer-Verlag.


Fink J.,Ecole Polytechnique Federale de Lausanne | Lemaignan S.,Ecole Polytechnique Federale de Lausanne | Dillenbourg P.,Ecole Polytechnique Federale de Lausanne | Retornaz P.,Ecole Polytechnique Federale de Lausanne | And 5 more authors.
ACM/IEEE International Conference on Human-Robot Interaction | Year: 2014

We present the design approach and evaluation of our prototype called "Ranger". Ranger is a robotic toy box that aims to motivate young children to tidy up their room. We evaluated Ranger in 14 families with 31 children (2-10 years) using the Wizard-of-Oz technique. This case study explores two different robot behaviors (proactive vs. reactive) and their impact on children's interaction with the robot and the tidying behavior. The analysis of the video-recorded scenarios shows that the proactive robot tended to encourage more playful and explorative behavior in children, whereas the reactive robot triggered more tidying behavior. Our findings hold implications for the design of interactive robots for children, and may also serve as an example of evaluating an early version of a prototype in a real-world setting.


Swoboda N.,is Research Center Vienna Austria | Moosburner J.,Zurich University of the Arts | Bruckner S.,University of Bergen | Yu J.Y.,University of California at San Francisco | And 2 more authors.
Computer Graphics Forum | Year: 2016

Neurobiologists investigate the brain of the common fruit fly Drosophila melanogaster to discover neural circuits and link them to complex behaviour. Formulating new hypotheses about connectivity requires potential connectivity information between individual neurons, indicated by overlaps of arborizations of two or more neurons. As the number of higher order overlaps (i.e. overlaps of three or more arborizations) increases exponentially with the number of neurons under investigation, visualization is impeded by clutter and quantification becomes a burden. Existing solutions are restricted to visual or quantitative analysis of pairwise overlaps, as they rely on precomputed overlap data. We present a novel tool that complements existing methods for potential connectivity exploration by providing for the first time the possibility to compute and visualize higher order arborization overlaps on the fly and to interactively explore this information in both its spatial anatomical context and on a quantitative level. Qualitative evaluation by neuroscientists and non-experts demonstrated the utility and usability of the tool. © 2016 The Eurographics Association and John Wiley & Sons Ltd.
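Conceptually, a higher-order overlap is simply the non-empty intersection of three or more arborization regions. The toy sketch below (made-up voxel-index sets, not the tool's volumetric data structures) shows why the number of candidate subsets grows combinatorially with the number of neurons:

```python
from itertools import combinations

# Arborizations represented as sets of voxel indices; the data are invented.
arbors = {
    "n1": {1, 2, 3, 4},
    "n2": {3, 4, 5},
    "n3": {4, 5, 6},
    "n4": {10, 11},
}

def higher_order_overlaps(arbors, min_order=3):
    """Enumerate overlaps of min_order or more arborizations; the number of
    candidate subsets grows exponentially with the number of neurons."""
    names = sorted(arbors)
    for k in range(min_order, len(names) + 1):
        for combo in combinations(names, k):
            shared = set.intersection(*(arbors[n] for n in combo))
            if shared:
                yield combo, shared

for combo, voxels in higher_order_overlaps(arbors):
    print(combo, "->", sorted(voxels))    # ('n1', 'n2', 'n3') -> [4]
```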


Lemaitre G.,IUAV University of Venice | Houix O.,French National Center for Scientific Research | Susini P.,French National Center for Scientific Research | Visell Y.,University Pierre and Marie Curie | Franinovic K.,Zurich University of the Arts
IEEE Transactions on Affective Computing | Year: 2012

This paper reports on emotions felt by users manipulating a computationally and acoustically augmented artifact. Prior studies have highlighted systematic relationships between acoustic features and emotions felt when individuals are passively listening to sounds. However, during interaction with real or computationally augmented artifacts, acoustic feedback results from users' active manipulation of the artifact. In such a setting, both sound and manipulation can contribute to the emotions that are elicited. We report on a set of experimental studies that examined the respective roles of sound and manipulation in eliciting emotions from users. The results show that, while the difficulty of the manipulation task predominated, the acoustical qualities of the sounds also influenced the feelings reported by participants. When the sounds were embedded in an interface, their pleasantness primarily influenced the valence of the users' feelings. However, the results also suggested that pleasant sounds made the task slightly easier, and left the users feeling more in control. The results of these studies provide guidelines for the measurement and design of affective aspects of sound in computationally augmented artifacts and interfaces. © 2012 IEEE.


Winkler C.,Zurich University of the Arts | Park S.,Royal College of Art
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2014

This paper investigates new ways of expressing emotions and desires through adaptable and wearable devices. To this end, a conceptual framework is developed to clarify questions about new and upcoming interfaces and their relation to bodily expression. What kind of aesthetic practices do we have for novel wearable interfaces to guide social interactions? The platform ReFlexLab, situated between academic and market research, is proposed for designing wearable interfaces that change in response to gestures, affecting both our physical and intellectual selves. Focusing on combining human expression and emotional design with the responsiveness of new materials and computational technologies, this research aims to build a new understanding of wearable technologies. These investigations are intended to open up novel ways to express the complex methods of communication that linger behind every bodily expression. © Springer International Publishing Switzerland 2014.


Peters N.,University of California at Berkeley | Lossius T.,Electronic Arts | Schacher J.C.,Zurich University of the Arts
Proceedings of the 9th Sound and Music Computing Conference, SMC 2012 | Year: 2012

SpatDIF, the Spatial Sound Description Interchange Format, is an ongoing collaborative effort offering a semantic and syntactic specification for storing and transmitting spatial audio scene descriptions. The SpatDIF core is a lightweight minimal solution providing the most essential set of descriptors for spatial sound scenes. Additional descriptors are introduced as extensions, expanding the namespace and scope with respect to authoring, scene description, rendering and reproduction of spatial audio. A general overview of the specification is provided, and two use cases are discussed, exemplifying SpatDIF's potential for file-based pieces as well as real-time streaming of spatial audio information. © 2012 Nils Peters et al.


Farhan E.,Zurich University of the Arts | Kocher M.,Zurich University of the Arts
Proceedings of the International Conference on Game Jams, Hackathons, and Game Creation Events, GJH and GC 2016 | Year: 2016

Game jams have grown into a democratized game development process, imitating over several days the production of the AAA industry. However, the current format of mainstream game jams does not offer the possibility to learn how to work on a big production or to correctly emulate the feeling of being in a big team, because team sizes tend to range from a single developer to at most ten participants. In countries like Switzerland, there is no AAA industry that employs hundreds of developers at the same time. For those kinds of experience, game developers and game designers have to "export" themselves out of such countries, which makes the experience harder to gain. In 2015, the two main game developers' linguistic communities of Switzerland tried two different experiments answering this question: "How can we bring together the different talents at our disposal to create one game?". The Swiss-French game developers' community answered with an indie game jam with a team of 26 developers, the I3 Game Jam. The Swiss-German game developers' community answered with an academic workshop of three weeks with a pre-defined game concept, the Swiss Mercenaries Workshop. From those two approaches, the authors formalize and theorize what a Big Team Game Jam is and identify its key elements compared to standard game jams: Target Group, Hierarchy, Participants Motivation, Ownership, and Duration. This paper explores those two solutions and their community context, and gives a framework for countries like Switzerland, without an AAA industry, to emulate such productions at very low cost. © 2016 Copyright held by the owner/author(s). Publication rights licensed to ACM.


Schacher J.C.,Zurich University of the Arts
Leonardo | Year: 2016

The practice of gestural electronic music performance provides a valid context for artistic or practice-based investigations in the field of 'NIME.' To this end, the material and conceptual conditions for the development of performance pieces using gestural actions need to be explored. The use of digital musical instruments and concepts for expressive performance with digital sounds leads to questions of perception (by the musician and by the audience) of movements and actions, the body, the instruments, and their affordances. When considering this performance mode as a topic for investigation, it becomes evident that, in order to be based on practice, research in this field needs a definition and differentiation that helps to identify the specific perspectives that are only made possible through application in an actual artistic practice. © 2016 ISAST.


Monache S.D.,IUAV University of Venice | Hug D.,Zurich University of the Arts | Erkut C.,Aalto University
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

We present an exploration in sonic interaction design, aimed at integrating the power of narrative sound design with the sonic aesthetics of physics-based sound synthesis. The emerging process is based on interpretation, and can represent a novel tool in the education of the future generation of interaction designers. In addition, an audio-tactile paradigm that exploits the potential of the physics-based approach is introduced. © 2010 Springer-Verlag.


Schacher J.C.,Zurich University of the Arts
Proceedings - 40th International Computer Music Conference, ICMC 2014 and 11th Sound and Music Computing Conference, SMC 2014 - Music Technology Meets Philosophy: From Digital Echos to Virtual Ethos | Year: 2014

What is the relationship between the performer's body, the instrument, the musical actions and their perception by an audience? And how do they relate when the music is generated by abstract digital processes controlled through actions on technical control surfaces, or gestural, tangible interfaces? This article investigates these questions by examining elements and concepts from physiology, the cognitive sciences with an 'enactive' and phenomenological perspective and from the point of view of an artistic performance practice, which brings these elements together on stage. In a broad arc the investigation covers instrumental and perceptual affordances, the physical senses of the body, different levels of awareness, corporeal states and modes of awareness, the senses of agency and intentionality, and the sense of movement inherent to music. Based on these insights, the contradiction between the corporeal space of performance and the abstract, codified domain of the digital sound processes is revealed. By looking at the prevalent metaphors, but also the interaction concepts and models of control and their shortcomings, it becomes evident that they need to be refined, possibly based on the perceptual and corporeal criteria developed here. Copyright: © 2014 Jan C. Schacher.


Kocher P.,Zurich University of the Arts
Proceedings - 40th International Computer Music Conference, ICMC 2014 and 11th Sound and Music Computing Conference, SMC 2014 - Music Technology Meets Philosophy: From Digital Echos to Virtual Ethos | Year: 2014

This paper describes the current development of a system designed for the synchronization of musicians in polytempic music. In order to convey the tempo, an animation is used that resembles the gestures of a conductor, which is believed to be particularly comprehensible for musicians. This system offers an alternative to the use of a click track, which is still the most common means of synchronization. The possibility to combine several devices in a network allows for the synchronization of several players in ensemble music. It is hoped that this system promotes the creation and performance of music that exhibits ambitious tempo polyphony as well as spatial distribution of the musicians. Copyright: © 2014 Philippe Kocher.
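One way to picture the scheduling problem behind such tempo polyphony is to integrate each player's tempo function and locate where the beats fall; the tempo curves and integration step below are invented examples, not material from the paper or its implementation.

```python
def beat_times(tempo_bpm, n_beats, dt=0.001):
    """Numerically integrate beats-per-second until n_beats have elapsed,
    returning the onset time of each beat in seconds."""
    t, beats, onsets = 0.0, 0.0, []
    while len(onsets) < n_beats:
        beats += tempo_bpm(t) / 60.0 * dt
        t += dt
        if beats >= len(onsets) + 1:
            onsets.append(round(t, 3))
    return onsets

ramp = lambda t: 60.0 + 12.0 * t          # one player accelerates 60 -> 120 bpm over 5 s
steady = lambda t: 90.0                   # a second player holds 90 bpm
print(beat_times(ramp, 5))                # the two beat grids drift apart,
print(beat_times(steady, 5))              # which is what each device must display
```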


Ellberger E.,Zurich University of the Arts | Perez G.T.,Zurich University of the Arts | Schuett J.,Zurich University of the Arts | Zoia G.,Zurich University of the Arts | Cavaliero L.,Zurich University of the Arts
Proceedings - 40th International Computer Music Conference, ICMC 2014 and 11th Sound and Music Computing Conference, SMC 2014 - Music Technology Meets Philosophy: From Digital Echos to Virtual Ethos | Year: 2014

SSMN intends to develop a conceptual framework and a tool set that allow composers to integrate spatialization in musical notation from the onset of the creation process. As the composition takes form and graphic symbols expressing spatialization are introduced into the score, instant audio rendering provides feedback within a surround sound configuration. In parallel, SSMN helps interpreters and audio engineers to learn and master scores that contain complex instructions of motion in space, easily recognizable in both printed and animated electronic formats. First, the SSMN Spatial Taxonomy was established to identify key motion-in-space possibilities within a musical context; consequently, a collection of SSMN Symbols has been designed and implemented in a software library of graphical objects within MuseScoreSSMN, a dedicated editor that has been developed to allow interactive use of this library along with CWMN. In order to bridge the gap between visual elements and audio perception, the SSMN Rendering Engine application is at the heart of OSC inter-application communication strategies allowing the use of DAWs and user-defined programming environments along with MuseScoreSSMN. A prototype has been prepared and tested by a user group consisting of composers and performers. Further research shall address additional use cases integrating electroacoustic paradigms. © 2014 Emile Ellberger et al.


Maeder M.,Zurich University of the Arts
Proceedings - 40th International Computer Music Conference, ICMC 2014 and 11th Sound and Music Computing Conference, SMC 2014 - Music Technology Meets Philosophy: From Digital Echos to Virtual Ethos | Year: 2014

Since its creation by the composer Brian Eno in 1976, the term ambient has undergone significant change. The musical style ambient has developed into a framework of reception and terminology within which digital electronic music as well as visual art are conceived and received. The term ambient opens up a context of artistic and social practices reflecting a reality that is increasingly transported via and created by media technologies. Using as point of departure biologist Jakob von Uexküll's concept of «Umwelt» which postulates a world-generating context of body, cognition and environment, modern constructions of immanence are examined: Ambient as a sort of mimetic ceremony produces extremely complex yet coherent images of the world. The study develops a phenomenology of the sounds found in current ambient music as well as associations and meanings elicited by them. Ambient is a compound of spaces in which a reflection of the world takes place, created through artistic, social, geographical and increasingly virtual devices. The idea of space as the expansion of thought, enclosing its infinite movements as an absolute horizon is implied by the concept of the plane of immanence proposed by Gilles Deleuze and Félix Guattari. In Ambient, a soundtrack of immanence is created, a polyphonic sound of the environment as we experience it, which renders the world in its diversity imaginable and experienceable. Copyright: © 2014 Marcus Maeder.


Neukom M.,Zurich University of the Arts
Proceedings - 40th International Computer Music Conference, ICMC 2014 and 11th Sound and Music Computing Conference, SMC 2014 - Music Technology Meets Philosophy: From Digital Echos to Virtual Ethos | Year: 2014

This text describes the implementation of Ambisonics as user-defined opcodes (UDOs) for Csound. The presented package of UDOs includes a basic encoder and a decoder up to 8th order, an encoder with distance correction, an in-phase decoder, opcodes for the two-dimensional equivalent of Ambisonics for any order, opcodes for Ambisonics equivalent panning (AEP) and several utilities such as coordinate converters, a Doppler effect and more. Finally, the usage of the UDOs is explained in some examples. Copyright: © 2014 Martin Neukom.
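For orientation, the classic first-order B-format encoding that such an encoder generalizes to higher orders can be written down directly; this is the textbook formula with the traditional 1/sqrt(2) weighting on W, not code extracted from the Csound UDOs.

```python
import math

def encode_bformat(azimuth_deg, elevation_deg):
    """Return the first-order B-format gains (W, X, Y, Z) for a source
    direction; multiply a mono signal by these gains to get the four channels."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    w = 1.0 / math.sqrt(2.0)              # omnidirectional component
    x = math.cos(az) * math.cos(el)       # front-back
    y = math.sin(az) * math.cos(el)       # left-right
    z = math.sin(el)                      # up-down
    return w, x, y, z

# azimuth 90 degrees is to the left in the usual Ambisonics convention
print(["%.3f" % g for g in encode_bformat(90.0, 0.0)])
```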


Cesari P.,University of Verona | Camponogara I.,University of Verona | Papetti S.,Zurich University of the Arts | Rocchesso D.,IUAV University of Venice | Fontana F.,University of Udine
PLoS ONE | Year: 2014

The aim of the study is to reveal the role of sound in action anticipation and performance, and to test whether the level of precision in action planning and execution is related to the level of sensorimotor skills and experience that listeners possess about a specific action. Individuals ranging from 18 to 75 years of age - some of them without any skills in skateboarding and others experts in this sport - were compared in their ability to anticipate and simulate a skateboarding jump by listening to the sound it produces. Only skaters were able to modulate the forces underfoot and to apply muscle synergies that closely resembled the ones that a skater would use if actually jumping on a skateboard. More importantly we showed that only skaters were able to plan the action by activating anticipatory postural adjustments about 200 ms after the jump event. We conclude that expert patterns are guided by auditory events that trigger proper anticipations of the corresponding patterns of movements. © 2014 Cesari et al.


Peters N.,University of California at Berkeley | Lossius T.,Electronic Arts | Schacher J.C.,Zurich University of the Arts
Computer Music Journal | Year: 2013

SpatDIF, the Spatial Sound Description Interchange Format, is an ongoing collaborative effort offering a semantic and syntactic specification for storing and transmitting spatial audio scene descriptions. The SpatDIF core is a lightweight minimal solution providing the most essential set of descriptors for spatial sound scenes. Additional descriptors are introduced as extensions, expanding the namespace and scope with respect to authoring, scene description, rendering, and reproduction of spatial sound. A general overview presents the principles informing the specification, as well as the structure and the terminology of the SpatDIF syntax. Two use cases exemplify SpatDIF's potential for pre-composed pieces as well as interactive installations, and several prototype implementations that have been developed show its real-life utility. © 2013 Massachusetts Institute of Technology.


Zimper M.,Zurich University of the Arts | Lepetit M.,Zurich University of the Arts | Lypitkas N.,Zurich University of the Arts | Thoenen N.,Zurich University of the Arts
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2013

Night Shifts is a digital magazine developed for the iPad that presents articles and videos with a structured and explicit approach. The magazine app portrays seven different personalities who work overnight in Zurich; the portraits' total running time is 56 minutes. Classical print magazines inspire the design and ergonomics of Night Shifts. The interactive part of the app is reduced to a few intuitive navigation possibilities. The app is programmed in C+ and runs natively on the iPad. Thanks to pre-loading, the high-quality video content plays with no delay. The authors developed a colour concept, icons, a typography concept and a narrative concept to suit the thematic field of Night Shifts. The combination of text, photos and videos creates an inspiring body of work that captivates the reader by opening a new world of fascinating stories and personalities. © Springer International Publishing 2013.
