IRCAM


The Sound Perception and Design group (PDS) at IRCAM is Work Package leader for WP4 (perception and cognition of imitations) and WP5 (automatic recognition of imitations). Members of two other groups at IRCAM, the Real-Time Musical Interaction group (IMTR) and the Analysis/Synthesis group (A/S), will also be involved in WP4 and WP5, respectively.

The Institut de Recherche et de Coordination Acoustique/Musique (IRCAM) is a non-profit research organisation founded in 1976. Its main activities include contemporary music production, R&D of music technologies, and music-related basic scientific research. The research at IRCAM covers the domains of sound analysis and synthesis, acoustics of musical instruments and concert halls, sound perception and design, music cognition, and computer science. IRCAM has a research laboratory association with the French Centre National de la Recherche Scientifique and the University Pierre-et-Marie-Curie (STMS-IRCAM-CNRS-UPMC). IRCAM has participated in several European funded projects: Esprit (FP3), CUIDAD (FP4), WedelMusic, Carrouso, Listen, Agnula, Music Network (FP5), Semantic Hi-Fi, CROSSMOD, i-Maestro (FP6), CLOSED, MINET, SAME, the SID COST action, Quaero, MIReS, Verve, 3DTVS and HC2 (FP7). IRCAM also participates in several national projects funded by the French National Research Agency (ANR).

The Sound Perception and Design group: The group's basic research activities include the loudness of non-stationary sounds, everyday sound perception and recognition, sound signalling, and sonic interaction design. The group participates in several applied projects in sound quality. It has coordinated the CLOSED European project (FP7) and participates in several national projects (ANR).

The Real-Time Musical Interaction group: The IMTR group conducts research and development on interactive music systems, gesture and sound modeling, interactive music synthesis, gesture capture systems and interfaces. The targeted applications concern primarily music performance and the performing arts, but the team also collaborates regularly on industrial projects for the development of audio software, sound simulation and gaming.

The Analysis and Synthesis group: The A/S group carries out research and development on sound analysis, transformation, and synthesis. The activities of the group include additive analysis/synthesis, automatic music indexing, control of sound synthesis for musical composition, orchestration, pitch recognition in a polyphonic context, processing by phase vocoder, and score following and alignment.

Key Staff

Patrick Susini (Local Manager) received a Ph.D. degree in Acoustics in 1999 and a Habilitation in 2011. He is the head of the Sound Perception and Design group. His research activities include everyday sound perception, loudness, and sound quality. He organised the first and second international symposia on sound design, in 2002 and 2004.

Nicolas Misdariis is a research fellow. He graduated from an engineering school specialized in mechanics, and holds a Master's degree in Acoustics and a PhD on the synthesis, reproduction, and perception of musical and environmental sounds. Since 1995, he has worked at IRCAM in different fields of research dealing with sound science and technology. In 1999, he contributed to the creation of the IRCAM Sound Design team, where he has mainly developed work related to sound synthesis, diffusion technologies, environmental sound and soundscape perception, auditory display, and interactive signification.

Olivier Houix received the PhD degree in acoustics in 2003 from the Université du Maine, Le Mans. His research interests concern the perception of environmental sounds and the gesture-sound relationship in sound design. He teaches audio engineering. He has been involved in national and European projects such as CLOSED.

Geoffroy Peeters received his Ph.D. degree in computer science from the Université Paris VI in 2001. His current research interests are in signal processing and pattern matching applied to audio and music indexing: timbre description, sound classification, music classification, audio identification, rhythm description, music structure discovery.

Frédéric Bevilacqua is the head of the Real-Time Musical Interactions team (IMTR). He holds a PhD in biomedical engineering from the Swiss Federal Institute of Technology in Lausanne (EPFL). Since 2003, he has been conducting research at IRCAM on gesture analysis and on gesture-based musical interaction systems.

Guillaume Lemaitre studies how human listeners make sense of sounds and use sounds to interact with their environment. He received a Ph.D. in Acoustics from the Université du Mans in 2004. Since 2000, he has worked with IRCAM, Carnegie Mellon University, IUAV University of Venice, Genesis Acoustics, and INRIA. His research activities include psychoacoustics, auditory cognition, auditory-motor interactions, and vocal imitations of sounds. His work has been applied to improve the sound quality of industrial products and human-computer interactions.

Enrico Marchetto received his Ph.D. degree in Information Engineering from the University of Padova in 2011. He was a visiting researcher at KTH in 2010. He has been involved in the DREAM project (European Culture Programme – numerical real-time simulation of vintage electroacoustic instruments) and in technology transfer activities (start-up founder). Since 2013, he has been at IRCAM, working on automatic recognition over large-scale audio datasets.

Jules Françoise received a PhD in Computer Science from IRCAM and Université Pierre et Marie Curie, Paris. His research focuses on user-centered interaction design using interactive machine learning, with particular attention to expressive movement and its interactions with sound. He has been involved as a postdoctoral researcher within the SkAT-VG project. http://julesfrancoise.com/

Gabriel Meseguer Brocal studied Telecommunications at Alicante University, Spain, and obtained his MSc degree in Sound and Music Computing at Pompeu Fabra University, Barcelona, Spain. He has worked as a researcher and developer in the domains of signal processing and machine learning. His current research topics are gesture description, identification, and classification.