Advanced real-time sound and data processing with FTM&Co

STEIM Workshops

10 May 2010
13 May 2010
  • STEIM
  • Utrechtsedwarsstraat 134, Amsterdam

Start date: 10 May 2010
Lecturer(s): Diemo Schwarz
Course cost: €200 (free for MT/HKU students, 50% discount for MT/HKU alumni)

Registration is required for this workshop and can only be done through this registration form. Please register early to ensure a place. Places are limited to 15.



Advanced real-time sound and data processing with FTM&Co is a four-day advanced Max/MSP workshop by Ircam researcher-developer Diemo Schwarz. It provides an introduction to the FTM&Co extensions for Max/MSP, covering both the basics and the advanced use of the Ircam libraries FTM, MnM, Gabor and CataRT for interactive real-time musical and artistic applications.

The basic idea of FTM is to extend the data types exchanged between the objects in a Max/MSP patch with complex data structures such as matrices, sequences, dictionaries, break-point functions, tuples, and whatever else proves helpful for processing music, sound and motion-capture data. It also comprises visualization and editor components, operators (expressions and externals) on these data structures, and file import/export operators (SDIF, audio, MIDI, text, ...).
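To give a rough sense of what these richer data types look like, the sketch below is a conceptual analogy in Python (not FTM syntax; all names and values are illustrative): a small matrix of analysis frames, a dictionary describing a sound unit, and a break-point function read with linear interpolation, i.e. the kinds of structures FTM lets a patch pass between objects.

    # Conceptual analogy only -- FTM objects live inside Max/MSP patches,
    # not in Python. This merely illustrates the kinds of data involved.

    # A matrix-like structure: analysis frames,
    # one row per frame, one column per descriptor.
    frames = [
        [0.12, 0.30, 0.05],   # frame 0
        [0.10, 0.28, 0.07],   # frame 1
        [0.09, 0.31, 0.06],   # frame 2
    ]

    # A dictionary-like structure: named parameters attached to a sound unit.
    unit = {"file": "corpus/violin_01.aiff", "onset": 0.25, "duration": 0.12}

    # A break-point-function-like structure: (time, value) pairs,
    # read with linear interpolation between points.
    bpf = [(0.0, 0.0), (0.5, 1.0), (2.0, 0.2)]

    def bpf_value(points, t):
        """Linearly interpolate a break-point function at time t."""
        if t <= points[0][0]:
            return points[0][1]
        for (t0, v0), (t1, v1) in zip(points, points[1:]):
            if t0 <= t <= t1:
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        return points[-1][1]

    print(bpf_value(bpf, 1.0))  # about 0.733, between the 2nd and 3rd points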

As examples of applications in the areas of sound analysis, transformation and synthesis, gesture following, and manipulation of musical scores, we will look at the parts and packages of FTM that allow arbitrary-rate signal processing (Gabor), matrix operations, statistics and machine learning (MnM), corpus-based concatenative synthesis (CataRT, sketched below), sound description data exchange (SDIF), and Jitter support. The presented concepts will be put into practice through programming exercises on real-time musical applications and through free experimentation.
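As a rough illustration of the core idea behind corpus-based concatenative synthesis, the minimal Python sketch below (hypothetical data and names, not the CataRT implementation) selects the corpus unit closest to a target point in descriptor space, which is essentially what drives real-time unit selection and playback:

    import math

    # Hypothetical corpus: each unit is a short sound segment described
    # by a few audio descriptors (here pitch in Hz and loudness in dB).
    corpus = [
        {"file": "flute_a4.aiff", "pitch": 440.0, "loudness": -12.0},
        {"file": "flute_c5.aiff", "pitch": 523.3, "loudness": -10.0},
        {"file": "cello_g3.aiff", "pitch": 196.0, "loudness": -18.0},
    ]

    def select_unit(target, units):
        """Return the unit nearest to the target in descriptor space."""
        def distance(u):
            return math.hypot(u["pitch"] - target["pitch"],
                              u["loudness"] - target["loudness"])
        return min(units, key=distance)

    # In a real-time setting the target would come from a controller or
    # mouse position; selected units are then concatenated and played back.
    print(select_unit({"pitch": 450.0, "loudness": -11.0}, corpus)["file"])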

Diemo Schwarz

Diemo Schwarz is a researcher-developer in real-time applications of computers to music with the aim of improving musical interaction, notably sound analysis/synthesis and interactive corpus-based concatenative synthesis.

At Ircam (Institut de Recherche et Coordination Acoustique/Musique) in Paris, France, since 1997, he has combined his studies of computer science and computational linguistics at the University of Stuttgart, Germany, with his interest in music as an active performer and musician. He holds a PhD in computer science applied to music from the University of Paris, awarded in 2004 for the development of a new method of concatenative musical sound synthesis by unit selection from a large database. This work continues in the CataRT application for real-time interactive corpus-based concatenative synthesis within Ircam's Real-Time Music Interaction (IMTR) team.

Participation

This workshop is a collaboration with the Music Technology department of the Utrecht School of Music (HKU). If you're a student or alumnus of MT/HKU, please indicate this in the registration form. Students can participate for free, and alumni receive a 50% discount. Students are required to pay €50 at the time of registration, which will be refunded on the first day of the workshop.