Research Topics
Spatial Audio

While modern spatial audio technologies offer increasingly dynamic means of creating highly detailed virtual environments, the fundamental challenge remains: artists must integrate their creative concepts with the technical demands of controlling digital rendering systems.

As part of Ircam’s 2012 Musical Research Residency, I was composer-in-residence with Ircam’s Acoustic and Cognitive Spaces team, exploring the aesthetic nuances of Wave Field Synthesis (WFS) and Higher Order Ambisonics (HOA). After a period of experimentation, I developed a working method in which spatial rendering signal processing is treated as a kind of spatial-instrumental timbre, which can then be orchestrated with other instrumental textures. For the 2012 inauguration of Ircam’s WFS/HOA system, I composed Fluoresce, for cellist Séverine Ballon and WFS/HOA, the first piece at Ircam to integrate the two systems.

A discussion of the ideas developed during my residency will be published in the Computer Music Journal (Winter 2016, 40:4) in an article "Holophonic Sound in IRCAM’s Concert Hall: Technological and Aesthetic Practices", co-authored with Thibaut Carpentier, Natasha Barrett and Markus Noisternig.

Other recent projects in spatial audio include work with CNMAT's spherical loudspeaker; a collaboration with CNMAT and Meyer Sound to develop a Pd-based system to run on Meyer’s D-Mitri processing system; and the creation of a set of Max for Live Ambisonics authoring tools for the San Francisco-based group ENVELOP.

In 2017-18 I will be in residence with Ircam’s Musical Representations team to develop a set of new authoring tools for spatial audio, extending my work with computer performance of graphic scores. In conjunction with this residency, I will also be at the ZKM working on a camera-based instrumental system for controlling audio and video movement, expanding on ideas from my recent piece Apophänie.

Further information:
- Video: Presentation on Spatial Composition at Ircam
- 2012 Ircam Musical Research Residency Report: "Studies on the Compositional Use of Space"
- Ircam's Spatialisateur (a.k.a. Spat) software for spatial rendering, unsurpassed in flexibility and power; the Spat release includes several tutorials, utility patches, and scripts that I developed during my residency.
New Instrument Design

As an audience member, composer, and former improviser, I am fascinated by the way physical forms shape how we experience and interact with the world. Instruments provide us with a means to create sound, physical movement, and images, and, beyond these, words and ideas.

My work with instrument design in some sense encompasses all the other aspects of my work, from the vantage point that an instrument can be constructed out of virtually anything that affects our experience. This focus has drawn my attention to human-computer interaction, exploring the use of multi-touch tablets, XYZ sensors, cellphone accelerometers, game controllers, theremins, and 2D and 3D video cameras, in addition to the standard musical input devices (air and contact mics, MIDI controllers, pedals, etc.) used with instrumentalists and dancers. Currently I am working on a microbiology-inspired video-puppetry piece that uses physical objects tracked with computer vision to control sound (see Apophänie).

Over the past two years, I have been funded through an Andrew W. Mellon Digital Humanities grant to help redesign CNMAT’s computer music curriculum. Through this initiative, led by Edmund Campion in collaboration with Adrian Freed, I have worked to create a new sequence of sound and media interaction courses, culminating in a new Situated Instrument Design course, which I taught for its trial run this past spring. Based on a pedagogical model centered on a performance-based approach to electroacoustic music, the course draws on David Wessel’s long history of interaction research at IRCAM and CNMAT.

In addition to the new instrument design curriculum, I was also recently involved in a separate Mellon Digital Humanities initiative based in UC Berkeley’s Department of Theater, Dance, and Performance Studies (TDPS), where I worked with choreographer Lisa Wymore on the creation of a camera-based interaction studio for dance. The project works toward a standardized system for movement tracking, aiming to provide a stable environment in which students can learn techniques for interaction design with dance. (For more information, please see the GitHub repository: https://github.com/ramagottfried/optic.)
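To give a sense of the kind of movement data such a studio system can produce, here is a minimal Python sketch using OpenCV frame differencing to report the centroid of motion in each camera frame. It is illustrative only, under the assumption of a simple webcam setup; the actual pipeline in the optic repository may differ.

```python
# Minimal frame-differencing tracker: report the centroid of motion in
# each camera frame, the kind of control signal a dance interaction
# system can map to sound. (Sketch only; not the optic repo's pipeline.)
import cv2

cap = cv2.VideoCapture(0)                      # default camera
ok, prev = cap.read()
if not ok:
    raise RuntimeError("no camera available")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)             # where did pixels change?
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] > 0:                           # some motion was detected
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"/motion/centroid {cx:.1f} {cy:.1f}")   # e.g. send as OSC
    prev = gray
```

The printed address/value pairs stand in for whatever transport the studio uses (OSC being the obvious candidate given the rest of this page).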

I view all of these projects as examples of expanded instrument design, which can be used in many different artistic contexts.

Further information:
- Wacom/transducer/feedback instrument designed for the new work Flint, composed for ensemble mosaik
- Cello-based sensor/transducer instrument for Jessie Marino: Prototype 12011
- Live video, sensor array, and network design for choreographer Ashley Ferro-Murray's Noisesense
- Instrument design data-flow chart developed as part of my curriculum design work
Installations / Automata

My current work in sound art and pieces for mechanical performance centers on the activation of environmental and object awareness, explored through immersive spaces of soft hyperactivity and intimately scaled kinetic instrumental automata. These are constructed from a variety of media, including spatial audio processing, microcontrollers, stepper motors, transducers, and other actuation devices, to create fundamentally physical electronic sounds.

Further information:
- Collaborations with Berlin-based visual artist Hideaki Idetsuki:
- Spectacle landscape on the polar circle (pictured above left and center), a multi-channel transducer and paper environment
- Cloud machines, a kinetic sound installation that creates sound by changing the structure of hanging paper clouds with internal motors and wires, along with transducers embedded in the sculptures
- Radio nest, a work for microcontroller-operated FM transmitters
- Prototype for dreaming object (pictured above right), a feedback-based object that reads light patterns and interprets them as a musical score, played as pitches by the movement of its stepper motors, which in turn recompose the light patterns
- Music box organism, a 100 meter loop of hole-punched plastic, performed by an ensemble of music boxes, each tuned to a different scale.
SVG Score Project

After moving my composition workspace from pen and paper to Adobe Illustrator, I stumbled upon the fact that vector-graphic scores contain many parameters of information that can be interpreted and performed in different ways. The score is a well-developed form of communication: human-readable, and visually expressive of the inner content of a work. With the ability to create new interpretations of symbols and graphic organization, new types of notation might allow composers, sound artists, visual artists, architects, and other visual thinkers to work more freely with the representation and performance of their ideas in digitally controlled systems.

The aim of the SVG Score Project is to provide a context in which users can draw any symbol they like and assign it any meaning (as with pencil and paper). Using a hierarchical grouping scheme, scores are saved in SVG format and then converted into an OSC bundle in Max/Pd. Once converted to OSC, the files can be interpreted according to their assigned meanings and performed by the computer.
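As a rough illustration of the conversion idea, the Python sketch below flattens an SVG's group hierarchy into OSC-style address/value pairs, with each named group contributing one level of the address. The actual conversion in the project happens inside Max/Pd via odot; the function name and addressing scheme here are assumptions for demonstration.

```python
# Sketch: flatten an SVG's group hierarchy into OSC-style address/value
# pairs, roughly how a score's named groups can become an OSC bundle.
# (Illustrative only; the project's real conversion is done in Max/Pd.)
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def svg_to_osc(filename):
    bundle = {}
    root = ET.parse(filename).getroot()

    def walk(node, prefix):
        # a group's id (or its tag name) becomes the next address level
        name = node.get("id") or node.tag.replace(SVG_NS, "")
        addr = f"{prefix}/{name}"
        if node.tag == SVG_NS + "g":
            for child in node:
                walk(child, addr)
        else:
            # keep the element's drawing attributes as the payload
            bundle[addr] = {k: v for k, v in node.attrib.items()
                            if k != "id"}

    for child in root:
        walk(child, "")
    return bundle

# e.g. {'/voice1/note3': {'cx': '120', 'cy': '44', 'r': '5'}, ...}
print(svg_to_osc("score.svg"))
```

The resulting dictionary mirrors an OSC bundle's address space: the visual hierarchy of the score becomes the namespace through which a performance system can look up and interpret each symbol.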

In summary, the scores serve three functions: first, as a way for the artist to refine and explore ways of visually representing their ideas; second, as a way to perform electronically controlled sounds while authoring in an abstract, symbolic way; and third, as a method for creating publishable documentation of the artistic thought that went into the piece, which can be understood, studied, and built upon in the future.

The images above show an example score in Adobe Illustrator, followed by its text representation in SVG format, and finally its conversion into an OSC bundle containing the complete score information. For more information, including demonstration videos and MaxMSP tools, please see the GitHub repository: https://github.com/ramagottfried/odot-svg

See also my recent paper, “SVG to OSC Transcoding: Towards a Platform for Notational Praxis and Electronic Performance”, presented at the 2015 TENOR International Conference on Technologies for Music Notation and Representation (Ircam/Sorbonne).

Computer Aided Composition and Analysis

Ongoing work in computer-aided analysis, the development of compositional structures, and audio processing such as concatenative-synthesis audio mosaicing, predominantly scripted in JavaScript and Python.
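For readers unfamiliar with audio mosaicing, a toy Python sketch of the basic algorithm follows: segment a corpus into fixed-size grains, describe each by a simple feature (here, spectral centroid), then rebuild a target signal from its nearest-matching corpus grains. This is a generic illustration of the technique, not the scripts mentioned above.

```python
# Toy concatenative synthesis / audio mosaicing: describe fixed-size
# corpus grains by a crude spectral-centroid feature, then rebuild a
# target signal from its nearest-matching grains. (Sketch only.)
import numpy as np

def centroid(x, sr=44100):
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    return np.sum(freqs * spec) / (np.sum(spec) + 1e-12)

def mosaic(target, corpus, grain=1024):
    # segment both signals into non-overlapping grains
    seg = lambda x: [x[i:i + grain] for i in range(0, len(x) - grain, grain)]
    corpus_grains = seg(corpus)
    feats = np.array([centroid(g) for g in corpus_grains])
    out = []
    for g in seg(target):
        best = np.argmin(np.abs(feats - centroid(g)))  # nearest feature
        out.append(corpus_grains[best])
    return np.concatenate(out)

# usage: reconstruction = mosaic(target_signal, corpus_signal)
```

Real mosaicing systems use richer feature vectors (MFCCs, loudness, pitch) and overlap-add windowing, but the match-and-concatenate structure is the same.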

Currently in development is a collection of scripts for Adobe Illustrator to handle tasks such as extracting parts from a full score, managing hierarchical representation systems, and providing utility search functions for named score elements.
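The Illustrator scripts themselves run in ExtendScript, but since the scores are ultimately SVG files (see the SVG Score Project above), the part-extraction task can be sketched in Python on an exported SVG. Everything here, file names and the convention that each top-level group is one part, is an assumption for illustration.

```python
# Sketch of part extraction on an exported SVG: keep only the top-level
# group whose id matches the requested part name. (The actual scripts
# run inside Illustrator via ExtendScript; this is illustrative.)
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)  # avoid ns0: prefixes on write

def extract_part(score_svg, part_id, out_svg):
    tree = ET.parse(score_svg)
    root = tree.getroot()
    for group in list(root.findall(f"{{{SVG_NS}}}g")):
        if group.get("id") != part_id:
            root.remove(group)          # drop every other part's layer
    tree.write(out_svg)

extract_part("full_score.svg", "cello", "cello_part.svg")
```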

Further information:

Figures above, left to right: a distribution of meter type vs. time in Messiaen's Chronochromie, analysis of a saxophone multiphonic, and a correlation matrix of Berio's Visage.

MaxMSP / Pure Data externals

I am actively developing externals for MaxMSP and Pure Data (Pd) for spatial audio processing, granular synthesis, user interfaces, and other applications where no existing solutions are available. Now open source and available on GitHub, this work is maintained in my "tilde" library (link below).

For example, the granubuf~ object is a sample-accurate granular synthesis external for Max, featuring high-resolution control of low-level parameters (useful for time stretching), an arbitrary number of output channels, and sample-accurate assignment of each grain's sample buffer, window buffer, and all grain parameters. See the granubuf~ and shot-ms~ help patches for more information.
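granubuf~ itself is a compiled Max external; as a conceptual sketch only (not its implementation), sample-accurate granular synthesis comes down to scheduling each grain at an exact sample offset, with its own source position and window, and summing:

```python
# Conceptual sketch of sample-accurate granular synthesis: each grain
# carries its own output onset and source read position (in samples)
# and is summed into the output at exactly that sample.
# (Not granubuf~'s actual code; illustrative only.)
import numpy as np

def render_grains(source, grains, out_len, win_len=1024):
    window = np.hanning(win_len)
    out = np.zeros(out_len)
    for onset, src_pos in grains:            # both in samples
        seg = source[src_pos:src_pos + win_len]
        if len(seg) < win_len or onset + win_len > out_len:
            continue                         # skip grains that overrun
        out[onset:onset + win_len] += seg * window
    return out

# time stretching by a factor of 2: source read positions advance half
# as fast as the grains' output onsets
hop = 256
grains = [(i * hop, (i * hop) // 2) for i in range(400)]
# stretched = render_grains(source_signal, grains, out_len=400 * hop + 1024)
```

Decoupling each grain's output onset from its source read position is what makes time stretching (and per-grain buffer/window assignment) fall out naturally from this design.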

The Odot library for MaxMSP and Pd, by John MacCallum, Adrian Freed, Rama Gottfried, and Ilya Rostovtsev, centers on “o.expr,” an expression language for dynamic, object- and agent-oriented computation of gesture signal processing workflows using OSC bundles. The library, built on more than twenty years of CNMAT experience, is a powerful system for synchronous processing of data, especially useful in dataflow-based languages such as MaxMSP and Pure Data. My work Fluoresce, for solo cello, WFS, and HOA at Ircam, was the first large-scale production to use Odot; the library was crucial for synchronizing the seven networked computers used to render the piece.
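o.expr's actual syntax lives inside Max/Pd patches; as a rough Python analogue (all names here are hypothetical), the core idea is that an OSC bundle is a bag of address/value pairs, and an expression transforms the whole bundle in one synchronous step:

```python
# Rough Python analogue of the Odot idea (names hypothetical; o.expr's
# real syntax lives in Max/Pd): an OSC bundle is a set of address/value
# pairs, and an expression reads and writes addresses in one step.
def o_expr(bundle, expr):
    """Apply expr (a function over the bundle dict); return a new bundle."""
    new = dict(bundle)
    expr(new)
    return new

b = {"/pitch": 60, "/velocity": 90}
b = o_expr(b, lambda d: d.update(
    {"/freq": 440.0 * 2 ** ((d["/pitch"] - 69) / 12)}))
print(b)  # {'/pitch': 60, '/velocity': 90, '/freq': 261.62...}
```

Because the whole state travels as one bundle and is transformed atomically, the same data can be serialized over the network unchanged, which suggests why this model suits synchronizing multiple rendering machines.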

- Downloads for Odot Library: https://github.com/CNMAT/CNMAT-odot/releases

Modular and Embedded Synthesis