This presentation introduces bellplay~, an open-source software framework for offline algorithmic audio, developed in Max/MSP within the bach ecosystem. bellplay~ lets users manipulate audio with scripts written in the bell programming language, offering a flexible, data-driven approach to algorithmic sound design. With bellplay~, composers and sound designers can implement custom algorithms for audio synthesis, processing, and analysis.
The lecture will be divided into two parts. First, I will provide an overview of bellplay~'s core features and workflow, focusing on its integration with the bach, dada, and ears packages. This section will emphasize the framework's script-based architecture, dynamic capabilities, and its seamless interface with the bell programming language. I will also highlight some advanced techniques possible in bellplay~, such as audio mosaicking, concatenative synthesis, and data-driven sampling.
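In bellplay~ these techniques are scripted in bell; as a rough, language-agnostic illustration of what concatenative synthesis and data-driven sampling involve, here is a minimal Python sketch. The frame size, the RMS matching feature, and the toy signals are all assumptions for the example, not bellplay~ code:

```python
import numpy as np

def frames(signal, size):
    """Split a 1-D signal into non-overlapping frames of `size` samples."""
    n = len(signal) // size
    return signal[:n * size].reshape(n, size)

def rms(f):
    """Root-mean-square energy per frame: a simple matching feature."""
    return np.sqrt(np.mean(f ** 2, axis=1))

def concatenative(target, corpus, size=512):
    """For each target frame, pick the corpus frame with the closest RMS
    and concatenate the picks into the output signal."""
    t_f, c_f = frames(target, size), frames(corpus, size)
    t_feat, c_feat = rms(t_f), rms(c_f)
    picks = [int(np.argmin(np.abs(c_feat - v))) for v in t_feat]
    return np.concatenate([c_f[i] for i in picks])

# toy data: a fading-in sine as target, amplitude-ramped noise as corpus
sr = 8000
target = np.linspace(0, 1, sr) * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
rng = np.random.default_rng(0)
corpus = rng.uniform(-1, 1, sr) * np.repeat(np.linspace(0, 1, sr // 512 + 1), 512)[:sr]
out = concatenative(target, corpus)
```

Real systems match on richer descriptors (spectral centroid, pitch, MFCCs) and crossfade between picks, but the select-by-feature-and-concatenate loop is the core of the technique.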
The second part will present a case study: *ludus vocalis*, a large-scale fixed-media multimedia work in which bellplay~ played a pivotal role in both the audio and the visuals. I will demonstrate how bellplay~ was used to design and assemble the entirety of the audio/musical material, while also generating control data to drive visuals in TouchDesigner, a popular tool for real-time audiovisual performance. This example illustrates bellplay~'s versatility, not only as a powerful audio tool but also as a system for multimedia projects.
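The abstract does not specify how the control data reached TouchDesigner; as a generic illustration of the idea, the sketch below derives one control value per frame from an audio buffer and serializes it as a CSV table, a format TouchDesigner can ingest (e.g. with a Table DAT). The `envelope` and `to_table` helpers, the frame size, and the column layout are all hypothetical:

```python
import csv
import io
import math

def envelope(samples, frame_size=256):
    """Per-frame peak amplitude: one control value per frame of audio."""
    return [max(abs(s) for s in samples[i:i + frame_size])
            for i in range(0, len(samples) - frame_size + 1, frame_size)]

def to_table(values, channel="amp"):
    """Serialize control values as a CSV table (frame index + channel name),
    suitable for import into a visuals environment."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["frame", channel])
    for i, v in enumerate(values):
        writer.writerow([i, round(v, 6)])
    return buf.getvalue()

# toy audio: a 440 Hz tone with a linear fade-in
sr = 8000
samples = [(i / sr) * math.sin(2 * math.pi * 440 * i / sr) for i in range(sr)]
table = to_table(envelope(samples))
```

Any per-frame audio feature (envelope, onset strength, spectral brightness) can be exported the same way and mapped to visual parameters on the receiving side.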
Through these two sections, attendees will gain a deep understanding of how bellplay~ provides compositional flexibility and technical precision for exploring algorithmic composition and audio processing.
March 27, 2025