Sound Canvas — Real-Time AI Album Art at All About Music 2025

Interactive · AI/ML · Installation · Generative
INTRODUCTION
Commissioned by ChordFather Studio for All About Music 2025 — India's largest music industry conference. Sound Canvas was the centerpiece audience-engagement installation: a multi-screen system that turned crowd sketches into finished album art in real time, while the gallery wall filled up with everything the audience had made.
MY ROLE
Lead Designer & Developer — built the iPad input app, the TouchDesigner show controller, and the real-time SDXL inference pipeline.
timeline
2025
situation
Music conferences are static — booths, brochures, no real participation. Attendees want to make something, not just collect lanyards. Generative AI tools can deliver that, but only if they work for non-technical visitors holding an iPad with 30 seconds to spare. ChordFather wanted a piece that turned the crowd into the cover artists for AAM 2025 — and made every creation worth posting.
task
  • Non-technical visitor picks a theme and starts drawing in seconds
  • System turns rough strokes into a cover that feels intentional, not noisy
  • Output latency feels live, not "wait while the AI thinks"
  • Every creation lands on a public gallery wall the room can see

action
Sound Canvas system architecture diagram — iPad input through TouchDesigner show controller to ComfyUI inference, outputting to live projection and gallery wall, with ~800ms end-to-end latency.
  • Input — iPad web app. Visitors entered their name, picked a theme, and drew freely. The app captured strokes + theme + name and pushed everything to the show controller (see the intake sketch after this list).
  • Show controller — TouchDesigner. The main projection ran a live gallery of every cover the crowd had generated, switching to a generation view whenever someone was actively drawing. TouchDesigner orchestrated the full loop: input intake, latent prep, ComfyUI calls, gallery refresh.
  • Inference — ComfyUI via API. Evaluated StreamDiffusion but landed on SDXL Lightning running img2img: the user's drawing was encoded to a latent with controlled noise injection (instead of starting from an empty latent), so the structure of the sketch carried through diffusion. The selected theme drove the text prompt. Lightning's distilled steps gave better fidelity than StreamDiffusion at comparable latency, and color and composition stayed faithful to the rough strokes (see the API sketch after this list).
  • Crossfade smoothing. New generations crossfaded into the previous frame on the projection so the screen felt continuous, not stop-motion (sketched below).
  • Latency: ~0.8 seconds end-to-end, finger-up to cover on screen.
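A minimal sketch of the intake step, assuming the iPad app pushes one JSON message per submission over a WebSocket to a Web Server DAT in TouchDesigner. The payload fields mirror the Input bullet above (strokes, theme, name); the exact schema and the operator names ('submission', 'strokes_json', 'show_state') are illustrative, not the shipped ones.

```python
# webserver1_callbacks -- Web Server DAT callbacks DAT (TouchDesigner Python).
import json

def onWebSocketReceiveText(webServerDAT, client, data):
    """Receive one drawing submission from the iPad app (schema assumed)."""
    msg = json.loads(data)
    # e.g. {"name": "Asha", "theme": "synthwave", "strokes": [[x, y, ...], ...]}
    sub = op('submission')                # Table DAT holding the active session
    sub.clear()
    sub.appendRow(['name', msg['name']])
    sub.appendRow(['theme', msg['theme']])
    op('strokes_json').text = json.dumps(msg['strokes'])
    # Flip the projection from gallery mode to generation mode while drawing.
    op('show_state').par.value0 = 1       # hypothetical Constant CHOP flag
```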
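The structure-preserving img2img step maps naturally onto ComfyUI's HTTP API: load the sketch, VAE-encode it to a latent, and sample with denoise below 1.0 so only partial noise is injected and the stroke composition survives. Below is a hedged sketch of that call; the node IDs, checkpoint filename, denoise value, and endpoint address are assumptions, and the production graph was built in the ComfyUI editor rather than by hand.

```python
import json
import urllib.request

COMFY = 'http://127.0.0.1:8188'   # assumed local ComfyUI instance

def queue_cover(sketch_filename: str, theme_prompt: str, denoise: float = 0.55):
    """Queue one SDXL Lightning img2img job; returns ComfyUI's prompt_id."""
    workflow = {
        '1': {'class_type': 'CheckpointLoaderSimple',
              'inputs': {'ckpt_name': 'sdxl_lightning_4step.safetensors'}},
        '2': {'class_type': 'LoadImage', 'inputs': {'image': sketch_filename}},
        # Encode the visitor's sketch to a latent instead of starting empty.
        '3': {'class_type': 'VAEEncode',
              'inputs': {'pixels': ['2', 0], 'vae': ['1', 2]}},
        '4': {'class_type': 'CLIPTextEncode',   # theme drives the text prompt
              'inputs': {'text': theme_prompt, 'clip': ['1', 1]}},
        '5': {'class_type': 'CLIPTextEncode',
              'inputs': {'text': 'blurry, text, watermark', 'clip': ['1', 1]}},
        '6': {'class_type': 'KSampler',
              'inputs': {'model': ['1', 0], 'positive': ['4', 0],
                         'negative': ['5', 0], 'latent_image': ['3', 0],
                         'seed': 0, 'steps': 4, 'cfg': 1.0,
                         'sampler_name': 'euler', 'scheduler': 'sgm_uniform',
                         # denoise < 1.0 = controlled noise injection, so the
                         # sketch's structure carries through diffusion
                         'denoise': denoise}},
        '7': {'class_type': 'VAEDecode',
              'inputs': {'samples': ['6', 0], 'vae': ['1', 2]}},
        '8': {'class_type': 'SaveImage',
              'inputs': {'images': ['7', 0], 'filename_prefix': 'cover'}},
    }
    req = urllib.request.Request(
        f'{COMFY}/prompt',
        data=json.dumps({'prompt': workflow}).encode(),
        headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())['prompt_id']
```

The denoise value is the fidelity knob: lower keeps more of the visitor's strokes, higher gives the model more room to polish.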
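The crossfade itself can live in one Cross TOP. The sketch below, assuming operators named 'cross1', 'cover_prev', and 'cover_next', eases the blend over a fixed frame count using TouchDesigner's run() for per-frame scheduling; parameter and operator names are illustrative.

```python
# Crossfade driver (sketch): blends the new cover over the previous one.
FADE_FRAMES = 30    # roughly 0.5 s at 60 fps

def show_new_cover(path):
    op('cover_next').par.file = path    # Movie File In TOP with the new image
    op('cross1').par.cross = 0          # Cross TOP starts fully on old frame
    run_fade()

def run_fade(step=0):
    t = step / FADE_FRAMES
    op('cross1').par.cross = t * t * (3 - 2 * t)    # smoothstep easing
    if step < FADE_FRAMES:
        # Re-schedule ourselves one frame later via TouchDesigner's run().
        run('args[0](args[1])', run_fade, step + 1, delayFrames=1)
    else:
        # Fade finished: the new cover becomes the base for the next fade.
        op('cover_prev').par.file = op('cover_next').par.file.eval()
```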
variants - same engine, different surface
Personal explorations after the event, applying the pipeline to product customization. Same TouchDesigner + ComfyUI plumbing — input swapped from a freehand drawing to a reference photo, with IP-Adapter added to the workflow for stylized outputs conditioned on a single reference image.
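Concretely, the variant can be expressed as a patch on the same workflow graph: the LoadImage input becomes a reference photo, and an IP-Adapter pass conditions the model on it. A hedged sketch, assuming the community ComfyUI_IPAdapter_plus nodes and the img2img graph from the earlier sketch; class names and the weight value are illustrative.

```python
def add_ip_adapter(workflow: dict, weight: float = 0.8) -> dict:
    """Condition the img2img graph on a reference image via IP-Adapter.
    Node class names follow ComfyUI_IPAdapter_plus and are assumptions."""
    workflow['9'] = {'class_type': 'IPAdapterUnifiedLoader',
                     'inputs': {'model': ['1', 0], 'preset': 'STANDARD'}}
    workflow['10'] = {'class_type': 'IPAdapter',
                      'inputs': {'model': ['9', 0], 'ipadapter': ['9', 1],
                                 'image': ['2', 0],   # reference photo
                                 'weight': weight}}
    # KSampler now samples through the IP-Adapter-conditioned model.
    workflow['6']['inputs']['model'] = ['10', 0]
    return workflow
```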
👇 Converse
👇 T-shirt #1
result
  • 3,000+ visitors participated across the 2-day conference.
  • Hundreds of unique album covers generated, all surfaced live on the gallery wall, each tagged with the visitor's name.
  • First AI art installation at an Indian music conference.
  • Architecture validated as portable to product customization (T-shirt + Converse).
👇 Sunny MR (ChordFather) on Sound Canvas
"He has made an AI-assisted workflow on the iPad — AI cover art that just works. A lot of people were interested. Next year we'll bring more of this in a bigger way."                
— Sunny MR, ChordFather Studio