From Terminal to Web: Building an AI Music Studio with Sonic Pi
Insight · May 25, 2025

I took my terminal-based AI music composer and transformed it into a full-featured web application with real-time visualization, live coding, and professional UI. Here's how it went.

What's New This Time?

A few weeks ago I discovered that you can run MIDI code with Python and have AI create music for you by executing the code it writes. This was an awesome discovery, but the music lowkey kind of sucked. So I put my head down this weekend and rebuilt the entire thing in Next.js using Sonic Pi (www.sonic-pi.net).

So What Does This New Version Do?

🎵 AI Music Generation (The Good Stuff)

  • Describe the music you want in a prompt → Sonic Pi code (way more musical than MIDI)
  • Edit the Sonic Pi code in real time and the music updates.
  • Use another prompt to modify the track.
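
Under the hood, that first bullet is basically one model call plus some cleanup: send the description with a system prompt, then strip the markdown fence from the reply before handing the code to Sonic Pi. A minimal sketch (function names and the system prompt are mine, not the repo's exact ones):

```typescript
// Build the chat messages sent to the model. The system prompt here is
// illustrative; the real app's prompt is more detailed.
function buildMessages(description: string) {
  return [
    {
      role: "system",
      content:
        "You are a Sonic Pi composer. Reply with only a runnable Sonic Pi " +
        "code block (Ruby syntax) that loops forever using live_loop.",
    },
    { role: "user", content: description },
  ];
}

// Models often wrap code in ```ruby fences; strip them before sending the
// code to Sonic Pi, otherwise the fence characters break the Ruby parse.
function extractSonicPiCode(reply: string): string {
  const match = reply.match(/```(?:ruby)?\s*([\s\S]*?)```/);
  return (match ? match[1] : reply).trim();
}
```

The nice part of keeping generation this thin is that the "modify the track" prompt is the same call with the current code appended to the messages.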

🎮 3D Audio Visualization (The Eye Candy)

  • A dope Three.js shader that reacts to audio frequency
  • Real-time microphone capture to visualize any audio

The Tech Stack Upgrade

Moving from a Python terminal app to the web required some upgrades:

  • Next.js 15 with TypeScript for the frontend
  • Sonic Pi instead of MIDI (way more expressive for electronic music)
  • Three.js for 3D graphics and shader programming
  • shadcn/ui for that professional look
  • OpenAI API for the AI generation (still the MVP)
  • OSC Protocol for real-time communication with Sonic Pi
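
The OSC side is simple enough to hand-roll: an OSC message is just the address pattern, a type-tag string, and the arguments, each padded with NULs to a 4-byte boundary. A minimal sketch in TypeScript/Node (treat the `/run-code` address and port 4557 as assumptions - this is how some Sonic Pi versions expose the local API, but it varies by version, so check your install):

```typescript
// Pad an OSC string to a multiple of 4 bytes with NULs (per the OSC 1.0 spec).
function oscString(s: string): Buffer {
  const len = Buffer.byteLength(s) + 1; // +1 for the terminating NUL
  const buf = Buffer.alloc(Math.ceil(len / 4) * 4); // zero-filled padding
  buf.write(s, 0, "utf8");
  return buf;
}

// Encode an OSC message with string arguments: address, type tags, then args.
function oscMessage(address: string, ...args: string[]): Buffer {
  const typeTags = "," + "s".repeat(args.length); // e.g. ",ss" for two strings
  return Buffer.concat([
    oscString(address),
    oscString(typeTags),
    ...args.map(oscString),
  ]);
}

// Fire-and-forget over UDP. Address and port are version-dependent:
// import dgram from "node:dgram";
// const sock = dgram.createSocket("udp4");
// const msg = oscMessage("/run-code", "token", "play 60");
// sock.send(msg, 4557, "127.0.0.1", () => sock.close());
```

Because it's one-way UDP, there's no error reply if Sonic Pi rejects the code - the app has to treat Sonic Pi's log as the source of truth.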

What I Learned Building This (Part 2)

Web Audio is Surprisingly Complex

Getting real-time audio analysis working in a browser required:

  • Understanding Web Audio API constraints
  • Microphone permission handling
  • Frequency analysis and FFT processing
  • Browser-specific audio context quirks
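
Most of that complexity funnels into one pattern: pull a frame of FFT data out of an `AnalyserNode`, then reduce it to a few band levels. The band boundaries below are my rough assumptions (with a 44.1 kHz context and `fftSize` 2048, each bin covers about 21.5 Hz), not the app's exact values:

```typescript
// Average a slice of FFT bins (0-255 values from getByteFrequencyData)
// and normalize to 0..1.
function bandLevel(bins: Uint8Array, from: number, to: number): number {
  let sum = 0;
  for (let i = from; i < to; i++) sum += bins[i];
  return sum / ((to - from) * 255);
}

// Rough bands assuming 44.1 kHz sample rate and fftSize 2048 (1024 bins).
function analyseFrame(bins: Uint8Array) {
  return {
    bass: bandLevel(bins, 0, 12),      // ≈ 0–250 Hz
    mid: bandLevel(bins, 12, 190),     // ≈ 250 Hz – 4 kHz
    treble: bandLevel(bins, 190, 512), // ≈ 4–11 kHz
  };
}

// In the browser the bins come from an AnalyserNode each animation frame:
// const analyser = ctx.createAnalyser(); analyser.fftSize = 2048;
// const bins = new Uint8Array(analyser.frequencyBinCount);
// analyser.getByteFrequencyData(bins); analyseFrame(bins);
```

The browser-specific quirks mostly live around this: the `AudioContext` has to be created (or resumed) inside a user gesture, and the microphone stream only arrives after `getUserMedia` permission is granted.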

Sonic Pi > MIDI for AI Generation

Sonic Pi's Ruby-based syntax is actually perfect for AI:

  • More expressive than MIDI for electronic music
  • Built-in timing and synchronization
  • Excellent for live coding and improvisation
  • AI can generate much more musical results

State Management in Music Apps is Hard

Managing playback state, generation history, and real-time updates across components required:

  • React Context for global state
  • Local storage for persistence
  • Careful handling of audio timing
  • Synchronization between multiple components

I created a 'Now Playing' context that all playback passes through. It manages the now-playing state so you can move around the app without interrupting the music. I also added a dope feature where you can tweak the code and then update the now-playing track. It was definitely a challenge to keep a single source of truth going while telling Cursor to add all these features.
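
Stripped of React, the core of that context is just a store with one setter and a subscriber list. A minimal sketch (names are mine; the actual app wraps this in React Context and persists to local storage):

```typescript
// Single source of truth for playback state. Every update goes through
// set(), so all subscribed components see the same state.
type NowPlaying = { code: string; isPlaying: boolean };

function createNowPlayingStore(initial: NowPlaying) {
  let state = initial;
  const listeners = new Set<(s: NowPlaying) => void>();
  return {
    get: () => state,
    // Merge a partial update and notify every subscriber.
    set: (patch: Partial<NowPlaying>) => {
      state = { ...state, ...patch };
      listeners.forEach((fn) => fn(state));
    },
    // Returns an unsubscribe function, matching the usual store convention.
    subscribe: (fn: (s: NowPlaying) => void) => {
      listeners.add(fn);
      return () => listeners.delete(fn);
    },
  };
}
```

The "tweak the code, then update the now-playing track" feature is then just `store.set({ code: editedCode })` followed by re-sending the code to Sonic Pi, rather than each component keeping its own copy of the track.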

I did most of this with Claude 4 Sonnet because of the drop this week, even though I've moved away from Claude in the last few weeks in favor of GPT-4.1 and Gemini 2.5. She does code better, but like all Anthropic models she likes to do her own thing - you have to keep a tight leash to maintain a single source of truth when building APIs.

The Visualization Deep Dive

// Custom vertex shader with Perlin noise
uniform float u_time;
uniform float u_frequency;

// ... noise functions ...

void main() {
    float noise = 3.0 * pnoise(position + u_time, vec3(10.0));
    float displacement = (u_frequency / 30.) * (noise / 10.);
    vec3 newPosition = position + normal * displacement;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
}

The visualizer analyzes audio in real time and maps:

  • Bass frequencies → Red color channel
  • Mid frequencies → Green color channel
  • Treble frequencies → Blue color channel
  • Overall amplitude → Mesh displacement
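
In code, that mapping is a pure function from band levels to shader uniforms. The uniform names and scaling below are illustrative rather than the repo's exact values (the ×30 just mirrors the `u_frequency / 30.` in the vertex shader):

```typescript
// Map normalized band levels (0..1) to the shader's colour and displacement
// uniforms: bass → red, mids → green, treble → blue, amplitude → displacement.
function toUniforms(bass: number, mid: number, treble: number) {
  const clamp = (x: number) => Math.min(1, Math.max(0, x));
  return {
    u_red: clamp(bass),
    u_green: clamp(mid),
    u_blue: clamp(treble),
    // Overall amplitude drives u_frequency; the vertex shader divides it
    // by 30, so scale up here to get visible mesh displacement.
    u_frequency: clamp((bass + mid + treble) / 3) * 30,
  };
}
```

Keeping this as a pure function makes it trivial to tune the response curve without touching the render loop.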

What Still Needs Work

Being honest about the current limitations:

Sonic Pi Dependency

  • Requires local Sonic Pi installation
  • Won't work in hosted environments without setup

Want to Try It?

The setup is a little involved, but way more rewarding:

Prerequisites

  • Node.js 18+
  • Sonic Pi installed locally
  • OpenAI API key

Quick Start

git clone https://github.com/williavs/agentic-composure
cd sonic-pi-composer
npm install --legacy-peer-deps
cp .env.example .env.local
# Add your OpenAI API key to .env.local
npm run dev

Then open Sonic Pi, visit localhost:3000, and start creating!

Example Prompts That Work Great

  • "Create a chill lofi hip hop beat with vinyl crackle"
  • "Generate an upbeat techno track with acid bass"
  • "Make ambient soundscape with reverb and delay"
  • "Create a drum and bass pattern with breakbeats"

AI as Creative Partner

Instead of replacing creativity, it amplifies it. You describe what you want, the AI generates the code, and you can immediately hear and modify it.

Live Coding Made Accessible

Sonic Pi is incredible but has a learning curve. This interface makes live coding more approachable while still allowing you to edit the actual code.

Real-Time Feedback Loop

The combination of AI generation, immediate playback, and visual feedback is a fun loop to get caught in. Try it - it's kind of addicting to start adding layers.

If you want to contribute, I think the following would be dope.

I'm thinking about:

  • Multi-agent collaboration - multi-step agents like in the terminal app would make the music better
  • Prompt generation?
  • Integration with LangChain's recently launched Open Agent Platform - would be dope for interchangeable or 'bring your own' agents
  • Is there a way to connect the web server to local Sonic Pi? I can do it with the bridge in the services directory on my local machine, but it doesn't work on www.sonicpicomposer.com. Devs? Where you at?

The code's all available at https://github.com/williavs/agentic-composure - check out the sonic-pi-composer directory if you want to see how it all fits together.

Willy out ❤️

NextJS · SonicPi · AI · CreativeCoding · ThreeJS · OpenAI
Back to Blog