Software that transforms human body movement into sound: body and technology united in a single artistic project.
What if your body were a musical instrument? Not a metaphor, not a dance: an actual instrument where every gesture produces a unique sound, where your posture shapes the sonic landscape around you.
CuerpoSonoro was born from that question. It's a software project that explores the connection between body movement and real-time sound generation, turning the human body into an expressive musical interface.
The software captures body movement through a camera and translates it into musical parameters. Every gesture, every posture generates a different sonic response. There are no scores or pre-recorded sequences: you create the sound with your own body, in real time.
The key artistic principle: the body doesn't "play notes" — the body shapes sound. It's not an instrument that triggers discrete events, but an interface that continuously moulds a living soundscape. Any position of the body produces an interesting sonic state; there are no "wrong notes". It's like putting your hands in clay: there's always a shape, only which shape changes.
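That "clay, not keys" principle can be sketched in code. The idea below is a minimal illustration, not the project's actual implementation: every pose is clamped into a valid parameter range (so there are no "wrong notes"), and a one-pole smoother keeps the sound moulding continuously instead of jumping between discrete events. All function names here are hypothetical.

```javascript
// One-pole low-pass smoother: eases a control value toward its target
// each frame, so the sound is shaped continuously rather than triggered.
function makeSmoother(alpha = 0.15) {
  let state = null;
  return (target) => {
    state = state === null ? target : state + alpha * (target - state);
    return state;
  };
}

// Clamp-and-map: every body position lands inside the valid parameter
// range -- there is always a shape, only which shape changes.
function bodyToParam(value, inMin, inMax, outMin, outMax) {
  const t = Math.min(1, Math.max(0, (value - inMin) / (inMax - inMin)));
  return outMin + t * (outMax - outMin);
}
```

With `alpha` around 0.1–0.2 the smoother responds quickly enough to feel immediate while filtering out the frame-to-frame jitter of pose detection.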
Experience CuerpoSonoro directly in your browser. The web demo uses MediaPipe.js for pose detection and the Web Audio API for sound synthesis — no installation needed.
If the demo doesn't load above, you can open it in a new tab.
CuerpoSonoro sits exactly where I like to work: the intersection of art, technology and human experience. It's a project that requires musical thinking, programming skill, and an understanding of the body as an expressive medium.
It represents what I believe technology should be: a tool that serves people, extends their creative capabilities, and creates experiences that wouldn't exist otherwise.
CuerpoSonoro runs a real-time pipeline that goes from camera capture to audio output in under 80 milliseconds. Here's how data flows through the system:
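As a rough sketch of that flow, each frame can be thought of as passing through a chain of pure per-frame stages: camera frame, pose landmarks, movement features, sound parameters. The stage names and bodies below are placeholders for illustration, not the project's actual modules.

```javascript
// Compose per-frame stages into one pipeline function.
const pipeline = (stages) => (frame) => stages.reduce((x, f) => f(x), frame);

// Placeholder stages standing in for the real modules:
const detectPose   = (frame)         => ({ landmarks: frame.landmarks });        // pose detection step
const extractFeats = ({ landmarks }) => ({ energy: landmarks.length / 33 });     // feature extraction step
const mapToSound   = ({ energy })    => ({ gain: Math.min(1, energy) });         // sound mapping step

const processFrame = pipeline([detectPose, extractFeats, mapToSound]);
```

Keeping each stage a pure function makes the latency budget easy to profile: each step can be timed independently against the overall sub-80 ms target.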
The system uses MediaPipe Pose to detect 33 body landmarks in real time at ~20–22 FPS. From these raw coordinates, a custom feature extraction module computes 17 descriptors that capture the character of your movement — not just where your body is, but how it moves:
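To give a feel for what such descriptors look like, here are two illustrative examples computed from `{x, y}` landmarks in normalized image coordinates (as MediaPipe Pose returns them). These are sketches under assumed definitions, not the project's actual 17 features.

```javascript
// Per-frame velocity of one landmark: how fast it moved since the
// previous frame, in normalized image units per second.
function landmarkVelocity(prev, curr, dt) {
  const dx = curr.x - prev.x;
  const dy = curr.y - prev.y;
  return Math.hypot(dx, dy) / dt;
}

// "Extension": average distance of the wrists from the chest midpoint,
// a rough measure of how open or closed the posture is.
function extension(chest, leftWrist, rightWrist) {
  const d = (p) => Math.hypot(p.x - chest.x, p.y - chest.y);
  return (d(leftWrist) + d(rightWrist)) / 2;
}
```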
Each movement feature controls a specific aspect of the sound. The mapping is designed to feel intuitive — your body understands the connection before your mind does:
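One mapping of this kind might look like the sketch below: hand height driving oscillator pitch. The function and its parameter values are hypothetical, but the design choice is general — an exponential curve feels more musical than a linear one, because pitch perception is logarithmic.

```javascript
// Map hand height (0 = bottom of frame, 1 = top) to oscillator
// frequency in Hz, interpolating geometrically between fMin and fMax.
function heightToFrequency(height, fMin = 110, fMax = 880) {
  const t = Math.min(1, Math.max(0, height));
  return fMin * Math.pow(fMax / fMin, t);
}
// In the browser, the result would feed a Web Audio AudioParam, e.g.:
//   osc.frequency.setTargetAtTime(heightToFrequency(h), ctx.currentTime, 0.05);
```

Using `setTargetAtTime` rather than setting the value directly lets the Web Audio engine glide between values, which matches the "shaping, not triggering" principle.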
CuerpoSonoro is open source. Dive into the code, run it locally, or build on top of it.