mel_2bar_small
Ex latente.
— OUT OF THE LATENT · A SONG —
PROJECT LAVOS · MAGENTA.JS · MUSICVAE · TENSORFLOW.JS · TONE.JS · MMXXVI
MusicVAE is a variational autoencoder trained on millions of melodies.
It compresses each one into a coordinate in a 256-dimensional latent space — a single point
where every shift in any direction is another melody. Sample a random coordinate;
a melody comes out. Interpolate between two coordinates; the model fills in the
melodies along the path. This page runs Yotam Mann's @magenta/music in your
browser, generates melodies live, and plays them through a Tone.PolySynth.
Ex latente. Out of the latent. Every melody is a coordinate.
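The "sample a random coordinate" step above can be sketched in plain JavaScript. MusicVAE draws the latent vector from a standard Gaussian, which is what the random gaussian control refers to; the names here (`Z_DIMS`, `sampleLatent`, `randn`) are illustrative, not the library's API.

```javascript
// A minimal sketch of drawing a random latent coordinate, assuming a
// 256-dimensional space as this page states. Not the library's code.
const Z_DIMS = 256;

// Box-Muller transform: two uniform draws -> one standard normal draw.
function randn() {
  const u = 1 - Math.random(); // shift away from 0 to avoid log(0)
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Temperature below 1 pulls samples toward the latent origin (safer,
// more average melodies); above 1 pushes them outward (wilder ones).
function sampleLatent(temperature = 1.0) {
  return Array.from({ length: Z_DIMS }, () => temperature * randn());
}

const z = sampleLatent();
console.log(z.length); // 256
```

Decoding that vector back into notes is the network's job; the sketch only covers the coordinate.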
mel_2bar_small
random gaussian
triangle + reverb
The variational autoencoder is the architecture of compressed possibility. Train it on enough melodies and the network learns to map each one to a single point in a smooth, navigable latent space — close points in the space are similar melodies; the path between any two points is filled with intermediate melodies the network can decode. Every melody is a coordinate; every direction is another melody.
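The "path between any two points" is concrete arithmetic: intermediate coordinates between two latent vectors, each of which the decoder can turn into a melody. A minimal sketch, using straight linear interpolation (implementations sometimes prefer spherical interpolation instead); `lerpLatents` and the toy vectors are illustrative, not the library's API.

```javascript
// Walk the straight line between two latent coordinates in `steps`
// evenly spaced points, endpoints included.
function lerpLatents(z0, z1, steps) {
  const path = [];
  for (let i = 0; i < steps; i++) {
    const t = steps === 1 ? 0 : i / (steps - 1);
    path.push(z0.map((a, d) => a + t * (z1[d] - a)));
  }
  return path;
}

// Toy 3-dimensional example; the real space has 256 dimensions, and
// each point on the path would be decoded into a two-bar melody.
const path = lerpLatents([0, 0, 0], [1, 2, 4], 3);
console.log(path[1]); // [0.5, 1, 2]
```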
Yotam Mann (the author of Tone.js, named in Plate XV — the voice) joined Google's Magenta team in 2017. Magenta.js is the result. It runs the same models that ship in Magenta's Python research stack — MusicVAE, MelodyRNN, DrumsRNN — directly in the browser through TensorFlow.js. The model in this page is mel_2bar_small: a model with a 256-dimensional latent space, trained on two-bar melody fragments. Press SAMPLE; a coordinate is drawn at random; the network decodes it; the melody plays. The machine remembers the structure of music. The chord plays.
Sample a point. Interpolate between two. Listen to the path. Ex latente.