Grimes posted something in 2023 that professional musicians are still quietly furious about. She opened her vocal stems to AI training, invited anyone to generate music with her voice, and offered to split royalties 50/50. No feature requests, no label approval, no negotiation. Just: here are my tools, make something, we split it. The response from the industry was predictable. The response from the internet was a few hundred AI Grimes songs uploaded within the week.

What nobody quite had language for was what Grimes was actually doing. She wasn't licensing her voice. She was redesigning her relationship to her own creative output — separating her sonic identity from the physical act of making sound. That's a fundamentally different proposition. And it points directly at the discipline now called Vibe Composing.

What the discipline actually is

Vibe Composing is the practice of making music through taste, curation, and direction — without necessarily playing an instrument, recording vocals, or operating production software in the traditional sense. The Vibe Composer works at the level of intent: what should this feel like, where should it go, what's missing, what's in the way. The execution layer — which used to require technical mastery — is increasingly handled by tools.

This is not about automating music. It's about who gets to make musical decisions when the barrier to executing those decisions collapses.

For most of recorded music history, the gap between "I hear something in my head" and "that thing now exists as audio" was enormous. You needed years of instrumental training, or expensive studio time, or a producer who would mediate between your vision and something playable. The people who made music were largely the people who had cleared those hurdles. Everyone else listened.

That arrangement is over.

The range of people actually doing this

Holly Herndon has been working at the intersection of voice, identity, and AI since before it was a mainstream conversation. Her model Holly+ — trained on her own voice — allows anyone to transform audio into her vocal texture. What Herndon is exploring isn't AI as novelty. It's the question of what authorship means when a creative identity can be instanced across contributions she didn't make. The theoretical depth behind her work is substantial; the music it produces is stranger and more interesting than most of what's made through conventional means.

Anyma — the electronic project of Matteo Milleri, one half of Tale Of Us — uses AI generation as a compositional tool within a live context. The visual and sonic elements of his sets are entangled in ways that didn't exist before these tools. The crowd at Coachella in 2023, watching a performance that included AI-generated visuals and sound design responding in something close to real time, was experiencing something that has no clean precedent.

And then there's the Suno situation.

"The question isn't whether AI can make music. The question is whether music made this way can mean something."

— Holly Herndon, paraphrased from multiple interviews

Suno — the AI music generation platform — signed what was reported as its first record deal in late 2024. The mechanics of it were unusual enough that the music press didn't quite know how to cover it. A platform that generates complete tracks from text prompts, now in a commercial arrangement with a label. The tracks it produces are not all interesting. Some of them are. The ones that are interesting tend to be interesting for the same reason anything is interesting: someone made choices about what to ask for and what to keep.

The tension, addressed honestly

Musicians hate this. Not all musicians, but many, and the ones who are most vocal are usually the ones who spent the most years developing technical skill. That's understandable. It's also not a sufficient argument against the discipline.

Here's what's true: a lot of AI-generated music is bad. Competent-sounding, structurally coherent, emotionally empty. The kind of thing that fills a playlist without ever making you feel anything. This is a genuine problem, and it's not dishonest to point at it. The tools produce average outputs when directed by average taste. That's not a surprise. Every creative tool does the same thing.

Here's what's also true: the musicians who are most critical of AI composition tools are frequently using them. Not always publicly. But the workflow conversations happening in studios right now, if you're in them, reveal something more complicated than the public-facing discourse suggests. The same producer who tweets about protecting human artistry uses AI tools to sketch harmonic ideas before committing to arrangement. The hostility and the adoption are happening simultaneously.

The instrument was never sacred. The intention always was.

The Vibe Composing framework doesn't take a position on whether AI music is better or worse than human-made music. It takes a position on what constitutes creative authorship. A Vibe Composer who produces something emotionally true, sonically specific, and identifiably their own has composed something. The mechanism that generated the audio is a tool in the same category as a piano or a DAW. Different mechanism, same question: did the person behind it have something to say?

What Vibe Composing actually requires

This is where the discipline gets demanding in ways that aren't always obvious from the outside.

Making good music with AI tools requires an unusually developed sense of what's missing. The tools are good at producing something. They're not good at knowing when something is wrong in a way that matters. That judgment — the ability to hear a generated output and understand precisely what it lacks and why — is a skill. It develops through listening, through reference building, through the same deep immersion in sound that trained composers develop through years of practice. The route is different. The destination is the same.

Grimes didn't get where she is by having no relationship with music theory. Anyma didn't build his sonic world without years of developing a reference library. Holly Herndon has a PhD in composition from Stanford. The tools didn't replace their training. The tools changed what their training is for.

A Vibe Composer's library — the accumulated taste that informs every generative decision — is built the same way any creative arsenal is built: through sustained, intentional listening. The difference is that the library now powers a different kind of instrument. This is one of the central ideas in my upcoming book, The Vibe Creator: that the new creative disciplines require the same depth of input, just a different output mechanism.

The space it opens

Vibe Composing is producing work that wouldn't exist otherwise. Not because it's easier, but because it makes possible music that human hands couldn't execute — not for lack of talent, but for lack of the right interface between imagination and sound.

The composers who will define this discipline are the ones who are building their libraries with the same seriousness that previous generations built their technique. The ones who understand that the tools will keep improving and the taste is the durable asset. The ones who don't confuse accessibility with lack of rigor.

The fact that you can now make music without playing an instrument is not the end of the story. It's the beginning of a much more interesting one.

— IMAJIM

The Vibe Creator — The Book

The complete framework for the eighteen disciplines being reshaped by AI. Coming soon.

Get Early Access
#VibeComposing #AIMusicCreation #AIMusicProduction #VibeCreator #HollyHerndon #Grimes #Anyma #VibeCreating