Our journey at Moises has always been guided by one mission: empowering creative potential. From day one, our goal has been to make musicians of any level more effective and creative by building AI tools that collaborate with them. We are focused on leveraging AI as a co-creator, always working alongside you and amplifying your abilities and ideas. This approach has resonated with creators worldwide, helping us reach over 65 million users across 190 countries. Along the way, we have been recognized as Google Play’s Best App for Personal Growth (2021), Apple’s iPad App of the Year (2024), and an Apple Design Awards finalist (2025).
Over the past few years, you have come to know Moises for our industry-leading stem separation, the ability to isolate vocals, drums, bass, and other instruments from any song. To create this technology, we invested heavily in licensing, producing, and annotating world-class data, and in pioneering new technologies. The same foundation that lets us take music apart now lets us build it back up. We are ready for our next frontier: generating stems.
Generative AI, with Musicians in the Driver’s Seat
The music industry is witnessing incredible shifts with generative AI. There’s excitement, resistance, lawsuits, and a lot of important debate about what the future holds. Early on, we made a clear choice.
Instead of training massive models to output entire songs, we focused on what musicians and producers actually need: the ability to generate context-aware instrument parts, or stems. Think of this feature as a bandmate that listens, takes cues from the music you already have, and contributes a part that fits.
This reflects our core belief that AI should empower creators. Our generative tools work alongside traditional production workflows, expanding what is possible while keeping human creativity at the center.
The First Stem-by-Stem Generation Model
Our data science team has spent the past year researching, iterating, and innovating, guided by the latest scientific advances and a research community that lights the way. The result is a stems-first, not songs-first, approach to AI music generation.
Full-song generators are typically trained on mixed recordings and output a finished stereo file that is hard to edit. Our models are trained on isolated, high-quality instrumental stems, so they learn and generate at the instrument level. At generation time, they listen to your musical context, then create new parts that lock to it. You get controllable, reusable tracks—not an uneditable file.
Our model conditions on three complementary signals and balances them during stem generation:
- Audio context: one or more user-provided stems, for example a guitar or piano track; used to infer tempo, the bar and beat grid, phrasing, structure, and key or scale, so new parts lock to timing and form.
- Style or content conditioning: a target sound specified by an audio reference, a text prompt, or a preset; steers timbre, playing technique, micro-patterns, and genre feel.
- Harmony adherence: a control that determines how strictly the new stem follows the chords, scale, and harmonies extracted from audio context.
The model balances all three during generation. For finer creative control, you can set an independent weight for each axis depending on what you want to prioritize: higher weights tighten adherence, lower weights encourage exploration.
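We have not published our model internals, but one common way to balance several conditioning signals in diffusion-style generators is classifier-free guidance with a separate weight per signal. The sketch below is a minimal illustration of that general idea under that assumption, not our actual implementation; every name in it is hypothetical.

```python
import numpy as np

def guided_denoise(denoise, x_t, t, signals):
    """Blend several conditioning signals, classifier-free-guidance style.

    denoise(x, t, ctx) is the model's prediction for noisy audio x at
    step t; ctx=None means unconditional. Each entry in signals is a
    (ctx, weight) pair: weights above 1 tighten adherence to that
    signal, weights below 1 loosen it toward free exploration.
    """
    uncond = denoise(x_t, t, None)
    pred = uncond.copy()
    for ctx, weight in signals:
        # Push the prediction along the direction each condition induces.
        pred += weight * (denoise(x_t, t, ctx) - uncond)
    return pred

# Toy demonstration with a stand-in denoiser (not a real model).
def denoise(x, t, ctx):
    target = 0.0 if ctx is None else float(len(ctx))  # fake per-context target
    return x + 0.1 * (target - x)

x_t = np.random.default_rng(0).normal(size=4)
pred = guided_denoise(denoise, x_t, t=10, signals=[
    ("audio context", 1.5),  # lock tightly to timing and form
    ("style prompt", 0.7),   # take only loose cues from the style
    ("harmony", 1.2),        # stay close to the extracted chords
])
print(pred.round(3))
```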
How to Use It
Start with any audio foundation—for example, an original session, a demo, or a single stem—and generate additional instrument parts that complement it. Guide the process by providing an audio reference, a text prompt, or a genre-specific preset. Each stem is unique and can be regenerated or fine-tuned, making the creative process truly iterative and collaborative.
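We have not published a programmatic API for this workflow, so purely as an illustration, the knobs described above can be pictured as a settings object like the following; all names and defaults are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class StemGenerationSettings:
    """Hypothetical bundle of the controls described in the text above."""
    context_stems: list[str]        # existing audio, e.g. ["guitar.wav"]
    target_instrument: str          # the part to generate, e.g. "bass"
    style: str | None = None        # audio reference, text prompt, or preset
    harmony_adherence: float = 0.8  # 0.0 = free exploration, 1.0 = strict
    context_weight: float = 1.0     # adherence to tempo, grid, and form
    style_weight: float = 1.0       # adherence to the style reference
    seed: int | None = None         # vary to regenerate, fix to reproduce

# Regenerating with a different seed while keeping the musical settings
# fixed is one way the iterative "regenerate or fine-tune" loop could work.
settings = StemGenerationSettings(
    context_stems=["guitar.wav"],
    target_instrument="bass",
    style="warm Motown bass",
    harmony_adherence=0.9,
)
print(settings)
```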
Use it to compose original songs, spark new ideas, craft remixes, or recreate individual instruments from existing mixes. As our community explores, we expect to discover even more ways to push musical boundaries together.
A New Kind of Studio: Human-Led, AI-Powered
Building these models made us rethink the music-making interface itself. We did not want to hide generative power behind complexity or limit creativity with oversimplification. Instead, we designed an AI-augmented music studio, a simplified DAW that treats AI as a creative companion and makes creation accessible to everyone: musicians, producers, DJs, and even beginners.
Integrated with our full technology stack of stem separation, beat and chord detection, voice conversion, auto-mixing, mastering, and more, AI Studio opens up new possibilities for musical innovation.
For Everyone, Everywhere
From someone picking up their first instrument to world-class musicians and producers, we have always aimed to be a platform for everyone. That remains our promise. Our new generative capabilities are another step on that journey.
Today marks a pivotal moment in AI-powered music production. We are launching our first collection of generative stem models alongside AI Studio, a first step toward a friendly web-based DAW built from the ground up for AI-assisted music creation.
We are excited to see what you create with these tools and how they will grow alongside your workflow.