Welcome to the comprehensive guide on how to train a voice model for AI applications in music production. Creating high-quality voice models is not simple, but we have learned many techniques while building models of famous actors and musicians for our enterprise clients, and the quality of a model improves dramatically with a systematic approach to training. Below is an expanded step-by-step guide to training a voice model, supplemented with examples and additional recommendations for obtaining the most accurate and versatile results.
Step 1: Start With a High-Quality Audio File
The cornerstone of an effective voice model lies in the quality of the audio file used for training. High-definition recordings provide the level of detail necessary for the model to learn nuanced vocal characteristics.
It's recommended to use a professional-grade microphone; see the recommended microphone list below.
Utilize lossless audio formats like WAV or FLAC for your recordings to preserve the full dynamic range of the voice without compression artifacts that MP3s might introduce.
Recommended Specs
- Lossless file formats: WAV or FLAC
- 96 kHz / 24-bit (alternatively 48 kHz / 24-bit)
- Peaks should NOT surpass -6 dB
- We recommend multiple hours of recordings to achieve maximum quality. However, with 40 minutes of high-quality training data, good results can still be achieved.
Microphone recommendations*
- Shure SM7B (recommended for applications such as voice-overs, podcasts, and singing)
- Neumann U87
- Shure SM58
- AKG C414
*Avoid using multiple microphones.
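As a quick sanity check against the specs above, here is a minimal Python sketch that verifies a recording's format, sample rate, bit depth, and peak level. It assumes the soundfile and numpy packages and a placeholder file name; treat the list above, not the code, as the source of truth for the thresholds.

```python
# Minimal sanity check for a training file against the recommended specs.
# Assumptions: soundfile and numpy are installed; "vocal_take_01.wav" is a
# placeholder file name.
import numpy as np
import soundfile as sf

def check_training_audio(path, min_rate=48_000, peak_limit_db=-6.0):
    """Report deviations from the recommended recording specs."""
    info = sf.info(path)
    data, rate = sf.read(path, dtype="float64")
    issues = []

    if info.format not in ("WAV", "FLAC"):
        issues.append(f"format is {info.format}; use lossless WAV or FLAC")
    if rate < min_rate:
        issues.append(f"sample rate is {rate} Hz; 48 kHz or 96 kHz is recommended")
    if "24" not in info.subtype:
        issues.append(f"subtype is {info.subtype}; 24-bit depth is recommended")

    peak = np.max(np.abs(data))
    peak_db = 20 * np.log10(peak) if peak > 0 else float("-inf")
    if peak_db > peak_limit_db:
        issues.append(f"peak is {peak_db:.1f} dBFS; keep peaks below {peak_limit_db} dBFS")

    return issues, info.frames / rate / 60  # issues plus duration in minutes

issues, minutes = check_training_audio("vocal_take_01.wav")
print(f"Duration: {minutes:.1f} minutes")
for issue in issues:
    print("WARNING:", issue)
```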
Step 2: Record Only Dry Voices
When training a voice model, clarity and purity of the voice sample are paramount. Forgo any audio effects like reverb, echo, or delay; these can muddy the training data and lead to an inaccurate representation of the voice. An example of a dry recording is singing directly into a microphone with minimal EQ and compression. By supplying the voice model with clean, unaffected vocal samples, you allow it to analyze and replicate the natural characteristics and timbre of the voice, leaving room for post-production voice edits.
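One rough way to screen takes for heavy compression or limiting, purely as an assumption-laden sketch rather than a rule from this guide, is to look at the crest factor: the ratio of peak level to RMS level. Squashed dynamics show up as a low value.

```python
# Rough screen for over-compressed takes via crest factor (peak-to-RMS ratio).
# Assumptions: soundfile and numpy are installed; the file name is a placeholder.
import numpy as np
import soundfile as sf

def crest_factor_db(path):
    data, _ = sf.read(path, dtype="float64")
    data = data.mean(axis=1) if data.ndim > 1 else data  # mix down to mono
    peak = np.max(np.abs(data))
    rms = np.sqrt(np.mean(data ** 2))
    return 20 * np.log10(peak / rms)

# A dry, uncompressed vocal usually sits comfortably above roughly 12 dB;
# treat a much lower value as a cue to re-check the recording chain.
print(f"Crest factor: {crest_factor_db('vocal_take_01.wav'):.1f} dB")
```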
Step 3: Record in Different Registers
The human voice is incredibly dynamic, capable of producing a vast range of pitches. To account for this variability, sample recordings should cover the full vocal range of the subject: low (bass), medium (baritone/tenor/alto), and high (soprano). This provides a comprehensive representation of the voice's capabilities. Performing scales and arpeggios can effectively cover this spectrum during the recording sessions.
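To confirm that a set of takes actually spans the low, medium, and high registers, a monophonic pitch tracker can estimate the range covered. The sketch below assumes the librosa package and a placeholder file name; any pitch tracker would serve the same purpose.

```python
# Sketch: estimate the pitch range covered by a take using librosa's pyin
# pitch tracker. Assumes librosa is installed; the file name is a placeholder.
import numpy as np
import librosa

def pitch_range(path):
    y, sr = librosa.load(path, sr=None, mono=True)
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    f0 = f0[voiced & ~np.isnan(f0)]  # keep only confidently voiced frames
    low, high = np.percentile(f0, [5, 95])  # trim outliers at the extremes
    return librosa.hz_to_note(low), librosa.hz_to_note(high)

low_note, high_note = pitch_range("scales_take_03.wav")
print(f"Approximate range covered: {low_note} to {high_note}")
```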
Step 4: Record Different Sentences That Are Phonetically Balanced
Phonetic diversity in voice recordings ensures that the voice model can generalize to new words or sounds it may encounter in the future. Sentences should be carefully curated to include a wide array of phonemes—the distinct units of sound that distinguish one word from another in a given language. For English, the Rainbow Passage is a great example of a text rich in phonemes:
The Rainbow Passage
When the sunlight strikes raindrops in the air, they act as a prism and form a rainbow. The rainbow is a division of white light into many beautiful colors. These take the shape of a long round arch, with its path high above, and its two ends apparently beyond the horizon. There is, according to legend, a boiling pot of gold at one end. People look, but no one ever finds it. When a man looks for something beyond his reach, his friends say he is looking for the pot of gold at the end of the rainbow. Throughout the centuries people have explained the rainbow in various ways. Some have accepted it as a miracle without physical explanation. To the Hebrews it was a token that there would be no more universal floods. The Greeks used to imagine that it was a sign from the gods to foretell war or heavy rain. The Norsemen considered the rainbow as a bridge over which the gods passed from earth to their home in the sky. Others have tried to explain the phenomenon physically. Aristotle thought that the rainbow was caused by reflection of the sun's rays by the rain. Since then physicists have found that it is not reflection, but refraction by the raindrops which causes the rainbows. Many complicated ideas about the rainbow have been formed. The difference in the rainbow depends considerably upon the size of the drops, and the width of the colored band increases as the size of the drops increases. The actual primary rainbow observed is said to be the effect of super-imposition of a number of bows. If the red of the second bow falls upon the green of the first, the result is to give a bow with an abnormally wide yellow band, since red and green light when mixed form yellow. This is a very common type of bow, one showing mainly red and yellow, with little or no green or blue.
From Fairbanks, G. (1960). Voice and Articulation Drillbook, 2nd edn. New York: Harper & Row, pp. 124-139.
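To gauge how well a script covers the phoneme inventory before recording it, a grapheme-to-phoneme tool can count the distinct phonemes it contains. The sketch below assumes the g2p_en package (ARPAbet output) and a local text file holding the script; phonemizer or a CMU dictionary lookup would work just as well.

```python
# Sketch: count distinct ARPAbet phonemes in a recording script.
# Assumptions: g2p_en is installed and "script.txt" holds the script text.
from collections import Counter
from g2p_en import G2p

def phoneme_coverage(text):
    g2p = G2p()
    # Drop punctuation/space tokens and stress digits ("AH0" -> "AH").
    phones = [p.rstrip("012") for p in g2p(text) if p and p[0].isalpha()]
    return Counter(phones)

with open("script.txt") as f:
    counts = phoneme_coverage(f.read())

print(f"Distinct phonemes covered: {len(counts)} of ~39 in English ARPAbet")
print("Least represented:", counts.most_common()[-5:])
```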
Step 5: Record These Vocal Exercises
Vocal drills and exercises are not solely for warm-ups or performance improvement; they're also instrumental in training voice models. These exercises, which can range from simple scales to complex melodic runs, imbue the model with an understanding of the voice's versatility and dynamic control. Singers might execute a series of vowel-focused exercises (like singing "ah", "ee", "ih", "oh", "uh") across different pitches to train the model on maintaining consistent vowel sounds through various registers.
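As an illustration (not part of the original exercise material), the prompt generator below pairs each vowel with a few anchor pitches so a session covers every vowel in every register; the pitches are hypothetical and should be adapted to the singer's range.

```python
# Illustrative sketch: generate vowel/pitch exercise prompts for a session.
# The anchor pitches are hypothetical; adapt them to the singer's range.
VOWELS = ["ah", "ee", "ih", "oh", "uh"]
ANCHOR_PITCHES = ["C3", "G3", "C4", "G4", "C5"]  # low through high registers

prompts = [
    f"Sustain '{vowel}' on {pitch} for four beats, then slide up a fifth and back."
    for pitch in ANCHOR_PITCHES
    for vowel in VOWELS
]

for prompt in prompts:
    print(prompt)
```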
Step 6: Create a Consistent and Lengthy Training Sample
During recording sessions, ensure that the microphone placement and room conditions remain consistent to avoid introducing variables that could skew the model's learning. More data typically translates to a more accurate model. Aim for hours of audio rather than minutes to provide a diverse and rich dataset for the model to learn from. We recommend a minimum of 40 minutes of high-quality training material, but many models benefit from multiple hours of content.
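A simple way to confirm the dataset has reached the recommended length is to total the duration of every take in the recording folder. The sketch below assumes the soundfile package and a hypothetical folder named training_takes.

```python
# Sketch: total up recording time across a folder of lossless takes.
# Assumptions: soundfile is installed; "training_takes" is a placeholder folder.
from pathlib import Path
import soundfile as sf

def total_minutes(folder, extensions=(".wav", ".flac")):
    seconds = 0.0
    for path in Path(folder).rglob("*"):
        if path.suffix.lower() in extensions:
            info = sf.info(str(path))
            seconds += info.frames / info.samplerate
    return seconds / 60

minutes = total_minutes("training_takes")
print(f"Total material: {minutes:.1f} minutes")
if minutes < 40:
    print("Below the recommended 40-minute minimum; record more material.")
```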
Conclusion
Following these steps on how to train a voice model will ensure that the AI has sufficient high-quality material to create an authentic-sounding voice. Moises employs advanced methodologies for voice separation and offers a suite of other features that support musicians and technologists in their creative and technical endeavors.
You can get started creating your own voice model today on the Moises Pro plan in the desktop experience. If you are an enterprise and need the highest quality voice model, contact us to access extra training processing power and additional support in creating high-quality training data for your model.