Download the base paper for this AI music generator project below.

Abstract:

We tackle the task of conditional music generation. We introduce MusicGen, a deep learning project built around a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen comprises a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for cascading several models, e.g., hierarchically or via upsampling. Following this approach, we demonstrate how MusicGen can generate high-quality mono and stereo samples while being conditioned on a textual description or melodic features, allowing better control over the generated output. We conduct extensive empirical evaluation, considering both automatic and human studies, showing that the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark.

In this work, we introduce MusicGen, a simple and controllable music generation model that can generate high-quality music given a textual description. We propose a general framework for modelling multiple parallel streams of acoustic tokens, which serves as a generalization of previous studies. We show how this framework allows generation to be extended to stereo audio at no extra computational cost. To improve the controllability of the generated samples, we additionally introduce unsupervised melody conditioning, which allows the model to generate music that matches a given harmonic and melodic structure.
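
To make the token interleaving idea more concrete, here is a rough illustrative sketch (not the authors' implementation) of a "delay"-style pattern: each of the K parallel codebook streams is offset by one extra step, so a single-stage autoregressive LM can predict all streams jointly. The padding token and toy token values are hypothetical.

```python
# Illustrative sketch of a delay-style interleaving over K codebook streams.
# Real MusicGen operates on EnCodec tokens in PyTorch; this toy version only
# shows how the streams are offset relative to each other.

PAD = -1  # hypothetical padding/placeholder token id


def delay_interleave(streams):
    """Offset stream k by k steps, padding the gaps.

    streams: list of K equal-length lists of token ids (one per codebook).
    Returns a list of timesteps, each holding K tokens (or PAD).
    """
    K = len(streams)
    T = len(streams[0])
    out = []
    for t in range(T + K - 1):
        step = []
        for k in range(K):
            src = t - k  # codebook k is delayed by k steps
            step.append(streams[k][src] if 0 <= src < T else PAD)
        out.append(step)
    return out


# Example: 4 codebooks, 5 timesteps of toy tokens
streams = [[10 * k + t for t in range(5)] for k in range(4)]
for step in delay_interleave(streams):
    print(step)
```

Because the streams are only shifted rather than flattened, the sequence length grows by K - 1 steps instead of by a factor of K, which is what removes the need for cascaded or hierarchical models.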

Through ablation studies, we shed light on the importance of each component comprising MusicGen. Music samples, code, and models are available at https://github.com/facebookresearch/audiocraft
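
For readers who want to try the released models, the following is a minimal usage sketch based on the audiocraft repository's documented API; exact checkpoint names, parameters, and return shapes may differ between releases, so treat it as a starting point rather than a definitive recipe.

```python
# Minimal text-to-music sketch using the audiocraft package (pip install audiocraft).
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained checkpoint (name assumed from the repo; e.g. "facebook/musicgen-small").
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # generate 8 seconds of audio

descriptions = ["lo-fi hip hop beat with mellow piano"]
wav = model.generate(descriptions)  # batch of waveforms, one per description

for idx, one_wav in enumerate(wav):
    # Save each sample with loudness normalization.
    audio_write(f"sample_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")

# The repository also exposes melody conditioning (e.g. generate_with_chroma on the
# melody checkpoints), which matches the unsupervised melody conditioning described above.
```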

SIMPLE AND CONTROLLABLE MUSIC GENERATION - AI Music Generator - Deep Learning Project for Final Year Students