AI has crept into most artistic endeavours, and the music business is the latest to feel its influence. Much as ChatGPT and other large language model-based AIs generate text, Meta has now announced the open-source release of its AI model for generating music, MusicGen.
Late last week, Felix Kreuk, an AI research engineer at Meta, demonstrated MusicGen's talents in a Twitter thread. The model can also take existing music and alter it, for example turning a traditional melodic refrain into an '80s pop tune.
The model pairs an EnCodec audio tokenizer with a transformer language model, according to Kreuk. Users can try MusicGen through the Hugging Face API, although depending on how many people are using it at once, it can take some time to generate any music. For considerably quicker results, you can set up your own instance of the model on the Hugging Face website. If you have the necessary skills and hardware, you can also download the code and run it yourself.
In our own experiments, we generated a synth-heavy "symphonic rendition of the happy birthday theme" and an unsettling "lo-fi hip hop track with samples from nature, including crickets." By default, the tracks have no lyrics, but the system accepts an optional audio track: Gizmodo tested it with a vocal recording featuring lyrics written by this author (if you really want to strain your ears on my glass-cracking singing voice, you can find it in our earlier testing of Apple Music's karaoke feature). The added vocals made the prompt "grunge song with heavy bass and violin accompaniment" sound more crackly than it would have otherwise.
It's unclear how well the AI understands particular composers. When we asked it to compose a "Hans Zimmer score for a steampunk mediaeval film," it was hard to tell whether it accurately reproduced Zimmer's themes.
Despite the many models now handling text generation, voice synthesis, image generation, and even short video, few high-quality text-to-music models have been released to the public. The accompanying research paper, available on the arXiv preprint server, notes that one of the key difficulties with music is that it requires modelling the full frequency spectrum, which demands more intensive sampling, not to mention music's intricate compositions and overlapping instruments.
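Some back-of-the-envelope arithmetic shows why the sampling burden matters. The 32 kHz sample rate matches MusicGen's output, and the paper describes an EnCodec tokenizer running at 50 frames per second with four parallel codebooks; the 30-second clip length is our own illustrative choice:

```python
# Raw audio samples vs. the compressed token stream an EnCodec-style
# tokenizer hands to the language model.
sample_rate = 32_000     # samples per second of mono audio (MusicGen's rate)
clip_seconds = 30        # an illustrative short clip

raw_samples = sample_rate * clip_seconds
print(raw_samples)       # 960000 raw amplitude values to represent

# EnCodec compresses this to discrete tokens: 50 frames/sec, 4 codebooks.
frames_per_sec = 50
codebooks = 4
tokens = clip_seconds * frames_per_sec * codebooks
print(tokens)            # 6000 tokens for the language model to predict
```

Even after compression, a half-minute clip is thousands of tokens, and predicting them all coherently across overlapping instruments is what makes text-to-music harder than text-to-text.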
Meta also compared its technology with Google's MusicLM text-to-music model. Samples from the two systems are displayed on Meta's own pages for direct comparison.
However, the model's training data may be the most worrying aspect for artists. The research paper states that MusicGen was trained on 20,000 hours of licensed music, including an internal dataset of 10,000 music tracks, along with roughly 390,000 instrument-only tracks from Pond5 and Shutterstock. The Meta researchers say all the music used to train the model was "covered by legal agreements with the right holders," including a deal with Shutterstock.
Shutterstock, thanks to a contract it signed last year with OpenAI, the company behind DALL-E, has its own AI image-generation tool trained on contributors' photographs. Even so, it doesn't follow that artists are pleased about having their work used to train AI. Some artists have already sued some of the largest AI art businesses, including Stability AI and Midjourney, claiming their systems ingest large amounts of copyrighted material without creators' consent. The question gets murkier when large tech companies like Meta can afford to pay for the creative material used in their AI development, but the possibility that an AI is directly appropriating other musicians' work without permission or licence still looms large in users' minds.
Like most major tech businesses, Meta has recently been all-in on AI. Unlike its big tech siblings, however, Meta has declared that it aims to release more open-source models for anybody to adopt, an intriguing strategy that sets it apart from competitors like OpenAI, Microsoft, and Google, which have grown more secretive. That does not mean Meta will escape controversy, especially given artists' worry that businesses may substitute artificial intelligence for human artists. Meta's researchers admitted in their paper that AI "can represent an unfair competition for artists," though they argued that open models may give both professional and amateur musicians new tools for creating music.
What is the AI that generates music from text?
MusicGen is Meta's AI-driven text-to-music generator. Google offers a rival, MusicLM, to users who sign up for its AI Test Kitchen, accessible on the web, Android, and iOS; announced in January, it was one of the most eagerly awaited generative AI tools for music.
How does AI affect music?
One of AI's key benefits for the music business is its capacity to analyse massive volumes of data to spot patterns and forecast trends, which can help with producing and marketing music that is more likely to appeal to a target audience.