Brighton-based producer and DJ Etch began making music in his bedroom at the age of 12, after a teenage obsession with early '00s hardcore, hip hop, and experimental electronic music. Continuously developing his sound and technological skills, he has since released music on Ilian Tape, Seagrave, Sneaker Social Club, Dr Banana, EC2A, Wisdom Teeth, Keysound, Lapsus & Soundman Chronicles, to name a few.
Etch creates warm, euphoric music, backward-looking but forward-facing, aimed at both the body and the mind, where technicality goes hand in hand with soulful expression through drums, bass, and electronic complexity. His approach to DJing is as wide-ranging as his production influences, moving up and down BPM ranges and styles to create an exciting and varied amalgamation of sounds. He co-ran the BTG label with fellow producer Bulu, which put out genre-defying dancefloor music, and runs his own label Altered Roads, home purely to his own, more experimental productions.
For a better listening experience, listen on the Mubert app
What is your opinion of AI using your compositions and ideas to create new music without your direct involvement, and how does that align with the idea of royalty distribution?
I think the idea of AI using elements I’ve fed into it is extremely interesting. I touched on the subject at university, where I built a module in Max/MSP that did a similar kind of thing, though it was a lot more primitive and only worked with pitch scales and waveforms. I find the whole thing pretty exciting; seeing Autechre perform a few times, and reading about how they use generative modules, is incredible. As far as royalty distribution goes, I think royalties should still be paid in their entirety to the artist, since the AI will have nothing to work with if it doesn’t have some degree of artist involvement.
Is the listener a co-creator if they are simply modifying the compositions with their likes? Should the platform provide more instruments to allow the audience to tinker with ideas and change them? What could those tools look like?
I think this is probably the best route for something like this to take; giving the listener more control over what is being fed to them is ultimately going to be more engaging and fun for the user. Even simple additions would go a long way: pitch alteration for individual elements, the kind of waveshaping we already see in software such as Reaktor or in wavetable-type synthesizers, and the ability to mute and rearrange different parts of what is being fed in.
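As a rough illustration of the listener-side controls described above, here is a minimal Python sketch; the function names and the naive resample-based pitch shift are illustrative assumptions, not part of Mubert's actual platform or API.

```python
import numpy as np

def pitch_shift_resample(samples: np.ndarray, semitones: float) -> np.ndarray:
    """Naive pitch shift by resampling (note: this also changes duration)."""
    ratio = 2 ** (semitones / 12)
    n_out = int(len(samples) / ratio)
    positions = np.linspace(0, len(samples) - 1, n_out)
    return np.interp(positions, np.arange(len(samples)), samples)

def rearrange(samples: np.ndarray, n_slices: int, order: list) -> np.ndarray:
    """Cut a loop into equal slices and play them back in a new order."""
    slices = np.array_split(samples, n_slices)
    return np.concatenate([slices[i] for i in order])

# A one-second 440 Hz sine at 8 kHz stands in for a user-supplied loop.
sr = 8000
t = np.arange(sr) / sr
loop = np.sin(2 * np.pi * 440 * t)

up_octave = pitch_shift_resample(loop, 12)   # one octave up -> half the length
shuffled = rearrange(loop, 4, [2, 0, 3, 1])  # same material, new arrangement
```

A real implementation would use a time-stretching algorithm (e.g. a phase vocoder) so pitch could change independently of duration; the sketch only shows the shape of such controls.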
In what new ways can Mubert's technology be applied?
I think it can be applied to a lot of other art disciplines, where perhaps it can interact with visuals or other audio modules to create a sort of call-and-response scenario. When creating atmospherics, particularly for moving images or even in personal compositions, an element of randomness, something not in your control but which you can take snippets from, is always exciting and can often be the initial point of inspiration for where that work goes.
How has the process of creating music together with AI sparked your creativity? How different was this process compared to the way you usually write music?
I think, as someone who always begins with loops rather than just playing, it’s been quite inspiring: having fed loops and sounds I’ve made into the AI, it’s actually come up with compositions that I’ve really enjoyed and probably wouldn’t have thought of myself. For a lot of producers starting out, especially those using loop-based software such as FL Studio or Ableton, one of the hardest first steps is going from loops to arrangements, and this really eases that along without stealing your creation from you.
For many musicians, mastering computer science and coding is essential. Have you ever tried coding, and how did that go? How easy or difficult is Mubert's platform to use in that sense? Should the interface be simplified or, alternatively, include more features, giving more control to users?
I think this is one of the great things I’ve felt from experimenting with Mubert. I’ve always been a computer and technology obsessive, teaching myself from a very early age how to operate different systems, but as I moved into working with electronic music, with early inspirations that were quite complicated stuff, I fell on my face quite a lot while learning to code with audio software and to build synth patches, which is itself basically coding.
While I’ve come a long way and am pretty competent with things like Reaktor, which I use in almost all of my work, and to a certain degree Max/MSP, Mubert erases a lot of the really difficult parts of this and lets you focus on the initial creation of sounds while it works on the rest for you.
I think the interface is super simple as is, though it would definitely be great for users to have more control. A lot of VSTs I’ve been dabbling with lately have controls that aren’t what you would historically find on them, such as ADSR, oscillator control, or pitch control, but instead have a more random element to them. I think if more research were done into these, some really innovative controls could be attached to Mubert, making it stand out and providing a more engaging interface for the user.
AI Music Company
Mubert is a platform powered by music producers that helps creators and brands generate unlimited royalty-free music with the help of AI. Our mission is to empower and protect the creators. Our purpose is to democratize the Creator Economy.