Dimitris Papadatos, aka Jay Glass Dubs, has a style based on a counterfactual historical approach to dub music: reducing it to its basic drum/bass/vox/effects form. A prominent voice in the worldwide electronic scene, he has released a significant body of work on some of the main experimental music labels, including Bokeh Versions, The Tapeworm, Anòmia, DFA Records, Ecstatic, and Berceuse Heroique. He has also collaborated on critically acclaimed releases with Not Waving (as Not Glass), Guerilla Toss and the "godmother of trip-hop" Leslie Winer, and has released remixes for artists such as How To Dress Well, Jabu, Maximum Joy and more. Dimitris' work has been presented at various international institutions and festivals, including Berlin Atonal, Meakusma Festival, Documenta14 and BBK Bilbao.
The main theme in his music is an apposition of disparate elements that assume a re-appropriation of historically applied methodologies while questioning forms of empowering them. His biggest body of work reflects issues such as copyright, spirituality and originality, undergoing a constant state of transfiguration of its outsourcing.
What did you like about the process of making music with AI? What parts were interesting?
The whole process that AI follows is rather absent from my work. Yes, I work with samples and loops, but I've never used tracking methods where everything has to be synced to a specific tempo and intonation in order to work. These restrictions might, and can, take the work elsewhere. I function from an arranger's perspective rather than a programmer's. In my studio, with only my own intervention, the solutions to coherence are looser than they would be otherwise.
What was interesting in this process was, first and foremost, the fact that I had to "teach" the machine what I wanted it to approximately sound like while still maintaining an aleatory quality, trying to anticipate its process beforehand. There was a complete rethinking of diffusion and effect processing on my end as well. I didn't feel like I could do everything I wanted to do, so I decided to maintain my physicality and the spirituality of the work. Those restrictions made me feel that the contribution of my physical, "non-artificial" intelligence to what the music would sound like, or eventually "do", was as important as the machine's learning.
To understand that better, I thought of a potential user of the music I was making for Mubert. I thought of someone who writes, maybe raps as well and would use these loops to maybe rehearse. Or a couple making love. Someone walking their dog. Things like that empower the result in a much more coherent way than numbers and tonal systems. The ritualistic mathematics of everyday life.
In a sense, AI acts as a curator, compiling and mixing different samples together based on its own judgement, which makes it a collaboration without the direct involvement of the creators of those tracks. Given this, how do you think musicians will collaborate online in the future? In what ways can AI contribute to these collaborations?
For my work and process, having a second intelligence approach some of these issues would definitely be helpful. I feel that human intervention, the labor of a human mind, will not stop being crucial to what generative music could become. I am also pondering the countless possibilities, and it's really captivating. I imagine new kinds of collaboration and the absolute overcoming of any borders.
Then again, on the other hand, for many reasons this might not be pragmatic. To me personally, this process of crafting is not unfamiliar: I have worked on many collaborations, all of them without a physical presence. The common space of communication and physicality is created elsewhere.
There are subjectivities intertwining within the space that the music happens in. These subjectivities share a common dispersed experience; they are in a way "divided into a common goal". This kind of reminds me of the way the algorithm works, combining dispersed elements into a new treaty. I feel that the contribution of AI is a given already since we are all using DAWs to make/record/produce/master our music.
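Mubert's actual curation algorithm is not described in this interview; purely as a toy illustration of the "AI as curator" idea raised in the question above — picking dispersed loops that happen to share a tempo and key and layering them into one arrangement — here is a short, hypothetical Python sketch. All loop names, tags and the selection logic are invented for illustration.

```python
import random

# Hypothetical catalogue of loops contributed by different artists.
# In a real system these would be audio files with richer metadata.
LOOPS = [
    {"artist": "A", "role": "drums", "bpm": 140, "key": "Dm"},
    {"artist": "B", "role": "bass",  "bpm": 140, "key": "Dm"},
    {"artist": "C", "role": "vox",   "bpm": 140, "key": "Dm"},
    {"artist": "D", "role": "fx",    "bpm": 120, "key": "Am"},
    {"artist": "E", "role": "fx",    "bpm": 140, "key": "Dm"},
]

def curate(loops, bpm, key, roles=("drums", "bass", "vox", "fx")):
    """Pick at most one compatible loop per role, like a curator layering stems."""
    layered = []
    for role in roles:
        candidates = [l for l in loops
                      if l["role"] == role and l["bpm"] == bpm and l["key"] == key]
        if candidates:
            layered.append(random.choice(candidates))  # the "aleatory" element
    return layered

if __name__ == "__main__":
    for loop in curate(LOOPS, bpm=140, key="Dm"):
        print(f'{loop["role"]:>6}: loop by artist {loop["artist"]}')
```

The point of the sketch is simply that the "collaboration" happens at the metadata level: creators never meet, yet their stems end up combined because they are compatible.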
Is the listener a co-creator if he is simply modifying the compositions with his likes? Should the platform provide more instruments to allow the audience to tinker with ideas and change them? What could those tools look like?
I feel that something as connected with mood as this software is bound to include the listener's creative swings. Some tempo and pitch shifters would definitely be great. I would also love to see some basic effect processing there.
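None of these controls are described as existing on the platform; as a rough sketch of what such listener-side tempo/pitch tools and basic effect processing could look like, here is a minimal example using the open-source librosa and soundfile libraries. The file name and parameter values are placeholders, and this is not Mubert's implementation.

```python
import librosa
import numpy as np
import soundfile as sf

def add_echo(y, sr, delay_s=0.25, decay=0.4):
    """Very basic effect processing: mix in one delayed, attenuated copy."""
    out = np.copy(y)
    d = int(sr * delay_s)
    if 0 < d < len(y):
        out[d:] += decay * y[:-d]
    return out

# "track.wav" is a placeholder for a generated track the listener wants to tweak.
y, sr = librosa.load("track.wav", sr=None, mono=True)

y = librosa.effects.time_stretch(y, rate=1.1)          # ~10% faster, same pitch
y = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)   # up two semitones
y = add_echo(y, sr, delay_s=0.3, decay=0.35)           # simple echo

y = y / max(1.0, np.max(np.abs(y)))                    # avoid clipping after the echo
sf.write("track_tweaked.wav", y, sr)
```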
In general, what other instruments can be created for listeners to lower the barrier between the artist and their audience? Should it be lowered? In what ways would you personally like to connect with your listeners on the platform?
Many years before I was asked to contribute to Mubert, I had this idea of a "sound deodorant": a musical piece that would be ever-evolving and perpetual, that anyone could contribute to, and that could be used in commercial spaces the same way a perfume dispenser is used. Maybe it would be a good idea to start something like that with some of the users who would find it interesting! I don't get the barrier you talk about. I am as much a listener as the next person who comes to my gig or buys my records. We, as artists, only have to remember that we always do it, even if for just one person.