
GENERATIVE ART - GENERATIVE MUSIC PRODUCTION

Some Thoughts About AI-Human Interaction - Collaboration:

How should music producers be involved in the music production process when using artificial
intelligence?

Video Link: https://www.facebook.com/1543112324/videos/728077481952826

I made this short, tutorial-like video at the end of December 2022... the internet was down in the
studio for a while, so I couldn't publish it, and then I forgot about it; it came up by chance while I was
looking for something else.) As for the subject of the video: it is a short demo about "generative music
production"...

(Actually, I wanted to try live editing, live production, and live streaming/publishing setups in this way.
This video can count as one of the first, at least in terms of sharing.)

About the generative music production process:

As I tried to explain briefly in the text... when it comes to generative art and generative sound creation,
there are two approaches that attract my attention, and they basically shape the production style and, of
course, the (intermediate or final) product. One is, yes: which AI tech is in fashion right now? Which PP
creator is the latest? BUT since these technologies are still in their infancy, when you just say PLAY and
let the machine play, the results are not so good for the human soul... at least for me. When I listen to
and watch what is produced this way (videos, photographs), it still feels quite artificial and unrealistic...
There are many other things about which we say "it would be better if it didn't exist at all"; I think that
description fits perfectly and comes to the rescue when I try to explain my thoughts about the currently
fashionable AI creations, productions, and expectations.

But I am taking a different path and using these new technologies for our own benefit. I am claiming,
and experiencing, that the fully automated, cold, annoying, "oh, not again" aspects of these new
technologies can be replaced by more "humanized" patterns and, eventually, by more machine-human
compositions, and I am intrigued and excited by the results that can be achieved with well-timed,
experimental skill. The attached example doesn't claim this at all; it's an improvisation, let's say
an "intro" sketch made while trying to explain the main idea. Anyway, back to the topic:

When machine learning, cloud databases, neural networks, and decisions about what the machine learns
and how it learns are involved in real time, this artificial intelligence technology really becomes just
another tool, like any other instrument we use while creating the composition, and it gives the composer-
musician-performer the confidence to move the production process, the techniques, and the naturally
emerging music, sound, or sound installation... whatever the situation may be, into a whole new field.
On the other hand, the second approach that intrigues me is to use AI tools (since we are talking about
sound here) to calculate musical and sonic parameters along the time axis, to let those parameters
influence one another, to be actively involved in what is happening, and to use this kind of human-machine
interaction as a new way of producing music: to search for new and interesting feelings and thoughts,
to discover, to learn, to be surprised, to continue the adventure by imagining possibilities, without losing
the tradition of composition based on improvisation and sensation. Yes, this is what I like.
And please remember, as I mentioned, that the attached example is really just one sample out of tens,
maybe hundreds, of possible different works, at least in terms of genre, emotion, thought, rhythm, melody,
scales, moods, harmony, and so on; we were simply in that mood when we made it.) Whatever it is... I plan
to keep sharing other works over time in this way, as videos that can convey the main idea without too
much fine editing. Greetings to everyone. Your criticism, suggestions, and ideas for co-production are
welcome, as long as they come as comments below first.)
