Sure: http://artsites.ucsc.edu/faculty/cope/. He’s been doing this since the early 1980s, and has written several books about algorithmic composition. I would highly recommend his Experiments in Musical Intelligence, which is a classic. You can listen to some of his works on the website above, and he’s also released a few albums.
Interesting! Yes, Cope’s project comes to mind. I’d guess (not being a programmer at all myself) that the real problem would be dealing with larger-scale and abstract concepts. I’d imagine the small scale, like controlling a phrase using proper intervals, strong/weak beats, and the percentage of scale-wise/arpeggiated motion against various types of nonchord tones, could be handled with proper statistical info and input from composers who are experts in the style. But how to handle tonality: when to modulate considering the scope of the whole composition, how long the transition/pivot passage should be, etc. These seem more difficult? In any case, I look forward to hearing more of what you’re doing!
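The small-scale statistical control described above can be sketched as a toy first-order Markov chain over scale degrees. The transition weights here are illustrative placeholders, not data from any real corpus; a system like Cope's would derive them from analysed scores.

```python
import random

# Toy first-order Markov model over the degrees (1-7) of a major scale.
# Weights are invented for illustration; a real system would learn them
# from a corpus of pieces in the target style.
TRANSITIONS = {
    1: {2: 0.3, 3: 0.2, 5: 0.3, 1: 0.2},
    2: {1: 0.4, 3: 0.4, 7: 0.2},
    3: {2: 0.3, 4: 0.4, 1: 0.3},
    4: {3: 0.5, 5: 0.5},
    5: {1: 0.4, 6: 0.3, 4: 0.3},
    6: {5: 0.5, 7: 0.3, 4: 0.2},
    7: {1: 0.8, 6: 0.2},
}

def generate_phrase(start=1, length=8, seed=None):
    """Walk the transition table to produce a phrase of scale degrees."""
    rng = random.Random(seed)
    phrase = [start]
    for _ in range(length - 1):
        options = TRANSITIONS[phrase[-1]]
        degrees = list(options)
        weights = [options[d] for d in degrees]
        phrase.append(rng.choices(degrees, weights=weights)[0])
    return phrase

print(generate_phrase(seed=42))
```

A model this small only handles local, note-to-note plausibility, which is exactly the point of the comment above: the hard part is the long-range structure (modulation, transitions) that no phrase-level statistics capture.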
It’s an interesting result, and it could maybe become a new tool for making music. But for me music is mostly some kind of communication between humans (with the use of all kinds of instruments/tools).
Thinking further…
This ‘AI’ music could evolve into music created on an individual basis: feed the algorithm your taste and it creates new ‘music’ for you.
Combined with what current platforms (Facebook, Spotify…) already know about our taste in music…
…a future of micro-targeted commercial ‘AI music’ is not far away.
If AI replaces modern pop music, or specifically the shallow, diva, celebrity, fame aspects, then bring it on. Since most of that is extremely formulaic, I suspect it will, even to the point of virtual artists, actors, and so on.
There is little doubt that AI can create all kinds of things; the question is whether it can do it “better”. In some instances, undoubtedly yes, and there is also the fact that it can do it really fast and learn from what people like. But for the time being I don’t think AI can hope to compete with a human, because it doesn’t have real feelings to convey, and it lacks flair, creativity, happy accidents, etc. The argument might be that algorithms can approximate these, but I’m not so sure: it could sound insincere or trite in just the same way that human-made music can. Then there is the possibility of real-time tailor-made music, where the listener can specify “faster”, “more sombre” or what have you, and the AI will change its output to suit any whim. But I think humans don’t necessarily always want to go through that when listening to music. So I suppose the logical conclusion will be AI-human collaboration, or AI music curators/conductors, etc.
Eventually there will probably be playlists and charts of AI music. For some people this will be great; others will still want to listen to human music as well, or instead.
If the technology has advanced this far, we can expect a hype of “AI music” soon. Some of it will be cheap marketing blah, some of it will be soulless formulaic crap, and some artists will work it as a creative tool and find the interesting peculiarities, glitches, and unique features to work with. It’s a tool for humans as long as there is one setting the parameters and selecting which generated pieces get used/published and which get binned. Once computers start self-releasing, it’s another matter.
About Richard D. James and algorithmic composition: don’t forget that anything created by a mathematical formula fits that shoe, so even an arp is a simple algorithmic composition tool. Max, Pure Data, and SuperCollider are widely used tools for this. For sure Richard D. James has dabbled in those, just like Autechre.
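To make the point concrete, here is a bare-bones arpeggiator sketched in Python rather than in Max or SuperCollider: a few lines of formula that turn chord tones (as MIDI note numbers) into a repeating upward pattern. The function name and pattern choice are illustrative, but even something this trivial is algorithmic composition in the broad sense.

```python
from itertools import islice, cycle

def arpeggiate(chord, octaves=2, steps=8):
    """Cycle the sorted chord tones upward across `octaves`, emitting
    `steps` MIDI note numbers. A deterministic formula, i.e. a (very
    simple) algorithmic composition tool."""
    notes = [n + 12 * o for o in range(octaves) for n in sorted(chord)]
    return list(islice(cycle(notes), steps))

# C minor triad (C4, Eb4, G4) arpeggiated over two octaves:
print(arpeggiate([60, 63, 67]))  # [60, 63, 67, 72, 75, 79, 60, 63]
```

The gap between this and something like Cope's work is the subject of the reply below: the formula here only rearranges material it is given, while a style-composition system has to generate the material itself.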
I don’t think it’s even about tricking the audience or not… it’s more about creating with the temperamental nature of life, and inspiration from experience and life. Of course a machine can mimic human output.
Yes, that’s certainly true, but there’s a difference between a tool that produces randomization or arpeggiation within an existing pattern and an algorithm that composes an entire piece in a particular style. That may be more a difference in degree than in kind (I’m open to the argument that most electronic music is “co-created” by algorithms), but configuring triggers or programming arpeggiators doesn’t involve machine learning. “Algorithm” is such a broad term that it gets wildly overused these days. Babylonians used algorithms thousands of years ago for predicting eclipses, but that doesn’t get at what’s new about how algorithms are used today.