MIDI Tinkering - A Garden

Jan 27, 2011 17:39

I'm hard at work right now on setting up an "improvisation system" for the piano, one that will let me add a bit more complexity beyond what I can do with two hands. It's fairly easy to set up something that acts as an accompaniment, like a karaoke backing track, but that doesn't give me the freedom I want. I want more of a place I wander around in, ( Read more... )

music geekery

Leave a comment

Comments 6

bryguypgh January 28 2011, 00:11:16 UTC
and at least one foot on the sustain pedal.

Sometimes two for extra sustain? :) No, seriously, this sounds incredibly neat.

Reply

tinctoris January 28 2011, 00:52:09 UTC
I've got 3 pedals; the other two I don't use so much, so I could spare a foot.

I just created a patch that serves as a rough proof of concept, which you can hear here. The blipping in the background is generated from the bass notes in the piano. Now to make it smarter!
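The core of that bass-to-blip mapping can be sketched as a pure function, independent of any particular MIDI library or the actual patch (the threshold, transposition, and velocity scaling here are illustrative assumptions, not what the real patch does):

```python
BASS_THRESHOLD = 48  # MIDI note numbers below C3 count as "bass" (assumed cutoff)

def blip_events(note, velocity):
    """Map an incoming piano note-on to zero or more 'blip' events.

    Returns a list of (note, velocity) pairs for the accompaniment voice.
    Bass notes are echoed two octaves up at half velocity; anything above
    the threshold (or a velocity-0 note-off) produces no blip.
    """
    if note < BASS_THRESHOLD and velocity > 0:
        return [(note + 24, velocity // 2)]
    return []
```

In practice you'd wire this into whatever handles the MIDI stream (e.g. a library like mido, or a Max/MSP patch), calling it once per incoming note-on and sending the resulting events to a synth voice.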

Reply


mzrowan January 28 2011, 01:06:14 UTC
This sounds like something the Echo Nest APIs could help with (where fennel works), except that I don't think you can do analysis on the fly yet, only of completed tracks. But just in case: Echo Nest Remix.

Reply

fennel January 28 2011, 16:09:07 UTC
Yeah, sadly, the whole API infrastructure creates too much latency to use it for 'simultaneous' analysis of music as it's being played-- the analyzer itself is fast (something like 20x realtime) but you'd have a delay of several measures at best.

Some people who wanted to use us as a performance tool did this:

http://musichackdayboston.pbworks.com/w/page/21942244/EchnoNestLive

but I have literally no idea what it does, not being familiar with Ableton or MaxMSP or anything.

Reply

tinctoris January 28 2011, 19:17:02 UTC
Oh, very neat indeed! This seems like a clever way (based on the description) to use what just happened musically to be the seed for what's about to happen. I could see this being very useful as a slowly evolving installation piece.

What would be fairly useful, and entirely workable without realtime analysis, would be being able to pre-slice video data and then have it available to respond to the audio as it's being generated...

Reply

fennel January 28 2011, 20:26:47 UTC
Oh, yeah. Actually, I think one of the examples packaged with ENRemix would get you most of the way to that video triggering you talk about-- it slices a video up and then delivers the samples according to their sonic similarity with the segments of an unrelated audio file. It should be in Video A From B or if not there, somewhere in that folder.
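The matching step that example describes reduces to a nearest-neighbour lookup: each audio segment gets the pre-sliced video clip whose analysis features are closest to it. A minimal sketch, assuming each segment and clip has already been reduced to a small feature vector (the Euclidean metric and the function names here are illustrative, not ENRemix's actual API):

```python
import math

def nearest_clip(segment_features, clip_features):
    """For each audio segment (a feature vector), return the index of the
    most sonically similar video clip, by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [min(range(len(clip_features)),
                key=lambda i: dist(seg, clip_features[i]))
            for seg in segment_features]
```

Because the clips are sliced and analyzed ahead of time, only this cheap lookup has to happen while the audio is playing, which is what makes it workable without realtime analysis.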

Reply

