I just upgraded to Overbridge 2.0 and A4 1.41, and I was trying to record 4 separate outputs, one for each track. I realized that one of the tracks uses 3 voices of polyphony, and when that track is playing it outputs each voice on a separate output (outs 2, 3, and 4 respectively), though each voice uses the proper sound. I would have assumed that a polyphonic track sends all 3 of its voices to a single output. Is this a bug or normal behavior? Maybe I’ve got a setting wrong somewhere?
They come in as individual tracks. You could group the 3 voice tracks in Ableton and then record from the group to get the 3 voices together, I guess?
Technically speaking, each discrete analog voice is tapped at its respective FX send. The voices are inherently separate because they are analog circuits; polyphony can’t be summed internally the way it is in a VA or whatever. In theory those voices could be directed, summed, and tapped digitally somewhere else, but that would add cost and complexity, given just how flexibly the internal poly already works.
IIRC you can exclude a track from the Mains. So if you have three-note poly on a track, you can send the poly track to the Mains (with FX from all sending voices, if you want that too) and take the mono track out a separate output channel, or sum in a mixer as described above.
But the essence of how and where the voices are generated, coupled with the OB tap ADC sitting at the FX send, means that each individual voice comes out separately. There’s no other possibility at the individual outs, so this is normal and unavoidable.
Yes, sending the 3 poly voices to the mains might be better if you want to record the A4 effects on those 3 voices, and keep the 1 mono voice without effects.
Thanks for the clarification. I was suspecting that it may be the way the hardware was originally designed since polyphony didn’t come until later. Looks like I will route my tracks with any polyphony to the main outs and exclude the mono tracks so they appear on their respective individual outs.
Alright then, off to check out the audio routing page and figure out how to do what I need to.
Question about the A4’s voice allocation method please: I’m playing three-note chords in one MIDI file, sending it out to the A4 on MIDI channel 1, and listening via the four separate channels in Ableton (using Overbridge). Each of my chords has a low, middle and high note, and no matter which “Voice Allocation Method” I choose, the low/middle/high notes get swapped around to different voices as it plays, making it impossible to mix or predict. Is it possible to always have the lowest note go to voice 1, the second lowest to voice 2 and the highest to voice 3? If not, why not?
I understand I could achieve this by separating the chords into 3 MIDI files and sending them out to 3 different MIDI channels and then I could guarantee on which channel I’d hear each part of the chord, but it would be way easier if voices could be allocated by pitch!
Attached is a screenshot of the MIDI channel with the chords in Ableton, it is fully quantised with no overlapping notes.
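For what it’s worth, the splitting-into-3-channels workaround doesn’t have to be done by hand. A minimal sketch of the idea in plain Python (purely illustrative, not tied to any real MIDI library; notes are hypothetical `(start_time, pitch)` tuples, and times are assumed quantised so simultaneous notes share a start time, as in the screenshot):

```python
# Assign each note of a chord to a channel by pitch rank:
# lowest note -> channel 1, next -> channel 2, highest -> channel 3.
from collections import defaultdict

def split_chords_by_pitch(notes):
    """Return {channel: [(time, pitch), ...]} with pitch-ranked channels."""
    chords = defaultdict(list)
    for time, pitch in notes:
        chords[time].append(pitch)
    routed = defaultdict(list)
    for time in sorted(chords):
        for rank, pitch in enumerate(sorted(chords[time]), start=1):
            routed[rank].append((time, pitch))
    return dict(routed)

# Example: two three-note chords at ticks 0 and 480
notes = [(0, 64), (0, 60), (0, 67), (480, 65), (480, 72), (480, 69)]
print(split_chords_by_pitch(notes))
# channel 1 always carries the lowest note of each chord
```

Each resulting channel could then drive its own A4 track, which guarantees which output each chord voice appears on.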
far too niche probably, use separate channels if you need this
I don’t understand how that’s too niche, it seems like a logical method to allocate voices
I don’t think it could be made to work. Imagine you play two notes, then a third one lower than the rest. The third couldn’t come out of the channel set to “lowest” because that’s already used by the lower of the first two notes.
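To make that objection concrete, here’s a toy model in plain Python (purely illustrative, nothing to do with Elektron’s actual firmware) of a hypothetical allocator that insists voice N always holds the N-th lowest sounding note, receiving notes one at a time:

```python
# Toy "strict pitch rank" allocator: voices 1..3, voice 1 must always
# hold the lowest currently-sounding note.

def assign_by_rank(held, new_pitch):
    """Return the voice->pitch map required after new_pitch arrives,
    if voice N must always hold the N-th lowest sounding note."""
    sounding = sorted(held.values()) + [new_pitch]
    sounding.sort()
    return {voice: pitch for voice, pitch in enumerate(sounding, start=1)}

held = {}
for pitch in (64, 67, 60):          # two notes, then a LOWER third note
    new_held = assign_by_rank(held, pitch)
    moved = {v for v in held if held[v] != new_held[v]}
    if moved:
        # Keeping the rank rule forces already-sounding voices to swap
        # pitches mid-note -- an audible glitch on real hardware.
        print(f"note {pitch}: voices {sorted(moved)} must re-trigger")
    held = new_held
```

When the low note (60) arrives last, voices 1 and 2 would both have to abandon notes they are already playing, which is exactly why a synth can’t honour this rule cleanly.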
Thinking about this some more…
MIDI is a serial protocol: every event travels alone, one after another, down a single lane. Consider a MIDI file or DAW clip that holds chord data. On screen it looks like nice neat chords with a fixed number of notes, which our brains interpret as orderly and simultaneous; our fingers may even twitch slightly as we imagine playing them. But that data may not be arranged in the file, or in memory, in the order our eyes suggest: all the high notes might come first in the data structure, and thus be sent first over MIDI, despite appearing “at the same time” visually. So the notes can easily arrive at the synth “in time” but “out of order”. The synth has no way to know whether a given note will be lower or higher than the next one to arrive, and MIDI makes no guarantees about the order in which the notes of a chord are sent.
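As a quick illustration of the point (plain Python, nothing Elektron-specific): a single three-note chord can be serialized onto the wire in any of its permutations, and a first-come-first-served allocator maps whichever order arrives straight onto voices:

```python
import itertools

# A 3-note chord can arrive over the serial MIDI wire in any of 3! orders.
chord = (60, 64, 67)
arrival_orders = set(itertools.permutations(chord))
print(len(arrival_orders))   # 6 possible wire orders for the "same" chord

# Naive first-come-first-served allocation maps arrival order to voices,
# so which voice gets the low note differs between takes:
for order in sorted(arrival_orders)[:2]:
    print({voice: pitch for voice, pitch in enumerate(order, start=1)})
```

The only way a receiver could recover pitch order would be to buffer incoming notes for some window and sort them, trading latency for order, which a live synth can’t reasonably do.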
You can imagine this problem even without MIDI. Imagine playing the chord on a keyboard manually. Unless you are highly practiced, you’ll play with slight timing imperfections. Each note in each chord has a chance to be played out of order. The synth has no way to know what comes next. It couldn’t reliably assign low/mid/high.
I’m confident this isn’t possible.
You’re better off doing the manual editing in the DAW to put the notes you want on separate channels than worrying about the engineering limits of this.
Have you tried different voice allocation methods in POLY config menu?
You might get more consistent results by having your chords’ notes played very slightly one before the other.
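That staggering could even be automated before sending. A minimal sketch in plain Python (purely illustrative; offsets are in sequencer ticks, and the notes are hypothetical `(time, pitch)` tuples) that pre-delays each chord note by its pitch rank so the lowest always arrives first:

```python
# Stagger chord notes so they arrive lowest-first on the wire,
# making a first-come-first-served allocator behave deterministically.
from collections import defaultdict

def stagger_by_pitch(notes, offset=1):
    """Shift each chord note by (rank * offset) ticks; offset should
    stay well below the sequencer grid so the chord still sounds tight."""
    chords = defaultdict(list)
    for time, pitch in notes:
        chords[time].append(pitch)
    out = []
    for time in sorted(chords):
        for rank, pitch in enumerate(sorted(chords[time])):
            out.append((time + rank * offset, pitch))
    return out

print(stagger_by_pitch([(0, 67), (0, 60), (0, 64)]))
# [(0, 60), (1, 64), (2, 67)] -- lowest note now always leads
```

Note this only pins down the arrival order; it still depends on the A4’s allocation method assigning the first-arriving note to the first free voice.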