for me it's all about sources and what you do to them.
there are three main groups i can think of.
code based, algorithmic (includes current/voltage too).
data based, samples and tables.
acoustics, sounds from physical objects. as @natehorn pointed out, there can be mixed sources, like videosynthesis!
acoustics can be turned into data, and data can be emulated by code.
i think when people say "type of synthesis" they mostly refer to the processing of those sources, and that's fair, because that step is where the differences appear.
while processing, each source has its own unique perk.
code is easily changeable, allowing precise things like FM. Highly customizable.
data is re-arrangeable: slicing, pitching, stretching. Fast.
acoustics - touchable.
P.S. all the different names people have come up with are useful, but they often refer to the type of control/UI that you have over the sources, rather than the sound generation itself.
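The "code is easily changeable, allowing precise things like FM" point can be sketched in a few lines. This is a minimal two-operator phase-modulation tone, the textbook FM idea; all names and parameter values here are made up for illustration:

```python
import math

SR = 44100  # sample rate in Hz (assumed)

def fm_tone(carrier_hz, mod_hz, index, n_samples):
    """A modulator sine modulates the phase of a carrier sine.
    The modulation index controls how many sidebands appear."""
    out = []
    for n in range(n_samples):
        t = n / SR
        mod = math.sin(2 * math.pi * mod_hz * t)
        out.append(math.sin(2 * math.pi * carrier_hz * t + index * mod))
    return out

# index = 0 gives a pure sine; raising it precisely adds sidebands,
# which is exactly the kind of exact control code gives you
tone = fm_tone(220.0, 440.0, 2.0, 1024)
```

Changing one number (the index, or the carrier:modulator ratio) reshapes the whole spectrum, which is hard to do that exactly with a physical source.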
Realistically, anything is musical synthesis if you can make it output a wave that oscillates in the audible spectrum.
Make a bot that uses the rate at which people tweet in real time to make a signal oscillate and you get a synthesizer. Take a solar panel and use the amount of sunlight it receives as a signal you send to speakers, and you get a synthesizer.
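That "anything that oscillates audibly" idea can be sketched: take any slowly varying control value (tweet rate, sunlight level, whatever) and let it drive an oscillator's frequency through a phase accumulator. The names and the 100–1000 Hz range here are arbitrary assumptions:

```python
import math

SR = 44100  # sample rate in Hz (assumed)

def control_to_audio(control_values, lo_hz=100.0, hi_hz=1000.0):
    """Map an arbitrary control signal (normalised to 0..1) onto an
    audible oscillator frequency. A phase accumulator is used so that
    frequency changes never cause clicks."""
    phase = 0.0
    out = []
    for c in control_values:
        freq = lo_hz + (hi_hz - lo_hz) * c   # 0..1 -> audible Hz
        phase += 2 * math.pi * freq / SR
        out.append(math.sin(phase))
    return out

# e.g. a "sunlight" ramp from dim to bright sweeps the pitch upward
audio = control_to_audio([n / 44100 for n in range(44100)])
```

Swap the ramp for a live tweet counter or a solar-panel reading and you have the synthesizer described above.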
The "types of synthesis" thing is mostly about how you interact with the system to sculpt the sound. It defines workflow, not really the type of sound. It is simply that each workflow will naturally encourage some sounds to come out. This is why it's so exciting to see synthesizers come up with new workflows and interfaces.
Of course each synth has a character and everything, but the big difference between two synths is how they make you navigate the sound design phase.
I would say it's subtractive and additive (which I don't find particularly satisfactory or useful for describing what's going on), or pretty much all of the above, as the methods and the sounds they produce vary so much and are usually quite recognizable.
hmm, if we talk about sound-shaping concepts, I don't get the 9 from that Wiki Article. Wavetable, for example. It's a Sound Source. It's a digital Oscillator with a special type of wave-form generation. And just as you are able to do some PWM on square waves, you are able to navigate through the Wavetable.
After that Wave-Generation, Wavetable synths are not different from other synths.
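The "navigate through the Wavetable" part is essentially a crossfade between stored single-cycle frames. A minimal sketch, with all names and the two example frames invented for illustration:

```python
import math

def make_table(harmonics, size=2048):
    """Single-cycle wavetable: a sum of the given harmonic amplitudes."""
    return [sum(a * math.sin(2 * math.pi * (h + 1) * i / size)
                for h, a in enumerate(harmonics))
            for i in range(size)]

# two frames to scan between: a pure sine and a brighter spectrum
frame_a = make_table([1.0])
frame_b = make_table([1.0, 0.5, 0.33, 0.25])

def scan(position, index):
    """Crossfade between adjacent frames -- the 'navigate the
    wavetable' control described above. position is 0..1."""
    return (1 - position) * frame_a[index] + position * frame_b[index]
```

After this wave-generation stage the signal goes through the same filters and envelopes as any other oscillator, which is the point being made above.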
Same with Vector Synthesis. You move between Sound-Sources. The concept is not that far away from Wavetables.
Phase Distortion Modulation. Technically it is very similar to FM. You take a simple waveform and add harmonics by modulating it with another waveform.
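The similarity to FM can be seen in a sketch: instead of adding a modulator to the phase, Casio-style phase distortion bends the phase ramp itself before it reads a sine. The knee parameter and function names here are my own illustration, not any synth's actual spec:

```python
import math

def pd_sample(phase, knee=0.25):
    """One sample of phase-distortion synthesis (sketch).
    phase is 0..1; the linear phase ramp is bent at 'knee', so the
    first half of the sine cycle is read out faster, adding harmonics
    without any filter -- close in spirit to FM."""
    if phase < knee:
        warped = 0.5 * phase / knee                  # fast first half
    else:
        warped = 0.5 + 0.5 * (phase - knee) / (1 - knee)  # slow rest
    return math.sin(2 * math.pi * warped)

# knee = 0.5 leaves the ramp straight, giving a pure sine;
# moving the knee sweeps the brightness, like an FM index
cycle = [pd_sample(n / 512) for n in range(512)]
```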
So what is talked about here?
Sound sources aka Wave-Generation? I think that 9 would not be enough.
If we talk about the concept of harmonic-shaping, you can strip that down to 3? Subtractive, DM, Additive?
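Two of those harmonic-shaping concepts can be contrasted in a few lines: additive builds the spectrum by summing partials, subtractive starts bright and attenuates. A minimal sketch with invented names, using the Fourier series of a square wave and a one-pole lowpass:

```python
import math

SR = 44100  # sample rate in Hz (assumed)

def additive_square(freq, n_partials, n_samples):
    """Additive: sum odd sine partials with 1/k amplitudes
    (the Fourier series of a square wave)."""
    out = []
    for n in range(n_samples):
        t = n / SR
        out.append(sum(math.sin(2 * math.pi * (2 * k + 1) * freq * t)
                       / (2 * k + 1) for k in range(n_partials)))
    return out

def one_pole_lowpass(samples, a=0.1):
    """Subtractive: strip harmonics from a bright source with a
    one-pole lowpass, y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

dark_square = one_pole_lowpass(additive_square(110.0, 16, 1024))
```

As argued below, most real instruments end up doing both: building harmonics somewhere and removing them somewhere else.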
One might argue there are additive methods and subtractive methods, and most "types" of synthesis use both to some degree. Physical modeling is probably where stuff gets most confusing to me, but I guess I look at it like a series of modules with deeper processing happening than the surface controls. It is all either subtracting or adding harmonics over time in various ways. A lot of synthesis types are just as much marketing as they are a different synthesis type.
It seems like people consider there to be only a small number of methods after the oscillator stage: namely adding or subtracting, or both, and maybe FM and physical modelling (although there doesn't seem to be consensus on that).
So perhaps a taxonomy of synthesis might state the oscillator type and then the dominant method for changing the sound after the oscillator stage, whilst recognising that there is likely to be a mixture of things happening.