Maybe I should explain my application a bit more.
I want to build an additive synthesizer (of the musical variety). The best of the best additive synths have 128 sine waves (partials) per voice, where each partial gets its own pitch and amplitude envelope. Now, there's a metric assload of processing power required for that (each voice is basically a high-powered SHARC processor), so I had my own ideas for a voice design that is simpler.
Each voice (one note pressed on the music keyboard), consisting of a minimum of 32 partials, will have one amplitude envelope. Each partial then gets a coefficient that is applied to the envelope, and that result is applied to the amplitude of the partial. So all partials are affected by a single envelope, but not equally. You could even use negative coefficients (the partial gets louder as the envelope goes down). 32 partials might sound like a lot, but keep in mind that a sawtooth wave at 20Hz, band-limited to 20kHz, contains 1,000 partials. If I can get 64, that would be even better. 128 and I'd be over the moon.
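To make sure I'm describing the scheme clearly, here's a rough sketch of one way the shared-envelope idea could work. This is just my interpretation: I'm assuming each partial's gain is `base + coeff * envelope(t)`, clamped at zero, so a negative coefficient (with a nonzero base) makes that partial swell as the envelope decays. The function and parameter names are placeholders, not a real design.

```python
import math

SAMPLE_RATE = 48000

def render_voice(f0, num_samples, coeffs, bases, envelope):
    """Mix harmonic sine partials that all share one amplitude envelope.

    Hypothetical per-partial gain model (an assumption, not a spec):
        gain_i(t) = max(0.0, bases[i] + coeffs[i] * envelope(t))
    A negative coeffs[i] with a positive bases[i] makes partial i get
    louder as the envelope falls.
    """
    out = [0.0] * num_samples
    for i, (c, b) in enumerate(zip(coeffs, bases)):
        harmonic = i + 1
        freq = f0 * harmonic
        if freq >= SAMPLE_RATE / 2:  # skip partials above Nyquist
            break
        phase_inc = 2.0 * math.pi * freq / SAMPLE_RATE
        phase = 0.0
        for n in range(num_samples):
            t = n / SAMPLE_RATE
            gain = max(0.0, b + c * envelope(t))
            out[n] += gain * math.sin(phase)
            phase += phase_inc
    return out

# Example: 32 partials, a simple exponential-decay envelope,
# fundamental weighted most heavily.
env = lambda t: math.exp(-3.0 * t)
coeffs = [1.0] + [0.5] * 31
bases = [0.0] * 32
samples = render_voice(110.0, 480, coeffs, bases, env)
```

The nice property of this structure is that the per-sample cost per partial is just one sine evaluation and a multiply-add; the envelope is evaluated once per sample (or once per control block in a real implementation) and shared by all 32 partials.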
I haven't decided yet how to mess around with the pitch of each partial. I might do the same thing, a coefficient applied to an envelope and/or an LFO.
Of course, all of this depends on how much processing power I can get my hands on for each voice, and how well I can code it all. One processor per voice is likely going to be my approach, so the total number of voices, i.e., the number of keys on the keyboard that can be pressed and still make a sound, would depend on the number of processors I include. There would be an extra processor for the user interface and for managing voice allocation, though I have an idea that might make the second part unnecessary.
Let me know if anything needs to be clarified.