Old 10-20-2012, 06:04 PM   #3
Stretto

This is because of the pipeline used. That is why I said one would have to treat FX as processing units that can be combined: essentially, we have to do as much audio processing on the GPU as possible, so a block of audio crosses the bus once on the way in and once on the way out instead of round-tripping between every effect.
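
To make that concrete, here is a minimal CUDA sketch of what I mean by combinable units. The FX and names (fx_gain, fx_softclip, process_block) are made up for illustration; the point is just that the block is uploaded once, runs through the whole chain in GPU memory, and is downloaded once:

Code:
// Two toy FX run back-to-back on the GPU; the audio block is
// uploaded once and downloaded once instead of making a CPU
// round trip between every effect.
#include <cuda_runtime.h>
#include <math.h>

__global__ void fx_gain(float *buf, int n, float g) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] *= g;
}

__global__ void fx_softclip(float *buf, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] = tanhf(buf[i]);
}

void process_block(float *host, int n) {
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    int threads = 256, blocks = (n + threads - 1) / threads;
    fx_gain<<<blocks, threads>>>(dev, n, 0.8f);   // FX #1, stays on the GPU
    fx_softclip<<<blocks, threads>>>(dev, n);     // FX #2, still no CPU round trip
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
}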

I do not necessarily think the issue is parallelizing the audio itself but the real-time manipulation of parameters (e.g., GUI and automation). Contrary to common belief, most things can be parallelized, even some seemingly sequential FX like delays. The real question is how to do it effectively: do it wrong and you'll waste more time than by doing it sequentially.
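
A delay is a good example. A feedforward delay line has no dependency between output samples, so every sample in a block can be computed at once. Rough CUDA sketch (feedback delays take more care since the recursion has to be handled block by block, and the boundary handling here is simplified):

Code:
// Feedforward delay: out[i] = in[i] + mix * in[i - d].
// Each output sample depends only on the input, so all n
// samples can be computed in parallel.
__global__ void fx_delay(const float *in, float *out, int n,
                         int d, float mix) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // In a real plugin the first d samples would read from
        // the previous block's tail; zeros keep the sketch short.
        float delayed = (i >= d) ? in[i - d] : 0.0f;
        out[i] = in[i] + mix * delayed;
    }
}
// launched like the kernels above:
// fx_delay<<<blocks, threads>>>(in, out, n, d, 0.5f);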

Essentially, though, a paradigm shift needs to take place. "FX" would need to be written in a very specific way, and tools would need to be developed to create and debug them.

As far as latency goes, you may be right that the round trip from CPU to GPU adds enough latency to make it useless at this point. But that is not so much an indictment of parallelizing audio as of the current GPU architecture.
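
That, at least, is easy to measure. A sketch of timing a bare host-to-device-and-back round trip for one small audio block with CUDA events (no FX at all, so it isolates the transfer cost):

Code:
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    const int n = 64;                // one small low-latency audio block
    float host[n] = {0.0f};
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    cudaEventRecord(t0);
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);

    float ms;
    cudaEventElapsedTime(&ms, t0, t1);
    printf("round trip for %d samples: %.3f ms\n", n, ms);

    cudaFree(dev);
    return 0;
}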


Obviously one would have to have an intelligent way to manage the audio chain. If an input is routed to an output with no FX, there is no reason to send it to the GPU. Also, if the GPU could interact with the sound card directly, the latency would be moot. (One could do this currently with FPGAs; the issue is more about getting it done with GPUs, which are common, than with FPGAs, which are not, and things like the PowerCore are already out there.)
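
The routing part is the easy half. A hypothetical host-side sketch (channel_t and route_channel are names I made up) that only pays the GPU transfer cost when a channel actually has FX on it:

Code:
// Only channels that actually have FX inserted ever touch the GPU;
// dry channels pass straight through to the output.
void process_block(float *host, int n);  // the chained-FX sketch above

typedef struct {
    int num_fx;   // FX inserted on this channel
    float *buf;   // the channel's audio block (host memory)
    int n;        // samples in the block
} channel_t;

void route_channel(channel_t *ch) {
    if (ch->num_fx == 0)
        return;                       // no FX: never send it to the GPU
    process_block(ch->buf, ch->n);    // FX chain runs on the GPU
}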


What it all boils down to, though, is that if people don't work on it or push the technology, it won't happen. (And I imagine people are actually already working on it.)
