Old 10-22-2012, 02:49 PM   #12
Stretto
Human being with feelings
 
Join Date: Oct 2012
Posts: 147

Afraid not...

First, you want some type of innovation to take place that will change the current stagnant state, and yet you think this is not a massive problem?

Second, one does not have to adapt much of anything. It takes very little work to move the same algorithms over to GPUs. Most algorithms use a for loop, and the essential difference on the GPU is to use a parallel.for (or, for that matter, a multi-core for). One does not have to move the current effects base to the GPU immediately for it to work; it could be a gradual process. It's not as if a new "GPU based" DAW, even with a new plug-in scheme, could not still support older tech such as VSTs (although that might slow the movement down unless the new scheme is much better).
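To make the "for loop vs. parallel.for" point concrete, here is a minimal sketch of the structural change I mean, using a trivial gain effect. The thread pool here is only a stand-in: on a GPU the "workers" would be thousands of hardware threads each running the identical loop body, but the point is that the loop body itself does not change, only the iteration strategy.

```python
from concurrent.futures import ThreadPoolExecutor

# The same per-sample effect (a simple gain), written first as an
# ordinary serial loop, then split across workers. This is a sketch of
# the structural change only, not a real GPU kernel.
def gain_serial(samples, g):
    out = [0.0] * len(samples)
    for i in range(len(samples)):          # ordinary for loop
        out[i] = samples[i] * g
    return out

def gain_parallel(samples, g, workers=4):
    def chunk(lo, hi):                     # identical loop body per worker
        return [samples[i] * g for i in range(lo, hi)]
    step = max(1, len(samples) // workers)
    bounds = [(i, min(i + step, len(samples)))
              for i in range(0, len(samples), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda b: chunk(*b), bounds)
    return [s for part in parts for s in part]

x = [0.5, -0.25, 1.0, 2.0]
print(gain_serial(x, 2.0))    # [1.0, -0.5, 2.0, 4.0]
print(gain_parallel(x, 2.0))  # same result, computed in chunks
```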

Third, many people are starting to see the power the GPU has for things other than graphics. It is starting to be supported in many popular apps (MATLAB, Photoshop, Adobe Audition, etc.). The reason is that it is not much more work to reap massive benefits.

Fourth, my original question was about current movement in DAWs toward GPU processing... what I got was a bunch of nonsense about "It won't work", "We have other problems to focus on", etc. Things that had nothing to do with the original question. Either there is a movement or there is not, but your own personal feelings are irrelevant to the matter. Either there is an issue with GPUs that prevents them from working well for audio, or there isn't.

For example, Xen, you mention issues about "shuffling" the data around. This is probably the only pertinent fact that has been brought up. It is either a real issue or it is not. From what I have looked at, it probably is not, depending on how the system is designed. If we send many small blocks back and forth between the CPU and GPU repeatedly, it will not work out, because the overhead would be great. Hence, the discussion should lead to being able to do "all" the processing on the GPU. I do not know whether that is possible. On the face of it, it seems possible, but I do not know enough about the details to know if there is some real issue that prevents it.
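To show why the "many small transfers" case is the one to avoid, here is a back-of-envelope sketch. The per-transfer overhead figure (~10 us per round trip) and the 16 GB/s bandwidth are hypothetical illustration numbers I am assuming, not measurements:

```python
# Back-of-envelope: why many small CPU<->GPU transfers hurt.
# Assumed numbers (hypothetical, for illustration only): ~10 us fixed
# launch/transfer overhead per round trip, 16 GB/s bus bandwidth.
PER_TRANSFER_OVERHEAD_S = 10e-6
BUS_BANDWIDTH_BPS = 16e9

def transfer_time(block_bytes, n_transfers):
    """Total time to move n_transfers blocks, each paying the fixed overhead."""
    return n_transfers * (PER_TRANSFER_OVERHEAD_S + block_bytes / BUS_BANDWIDTH_BPS)

block = 256 * 8                               # 256 samples x 64 bits = 2 KB
per_plugin = transfer_time(block, 1000)       # one round trip per plugin, 1000 plugins
batched    = transfer_time(block * 1000, 1)   # one batched transfer for everything

print(per_plugin)  # ~0.010 s  -- dominated by the fixed per-transfer overhead
print(batched)     # ~0.00014 s -- the overhead is paid only once
```

The actual data moved is identical in both cases; only the number of round trips differs, which is exactly why keeping all the processing resident on the GPU matters.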

Fifth, I would love to spend the next 5 years of my life writing a DAW that did exactly what I wanted... I have neither the time nor the money to do that, and chances are, if I did, I wouldn't finish before someone else with more resources did. This post was not about how to write a GPU-based DAW but about whether there is any movement toward one (e.g., whether Cockos is taking advantage of the GPU with their JS fx).

My issue with the replies is that they are non-constructive and irrelevant to the original question. What does the current "stagnation" of audio technologies have to do with my question? Feeling that it might be a "waste" of time to process on the GPU is one thing (and, as I tried to point out, you're probably wrong on many levels)... but, again, what does that have to do with what I originally asked?

It's as if I asked "What time is it?" and you responded "I'm going to lunch". Did you answer my question or tell me something irrelevant?

Xen, your first reply was much more relevant, but I believe it is very inaccurate based on my experience with the GPU and my ideas. The newer PCI Express bus has a throughput of roughly 16 to 32 GB/s, which means a block of 256 64-bit samples (2 KB) takes on the order of 100 ns to transfer, yet at 256 kSPS the ADC needs a full millisecond just to capture it. Even if you slowed the bus down by a factor of 1000, you would still have plenty of wiggle room. Thousands of audio blocks being computed at any one time would add only a fraction of a millisecond from the memory transfers.

So, essentially, memory transfer times are irrelevant (obviously the real world is more complex, but this is what we could achieve if things were done right). The processing plays a much more important role. Just taking 10 extra CPU cycles to process each sample multiplies the CPU-added latency by 10. A 256-sample block would take 10*256 cycles to process; on a 1 GHz machine running 1000 such blocks, that is roughly 2.5 ms of delay. Much more significant. Processing on a GPU, which hypothetically could reduce that factor by 1000, makes it irrelevant.
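The arithmetic above can be checked in a few lines. Every input here is an assumption taken from the scenario I described (1 GHz CPU, 10 cycles per sample, 1000 concurrent 256-sample blocks, 256 kSPS, 64-bit samples, 16 GB/s bus), not a measurement:

```python
# Back-of-envelope check of the latency estimates. All inputs are the
# post's assumed figures, not measured values.
CPU_HZ = 1e9                 # 1 GHz machine
CYCLES_PER_SAMPLE = 10       # 10 extra cycles per sample
BLOCK_SAMPLES = 256
BLOCKS = 1000                # e.g. 1000 concurrent plugin instances
SAMPLE_RATE = 256e3          # 256 kSPS
BUS_BPS = 16e9               # assumed PCIe throughput
BYTES_PER_SAMPLE = 8         # 64-bit samples

cpu_time = BLOCKS * BLOCK_SAMPLES * CYCLES_PER_SAMPLE / CPU_HZ   # ~2.56 ms
adc_fill = BLOCK_SAMPLES / SAMPLE_RATE                           # 1 ms per block
bus_time = BLOCKS * BLOCK_SAMPLES * BYTES_PER_SAMPLE / BUS_BPS   # ~0.128 ms

print(cpu_time)  # 0.00256  -> CPU processing dominates
print(adc_fill)  # 0.001    -> the unavoidable ADC capture time
print(bus_time)  # 0.000128 -> transfers are the smallest term
```

The ordering is the whole argument: transfer time is the smallest term, the ADC capture time is fixed by physics, and per-sample CPU work is the part a massively parallel processor could shrink.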

These are just estimates, but what they show is: 1. With GPU-based processing we could drastically increase the sample rate and sample size for higher-quality processing (= better audio quality). 2. The memory transfer times are generally irrelevant. 3. CPU processing adds significant overhead, i.e. latency (which has a compounding effect in multi-tasking environments). 4. The practical limit on latency comes from the ADCs: you have to wait for the samples to come in no matter what, even if you could process them in 0 s.

(And what would be really cool is that you could sample-lock the input and output so that the timing is dead on. That way, phase issues due to multi-tasking environments would be irrelevant.) (Realize that multi-tasking adds "random" latency to your output, which causes small compressions and expansions in the effective sampling rate... usually too small to be noticed, but who knows...)


My main point is that neither you nor I know the real practical aspects of using such a technology. What I do know is that history is on my side. Give someone the proper tools and they'll be at their most creative. Limit them and expect them to be limited...