|
12-15-2015, 03:12 AM
|
#561
|
Human being with feelings
Join Date: Oct 2007
Location: home is where the heart is
Posts: 12,096
|
Quote:
Originally Posted by random_id
Yeah, I think that is what happened with me. I updated the VST3 SDK and had to change things for STR16 to work in IPlugVST3.cpp. I think that was the only change I made to get it working in Windows. I haven't tried the newer SDK on OS X, yet.
|
Oli, could you give us an estimate of if/when this will get looked at on your side?
Just asking (not trying to come across as impatient). If you say it'll probably take some time, no problem. I'd try it myself, but as I'm still quite noobish I'm a little afraid of breaking something else, so I'd prefer it done on your side.
Thanks.
|
|
|
12-15-2015, 06:05 AM
|
#562
|
Human being with feelings
Join Date: May 2012
Location: PA, USA
Posts: 356
|
Quote:
Originally Posted by maajki
I've replaced it, but FL Studio cannot find the VST3 plugin
|
I haven't noticed issues with VST3 and FL Studio. My VST3 versions are showing up.
Do the VST3 versions work in other hosts?
|
|
|
12-15-2015, 08:29 AM
|
#563
|
Human being with feelings
Join Date: Apr 2009
Location: Berlin, Germany
Posts: 1,248
|
There is now a branch "VST365" on github. Remember that base.xcodeproj and base_vc10.vcxproj should be reverted on git
|
|
|
12-15-2015, 04:14 PM
|
#564
|
Human being with feelings
Join Date: Dec 2015
Posts: 6
|
Oh, dear God! FL Studio only sees VST3 plugins from C:\Program Files\Common Files\VST3
Sorry guys! I thought I had to use JUCE
|
|
|
12-16-2015, 06:55 AM
|
#565
|
Human being with feelings
Join Date: Oct 2007
Location: home is where the heart is
Posts: 12,096
|
Quote:
Originally Posted by olilarkin
There is now a branch "VST365" on github. Remember that base.xcodeproj and base_vc10.vcxproj should be reverted on git
|
Thank you.
The plugin where I previously got compile errors with the VST3 version (the STR16 stuff) is building fine now.
|
|
|
12-22-2015, 08:18 PM
|
#566
|
Human being with feelings
Join Date: Dec 2015
Posts: 6
|
Is anyone else suffering from slow Visual Studio 2013? I can't believe I cannot type normally in the code editor. I've tried many optimizations, but it's still too slow. Any suggestions?
|
|
|
12-23-2015, 01:41 PM
|
#567
|
Human being with feelings
Join Date: Jul 2008
Location: The Netherlands
Posts: 3,646
|
Not really, except maybe to avoid VS as much as possible... I use Notepad++ to edit source files, and only use VS for project management (and even then only if I have to; most of the time I compile from the command line using makefiles).
|
|
|
12-23-2015, 05:52 PM
|
#568
|
Human being with feelings
Join Date: Dec 2015
Posts: 6
|
Quote:
Originally Posted by Tale
Not really, except maybe to avoid VS as much as possible... I use Notepad++ to edit source files, and only use VS for project management (and even then only if I have to; most of the time I compile from the command line using makefiles).
|
Hm... I've tried Code::Blocks. It looks like it works, even with MinGW-w64.
|
|
|
12-23-2015, 07:52 PM
|
#569
|
Human being with feelings
Join Date: May 2012
Location: PA, USA
Posts: 356
|
Quote:
Originally Posted by maajki
Is anyone else suffering from slow Visual Studio 2013? I can't believe I cannot type normally in the code editor. I've tried many optimizations, but it's still too slow. Any suggestions?
|
Is it the IDE that is slow, or is it the compiled binary?
I am using VS 2015 and compiling with the v120_xp platform toolset. It appears to be working well (for me, at least). It may take a minute to scan all the files for IntelliSense, but I haven't had any issues with the IDE.
|
|
|
12-24-2015, 12:10 AM
|
#570
|
Human being with feelings
Join Date: Jul 2008
Location: The Netherlands
Posts: 3,646
|
To be clear: I was talking about the IDE, I do use the VS compiler (which is very good IMHO).
|
|
|
12-26-2015, 10:23 AM
|
#571
|
Human being with feelings
Join Date: Dec 2015
Posts: 6
|
Quote:
Originally Posted by Tale
To be clear: I was talking about the IDE, I do use the VS compiler (which is very good IMHO).
|
Visual Studio 2015 is far faster than 2013. Sadly, 64-bit GCC doesn't really work with WDL-OL.
|
|
|
12-30-2015, 07:02 AM
|
#572
|
Human being with feelings
Join Date: Oct 2007
Location: home is where the heart is
Posts: 12,096
|
copy VST2 versions via post-build ?
VST3 builds are automatically copied to a custom folder I set up in common.props. This is nice.
Can I do the same with VST2 builds ?
I see a <COPY_VST2>0</COPY_VST2> entry there and thought maybe setting this to '1' instead would do the trick, but it doesn't.
|
|
|
01-06-2016, 10:48 AM
|
#573
|
Human being with feelings
Join Date: Dec 2015
Posts: 18
|
IPlug and latency
I've read some past posts on latency and have some lingering questions. I'm trying to understand how the host program calls processDoubleReplacing. Say we have a sample rate of 44100 and a buffer (nFrames) of 1000. Then the host needs to call processDoubleReplacing at least 44.1 times per second...or every ~23ms, to have a continuous stream of audio. But it could call it faster. It could call it as fast as the cpu allows...which probably depends on track count, plugin count, etc. Based on the past posts I was reading, I got the impression that this is actually what occurs - the host calls process as fast as it can and then buffers the result before going to the audio output (?).
Let's say you have a plugin with 200ms of latency...i.e. it takes you 200ms to process 1 frame of audio (1000 samples to stick with the example above). Set 1 comes in first (s1_in) and processing begins. My understanding is that process must return more or less immediately so process must return 0 or passthrough for now. Meanwhile s1 is processing in another thread. Likewise for s2_in, etc. The idea is that when s1_in is finally processed and available (call this s1_out), then the next process call will input the next set at that time...which should be about s8_in*, but it will output s1_out. Next will be s9_in and s2_out, etc.
*Back to how the host calls process...if it calls it as fast as possible, then s1...sn happen very fast and all are silence/passthrough output. (Meanwhile there are n samples processing). Ultimately, when s1_out is ready...you could be on s50_in. Now s50_in with s1_out returned represents much more than 200ms of latency in the output audio stream.
I hope I described that clearly enough...this is hard to put into words. If process is called approximately every 23ms, then the latency will match the 200ms of processing time. If process is called "as fast as possible", then the latency could be quite a bit larger, which is not good for anyone. So, I'm hoping to hear that I was just confused by the older posts and that process is really called "as slow as the sample rate/buffer combo will allow" (in this case every 23ms). Thanks in advance for any help.
Last edited by bmelonhead; 01-06-2016 at 10:54 AM.
|
|
|
01-06-2016, 10:55 AM
|
#574
|
Human being with feelings
Join Date: Jul 2006
Location: Cowtown
Posts: 1,562
|
Quote:
Originally Posted by bmelonhead
I've read some past posts on latency and have some lingering questions. I'm trying to understand how the host program calls processDoubleReplacing. Say we have a sample rate of 44100 and a buffer (nFrames) of 1000. Then the host needs to call processDoubleReplacing at least 44.1 times per second...or every ~23ms, to have a continuous stream of audio. But it could call it faster. It could call it as fast as the cpu allows...which probably depends on track count, plugin count, etc. Based on the past posts I was reading, I got the impression that this is actually what occurs - the host calls process as fast as it can and then buffers the result before going to the audio output (?).
Let's say you have a plugin with 200ms of latency...i.e. it takes you 200ms to process 1 frame of audio (1000 samples to stick with the example above). Set 1 comes in first (s1_in) and processing begins. My understanding is that process must return more or less immediately so process must return 0 or passthrough for now. Meanwhile s1 is processing in another thread. Likewise for s2_in, etc. The idea is that when s1_in is finally processed and available (call this s1_out), then the next process call will input the next set at that time...which should be about s8_in*, but it will output s1_proc. Next will be s9_in and s2_proc out, etc.
*Back to how the host calls process...if it calls it as fast as possible, then s1...sn happen very fast and all are silence/passthrough output. (Meanwhile there are n samples processing). Ultimately, when s1_proc is ready...you could be on s50_in. Now s50_in with s1_proc returned represents much more than 200ms of latency in the output audio stream.
I hope I described that clearly enough...this is hard to put into words. If process is called approximately every 23ms, then the latency will match the 200ms of processing time. If process is called "as fast as possible", then the latency could be quite a bit larger, which is not good for anyone. So, I'm hoping to hear that I was just confused by the older posts and that process is really called "as slow as the sample rate/buffer combo will allow" (in this case every 23ms). Thanks in advance for any help.
|
You're mistaking latency for processing time, if I'm reading you correctly. Latency is often (almost always) used to "look at" the data ahead of processing. An example would be a look-ahead limiter that "sees" the audio some number of milliseconds ahead of where it's actually processing, then can gradually increase gain reduction ahead of a transient rather than clamping down drastically after the transient has already passed. If a plugin takes more than 1000 frames' worth of time to process 1000 frames, it's not going to work in real time, period. The host calculates the sum of reported latency in each stream and uses that to line everything back up in time before final rendering/output.
Scott
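Scott's look-ahead idea can be sketched in a few lines of C++. This is a hypothetical illustration only (the name `LimitBlock` and the block-based calling convention are made up, not IPlug API, and a real limiter would smooth the gain over time instead of recomputing a hard scale per sample): the output is the input delayed by the look-ahead length, scaled by a gain computed from samples the listener has not heard yet.

```cpp
#include <algorithm>
#include <cmath>
#include <deque>
#include <vector>

// Hypothetical look-ahead limiter sketch: output is the input delayed by
// `lookahead` samples, scaled by a gain derived from the peak over the
// whole delay window - so gain reduction can begin *before* a transient
// reaches the output. A real limiter would smooth the gain envelope.
std::vector<double> LimitBlock(std::deque<double>& delay,
                               const std::vector<double>& in,
                               size_t lookahead, double ceiling)
{
  std::vector<double> out;
  for (double s : in)
  {
    delay.push_back(s);
    // While the delay line fills, emit silence; this is exactly the
    // latency the plugin would report to the host.
    if (delay.size() <= lookahead) { out.push_back(0.0); continue; }
    double peak = 0.0;
    for (double d : delay) peak = std::max(peak, std::fabs(d));
    double gain = (peak > ceiling) ? ceiling / peak : 1.0;
    out.push_back(delay.front() * gain);
    delay.pop_front();
  }
  return out;
}
```

Note that the processing itself still completes well within the callback; the look-ahead costs memory and reported latency, not wall-clock time.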
|
|
|
01-06-2016, 11:09 AM
|
#575
|
Human being with feelings
Join Date: Sep 2009
Posts: 623
|
Quote:
Originally Posted by bmelonhead
Let's say you have a plugin with 200ms of latency...i.e. it takes you 200ms to process 1 frame of audio (1000 samples to stick with the example above). Set 1 comes in first (s1_in) and processing begins. My understanding is that process must return more or less immediately so process must return 0 or passthrough for now. Meanwhile s1 is processing in another thread. Likewise for s2_in, etc. The idea is that when s1_in is finally processed and available (call this s1_out), then the next process call will input the next set at that time...which should be about s8_in*, but it will output s1_out. Next will be s9_in and s2_out, etc.
|
A plugin that requires latency doesn't require it because it takes a long time to process the audio; it requires it because it generally needs a certain number of samples to be able to do its thing correctly.
If I'm understanding you correctly, s1 is not necessarily processing in another thread. It's not necessarily doing anything. If your plugin requires some amount of latency, all it has to do is operate on a different buffer that you store up from the incoming stream.
If you require 1000 samples of audio to do something and the input stream is only providing you with 10, your plugin should start storing the incoming stream in a new buffer so that it can build up to 1000 samples. Once that buffer is big enough, you just start operating on it as if you were processing the normal incoming stream. The only difference is that you know exactly the size of your buffer. It increases the latency by the size of your new buffer, and you report that to your DAW.
Extra latency doesn't require extra threads, just extra memory.
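The buffering scheme described above can be sketched like this (a hypothetical `LatencyBuffer` class, not anything from IPlug; the "processing" is identity pass-through to keep it short, since the point is the bookkeeping, which needs memory rather than threads):

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// Hypothetical sketch of "store up a bigger buffer": incoming blocks
// accumulate in a FIFO; the plugin emits silence until `window` extra
// samples are buffered, then streams delayed samples out. `window` is
// what the plugin would report to the host as its latency.
class LatencyBuffer
{
public:
  explicit LatencyBuffer(size_t window) : mWindow(window) {}

  std::vector<double> Process(const std::vector<double>& in)
  {
    for (double s : in) mFifo.push_back(s);
    std::vector<double> out(in.size(), 0.0); // silence while priming
    if (mFifo.size() >= mWindow + in.size())
    {
      for (size_t i = 0; i < in.size(); ++i)
      {
        out[i] = mFifo.front(); // real DSP on the windowed data goes here
        mFifo.pop_front();
      }
    }
    return out;
  }

private:
  size_t mWindow; // reported latency in samples
  std::deque<double> mFifo;
};
```

With window = 1000 and 10-sample blocks, the first 100 calls return silence; from then on every call returns the input delayed by exactly 1000 samples.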
|
|
|
01-06-2016, 11:11 AM
|
#576
|
Human being with feelings
Join Date: Jul 2006
Location: Cowtown
Posts: 1,562
|
Quote:
Originally Posted by bozmillar
A plugin that requires latency doesn't require it because it takes a long time to process the audio; it requires it because it generally needs a certain number of samples to be able to do its thing correctly.
If I'm understanding you correctly, s1 is not necessarily processing in another thread. It's not necessarily doing anything. If your plugin requires some amount of latency, all it has to do is operate on a different buffer that you store up from the incoming stream.
If you require 1000 samples of audio to do something and the input stream is only providing you with 10, your plugin should start storing the incoming stream in a new buffer so that it can build up to 1000 samples. Once that buffer is big enough, you just start operating on it as if you were processing the normal incoming stream. The only difference is that you know exactly the size of your buffer. It increases the latency by the size of your new buffer, and you report that to your DAW.
Extra latency doesn't require extra threads, just extra memory.
|
This. Better explanation than mine.
Scott
|
|
|
01-06-2016, 11:18 AM
|
#577
|
Human being with feelings
Join Date: Dec 2015
Posts: 18
|
Quote:
Originally Posted by bozmillar
A plugin that requires latency doesn't require it because it takes a long time to process the audio; it requires it because it generally needs a certain number of samples to be able to do its thing correctly.
If I'm understanding you correctly, s1 is not necessarily processing in another thread. It's not necessarily doing anything. If your plugin requires some amount of latency, all it has to do is operate on a different buffer that you store up from the incoming stream.
If you require 1000 samples of audio to do something and the input stream is only providing you with 10, your plugin should start storing the incoming stream in a new buffer so that it can build up to 1000 samples. Once that buffer is big enough, you just start operating on it as if you were processing the normal incoming stream. The only difference is that you know exactly the size of your buffer. It increases the latency by the size of your new buffer, and you report that to your DAW.
Extra latency doesn't require extra threads, just extra memory.
|
Uh oh...this is not sounding good. Yes, what I'm trying to do really does take up time...so my thread is busy for that time. I was expecting this to be common/normal for plugins that have latency. I didn't realize they just needed to look ahead.
|
|
|
01-06-2016, 01:10 PM
|
#578
|
Human being with feelings
Join Date: Dec 2015
Posts: 18
|
Quote:
Originally Posted by bmelonhead
Uh oh...this is not sounding good. Yes, what I'm trying to do really does take up time...so my thread is busy for that time. I was expecting this to be common/normal for plugins that have latency. I didn't realize they just needed to look ahead.
|
Looking at both sides of this problem...the audio host needs to buffer some output audio to ensure that the output stream is not interrupted by some minor CPU glitch. So, it needs to call at full speed (or at least faster than the audio stream) for some amount of time.
However, the plugin knob controls are supposedly real time. So there must be limits on the buffering...otherwise the user's knob changes wouldn't affect the audio he/she is hearing. I guess I need to experiment and profile this...probably on multiple hosts. Thanks for the replies.
|
|
|
01-06-2016, 02:56 PM
|
#579
|
Human being with feelings
Join Date: Feb 2007
Location: Oulu, Finland
Posts: 8,062
|
Quote:
Originally Posted by bmelonhead
Looking at both sides of this problem...the audio host needs to buffer some output audio to ensure that the output stream is not interrupted by some minor CPU glitch. So, it needs to call at full speed (or at least faster than the audio stream) for some amount of time.
However, the plugin knob controls are supposedly real time. So there must be limits on the buffering...otherwise the user's knob changes wouldn't affect the audio he/she is hearing. I guess I need to experiment and profile this...probably on multiple hosts. Thanks for the replies.
|
Typically hosts don't do any extra buffering for time critical signals in the path if possible. And indeed, if for example some plugin cocks up and spends too much time in the audio callback, a glitch will result. This is a risk the end user has to take if he wants the lowest possible latency for stuff like monitoring live audio. However, for example Reaper actually can do quite a lot of buffering for non-time critical signals. (The so called "anticipative rendering" that helps Reaper a lot in multithreading.)
I sense some sort of an XY problem here. You have decided you must do something (Y) to solve a problem and are seeking solutions/help for that, while we know nothing about your actual problem (X). Can you explain in more detail what it is that you actually want to do?
__________________
I am no longer part of the REAPER community. Please don't contact me with any REAPER-related issues.
|
|
|
01-06-2016, 04:53 PM
|
#580
|
Human being with feelings
Join Date: Dec 2015
Posts: 18
|
Quote:
Originally Posted by Xenakios
Typically hosts don't do any extra buffering for time critical signals in the path if possible. And indeed, if for example some plugin cocks up and spends too much time in the audio callback, a glitch will result. This is a risk the end user has to take if he wants the lowest possible latency for stuff like monitoring live audio. However, for example Reaper actually can do quite a lot of buffering for non-time critical signals. (The so called "anticipative rendering" that helps Reaper a lot in multithreading.)
I sense some sort of an XY problem here. You have decided you must do something (Y) to solve a problem and are seeking solutions/help for that, while we know nothing about your actual problem (X). Can you explain in more detail what it is that you actually want to do?
|
I have a plugin where the processing for a chunk of audio data (nFrames) will take longer than (nFrames/sample_rate), so it cannot keep up with the incoming audio stream. But each chunk can be processed in parallel, so I'm putting that processing into other thread(s). I expected this to be very common...because even potentially simple processing runs the risk of being overrun at, say...nFrames = 128, sample_rate = 192000 (on an old CPU?).
The offloading to another thread will create latency. My point is that this latency will be minimized if the incoming audio is arriving at the real time audio stream rate (i.e. once every nFrames/sample_rate seconds). If it arrives "as fast as possible", then latency could be quite bad for the reasons in my original post.
However, it seems you are saying that "time critical" operations run in real time (much to my relief)...which to me means that data arrives (ProcessDoubleReplacing is called) approximately every nFrames/sample_rate seconds. It makes sense...to get low latency for recording and a real-time feel for knob changes in the GUI.
|
|
|
01-06-2016, 05:19 PM
|
#581
|
Human being with feelings
Join Date: Sep 2009
Posts: 623
|
When you say time, are you referring to CPU cycles or actual time?
|
|
|
01-06-2016, 06:59 PM
|
#582
|
Human being with feelings
Join Date: Dec 2015
Posts: 18
|
Quote:
Originally Posted by bozmillar
When you say time, are you referring to CPU cycles or actual time?
|
Well...both. I guess it's easiest to think of it as actual time. Think of the processing I need to do as a multi-stage thing - a pipeline. First the audio data chunk (nFrames) must be processed with process1, then process2, then process3. The entire pipeline takes more than (nFrames/sample_rate) seconds. However, each stage can have audio being processed simultaneously. So, if each of the following lines represents about (nFrames/sample_rate) of time, here are the states of the pipeline:
p1(chunk1) p2() p3()
p1(chunk2) p2(chunk1) p3()
p1(chunk3) p2(chunk2) p3(chunk1)
p1(chunk4) p2(chunk3) p3(chunk2) ...chunk1 finally ready!
p1..p3 are in other threads. So, as chunk5 is arriving on the main plugin thread, chunk1 is ready and passed back to that thread.
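The pipeline states above can be sketched as a small data structure (illustrative names only; a real version would run each stage's DSP, potentially on worker threads, rather than just shifting buffers):

```cpp
#include <array>
#include <optional>
#include <vector>

// Hypothetical sketch of the staged pipeline described above. Each call
// to Tick() models one audio callback: the new chunk enters stage 1,
// everything shifts one stage along, and whatever falls out of the last
// stage is returned (silence until the pipeline has filled). The implied
// plugin latency is kStages * nFrames samples.
constexpr int kStages = 3;

struct Pipeline
{
  std::array<std::optional<std::vector<double>>, kStages> stages; // p1..p3

  std::vector<double> Tick(const std::vector<double>& in)
  {
    std::optional<std::vector<double>> out = stages[kStages - 1];
    for (int i = kStages - 1; i > 0; --i) stages[i] = stages[i - 1];
    stages[0] = in; // a real plugin would also kick off stage DSP here
    return out ? *out : std::vector<double>(in.size(), 0.0);
  }
};
```

Feeding chunk1..chunk4 reproduces the table above: the first three calls return silence, and chunk1 emerges on the fourth call.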
|
|
|
01-19-2016, 03:04 AM
|
#583
|
Human being with feelings
Join Date: Dec 2015
Posts: 6
|
Code:
error: '/Users/maajki/Documents/Code/wdl-maa/IPlugExamples/maaSynth/build-mac/app/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/English.lproj/InfoPlist.strings' is longer than filepath buffer size (1025).
XCode 7.2 ???
Last edited by Jeffos; 01-22-2016 at 10:16 AM.
Reason: this message was causing problems with the formatting
|
|
|
01-22-2016, 04:56 AM
|
#584
|
Human being with feelings
Join Date: Jan 2016
Posts: 6
|
EDIT: Whoops, what I said here was wrong. Removing it as it doesn't add anything to the discussion.
Last edited by Crazy Eye Joe; 01-22-2016 at 05:57 AM.
|
|
|
01-22-2016, 05:08 AM
|
#585
|
Human being with feelings
Join Date: Apr 2009
Location: Berlin, Germany
Posts: 1,248
|
Formatting is really messed up on this page.
RE IPlugSideChain: negative array indexes are definitely not the solution!
|
|
|
01-22-2016, 06:07 AM
|
#586
|
Human being with feelings
Join Date: Jan 2016
Posts: 6
|
Okay, I've retested, and here's what I've found (testing only with VST3, on the VST3PluginTestHost, basing the plug-in on IPlugSideChain):
The following code:
Code:
void SideChain::ProcessDoubleReplacing(double** inputs, double** outputs, int nFrames)
{
  for (int i = 0; i < nFrames; i++)
  {
    outputs[0][i] = inputs[0][i];
    outputs[1][i] = inputs[1][i];
  }
}
Results in the following behaviour: I get output if I receive something on the AUX channel, and I also get output if I receive something on the main channel. However, what I expect is that I only get output when receiving on the main channel.
The following:
Code:
void SideChain::ProcessDoubleReplacing(double** inputs, double** outputs, int nFrames)
{
  for (int i = 0; i < nFrames; i++)
  {
    outputs[0][i] = inputs[2][i];
    outputs[1][i] = inputs[3][i];
  }
}
Results in silence regardless of whether or not I'm receiving anything on the AUX channel. Here I expect output only if I receive on the AUX channel.
Am I missing something?
Last edited by Crazy Eye Joe; 01-22-2016 at 06:09 AM.
Reason: Clarification
|
|
|
01-22-2016, 06:12 AM
|
#587
|
Human being with feelings
Join Date: Apr 2009
Location: Berlin, Germany
Posts: 1,248
|
IIRC VST3PluginTestHost is buggy RE sidechain... I need to test it.
|
|
|
01-22-2016, 06:39 AM
|
#588
|
Human being with feelings
Join Date: Jan 2016
Posts: 6
|
I see, in that case I will focus on the VST2 implementation, which is the one that I care about at the moment anyway. I just tested my code in Ableton Live, using VST2, and it worked as intended.
However, in that case I have a different problem. My plug-in requires an FFT. For simplicity, I decided to start out with WDL_fft(), since it's provided.
This compiles just fine in VST3, but when I try to compile to VST2 I get the following output from Visual Studio:
Code:
3>LiveConv.obj : error LNK2019: unresolved external symbol _WDL_fft_complexmul referenced in function "private: void __thiscall LiveConv::FreqMult(double * *,double * *,int)" (?FreqMult@LiveConv@@AAEXPAPAN0H@Z)
3>LiveConv.obj : error LNK2019: unresolved external symbol _WDL_fft referenced in function "private: void __thiscall LiveConv::FreqMult(double * *,double * *,int)" (?FreqMult@LiveConv@@AAEXPAPAN0H@Z)
What's up with that? The way I added the FFT was to add fft.h and fft.c to the "base" solution, and then I put #include "fft.h" in the main .cpp file. Is that the wrong way to go about it?
|
|
|
02-02-2016, 07:26 AM
|
#589
|
Human being with feelings
Join Date: Jul 2006
Location: Cowtown
Posts: 1,562
|
Quote:
Originally Posted by Crazy Eye Joe
I see, in that case I will focus on the VST2 implementation, which is the one that I care about at the moment anyway. I just tested my code in Ableton Live, using VST2, and it worked as intended.
However, in that case I have a different problem. My plug-in requires an FFT. For simplicity, I decided to start out with WDL_fft(), since it's provided.
This compiles just fine in VST3, but when I try to compile to VST2 I get the following output from Visual Studio:
Code:
3>LiveConv.obj : error LNK2019: unresolved external symbol _WDL_fft_complexmul referenced in function "private: void __thiscall LiveConv::FreqMult(double * *,double * *,int)" (?FreqMult@LiveConv@@AAEXPAPAN0H@Z)
3>LiveConv.obj : error LNK2019: unresolved external symbol _WDL_fft referenced in function "private: void __thiscall LiveConv::FreqMult(double * *,double * *,int)" (?FreqMult@LiveConv@@AAEXPAPAN0H@Z)
What's up with that? The way I added the FFT was to add fft.h and fft.c to the "base" solution, and then I put #include "fft.h" in the main .cpp file. Is that the wrong way to go about it?
|
Yes, that's the wrong way...include fft.h in your source as you normally would - if your include paths are typical for an IPlug project, it will get picked up as normal. You can add it to the projects for self-documentation purposes, but it's not required as long as it's found in the include search path. You must, however, add the fft.c file to each of the subprojects: adding it to the VST3 project will not make it link for VST2, etc. On Mac it's a little less confusing, since you add the file once and say which targets it should compile/link for.
That's how it works for me, at least...
|
|
|
02-26-2016, 08:17 AM
|
#590
|
Human being with feelings
Join Date: Oct 2007
Location: home is where the heart is
Posts: 12,096
|
Quote:
Originally Posted by nofish
VST3 builds are automatically copied to a custom folder I set up in common.props. This is nice.
Can I do the same with VST2 builds ?
I see a <COPY_VST2>0</COPY_VST2> entry there and thought maybe setting this to '1' instead would do the trick, but it doesn't.
|
Anyone, please?
How do I automatically copy the VST2 build to a custom folder via a post-build script?
I'm probably missing something simple, but I don't know where to look...
|
|
|
02-26-2016, 08:37 AM
|
#591
|
Human being with feelings
Join Date: Jul 2008
Location: The Netherlands
Posts: 3,646
|
Quote:
Originally Posted by nofish
Anyone, please?
How do I automatically copy the VST2 build to a custom folder via a post-build script?
I'm probably missing something simple, but I don't know where to look...
|
Open the solution, right-click on <InsertNameOfPlugHere>-vst2, click on Properties, then navigate to Configuration Properties > Build Events > Post-Build Event.
|
|
|
02-26-2016, 01:08 PM
|
#592
|
Human being with feelings
Join Date: Oct 2007
Location: home is where the heart is
Posts: 12,096
|
Quote:
Originally Posted by Tale
Open the solution, right-click on <InsertNameOfPlugHere>-vst2, click on Properties, then navigate to Configuration Properties > Build Events > Post-Build Event.
|
Thanks Tale, that helped.
I've now copied the Post-Build event entry from the VST3 project to the VST2 project (modified for VST2), and it works.
Code:
echo Post-Build: copy 32bit binary to 32bit VST2 Plugins folder ... ...
copy /y "$(TargetPath)" "$(VST2_32_PATH)\myVST.dll"
An additional question: with VST3 plugins this works by default for all projects, without having to set it up explicitly as above. Is there any way to also do that for VST2?
Last edited by nofish; 02-26-2016 at 01:37 PM.
|
|
|
02-27-2016, 02:07 AM
|
#593
|
Human being with feelings
Join Date: Jul 2008
Location: The Netherlands
Posts: 3,646
|
Quote:
Originally Posted by nofish
An additional question: with VST3 plugins this works by default for all projects, without having to set it up explicitly as above. Is there any way to also do that for VST2?
|
Well, if you would duplicate this specific project, then the duplicated project would also have VST2 set up, just like VST3...
|
|
|
02-27-2016, 07:21 AM
|
#594
|
Human being with feelings
Join Date: Oct 2007
Location: home is where the heart is
Posts: 12,096
|
Quote:
Originally Posted by Tale
Well, if you would duplicate this specific project, then the duplicated project would also have VST2 set up, just like VST3...
|
Doh, you're right.
Thanks.
|
|
|
04-17-2016, 09:20 PM
|
#595
|
Human being with feelings
Join Date: Dec 2015
Posts: 18
|
IPlug and latency
I asked about latency a few months back but never got a full understanding...then I gave up and worked on other things. Now I'm revisiting. With the latency hooks in iPlug, I want to accomplish the following things:
(1) have alignment with other tracks...if the audio in my plugin track were exactly copied to another track without my plugin, and both were then played simultaneously, I want them to be in phase/sync.
(2) for the track that has my plugin, I want it to play seamlessly without jumping or popping when the plugin is enabled/bypassed repeatedly
(3) (less important) if I get two input streams from a stereo source file, but my plugin is purely mono, I would like to apply my effect to just, say, the left channel and have the right channel pass through. In this case, I want both channels to be in phase/sync at the output of the plugin
My plugin has a "pipeline" where samples are processed in stages. Each stage can work in parallel on successive blocks of samples. An example three stage pipeline would look like
empty -> empty -> empty
ProcessDoubleReplacing(): block1 -> empty -> empty
ProcessDoubleReplacing(): block2 -> block1 -> empty
ProcessDoubleReplacing(): block3 -> block2 -> block1
ProcessDoubleReplacing(): block4 -> block3 -> block2 (block 1 ready for output)
It appears that if I set PLUGIN_LATENCY to some number of samples, then at the start of playback (or any time Reset() is called) the host (my only experience is with Reaper so far) calls ProcessDoubleReplacing however many times are necessary to supply PLUGIN_LATENCY samples, and it does this at full speed - i.e. without the usual pacing at the sample rate. Since ProcessDoubleReplacing only supplies BlockSize samples per call...it may overshoot PLUGIN_LATENCY.
Example
PLUGIN_LATENCY = 2049
BlockSize = 1024
then ProcessDoubleReplacing will be called three times at full speed to supply 3072 samples. After that, ProcessDoubleReplacing continues as normal, supplying 1024 samples per call at the real-time rate.
So, clearly the host is trying to help my plugin deal with latency by giving me a big chunk of samples up front. I'm just not sure how to make use of it. The fundamental thing that keeps nagging at me is that every call to ProcessDoubleReplacing must output something, and presumably that something is played back by the host and so I'm not gaining anything by getting a rush of ProcessDoubleReplacing calls at the start of playback. Unless maybe those first calls (first three calls in the example above) are not actually played back and sent to the speakers. I'd appreciate any explanation about how this works and how it accomplishes (1) and (2) above...even if you ignore the pipeline thing and just give an example with a simple low pass filter that uses a window of samples. <edit> If it matters...all of the above is Mac/OS X/AU plugin.
Last edited by bmelonhead; 04-17-2016 at 09:32 PM.
|
|
|
05-10-2016, 06:13 PM
|
#596
|
Human being with feelings
Join Date: Jan 2012
Posts: 104
|
Draw mouse coordinates when IGraphics::mShowControlBounds is true
I had some problems when IGraphics::mShowControlBounds is true: the redraw is locked after a PromptUserInput().
This proposed change seems to work well; code is in the comment.
https://github.com/olilarkin/wdl-ol/...a397f23008b42b
Last edited by Tronic; 05-12-2016 at 10:58 AM.
|
|
|
06-06-2016, 07:38 AM
|
#597
|
Human being with feelings
Join Date: Mar 2014
Posts: 37
|
flush FX or synthesizer in Reaper
Hi,
over at KVR I am having a discussion about the PG-8X not responding correctly when transport stops, and in particular about the flush FX option.
I don't think I understand what I, as a VSTi developer, should do to handle 'flush FX' correctly.
Which function does Reaper actually call when 'flushing' an effect or synth? Is it the dispatcher opcode effStopProcess? This does not seem to be implemented in WDL-OL (at least in my older copy of it).
Thanks,
Martin
|
|
|
08-07-2016, 06:49 AM
|
#598
|
Human being with feelings
Join Date: Mar 2014
Posts: 37
|
OK. Looks like I figured out the transport issue.
But now I've found a different problem: using "Set Point Value" in Reaper does not work. If I specify a value, it always reverts to the value of the next point.
Could anybody shed some light on how that process is supposed to work from a plugin's point of view? Which function should be called? I was assuming it's just the VSTSetParameter callback.
Any ideas what could possibly be going wrong?
Thanks,
Martin
|
|
|
11-26-2016, 11:36 AM
|
#599
|
Human being with feelings
Join Date: Mar 2014
Posts: 37
|
Quote:
Originally Posted by MLVST
OK. Looks like I figured out the transport issue.
But now I've found a different problem: using "Set Point Value" in Reaper does not work. If I specify a value, it always reverts to the value of the next point.
Could anybody shed some light on how that process is supposed to work from a plugin's point of view? Which function should be called? I was assuming it's just the VSTSetParameter callback.
Any ideas what could possibly be going wrong?
Thanks,
Martin
|
Hi! I am just wondering whether this forum is still alive? Unfortunately, I still haven't managed to figure out the reason for the bug in my plugin, and any help would be highly appreciated.
Thanks,
Martin
|
|
|
11-27-2016, 03:35 AM
|
#600
|
Human being with feelings
Join Date: Jul 2008
Location: The Netherlands
Posts: 3,646
|
Quote:
Originally Posted by MLVST
Hi! I am just wondering whether this forum is still alive?
|
It sure is!
Quote:
Originally Posted by MLVST
Unfortunately, I still haven't managed to figure out the reason for the bug in my plugin, and any help would be highly appreciated.
|
I can't really help you there, because I have no idea what exactly "Set Point Value" is...
|
|
|