Old 12-15-2015, 03:12 AM   #561
nofish
Human being with feelings
 
 
Join Date: Oct 2007
Location: home is where the heart is
Posts: 12,096
Default

Quote:
Originally Posted by random_id
Yeah, I think that is what happened with me. I updated the VST3 SDK and had to change things for STR16 to work in IPlugVST3.cpp. I think that was the only change I made to get it working in Windows. I haven't tried the newer SDK on OS X, yet.
Oli, could you give us an estimate of if/when this will get looked at on your side?

Just asking (not trying to come across as impatient); if you say it'll probably take some time, no problem. I'd try it myself, but as I'm still quite noobish I'm a little afraid of breaking something else, so I'd prefer it to be done on your side.

Thanks.
Old 12-15-2015, 06:05 AM   #562
random_id
Human being with feelings
 
 
Join Date: May 2012
Location: PA, USA
Posts: 356
Default

Quote:
Originally Posted by maajki
I've replaced it, but FL Studio cannot find the vst3 plugin
I haven't noticed issues with vst3 and FL Studio. My vst3 versions are showing up.

Do the vst3 versions work in other hosts?
__________________
Website: LVC-Audio
Old 12-15-2015, 08:29 AM   #563
olilarkin
Human being with feelings
 
Join Date: Apr 2009
Location: Berlin, Germany
Posts: 1,248
Default

There is now a "VST365" branch on GitHub. Remember that base.xcodeproj and base_vc10.vcxproj should be reverted in git.
__________________
VirtualCZ | Endless Series | iPlug2 | Linkedin | Facebook
Old 12-15-2015, 04:14 PM   #564
maajki
Human being with feelings
 
Join Date: Dec 2015
Posts: 6
Default

Oh, Dear God! FL Studio only sees vst3 plugins in C:\Program Files\Common Files\VST3

Sorry guys! I thought I had to use JUCE.
Old 12-16-2015, 06:55 AM   #565
nofish
Human being with feelings
 
 
Join Date: Oct 2007
Location: home is where the heart is
Posts: 12,096
Default

Quote:
Originally Posted by olilarkin
There is now a "VST365" branch on GitHub. Remember that base.xcodeproj and base_vc10.vcxproj should be reverted in git.
Thank you.
The plugin I was working on that previously gave compile errors in the VST3 version (the STR16 stuff) is building fine now.
Old 12-22-2015, 08:18 PM   #566
maajki
Human being with feelings
 
Join Date: Dec 2015
Posts: 6
Default

Is anyone else suffering from how slow Visual Studio 2013 is? I can't believe I can't type normally in the code editor. I've tried many optimizations, but it's still too slow. Any suggestions?
Old 12-23-2015, 01:41 PM   #567
Tale
Human being with feelings
 
 
Join Date: Jul 2008
Location: The Netherlands
Posts: 3,646
Default

Not really, except maybe to avoid VS as much as possible... I use Notepad++ to edit source files, and only use VS for project management (and even then only if I have to; most of the time I compile from the command line using makefiles).
Old 12-23-2015, 05:52 PM   #568
maajki
Human being with feelings
 
Join Date: Dec 2015
Posts: 6
Default

Quote:
Originally Posted by Tale
Not really, except maybe to avoid VS as much as possible... I use Notepad++ to edit source files, and only use VS for project management (and even then only if I have to; most of the time I compile from the command line using makefiles).
Hm... I've tried Code::Blocks. It looks like it works, even with mingw64.
Old 12-23-2015, 07:52 PM   #569
random_id
Human being with feelings
 
 
Join Date: May 2012
Location: PA, USA
Posts: 356
Default

Quote:
Originally Posted by maajki View Post
Is anyone else suffering from how slow Visual Studio 2013 is? I can't believe I can't type normally in the code editor. I've tried many optimizations, but it's still too slow. Any suggestions?
Is it the IDE that is slow, or is it the compiled binary?
I am using VS 2015 and compiling with the v120_xp platform toolset. It appears to be working well (for me, at least). It may take a minute to scan all the files for IntelliSense, but I haven't had any issues with the IDE.
__________________
Website: LVC-Audio
Old 12-24-2015, 12:10 AM   #570
Tale
Human being with feelings
 
 
Join Date: Jul 2008
Location: The Netherlands
Posts: 3,646
Default

To be clear: I was talking about the IDE; I do use the VS compiler (which is very good, IMHO).
Old 12-26-2015, 10:23 AM   #571
maajki
Human being with feelings
 
Join Date: Dec 2015
Posts: 6
Default

Quote:
Originally Posted by Tale View Post
To be clear: I was talking about the IDE; I do use the VS compiler (which is very good, IMHO).
Visual Studio 2015 is far faster than 2013. Sadly, 64-bit GCC doesn't really work with wdl-ol.
Old 12-30-2015, 07:02 AM   #572
nofish
Human being with feelings
 
 
Join Date: Oct 2007
Location: home is where the heart is
Posts: 12,096
Default copy VST2 versions via post-build?

VST3 builds are automatically copied to a custom folder I set up in common.props. This is nice.

Can I do the same with VST2 builds?
I see a <COPY_VST2>0</COPY_VST2> entry there and thought maybe setting it to '1' would do the trick, but it doesn't.
Old 01-06-2016, 10:48 AM   #573
bmelonhead
Human being with feelings
 
Join Date: Dec 2015
Posts: 18
Default IPlug and latency

I've read some past posts on latency and have some lingering questions. I'm trying to understand how the host program calls processDoubleReplacing. Say we have a sample rate of 44100 and a buffer (nFrames) of 1000. Then the host needs to call processDoubleReplacing at least 44.1 times per second...or every ~23ms, to have a continuous stream of audio. But it could call it faster. It could call it as fast as the cpu allows...which probably depends on track count, plugin count, etc. Based on the past posts I was reading, I got the impression that this is actually what occurs - the host calls process as fast as it can and then buffers the result before going to the audio output (?).

Let's say you have a plugin with 200ms of latency...i.e. it takes you 200ms to process 1 frame of audio (1000 samples to stick with the example above). Set 1 comes in first (s1_in) and processing begins. My understanding is that process must return more or less immediately so process must return 0 or passthrough for now. Meanwhile s1 is processing in another thread. Likewise for s2_in, etc. The idea is that when s1_in is finally processed and available (call this s1_out), then the next process call will input the next set at that time...which should be about s8_in*, but it will output s1_out. Next will be s9_in and s2_out, etc.

*Back to how the host calls process...if it calls it as fast as possible, then s1...sn happen very fast and all are silence/passthrough output. (Meanwhile there are n sets processing.) Ultimately, when s1_out is ready...you could be on s50_in. Now s50_in coming in and s1_out going out represent much more than 200ms of latency in the output audio stream.

I hope I described that clearly enough...this is hard to put into words. If process is called approximately every 23ms, then the latency will match the 200ms of processing time. If process is called "as fast as possible", then the latency could be quite a bit larger, which is not good for anyone. So, I'm hoping to hear that I was just confused by the older posts and that process is really called "as slow as the sample rate/buffer combo will allow" (in this case, every 23ms). Thanks in advance for any help.

Last edited by bmelonhead; 01-06-2016 at 10:54 AM.
Old 01-06-2016, 10:55 AM   #574
sstillwell
Human being with feelings
 
Join Date: Jul 2006
Location: Cowtown
Posts: 1,562
Default

Quote:
Originally Posted by bmelonhead
I've read some past posts on latency and have some lingering questions. I'm trying to understand how the host program calls processDoubleReplacing. Say we have a sample rate of 44100 and a buffer (nFrames) of 1000. Then the host needs to call processDoubleReplacing at least 44.1 times per second...or every ~23ms, to have a continuous stream of audio. But it could call it faster. It could call it as fast as the cpu allows...which probably depends on track count, plugin count, etc. Based on the past posts I was reading, I got the impression that this is actually what occurs - the host calls process as fast as it can and then buffers the result before going to the audio output (?).

Let's say you have a plugin with 200ms of latency...i.e. it takes you 200ms to process 1 frame of audio (1000 samples to stick with the example above). Set 1 comes in first (s1_in) and processing begins. My understanding is that process must return more or less immediately so process must return 0 or passthrough for now. Meanwhile s1 is processing in another thread. Likewise for s2_in, etc. The idea is that when s1_in is finally processed and available (call this s1_out), then the next process call will input the next set at that time...which should be about s8_in*, but it will output s1_proc. Next will be s9_in and s2_proc out, etc.

*Back to how the host calls process...if it calls it as fast as possible, then s1...sn happen very fast and all are silence/passthrough output. (Meanwhile there are n sets processing.) Ultimately, when s1_proc is ready...you could be on s50_in. Now s50_in coming in and s1_proc going out represent much more than 200ms of latency in the output audio stream.

I hope I described that clearly enough...this is hard to put into words. If process is called approximately every 23ms, then the latency will match the 200ms of processing time. If process is called "as fast as possible", then the latency could be quite a bit larger, which is not good for anyone. So, I'm hoping to hear that I was just confused by the older posts and that process is really called "as slow as the sample rate/buffer combo will allow" (in this case, every 23ms). Thanks in advance for any help.
You're mistaking latency for processing time, if I'm reading you correctly. Latency is often (almost always) used to "look at" the data ahead of processing. An example would be a look-ahead limiter that "sees" the audio some number of milliseconds ahead of where it's actually processing, then can gradually increase gain reduction ahead of a transient rather than clamping down drastically after the transient has already passed. If a plugin takes more than 1000 frames' worth of time to process 1000 frames, it's not going to work in real time, period. The host calculates the sum of reported latency in each stream and uses that to line everything back up in time before final rendering/output.
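To make the look-ahead idea concrete, here's a rough, untested sketch (the class, constants and smoothing factors are made up; a real limiter shapes its gain ramp much more carefully): the gain computer reads the incoming sample while the audio it applies to sits in a delay line, so reduction can ramp in before the peak reaches the output.
Code:
// Naive look-ahead sketch (illustrative only, not production DSP).
#include <algorithm>
#include <cmath>
#include <vector>

class NaiveLookAheadLimiter
{
public:
  NaiveLookAheadLimiter(int lookAheadSamples, double ceiling)
  : mBuf(lookAheadSamples, 0.0), mPos(0), mCeiling(ceiling), mGain(1.0) {}

  double ProcessSample(double in)
  {
    // 'in' will not reach the output for lookAheadSamples samples,
    // so the gain can start moving toward its target early.
    double peak = std::max(std::fabs(in), 1e-12);
    double target = std::min(1.0, mCeiling / peak);

    if (target < mGain) mGain += (target - mGain) * 0.2;   // fast attack
    else                mGain += (target - mGain) * 0.001; // slow release

    double out = mBuf[mPos] * mGain; // audio delayed by the look-ahead
    mBuf[mPos] = in;
    mPos = (mPos + 1) % (int) mBuf.size();
    return out;
  }

private:
  std::vector<double> mBuf; // circular delay line = the look-ahead window
  int mPos;
  double mCeiling, mGain;
};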

Scott
__________________
https://www.stillwellaudio.com/
Old 01-06-2016, 11:09 AM   #575
bozmillar
Human being with feelings
 
 
Join Date: Sep 2009
Posts: 623
Default

Quote:
Originally Posted by bmelonhead
Let's say you have a plugin with 200ms of latency...i.e. it takes you 200ms to process 1 frame of audio (1000 samples to stick with the example above). Set 1 comes in first (s1_in) and processing begins. My understanding is that process must return more or less immediately so process must return 0 or passthrough for now. Meanwhile s1 is processing in another thread. Likewise for s2_in, etc. The idea is that when s1_in is finally processed and available (call this s1_out), then the next process call will input the next set at that time...which should be about s8_in*, but it will output s1_out. Next will be s9_in and s2_out, etc.
A plugin that requires latency doesn't require it because it takes a long time to process the audio; it requires it because it generally needs a certain number of samples to be able to do its thing correctly.

If I'm understanding you correctly, s1 is not necessarily processing in another thread. It's not necessarily doing anything. If your plugin requires some amount of latency, all it has to do is operate on a different buffer that you store up from the incoming stream.

If you require 1000 samples of audio to do something and the input stream is only providing you with 10, your plugin should start storing the incoming stream in a new buffer so that it can build it up to 1000 samples. Once that buffer is big enough, you just start operating on it as if you were processing the normal incoming stream. The only difference is that you know exactly the size of your buffer. It increases the latency by the size of your new buffer, and you report that to your DAW.

Extra latency doesn't require extra threads, just extra memory.
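In IPlug terms the shape is roughly this (mono, untested, made-up member names; kWindow is whatever block your DSP needs, reported once as latency, e.g. with SetLatency(kWindow) in the constructor):
Code:
// Accumulate input into a kWindow-sample work buffer; when it fills,
// run the actual DSP on it. Meanwhile, play back the previously
// processed window. The first window out is zeros: that's the latency.
void MyPlug::ProcessDoubleReplacing(double** inputs, double** outputs, int nFrames)
{
  double* in  = inputs[0];
  double* out = outputs[0];

  for (int i = 0; i < nFrames; ++i)
  {
    out[i] = mOutBuf[mFill];  // previously processed window (zeros at first)
    mInBuf[mFill] = in[i];    // collect the incoming stream
    if (++mFill == kWindow)   // a full window is ready
    {
      ProcessWindow(mInBuf, mOutBuf, kWindow); // hypothetical: the real DSP
      mFill = 0;
    }
  }
}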
__________________
http://www.bozdigitallabs.com
Old 01-06-2016, 11:11 AM   #576
sstillwell
Human being with feelings
 
Join Date: Jul 2006
Location: Cowtown
Posts: 1,562
Default

Quote:
Originally Posted by bozmillar View Post
A plugin that requires latency doesn't require it because it takes a long time to process the audio; it requires it because it generally needs a certain number of samples to be able to do its thing correctly.

If I'm understanding you correctly, s1 is not necessarily processing in another thread. It's not necessarily doing anything. If your plugin requires some amount of latency, all it has to do is operate on a different buffer that you store up from the incoming stream.

If you require 1000 samples of audio to do something and the input stream is only providing you with 10, your plugin should start storing the incoming stream in a new buffer so that it can build it up to 1000 samples. Once that buffer is big enough, you just start operating on it as if you were processing the normal incoming stream. The only difference is that you know exactly the size of your buffer. It increases the latency by the size of your new buffer, and you report that to your DAW.

Extra latency doesn't require extra threads, just extra memory.
This. Better explanation than mine.

Scott
__________________
https://www.stillwellaudio.com/
Old 01-06-2016, 11:18 AM   #577
bmelonhead
Human being with feelings
 
Join Date: Dec 2015
Posts: 18
Default

Quote:
Originally Posted by bozmillar View Post
A plugin that requires latency doesn't require it because it takes a long time to process the audio; it requires it because it generally needs a certain number of samples to be able to do its thing correctly.

If I'm understanding you correctly, s1 is not necessarily processing in another thread. It's not necessarily doing anything. If your plugin requires some amount of latency, all it has to do is operate on a different buffer that you store up from the incoming stream.

If you require 1000 samples of audio to do something and the input stream is only providing you with 10, your plugin should start storing the incoming stream in a new buffer so that it can build it up to 1000 samples. Once that buffer is big enough, you just start operating on it as if you were processing the normal incoming stream. The only difference is that you know exactly the size of your buffer. It increases the latency by the size of your new buffer, and you report that to your DAW.

Extra latency doesn't require extra threads, just extra memory.
Uh oh...this is not sounding good. Yes, what I'm trying to do really does take up time...so my thread is busy for that time. I was expecting this to be common/normal for plugins that have latency. I didn't realize they just needed to look ahead.
Old 01-06-2016, 01:10 PM   #578
bmelonhead
Human being with feelings
 
Join Date: Dec 2015
Posts: 18
Default

Quote:
Originally Posted by bmelonhead View Post
Uh oh...this is not sounding good. Yes, what I'm trying to do really does take up time...so my thread is busy for that time. I was expecting this to be common/normal for plugins that have latency. I didn't realize they just needed to look ahead.
Looking at both sides of this problem...the audio host needs to buffer some output audio to ensure that the output stream is not interrupted by some minor CPU glitch. So, it needs to call process at full speed (or at least faster than the audio stream) for some amount of time.

However, the plugin knob controls are supposedly real time. So there must be limits on the buffering...otherwise the user's knob changes would not affect the audio they're hearing. I guess I need to experiment and profile this...probably on multiple hosts. Thanks for the replies.
Old 01-06-2016, 02:56 PM   #579
Xenakios
Human being with feelings
 
 
Join Date: Feb 2007
Location: Oulu, Finland
Posts: 8,062
Default

Quote:
Originally Posted by bmelonhead View Post
Looking at both sides of this problem...the audio host needs to buffer some output audio to ensure that the output stream is not interrupted by some minor CPU glitch. So, it needs to call process at full speed (or at least faster than the audio stream) for some amount of time.

However, the plugin knob controls are supposedly real time. So there must be limits on the buffering...otherwise the user's knob changes would not affect the audio they're hearing. I guess I need to experiment and profile this...probably on multiple hosts. Thanks for the replies.
Typically hosts don't do any extra buffering for time-critical signals in the path if possible. And indeed, if for example some plugin cocks up and spends too much time in the audio callback, a glitch will result. This is a risk the end user has to take if he wants the lowest possible latency for stuff like monitoring live audio. However, Reaper, for example, can actually do quite a lot of buffering for non-time-critical signals. (The so-called "anticipative rendering" that helps Reaper a lot with multithreading.)

I sense some sort of an XY problem here. You have decided you must do something (Y) to solve a problem and are seeking solutions/help for that. All the while, we know nothing about your actual problem (X). Can you explain in more detail what it is that you actually want to do?
__________________
I am no longer part of the REAPER community. Please don't contact me with any REAPER-related issues.
Old 01-06-2016, 04:53 PM   #580
bmelonhead
Human being with feelings
 
Join Date: Dec 2015
Posts: 18
Default

Quote:
Originally Posted by Xenakios View Post
Typically hosts don't do any extra buffering for time-critical signals in the path if possible. And indeed, if for example some plugin cocks up and spends too much time in the audio callback, a glitch will result. This is a risk the end user has to take if he wants the lowest possible latency for stuff like monitoring live audio. However, Reaper, for example, can actually do quite a lot of buffering for non-time-critical signals. (The so-called "anticipative rendering" that helps Reaper a lot with multithreading.)

I sense some sort of an XY problem here. You have decided you must do something (Y) to solve a problem and are seeking solutions/help for that. All the while, we know nothing about your actual problem (X). Can you explain in more detail what it is that you actually want to do?
I have a plugin where the processing for a chunk of audio data (nFrames) will take longer than (nFrames/sample_rate), so it cannot keep up with the incoming audio stream. But each chunk can be processed in parallel, so I'm putting that processing into other thread(s). I expected this to be very common...because even potentially simple processing runs the risk of being overrun at, say...nFrames = 128, sample_rate = 192000 (on an old CPU?).

The offloading to another thread will create latency. My point is that this latency will be minimized if the incoming audio is arriving at the real time audio stream rate (i.e. once every nFrames/sample_rate seconds). If it arrives "as fast as possible", then latency could be quite bad for the reasons in my original post.

However, it seems you are saying that "time critical" operations are in real time (much to my relief)...which to me means that data arrives (processDoubleReplacing is called) approximately every nFrames/sample_rate seconds. It makes sense...to get low latency for recording and a real-time feel for knob changes in the GUI.
Old 01-06-2016, 05:19 PM   #581
bozmillar
Human being with feelings
 
 
Join Date: Sep 2009
Posts: 623
Default

When you say time, are you referring to CPU cycles or actual time?
__________________
http://www.bozdigitallabs.com
Old 01-06-2016, 06:59 PM   #582
bmelonhead
Human being with feelings
 
Join Date: Dec 2015
Posts: 18
Default

Quote:
Originally Posted by bozmillar View Post
When you say time, are you referring to CPU cycles or actual time?
Well...both. I guess it is easiest to think of it as actual time. Think of the processing that I need to do as a multi-stage thing - a pipeline. First the audio data chunk (nFrames) must be processed with process1, then process2, then process3. The entire pipeline takes more than (nFrames/sample_rate) seconds. However, each stage can have audio being processed simultaneously. So, if each of the following lines represents about (nFrames/sample_rate) of time, here are the states of the pipeline:

p1(chunk1) p2() p3()
p1(chunk2) p2(chunk1) p3()
p1(chunk3) p2(chunk2) p3(chunk1)
p1(chunk4) p2(chunk3) p3(chunk2) ...chunk1 finally ready!

p1..p3 are in other threads. So, as chunk5 is arriving on the main plugin thread, chunk1 is ready and passed back to that thread.
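The hand-off I have in mind is roughly this (untested sketch; a single worker is shown running the stages in sequence, and the locking in the audio callback would need to become a lock-free FIFO in a real plugin):
Code:
// Audio thread enqueues the newest chunk and dequeues the oldest
// finished one; until the pipeline fills, it gets nothing back and
// outputs silence, which is exactly the latency to report.
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

typedef std::vector<double> Chunk;

class PipelineFeeder
{
public:
  PipelineFeeder() : mQuit(false), mWorker(&PipelineFeeder::Run, this) {}

  ~PipelineFeeder()
  {
    {
      std::lock_guard<std::mutex> l(mLock);
      mQuit = true;
    }
    mCv.notify_all();
    mWorker.join();
  }

  // Called from the audio thread once per ProcessDoubleReplacing.
  // Returns false while the pipeline is still priming.
  bool Exchange(const Chunk& in, Chunk& out)
  {
    std::lock_guard<std::mutex> l(mLock);
    mPending.push_back(in);
    mCv.notify_one();
    if (mDone.empty())
      return false;           // caller outputs silence this time
    out = mDone.front();
    mDone.pop_front();
    return true;
  }

private:
  void Run() // worker: run p1 -> p2 -> p3 on each chunk, in arrival order
  {
    for (;;)
    {
      Chunk c;
      {
        std::unique_lock<std::mutex> l(mLock);
        mCv.wait(l, [this] { return mQuit || !mPending.empty(); });
        if (mQuit) return;
        c = mPending.front();
        mPending.pop_front();
      }
      // p1(c); p2(c); p3(c); // hypothetical stages; could be separate threads
      std::lock_guard<std::mutex> l(mLock);
      mDone.push_back(c);
    }
  }

  std::mutex mLock;
  std::condition_variable mCv;
  std::deque<Chunk> mPending, mDone;
  bool mQuit;
  std::thread mWorker;
};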
Old 01-19-2016, 03:04 AM   #583
maajki
Human being with feelings
 
Join Date: Dec 2015
Posts: 6
Default

Code:
error: '/Users/maajki/Documents/Code/wdl-maa/IPlugExamples/maaSynth/build-mac/app/maaSynth.app/Contents/Resources/maaSynth.app/Contents/Resources/[maaSynth.app/Contents/Resources/ repeated ~30 times]/English.lproj/InfoPlist.strings' is longer than filepath buffer size (1025).
Xcode 7.2???

Last edited by Jeffos; 01-22-2016 at 10:16 AM. Reason: this message was causing problems with the formatting
Old 01-22-2016, 04:56 AM   #584
Crazy Eye Joe
Human being with feelings
 
Join Date: Jan 2016
Posts: 6
Default

EDIT: Whoops, what I said here was wrong. Removing it as it doesn't add anything to the discussion.

Last edited by Crazy Eye Joe; 01-22-2016 at 05:57 AM.
Old 01-22-2016, 05:08 AM   #585
olilarkin
Human being with feelings
 
Join Date: Apr 2009
Location: Berlin, Germany
Posts: 1,248
Default

Formatting is really messed up on this page.

RE IPlugSideChain: negative array indexes are definitely not the solution!
__________________
VirtualCZ | Endless Series | iPlug2 | Linkedin | Facebook
Old 01-22-2016, 06:07 AM   #586
Crazy Eye Joe
Human being with feelings
 
Join Date: Jan 2016
Posts: 6
Default

Okay, I've retested, and here's what I've found (testing only with VST3, on the VST3PluginTestHost, basing the plug-in on IPlugSideChain):

The following code:
Code:
void SideChain::ProcessDoubleReplacing(double** inputs, double** outputs, int nFrames)
{      
  for (int i = 0; i < nFrames; i++)
  {
    outputs[0][i] = inputs[0][i];
    outputs[1][i] = inputs[1][i];
  }
}
Results in the following behaviour: I get output if I receive something on the AUX channel, and I also get output if I receive something on the main channel. However, what I expect is that I only get output when receiving on the main channel.

The following:
Code:
void SideChain::ProcessDoubleReplacing(double** inputs, double** outputs, int nFrames)
{
  for (int i = 0; i < nFrames; i++)
  {
    outputs[0][i] = inputs[2][i];
    outputs[1][i] = inputs[3][i];
  }
}
Results in silence regardless of whether or not I'm receiving anything on the AUX channel. Here I expect output only if I receive on the AUX channel.

Am I missing something?

Last edited by Crazy Eye Joe; 01-22-2016 at 06:09 AM. Reason: Clarification
Old 01-22-2016, 06:12 AM   #587
olilarkin
Human being with feelings
 
Join Date: Apr 2009
Location: Berlin, Germany
Posts: 1,248
Default

IIRC VST3PluginTestHost is buggy re: sidechain... I need to test it.
__________________
VirtualCZ | Endless Series | iPlug2 | Linkedin | Facebook
Old 01-22-2016, 06:39 AM   #588
Crazy Eye Joe
Human being with feelings
 
Join Date: Jan 2016
Posts: 6
Default

I see; in that case I will focus on the VST2 implementation, which is the one I care about at the moment anyway. I just tested my code in Ableton Live, using VST2, and it worked as intended.

However, in that case I have a different problem. My plug-in requires an FFT. For simplicity, I decided to start out with WDL_fft(), since it's provided.
This compiles just fine as VST3, but when I try to compile the VST2 version I get the following output from Visual Studio:
Code:
3>LiveConv.obj : error LNK2019: unresolved external symbol _WDL_fft_complexmul referenced in function "private: void __thiscall LiveConv::FreqMult(double * *,double * *,int)" (?FreqMult@LiveConv@@AAEXPAPAN0H@Z)
3>LiveConv.obj : error LNK2019: unresolved external symbol _WDL_fft referenced in function "private: void __thiscall LiveConv::FreqMult(double * *,double * *,int)" (?FreqMult@LiveConv@@AAEXPAPAN0H@Z)
What's up with that? The way I added the FFT was to add fft.h and fft.c to the "base" solution, and then I put #include "fft.h" in the main .cpp file. Is that the wrong way to go about it?
Old 02-02-2016, 07:26 AM   #589
sstillwell
Human being with feelings
 
Join Date: Jul 2006
Location: Cowtown
Posts: 1,562
Default

Quote:
Originally Posted by Crazy Eye Joe View Post
I see; in that case I will focus on the VST2 implementation, which is the one I care about at the moment anyway. I just tested my code in Ableton Live, using VST2, and it worked as intended.

However, in that case I have a different problem. My plug-in requires an FFT. For simplicity, I decided to start out with WDL_fft(), since it's provided.
This compiles just fine as VST3, but when I try to compile the VST2 version I get the following output from Visual Studio:
Code:
3>LiveConv.obj : error LNK2019: unresolved external symbol _WDL_fft_complexmul referenced in function "private: void __thiscall LiveConv::FreqMult(double * *,double * *,int)" (?FreqMult@LiveConv@@AAEXPAPAN0H@Z)
3>LiveConv.obj : error LNK2019: unresolved external symbol _WDL_fft referenced in function "private: void __thiscall LiveConv::FreqMult(double * *,double * *,int)" (?FreqMult@LiveConv@@AAEXPAPAN0H@Z)
What's up with that? The way I added the FFT was to add fft.h and fft.c to the "base" solution, and then I put #include "fft.h" in the main .cpp file. Is that the wrong way to go about it?
Yes, that's the wrong way...include fft.h in your source as you normally would; if your include paths are typical for an IPlug project it will get found as normal. You can add it to the projects for self-documentation purposes, but it's not required as long as it's in the include search path. You must add the fft.c file to each of the subprojects, however. Adding it to VST3 will not make it link for VST2, etc. On Mac it's a little less confusing, since you just add the code once and say which targets it should compile/link for.

That's how it works for me, at least...
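For what it's worth, once it links, usage is roughly like this (from memory of WDL/fft.h, so check the header for the exact signatures; note that WDL_fft works in-place, leaves the bins in a permuted order, and is unnormalized):
Code:
// Rough sketch of frequency-domain multiplication with WDL's fft.
#include "fft.h"

void FreqMultiply(WDL_FFT_COMPLEX* a, WDL_FFT_COMPLEX* b, int len)
{
  WDL_fft_init();                // build internal tables; cheap if already done
  WDL_fft(a, len, 0);            // forward FFT, in-place, permuted output order
  WDL_fft(b, len, 0);
  WDL_fft_complexmul(a, b, len); // a[i] *= b[i] (both still permuted, so this is fine)
  WDL_fft(a, len, 1);            // inverse FFT
  for (int i = 0; i < len; ++i)  // unnormalized transform: scale by 1/len
  {
    a[i].re /= len;
    a[i].im /= len;
  }
}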
__________________
https://www.stillwellaudio.com/
Old 02-26-2016, 08:17 AM   #590
nofish
Human being with feelings
 
 
Join Date: Oct 2007
Location: home is where the heart is
Posts: 12,096
Default

Quote:
Originally Posted by nofish View Post
VST3 builds are automatically copied to a custom folder I set up in common.props. This is nice.

Can I do the same with VST2 builds ?
I see a <COPY_VST2>0</COPY_VST2> entry there and thought maybe setting this to '1' instead would do the trick, but doesn't.
Anyone, please?
How do I automatically copy the VST2 build to a custom folder via a post-build script?

I'm probably missing something simple but don't know where to look...
Old 02-26-2016, 08:37 AM   #591
Tale
Human being with feelings
 
 
Join Date: Jul 2008
Location: The Netherlands
Posts: 3,646
Default

Quote:
Originally Posted by nofish
Anyone, please?
How do I automatically copy the VST2 build to a custom folder via a post-build script?

I'm probably missing something simple but don't know where to look...
Open the solution, right-click on <InsertNameOfPlugHere>-vst2, click on Properties, then navigate to Configuration Properties > Build Events > Post-Build Event.
Old 02-26-2016, 01:08 PM   #592
nofish
Human being with feelings
 
 
Join Date: Oct 2007
Location: home is where the heart is
Posts: 12,096
Default

Quote:
Originally Posted by Tale View Post
Open the solution, right-click on <InsertNameOfPlugHere>-vst2, click on Properties, then navigate to Configuration Properties > Build Events > Post-Build Event.
Thanks Tale, that helped.
I've now copied the entry from the VST3 project's Post-Build event into the VST2 project's (modified for VST2), and it works.

Code:
echo Post-Build: copy 32bit binary to 32bit VST2 Plugins folder ... ...
copy /y "$(TargetPath)" "$(VST2_32_PATH)\myVST.dll"
Additional question: with VST3 plugins this works by default for all projects, without having to set it up explicitly as above. Any way to also do that for VST2?

Last edited by nofish; 02-26-2016 at 01:37 PM.
Old 02-27-2016, 02:07 AM   #593
Tale
Human being with feelings
 
 
Join Date: Jul 2008
Location: The Netherlands
Posts: 3,646
Default

Quote:
Originally Posted by nofish View Post
Additional question: with VST3 plugins this works by default for all projects, without having to set it up explicitly as above. Any way to also do that for VST2?
Well, if you would duplicate this specific project, then the duplicated project would also have VST2 set up, just like VST3...
Old 02-27-2016, 07:21 AM   #594
nofish
Human being with feelings
 
 
Join Date: Oct 2007
Location: home is where the heart is
Posts: 12,096
Default

Quote:
Originally Posted by Tale View Post
Well, if you would duplicate this specific project, then the duplicated project would also have VST2 set up, just like VST3...
Doh, you're right.
Thanks.
Old 04-17-2016, 09:20 PM   #595
bmelonhead
Human being with feelings
 
Join Date: Dec 2015
Posts: 18
Default IPlug and latency

I asked about latency a few months back but never got a full understanding...then I gave up and worked on other things. Now I'm revisiting. With the latency hooks in iPlug, I want to accomplish the following things:

(1) have alignment with other tracks...if the audio in my plugin track were exactly copied to another track without my plugin, and both were then played simultaneously, I want them to be in phase/sync.
(2) for the track that has my plugin, I want it to play seamlessly without jumping or popping when the plugin is enabled/bypassed repeatedly
(3) (less important) if I get two input streams from a stereo source file, but my plugin is purely mono, I would like to apply my effect to just, say, the left channel and have the right channel pass through. In this case, I want both channels to be in phase/sync at the output of the plugin

My plugin has a "pipeline" where samples are processed in stages. Each stage can work in parallel on successive blocks of samples. An example three stage pipeline would look like
empty -> empty -> empty
ProcessDoubleReplacing(): block1 -> empty -> empty
ProcessDoubleReplacing(): block2 -> block1 -> empty
ProcessDoubleReplacing(): block3 -> block2 -> block1
ProcessDoubleReplacing(): block4 -> block3 -> block2 (block 1 ready for output)

It appears that if I set PLUGIN_LATENCY to some number of samples, then at the start of playback (or any time Reset() is called) the host (my only experience is with Reaper so far) will call ProcessDoubleReplacing as many times as necessary to supply PLUGIN_LATENCY samples, and it does this at full speed - meaning without the normal sample-rate pacing. Since ProcessDoubleReplacing only supplies BlockSize samples per call...it may overshoot PLUGIN_LATENCY.

Example
PLUGIN_LATENCY = 2049
BlockSize = 1024
then ProcessDoubleReplacing will be called three times at full speed to supply 3072 samples. After that, ProcessDoubleReplacing continues as normal, supplying 1024 samples per call at the real-time rate.

So, clearly the host is trying to help my plugin deal with latency by giving me a big chunk of samples up front. I'm just not sure how to make use of it. The fundamental thing that keeps nagging at me is that every call to ProcessDoubleReplacing must output something, and presumably that something is played back by the host and so I'm not gaining anything by getting a rush of ProcessDoubleReplacing calls at the start of playback. Unless maybe those first calls (first three calls in the example above) are not actually played back and sent to the speakers. I'd appreciate any explanation about how this works and how it accomplishes (1) and (2) above...even if you ignore the pipeline thing and just give an example with a simple low pass filter that uses a window of samples. <edit> If it matters...all of the above is Mac/OS X/AU plugin.
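For concreteness, here's the shape of what I have so far (simplified; the member names and constants are made up, and I haven't verified this is the right pattern):
Code:
// Report the pipeline depth once, then keep an output FIFO pre-filled
// with a latency's worth of zeros. The host is expected to compensate
// using the reported figure, which should cover goals (1) and (2).
MyPlug::MyPlug(IPlugInstanceInfo instanceInfo)
  : IPLUG_CTOR(kNumParams, kNumPrograms, instanceInfo)
{
  SetLatency(kStages * kChunkSize); // hypothetical constants
}

void MyPlug::Reset()
{
  TRACE; IMutexLock lock(this);
  mOutFifo.assign(kStages * kChunkSize, 0.0); // std::deque<double> member
}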

Last edited by bmelonhead; 04-17-2016 at 09:32 PM.
Old 05-10-2016, 06:13 PM   #596
Tronic
Human being with feelings
 
 
Join Date: Jan 2012
Posts: 104
Default Draw mouse coordinates when IGraphics::mShowControlBounds is true

I had some problems when IGraphics::mShowControlBounds is true: the redraw is locked after a PromptUserInput().
This proposed change seems to work well.
Code is in the comment.
https://github.com/olilarkin/wdl-ol/...a397f23008b42b

Last edited by Tronic; 05-12-2016 at 10:58 AM.
Old 06-06-2016, 07:38 AM   #597
MLVST
Human being with feelings
 
Join Date: Mar 2014
Posts: 37
Default flush FX or synthesizer in Reaper

Hi,

Over at KVR I am having a discussion about the PG-8X not responding correctly when the transport stops, and in particular about the flush FX option.

I don't think I understand what I, as a VSTi developer, should do to handle 'flush FX' correctly.
Which function does REAPER actually call when 'flushing' an effect or synth? Is it the dispatcher opcode effStopProcess? That does not seem to be implemented in WDL-OL (at least in my older copy of it).
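My current guess is that it maps to the suspend/resume path, i.e. that time-based state should be cleared in IPlug's virtual Reset(). Something like this (sketch, hypothetical members, unverified):
Code:
// Guess: clear time-based state in Reset(), which WDL-OL calls around
// suspend/resume and sample-rate changes. Whether REAPER's "flush FX"
// reaches the plugin via effStopProcess or effMainsChanged (suspend)
// is exactly what I'd like to confirm.
void PG8X::Reset()
{
  TRACE; IMutexLock lock(this);
  mChorus.Flush();  // hypothetical: clear chorus/delay memory
  KillAllVoices();  // hypothetical: release hanging voices
}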

Thanks,
Martin
Old 08-07-2016, 06:49 AM   #598
MLVST
Human being with feelings
 
Join Date: Mar 2014
Posts: 37
Default

OK. Looks like I figured out the transport issue.

But now I've found a different problem: using "Set Point Value" in REAPER does not work. If I specify a value, it always reverts to the value of the next point.

Could anybody shed some light on how that process is supposed to work from a plugin's point of view? Which function should be called? I was assuming it's just the VSTSetParameter callback.

Any ideas what could possibly be going wrong?

Thanks,
Martin
Old 11-26-2016, 11:36 AM   #599
MLVST
Human being with feelings
 
Join Date: Mar 2014
Posts: 37
Default

Quote:
Originally Posted by MLVST View Post
OK. Looks like I figured out the transport issue.

But now I've found a different problem: using "Set Point Value" in REAPER does not work. If I specify a value, it always reverts to the value of the next point.

Could anybody shed some light on how that process is supposed to work from a plugin's point of view? Which function should be called? I was assuming it's just the VSTSetParameter callback.

Any ideas what could possibly be going wrong?

Thanks,
Martin
Hi! I am just wondering whether this forum is still alive? Unfortunately, I still haven't managed to figure out the reason for the bug in my plugin, and any help would be highly appreciated.

Thanks,
Martin
Old 11-27-2016, 03:35 AM   #600
Tale
Human being with feelings
 
 
Join Date: Jul 2008
Location: The Netherlands
Posts: 3,646
Default

Quote:
Originally Posted by MLVST View Post
Hi! I am just wondering whether this forum is still alive?
It sure is!

Quote:
Originally Posted by MLVST View Post
Unfortunately, I still haven't managed to figure out the reason for the bug in my plugin, and any help would be highly appreciated.
I can't really help you there, because I have no idea what exactly "Set Point Value" is...