Cockos Incorporated Forums > REAPER Forums > REAPER General Discussion Forum
Old 11-06-2015, 09:16 AM   #1
sostenuto
Human being with feelings
Join Date: Apr 2011
Location: St George, UT _ USA
Posts: 2,881

Mid-Side _ Sorta Urgent _ Blk Fri Coming !!

Old mono/stereo schooler here with some understanding of Mid-Side recording, but no real clue how things are recorded/mic'd today. Ready to jump on some current deals and Black Friday stuff.

How important is it to now pick up EQs, comps, saturation, xxx, that have M/S capability? How does something like Voxengo Mid-Side Decoder fit in this scenario? With mostly VSTi audio tracks, I only think in terms of mono/stereo.

(EDIT) Main concern here is an immediate plan to purchase Soundtoys 5, which lists no 'specified' Mid-Side capability at all. Just wondering how concerned I should be when adding key functionality meant to last some time ahead ....

Last edited by sostenuto; 11-06-2015 at 10:11 AM.
Old 11-06-2015, 10:22 AM   #2
sostenuto

Found this article this morning and am now reading through it.

http://www.about-audio-mastering-sof...mastering.html

Already starting to puzzle even more about Mid-Side processing/tweaking given such heavy use of VSTi audio sources as opposed to live instrumental tracking. Some live vocals are possible, though.

Original Thread Title should have been "When to Insist On Mid-Side Effects Plugins?"

Last edited by sostenuto; 11-06-2015 at 11:07 AM.
Old 11-06-2015, 11:16 AM   #3
Joaquins Void
Join Date: Dec 2014
Location: Stockholm
Posts: 206

You can use an M/S encoder/decoder to run any plugin in mid/side mode. It's a bit of a hassle, but it works, and once set up you can always save a track template.
That said, it can be kind of neat to have it built in. EQ would be the most important one, I'd say. Can't say I have ever used mid/side for anything else really, random tinkering aside.

Last edited by Joaquins Void; 11-06-2015 at 11:47 AM.
Old 11-06-2015, 11:34 AM   #4
sostenuto

Thank you! Still reading Karl Machat's article and have a much better understanding of my 'lack of real need'.

Also pleased to know how the Voxengo Mid-Side Decoder and Brainworx Solo _FREE_ plugins can be used to monitor stereo tracks and learn more.

Will think on the EQ possibilities for now and pass on the others. Must admit, I'm still nagged by Brainworx bx_Saturator V2 at such a very low cost.

Regards,
Old 11-06-2015, 12:01 PM   #5
DVDdoug
Join Date: Jul 2010
Location: Silicon Valley, CA
Posts: 2,779

I'd guess that less than 1% of commercial recordings use M/S recording or processing...

It takes a special mic setup to record M/S, but other than something to convert between mid-side and left-right, you don't need any special plug-ins if you want to convert to M/S and apply effects.

I'm sure you can find special plug-ins, but mid-side is two channels and left-right is two channels. So you can add reverb or EQ to the mid channel only, just like you can add reverb or EQ to the left channel only.
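That point can be sketched in a few lines — a NumPy illustration of the math, not any particular plugin, and the 0.7 gain is just an arbitrary stand-in for "processing the mid only":

```python
import numpy as np

def lr_to_ms(left, right):
    # Encode: mid is the average of the channels, side is half their difference.
    return (left + right) * 0.5, (left - right) * 0.5

def ms_to_lr(mid, side):
    # Decode: the exact inverse, so encode -> decode is a null operation.
    return mid + side, mid - side

# Toy stereo buffer with different material in each channel.
left = np.array([1.0, 0.5, -0.25, 0.0])
right = np.array([0.2, -0.5, 0.75, 0.4])

mid, side = lr_to_ms(left, right)
mid = mid * 0.7                      # any ordinary effect, applied to the mid only
out_left, out_right = ms_to_lr(mid, side)
```

Any two-channel effect can sit between the encode and decode steps; that is all a plugin's built-in "M/S mode" does internally.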
Old 11-06-2015, 12:01 PM   #6
standay
Join Date: Jun 2010
Posts: 46

Quote:
Originally Posted by sostenuto View Post
Found this article this morning and now reading thru it.

http://www.about-audio-mastering-sof...mastering.html

Already starting to puzzle even more about Mid-Side processing/tweaking given such heavy use of VSTi audio sources as opposed to live instrumental tracking. Some live vocals are possible, though.

Original Thread Title should have been "When to Insist On Mid-Side Effects Plugins?"
That's a great article. I always wondered how they cut vinyl for mono and stereo and the article has a good explanation. I will explore mid side a lot more once I've read through things a few times. Thanks for the link!
Old 11-06-2015, 12:17 PM   #7
serr
Join Date: Sep 2010
Posts: 12,562

Quote:
Originally Posted by sostenuto View Post
Old mono/stereo schooler here with some understanding of Mid-Side recording, but no real clue how things are recorded/mic'd today. Ready to jump on some current deals and Black Friday stuff.

How important is it to now pick up EQs, comps, saturation, xxx, that have M/S capability? How does something like Voxengo Mid-Side Decoder fit in this scenario? With mostly VSTi audio tracks, I only think in terms of mono/stereo.

(EDIT) Main concern here is an immediate plan to purchase Soundtoys 5, which lists no 'specified' Mid-Side capability at all. Just wondering how concerned I should be when adding key functionality meant to last some time ahead ....
The Voxengo Mid-Side Decoder is free. It works. Passes an encode/decode null test and all that. Put your wallet back away.

Most things are still mic'd in mono and stereo and often close mic'd.
Old 11-06-2015, 12:19 PM   #8
sostenuto

Quote:
Originally Posted by DVDdoug View Post
I'd guess that less than 1% of commercial recordings use M/S recording or processing...

It takes a special mic setup to record M/S, but other than something to convert between mid-side and left-right, you don't need any special plug-ins if you want to convert to M/S and apply effects.

I'm sure you can find special plug-ins, but mid-side is two channels and left-right is two channels. So you can add reverb or EQ to the mid channel only, just like you can add reverb or EQ to the left channel only.
Thanks! .... I assumed this was 'routine' stuff for anyone recording/mixing here, but it's 'chapter one, page one' for me until now. Plan to do some stereo monitoring, per the article, just to learn more.

Regards,

Last edited by sostenuto; 11-06-2015 at 12:33 PM.
Old 11-06-2015, 12:34 PM   #9
cyrano
Join Date: Jun 2011
Location: Belgium
Posts: 5,246

Quote:
Originally Posted by DVDdoug View Post
I'd guess that less than 1% of commercial recording use M/S recording or processing...
True if you're talking about music. If you include film, you might end up with a completely different number. In radio broadcast, M/S is used a lot because of its perfect and easy mono capability. You see a lot of setups using Sennheiser MKH mics, like this one:

[image not preserved]

For music studio recordings, there is no advantage. The figure-of-eight mics are usually expensive and not so extremely versatile.
Old 11-06-2015, 12:48 PM   #10
karbomusic
Join Date: May 2009
Posts: 29,260

Quote:
Originally Posted by cyrano View Post
For music studio recordings, there is no advantage. The figure-of-eight mics usually are expensive and not so extremely versatile.
Huh? Fig-8 isn't a luxury in recording studios; it's often used to capture ensembles and other acoustic situations where you can have some distance between mic and source. I use them all the time. In the pic below it's being used as a room mic for a drum recording. Not to mention a fig-8 is nice to have in the collection for various reasons, such as using its null points for separation when recording a guitarist/vocalist. And don't forget most ribbon mics are fig-8.

[image not preserved]
__________________
Music is what feelings sound like.
Old 11-06-2015, 01:07 PM   #11
Magicbuss
Join Date: Jul 2007
Posts: 1,957

Quote:
Originally Posted by serr View Post
The Voxengo Mid-Side Decoder is free. It works. Passes a encode-decode null test and all that. Put your wallet back away.

Most things are still mic'd in mono and stereo and often close mic'd.
+1, although all one would need to add mid-side micing to your repertoire is the CAD M179. It's cheap, multipattern, and sounds good on a lot of sources, particularly drums and room mics.

ITB, there are a lot of mid-side capable plugins, including freebies (Baxter EQ, Meq, Slick EQ). And as already pointed out, you can use Voxengo to make any plugin M/S.

The big usage for me is M/S EQ. It's great on stereo sources or full mixes where you want to EQ the mid and sides differently. A typical use would be to cut the lows and boost the highs on the sides to create clarity and width without reducing the weight of the center.
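That move can be roughed out in NumPy — the gains are hypothetical and the one-pole band split is deliberately crude, just to show the signal flow a real M/S EQ would implement with proper filters:

```python
import numpy as np

def one_pole_lowpass(x, a=0.3):
    # Crude smoother used only to split the side into low and high bands.
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc += a * (v - acc)
        y[i] = acc
    return y

def tilt_sides(left, right, low_gain=0.5, high_gain=1.5):
    mid = (left + right) * 0.5
    side = (left - right) * 0.5
    side_low = one_pole_lowpass(side)
    side_high = side - side_low
    side = low_gain * side_low + high_gain * side_high  # cut side lows, boost side highs
    return mid + side, mid - side                       # decode; the mid is untouched
```

With unity gains this is an exact pass-through, and whatever the gains, the mid (the "weight of the center") is never altered.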
Old 11-06-2015, 01:55 PM   #12
sostenuto

Quote:
Originally Posted by Magicbuss View Post
+1, although all one would need to add mid-side micing to your repertoire is the CAD M179. It's cheap, multipattern, and sounds good on a lot of sources, particularly drums and room mics.

ITB, there are a lot of mid-side capable plugins, including freebies (Baxter EQ, Meq, Slick EQ). And as already pointed out, you can use Voxengo to make any plugin M/S.

The big usage for me is M/S EQ. It's great on stereo sources or full mixes where you want to EQ the mid and sides differently. A typical use would be to cut the lows and boost the highs on the sides to create clarity and width without reducing the weight of the center.
Do Brainworx bx_digital V2 or Maag Audio EQ4 fit nicely with the above EQs in terms of M/S?
Last day at the lower cost ($99) for me, but I can't evaluate that quickly. If they're not way better than Baxter/Slick, then there's no point. Can't sort the Air Band hype beyond the obvious points. bx_digital V2 is hyped strongly re: M/S features.

Last edited by sostenuto; 11-06-2015 at 02:05 PM.
Old 11-06-2015, 11:03 PM   #13
Otto Tune
Join Date: Dec 2009
Location: Colorado
Posts: 138

If you're looking for just M/S processing (as in, you didn't record it like Karbo's pic is showing, but rather in L/R), don't forget about the JS:Kanaka encoder/decoder included in REAPER. And as DVDdoug says, you don't actually even need a plugin to sum the L/R or subtract the difference. You can route it that way with nothing else at all, with absolutely any FX you want in between. But JS:Kanaka is pretty handy.
Old 11-07-2015, 01:37 AM   #14
pipelineaudio
Mortal
Join Date: Jan 2006
Location: Wickenburg, Arizona
Posts: 14,047

What's wrong with the JS one? That's what I use for decoding acoustic guitars in M/S.
Old 11-07-2015, 09:59 AM   #15
sostenuto

@Otto Tune __ @pipelineaudio ..... thanks guys, I thought the thread died !!

Big WHEWWWWW this morning as I passed on the two M/S EQs (promotion price).
I have the relevant Voxengo and Brainworx products and was ignorant of the JS encoder/decoder. Will go there now and apply what's been learned here and from the audio mastering article.

Thanks much!
Old 11-07-2015, 02:39 PM   #16
clepsydrae
Join Date: Nov 2011
Posts: 3,409

Yeah, the JS works just great. M/S encoding/decoding is very simple, no need for a VST:
L=spl0;
R=spl1;
spl0=(L+R)*0.5; // mid
spl1=(L-R)*0.5; // side
// ...and to decode back to stereo:
tmp=spl0;
spl0 = tmp + spl1; // left = mid + side
spl1 = tmp - spl1; // right = mid - side
I love M/S recording for recording single sources with a stereo feel. E.g. if I'm recording a voice, I want the voice on-axis on the right mic. If I want a little stereo image to it, I don't want to set up an XY or something with off-axis coloration, so M/S fits the bill.

If you are EQ'ing the mid or side channels, make sure you use a linear-phase EQ! No one ever mentions this. :-) Anything that alters the phase of a signal (like EQ or band-limited compression) is going to have potentially bad effects on the stereo image when the signals are decoded back to stereo.

My main use of M/S processing is to pull the bass end into the center to prevent speaker-to-speaker phase issues. I think it's great for that. Sometimes I'll bump the highs in the sides a touch.

Agreed with others: M/S features in a plugin are handy, but not necessary, as you can almost always achieve the same thing with routing in Reaper.
Old 11-07-2015, 02:55 PM   #17
innuendo
Join Date: Nov 2013
Location: Jerusalem, Israel
Posts: 659

I use M/S a lot, mainly for monitoring. It's useful to know what the mid and the side sound like. Sometimes I use it for EQ, mainly to correct stuff recorded in stereo. For that, I generally go with a linear-phase EQ, because applying separate minimum-phase EQ to M/S tends to decorrelate the mid relative to the side, which affects the sound in an unpleasant way.

The only use of M/S EQ on the master for me is to apply a gentle HPF to the sides. That comes in handy for classical and jazz, where the bass is not supposed to be centered. This should always be done with a linear-phase EQ for the same reason as above, and especially because an HPF in a minimum-phase EQ introduces major phase distortion in the low end. To minimize pre-ringing, which also becomes a major problem when a linear-phase EQ is combined with an HPF, I configure it with as long an impulse response as possible.
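As a NumPy-only sketch of that idea: a symmetric windowed-sinc FIR is exactly linear-phase, and here it is applied to the side channel only. The cutoff, sample rate, and tap count are arbitrary illustration values, not a recommendation:

```python
import numpy as np

def linear_phase_highpass(x, cutoff_hz, sr_hz, ntaps=201):
    # Windowed-sinc lowpass prototype; symmetric taps give exactly linear phase.
    n = np.arange(ntaps) - (ntaps - 1) / 2
    fc = cutoff_hz / sr_hz
    lp = np.sinc(2 * fc * n) * np.hamming(ntaps)
    lp /= lp.sum()                 # unity gain at DC
    hp = -lp
    hp[(ntaps - 1) // 2] += 1.0    # delta minus lowpass = highpass, zero gain at DC
    return np.convolve(x, hp, mode="same")

def hpf_sides(left, right, cutoff_hz=120.0, sr_hz=44100.0):
    # Gentle HPF on the side channel only; the mid passes through untouched.
    mid = (left + right) * 0.5
    side = linear_phase_highpass((left - right) * 0.5, cutoff_hz, sr_hz)
    return mid + side, mid - side
```

A larger `ntaps` means a longer impulse response and a sharper filter, which is the impulse-length trade-off mentioned above.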

Last edited by innuendo; 11-07-2015 at 03:27 PM.
Old 11-07-2015, 03:23 PM   #18
innuendo

Quote:
Originally Posted by clepsydrae View Post
I love M/S recording for recording single sources with a stereo feel. E.g. if I'm recording a voice, I want the voice on-axis on the right mic. If I want a little stereo image to it, I don't want to set up an XY or something with off-axis coloration, so M/S fits the bill.
Could you please elaborate on off-axis coloration? This is clearly an important subject which I don't know how to tackle.

A few things I read about it:
- If "coloration" means phase distortion, then directional mics necessarily introduce it due to the nature of the mechanism that enhances directivity (does this affect figure-8 mics?)
- If "coloration" means frequency response distortion then large diaphragms color the sound more than small diaphragms, attenuating highs coming from the sides.
- Some people swear that in fact this frequency response distortion is desirable because it provides focus to what's in front of the mic while pushing into the background what's off-axis.
Old 11-07-2015, 04:09 PM   #19
clepsydrae

Quote:
Originally Posted by innuendo View Post
- If "coloration" means phase distortion, then directional mics necessarily introduce it due to the nature of the mechanism that enhances directivity
Do they? I know they use phasing (via a "delay path") to implement the directionality, but does it actually distort the phase of the recorded signal? That'd be the first I've heard of that.

Quote:
- Some people swear that in fact this frequency response distortion is desirable because it provides focus to what's in front of the mic while pushing into the background what's off-axis.
To me, an identical polar pattern across frequencies sounds more manageable and useful, but I lack the experience to stake a claim. :-)

In my post I was just referring to frequency response off-axis:

If I have a nice LDC that I like on a voice, I'd rather use it in conjunction with a figure-8 and record the voice mid-side with the mid LDC mic pointing right at the voice. The side channel gives me some dimensionality in the mixing, if desired, but the voice is recorded on-axis; meaning it will have the expected frequency response that I like from that LDC.

The alternative (stereo) option would be to mic the voice using a more traditional approach like XY or whatever, and all of those methods involve a pair of mics that aren't right in front of the voice. We can use a pair of LDC's that are identical to the LDC in the m/s example above, but neither is pointing right at the voice, so the voice is coming off-axis to both of them, and the frequency response will be a little different (since the mics' polar patterns will not be the same across all frequencies).

Another advantage of M/S is that you can close-mic the vocals (or whatever single source) as long as the singer doesn't angle left/right at all. I haven't tried close-micing with an XY pair, though; might be worth the experiment, but I'd be skeptical. :-)

(Another advantage is that you have more flexibility in mic choice: you only need one for the mid, so you can use whatever you like, instead of worrying about matched pairs or whatever.)

The downside for me is that m/s recordings don't have as wide a spread as other techniques. If I want an expansive stereo image when recording an ensemble, I'll use other techniques.

Of course when using m/s you must balance the level between the mid and side appropriately. Too much side gives you a wide feel but excessive anti-phase between the speakers. Too little and you collapse towards mono.
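That mid/side balance is just a gain on the side before decoding. A minimal "width" control, as a sketch (the width values below are arbitrary):

```python
import numpy as np

def set_width(left, right, width):
    # width = 0 collapses to mono, 1 leaves the image untouched, > 1 exaggerates
    # the sides (and with them the anti-phase content between the speakers).
    mid = (left + right) * 0.5
    side = (left - right) * 0.5 * width
    return mid + side, mid - side
```

Too much width pushes more energy into the anti-phase side; zero width collapses toward mono, exactly as described above.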
Old 11-07-2015, 05:55 PM   #20
cyrano

Quote:
Originally Posted by karbomusic View Post
Huh? Fig-8 isn't a luxury in recording studios and is often used to capture ensembles and or other acoustic situations where you are able to have some distance between mic/source. I use them all the time. In the pic below it's being used as a room mic for a drum recording. Not to mention Fig-8 is nice to have in the collection for various reasons such as using their null points when recording a guitarist/vocalist for separation. And also don't forget most ribbon mics are Fig-8.

Huh? It wasn't me who said that M/S was less than one percent of music recordings. I don't know how many studios use M/S, but I do know that in broadcast it's used a lot...
Old 11-07-2015, 06:24 PM   #21
sostenuto

Quote:
Originally Posted by clepsydrae View Post
Yeah, the JS works just great. M/S encoding/decoding is very simple, no need for a VST:
L=spl0;
R=spl1;
spl0=(L+R)*0.5; // mid
spl1=(L-R)*0.5; // side
// ...and to decode back to stereo:
tmp=spl0;
spl0 = tmp + spl1; // left = mid + side
spl1 = tmp - spl1; // right = mid - side
I love M/S recording for recording single sources with a stereo feel. E.g. if I'm recording a voice, I want the voice on-axis on the right mic. If I want a little stereo image to it, I don't want to set up an XY or something with off-axis coloration, so M/S fits the bill.

If you are EQ'ing the mid or side channels, make sure you use a linear-phase EQ! No one ever mentions this. :-) Anything that alters the phase of a signal (like EQ or band-limited compression) is going to have potentially bad effects on the stereo image when the signals are decoded back to stereo.

My main use of M/S processing is to pull the bass end into the center to prevent speaker-to-speaker phase issues. I think it's great for that. Sometimes I'll bump the highs in the sides a touch.

Agreed with others: M/S features in a plugin are handy, but not necessary, as you can almost always achieve the same thing with routing in Reaper.
Thank you for this. Also pleased to see a very strong recommendation of ReaFir to accomplish this. I was aware of very strict phase concerns with M/S and will use it accordingly.

Regards,
Old 11-07-2015, 07:34 PM   #22
plush2
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,110

Nearly all the music mastering engineers I know use mid/side quite a bit, and it is a feature often built into mastering consoles. The thing that bugs me is how it is sometimes spoken of as a mysterious and powerful effect; by that logic the L/R pan control is a mysterious and powerful effect. It is a logical and non-destructive way to deconstruct a stereo sound field, every bit as natural as left and right. The easiest way to conceptualize it: hit the mono button on the master channel; that is the mid, and the side is everything that you just turned off (or rather cancelled out by summing to mono).

I guess what I'm saying is it's great to see you being liberated by a knowledge of how mid/side works rather than just buying an effect that has it already implemented. Kudos for pushing through on this.
Old 11-07-2015, 07:53 PM   #23
clepsydrae

Quote:
Originally Posted by sostenuto View Post
Thank-you for this. Also pleased to see a very strong recommend for reaFIR to accomplish this. I was aware of very strict phase concerns with M/S and will use accordingly.
Personally I use SplineEQ.

Quote:
Originally Posted by plush2 View Post
hit the mono button on the master channel, that is mid, and side is everything that you just ... cancelled out
That's a great way to put it.
Old 11-07-2015, 08:25 PM   #24
sostenuto

@clepsydrae __ Just installed SplineEQ. Also trying Marvel GEQ and Filtrate LE just to learn a bit.

@plush2 __ Very nice to have the fog cleared away, especially when most experienced users have worked with M/S for so long and are amused by the hype for new, pricey M/S plugins.


Thread got off to a slow start __ pleased it evolved so well.
Old 11-07-2015, 09:29 PM   #25
richie43
Join Date: Dec 2009
Location: Minnesota
Posts: 9,090

Quote:
Originally Posted by plush2 View Post
The easiest way to conceptualize it is to hit the mono button on the master channel, that is mid, and side is everything that you just turned off (or rather cancelled out by summing to mono).
Correct me if I am wrong, but is that really accurate? If something has been recorded properly and without any phase issues, switching to mono would not leave you only the "mid"; there would be everything, just in mono. When a stereo source is processed as mid-side, the mid does not contain everything in the source file in mono; much of the "side" is actually not there.

Another thing to remember is that mid-side recording is different from processing stereo material with mid-side processing.
__________________
The Sounds of the Hear and Now.
Old 11-07-2015, 09:56 PM   #26
karbomusic

Quote:
Originally Posted by richie43 View Post
but is that really accurate?
That's the most important bit about M/S: hitting mono causes the two out-of-phase sides to instantly cancel each other out, leaving only the mid.
Old 11-07-2015, 10:06 PM   #27
plush2
Default

Quote:
Originally Posted by richie43 View Post
Correct me if I am wrong, but is that really accurate? If something has been recorded properly and without any phases issues,
That's where some of the confusion exists. Mid/side, or sum/difference, is really just physics, so we're not talking about an improper mix exhibiting a certain behaviour while a proper one does not. Any mix with two channels can be analyzed as sum and difference in the same way it can be analyzed as channel 1 (L) and channel 2 (R).

Quote:
switching to mono would not leave you only the "mid", there would be everything, just in mono. When a stereo source is processed as mid-side, the mid does not contain everything in the source file in mono, it has much of the "side" actually not there.
Actually, the only mix that would give you everything when turning on the mono button is a mono mix. Here's the part that I've found difficult to grasp: the information being lost when switching to mono is the phase information (not phase problems, but rather phase necessities), which you can think of as the information that locates a sound in the stereo field. Mid (mono) is the power or magnitude of the mix, and side is the directional information of the mix.

Quote:
Another thing to remember is that mid-side recording is different from processing stereo material with mid-side processing.
The physics involved are exactly the same. The difference is that the source starts out as mid-side (sum and difference) on two separate channels, whereas the stereo material contains mid-side (sum and difference) as mathematical operations (left + right and left - right).

I hope you don't see this as critical of what you wrote. I appreciate the desire for clarity on this as it's a really fundamental part of how sound waves work.
Old 11-08-2015, 01:02 AM   #28
innuendo

Quote:
Originally Posted by richie43 View Post
Correct me if I am wrong, but is that really accurate? If something has been recorded properly and without any phases issues, switching to mono would not leave you only the "mid", there would be everything, just in mono. When a stereo source is processed as mid-side, the mid does not contain everything in the source file in mono, it has much of the "side" actually not there.
Actually, the "Mid channel" and "Mono" both refer to the same thing - stereo without the side.
By definition of both the Mid channel and Mono,
M = (L+R)/2

Last edited by innuendo; 11-08-2015 at 02:17 AM.
Old 11-08-2015, 01:19 AM   #29
innuendo

Quote:
Originally Posted by clepsydrae View Post
Do they? I know they use phasing (via "delay path") to implement the directionality, but does it actually distort the phase of the recorded signal? That'd be the first i've heard of that.
AFAIK what's going on there is that the diaphragm is used to compare the sound coming from the front to the sound arriving from the back. Since there will be a slight delay, this affects the high frequencies more than the low, which as a byproduct creates the proximity effect. Obviously, when applying group delay to some frequencies you also affect their phase, just like any minimum-phase EQ does. Now if you move a source from on-axis to off-axis, that changes the frequency response, but also the applied group delay, because the difference between the arrival times (the delay) has now changed. So the phase response for what's coming from off-axis depends on the exact angle to the axis and is different from what's coming from on-axis.
Old 11-08-2015, 02:11 AM   #30
clepsydrae

A couple additions, in case they are more helpful than redundant; please correct any mistakes/misunderstandings:

As plush2 said, any stereo recording that isn't mono has out-of-phase components, otherwise it wouldn't be stereo. But the definition of "phase" is slippery here; we're just talking about the left and right channels not "agreeing" about where the signal level is at a given time. You could have a sine wave hard-left and a different frequency sine wave hard-right -- since they are different frequencies, you could argue that there is no phase disagreement between the two channels because they aren't even the same frequency; yet at a given time the left and right channels will (usually) be different, so on a scope (see the "goniometer" JS plugin that comes with reaper) you will see deviation from a vertical line. This does not mean your sine wave recording has 'phase issues'. (I wouldn't buy the album, though.)

A mid/side pair that is decoded to regular stereo is exactly like any other stereo signal. When summed to mono, however, the result is a signal that originated with one physical microphone, so it may have a different characteristic than other stereo recording methods. If you're recording an orchestra from 30 feet away, this may make no appreciable difference compared to XY or whatever, but if you're recording something closer up, it can, which I presume is due to the off-vs-on-axis coloration i was describing previously.

You can encode into m/s and back as you like, and it's not even physics so much as basic math that allows this.

Let's say you and I are facing each other some distance from the mid-line of a basketball court. To describe where we are standing, I can say "I'm one foot behind the line and you are five feet in front of the line."

Alternately, I could say "the half-way point between us is two feet in front of the line" and "we are six feet apart from each other". This is the mid/side way. It conveys the same information (where we are standing) just in a different way. (Technically you would also need to say "and I am more 'behind' the line than you are" to be completely unambiguous.)
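The analogy in numbers (hypothetical positions in feet, negative meaning behind the line), to show the two descriptions carry exactly the same information:

```python
me, you = -1.0, 5.0                # positions relative to the mid-court line

midpoint = (me + you) / 2          # "two feet in front of the line"
separation = you - me              # "six feet apart"

# Decoding recovers the original positions, just like M/S back to L/R.
decoded_me = midpoint - separation / 2
decoded_you = midpoint + separation / 2
```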

When we think about what the "mid" actually is, and what the "side" actually is, things can get a little confusing. Do they represent a real "audio" component of the signal or are they just some kind of esoteric encoding?

We're used to hitting "mono" on a channel and thinking of it as the generic 'smushing together' of the left and right into one signal. But consider the case where you have a stereo music track playing. If you overlay a 1k sine on the left channel, and a 1k sine on the right channel but with flipped polarity, you'll clearly hear that coming through the mix. Hit the "mono" button and the sine wave will disappear entirely. So it's not exactly correct to think of the "mono" button as the "combination" of the two channels, at least in a musical/human sense. So what does the "mono" summation really represent, on a human level as opposed to the math?

The best analogy i can think of is that it is the part of the stereo image that is "in agreement" between the channels without the part that is "in disagreement". Most of the time this will sound like a 'smushing together' of the two channels, but not always.

Mathematically, you can think of it as an instantaneous indication of the average of the two signals: the mid point between their levels. So the "mid" channel is the fluctuating center point between the two disparate signals.

Calculating the difference, then, gives you the instantaneous indication of the degree to which the two channels disagree (or as plush2 put it, everything that cancels out when you hit "mono".) To the degree that they disagree, the "difference" signal will indicate the magnitude of that disagreement and, via the sign, which channel was lower in value than the mid point, and which was higher.

Aw, hell, here's a diagram:

[diagram not preserved]
...see how the orange "mid" line is the average between the two? You can think of the "side" as the black bar that slides along the mid, measuring how far from the mid the left channel is, with the sign indicating which channel is above the mid and which is below.

Here's the same diagram with the side graphed as well; note how it is always positive when the left channel is greater than the right, and negative when the opposite occurs:

[diagram: the same left/right waveforms with the side signal graphed below them]

So this is what they mean when they say that the side represents the "out of phase" content: it's the strength of the disagreement, where the mid is the average of the two.

In your average stereo recording, where the important sounds tend to be in the center, the mid channel will sound more like the stuff in the center because things in the center of the stereo field tend to have substantial agreement between the two channels. Components hard-panned to one side have partial agreement: they are only on one channel, but at least the other channel isn't directly "disagreeing" with them, so they show up in both the mid and side channels when encoded. Components that have stark disagreement between the two channels live mostly in the side and don't show up much in the mid (the flipped-polarity sine wave in the example above lives entirely in the side.)

This is why the "side" channel often sounds like the reverberant aspects of the room: you're hearing the sounds that bounced around the space and entered the recorded stereo field out of phase.

This is also the explanation for the karaoke trick of killing the mid channel: in some recordings the vocals are dead center and the band is variously panned, so killing (or just lowering) the mid can effectively remove the vocals from the mix.
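A toy version of that trick, with made-up sample values: the "vocal" is identical in both channels, while the "guitar" is hard-panned left. Listening to the side (L-R) kills everything the channels agree on:

```python
# Made-up sample values: a centered "vocal" and a left-panned "guitar".
vocal  = [0.5, -0.25, 0.25]
guitar = [0.25, 0.5, -0.5]

left  = [v + g for v, g in zip(vocal, guitar)]
right = vocal[:]  # the guitar does not appear on the right at all

# Solo the side: the centered vocal cancels out entirely.
side = [0.5 * (l - r) for l, r in zip(left, right)]
print(side)  # → [0.125, 0.25, -0.25] -- no vocal, half-level guitar remains
```

The side signal is exactly half the guitar: the vocal, being identical in both channels, contributes nothing to the difference.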

This is also why you use a figure-8 as the "side" mic in an m/s setup: you need the stuff coming from either side to have opposite phase, which happens to be the characteristic of the lobes of a figure-8 mic's pickup pattern.

Don't know if I've helped or hurt this discussion, but it helped me to sort through some of it, anyway.

Last edited by clepsydrae; 01-09-2016 at 02:19 PM.
clepsydrae is offline   Reply With Quote
Old 11-08-2015, 02:31 AM   #31
clepsydrae
Human being with feelings
 
clepsydrae's Avatar
 
Join Date: Nov 2011
Posts: 3,409
Default

Quote:
Originally Posted by innuendo View Post
Obviously when applying group delay to some frequencies, you also affect their phase, just like any minimum-phase EQ does.
Yeah, makes sense when you think about it, as I apparently did not. :-) Somehow it just seemed impossible that a mechanical filter could do that, but I don't know why I thought that.

Anyhow, searching around I can't find any info on this... DPA seems to be the only company to publish phase graphs for their mics, and they seem to show very flat phase through the audio spectrum (on the cardioid mics I checked) -- no more than a few degrees, it would seem. I know DPA makes good stuff, but I'd expect to see graphs from other mics if it were a significant thing... so maybe this whole issue is a non-issue?

Otherwise you'd have to worry about matching the phase response characteristics of mics when multi-mic'ing anything (e.g. you measure the distances from the kick/snare for your Glyn Johns setup, but if the two mics have different phase responses anyway, what'd be the point...)
clepsydrae is offline   Reply With Quote
Old 11-08-2015, 02:50 AM   #32
innuendo
Human being with feelings
 
Join Date: Nov 2013
Location: Jerusalem, Israel
Posts: 659
Default

Quote:
Originally Posted by clepsydrae View Post
Yeah, makes sense when you think about it, as I apparently did not. :-) Somehow it just seemed impossible that a mechanical filter could do that, but I don't know why I thought that.

Anyhow, searching around I can't find any info on this... DPA seems to be the only company to publish phase graphs for their mics, and they seem to show very flat phase through the audio spectrum (on the cardioid mics I checked) -- no more than a few degrees, it would seem. I know DPA makes good stuff, but I'd expect to see graphs from other mics if it were a significant thing... so maybe this whole issue is a non-issue?

Otherwise you'd have to worry about matching the phase response characteristics of mics when multi-mic'ing anything (e.g. you measure the distances from the kick/snare for your Glyn Johns setup, but if the two mics have different phase responses anyway, what'd be the point...)
If you mean graphs like the one here:

http://www.dpamicrophones.com/en/pro...24387#diagrams

Specifically on this diagram (which is the only one depicting phase):
http://www.dpamicrophones.com/~/medi...-4011-stor.jpg

Then probably what's pictured is the phase response for sound arriving on-axis. There is no diagram showing what happens to the phase off-axis.

That said, the diagram showing the off-axis frequency response is also pretty hard to believe. A flat response at 90 degrees? That's not to say I don't trust DPA's diagrams, just to point out that if the frequency-response diagram is correct, then these mics are very different from most other directional mics.

And yes, in my understanding, off-axis phase response is an issue for multi-mic'ing with directional setups. I just can't see how it can be avoided (besides physically blocking the sound coming from off-axis, which kind of defeats the point of a directional mic).
innuendo is offline   Reply With Quote
Old 11-08-2015, 07:09 AM   #33
richie43
Human being with feelings
 
Join Date: Dec 2009
Location: Minnesota
Posts: 9,090
Default

I wasn't thinking...thanks y'all.
__________________
The Sounds of the Hear and Now.
richie43 is offline   Reply With Quote
Old 11-08-2015, 09:43 AM   #34
sostenuto
Human being with feelings
 
sostenuto's Avatar
 
Join Date: Apr 2011
Location: St George, UT _ USA
Posts: 2,881
Default

I am almost a 'clinical' left-brainer, so the theoretical conversation is like a magnet. It also gets well beyond my mainstream needs (and abilities), although it's routine for others here.

Maybe more confused now with MY heavy emphasis on VSTi audio sources ... versus OTHERS' live recordings ___ and many of the mic/spatial issues discussed above.

1) Perhaps M/S issues are almost non-issues for MY situation ?

2) Does this bring newer M/S VST plugins (EQ, Comp, Saturation, xx) back into the discussion since they deal with many of the issues effectively while focusing on resultant sound ?? At a notable $$ cost, YES, but at what point is the workflow, ease, time, an offset to that cost?

Good news is: Sun came up today and I'm enjoying every minute!

Thank-you for broadening and enriching the discussion
sostenuto is offline   Reply With Quote
Old 11-08-2015, 11:19 AM   #35
clepsydrae
Human being with feelings
 
clepsydrae's Avatar
 
Join Date: Nov 2011
Posts: 3,409
Default

Quote:
Originally Posted by innuendo View Post
If you mean graphs like the one here:
I did, yeah.

Quote:
Then probably what's pictured is the phase response for sound arriving on-axis. There is no diagram showing what happens to the phase off-axis.
Yeah, interesting to think about. I wonder what those graphs would look like. Let us know if you come across anything.

Quote:
That said, the diagram showing the off-axis frequency response is also pretty hard to believe. A flat response at 90 degrees? That's not to say I don't trust DPA's diagrams, just to point out that if the frequency-response diagram is correct, then these mics are very different from most other directional mics.
You think? Doesn't seem so unbelievable to me, it being a cardioid as opposed to a hypercardioid. Most of them seem pretty close at 90, if not quite as good as that DPA. More diagrams here.

Quote:
And yes, in my understanding, off-axis phase response is an issue for multi-mic'ing with directional setups. I just can't see how it can be avoided (besides physically blocking the sound coming from off-axis, which kind of defeats the point of a directional mic).
Yeah, fascinating. I would love to see some numbers on how much phase change we're talking about...
clepsydrae is offline   Reply With Quote
Old 11-08-2015, 11:34 AM   #36
clepsydrae
Human being with feelings
 
clepsydrae's Avatar
 
Join Date: Nov 2011
Posts: 3,409
Default

Quote:
Originally Posted by sostenuto View Post
1) Perhaps M/S issues are almost non-issues for MY situation ?
Sure, could be the case, unless you're interested in the mixing techniques that m/s could give you.

Quote:
2) Does this bring newer M/S VST plugins (EQ, Comp, Saturation, xx) back into the discussion since they deal with many of the issues effectively while focusing on resultant sound ?? At a notable $$ cost, YES, but at what point is the workflow, ease, time, an offset to that cost?
I think it comes down to how much you value that convenience vs. how easy it is for you to set up M/S routing in REAPER, maybe make FX chains, etc. I can imagine M/S tricks that a plugin could do "inside" the plugin that you wouldn't be able to do "outside" with routing in REAPER, but most of the time you can achieve it with routing.
clepsydrae is offline   Reply With Quote
Old 11-08-2015, 12:49 PM   #37
sostenuto
Human being with feelings
 
sostenuto's Avatar
 
Join Date: Apr 2011
Location: St George, UT _ USA
Posts: 2,881
Default

Thank-you. Oscillating a bit. Decided on some basic work with JS Mid/Side Encoder + Linear EQs & also Voxengo MSED. Only way for me to learn and sort this.
sostenuto is offline   Reply With Quote
Old 11-08-2015, 01:13 PM   #38
plush2
Human being with feelings
 
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,110
Default

Quote:
Originally Posted by sostenuto View Post
Thank-you. Oscillating a bit. Decided on some basic work with JS Mid/Side Encoder + Linear EQs & also Voxengo MSED. Only way for me to learn and sort this.
You can also use the right-click options of the Mono button on the master channel to analyze your mix, now that you understand the concept better. By default it is set to L+R, which you now know is the same as the Mid of your mix, but down at the bottom is the option to listen to L-R, which is the Side information of your mix. I find this extremely useful for cleaning up muddy mixes.

When you start thinking about it mid/side is actually a much better representation of how we perceive sound than L/R. If a sound is in front of us we know it because of agreement between our two ears. If a sound is to one side or the other then the phase discrepancies (Inter-aural time difference is the technical term) and the volume discrepancies (Inter-aural level difference) tell us where the sound is. Like I said earlier, really fundamental stuff once you get into it.
plush2 is offline   Reply With Quote
Old 11-08-2015, 02:36 PM   #39
innuendo
Human being with feelings
 
Join Date: Nov 2013
Location: Jerusalem, Israel
Posts: 659
Default

Quote:
Originally Posted by clepsydrae View Post
Yeah, interesting to think about. I wonder what those graphs would look like. Let us know if you come across anything.
Don't hold your breath. While it is possible to measure a mic's phase response, the measurement requires a precision loudspeaker and an anechoic chamber, at minimum.



Quote:
Originally Posted by clepsydrae View Post
You think? Doesn't seem so unbelievable to me; it being a cardioid as opposed to a hypercardioid. Most of them seem pretty close at 90, if not quite as good as that DPA. More diagrams here.
We can learn something from those diagrams, but not as much as from DPA's frequency response measured at different angles.
I'll just say that DPA's diagrams are far more detailed, presenting the whole audible frequency range at different angles, while in the Audio-Technica manual and on this (otherwise very interesting) site we can only find a few discrete frequencies, which do not constitute the whole picture.


Quote:
Originally Posted by clepsydrae View Post
Yeah, fascinating. I would love to see some numbers on how much phase change we're talking about...
I would like to see those numbers too. Unfortunately, no one is measuring them. I wouldn't think it's because the mics are perfect, though. More likely it's because off-axis phase response is a little too complex for the average Joe to understand, and because, for the manufacturers, taking it into account would mean costlier research. So it's easier to disregard than to deal with.
innuendo is offline   Reply With Quote
Old 11-08-2015, 03:22 PM   #40
sostenuto
Human being with feelings
 
sostenuto's Avatar
 
Join Date: Apr 2011
Location: St George, UT _ USA
Posts: 2,881
Default

Quote:
Originally Posted by plush2 View Post
You can also use the right-click options of the Mono button on the master channel to analyze your mix, now that you understand the concept better. By default it is set to L+R, which you now know is the same as the Mid of your mix, but down at the bottom is the option to listen to L-R, which is the Side information of your mix. I find this extremely useful for cleaning up muddy mixes.

When you start thinking about it mid/side is actually a much better representation of how we perceive sound than L/R. If a sound is in front of us we know it because of agreement between our two ears. If a sound is to one side or the other then the phase discrepancies (Inter-aural time difference is the technical term) and the volume discrepancies (Inter-aural level difference) tell us where the sound is. Like I said earlier, really fundamental stuff once you get into it.
So easy and useful!!

Years with Reaper and still 'space out' RIGHT CLICK ON CONTROLS

Appreciate your help and patience!
sostenuto is offline   Reply With Quote