Spoiler - if you want something specific to listen for. Please listen blindly before reading this, and read it if you can't hear any difference. https://www.pastiebin.com/596239a120f67
Also, any other comments welcome... I wonder if the guitar part is too simple / repetitive.
There is a tiny overtone or articulation in version "A" on the 2nd note of the clarinet which isn't there (or isn't as prevalent) in version "B". Probably more there to find; it's just the first thing I picked up on. Can't give you a root cause by ear - maybe a change in the reverb itself, or phase, or something with the other instruments - but that wasn't what I was after, just wanted to see what I could hear quickly by poor-man's ABing between browsers.
OT: This is one of those where I might cut in that 200 range for the guitar instead of HPF, if these are the only instruments - since it sounds like you want the lower lows when the guitar chord goes low. I think the two are fighting a little in that 100-220 range. I'm calling frequency ranges out by ear, so forgive me if I'm a little off.
__________________ Music is what feelings sound like.
Last edited by karbomusic; 07-09-2017 at 08:03 AM.
I thought the clarinet sounded a little clearer and a little louder in version A. It sounds like somebody tried to take out the air noise in B. There's what sounds like distortion or buzzing in version A at the 0:48 mark, but other than that I prefer version A. If it was my recording I'd choose A.
Only when reading your questions did I pay any attention to differences in the guitar track, other than the low bass occasionally being perhaps a hair too much in both versions. Might it be a little clearer in A? Perhaps. Is the difference between the tracks applied to the master?
In any case the differences between the tracks are small enough that I'd have never noticed unless I was asked.
__________________
Musician / Guitar Teacher/ Guitar Tech / ex-Physicist (hence the Dr in DrKev)
The acoustic guitar sounded a little muddy, which might be what Karbo is talking about.
It is, and there are a couple of chord changes with really low guitar notes, lower in frequency than the mud; those sounded like they might have value if these were to be the only two instruments in the mix. That's why dipping that mud range some, but leaving the low lows, might be better than an HPF.
I'd concur with the others... the guitar and the clarinet are fighting for space - they are both in the same frequency range, and the guitar is drowning out the clarinet. I'd probably just bring it down in the mix and maybe roll off some of the lower frequencies as well.
I'm doing this blind without reading the spoiler or other comments in this thread. Here is what I heard.
In the first one it sounds more produced and flat; the transients are squished. TBH the clarinet, or whatever it is, sounds like a sample. Rather lifeless, IMO.
The second one was clearer and had more presence. I much preferred the second recording, and the clarinet sits very nicely with the guitar. It no longer sounded like a sample; it sounded more natural.
My impression:
Clarinet is more panned to the right in 14Z4.
In 14Z3, both instruments seem played in the same space.
In 14Z4, both instruments seem played in different space, then mixed.
No audible difference in guitar alone.
So the difference is: in version A I put a steep highpass right below the fundamental and automated it to stay relatively tight, as high as I could without harming the tone.
It seems I was successful in not harming the tone, however the instrument I used had very little noise to begin with, so the highpass wasn't really useful either...
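For anyone curious what "a steep highpass right below the fundamental" looks like in practice, here is a rough sketch using the standard RBJ audio-EQ-cookbook highpass biquad. The 180 Hz cutoff, the cascade depth, and the function names are my own assumptions for illustration, not the actual settings used in the test files:

```python
import math

def highpass_coeffs(sr, f0, q=0.707):
    """RBJ audio-EQ-cookbook highpass biquad, normalized so a0 = 1."""
    w0 = 2 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    a0 = 1 + alpha
    return [(1 + cosw) / 2 / a0,   # b0
            -(1 + cosw) / a0,      # b1
            (1 + cosw) / 2 / a0,   # b2
            -2 * cosw / a0,        # a1
            (1 - alpha) / a0]      # a2

def biquad(x, c):
    """Direct Form I biquad over a list of samples."""
    b0, b1, b2, a1, a2 = c
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for s in x:
        y = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, s
        y2, y1 = y1, y
        out.append(y)
    return out

def steep_highpass(x, sr, cutoff_hz=180.0):
    """Cascade two 12 dB/oct sections for a steeper 24 dB/oct slope."""
    c = highpass_coeffs(sr, cutoff_hz)
    return biquad(biquad(x, c), c)
```

Automating the filter "to stay relatively tight" as described would mean recomputing the coefficients as the fundamental moves.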
Some are preferring A, and some are preferring B... I wonder if I re-uploaded with new names, if you could really tell on a blind test. Anyone interested?
Ok, thanks everyone,
bennetng, what is this "probability that you were guessing?", some kind of FooBar AB algorithm? Seems interesting.
You asked for a blind test, but we normally don't call an "AB" test "blind" - we call them "ABX" tests. You need another person to play either file A or B without telling you; this unknown file is called X, and you answer whether X is A or B. An ABX program will randomly play X several times and let you answer.
"Probability that you were guessing" is the p-value, a statistics term. In my ABX log I evaluated X 10 times and was right all 10 times, and the calculated p-value is 0.1%.
In general, people consider a p-value of 5% or lower as "successful", meaning the differences are audible. But some scholars consider 5% too high, so I'm giving a result of 0.1% to show, beyond dispute, that I can really hear the differences.
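For the curious: the "probability that you were guessing" that foo_abx reports is just the one-sided binomial p-value - the chance of scoring at least that many correct answers by flipping a coin. A quick sketch (the function name is mine):

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of getting at least `correct` right out of `trials`
    ABX rounds by pure guessing (p = 0.5 per round)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 10 correct out of 10 rounds, as in the ABX log in this thread
print(f"{abx_p_value(10, 10):.2%}")  # 1/1024, i.e. about 0.1%
```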
Quote:
I wonder if I re-uploaded with new names, if you could really tell on a blind test. Anyone interested?
Useless, because people can still tell which file is A or B by looking at the checksums. If you want, you should upload another set of files with different musical content. Using another piece of music would also test whether your mixing method is universally usable or harmful - a single set of files is inconclusive.
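(For reference, the checksums in question are just file hashes, so renaming alone is defeated in a few lines. A sketch, with the helper name being mine:)

```python
import hashlib

def sha1_of(path, chunk_size=1 << 16):
    """SHA-1 of a file, read in chunks so large WAVs needn't fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

Two uploads with byte-identical audio report identical digests no matter what they are named, which is why a genuinely new blind test needs freshly rendered files.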
The difference was not meaningful to me. bennetng must have been concentrating on different elements than I was, because I'm confident I couldn't reliably pick them out in an A/B.
Also, the incredibly subtle extra "body" I heard in parts of A could well have to do with the phase relationship to the guitar being altered by the filter (it was there in fleeting moments, not consistently), or it could be from the corner-frequency ripple on the highpass... or I could have imagined it.
Thanks everyone. I also feel I couldn't reliably AB this.
Quote:
Originally Posted by bennetng
You asked for a blind test, but we normally don't call an "AB" test "blind" - we call them "ABX" tests. You need another person to play either file A or B without telling you; this unknown file is called X, and you answer whether X is A or B. An ABX program will randomly play X several times and let you answer.
"Probability that you were guessing" is the p-value, a statistics term. In my ABX log I evaluated X 10 times and was right all 10 times, and the calculated p-value is 0.1%.
In general, people consider a p-value of 5% or lower as "successful", meaning the differences are audible. But some scholars consider 5% too high, so I'm giving a result of 0.1% to show, beyond dispute, that I can really hear the differences.
Useless, because people can still tell which file is A or B by looking at the checksums. If you want, you should upload another set of files with different musical content. Using another piece of music would also test whether your mixing method is universally usable or harmful - a single set of files is inconclusive.
Wow, so maybe high-passing helps put them in the same space, because it's removing the sub-harmonic room reflections from the clarinet.
I've mixed this some more - EQing the clarinet's fundamental out of the guitar part, tweaking the verb, and adding a panned delay to the clarinet (which I also send to the verb). So is this better?
The guitar EQ doesn't work for me in this one. I feel like the guitar should be the foundation, and particularly in the first half it thins out too much when their range intersects.
I really think that static EQ and level balancing is all that is needed for this example.
Quote:
Originally Posted by Mr. PC
Thanks everyone. I also feel I couldn't reliably AB this.
Wow, so maybe high-passing helps put them in the same space, because it's removing the sub-harmonic room reflections from the clarinet.
I've mixed this some more - EQing the clarinet's fundamental out of the guitar part, tweaking the verb, and adding a panned delay to the clarinet (which I also send to the verb). So is this better?
foo_abx 2.0.2 report
foobar2000 v1.3.16
2017-07-12 22:30:09
File A: 15tq time aligned.wav
SHA1: 331d3690d3f7a591f0162e43b844cf76502222f4
Gain adjustment: -1.63 dB
File B: 14Z3_mp3.mp3
SHA1: b9c6af9571aeed641773e48b7e6ab3ccf312a6b3
Gain adjustment: -5.01 dB
Output:
WASAPI (push) : Speakers (Creative SB X-Fi), 24-bit
Crossfading: NO
22:30:09 : Test started.
22:32:55 : 01/01
22:33:32 : 02/02
22:33:56 : 03/03
22:34:34 : 04/04
22:35:30 : 05/05
22:35:51 : 06/06
22:36:41 : 07/07
22:37:11 : 08/08
22:37:31 : 09/09
22:38:33 : 10/10
22:38:33 : Test finished.
----------
Total: 10/10
Probability that you were guessing: 0.1%
-- signature --
41883504ac38081cda33c6128a0283eddeaadb20
This file starts earlier than the other two you uploaded, so I decoded it and time-aligned it with another file; otherwise I couldn't blind test them. Also, this file is quieter than your previous files, so I loudness-matched them (R128 algorithm) before testing - you can see the adjustment in the ABX log.
Loudness matching is very important for a reliable listening test, as Ian Shepherd demonstrated: https://youtu.be/mPhOH05EDMg
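(The gain adjustments in the log are in dB; applying them to sample data means converting to a linear amplitude factor. Trivial, but worth spelling out as a sketch:)

```python
def db_to_linear(db):
    """Convert a dB gain adjustment (amplitude) to a linear multiplier."""
    return 10 ** (db / 20)

# The adjustments reported in the ABX log above:
for db in (-1.63, -5.01):
    print(f"{db:+.2f} dB -> x{db_to_linear(db):.3f}")
```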
I mainly focused on the spatial relationship between guitar and clarinet. Now 15tQ, like 14Z3, sounds as if the two instruments were played in the same space, but in 15tQ the clarinet seemed farther from the mic; as a result, the room in 15tQ felt larger.
Couldn't say which one is better - it depends on where you want to place your virtual mic.
Anyway, it is more important that you, the mixing engineer, can hear the difference and make a decision based on your preference.
Quote:
The guitar EQ doesn't work for me in this one. I feel like the guitar should be the foundation, and particularly in the first half it thins out too much when their range intersects.
I really think that static EQ and level balancing is all that is needed for this example.
Well yes, it's only 2 instruments. Frankly, I don't like to EQ at all; I'm a bit of a purist (even though my guitar is nothing special, of course).
But if the problem is the guitar masking the clarinet, wouldn't automating an EQ (or using something like TrackSpacer) mean I can get rid of the masking while changing the guitar tone less? Maybe I should just soften the EQ.
I also added a slight low shelf on the guitar, right before the clarinet comes in (because those constant low notes from my thumb I thought were a bit annoying and repetitive). Also added a low sine wave at 2 spots, pretty quietly - I think 55 Hz, the low A.
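On the TrackSpacer idea: under the hood, that kind of tool is essentially an envelope follower on one track driving a gain (or per-band EQ) cut on the other. A crude broadband sketch of the concept - every parameter value here is my own guess for illustration, not any plugin's actual defaults:

```python
import math

def duck(target, sidechain, sr, depth_db=4.0, attack_ms=10.0, release_ms=120.0):
    """Dip `target`'s gain by up to `depth_db` while `sidechain` is loud.
    A real dynamic EQ would do this per frequency band, not broadband."""
    attack = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    release = math.exp(-1.0 / (sr * release_ms / 1000.0))
    floor = 10 ** (-depth_db / 20.0)   # maximum gain reduction as a factor
    env = 0.0
    out = []
    for t, s in zip(target, sidechain):
        level = abs(s)
        # one-pole envelope follower: fast attack, slow release
        coeff = attack if level > env else release
        env = coeff * env + (1 - coeff) * level
        amount = min(env / 0.5, 1.0)   # crude mapping of envelope to cut depth
        out.append(t * (1.0 - (1.0 - floor) * amount))
    return out
```

The appeal over static EQ is exactly what's suggested above: the guitar only loses energy while the clarinet is actually playing.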
Quote:
Originally Posted by bennetng
Code:
foo_abx 2.0.2 report
foobar2000 v1.3.16
2017-07-12 22:30:09
File A: 15tq time aligned.wav
SHA1: 331d3690d3f7a591f0162e43b844cf76502222f4
Gain adjustment: -1.63 dB
File B: 14Z3_mp3.mp3
SHA1: b9c6af9571aeed641773e48b7e6ab3ccf312a6b3
Gain adjustment: -5.01 dB
Output:
WASAPI (push) : Speakers (Creative SB X-Fi), 24-bit
Crossfading: NO
22:30:09 : Test started.
22:32:55 : 01/01
22:33:32 : 02/02
22:33:56 : 03/03
22:34:34 : 04/04
22:35:30 : 05/05
22:35:51 : 06/06
22:36:41 : 07/07
22:37:11 : 08/08
22:37:31 : 09/09
22:38:33 : 10/10
22:38:33 : Test finished.
----------
Total: 10/10
Probability that you were guessing: 0.1%
-- signature --
41883504ac38081cda33c6128a0283eddeaadb20
This file starts earlier than the other two you uploaded, so I decoded it and time-aligned it with another file; otherwise I couldn't blind test them. Also, this file is quieter than your previous files, so I loudness-matched them (R128 algorithm) before testing - you can see the adjustment in the ABX log.
Loudness matching is very important for a reliable listening test, as Ian Shepherd demonstrated: https://youtu.be/mPhOH05EDMg
I mainly focused on the spatial relationship between guitar and clarinet. Now 15tQ, like 14Z3, sounds as if the two instruments were played in the same space, but in 15tQ the clarinet seemed farther from the mic; as a result, the room in 15tQ felt larger.
Couldn't say which one is better - it depends on where you want to place your virtual mic.
Anyway, it is more important that you, the mixing engineer, can hear the difference and make a decision based on your preference.
Thanks so much for this. Really thinking I should get Foobar now for ABX.
Of course, I can hear the difference with this new version, but it's hard to tell what's "better" once I become so familiar with it. I tend to get too close to the music and miss even obvious things. I like your objective view, but am also looking for a subjective "do you like it", because frankly, by the time I finish mixing something, I always hate it.
Quote:
Originally Posted by Mr. PC
But if the problem is the guitar masking the clarinet
Can't offer much for the other stuff since these guys are giving great advice but the above does make me think of one thing conceptually...
"Why is either fighting with the other or masking to begin with - either they create a bigger whole (masking OK) or they shouldn't be stepping on each other from the get go". Meaning this is pointing out a "potential" orchestration/composition mistake or oversight. Now without getting deep in the weeds we can...
1. Separate their frequency content (play one via different register or inversion).
2. Separate them in the stereo field.
3. Separate them in time aka recompose one of the parts so it falls more in each other's holes.
4. Use EQ or other processing.
Those first three above are my first choice, if I have a choice, before EQing. I'm not even going to charge for that one, since you'll almost never find VST-laden YT videos teaching this.
Last edited by karbomusic; 07-12-2017 at 12:50 PM.
Quote:
Originally Posted by karbomusic
Can't offer much for the other stuff since these guys are giving great advice but the above does make me think of one thing conceptually...
"Why is either fighting with the other or masking to begin with - either they create a bigger whole (masking OK) or they shouldn't be stepping on each other from the get go". Meaning this is pointing out a "potential" orchestration/composition mistake or oversight. Now without getting deep in the weeds we can...
1. Separate their frequency content (play one via different register or inversion).
2. Separate them in the stereo field.
3. Separate them in time aka recompose one of the parts so it falls more in each other's holes.
4. Use EQ or other processing.
Those first three above are my first choice, if I have a choice, before EQing. I'm not even going to charge for that one, since you'll almost never find VST-laden YT videos teaching this.
So is EQing the last go because it harms the signal / tone? Or simply because the guitar will sound less good when lacking those frequencies?
I don't see how it can be a compositional problem; simply a clarinet and guitar... shouldn't be muddy at all. The clarinet up an octave wouldn't sound as nice. In fact, I like the deep / lowness of both these.
I'm thinking to centre the low tones of the guitar, then send the high frequencies left, and send the clarinet right.
Quote:
Originally Posted by Mr. PC
I don't see how it can be a compositional problem; simply a clarinet and guitar... shouldn't be muddy at all. The clarinet up an octave wouldn't sound as nice. In fact, I like the deep / lowness of both these.
It's a potential compositional problem - take that with a tiny grain of salt. I spent a number of years practicing composing (from a more rock/pop standpoint) where the rule was: the next track cannot be added unless it can be done in a way that complements the existing tracks minus processing (other than volume and pan), based largely on the items I listed above. Then the idea was to see how many parts/tracks I could compose before I couldn't fit anything else in. This challenge exists even without multitracking or recording; it's sort of composition (or maybe orchestration, not sure of the best word) 101 to some extent.
Many great compositions (again, grain of salt) sound good by the very fact that someone paid attention to things that would mask or step on each other at composition time. I can't imagine good writers not taking this into account, and you can hear it pretty much everywhere once one is aware of it. It's often more likely this care was taken up front than some EQ magic later.
IME, far too many come into this thinking "I'll just process this and that" and never realize A is stepping on B because the parts weren't composed to support/complement each other, and so on. If one is only the mixer and receives tracks from someone else, they don't get to decide and have to work with what they received as-is; if they are mixer and producer, they might rearrange to fix such things at record time, which is often what someone in that role does - among a host of other duties.
I'm not saying EQ isn't a great tool, just trying to get some awareness out there about the myriad of concepts that help us need it less so that EQ doesn't become cart before horse as I mentioned in a different thread.
Last edited by karbomusic; 07-14-2017 at 06:18 AM.
Quote:
Originally Posted by karbomusic
1. Separate their frequency content (play one via different register or inversion).
2. Separate them in the stereo field.
3. Separate them in time aka recompose one of the parts so it falls more in each other's holes.
4. Use EQ or other processing.
I agree with this 99%!
My personal preference would be a little different...
1. Separate them in time aka recompose one of the parts so it falls more in each other's holes.
2. Separate their frequency content (play one via different register or inversion).
3. Separate them in the stereo field.
4. Use EQ or other processing only if rewriting/retracking using 1 of the other 3 options is not possible.
I have found (in my own experience and perhaps it only applies to my work) that often just because something SHOULD work compositionally, it doesn't make it so.
__________________
The Sounds of the Hear and Now.
Quote:
I agree with this 99%!
My personal preference would be a little different...
I agree - I now see that what I wrote implied a preference order, but I didn't mean to, other than making sure EQ was last in the list.
Quote:
I have found (in my own experience and perhaps it only applies to my work) that often just because something SHOULD work compositionally, it doesn't make it so.
Absolutely - I think that is a super important concept. This is where a green rock band has the most trouble: it's just a few guys playing "parts they like"; they are technically good, but the whole suffers. It isn't their fault - they just don't know yet. Then again, some similar group of guys just seem to sound fantastic, and it is often down to things just like this, EVEN if they don't consciously know they are choosing parts that fit well as-is.
I think it helps when one has that first-time experience of pulling up some tracks then going, "damn, there isn't much for me to do".
Last edited by karbomusic; 07-14-2017 at 06:28 AM.
NOTE: these "compositional problems" are hypothetical, not in relation to your guitar and clarinet composition, which is all good.
Quote:
Originally Posted by Mr. PC
So is EQing the last go because it harms the signal / tone? Or simply because the guitar will sound less good when lacking those frequencies?
I don't see how it can be a compositional problem; simply a clarinet and guitar... shouldn't be muddy at all. The clarinet up an octave wouldn't sound as nice. In fact, I like the deep / lowness of both these.
EQ'ing is the last go-to (okay, not the last - I guess that would be multi-band compression or something), because it is the last step.
I don't like the idea of composing, rehearsing and recording a piece that has audible problems I have decided to fix later with EQ. It's just not a practical or useful way of working, IMHO.
Quote:
Originally Posted by Mr. PC
I'm thinking to centre the low tones of the guitar, then send the high frequencies left, and send the clarinet right.
Why on Satan's good earth would you want to do that?!?
The room tone is the biggest "problem" in your recording by far. My humble advice is to spend this time looking for a space with nice acoustics you can use to record in, not going ape-shit forensic with your captured audio.
Masking doesn't have to be a bad thing - it can lock a bass drum with a bass guitar and enhance groove, it can make backing vocalists sound magical, it can make a guitar solo and vocal part sound like they're almost coming from the same creature...
This is chock-full of masking and instrument's frequency ranges stepping on each other. It's what makes it sound great:
@OP: listening to this mix, it is obvious that you are trying to fix subtle issues while ignoring or not paying attention to things that really do matter.
This mix might work if the clarinet had been accompanying the guitar. Since the roles are the exact opposite, so should be the recording and mixing.
For this type of piece where there is a soloist and some accompaniment, you basically need to think in terms of foreground and background. The clarinet should be in the foreground, meaning close recording, louder, lots of high-mid and high frequencies, less reverb, possibly shorter reverb, possibly less pre-delay. The guitar should be exactly the opposite and making space for the clarinet where that makes sense.
The mix being so far off creates a major distraction, which makes it difficult to answer your subtle comparison question.
Also there seems to be automation or compression on the guitar that kicks in when the clarinet is playing - this is exaggerated and makes it sound unnatural.
As to putting things in the same space, it helps to add some room reverb and send both instruments to it (you may just use the early reflections from it, muting the tail). It also helps to put the dry part of the mix through subtle compression that is designed to act like "glue" - slow attack, slow release, low ratio, low threshold, cutting just 1-2dB.
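The "glue" settings above (low ratio, low threshold, 1-2 dB of reduction) translate to a very gentle static curve. As a sketch of just the gain computer - the threshold and ratio numbers are illustrative, and a real compressor adds the attack/release smoothing on top:

```python
def glue_gain_reduction_db(level_db, threshold_db=-24.0, ratio=1.2):
    """Gain reduction (in dB, <= 0) for a low-ratio 'glue' style compressor.
    With a low threshold and a ratio barely above 1:1, even loud material
    only gets cut by a dB or two."""
    if level_db <= threshold_db:
        return 0.0
    # everything above threshold is scaled down by the ratio
    return (threshold_db - level_db) * (1.0 - 1.0 / ratio)

# A signal peaking around -12 dBFS gets cut by about 2 dB:
print(glue_gain_reduction_db(-12.0))  # about -2 dB of reduction
```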
Last edited by avocadomix; 07-15-2017 at 03:25 AM.
Yes, as always, I've over-thought everything, and the more I mixed it, the worse it got.
Really, I made much better music years ago before I even knew anything about mixing.
Anyhow, this was just to AB test the effects of highpassing... and I still haven't got around to making a better test. Sorry about that and hopefully I'll have a proper test up soon.
Quote:
Originally Posted by Mr. PC
Really, I made much better music years ago before I even knew anything about mixing.
I think that's a natural part of the progression.
When you first start out, if you have a reasonable ear, you go by feel and crank knobs using the force. That can turn out pretty well, but it's a crap shoot.
Then you get to the stage where you know just enough to be dangerous. This is when it seems like you're going downhill because you're forever second-guessing yourself and painting yourself into corners.
Quote:
When you first start out, if you have a reasonable ear, you go by feel and crank knobs using the force. That can turn out pretty well, but it's a crap shoot.
Then you get to the stage where you know just enough to be dangerous. This is when it seems like you're going downhill because you're forever second-guessing yourself and painting yourself into corners.
Mixing is full of these dark nights of the soul!
So are you saying I'll be reborn out of my own ashes and soar? Please be saying that :P
Today I'm listening to Shostakovich, these compositional giants make me feel like nothing.
Also, I made another AB test (with video) but it has nothing to do with a specific filter, just different mixes with lots of changes.
Quote:
Originally Posted by Mr. PC
Today I'm listening to Shostakovich, these compositional giants make me feel like nothing. .........
It's not just you:
"When he showed me Schindler's List," says Williams, "I was so moved I could barely speak. I remember saying to him, 'Steven, you need a better composer than I am to do this film.' And he said, 'I know, but they're all dead.' "
"When he showed me Schindler's List," says Williams, "I was so moved I could barely speak. I remember saying to him, 'Steven, you need a better composer than I am to do this film.' And he said, 'I know, but they're all dead.' "