
Old 12-03-2017, 03:26 PM   #81
martinmadero
Human being with feelings
 
Join Date: Mar 2010
Posts: 232

Quote:
Originally Posted by RDBOIS
Congratulations to the winner. The song sounds really nice!

Sorry in advance, this is going to be a long and tedious post (I didn't plan it this way, it just came out because of what I found...)

I just spent a few hours analyzing the winning project. It took me a while to figure out the workflow, but I was impressed by how neatly everything is organized:

- One big huge mega folder with three folders inside it: Vox, All instruments, and Hall (which holds the song's reverb effect busses)

- Not just one song reverb in the Hall, but subfolders inside the Hall for Vocals, Drums, and Electric Guitars. In other words, each of these stems has its own reverb processing.

- A lot of use of JS- Stereo Upmix and JS Stereo Enhancer, which I must look into a bit deeper, especially since my mix was very MONO.

- The dreaded Acoustic Guitar was treated with: a) XComp, b) EQ (mid scoop), and c) JS- 3 Band Peak Filter. JS 3 Band what? Never heard of it! But I turned it off to hear the difference and shhhheeeesh; it turns a nasal, honky, cheap guitar swamped with vocal leakage into a half-decent guitar with a good low end and a ringing high-end sheen. Well... I've got some homework to do: figure out this JS plugin.

- Everything color coded,

- JS Digital Drum Compressor - hmmm, I didn't even know that plugin existed!

- Much detail in the EQing, compression, choice of JS - Model Amps, saturation on the master, and so on.

*****

Now for some boring technical stuff.

I looked at the Audio Stats. A LUFS of -14 with a max peak at -1.0 dB is rather intriguing to me, especially since I had to compress and crank the volume with a limiter on my Master track to meet these requirements. Something I really didn't enjoy doing...
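In case anyone wants to double-check these numbers outside of REAPER, here is a minimal sketch using the third-party Python packages soundfile and pyloudnorm (the file name is just a placeholder, and it reports the plain sample peak rather than REAPER's exact stats):

Code:
# Minimal sketch (assumptions: the render is "render.flac", soundfile and
# pyloudnorm are installed). Reports integrated loudness per ITU-R BS.1770
# and the plain sample peak in dBFS.
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("render.flac")            # float samples in [-1.0, 1.0]

meter = pyln.Meter(rate)                       # BS.1770 loudness meter
loudness = meter.integrated_loudness(data)     # integrated loudness in LUFS

peak_db = 20 * np.log10(np.max(np.abs(data)))  # sample peak, not true peak

print(f"Integrated loudness: {loudness:.1f} LUFS")
print(f"Sample peak:         {peak_db:.1f} dBFS")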

So I figured I'd learn something by looking at the other projects.

But, ohhhh... What is going on here?

Preface -- not that any of this would actually change the overall quality of the songs submitted to the contest. The best mix would most likely still sound the best, no matter the small loudness/mastering details, but...

Below are the stats of all the projects:

As you can see, most didn't reach a True Peak of -1.0 dB. Not that it was clearly required, but hitting it means cutting/compressing more than would otherwise be needed.
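Side note: "True Peak" is not the same as the highest stored sample; it is estimated on an oversampled signal, so inter-sample peaks can push it above the sample peak. Roughly the idea (a sketch with 4x oversampling via scipy; the actual BS.1770-4 method and REAPER's implementation differ in the details):

Code:
# Rough sketch of a true-peak estimate (assumption: 4x polyphase oversampling
# is a good-enough approximation; ITU-R BS.1770-4 specifies its own filter).
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

data, rate = sf.read("render.flac")          # placeholder file name

sample_peak = np.max(np.abs(data))
oversampled = resample_poly(data, up=4, down=1, axis=0)
true_peak_est = np.max(np.abs(oversampled))

print(f"Sample peak:          {20 * np.log10(sample_peak):.2f} dBFS")
print(f"True peak (estimate): {20 * np.log10(true_peak_est):.2f} dBTP")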



***

I looked into Song number 5 a little deeper:

Audio Stats of the song as it comes out of the REAPER MASTER track:



Audio Stats of the FLAC file used for the contest:



I'm not sure what is going on here, but there is a difference between the dynamics of the song as the user mixed it and the FLAC file we listened to when judging the contest. Please be aware that I'm not implying the differences were introduced on purpose. I suspect it may be something in the render to FLAC?

As a test, I rendered song number 5 myself, then loaded the result back into REAPER and collected the audio stats. This is what I got:



I have no idea what is going on. Three different loudness profiles for the same song.

So... I re-did the entire exercise with song number 3 (my sucky mix) and again got differences between the audio stats of my mix coming out of the REAPER Master track and the rendered FLAC file.

What does this mean?

What is going on here?

I tried listening to both versions to see if I could detect a difference with my ears. I don't know... At this point they sound the same, but also different. It could be psychological, my ears are tired and I don't have a good A/B setup (not to mention my monitoring system is very noobish...)
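One way to take tired ears out of the equation is a simple null test: subtract one version from the other and look at what is left. A sketch of that idea (placeholder file names; both files need the same sample rate, length and channel count for the subtraction to mean anything):

Code:
# Null-test sketch (assumed file names): if the residual after subtraction is
# near silence, the two versions are effectively identical and any perceived
# difference is probably in my head.
import numpy as np
import soundfile as sf

a, rate_a = sf.read("master_render.wav")
b, rate_b = sf.read("contest_copy.flac")
assert rate_a == rate_b, "sample rates must match for a meaningful null test"

n = min(len(a), len(b))                  # crude alignment: trim to common length
residual = a[:n] - b[:n]

peak = np.max(np.abs(residual))
if peak == 0:
    print("Perfect null: the files are bit-identical (after trimming)")
else:
    print(f"Residual peak: {20 * np.log10(peak):.1f} dBFS")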

Does it matter? Yes and no. No, because my song still sucks compared to the winning song (good is good and bad is bad), but yes, because we should be judging exactly what people are mixing, not a tainted version.

Can someone tell us whether a dynamic range reading of 50 is really different from 100 to the human ear? Does this matter, technically speaking (i.e. is this just one of those things where no rendering algorithm is perfect)?

Ok. I'm done. It's really pretty outside and I need to go harvest veggies from the garden, not to mention I don't think I can listen to that song anymore.

Thanks for reading this lengthy post.
Hi RDBOIS, your analysis is very good.
We need more contributions like this!
With regard to the standard and the requirements, I think it is very different to reach those values directly in the mix than to apply a normalization process to a render afterwards.
I think the best thing is to avoid all the intermediate loss that happens from the render process to the .flac file (I guess part of the discrepancy has to do with the render settings and the file format as well); applying normalization on top of that would modify the values even further.
I think that in the latter case all the richness and sonic character changes, and with it all the values. That is a point worth working on in our mixes, since it is very different to reach those numbers in a single render than to get there through an additional limiting or normalization stage, e.g. to hit -14 LUFS.
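To illustrate the difference: normalizing after the render is basically just a static gain change applied to the finished file, something like this sketch (third-party soundfile/pyloudnorm packages, placeholder file names), whereas reaching -14 LUFS inside the mix means the compression and limiting decisions themselves change:

Code:
# Sketch of "normalize afterwards" (assumed tools: soundfile + pyloudnorm;
# file names are placeholders). A single static gain brings the render to
# -14 LUFS; it does not change any balance or dynamics decisions in the mix.
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_render.wav")

meter = pyln.Meter(rate)
current = meter.integrated_loudness(data)

normalized = pyln.normalize.loudness(data, current, -14.0)   # static gain only

peak_db = 20 * np.log10(np.max(np.abs(normalized)))
if peak_db > -1.0:
    print(f"Sample peak is now {peak_db:.2f} dBFS; a ceiling/limiter would still be needed for -1 dBTP")

sf.write("final_render_norm.flac", normalized, rate)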