Old 08-30-2011, 10:06 AM   #1
foreward
Human being with feelings
 
Join Date: Aug 2011
Posts: 18
Default When to use 96k workflows?

Under what circumstances is it appropriate to use a 96k workflow, and how does an increased sample rate affect sound quality?

I am using JackRouter on OS X, running at around 85% CPU with 32 tracks @ 96k and a 1024-sample buffer, and around 30% @ 44.1k/1024. Is 85% "safe", or am I running dangerously close to crashing?

I could decrease the number of virtual tracks, but I like having the spare tracks. I could decrease the sample rate, but I'm not familiar enough with this to know what effect that will have. Suggestions?
foreward is offline   Reply With Quote
Old 08-30-2011, 10:46 AM   #2
mplay
Human being with feelings
 
Join Date: Jan 2009
Location: Curaçao
Posts: 410
Default

I vote 32 bit / 48 kHz
mplay is offline   Reply With Quote
Old 08-30-2011, 10:48 AM   #3
RRokkenAudio
Human being with feelings
 
RRokkenAudio's Avatar
 
Join Date: Jun 2009
Location: Buffalo, NY
Posts: 777
Default

None.
RRokkenAudio is offline   Reply With Quote
Old 08-30-2011, 12:24 PM   #4
n0rd
Human being with feelings
 
n0rd's Avatar
 
Join Date: Dec 2010
Location: Down Under
Posts: 396
Default

Quote:
Originally Posted by RRokkenAudio View Post
None.
Wrong... You need it if recording dolphins... but if you're going to do that you may as well use 192k.



To the OP, you'll get a lot of different answers from a lot of different people. Unfortunately, most answers will suffer from one form or another of "purple monkey dishwasher". (Whisper talk - where 'facts' and 'fiction' merge).

My advice - read up on some facts:
Nyquist–Shannon Sampling Theorem
Sampling Theory For Digital Audio
[Warning! Lots of Math!]
__________________
Subconscious Inclination...

Last edited by n0rd; 08-30-2011 at 12:26 PM. Reason: typos
n0rd is offline   Reply With Quote
Old 08-30-2011, 02:20 PM   #5
foreward
Human being with feelings
 
Join Date: Aug 2011
Posts: 18
Default

SO! What I am hearing here is that 96k is pretty much just a resource hog. I kind of figured as much, seeing as 44.1k is about twice the top of our auditory frequency response - what are the chances of us picking up on aliasing in the sampled sound?
foreward is offline   Reply With Quote
Old 08-30-2011, 05:10 PM   #6
phase3
Human being with feelings
 
phase3's Avatar
 
Join Date: Jul 2011
Posts: 36
Default

The Nyquist sampling theorem says that you can accurately represent frequencies up to half your sample rate. Therefore, with a sample rate of 44.1 kHz the highest frequency that can be accurately represented is 22.05 kHz; any frequencies above this will be aliased. That is why 44.1 kHz is used for CDs: human hearing ranges from roughly 20 Hz to 20 kHz, and we can't physically hear frequencies above 20 kHz, although some people argue that recording frequencies above that can enhance the sound.
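
To make that concrete, here is a small numpy sketch (my own, not from anyone in the thread): a 30 kHz tone sampled at 44.1 kHz produces exactly the same sample values as a 14.1 kHz tone, which is what aliasing means in practice.

[CODE]
# Illustrative sketch only: a tone above Nyquist is indistinguishable, once
# sampled, from its mirrored "alias" below Nyquist.
import numpy as np

fs = 44100.0                     # CD sample rate in Hz
n = np.arange(64)                # a few sample indices
f_high = 30000.0                 # tone above the 22050 Hz Nyquist limit
f_alias = fs - f_high            # 14100 Hz, where it folds down to

x_high = np.cos(2 * np.pi * f_high * n / fs)
x_alias = np.cos(2 * np.pi * f_alias * n / fs)

print(np.max(np.abs(x_high - x_alias)))   # ~1e-12: the sampled signals are identical
[/CODE]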
phase3 is offline   Reply With Quote
Old 08-30-2011, 07:04 PM   #7
karbomusic
Human being with feelings
 
karbomusic's Avatar
 
Join Date: May 2009
Posts: 29,269
Default

Based on your actual question and recording needs, I think you would be just fine with 44.1k or 48k. I also think that there comes a point where the minutiae don't matter if you find a rate that works for you. In other words, 48k/24-bit has worked fine for ME for 10 years now and it's what I use. The entire reason the choices exist is so each person can make their own choice to fit their own needs, and needs vary wildly. If we follow that line of thinking, it saves a lot of arguing.
__________________
Music is what feelings sound like.
karbomusic is offline   Reply With Quote
Old 08-30-2011, 09:18 PM   #8
XITE-1/4LIVE
Human being with feelings
 
XITE-1/4LIVE's Avatar
 
Join Date: Nov 2008
Location: Somewhere Between 120 and 150 BPM
Posts: 7,968
Default

Well, I have a hardware DSP synth that is the only 96k-processed synth ever made, and during development those invested in the R&D had a say in whether we wanted the 6 x AD21369 DSP chips to process everything @ 96k or the usual 44.1k.
44.1k would have doubled the polyphony, but we were allowed to actually hear the results and the difference was very noticeable.
Everyone unanimously agreed to go ahead with 96k.

The LFOs, envelopes and audio-rate modulation are stunning.

The best chorus effect I have ever heard, or owned, is on this synth.
It's so precise and transparent that I am routing audio out of my DAW via the AES/EBU I/Os on my XITE-1 DSP rack, back into the synth's digital I/Os just to see if there's an improvement.
So perhaps 96k has to be built into the processing that creates the audio...?

I don't know, I always used my ears and stayed out of the science/mathematical aspects. But my ears don't lie and I have 9,000+ performances under my belt.

I wish I could explain things to the more mathematical guys here in hopes of understanding why a 96k synth can sound so much better than its 44.1k counterparts.

Here are a couple of snippets, just to share presets with fellow users. It's an mp3, so the sound isn't as good as the powered cabinets and sub I gig with, but the difference is really noticeable to anyone who uses analog hardware and DSP-based hardware synths.

http://soundcloud.com/jimmyvee/wormhole
__________________
.
XITE-1/4LIVE is offline   Reply With Quote
Old 08-30-2011, 09:33 PM   #9
hamish
Human being with feelings
 
hamish's Avatar
 
Join Date: Sep 2007
Location: The Reflection Free Zone
Posts: 3,026
Default

Quote:
Originally Posted by XITE-1/4LIVE View Post
So perhaps 96k has to be built into the processing that creates the audio...?
Pretty much... What you are talking about is oversampling for a process, which allows those gritty DSP 'artifacts' to be removed. It has a huge effect: listen to any plugin instrument or FX that has an oversampling selection (e.g. ReaComp at 2x, 4x, 8x, 16x or 32x). Often the oversampling is labelled 'quality'. In the Voxengo 'primary user' manual there are some good hints.

It has been discussed around the forum a few times.

You are wasting disc space and bandwidth to record or mix at over 44.1 imo. Going to 96k only gives a 1 dB increase in dynamic range for a doubling of the transfer and storage. (very rough figures)
hamish is offline   Reply With Quote
Old 08-31-2011, 12:36 AM   #10
Xenakios
Human being with feelings
 
Xenakios's Avatar
 
Join Date: Feb 2007
Location: Oulu, Finland
Posts: 8,062
Default

Quote:
Originally Posted by hamish View Post

You are wasting disc space and bandwidth to record or mix at over 44.1 imo. Going to 96k only gives a 1 dB increase in dynamic range for a doubling of the transfer and storage. (very rough figures)
Whaaat...?
__________________
I am no longer part of the REAPER community. Please don't contact me with any REAPER-related issues.
Xenakios is offline   Reply With Quote
Old 08-31-2011, 12:59 AM   #11
Xenakios
Human being with feelings
 
Xenakios's Avatar
 
Join Date: Feb 2007
Location: Oulu, Finland
Posts: 8,062
Default

Quote:
Originally Posted by XITE-1/4LIVE View Post
So perhaps 96k has to be built into the processing that creates the audio...?
Yes, it's sometimes far more important that some individual processing itself works at a high samplerate internally than that the whole recording and mixing signal chain is at the higher samplerate. Obviously it's not always easy to have just parts of the signal chain at the higher rate and others not. So opting for doing all the recording and mixing at a higher samplerate might be a sensible choice.

Consider this: has your music ever been mixed and released for CD (which is 44.1 kHz)? If yes, did this particular synth you are talking about suddenly start sounding like shit on the CD release?
__________________
I am no longer part of the REAPER community. Please don't contact me with any REAPER-related issues.
Xenakios is offline   Reply With Quote
Old 08-31-2011, 01:41 AM   #12
timlloyd
Human being with feelings
 
Join Date: Mar 2010
Posts: 4,713
Default

Quote:
Originally Posted by XITE-1/4LIVE View Post
I wish I could explain things to the more mathematical guys here in hopes of understanding why a 96k synth can sound so much better than its 44.1k counterparts.
The more mathematical guys here are already aware of this - as Hamish and Xenakios have mentioned, synthesis (of certain types) at high sample rates provides essentially the same benefits as the upsampling (not oversampling) of other non-linear DSP such as distortion and dynamic compression.

I guess I would fall into the category of "the more mathematical guys here" and I disagree with those saying that 44.1kHz is as high as is ever necessary.

It depends. Not all conversion is created equal, and on some units (mainly on the lower end of the spectrum) 44.1kHz actually won't sound "as good" as the higher rates due to inadequately designed anti-alias filters in the a/d circuitry.

In order for Nyquist theory to be realised in practice this aa-filter has to be made in such a way that it produces no artifacts itself in the audible band when operating for a 44.1kHz sr. And it actually turns out that in order to best achieve this, a sample rate of around 60kHz is preferable to 44.1 or 48 (according to the designer of Lavry Engineering and backed up with infallible maths). This has nothing to do with the flawed argument for increasing the capture bandwidth to include frequencies above our upper hearing threshold, it's simply due to physical limitations in real circuits and how best to remediate them.

But nobody makes convertors that operate at 60kHz, and it's unlikely that will change. So what is advisable is for people to properly compare (by that I mean abx) the performance of their equipment in order to determine which sample rate sounds "best" to them and then go with that.

What I often (not always) do is track and mix at 96kHz. This is so that my latency is lower than otherwise (often an important consideration for me), and because the convertors I often use sound better to me at 96kHz than at 44.1, as do some of my plug-ins. It's absolutely not because I don't understand the Nyquist theorem.

I'm actually of the "ilk" that thinks mixing with all audio at a higher sample rate can be preferable to mixing at 44.1 and then using upsampling on lots of individual processing ... but again, it depends on the whole system and no sweeping generalisation is going to be sufficient.

So I agree with Karbo - people need to determine what is best for them, because we're all using slightly different tools for slightly different things.
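
For anyone who wants to see the non-linear-DSP point above rather than take it on faith, here is a rough scipy sketch of my own (not anything from REAPER or any plugin, and the numbers are arbitrary): hard-clip a 5 kHz sine directly at 44.1 kHz, then do the same clip 4x oversampled, and compare the aliased junk that lands at 9.1 kHz (the folded image of the 35 kHz harmonic).

[CODE]
# Rough, assumption-laden sketch: how internal oversampling reduces aliasing
# from a non-linear process (here a crude hard clipper).
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs                      # one second of audio
x = np.sin(2 * np.pi * 5000 * t)            # 5 kHz sine

def clip(sig):
    return np.clip(sig, -0.3, 0.3)          # memoryless non-linearity

y_direct = clip(x)                                         # clipped at the base rate
y_os = resample_poly(clip(resample_poly(x, 4, 1)), 1, 4)   # clipped 4x oversampled

spec_d = np.abs(np.fft.rfft(y_direct))
spec_o = np.abs(np.fft.rfft(y_os))

# 9100 Hz is not a harmonic of 5 kHz; energy there is the alias of the 35 kHz
# harmonic (44100 - 35000). It is far lower in the oversampled version.
print(spec_d[9100], spec_o[9100])
[/CODE]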

Last edited by timlloyd; 08-31-2011 at 02:05 AM.
timlloyd is offline   Reply With Quote
Old 08-31-2011, 06:03 AM   #13
AudioWonderland
Human being with feelings
 
AudioWonderland's Avatar
 
Join Date: Jun 2009
Posts: 729
Default

Quote:
Originally Posted by timlloyd View Post
It depends. Not all conversion is created equal ...

So I agree with Karbo - people need to determine what is best for them, because we're all using slightly different tools for slightly different things.
This... I do 96/24 for these exact reasons.

It's not 1999 anymore. Disk space is cheap and DAWs are more powerful than ever. I record and mix @ 96/24 and have never had a problem with CPU limits. These arguments against the higher sample rate are really moot at this point.
AudioWonderland is offline   Reply With Quote
Old 08-31-2011, 10:29 AM   #14
Mich
Human being with feelings
 
Join Date: May 2009
Posts: 1,265
Default

http://www.lavryengineering.com/lavr...topic.php?t=24
__________________
Quote:
Originally Posted by vBulletin Message
Sorry pipelineaudio is a moderator/admin and you are not allowed to ignore him or her.
Mich is offline   Reply With Quote
Old 08-31-2011, 10:44 AM   #15
Kihoalu
Human being with feelings
 
Join Date: Jan 2007
Location: Silicon Gulch
Posts: 544
Default

Quote:
In order for Nyquist theory to be realised in practice this aa-filter has to be made in such a way that it produces no artifacts itself in the audible band when operating for a 44.1kHz sr. And it actually turns out that in order to best achieve this, a sample rate of around 60kHz is preferable to 44.1 or 48 (according to the designer of Lavry Engineering and backed up with infallible maths).
Quote:
But nobody makes convertors that operate at 60kHz, and it's unlikely that will change. So what is advisable is for people to properly compare (by that I mean abx) the performance of their equipment in order to determine which sample rate sounds "best" to them and then go with that.
The problem with the above analysis is that almost all audio ADCs are now delta-sigma converters. They do not sample at 44.1 kHz anymore but at 8 to 32 times that rate with a lower bit depth, which is then pieced together (downsampled) to get a higher bit depth at a lower sample rate. The Nyquist limit is thereby moved up to one half of this higher sampling rate (around 200 kHz minimum), and a very simple anti-alias filter will perform perfectly well. Delta-sigma converters also greatly reduce the likelihood of non-monotonic conversion at high bit depths (greater than 14 bits or so) and are strongly advantageous for that reason as well. I do not know of an audio ADC that uses base-frequency sampling anymore; that technology seems to be obsolete. So anti-aliasing filter characteristics are no longer very relevant to audio converter performance.
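
A back-of-envelope check of that point, using made-up but plausible targets (80 dB of alias rejection, passband flat to 20 kHz). The scipy usage and numbers are mine and purely illustrative:

[CODE]
# Illustrative only: how steep an *analog* anti-alias filter has to be when the
# converter samples at the base rate vs. heavily oversampled (delta-sigma style).
import numpy as np
from scipy.signal import buttord

wp = 2 * np.pi * 20e3            # protect the audio band up to 20 kHz (rad/s)
gpass, gstop = 1.0, 80.0         # <=1 dB passband ripple, >=80 dB alias rejection

# Base-rate sampling at 44.1 kHz: anything above 24.1 kHz folds back below 20 kHz.
n_base, _ = buttord(wp, 2 * np.pi * 24.1e3, gpass, gstop, analog=True)

# 64x oversampling: the first band that can fold into the audio range starts
# near 64 * 44.1 kHz - 20 kHz, i.e. around 2.8 MHz.
n_ds, _ = buttord(wp, 2 * np.pi * (64 * 44.1e3 - 20e3), gpass, gstop, analog=True)

print(n_base, n_ds)              # dozens of poles vs. a filter of just a few
[/CODE]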

.
__________________
Inundated by a Perfect Storm of Gluten-Free Artisanal Bespoke Quinoa Avocado-Toast Toilet Paper.
Mahope Kakou (Later Dudes)...
Kihoalu is offline   Reply With Quote
Old 08-31-2011, 11:00 AM   #16
XITE-1/4LIVE
Human being with feelings
 
XITE-1/4LIVE's Avatar
 
Join Date: Nov 2008
Location: Somewhere Between 120 and 150 BPM
Posts: 7,968
Default

Quote:
Originally Posted by Xenakios View Post
Consider this: has your music ever been mixed and released for CD (which is 44.1 kHz)? If yes, did this particular synth you are talking about suddenly start sounding like shit on the CD release?
I am checking out a recent mix with drums, vox, bass and guitar, and I can only say it doesn't sound as good as it does when I am onstage using it, but it definitely puts my VSTi and DSP modular synths out of a job now.
__________________
.
XITE-1/4LIVE is offline   Reply With Quote
Old 08-31-2011, 06:16 PM   #17
timlloyd
Human being with feelings
 
Join Date: Mar 2010
Posts: 4,713
Default

Quote:
Originally Posted by Kihoalu View Post
The problem with the above analysis is that almost all audio ADCs are now delta-sigma converters. They do not sample at 44.1Khz anymore but 8 to 32 times that rate with a lower bit depth which is then pieced together (downsampled) to get a higher bit depth at a lower bit rate.
I know but this isn't a "problem" with my analysis of the situation. I've bolded the bit of your post that explains why ... downsampling requires filtering (the process is actually decimation, which consists of lowpass filtering followed by downsampling).
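
For readers following along, a minimal sketch of what that decimation step looks like (my own toy code with invented helper names, not how any real converter is implemented): the quality of the lowpass here is exactly the filter being discussed.

[CODE]
# Toy decimator: lowpass to the new Nyquist, then keep every m-th sample.
import numpy as np
from scipy.signal import firwin, lfilter

def decimate_by(x, m, fs, numtaps=255):
    """Pre-decimation lowpass at fs/(2*m), then downsample by m."""
    h = firwin(numtaps, cutoff=fs / (2 * m), fs=fs)   # this filter's quality matters
    return lfilter(h, [1.0], x)[::m]

fs_os = 64 * 44100                      # a notional 64x-oversampled modulator rate
x = np.random.randn(fs_os // 10)        # stand-in for the modulator output
audio_44k1 = decimate_by(x, 64, fs_os)  # back down to 44.1 kHz
[/CODE]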

Last edited by timlloyd; 08-31-2011 at 06:22 PM.
timlloyd is offline   Reply With Quote
Old 08-31-2011, 06:39 PM   #18
hamish
Human being with feelings
 
hamish's Avatar
 
Join Date: Sep 2007
Location: The Reflection Free Zone
Posts: 3,026
Default

Quote:
Originally Posted by timlloyd View Post
I know but this isn't a "problem" with my analysis of the situation. I've bolded the bit of your post that explains why ... downsampling requires filtering (the process is actually decimation, which consists of lowpass filtering followed by downsampling).
Nice posting, Tim. Although I have read and really like the Lavry paper in question, you have expressed the point well, and I will be careful not to confuse oversampling and upsampling in the future.

I know you get a lot of pleasure from your null-tests and I'd like to discuss some bit-depth things with you some time, but I don't want to get off topic here.

For now, to the other posters, I suggest you either read Lavry or accept Tim's layman's explanation.

http://www.lavryengineering.com/docu...ing_Theory.pdf

I've read it and some of the maths is still beyond me.

(The real problem in some Lavry papers is that he never got a proofreader, crikey!! Sometimes there are two spelling errors per sentence; luckily Sampling Theory is not as bad as a couple of the others.)

Last edited by hamish; 08-31-2011 at 06:44 PM.
hamish is offline   Reply With Quote
Old 08-31-2011, 07:19 PM   #19
hamish
Human being with feelings
 
hamish's Avatar
 
Join Date: Sep 2007
Location: The Reflection Free Zone
Posts: 3,026
Default

I accidentally had a very small project running at 96 kHz the other day. Transcribing a song, one audio track (just sitting in bed relaxing), I was getting a terrible sound with item pitching at 0.7.

Before I noticed the sample rate I tried almost every algorithm in elastique 2, SoundTouch, etc., and it still sounded awful. I restarted REAPER, changed the hardware switch on my USB box back to 44.1, and after that every pitch-shifting algorithm sounded fine.

As has been noted, choose for your needs. I use 44.1 as a lot of the VST I use is optimised for that samplerate.

The use of 48 kHz, AFAIK, seems more a legacy of DAT and studio master clocks than any real difference in sound quality, and as I have said, I don't care how cheap disk space is: sharing bigger files takes time.
hamish is offline   Reply With Quote
Old 08-31-2011, 07:42 PM   #20
Xenakios
Human being with feelings
 
Xenakios's Avatar
 
Join Date: Feb 2007
Location: Oulu, Finland
Posts: 8,062
Default

Quote:
Originally Posted by XITE-1/4LIVE View Post
I can only say it doesn't sound as good as it does when I am onstage using it
Listening with the same loudspeakers and other equipment at stage and otherwise?
__________________
I am no longer part of the REAPER community. Please don't contact me with any REAPER-related issues.
Xenakios is offline   Reply With Quote
Old 09-01-2011, 06:22 AM   #21
slow
Human being with feelings
 
Join Date: Dec 2008
Posts: 347
Default

96 kHz is used by sound designers because you can pitch upper harmonics (those above 20 kHz) down into the audible range by slowing the playback rate. This means you can get a deeper/slower sound whilst still retaining a 20 kHz bandwidth (e.g. after lowering by an octave). If you record at 44.1 kHz and slow down, you end up reducing the bandwidth to less than the usual CD range.
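
If it helps, the arithmetic behind that (my own figures, nothing exotic): slowing playback by an octave halves every frequency, so the usable top end after the shift is the capture Nyquist divided by two.

[CODE]
# Quick illustration of the bandwidth left after an octave slow-down.
for fs in (44100, 96000):
    top_after_octave_down = (fs / 2) / 2      # Nyquist of the capture, halved
    print(f"{fs} Hz capture -> about {top_after_octave_down / 1000:.1f} kHz of top end after an octave down")
# 44100 Hz -> ~11.0 kHz, 96000 Hz -> ~24.0 kHz (i.e. still full audible bandwidth)
[/CODE]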
slow is offline   Reply With Quote
Old 09-01-2011, 07:17 AM   #22
timlloyd
Human being with feelings
 
Join Date: Mar 2010
Posts: 4,713
Default

Another good point
timlloyd is offline   Reply With Quote
Old 09-01-2011, 07:49 AM   #23
Xenakios
Human being with feelings
 
Xenakios's Avatar
 
Join Date: Feb 2007
Location: Oulu, Finland
Posts: 8,062
Default

Quote:
Originally Posted by slow View Post
96 kHz is used by sound designers because you can pitch upper harmonics (those above 20 kHz) down into the audible range by slowing the playback rate. This means you can get a deeper/slower sound whilst still retaining a 20 kHz bandwidth (e.g. after lowering by an octave). If you record at 44.1 kHz and slow down, you end up reducing the bandwidth to less than the usual CD range.
This is one of the few reasons, or perhaps the only reason, I bother to record at 96 kHz when possible.
__________________
I am no longer part of the REAPER community. Please don't contact me with any REAPER-related issues.
Xenakios is offline   Reply With Quote
Old 09-01-2011, 09:20 AM   #24
foreward
Human being with feelings
 
Join Date: Aug 2011
Posts: 18
Default

Quote:
Originally Posted by slow View Post
96 kHz is used by sound designers because you can pitch upper harmonics (those above 20 kHz) down into the audible range by slowing the playback rate. This means you can get a deeper/slower sound whilst still retaining a 20 kHz bandwidth (e.g. after lowering by an octave). If you record at 44.1 kHz and slow down, you end up reducing the bandwidth to less than the usual CD range.
This makes sense for acoustic instruments, but not so much for most synthetic instruments, right?

Though this is interesting. What if there was some way to map frequencies over a curve, so you could move tonal regions around in frequency/frequency space - the way image processors map values?

I don't know much about FFT in general, and even less about tone shifting... maybe I'll look into it and try to get something hacked together in bidule.
foreward is offline   Reply With Quote
Old 09-01-2011, 09:23 AM   #25
timlloyd
Human being with feelings
 
Join Date: Mar 2010
Posts: 4,713
Default

Quote:
Originally Posted by foreward View Post
What if there was some way to map frequencies over a curve, so you could move tonal regions around in frequency/frequency space - the way image processors map values?
Could you elaborate a little? I'm not sure I understand what you're suggesting.
timlloyd is offline   Reply With Quote
Old 09-01-2011, 10:13 AM   #26
foreward
Human being with feelings
 
Join Date: Aug 2011
Posts: 18
Default

Sure. Imagine a curve like this one from photoshop:

http://www.photoshopessentials.com/i...cs3-curves.gif

(Side note: what looks like a spectrograph at the bottom is the histogram. It tells the operator the quantity of pixels at any given value. If an audio spectrograph is in the frequency/value domain, then an image histogram is in the value/quantity domain.)

Here, the X axis represents the input, while the Y axis represents the output. If you change the curve at, say, level 64 and drag it to level 128, then everything which was at level 64 is now at level 128.

By making an S-curve, the highlights become brighter and the shadows become darker:

http://www.photoshopessentials.com/i...s-baseline.gif

In theory, you could map such a curve to individual windows and pitch-shift each according to the curve. Applied to the subsonic and ultrasonic portions of a wideband signal, such tonal compression could happen naturally without significantly affecting other frequencies elsewhere.
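
Purely as a thought experiment, the idea might look something like the very crude sketch below. It is entirely hypothetical: the curve() mapping is invented, it handles a single FFT frame with no overlap-add or phase handling, so it would sound rough.

[CODE]
# Hypothetical "frequency curve" sketch: move each FFT bin's content to the bin
# its frequency maps to under a Photoshop-style transfer curve.
import numpy as np

def curve(f, fs):
    """Invented example mapping: squeeze everything above 15 kHz into 15-20 kHz."""
    return np.where(f < 15000, f, 15000 + (f - 15000) * 5000 / (fs / 2 - 15000))

def remap_frame(frame, fs):
    spec = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    bin_width = fs / len(frame)
    target = np.round(curve(freqs, fs) / bin_width).astype(int)
    out = np.zeros_like(spec)
    np.add.at(out, np.clip(target, 0, len(out) - 1), spec)   # pile bins onto their new homes
    return np.fft.irfft(out, n=len(frame))

fs = 96000
frame = np.random.randn(4096)            # stand-in for one windowed frame of audio
warped = remap_frame(frame, fs)
[/CODE]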

IN OTHER NEWS

I set the sample rate of the Kapling synthesizer (http://www.sineqube.com/1/kapling.html) to 96k. The sound quality did change, and my first impression was that there was more harmonic content. That would fit with what others have been saying.

I will do a little more probing and post some examples.
foreward is offline   Reply With Quote
Old 09-01-2011, 10:27 AM   #27
Kihoalu
Human being with feelings
 
Join Date: Jan 2007
Location: Silicon Gulch
Posts: 544
Default

Quote:
I know but this isn't a "problem" with my analysis of the situation. I've bolded the bit of your post that explains why ... downsampling requires filtering (the process is actually decimation, which consists of lowpass filtering followed by downsampling).
The lowpass filter in question is often simply an accumulator (sometimes other sorts of perfect digital filters are used to control noise-shaping) and does not cause any kind of "reversion" to the downsampled Nyquist rate. The input analog anti-alias filter only needs to remove out-of-band signals above 1/2 of the oversampled input rate (at 200 kHz or higher). This completely eliminates the need to do base sampling at some rate (like Lavry's 60 kHz) in order to accommodate the phase skirt of some hypothetical reasonable analog filter which might roll off starting at 20 kHz. This is where Lavry's analysis is obsolete. Oversampled delta-sigma converters greatly reduce the requirement to have a steep, high-order filter at the input to an ADC.

Here is a short treatise on delta-sigma converters (in this case a so-called one-bit converter, which is the most simplified case), written in an easy-to-digest form:

http://www.beis.de/Elektronik/DeltaSigma/DeltaSigma.html

However, the converters I am familiar with (like the Crystal ones in my Echo Layla) use a larger number of bits (like 6 or 8) before decimation (piecing together). The input anti-alias filters on this Echo unit are simple 2nd-order lowpass filters, which is all that is needed because of the input oversampling.

20 years ago I was making the same kind of statements as Lavry, and in fact my preferred sampling rate would have been around 60 kHz as well. But input oversampling methods have greatly changed the situation.
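
If anyone wants to play with the idea, here is a toy first-order, one-bit modulator along the lines of the simplest case in that article. It is my own sketch with an invented helper name, nothing like a real multi-bit Crystal design:

[CODE]
# Toy first-order delta-sigma modulator: one-bit output, noise pushed upward.
import numpy as np

def delta_sigma_1bit(x):
    """Turn x (values in -1..1) into a +/-1 bitstream whose average tracks x."""
    integrator, prev_out = 0.0, 0.0
    out = np.empty_like(x)
    for i, sample in enumerate(x):
        integrator += sample - prev_out          # accumulate the error vs. last output bit
        prev_out = 1.0 if integrator >= 0 else -1.0
        out[i] = prev_out
    return out

fs_os = 64 * 44100                               # 64x-oversampled one-bit stream
t = np.arange(fs_os // 100) / fs_os
bits = delta_sigma_1bit(0.5 * np.sin(2 * np.pi * 1000 * t))
# Lowpass-filtering/decimating this stream back to 44.1 kHz recovers the 1 kHz
# sine, with the quantisation noise shaped up and out of the audio band.
[/CODE]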

.
__________________
Inundated by a Perfect Storm of Gluten-Free Artisanal Bespoke Quinoa Avocado-Toast Toilet Paper.
Mahope Kakou (Later Dudes)...

Last edited by Kihoalu; 09-01-2011 at 10:39 AM.
Kihoalu is offline   Reply With Quote
Old 09-01-2011, 10:43 AM   #28
karbomusic
Human being with feelings
 
karbomusic's Avatar
 
Join Date: May 2009
Posts: 29,269
Default

Quote:
Originally Posted by foreward View Post
Sure. Imagine a curve like this one from photoshop:

http://www.photoshopessentials.com/i...cs3-curves.gif
Histogram values are more concerned with dynamic range and the mapping/distribution of those bits. The equivalent in audio would be bits, not frequencies. In other words, the darkest value in an image is the lowest volume level in audio, and the lightest representable value in an image is the loudest passage in audio.

This mapping in imaging is needed because computer monitors were/are predominantly 8-bit (even if the image is 16- or even 32-bit, as in HDR). It's all about mapping those values into a range that a monitor or printer/paper can reproduce. In this respect, colors are frequencies, and how light/dark the color is corresponds to bit depth and dynamic range.
__________________
Music is what feelings sound like.

Last edited by karbomusic; 09-01-2011 at 10:51 AM.
karbomusic is offline   Reply With Quote
Old 09-01-2011, 10:48 AM   #29
XITE-1/4LIVE
Human being with feelings
 
XITE-1/4LIVE's Avatar
 
Join Date: Nov 2008
Location: Somewhere Between 120 and 150 BPM
Posts: 7,968
Default

Nice link about delta-sigma.
I now know which process I want on my filter when using 96k audio-rate modulations of the Waldorf oscillators, Ensoniq Transwaves, and Prophet VS wavetables.

Below is one of the six large LCDs for modulating the four filters simultaneously.
__________________
.

Last edited by XITE-1/4LIVE; 09-13-2011 at 06:26 PM.
XITE-1/4LIVE is offline   Reply With Quote
Old 09-01-2011, 11:27 AM   #30
foreward
Human being with feelings
 
Join Date: Aug 2011
Posts: 18
Default

Quote:
Originally Posted by karbomusic View Post
Histogram values are more concerned with dynamic range and the mapping/distribution of those bits. The equivalent in audio would be bits, not frequencies. In other words, the darkest value in an image is the lowest volume level in audio, and the lightest representable value in an image is the loudest passage in audio.

This mapping in imaging is needed because computer monitors were/are predominantly 8-bit (even if the image is 16- or even 32-bit, as in HDR). It's all about mapping those values into a range that a monitor or printer/paper can reproduce. In this respect, colors are frequencies, and how light/dark the color is corresponds to bit depth and dynamic range.
I think you're a bit off target there, but that's probably an entirely different issue. Histograms are simply the quantity of pixels at a specific tonal value. If you took an entire audio file and mapped out the number of times a sample was at a specific volume level, you'd have a histogram - but that's not a particularly useful graph.

Curves are not simply about matching the file to the device. That's how you did it in closed-loop color management, but nowadays it's done behind the scenes using ICC. Curves are more about getting the limited dynamic range of the captured image to appear closer to what we expect the image to be. But that's beside the point.

At any rate, let's say that you have a signal with loads of harmonics ranging from 12 Hz to 40 kHz. How would you get that whole range of sounds to be heard within the audible spectrum? The only way is to compress it into the audible spectrum by bringing the top end down and the low end up in frequency.

I am glad you mentioned HDR, because this is essentially what you're doing. Of course the only time when you'd really need something like that is if you're going to be manipulating hundreds of frequency ranges individually so that each range has enough information to work with within a specific bit depth.

Last edited by foreward; 09-01-2011 at 11:33 AM.
foreward is offline   Reply With Quote
Old 09-01-2011, 11:41 AM   #31
timlloyd
Human being with feelings
 
Join Date: Mar 2010
Posts: 4,713
Default

Quote:
Originally Posted by Kihoalu View Post
The lowpass filter in question is often simply an accumulator (sometimes other sorts of perfect digital filters are used to control noise-shaping) and does not cause any kind of "reversion" to the downsampled Nyquist rate. [...] The input anti-alias filters on this Echo unit are simple 2nd-order lowpass filters, which is all that is needed because of the input oversampling.
I can't give a very confident reply before I've done some more research

However, your comments above about input anti-alias filters are beside the point. It's the filter used before decimation that needs to be stringently designed to prevent artifacts within the passband (below the upper bandlimit of the new sample rate). This is the part that I need to research ...

But I guess what is true is that in my first post I was misconstruing things. Depending on the topology, either anti-alias filters or pre-decimation filters are potential issues.

I'll be back another day

Last edited by timlloyd; 09-01-2011 at 12:24 PM.
timlloyd is offline   Reply With Quote
Old 09-01-2011, 12:55 PM   #32
XITE-1/4LIVE
Human being with feelings
 
XITE-1/4LIVE's Avatar
 
Join Date: Nov 2008
Location: Somewhere Between 120 and 150 BPM
Posts: 7,968
Default

Here's my question.
I use a DSP rack as my soundcard. It can do 96k at 1.3 ms / 64 samples, ASIO duplexed. I guess that's pretty good, but should I set up REAPER for 192k / 64-bit float and the DSP rack at 96k, and then record...?
This 96k synth is so fat and pretty that I want to make the best possible recording.
__________________
.

Last edited by XITE-1/4LIVE; 09-13-2011 at 06:26 PM.
XITE-1/4LIVE is offline   Reply With Quote
Old 09-01-2011, 02:36 PM   #33
pixeltarian
Human being with feelings
 
pixeltarian's Avatar
 
Join Date: Oct 2008
Location: Minneaplis
Posts: 3,317
Default

88.2 or GTFO
pixeltarian is offline   Reply With Quote
Old 09-01-2011, 07:29 PM   #34
karbomusic
Human being with feelings
 
karbomusic's Avatar
 
Join Date: May 2009
Posts: 29,269
Default

Quote:
Originally Posted by foreward View Post
I think you're a bit off target there, but that's probably an entirely different issue. Histograms are simply the quantity of pixels at a specific tonal value. If you took an entire audio file and mapped out the number of times a sample was at a specific volume level, you'd have a histogram - but that's not a particularly useful graph.
We are essentially saying the same thing. I used mapping to help clarify the difference between bits and frequencies, only because this is a sample-rate thread (frequencies), not a bit-depth thread (dynamic range). Curves etc. deal with brightness and its distribution (volume/bits etc.). I agree it can be very semantically confusing, since there are many parallels and some things that are not at all parallel.

Quote:
Curves are not simply about matching the file to the device.
That's how you did it in closed-loop color management, but nowadays it's done behind the scenes using ICC. Curves are more about getting the limited dynamic range of the captured image to appear closer to what we expect the image to be. But that's beside the point.
Correct, but frequency is to color what frequency is to pitch. Curves deal with brightness of pixels and their distribution, not their frequency/wavelength/color. The main exception I can think of would be operating on a single color channel but that is still only adjusting brightness and distribution of that channel. Probably not that much different than making a single harmonic of a complex sound louder in proportion to the other harmonic content but not changing its pitch per se.

Quote:
At any rate, let's say that you have a signal with loads of harmonics ranging from 12 Hz to 40 kHz. How would you get that whole range of sounds to be heard within the audible spectrum? The only way is to compress it into the audible spectrum by bringing the top end down and the low end up in frequency.
Agreed, but I don't think there is a true parallel, due to what I explained above. The real problem is the completely different meaning of "tonal" in photography vs. audio. In photography, tonal means brightness, detail and gradients, not frequency; in audio, tonal means harmonics and frequency (generally, anyway).

Quote:
I am glad you mentioned HDR, because this is essentially what you're doing. Of course the only time when you'd really need something like that is if you're going to be manipulating hundreds of frequency ranges individually so that each range has enough information to work with within a specific bit depth.
Yes, but again notice the term bit depth: that is not frequency, it's the number of distinct brightness or volume values. I totally get what you are saying; I'm just mentioning that there is no such thing as compressing frequency per se, or rather I don't know what it would be called other than changing pitch or resampling. Maybe someone else can chime in.

HDR really is a good example, because with a 32-bit HDR image you have to "compress" the brightness (volume) values into a range the monitor (or printer) can display without changing the frequency (color). The exact same analogy in audio would be compressing a very dynamic instrument so that it fits within the dynamic range of the medium you record it to, changing the dynamic range but not the frequency (pitch). The other thing that makes this difficult to discuss is that one is based on time and the other isn't.

Thus, I'm not quite sure how someone would "compress" pitch or a distribution of frequencies. That doesn't mean it isn't a very cool thought, however.
__________________
Music is what feelings sound like.

Last edited by karbomusic; 09-01-2011 at 07:43 PM.
karbomusic is offline   Reply With Quote
Old 09-01-2011, 08:40 PM   #35
foreward
Human being with feelings
 
Join Date: Aug 2011
Posts: 18
Default

Well, of course you have to compare apples to apples. Curves can "mean" anything depending on the domain. I'm new to music, but I have a lot of experience in imaging, and for me anyway there are a lot of similarities - especially when you start thinking about theoretical color spaces, HSL in particular, as opposed to physical ones like RGB.

You could make a curve in the volume/volume domain, which is what a compressor or limiter does. You could make a curve in the volume/frequency domain, which is what a filter does. I've seen Photoshop-like filters that handle both. I want to say it was in Audacity, but it's been a while.

Theoretically you could make a curve in the frequency/frequency domain by isolating individual tonal ranges with FFT analysis and then shifting their pitch. In synthesis, the advantage is that you'd have more bandwidth to manipulate, which might lend itself to better rendering. In processing, you could adjust the frequency of important or interesting qualities that might be at the edges of the audible spectrum without affecting others that aren't.

But the only realistic way of doing this would be to work at higher sample rates. Of course, all of this would be very processor intensive.
foreward is offline   Reply With Quote
Old 09-02-2011, 09:24 PM   #36
Kenny
Human being with feelings
 
Kenny's Avatar
 
Join Date: Nov 2010
Location: Central PA
Posts: 598
Default

Wow, this thread got way over my head fast. All great stuff though!
Just an anecdotal contribution: a good friend of mine owns a high-end studio, and when I asked him what sample rate they use, he said 44.1 and that it was pretty much pointless to go higher unless tracking classical music (paraphrased). I still like 48 kHz for my work, and I believe the origin of that rate is actually syncing to film.

Ultimately it's about the music being recorded, right? Good song, good performance - track it at 32 kHz and it'll still be a good song and good performance.
Kenny is offline   Reply With Quote
Old 09-02-2011, 10:19 PM   #37
DuraMorte
Human being with feelings
 
Join Date: Jun 2010
Location: In your compressor, making coffee.
Posts: 1,165
Default

Quote:
Originally Posted by Kenny View Post
it'll still be a good song and good performance.
Sure... With no frequencies above 16kHz represented in the recording.
__________________
To a man with a hammer, every problem looks like a nail. - yep
There are various ways to skin a cat :D - EvilDragon
DuraMorte is offline   Reply With Quote
Old 09-02-2011, 11:32 PM   #38
Avatar44
Human being with feelings
 
Join Date: Feb 2008
Location: Moon
Posts: 112
Default

For those working at higher than 44.1 kHz (like 48, 88.2, 96 or 192), keep in mind that if you are using samplers (like Kontakt), you are in fact converting samples from patches stored at 44.1 kHz to whatever rate you choose, on the fly, with the DAW's conventional SRC. So, in the desperate search for the big plus, you end up with a big minus IMHO. This is not the case if you are using just synths. As you can see, it matters which SRC you choose for resampling the samples if you decide to work above 44.1 kHz and you are an audio maniac trying to save your talent with the big numbers.

For me it is simple: a good mix is always my only goal, and a great mix, believe me or not, can be achieved at 44.1 kHz, that's for sure.
Avatar44 is offline   Reply With Quote
Old 09-03-2011, 04:08 PM   #39
hamish
Human being with feelings
 
hamish's Avatar
 
Join Date: Sep 2007
Location: The Reflection Free Zone
Posts: 3,026
Default Dither at high sample rate

Quote:
Originally Posted by Xenakios View Post
Whaaat...?
Sorry for my fuzzy figures before; in fact I was thinking of a related topic: dithering at high sample rates.

This is another valid reason for using high sample rate in a REAPER project.

From Katz, 'Mastering Audio', 2nd edition, p. 63 (relates to dithering to 16-bit 44.1 kHz):

'.. 16 bits at 96 kHz is 3.4 dB quieter than 44.1 .. Noise-shaping at high sample rates can allow shorter wordlenth files with very low psychoacoustic noise floor ..'

' .. in fact 16-bit noise-shaped dither at 96 kHz can sound as good as 24-bit/44.1 .. '
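
For what it's worth, the 3.4 dB figure presumably falls out of simple arithmetic (my calculation, ignoring noise shaping): flat dither/quantisation noise has a fixed total power, so spreading it over a wider Nyquist band leaves proportionally less of it in the 0-20 kHz region we actually hear.

[CODE]
# Share of the (flat) quantisation noise that stays in-band scales with 1/Nyquist,
# so raising the sample rate lowers the audible noise floor by:
import math
print(10 * math.log10(96000 / 44100))   # ~3.38 dB, matching Katz's 3.4 dB
[/CODE]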
hamish is offline   Reply With Quote
Old 09-03-2011, 04:55 PM   #40
captain caveman
Human being with feelings
 
Join Date: Feb 2008
Posts: 1,616
Default

Quote:
Originally Posted by timlloyd View Post
in order to best achieve this, a sample rate of around 60kHz is preferable to 44.1 or 48 (according to the designer of Lavry Engineering and backed up with infallible maths). This has nothing to do with the flawed argument for increasing the capture bandwidth to include frequencies above our upper hearing threshold, it's simply due to physical limitations in real circuits and how best to remediate them.

But nobody makes convertors that operate at 60kHz....
If I could just pick up on two points there. What I gathered from Dan Lavry's posts on the subject is that 60 kHz (ish) is preferable not only to 44.1 kHz and 48 kHz, but also to 88.2 kHz, 96 kHz and above.

The second point is that REAPER supports any sample rate the attached hardware supports, and the Fireface supports 64 kHz. Maybe others do too, I don't know. What I do know is that some plugins expect the run-of-the-mill sample rates (B4/Pro 53 being two of them, IIRC).
captain caveman is offline   Reply With Quote