Dither question for PROS using ProTools

chonchball

Aug 3, 2008
Detroit, MI
www.37studios.com
What is the correct way to dither a ProTools bounce?

I normally record at 48k/24-bit, then on the master fader I just use the stock Digidesign dither plug at the end of the chain in 16-bit mode and bounce to 44.1k/16-bit.

Is this correct? I'm trying to learn all I can about the proper way to do conversions, but to my EARS I cannot hear a difference at this stage, although I do understand the importance of doing it the RIGHT way. I just want to make sure I've got the procedure down. Thanks!
 
It does do it automatically, but from what I understand, letting it happen automatically is kinda like having an automatic transmission instead of a stick shift. You achieve the same end result in the shift from, let's say, gear 2 down to gear 1, but when you make sure to do it manually the proper way, it takes care of all the little things.

I know that's a pretty piss-poor analogy, but that's why I'm askin' the Q, cuz I still don't FULLY understand the process and correct procedure either :)
 

While I'm sorry to be that guy, I think it has to be said: there is a great deal of debate about the effects of dithering and whether it actually does anything audible or useful.

Just wanted to point that out... here's a small rundown of what Ethan from RealTraps feels:

The importance of dither is one of the Big Lies among many lies in audio. There are two different recent threads about this - one at Gearslutz in the Mastering section, and another at Lynn Fuston's 3dB forum (main section). In both cases I proved - to my satisfaction! - that dither is inaudible in all cases, no matter what source material you have. And this makes perfect sense once you think about it.

The effect of dither is down at the noise floor of 16-bit material, meaning it's 90+ dB below the music and also masked by the music. Most recordings have a noise floor at least 20 dB higher than that due to room tone, guitar amp hiss, and so forth, combined from all the tracks in the mix.

I summarize the main issues in this article:

www.ethanwiner.com/dither.html

You can download a file where I turned dither on and off several times during the course of a mix, including right in the middle of a soft clean guitar part. Anyone who thinks they can hear dither is welcome to listen to that file and tell me where the dither is and where it's not. I have $100 that says you're wrong.

--Ethan
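
Just to put a number on "down at the noise floor": here's a rough numpy sketch (my own back-of-the-envelope check, not anything from Ethan's article) that measures the RMS level of one LSB of TPDF dither against 16-bit full scale.

Code:
import numpy as np

# One LSB at 16-bit, with full scale normalized to +/- 1.0
lsb = 1.0 / 32768.0

# TPDF dither: the sum of two independent uniform noises of +/- 0.5 LSB each,
# giving a triangular distribution that spans +/- 1 LSB
n = 1_000_000
tpdf = (np.random.uniform(-0.5, 0.5, n) + np.random.uniform(-0.5, 0.5, n)) * lsb

rms = np.sqrt(np.mean(tpdf ** 2))
print("TPDF dither RMS: %.1f dBFS" % (20 * np.log10(rms)))
# Prints roughly -98 dBFS, consistent with the "90+ dB below the music"
# figure for material that sits anywhere near full scale.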
 
Ethan, thank you for your perspective. It has been an ongoing worry of mine whether or not I have been doing it "the right way," and hearing that there is still a debate about it half eases my mind and half raises other questions, I suppose.

Let's say you have a digital recording at 96k and you're going to print it at 44.1k. Essentially you're removing more than half of the information that makes the sound what it is, which makes me wonder WHY we even have different bit rates if it really is THAT inaudible. My main concern is that when you pack in as much as you can during each second of audio and then REMOVE it, I feel there is still some kind of destructive quality to that, even if it is a necessary evil.

One can DEFINITELY hear conversions from a high-quality WAV to a lower-bitrate MP3, since lots of audio is being sacrificed to make the file smaller, but if the dither process within downsampling from 48k to 44.1k is truly inaudible to the human ear, I suppose I can see that.

Kinda like how the difference between 720p and 1080p is only visible to someone with perfect eyesight on a screen larger than 60 inches, maybe?
 

Hey Chonchball:

Whether or not dither is inaudible is of course up for debate. However, I notice above that you have some things fundamentally mixed up. Dither applies to bit depth. Bit rate technically applies to program file compression, such as creating MP3s. Sampling rate, such as 44.1 kHz, has to do with frequency and our hearing range.

Downsampling does have an impact on our signal. Different SR converters produce different results: some are "cleaner" than others, while the others introduce inharmonic or alias frequencies that are considered distortions. Further, when downsampling you are not getting rid of audible information. Human hearing only goes up to 20 kHz, and that's questionable for some people :). But you do get rid of the content above the audible range, including distortion products from non-linear processes such as compression, which would otherwise alias and fold back onto the audible range. For this reason, a good SR converter won't introduce as much "garbage" into the audible range.

The dithering process is supposed to use the principle of masking, with continuous shaped noise covering up the distortions created by truncating, i.e. shrinking the dynamic range of the signal. Again, whether or not this is significant is debatable, as people like Ethan Winer have stated. My perspective is still iffy; I still question fades, where the entire signal is lowered into the lowest bits.
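
For what it's worth, the truncation-versus-dither part is easy to demonstrate outside of any DAW. A rough numpy sketch (my own illustration, not any plug-in's algorithm): quantize a very quiet sine once by plain truncation and once with TPDF dither, and look at where the error energy ends up.

Code:
import numpy as np

fs = 44100
t = np.arange(fs) / fs
# A very quiet 1 kHz sine, around -78 dBFS, living in the lowest few 16-bit steps
x = 0.000125 * np.sin(2 * np.pi * 1000 * t)

scale = 32767.0  # 16-bit full scale

# Plain truncation: the error is correlated with the signal -> harmonic distortion
truncated = np.floor(x * scale) / scale

# TPDF dither before quantizing: the error becomes uncorrelated, benign noise
tpdf = np.random.uniform(-0.5, 0.5, x.size) + np.random.uniform(-0.5, 0.5, x.size)
dithered = np.floor(x * scale + tpdf + 0.5) / scale

for name, y in (("truncated", truncated), ("dithered", dithered)):
    err_spectrum = np.abs(np.fft.rfft(y - x))
    peak_bin = np.argmax(err_spectrum[1:]) + 1
    print(name, "loudest error component near", peak_bin * fs // x.size, "Hz")
# The truncated version piles its error onto exact multiples of 1 kHz;
# the dithered version spreads it out as featureless low-level hiss.

That, at least, is the theory; whether the difference matters under a real mix at 16-bit is exactly the debate above.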
 
See, I'm totally with ya on all that! That's where my question really comes into play, then. If I am making an impact on my recordings as they exist in the digital domain, what precautions and measures should I take to properly account for some of this technical stuff that I'm not as familiar with (but would LIKE to be!)?

Also, good call on catching my mix-up of bit depth and bit rate. I think I've never actually fully understood the fundamental difference between the two. Any further explanation is extremely appreciated. I'm trying to wrap my head around the things that most people kinda take for granted, ya know?
 

Well, at the end of the day, when it comes to applying the technical stuff, your ears should tell you everything. However, since there are technical elements at hand, one should learn when to apply them, how to apply them, and what their effect is.

Bit depth refers to your resolution, the dynamic range of your signal, i.e. how many bits you have to represent the amplitude of the signal. Bit rate refers to data transfer, as measured for communication over the internet. Without getting too heavy into details, the bit rate of a file indicates how the program material was compressed to save space, hence MP3 compression creating smaller files for quicker transfer or streaming purposes.
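
To put some rough numbers on that distinction (my own arithmetic, nothing Pro Tools specific):

Code:
# Bit depth -> available dynamic range, roughly 6 dB per bit
for bits in (16, 24):
    print("%d-bit: about %.0f dB of dynamic range" % (bits, 6.02 * bits))

# Bit rate -> how much data per second the file carries
sample_rate, bit_depth, channels = 44100, 16, 2
pcm_kbps = sample_rate * bit_depth * channels / 1000.0
print("Uncompressed CD-quality PCM: about %.0f kbps" % pcm_kbps)  # ~1411 kbps
print("A 192 kbps MP3 keeps roughly 1/7 as much data per second")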
 
I'm not going to stick my neck out and say dither is an audible thing, because by nature it's supposed to be as transparent as possible (especially when you get into noise-shaping types). But dither is a necessary evil when changing bit depths, for no other reason than that the alternative when converting 24-bit audio to 16-bit is truncation (which can be audibly destructive).
The Red Book CD standard dictates that audio has to be 16-bit; we record initially at 24-bit so we have higher-resolution audio to process. The idea behind this is that every time you process digital audio it gets degraded slightly, so you start off with more resolution than you're going to need in the end. When changing bit depths, noise gets added to the signal so that the wanted sound isn't simply chopped off along with the lowest 48 dB of the signal (6 dB of dynamic range per bit, times the 8 bits you're dropping), which is what gets lost by converting to 16-bit.

I think there's really only one paramount rule in dithering: it's got to be the LAST thing that happens. If you've still got any processing to do (and that includes mastering), don't dither yet. Whether you put the dither on the master fader, use it as an insert, or do it by bouncing out as 16-bit, the results will be the same, but the automatic/manual analogy used earlier is true.
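
A minimal sketch of that rule in code form, just to make the ordering concrete. The eq and limiter functions here are made-up placeholders, not real plug-ins; the point is only that everything stays at high resolution until the very last call.

Code:
import numpy as np

def to_16_bit_with_dither(x):
    # Final step only: add ~1 LSB of TPDF dither, then quantize to 16-bit integers
    tpdf = np.random.uniform(-0.5, 0.5, x.size) + np.random.uniform(-0.5, 0.5, x.size)
    return np.clip(np.round(x * 32767.0 + tpdf), -32768, 32767).astype(np.int16)

def bounce(mix_float, eq, limiter):
    y = eq(mix_float)                  # all processing happens at full resolution...
    y = limiter(y)
    return to_16_bit_with_dither(y)    # ...and dither/quantize happens once, last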

By the by, why are you tracking at 48k?
 
I track at 48 because it's a step up from 44.1, so like you said, you want to cram in more before it gets crunched down, and my comp can still run a decent amount of plugins before bogging down, which is not the case at much higher rates (for my specific setup anyway).

I'm on a G5 with 3 GB of RAM, PT HD 7.4 and a Digi 192 I/O, so the AD/DA conversion sounds real good to my ears at that sample rate. I didn't notice a real difference at anything higher, since I'm recording mostly gritty stuff anyway.

My next question (and I do always put my dither plug in the last slot of the master buss): I fear that some of the limiters that have built-in dither may be playing a role here that I don't want. Let's say, for example:

The session is 48k/24-bit, and I'm bouncing to 44.1k/16-bit. The limiter has a dithering option, but I still have a few more plugs in the chain before the end. Do I leave the limiter's dither at the same specs as the session and wait to let that final dither plug do the work, or do I match the two dithers to what I want my final result to be?
 
Why not dither before mastering? You should dither whenever you lower the bit depth. Isn't it better to randomize quantization error with dither than to master a track that has harmonic distortion caused by truncation?

This is supposing you had to master at 24-bit, and of course you should then dither down to 16-bit when you're done.
 
In certain scenarios I'm just using the mix AS the master. I try not to second-step master my own stuff, but if a small client's budget doesn't allow me to send it out to the mastering house, I just pump the mix til I think it's ready, bounce it to 16-bit stereo and call it a day. So I just want to make sure I'm covering all the bases when I have to do that step on my own.
 
The difference between 44.1k & 48k is pretty much negligible; 48k is used almost exclusively for film. If you're using a 192 I/O and you want better quality, go for the 88.2k or 176.4k options. If you're worried about plug-in counts, try rendering things instead of using inserts to save processing power.

If your limiter has a built-in dither, turn it off. Dither rule #2 (the one I neglected in the last post): dither only once. Whatever is last in your signal chain should be the only thing doing the dithering. Remember, dither is noise; you only want to add as much as you need to achieve the bit depth reduction.

The reason not to dither before mastering is the same as the reason not to record at 16-bit to start with: you keep more resolution than you need for the processing that's going to happen. 24-bit bounces are better for mastering than 16-bit, and if you use 24 bits there'll be no truncation anyway. The dithering should happen at the end of the mastering chain.
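
Just to put a number on the "only once" rule, here's a rough check (my own sketch, not anything from a Digidesign manual) of how the noise floor creeps up if a limiter dithers and then the plug at the end of the chain dithers again:

Code:
import numpy as np

def dither_to_16(x):
    tpdf = np.random.uniform(-0.5, 0.5, x.size) + np.random.uniform(-0.5, 0.5, x.size)
    return np.round(x * 32767.0 + tpdf) / 32767.0

silence = np.zeros(1_000_000)        # silence makes the added noise easy to measure
once = dither_to_16(silence)
twice = dither_to_16(once)           # e.g. the limiter's dither AND the final plug

for name, y in (("dithered once", once), ("dithered twice", twice)):
    rms = np.sqrt(np.mean(y ** 2))
    print("%s: noise floor around %.1f dBFS" % (name, 20 * np.log10(rms)))
# The second pass stacks its own noise on top of the first, a few dB higher floor.
# Inaudible either way, maybe, but there's no reason to add it twice.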
 
Why not dither before mastering? You should dither whenever you lower the bit depth. Isn't it better to randomize quantization error with dither than to master a track that has harmonic distortion caused by truncation?

And you can hear this apparent Harmonic Distortion caused by Truncation?
 
The reason not to dither before mastering is the same as the reason not to record at 16-bit to start with: you keep more resolution than you need for the processing that's going to happen. 24-bit bounces are better for mastering than 16-bit, and if you use 24 bits there'll be no truncation anyway. The dithering should happen at the end of the mastering chain.
There will be truncation, actually. Don't DAWs work in 32-bit and even 64-bit float?

And you can hear this apparent Harmonic Distortion caused by Truncation?

I never said that. I also haven't TRIED to. (I'm not one of those people who claims to have golden ears, by the way. In fact, I'm the opposite. I sometimes worry that my ears suck since I don't always hear differences that some people like to point out, and then I realize most people are just fooling themselves. I often think, and point out, that low-level gear sounds just as good as really expensive gear. Technology has come a long way.)
Regardless, what is your point? Harmonic distortions might even be accentuated by mastering.

Also, aside from whether or not one can actually hear the harmonic distortion, you shouldn't be able to hear dither anyway, as it's at an incredibly low level. However, if dither didn't help anything, it wouldn't have been invented. After all, dithering/noise shaping is used in converters to get the dynamic range up toward 24-bit.

Besides, why not be safe instead of sorry? You can't hear the small amount of noise that dither adds, especially in highly noisy/distorted/loud metal. So why not add dither? It WILL randomize the harmonic distortion that occurs from truncation, whether you could hear that distortion in the first place or not.
 
Pro Tools LE does work in 32-bit floating point, but HD systems run 24-bit fixed point on individual tracks and 48-bit at the summing busses. So for his purposes there'll be no truncation if he exports at 24-bit. As for other DAWs... well, I don't really care for them.

I've never given a great deal of thought to 32-bit floating point, though, so you raise a good point. As far as I know, 32-bit floating point gets used on native systems to improve headroom, since they often don't have enough processing power to run double-precision filtering. I'm wondering if this is only on summing busses or on individual tracks too. Melodeath, you've just ruined my weekend.
 
Also, I am kind of likening this to how, when you first start recording/mixing, you can't tell the difference between 500 Hz and 750 Hz, or hear something mildly phasing, because you haven't trained yourself to listen FOR it. Sure, there's a point where you can fool yourself into hearing something that isn't there, BUT that's where training your ear and your judgment comes into play. I want to train my ear to listen for what may be happening, if there really is some kind of "trust your ears" element to dither, cuz right now I really don't know HOW to listen for it. If it's so subtle as to sit in the last 6 dB down at the noise floor (if I understand that correctly), then obviously it will be hard to hear during a double-kick blast beat with a shred solo over top. BUT during the fade-out on that song, or during some really ambient, dynamic vocal break, I bet it would be pretty damn obvious, not only to someone listening FOR it, but to the people who can tell when something sounds harmonically inappropriate yet have no idea how to put into words what sounds "off" to them.
 
Pro Tools LE does work in 32-bit floating point, but HD systems run 24-bit fixed point on individual tracks and 48-bit at the summing busses. So for his purposes there'll be no truncation if he exports at 24-bit. As for other DAWs... well, I don't really care for them.

I've never given a great deal of thought to 32-bit floating point, though, so you raise a good point. As far as I know, 32-bit floating point gets used on native systems to improve headroom, since they often don't have enough processing power to run double-precision filtering. I'm wondering if this is only on summing busses or on individual tracks too. Melodeath, you've just ruined my weekend.
Ah, I didn't know that Pro Tools was different. I'm fairly certain that in Sonar everything is run at 32-bit, and, when you have the double-precision engine engaged, at 64-bit if the plugins can take it. You can have a track in the red as much as you like; as long as it's routed to the master and you pull the master fader down, there's no clipping on the output.

Is that aspect the same in Pro Tools?
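
I can't speak to what the HD mixer does internally, but the floating-point behaviour described here is easy to show outside any DAW. A plain numpy sketch (not Pro Tools or Sonar code, just the same idea in miniature):

Code:
import numpy as np

fs = 48000
sine = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)

hot_track = sine * 4.0     # "in the red": peaks about 12 dB over full scale
master_fader = 0.2         # pull the master down before the output stage

# In floating point nothing was lost above 1.0, so this comes out clean:
float_mix = np.clip(hot_track * master_fader, -1.0, 1.0)

# If the overs had been clipped at a fixed-point stage first, the damage is permanent:
clipped_first = np.clip(hot_track, -1.0, 1.0) * master_fader

print("float mix peak:", float_mix.max(), "clipped-first peak:", clipped_first.max())
# float_mix is still a pure sine; clipped_first is a squared-off wave at a lower level.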

...BUT during the fade-out on that song, or during some really ambient, dynamic vocal break, I bet it would be pretty damn obvious, not only to someone listening FOR it, but to the people who can tell when something sounds harmonically inappropriate yet have no idea how to put into words what sounds "off" to them.

Well, there's nothing "harmonically off" about dither. It's noise that's added to prevent harmonic distortions.
Furthermore, you might get an idea of "what to listen for" in something that wasn't dithered by using a lo-fi plugin that lowers bit depth. Just lower the bit depth a lot and you'll hear all sorts of stuff.
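
If you want something you can actually listen to, here's a rough sketch along those lines (my own, using scipy only to write the WAV files): a quiet tone fading to nothing, crushed to 8 bits with and without dither, so the artifacts are exaggerated enough to hear clearly.

Code:
import numpy as np
from scipy.io import wavfile

fs = 44100
t = np.arange(fs * 5) / fs
# A 440 Hz tone fading out to silence over five seconds
x = np.sin(2 * np.pi * 440 * t) * np.linspace(0.5, 0.0, t.size)

levels = 128.0   # crush to 8-bit so the artifacts are easy to hear
truncated = np.floor(x * levels) / levels
tpdf = np.random.uniform(-0.5, 0.5, t.size) + np.random.uniform(-0.5, 0.5, t.size)
dithered = np.floor(x * levels + tpdf + 0.5) / levels

wavfile.write("fade_truncated.wav", fs, (truncated * 32767).astype(np.int16))
wavfile.write("fade_dithered.wav", fs, (dithered * 32767).astype(np.int16))
# The truncated fade turns crunchy and "gated" near the end;
# the dithered one just sinks into a smooth bed of hiss.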