44.1 kHz or 192 kHz?! Read & Comment

You sure Colin tracks at 44.1? Just haven't heard that before... I know I've heard Andy and James say it, but haven't read anything about Colin.

There are some advantages to recording at higher sample rates, though, other than a more accurate representation of the analog signal.

The source-quality rule doesn't buy you as much as the difference bit depth makes, but recording at a higher sample rate and then using good SRC can improve the sonics of your audio. The biggest differences from recording at higher sample rates are reduced midband distortion from plug-ins/signal processing and the more relaxed anti-aliasing filters that can be used.
 
Alright kids...here's some ACTUAL information about sample rates and bit depths. Make your decisions about what you want to use:

In digital recording, the sample rate dictates the number of "snapshots" of audio that are taken every second. 44.1kHz = 44,100 samples per second. While this is important (and would imply that higher sample rates are better), there are things to consider... First and foremost, the higher the sample rate, the more memory it's going to take up. More info, bigger file. Second (while this next statement is a separate debate all in itself), there's the consideration that when converting down from a higher sample rate to 44.1k, your computer has to do a very difficult equation.

Let's say you're converting from 48k to 44.1k. This means your computer has to divide everything by 1.0884354... to get you to 44.1k. That's a messy equation. Some say if you use 88.2k, you're doing the easiest equation (a simple division by 2) and therefore not losing any information when your computer is doing the math.
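Here's a quick Python sketch of that ratio math, using exact fractions so you can see why 88.2k divides cleanly into 44.1k and 48k doesn't:

```python
from fractions import Fraction

# Exact conversion ratios from common source rates down to 44.1 kHz.
# 88.2k -> 44.1k is a clean 1/2; 48k -> 44.1k is the awkward 147/160
# (i.e. dividing by roughly 1.0884354).
for source_rate in (48_000, 88_200, 96_000, 192_000):
    ratio = Fraction(44_100, source_rate)
    print(f"{source_rate} Hz -> 44100 Hz: x {ratio} "
          f"(divide by {source_rate / 44_100:.7f})")
```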

Finally, digital audio has this funny thing called the Nyquist Theorem. It basically states that the audio bandwidth of a sampled signal is restricted to half of the sampling rate; go above that and you get aliasing (unmusical tones created in incorrectly digitized material). The higher the sample rate, the higher the frequency where this begins happening. Since we can only hear up to 20k, 44.1k should technically be enough, since its Nyquist frequency sits at 22.05k.
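To make the Nyquist point concrete, here's a small Python sketch (the 30 kHz tone is just a made-up example) showing where a frequency above half the sample rate folds back to:

```python
fs = 44_100          # sample rate in Hz
nyquist = fs / 2     # 22,050 Hz: the highest frequency 44.1k can represent

# A tone above Nyquist that reaches the converter without an anti-aliasing
# filter folds back into the audible band at |f - fs * round(f / fs)|.
f_tone = 30_000
alias = abs(f_tone - fs * round(f_tone / fs))
print(f"Nyquist limit: {nyquist:.0f} Hz; a {f_tone} Hz tone aliases to {alias} Hz")
```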

Now, on to bit depth! Contrary to popular belief, the only thing bit depth changes is your dynamic range. In 16-bit there are 65,536 possible levels, for a maximum dynamic range of about 96dB. In 24-bit there are 16,777,216 levels, for a maximum of about 144dB.
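Those level and dynamic-range numbers fall straight out of the bit count; a quick Python check:

```python
import math

# Quantization levels are 2**bits; the usual "dynamic range" figure is
# roughly 6.02 dB per bit (20 * log10(2)).
for bits in (16, 24):
    levels = 2 ** bits
    dyn_range_db = 20 * math.log10(2) * bits
    print(f"{bits}-bit: {levels:,} levels, ~{dyn_range_db:.0f} dB dynamic range")
```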

So there you go. Now you guys at least know the technical aspect of this whole thing. Perhaps now you can make a more educated decision and not rely on "Andy Sneap uses..."
 

thanks for that info, it's actually what has been said before a couple of times, but you supplied some more numbers....
you're absolutely right though.

and welcome to this forum!

it gets truncated to 44.1 anyway (at least on audio cd), and other than with wordlength you don't gain anything by "dithering" the sample rate, it just gets truncated. that would mean you'd be mixing in a different sound environment than the one you'll be listening in later...
for example, you'd perhaps not boost the highs on the vox because, due to the higher sample rate, they're crispier, but when truncated down to 44.1 they might seem a bit muffled?
other than that, i've often heard that if you're using a higher sample rate at all you should use double, 4x the sample rate, etc.
cutting down from 96 to 44.1 sounds worse than from 88.2 to 44.1...
i'm not sure about that, since i've never tried it, but many people say it's best to record and stay in the sample rate you're going to use for the final product, which is 44.1 for audio cd

yes, dvd is 48k.

but again:
48k, 192k or whatever might sound better directly compared to 44.1, but once you truncate 192 down to 44.1 you won't keep ANY advantage from that previously higher sample rate!!

it's not the same as dithering 24-bit down to 16, where you still keep an improved dynamic range and stuff.
a 192k file truncated to 44.1 will definitely NOT sound better than a 44.1 recording!!
so save your disc space unless you're producing for DVD or SACD...

(let me assure you, there's no equivalent of bit-depth dithering for sample rates...
converting to 44.1 means nothing but cutting (truncating/mutilating) it down to 44.1. period.)
 
Here's more from an article on Bob Katz's site:

"...Get ready for high-resolution release media, like DVD, by following this source-quality rule. Prepare for DVD (and make better CDs in the process), by making your masters now with longer wordlength storage and processing, and if possible, high sample rates. The 96 kHz/24 bit medium has even more analog-like qualities, greater warmth, depth, transparency, and apparent sonic ease than 44.1 kHz. Perhaps it's due to the relaxed filtering requirements, perhaps it's due to the increased bandwidth-regardless, the proof is in the listening. Therefore, produce your master at the highest resolution, and at the end (the production master), use a single process to reduce the wordlength or sample rate. Multiple processes deteriorate quality more than a single reduction at the end.

For example, your multimedia CD ROMs will sound much better if you work at 44.1 kHz/24 bits or higher, even if you must downsample to 11.025 kHz/8 bits at the end. Plus you've preserved your investment for the future. Your master will be ready for DVD-ROM, whose higher storage capacity will permit a higher resolution delivery medium.

Even the 16-bit Compact Disc, can sound better than what most of us are doing today. We're actually compromising the sound of our 16-bit CDs by starting with 16-bit recording. It's the reason why I've been a strong advocate of wide-track, high speed analog tape and/or 24-bit/96 kHz recording and mixing techniques. You can hear the difference; I've already produced several incredible-sounding CDs working at 96 kHz until the last step.

Working at 96 kHz/24 bit is prohibitively expensive for most of today's projects. The DAWs have half the mixing and filtering power, half the storage time, and outboard digital processors for 96 kHz have barely been invented. As a result, work time is more than doubled, and storage costs are quadrupled (due to the need for intermediate captures). I have no doubt that will change in the next few years. So at Digital Domain, most of the time we can't work at 96 kHz, but we still follow the source-quality rule as much as is practical. Clients are beginning to bring in 20-bit mixes at 44.1 or 48 kHz, and especially 1/2" analog tapes, from which we can get incredible results. The majority of mixes arrive on DAT, and we still get happy client faces and superb results by following the rule. When clients bring in 16-bit source DATs, we work with high-resolution techniques, some of them proprietary, to minimize the losses in depth, space, dynamics, and transient clarity of the final 16-bit medium. The result: better-sounding CDs.

Recently, another advance in the audio art was introduced, a digital equalizer which employs double-sampling technology. This digital equalizer accepts up to a 24-bit word at 44.1 kHz (or 48K), upsamples it to 88.2 (96), performs longword EQ calculations, and before output, resamples back to 44.1/24-bits. I was very skeptical, thinking that these heavy calculations would deteriorate sound, but this equalizer won me over. Its sound is open in the midrange, because of demonstrably low distortion products. The improvement is measurable and quite audible, more...well... analog, than any other digital equalizer I've ever heard. This confirms the hypothesis of Dr. James A. (Andy) Moorer of Sonic Solutions, "[in general], keeping the sound at a high sampling rate, from recording to the final stage will...produce a better product, since the effect of the quantization will be less at each stage". In other words, errors are spread over a much wider bandwidth, therefore we notice less distortion in the 20-20K band. Sources of such distortion include cumulative coefficient inaccuracies in filter (eq), and level calculations.
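A rough scipy sketch (not from the article) of that upsample-process-downsample idea; the plain Butterworth filter here just stands in for the equalizer:

```python
import numpy as np
from scipy import signal

fs = 44_100
x = np.random.default_rng(0).standard_normal(fs)     # one second of test noise

# 1) Upsample 44.1k -> 88.2k (an exact 2:1 ratio, polyphase anti-imaging filter).
x_os = signal.resample_poly(x, up=2, down=1)

# 2) Run the "EQ" at the doubled rate.  A simple Butterworth high-cut stands in
#    for the equalizer; the point is only that the math happens at 88.2 kHz.
sos = signal.butter(2, 10_000, btype="lowpass", fs=2 * fs, output="sos")
y_os = signal.sosfilt(sos, x_os)

# 3) Resample back down to 44.1 kHz for the final 44.1/24-bit output.
y = signal.resample_poly(y_os, up=1, down=2)
print(len(x), len(y))   # same length in and out
```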

88.2 kHz Reissues Will Sound Better Than The CD Originals
The above evidence implies that record companies are sitting on a new goldmine. Even old, 16-bit/44.1 session tapes can exhibit more life and purity of tone if properly reprocessed and reissued on a 20-bit (24-bit) 88.2 kHz DVD. In addition, by retaining the output wordlength at 24 bits, it will be unnecessary to add additional degrading 16-bit dither to the product. Many of these older 16-bit tapes were produced with 20-bit accurate A/Ds and dithered to 16 bits; they already have considerable resolution below the 16th bit.

DSD versus Linear PCM
Sony's new high-resolution DSD format is a one-bit (delta-sigma modulation) system running at a 2.8224 MHz sample rate. The jury is still out on whether this system sounds as good as or better than linear PCM at 96 kHz/24 bit, but regardless, Sony's whole purpose was to follow the source quality rule. The company feels that DSD is the first medium that will preserve the quality of their historic analog sources, and that DSD is easily convertible to any "lower" form of linear PCM. Regardless of whether DSD or linear 96/24 becomes the next standard, it's a win-win situation for fans of high-resolution recording.

Extending the AES/EBU Standard
I've found that the ear is extremely sensitive to quantization distortion-the degradation can be heard as a "shrinkage" of the soundstage width. In my opinion, even 16-bit sources benefit from longer wordlength processing. I predict that in a few years studio storage and data transmission requirements will rise to 32 (linear) bits. 32-bit will result in subtly better sound than 24-bit, especially with cumulative processing and capturing. Cumulative processing (such as multiple stages of digital EQ) results in gradual degradation of the least significant bits. Thus, moving the LSB down lower than the 24th bit will reduce audible degradation. How long does that wordlength have to be to result in audibly transparent cumulative sound processing? It's hard to say-perhaps 26 to 28 bits, but because storage is organized in 8-bit bytes, it must increase to 32 bits. Processing precision can easily increase to 48 or even 72 bits because of the 24-bit registers in DSP chips.
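To get a feel for why longer storage wordlengths help with cumulative processing, here is a small numpy sketch (my own illustration, not from the article) that requantizes after each of 100 gain stages:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 100_000)

def quantize(sig, bits):
    """Round to the nearest step of a signed fixed-point word of the given length."""
    step = 2.0 ** -(bits - 1)
    return np.round(sig / step) * step

# 100 cumulative gain stages, requantized to the storage wordlength after each one.
# Longer words push the accumulated error further below the audible floor.
for bits in (16, 24, 32):
    y = x.copy()
    for _ in range(100):
        y = quantize(y * 0.999, bits)
    ideal = x * 0.999 ** 100
    err_db = 20 * np.log10(np.abs(y - ideal).max())
    print(f"{bits}-bit storage: accumulated error peaks around {err_db:.0f} dBFS")
```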

One big obstacle to better sound is our need to chain external processors and perform capturing and further processing in our workstations. Even if manufacturers use internal double precision (48-bit) or triple precision (72-bit) arithmetic, the chain of processors must still communicate at only 24 bits, for that is the limit of the AES/EBU standard. Despite that, I welcome manufacturers who use higher precision in their internal chains, because all other things being equal, we'll have better sound. The ultimate solution is to extend the AES/EBU transmission standard to a longer wordlength, but in the meantime, try to avoid too many processors in the chain, and reduce the practice of cumulative mix/capturing and reprocessing.

Floating or Fixed?
Don't get into a misinformed "bit war" confusing floating point specs with their fixed point equivalent. A 32-bit floating point processor is roughly equivalent to a 24-bit fixed point processor, though there are some advantages to floating point. There are now 40-bit floating point processors, and all things being equal, they seem to sound better than the 32-bit versions (but when was the last time all things were equal?). On the fixed point side, the buzz word is double-precision, which extends the precision to 48 (fixed point) bits. Double precision arithmetic (or doubled sample rate) in a mixer requires more silicon and more software to have the same apparent power, that is, the same quantity of filters and mixing channels. It'll be expensive, but ultimately less expensive than its high-end analog equivalent, a mixer with very high voltage power rails, and extraordinary headroom (tubes, anyone?).
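The floating-versus-fixed comparison above boils down to significand width; numpy can show it directly (a trivial sketch, not from the article):

```python
import numpy as np

# An IEEE 32-bit float stores 23 fraction bits plus an implied leading 1,
# i.e. a 24-bit significand, which is why it's roughly comparable to 24-bit fixed point.
print("float32 significand bits:", np.finfo(np.float32).nmant + 1)   # 24
print("float64 significand bits:", np.finfo(np.float64).nmant + 1)   # 53
```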

Warm or Cold? Digital is Perfect?
What does a double-precision digital mixer sound like? It sounds more like analog. The longer the processing wordlength, the warmer the sound; music sounds more natural, with a wider soundstage and depth. Unlike analog tape recording and some analog processors, digital processing doesn't add warmth to sound; longer wordlength processing just reduces the "creep of coldness". With shorter wordlengths, the sound slowly but surely gets colder. Cold sound comes from cumulative quantization distortion, which produces nasty inharmonic distortion.

That's why "No generation loss with digital" is a myth. Little by little, bit by precious bit, your sound suffers with every DSP operation. As mastering engineers who use digital processors, we have to choose the lesser of two evils at every turn. Sometimes the result of the processing is not as good as leaving the sound alone.

III. Detecting Those Sonic Bugs

Did you know that the S/PDIF output of the Yamaha mixing consoles is truncated to 20 bits? Now how did I know that? Because I tested it! And you can, too, with some very simple equipment. There are some legitimate reasons why Yamaha made that choice, although I do not agree with them. This means that if you want to get all 24 bits out of your Yamaha console, you must use the AES/EBU output. There are simple ways to adapt the Yamaha's AES/EBU output to the S/PDIF input of your soundcard, and this will preserve all the bits. Many (if not all) soundcards that work at 24 bits accept the 24 bits on their S/PDIF inputs.
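You can run the same kind of truncation check in software once you have captured the device's digital output; here is a small numpy sketch (the capture itself and the names are assumptions on my part, not the article's method):

```python
import numpy as np

def active_bits(samples):
    """Return which of the 24 bits ever toggle in a block of signed 24-bit PCM samples."""
    words = np.asarray(samples, dtype=np.int64) & 0xFFFFFF     # two's-complement view
    toggled = np.bitwise_or.reduce(words ^ words[0])
    return [bit for bit in range(24) if (toggled >> bit) & 1]

# Hypothetical captures: a full 24-bit stream vs. the same stream with the
# bottom 4 bits zeroed, which is what a 20-bit-truncated output would show.
rng = np.random.default_rng(2)
full = rng.integers(-2**23, 2**23, 10_000)
truncated = full & ~0xF
print("full capture, active bits:     ", len(active_bits(full)))        # 24
print("truncated capture, active bits:", len(active_bits(truncated)))   # 20
```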

Proper use of those 24-bit words is equally important. Bugs that affect sound creep into almost every manufacturer's release product. In 1989, the latest software release of one DAW manufacturer (whose machine I no longer use) had just hit the market. I edited some classical music on this workstation. There was a subtle sonic difference between the source and the output, a degradation that we perceived as a sonic veil. Eventually it was traced to a one bit-level shift at the zero point (crossover point, the lowest level of the waveform) on positive-going waves only. This embarrassing bug should have been caught by the testing department before the software left the company. Does your DAW manufacturer have a quality-control department for sound, with a digital-domain analyzer such as the Audio Precision? Do they test their DSP code from all angles? Incredible diligence is required to test for bugs. For example, a bug can slip into equalizer code that does not affect sound unless the particular equalizer is engaged. It's impossible to test all permutations and switches in a program before it's released, but the manufacturer should check the major ones.

A Bitscope You Can Build Yourself
The first defense against bugs is eternal vigilance. Listening carefully is hard to do-continuous listening is fatiguing, and it's not foolproof. That's why visual aids are a great help, even for the most golden of ears. In the old days, the phase meter was a ubiquitous visual aid (and should still be a required component in every studio); our studio also uses a product we call the "digital bitscope", that is easy and inexpensive to put together. It's not a substitute for a $20,000 digital audio analyzer, but it can't be beat for day-to-day checking on your digital patching, and it instantly verifies the activity of your digital audio equipment. Think of it this way: The bitscope will tell you for sure if something is going wrong, but it cannot prove that something is working right. You need more powerful tools, such as FFT analysers, to confirm that something is working right.

However, the bitscope is your first line of defense. It should be on line in your digital studio at all times. You can assemble a bitscope yourself--see The Digital Detective. If you're not a do-it-yourselfer, Digital Domain manufactures a low-cost box that can be converted to a bitscope with the addition of a pair of outputs and a 2-channel oscilloscope. Our bitscope is always on-line in the mastering studio. It tells us what our dithering processors are putting out, it reveals whether those 20-bit A/D converters are putting out 20-bit words, and it exposes faults in patching and digital audio equipment.

Some Simple Sound Tests You Can Perform on a DAW
With the output of my workstation patched to the bitscope, I can watch a 16 or 20-bit source expand to 24-bits when the gain changes, during crossfades, or if any equalizer is changed from the 0 dB position. A neutral console path is a good indication of data integrity in the DAW. After the bitscope, your next defense is to perform some basic tests, for linearity, and for perfect clones (perfect digital copies). Any workstation that cannot make a perfect clone should be junked. You can perform two important tests just using your ears. The first test is the fade-to-noise test, described previously in my Dither article.

The next test is easier and almost foolproof: the null test, also known as the perfect clone test: Any workstation that can mix should be able to combine two files and invert polarity (phase). A successful null test proves that the digital input section, output section, and processing section of your workstation are neutral to sound. Start with a piece of music in a file on your hard disk. Feed the music out of the system and back in and re-record while you are playing back. (If the DAW cannot simultaneously record while playing back, it's probably not worth buying anyway). Bring the new "captured" sound into an EDL (edit decision list, or playlist), and line it up with the original sound, down to absolute sample accuracy. Then reverse the polarity of one of the two files, play and mix them together at unity gain. You should hear absolutely no sound. If you do hear sound, then your workstation is not able to produce perfect clones. The null test is almost 100% foolproof; a mad scientist might create a system with a perfectly complementary linear distortion on its input and output and which nulls the two distortions out, but the truth will out before too long.
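A bare-bones numpy version of that null test (assuming the two captures are already sample-aligned; the function and signal names are just placeholders):

```python
import numpy as np

def null_test(original, captured, tolerance_db=-120.0):
    """Invert the polarity of one file, mix at unity gain, and report the residual."""
    residual = np.asarray(original, dtype=np.float64) - np.asarray(captured, dtype=np.float64)
    peak = np.abs(residual).max()
    peak_db = 20 * np.log10(peak) if peak > 0 else float("-inf")
    return peak_db, peak_db <= tolerance_db    # (residual level, perfect clone?)

# A perfect clone nulls to silence; a copy that went through a 16-bit
# requantization leaves a residual around -96 dB.
x = 0.5 * np.sin(2 * np.pi * 440 * np.arange(44_100) / 44_100)
print(null_test(x, x))                              # (-inf, True)
print(null_test(x, np.round(x * 32768) / 32768))    # (~ -96 dB, False)
```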

If the workstation is 24-bit capable, and your D/A converter is not, you may not hear the result of an imperfect null in the lower 8 bits. Use the bitscope to examine the null; it will reveal spurious or continuous activity in all the bits and tell you if something funny is happening in the DAW. Even if your DAC is 16 bits, you can hear the activity in the lower 8 bits by placing a redithering processor in front of your DAC.

Use the powerful null test to see whether your digital processors are truly bypassed even if they say "bypass". Several well-known digital processors produce extra bit activity even when they say "bypass"; this activity can also be seen on the bitscope. Use the null test to see if your digital console produces a perfect clone when set to unity gain and with all processors out (you'll be surprised at the result). Use the null test on your console's equalizers; prove they are out of the circuit when set to 0 dB gain. Use the null test to examine the quantization distortion produced by your DAW when you drop gain .1 dB, capture, and then raise the gain .1 dB. The new file, while theoretically at unity gain, is not a clone of the original file. Use the null test to see if your DAW can produce true 24-bit clones. You can "manufacture" a legitimate 24-bit file for your test, even if you do not have a 24-bit A/D. Just start with a 16-bit or 20-bit source file, drop the gain a tiny amount and capture the result to a 24-bit file. All 24 of the new bits will be significant, the product of a gain multiplication that is chopped off at the 24th bit. You'll see the new lower bit activity on the Bitscope...."

Now that's a lot of info!!! :loco: :loco:

More at Digital Domain