Oversampling: When/where should you use it?

Correct me if I'm wrong, but we worry a lot about the harmonics and distortion that occur in our hardware, and whether or not those harmonics are desirable is what separates the bad gear from the great. And aliasing doesn't occur in analog hardware, since the "sample rate" is effectively infinite: it's a continuous electrical signal being dealt with, versus discrete samples in a DAW.

Of course. But that signal hits the AD/DA. If upper harmonics in plugins at 44.1 kHz cause aliasing, then why wouldn't the upper harmonics in signals that hit the AD/DA cause it when recording?
 
We record sound sources in rooms, with gear, and everything we record contains that type of information. We pummel the signal through preamps, compressors, EQs, saturators, VCAs, AD/DA converters, tubes, cords, circuit boards, etc. They all add different types of distortion, harmonics, modulation, and god knows what.

Why would this NOT cause aliasing? If we worry about the harmonics and overtones that our plugins introduce, why shouldn't we worry about the harmonics and overtones that are inherent in the signals we record? Or the harmonics and distortions that our gear introduces?
I agree with this mindset and would like to add that it's because of these distortions and colorations that analog gear is desirable. Aliasing is a digital phenomenon, like opening a JPG at 1040x786 on a screen with a resolution of 1920x768: ideally I want something that fits the screen perfectly and can blend and smear from there, rather than an image that's too big for my monitor and has to be whittled down, losing all the extra info I upsampled for in the first place. But that's my workflow. It's not that I'm worried about sample rate, file size, or even the drain on my CPU; it's that I can't hear a difference big enough to merit it. And even if I did, no one else would notice.
 
^Good question, Soundslikefog, and it is indeed one of the things that could have been made clearer in the article.

I don't know exactly how these things work, since I'm not that well-informed on the technical side, so take this with a grain of salt, but the way I understand it is this:
The last thing that happens before your analog signal becomes the signal on your computer is that it gets converted by the A/D converter. Depending on your sample rate, that converter applies a low-pass filter that removes all the information above half the sample rate (the Nyquist frequency).

So even if you have all kinds of saturation with infinite harmonics and ultrasonic information going through the analog chain, as soon as it hits the converter, everything we can't hear or use gets rolled off. The point where the Nyquist theorem comes in, with all the fold-back issues, is right after this stage, so the problem it describes only applies to the already-filtered signal that is now being turned into digital information.
To have no fold-back into the audible range, we need a sample rate of at least 2x the highest frequency we want to capture. Since human hearing is limited to around 20 kHz, if not less, two times that gives a sample rate of about 40k; add a little headroom for the filter's transition band and you arrive at the familiar 44.1k. You could of course go higher if you have the disk space and processing power for it, but in the ideal case there should be no audible difference.
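To make the mirroring concrete, here's a tiny sketch of the fold-back arithmetic (assuming an ideal sampler with no anti-alias filter in front; `alias_frequency` is just a name I made up for illustration, not a real API):

```python
# Fold-back arithmetic for an ideal sampler with no anti-alias filter.
def alias_frequency(f, fs):
    f = f % fs                           # sampling can't tell f from f mod fs
    return fs - f if f > fs / 2 else f   # anything above Nyquist mirrors down

fs = 44100
for f in (15000, 25000, 30000, 43000):
    print(f"{f} Hz -> {alias_frequency(f, fs)} Hz")
# 25000 Hz -> 19100 Hz: a 25 kHz harmonic would land audibly at 19.1 kHz
```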

However, not every converter is designed equally or with the best parts, so it could be that a converter's filter slope isn't ideal at the 44.1k setting. It rolls off too much or too little, and the results are coloured in an unintended way. In that case it could very well be that you raise the sample rate and suddenly the recording sounds the way it is supposed to. At that moment it is natural to think: hey, things sound better at a higher sample rate! But as mentioned in the article, the reason it sounds better is that the 44.1k anti-aliasing filter wasn't designed the way it should have been. If a higher sample rate sounds better on your converter... well, absolutely use it! But keep in mind WHY it sounds better.
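If it helps, here's a rough illustration of what "rolls off too much" can mean, using two toy FIR filters as stand-ins for converter anti-alias filters (the tap counts and cutoffs are arbitrary, picked only to show the contrast):

```python
import numpy as np
from scipy import signal

fs = 44100
# A long FIR that stays flat to 20 kHz, and a short one that
# starts rolling off audibly early.
good = signal.firwin(511, 20000, fs=fs)
sloppy = signal.firwin(31, 16000, fs=fs)

for name, taps in (("good", good), ("sloppy", sloppy)):
    w, h = signal.freqz(taps, worN=4096, fs=fs)
    db = 20 * np.log10(np.abs(h) + 1e-12)
    print(f"{name}: {db[np.argmin(np.abs(w - 18000))]:+.1f} dB at 18 kHz")
```

The short filter is already well into its roll-off at 18 kHz; that's the kind of unintended coloration that disappears when you move to a higher sample rate and the filter's transition band moves out of the audible range.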

With plugins it's different, because plugins already work with the band-limited sampled recording. Let's say it's recorded at 44.1k. A nonlinear plugin then introduces new harmonics that go beyond the bandwidth of that recording, and everything above the Nyquist frequency gets mirrored back, with unmusical results. That's why those plugins offer the option to extend the bandwidth, do their thing, and then bring the signal back to the old sample rate. The fold-back then lands in the inaudible ultrasonic range and is removed by the down-sampling filter when the signal returns to 44.1k.
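In code, that upsample -> process -> downsample pattern looks something like this sketch (the 4x factor and the tanh drive are arbitrary stand-ins for whatever a plugin does internally):

```python
import numpy as np
from scipy import signal

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 9000 * t)      # 1 second of a 9 kHz sine

def drive(sig):
    return np.tanh(4 * sig)           # nonlinearity: adds odd harmonics
                                      # (9 kHz -> 27 kHz, 45 kHz, ...)

# Naive: distort at the base rate. The 27 kHz harmonic exceeds the
# 22.05 kHz Nyquist limit and folds back to 44100 - 27000 = 17100 Hz.
y_naive = drive(x)

# Oversampled: upsample 4x, distort at 176.4 kHz (Nyquist is now 88.2 kHz,
# so the harmonics fit), then filter and downsample back. resample_poly
# applies the anti-alias low-pass for us on the way down.
y_os = signal.resample_poly(drive(signal.resample_poly(x, 4, 1)), 1, 4)

def level_at(sig, freq):
    # crude single-bin DFT magnitude at `freq`, in dB
    k = np.exp(-2j * np.pi * freq * np.arange(len(sig)) / fs)
    return 20 * np.log10(np.abs(np.mean(sig * k)) + 1e-12)

print("alias at 17.1 kHz, naive:      ", round(level_at(y_naive, 17100), 1))
print("alias at 17.1 kHz, oversampled:", round(level_at(y_os, 17100), 1))
```

The naive version shows a clear inharmonic component at 17.1 kHz; the oversampled version pushes it way down, because the 27 kHz harmonic was generated below the higher Nyquist limit and then filtered out before coming back to 44.1k.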

Again, all of this is just what I've pieced together from reading about the subject so far, so I may be off the mark! Maybe someone else can correct me on this :)
 
One more thing: does the down-sampling only happen at the output of the master track? And if so... wouldn't a LP filter prevent any aliasing? But that would be too easy... I feel like I'm misunderstanding at which points in the signal path the up-sampling/down-sampling happens.
 
In a nutshell: it happens wherever the programmer decides it is needed. But typically it happens inside each plugin (up-sample at its input, process, down-sample at its output), not just at the master track.
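And about the LP filter idea: the trouble is that by the time the signal reaches the master, the fold-back has already happened. The aliases don't sit above 20 kHz where a low-pass could catch them; they've been mirrored down into the audible band, mixed in with the music, so a filter on the master could only remove them by removing programme material too. A small sketch of the arithmetic (same fold-back rule as earlier, numbers just for illustration):

```python
# The fold-back has already happened by the time the master sees the
# signal: the alias sits INSIDE the audible band.
fs = 44100
harmonic = 27000              # e.g. 3rd harmonic of a 9 kHz tone
alias = fs - harmonic         # folds back to 17100 Hz
print(alias, "Hz -- below a 20 kHz low-pass cutoff, so the filter passes it")
```

That's why the filtering has to happen inside the plugin, at the elevated rate, before the signal comes back down.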