Try to visualize an analogue waveform being sampled. The lower frequencies, which stretch further across the time domain to complete one cycle, are described by many more samples than the higher ones, which take less time. As you approach the very high frequencies, only a handful of samples are rendering each cycle of those waveforms. It doesn't have anything to do with aliasing or low-pass filtering.
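For concreteness, here's a quick sketch of that sample-count drop-off (my own numbers, not from the thread, assuming a 44.1kHz session):

```python
# How many samples land within one cycle of a sine wave,
# at a few frequencies, assuming a 44.1 kHz sample rate.
SAMPLE_RATE = 44_100  # Hz

def samples_per_cycle(freq_hz: float) -> float:
    """Raw number of samples that fall inside one cycle at freq_hz."""
    return SAMPLE_RATE / freq_hz

for f in (100, 1_000, 10_000, 20_000):
    print(f"{f:>6} Hz: {samples_per_cycle(f):8.2f} samples per cycle")
# 100 Hz gets 441 samples per cycle; 20 kHz gets barely more than 2.
```

Worth noting this only counts raw samples per cycle; sampling theory says any frequency below half the sample rate can still be reconstructed from those samples.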
It seems like the spokes of the wheel are going backwards because the frame rate of the camera recording them can't keep up.
Actually I think that may be a better analogy to describe aliasing!
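The wagon-wheel effect really is aliasing: the camera samples the wheel too slowly, so fast rotation reads back as slow (or backwards) rotation. A tiny sketch of the audio equivalent, assuming a 44.1kHz rate (my own example, not from the thread):

```python
import math

SAMPLE_RATE = 44_100  # Hz; Nyquist limit is 22,050 Hz

def sample(freq_hz: float, n: int) -> float:
    """Value of the n-th sample of a sine wave at freq_hz."""
    return math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)

# A 50 kHz tone is above the Nyquist limit, so its samples are
# indistinguishable from those of a 5.9 kHz tone (50,000 - 44,100 = 5,900):
for n in range(8):
    assert math.isclose(sample(50_000, n), sample(5_900, n), abs_tol=1e-9)
print("50 kHz sampled at 44.1 kHz reads back as 5.9 kHz")
```

Like the wheel appearing to spin backwards, the too-fast tone doesn't vanish; it shows up as a completely different, lower frequency.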
In this case it's more like filming a passing car where the camera only runs at 30fps, so as the car speeds by at 30mph it looks defined. But when the car does a pass at 100mph it looks like a blur because there aren't enough frames in the shot to render its detail adequately.
The answer is no. For metal it's totally unnecessary. I just finished mixing and mastering an album that was recorded at 88.2 and it was tedious, a waste of CPU and HD space. I often find that mediocre engineers record at high resolution in the hope of making up for their lame efforts (or at least, that has been my experience).
Incorrect. The reason it looks like a blur is that the shutter of the camera is slow, so in the time it is open the car moves, perhaps by a metre or so. 30fps is more than enough to accurately render a car at 100mph.

Think of this as 'approaching aliasing': not quite bad enough to misrepresent the waveforms, but bad enough to diminish their detail significantly.
But if it's mainly an analogue workflow, what difference does 88.2kHz make? It's ultimately the client's choice, and calling them mediocre for it feels a bit strange in a professional scenario. Those sorts of decisions are for the producer to make, not the mastering engineer.
the biggest thing i can offer here is what i've noticed now that i've switched to 44 k
when i recorded in 48 k, my daw / playback headroom seemed invincible. i could add hundreds of layers and hear all of them
but when you bounce to 44 k, it's all gonna mush and mesh together
if you work in 44 k, you're gonna hear that mush and mesh before you bounce, and you can react accordingly beforehand
this might sound like garbage to anyone who actually knows this shit by the books, but i don't, i'm self taught, and this is what i've experienced first hand.
If you find it sounds better, then do it.
end of thread
I stand corrected!
Work that into the analogy whichever you see fit.