Have you ever had one of those moments discussing workflow, processing, or what have you...where someone mentions something that's such a painfully simple concept, you wonder why you never thought of it before? Maybe you've simply had a "Eureka!" moment on your own. Let's create a little archive of these hidden common sense gems.
Here are two examples:
A favorite eureka moment of mine from many years ago: just because you're using a compressor doesn't mean you have to add make-up gain.
Another that I have to give René Coronado massive credit for: fast automation moves are easier to write if you run playback at half speed while writing automation.
How about: don't use bright mics on bright sources.
Also, headphone gain affects an actor's read.
Edited to add: don't EQ things while they're soloed; EQ relies on context.
When you import high sample rate sounds into Pro Tools (e.g. a 96k sound into a 48k session), the usual procedure is to sample rate convert them so they play at the correct speed...
But if you choose not to do this, it's a quick and dirty way to pitch shift down an octave without processing, i.e. playing a 96k sound in a 48k session is the same as having that sound available at half speed, or quarter speed for a 192k file in a 48k session.
(The same goes for manually editing the sample rate in the Workspace browser.)
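The arithmetic behind the trick is simple and worth having on hand. Here is a small sketch (plain Python, not any Pro Tools API) that computes the playback speed and pitch shift you get when a file's sample rate doesn't match the session's:

```python
# Sketch of the no-conversion playback trick: a file recorded at one sample
# rate, played back at the session rate without conversion, changes speed by
# the ratio of the two rates, and pitch follows speed.
import math

def playback_shift(file_rate_hz: int, session_rate_hz: int) -> tuple[float, float]:
    """Return (speed factor, pitch shift in semitones) when a file is
    played at the session rate with no sample rate conversion."""
    speed = session_rate_hz / file_rate_hz  # 48k / 96k -> 0.5x speed
    semitones = 12 * math.log2(speed)       # halving speed = -12 semitones
    return speed, semitones

print(playback_shift(96_000, 48_000))   # 0.5x speed, one octave down
print(playback_shift(192_000, 48_000))  # 0.25x speed, two octaves down
```

The same formula also tells you how far off-pitch a mistakenly unconverted file will sound for less tidy ratios like 44.1k in a 48k session.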
I am studying sound design, so there is not much I can say, but the thing that came to me like some kind of angel from the heavens was discovering Elastic Audio while editing the dialogue for my second project. During the first one I would ask my ADR friends to start over and over again in order to sync the speech with the picture... and they would just hate me!
Rather than focus on your source sounds, think about how any sound can fit within a mix. Some people spend hours trawling through sample packs when they could have used hundreds of sounds they passed over. The trick is making the dynamics and frequencies sit between everything else. Relativity, baby...
My most recent mind-leap: dynamic range vs. perceived dynamic range. I was lucky during my last mix to have access to a room full of Meyer Sound loudspeakers (UP-Juniors and a UM-1P, for those who know them) which I set up in a 5.1 configuration. I discovered an incredible difference in perceived dynamic range between the set of midfield monitors (Meyer HD-1s) in the control room and the live sound cabinets in the other room. The slightly loud words in the dialog track that I let pass by on the midfields came across as painful blasts of sound in the other room.
It made me think hard about my use of the frequency spectrum, the surround field, reverbs/delays and subtle compression in conjunction with variation in volume to create a smooth and consistent level throughout the film.
Not to mention the other lessons I learned just by previewing my mix on a second set of speakers. If you can make it happen, do it. A lot. It keeps you on your toes. Then sum to mono and do it again.
When splitting mono atmos tracks between scenes, use a stereo file and swap between the L and R channels at each cut, so the joins are seamless with no phase issues. It's made things far easier for sitcoms etc. set in the same place for the whole episode.
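One way to picture the trick: the left and right channels of a stereo atmos are decorrelated, so alternating between them at scene cuts avoids the cancellation you can get when one mono file butts up against itself. A toy sketch (the scene boundaries and signals are made up for illustration, not any DAW's API):

```python
# Alternate between L and R channels of one stereo atmos, scene by scene.
# Even-numbered scenes take the left channel, odd-numbered scenes the right,
# so no cut ever joins a channel to a copy of itself.

def fill_scenes(stereo, scene_bounds):
    """stereo: (left, right) lists of samples of equal length;
    scene_bounds: list of (start, end) sample indices per scene.
    Returns one mono track built by swapping channels at each cut."""
    left, right = stereo
    out = [0.0] * len(left)
    for i, (start, end) in enumerate(scene_bounds):
        src = left if i % 2 == 0 else right  # swap channel at every cut
        out[start:end] = src[start:end]
    return out

# Toy example: two scenes drawn from one 8-sample stereo "recording".
mono = fill_scenes(([1.0] * 8, [2.0] * 8), [(0, 4), (4, 8)])
print(mono)
```

In a DAW you'd do the same thing by hand: drop the stereo file on two mono tracks and alternate which one is unmuted per scene.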
Here's a technique I've been using lately: layering a naturally recorded sound with a phase-inverted noise floor extracted from the same sound. It's the best way I've found to get good noise reduction without losing too much of the original character.
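As a rough sketch of the idea (plain Python with made-up signals, not the poster's actual session): grab a noise-only stretch of the recording, invert its polarity, and loop it under the full recording. This can only cancel components that stay coherent with the sampled floor, e.g. steady hum, so real-world mileage varies:

```python
# Layer a recording with a phase-inverted copy of its own noise floor.
# Inverting polarity and summing is the same as subtracting, so we loop the
# sampled noise-only region and subtract it from the whole signal.

def layer_inverted_floor(signal, noise_region, amount=1.0):
    """signal: list of samples; noise_region: (start, end) indices of a
    noise-only stretch; amount: how much inverted floor to mix in (0..1)."""
    start, end = noise_region
    floor = signal[start:end]  # the extracted noise floor
    out = []
    for i, s in enumerate(signal):
        n = floor[i % len(floor)]   # loop the noise floor under the signal
        out.append(s - amount * n)  # inverted polarity = subtraction
    return out

# Toy example: a constant 0.1 "hum" cancels completely against itself.
print(layer_inverted_floor([0.1] * 6, (0, 3)))
```

Lowering `amount` trades cancellation depth for fewer artifacts on noise that isn't perfectly steady.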