Mastered and Unmastered example

- ask away
Brankis
mnml mmbr
Posts: 251
Joined: Thu Jun 16, 2005 11:13 pm

Post by Brankis »

also, my opinion on mastering for what it's worth...

i recently got the masters back for my first release, and while there was a difference in clarity, the increase in quality came not so much from the "mastering process" as from fixing errors in the balance of my track, which you can do yourself without a mastering engineer

i spent a really long time (years) trying to figure out why my mixes sounded pretty good in my studio but sounded like sh!t on other systems. Like many, I thought this was because my tracks were unmastered, but that is not the case at all.

all the information you need for an outstanding-sounding, balanced track is in the spectral content of your favorite tracks. Get to know what a "good" spectrum looks like and the relative amplitudes of the various frequency ranges in relation to each other. It sounds crazy, but trusting this information over your ears will give you much more consistent and accurate results
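if you want to put numbers on it, here's a rough python sketch of the idea (numpy/scipy assumed; the band edges and file names are placeholders i made up, not any standard):

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import welch

    # illustrative band split -- tweak to taste
    BANDS = [(20, 60), (60, 250), (250, 2000), (2000, 6000), (6000, 20000)]

    def band_levels_db(path):
        rate, data = wavfile.read(path)
        data = data.astype(np.float64)
        if data.ndim > 1:
            data = data.mean(axis=1)            # fold stereo to mono
        data = data / np.max(np.abs(data))      # normalize so only balance matters
        freqs, psd = welch(data, fs=rate, nperseg=8192)
        levels = []
        for lo, hi in BANDS:
            power = psd[(freqs >= lo) & (freqs < hi)].sum()
            levels.append(10 * np.log10(power + 1e-12))
        return levels

    ref = band_levels_db("reference.wav")       # placeholder file names
    mix = band_levels_db("my_track.wav")
    for (lo, hi), r, m in zip(BANDS, ref, mix):
        print(f"{lo}-{hi} Hz: ref {r:.1f} dB, mix {m:.1f} dB, diff {m - r:+.1f} dB")

the diff column tells you which ranges of your mix sit over or under the reference.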

for me, i use my monitors for shaping and designing sounds, but when it comes to eqing/mixing/balancing, I use my ears in combination with what I'm measuring with the tools to get the tightest, most musical result

i recently began playing out live and have had the good fortune to hear this on a variety of systems so far, and it's translated almost perfectly everywhere

engineers in the 60's were creating consistent albums not by "trusting their golden ears" but by mixing while watching VU meters and knowing what level each instrument needed relative to the others to create the desired energy on the 2-mix bus. No computers or anything back then; if you think someone created albums as consistent-sounding as the Beatles' purely by ear, you are dreaming. USE THE TECHNOLOGY

the beauty of understanding the balance/spectrum thing is that you can get it right without doing anything to the overall track. every range can be adjusted in the mix, so no mastering is really needed.

and, this may have been said before... but as far as using limiters: it's fine, but set the threshold to the difference between the RMS value of whatever reference track you're using and that of your own track. this will bring your music exactly to the RMS level of the reference without going overboard (unless your reference has already gone overboard)
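in numbers, something like this sketch (numpy/scipy assumed again, placeholder file names):

    import numpy as np
    from scipy.io import wavfile

    def rms_dbfs(path):
        rate, data = wavfile.read(path)
        if np.issubdtype(data.dtype, np.integer):   # scale int PCM to [-1, 1]
            data = data / np.iinfo(data.dtype).max
        data = np.asarray(data, dtype=np.float64)
        if data.ndim > 1:
            data = data.mean(axis=1)                # fold stereo to mono
        return 20 * np.log10(np.sqrt(np.mean(data ** 2)) + 1e-12)

    ref = rms_dbfs("reference.wav")
    mine = rms_dbfs("my_track.wav")
    print(f"reference: {ref:.1f} dBFS, mine: {mine:.1f} dBFS")
    print(f"pull the limiter threshold down about {ref - mine:.1f} dB")

so if the reference reads -8 dBFS and your track reads -12 dBFS, that's a threshold about 4 db down (with makeup gain) to land at the same RMS.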
supergroover
mnml newbie
Posts: 16
Joined: Mon Mar 31, 2008 5:34 pm

Post by supergroover »

I'd rather trust my ears in different rooms (and different systems) than trust the graphical spectral information...
hydrogen
mnml maxi
Posts: 2689
Joined: Tue Oct 17, 2006 2:41 am

Post by hydrogen »

Brankis wrote::roll:

If the host and all plugins in the chain are using floating-point math (like ableton), you can ride every single gain stage "in the red" and it won't matter as long as the main output stays below digital maximum... this is a FACT

the -12 thing has nothing to do with ableton or any DAW. It's your D/A converter being driven up to 0db that makes the sound muddy. Your converter will function optimally, like all other analog gear, when you feed it -12, because that's 0db on the analog scale, which is NOT the same as 0dbfs in your computer... Running your mix hot and making your prosumer converter put out that much voltage is why the sound gets "cloudy", "smeared", or whatever you want to call it. It has nothing to do with the software at all
sure... and what happens when you render to audio? it's not running through your D/A converter... it's simply clipping the wave. ableton mud in the wav. oh noes!
------------------------------------------------------
http://soundcloud.com/kirkwoodwest
RichardLodge
mnml newbie
Posts: 69
Joined: Fri Nov 13, 2009 11:22 am

Post by RichardLodge »

hmm.. kinda meant 'clip' as in a short excerpt taken from a longer piece of music, rather than a channel peaking in the red.
Brankis
mnml mmbr
Posts: 251
Joined: Thu Jun 16, 2005 11:13 pm

Post by Brankis »

hydrogen wrote:
Brankis wrote::roll:

If the host and all plugins in the chain are using floating-point math (like ableton), you can ride every single gain stage "in the red" and it won't matter as long as the main output stays below digital maximum... this is a FACT

the -12 thing has nothing to do with ableton or any DAW. It's your D/A converter being driven up to 0db that makes the sound muddy. Your converter will function optimally, like all other analog gear, when you feed it -12, because that's 0db on the analog scale, which is NOT the same as 0dbfs in your computer... Running your mix hot and making your prosumer converter put out that much voltage is why the sound gets "cloudy", "smeared", or whatever you want to call it. It has nothing to do with the software at all
sure... and what happens when you render to audio? it's not running through your D/A converter... it's simply clipping the wave. ableton mud in the wav. oh noes!
what are you talking about? flawed logic... there's no clipping as long as the master doesn't go beyond 0dbfs in the track. rendering has nothing to do with this; it's just an offline version of the same process as playing the track back

didn't you buy the bob katz book? you may want to give it another read, as it's clear as day.
Last edited by Brankis on Thu Mar 11, 2010 5:26 pm, edited 2 times in total.
Brankis
mnml mmbr
Posts: 251
Joined: Thu Jun 16, 2005 11:13 pm

Post by Brankis »

supergroover wrote:I'd rather trust my ears in different rooms (and different systems) than trust the graphical spectral information...
what about a combination of both?
damagedgoods
mnml mmbr
Posts: 349
Joined: Tue Feb 12, 2008 1:38 am

Post by damagedgoods »

hydrogen wrote:
Brankis wrote::roll:

If the host and all plugins in the chain are using floating-point math (like ableton), you can ride every single gain stage "in the red" and it won't matter as long as the main output stays below digital maximum... this is a FACT

the -12 thing has nothing to do with ableton or any DAW. It's your D/A converter being driven up to 0db that makes the sound muddy. Your converter will function optimally, like all other analog gear, when you feed it -12, because that's 0db on the analog scale, which is NOT the same as 0dbfs in your computer... Running your mix hot and making your prosumer converter put out that much voltage is why the sound gets "cloudy", "smeared", or whatever you want to call it. It has nothing to do with the software at all
sure... and what happens when you render to audio? it's not running through your D/A converter... it's simply clipping the wave. ableton mud in the wav. oh noes!
When you press play on a DAW, the whole shebang works with a certain buffer size (say 12 to a couple of thousand frames, depending on your latency settings) and processes the whole arrangement, 'n' samples at a time:

1) For each channel, the first effect is fed n samples of audio from the source (a synth, a wav file, whatever). The host calls the "processBuffer" function on the effect, telling it a) that it wants it to process n frames, b) what the values of those n frames are, and c) where to put the n processed frames when it's done.

2) The same thing happens in turn for every effect in a serial chain until the channel is "done"

3) All of the output buffers from each channel are then added together to make a master buffer

4) Master fx are applied to this master buffer in the same way as 1 and 2

5) Dither, quantize to 16- or 24-bit fixed point, and that's your output.

Unless something is terribly, terribly wrong, the only difference between processing in realtime and rendering offline is that in realtime the host only starts rendering the next section when the soundcard needs another buffer full of audio - leaving a gap of CPU downtime in between - whereas offline it starts rendering the next section as soon as the previous one is finished. It's possible that the buffer size might differ between the two, but any audio effect worth its salt should behave the same regardless of buffer size. There's dithering, but that only happens at the very end and is generally agreed to be totally inaudible. In fact it probably happens during realtime processing too, since you're always reducing the bit depth from 32 bits to 24 or 16 when you deliver audio to your soundcard.

Basically what I'm trying to say is that rendering offline does *exactly the same thing*, it just does it faster because it uses more or less 100% of the available CPU power (and takes less time) whereas realtime processing uses less (and takes more time).
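Here's a toy version of that loop in python (my own numpy sketch, obviously not Ableton's actual code) showing why float headroom means nothing clips until the final quantize step:

    import numpy as np

    N = 512                                     # frames per buffer
    t = np.arange(N) / 44100.0

    # two "channels", each deliberately hot (peaks around 2.0, i.e. +6dbfs)
    ch1 = 2.0 * np.sin(2 * np.pi * 220 * t)
    ch2 = 2.0 * np.sin(2 * np.pi * 330 * t)

    master = ch1 + ch2                          # step 3: sum channels into a master buffer
    master = master * 0.2                       # step 4: master fader as the only master fx

    # step 5: quantize to 16-bit fixed point -- clipping only happens HERE,
    # and only if the master buffer still exceeds full scale
    out = (np.clip(master, -1.0, 1.0) * 32767).astype(np.int16)

    print("peak before master gain:", np.abs(ch1 + ch2).max())  # well over 1.0, no harm done
    print("peak after master gain:", np.abs(master).max())      # under 1.0, so nothing clips

The intermediate buffers ride way over full scale, but since it's all floating point, the information is still there when the master gain pulls it back down.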

Brankis has a more valid point; as soon as you return to the analogue domain, there's the opportunity for signal degradation at every step. I don't have an intimate knowledge of D/A converters but I don't find it hard to believe that cheap ones may be less accurate at the top of their dynamic range.

(yeah, i jumped in again.)
o b j e k t

www.keinobjekt.de
hydrogen
mnml maxi
Posts: 2689
Joined: Tue Oct 17, 2006 2:41 am

Post by hydrogen »

All I'm saying is that when you render a track to wav, it has nothing to do with the D/A converters. The clipping will be in the file. There is no D/A conversion involved... it's still digital data and has nothing to do with your audio interface.

And for sure... Bob Katz's book. In particular he mentions that the metering in most software isn't accurate, which makes it even more important to stay out of the red.

One last thing: are you guys also implying that when you play back an audio file that is mastered to, say, -.3db, it will clip or cause distortion in the D/A converters?
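Quick python sketch of what I mean (hypothetical numbers, numpy assumed). Push the master over 0dbfs and the flat tops get baked into the rendered file; at -.3db nothing ever touches full scale:

    import numpy as np

    t = np.arange(44100) / 44100.0
    sine = np.sin(2 * np.pi * 440 * t)

    hot = 1.5 * sine                    # master bus peaking over 0dbfs
    safe = 10 ** (-0.3 / 20) * sine     # mastered to -.3dbfs

    def to_16bit(x):
        # rendering to a fixed-point wav: anything over full scale gets clipped
        return (np.clip(x, -1.0, 1.0) * 32767).astype(np.int16)

    print("hot samples pinned at full scale:", np.sum(np.abs(to_16bit(hot)) == 32767))
    print("safe samples pinned at full scale:", np.sum(np.abs(to_16bit(safe)) == 32767))

The hot render comes back with thousands of samples pinned at full scale... that clipping is in the file forever, no converter involved. The -.3db one comes back with zero.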
------------------------------------------------------
http://soundcloud.com/kirkwoodwest