Thursday, April 17, 2008

Engineering Ashi's Tracks (Sorta Technical)

This past week, I've been getting back to work, putting in a couple of hours each day, mixing and mastering. For those of you who don't know what it's like being a studio engineer, even a broke one like me, I'll give you a rundown.

When the vocals were first recorded, Ashi wanted to hear his takes on some parts just to make sure everything was good. The first thing I had to do was to make a template of plugins that brought out his voice and was close to what the finished product would sound like.

Future audio engineers, PAY ATTENTION!

First I added some compression to the voice to make sure he could be clearly heard through the mix. I set the compression threshold to about 10 dB under the peaks, with the ratio at about 3:1 or 4:1. For those of you who don't speak audio: compression evens out the volume so the voice sounds "smoother" instead of "spikier."
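If you like numbers, here's a rough sketch of what a compressor does with those settings. This isn't my actual plugin, just the math behind it, and I'm assuming the peaks sit right at 0 dBFS so a threshold 10 dB under them lands at -10 dBFS:

```python
# Rough sketch of downward compression: anything over the threshold gets its
# excess scaled back by the ratio; anything under it passes through untouched.
def compress_gain_db(level_db, threshold_db=-10.0, ratio=3.0):
    over = level_db - threshold_db
    if over <= 0:
        return 0.0                       # quiet parts: leave them alone
    return -over * (1.0 - 1.0 / ratio)   # loud parts: squash the excess

print(compress_gain_db(0.0))    # a 0 dBFS peak gets pulled down ~6.7 dB
print(compress_gain_db(-20.0))  # a quiet syllable passes through: 0.0 dB
```

That's all "smoother instead of spikier" means: the loud peaks come down toward the quiet parts, and then you can turn the whole vocal up.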

Next step was the EQ. When you first record using a new mic, you have to sort of feel around with the EQ to find the mic's sweet spot. Every mic can sound good, but of course it can sound terrible too if you don't know what you're doing. I know how to make my mic sound good, and I was able to bring Ashi's voice out of the mix. It's usually good to cut everything out from under 200 Hz (cut out the unnecessary bass) and then add a little bit from 1 kHz to 5 kHz to make that voice enunciate properly. For the uninitiated, the EQ is used because the raw audio from the mic can oftentimes be harsh or muffled, and by tweaking a bit of EQ, you can make the voice sound a lot better (and more noticeable) against the rest of the music.
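For the curious, those two EQ moves look roughly like this in code. A quick sketch assuming scipy is handy; a real parametric EQ uses proper peaking filters, so treat the presence boost here as an approximation:

```python
from scipy.signal import butter, sosfilt

SR = 44100  # CD sample rate

def eq_vocal(x, highpass_hz=200.0, band=(1000.0, 5000.0), boost_db=3.0):
    # 1) high-pass: cut everything out from under 200 Hz (unnecessary bass)
    hp = butter(2, highpass_hz, btype="highpass", fs=SR, output="sos")
    y = sosfilt(hp, x)
    # 2) crude presence boost: isolate 1-5 kHz and mix a little extra back in
    bp = butter(2, list(band), btype="bandpass", fs=SR, output="sos")
    extra = 10 ** (boost_db / 20.0) - 1.0   # +3 dB worth of that band
    return y + extra * sosfilt(bp, y)
```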

Some people forget a de-esser. I never forget, because I have ssssenssssitive earssssss. Some people, when saying words with lots of "s" sounds in them, are very "sibilant." Their "s" sounds cut through everything, and it hurts my ears to hear something so high-pitched played so loud. The same goes for recorded voices: some people tend to be more sibilant than others. Ashi, however, wasn't too sibilant, so it was easy to deal with. I always add a de-esser to the chain after the EQ. What it does is make the "s" sounds a lot less harsh. It squashes them so your ears don't hurt (but without making the signal muffled or killing the enunciation the way an EQ cut would).
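Here's a toy version of the idea, with made-up numbers for the sibilance band and threshold (again assuming scipy): split off the "s" band, turn it down only while it's loud, and leave the rest of the vocal alone.

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100

def deess(x, band=(5000.0, 9000.0), threshold=0.05, reduction=0.5, win=256):
    # isolate the sibilance band
    bp = butter(2, list(band), btype="bandpass", fs=SR, output="sos")
    sss = sosfilt(bp, x)
    # track how loud the "s" band is (simple moving RMS envelope)
    env = np.sqrt(np.convolve(sss ** 2, np.ones(win) / win, mode="same"))
    # duck that band only where it's hot; everywhere else, leave it be
    duck = np.where(env > threshold, reduction, 1.0)
    return (x - sss) + sss * duck
```

The point is that it's level-dependent: a static EQ cut up there would dull every single word, while the de-esser only grabs the moments where the "s" actually spikes.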

The last thing I do is put in some reverb. For Ashi, only one track required this. Usually for hip-hop, I like to leave the voice bone-dry with no reverb at all. However, for other types of music, engineers like to add varying levels of reverb to different instruments. Remember that 80s Genesis sound that all the rock bands had? That big boomy snare and toms? The echoey voices? That's all reverb, and a little bit on the vocals will sweeten up the sound in a lot of genres.

OK. So much for the effects that are required to produce a good sound for vocals.

The next step happened after Ashi went home and I got to play with all the audio like some sick doctor. Audio levels.

When you mix a bunch of audio tracks together, you have to gingerly adjust the levels on everything to make sure it sounds right. "Damn, I can't hear the voice, the guitar is too loud! The drums are too loud, but the bass isn't there!" Getting the levels right is pretty frustrating, because there isn't one right way of doing things. There is no magical formula for getting everything correct. It's all trial and error, and engineers get a feel for rough settings after spending tons of time in front of the audio, tweaking this and that. I've done a lot of audio mixing in the past, and I'm pretty anal about getting every single channel level just right, so I can usually get a rough sense of what everything needs just by looking at the meters. For Ashi's tracks, once I got a rough idea of what I needed, I set all the levels relative to each other. This is no easy task, because some songs can have 10-20 tracks or even more. Getting them to balance each other out can be super frustrating.

Today, I started work with all the rough levels finished. I needed to tweak them to make sure they were all consistent and everything sounded good. I listened to some hip-hop and some club stuff to get my ears warmed up for the right sound. Then I just dove into Ashi's tracks and tweaked the hell out of everything! I probably spent at least half an hour just getting every track's levels correct.

Next step was effects. A flanger on this part here, some more reverb there, etc. This is usually not so tedious because you can be sort of creative and you get to mess with the audio in weird ways.

Once all the effects were set, guess what time it was? Back to fine-tuning levels! When you use an effect, it changes the sonic character of everything and you have to readjust the levels to make sure there are no problems, that no channel overpowers something else.

Getting closer!

Next step was rendering everything out and sending the files to Ashi. It's always important to make sure that the client knows what direction you're taking, and whether or not he or she likes that direction. In Ashi's case, he definitely liked what he heard, but of course made some artistic suggestions for changes, which I readily accepted.

I finally rendered them one last time and checked the meters to make sure they all sounded about the same loudness relative to each other.

OK, a little note here on some history of audio.

As you probably know, CD audio is amazing. At 44.1 kHz, 16-bit, audio can sound reaaaaally good, easily covering the range of frequencies humans can hear, from 20 Hz to 20,000 Hz. Those 16 bits allow for a large dynamic range, from silent to whisper-soft to devastatingly loud.
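If you want the back-of-the-envelope version of why those two numbers work (rough rules of thumb: each bit buys about 6 dB of dynamic range, and a digital recording can capture frequencies up to half its sample rate):

```python
import math

bits, sample_rate = 16, 44100
print(20 * math.log10(2 ** bits))  # ~96.3 dB of theoretical dynamic range
print(sample_rate / 2)             # 22050.0 Hz, comfortably above 20 kHz hearing
```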

Sony and Philips knew what they were doing when they developed the medium in the early 80s.

What people might not know is that audio engineers can do a lot to a 44.1 kHz, 16-bit track to make it sound louder. Because a CD only has those 16 bits to work with, there's a hard ceiling on how loud any peak can get (0 dBFS). What engineers like to do to make a signal sound louder is to compress the signal, clip it, brick-wall limit it, etc. Basically, these are shortcuts that push the average level up while the peaks stay pinned at that ceiling, all within the 16 bits of dynamic range.

The problem with this is that too much compression or limiting can really make the sound dull. Clipping just makes the sound harsh. But hey, it makes the signal loud.
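Just to make that concrete, here's the crudest of those tricks sketched out: crank the gain and chop off whatever pokes over full scale. It measures louder, and it sounds exactly as harsh as you'd expect (made-up gain number, assuming samples in the -1.0 to 1.0 range):

```python
import numpy as np

def loudness_maximize(x, gain_db=6.0):
    # boost everything, then hard-clip whatever would exceed 0 dBFS
    boosted = x * 10 ** (gain_db / 20.0)
    return np.clip(boosted, -1.0, 1.0)  # the waveform gets that flat-topped look
```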

When CDs first came out, tracks had an average loudness of around -15 dBFS to -17 dBFS. As the years wore on, they got louder and louder, and a lot of pop tracks today come out around -10 dBFS to -5 dBFS. I personally don't like to master my own Steg Rex material any hotter than roughly -15 dBFS. However, I knew I couldn't do that with Ashi's material, so I tried to do the best I could to preserve some sort of dynamic range. I ended up mastering his stuff to around -11 dBFS or so. With his type of music, this made sense, because his stuff is really poppy.
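For what it's worth, when I say "average loudness" I mean something like an RMS reading over the whole track. Here's a bare-bones way to get that number, assuming samples normalized to -1.0 to 1.0 (real meters add weighting and windowing, so treat this as a ballpark):

```python
import numpy as np

def rms_dbfs(samples):
    rms = np.sqrt(np.mean(np.square(np.asarray(samples, dtype=np.float64))))
    return 20 * np.log10(rms) if rms > 0 else float("-inf")

# sanity check: a full-scale sine wave reads about -3 dBFS
t = np.linspace(0, 1, 44100, endpoint=False)
print(rms_dbfs(np.sin(2 * np.pi * 440 * t)))  # ~ -3.01
```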

I really hope that one day, engineers all over the world can come back down and maybe master to around -15 dBFS or so again, because it just sounds good. For those of you who listen to music: you are in control of the volume knob. Use it. If it isn't loud enough for you, just crank it. Mastering a CD too loud just makes its audio quality annoying. Why would any artist want people to turn DOWN their music? It also sucks for the mastering engineer, because people actually request music to be mastered louder because of "convention."

OK, enough of my rant on loud mastering, back to what I was working on.

When I was done with the 5 tracks, it was already night, so I burned a CD of the day's renders and brought it to my car. I usually listen to tracks through 3 or 4 audio sources to make sure everything sounds consistent. I know that my headphones tend to accentuate the vocal frequencies, while my desktop speakers hit the bass pretty hard (and muddy too, yechhh). The speakers in my car are in general pretty balanced, in between those two extremes.

So I walked out to my car.

Turned it to ACC.

Waited for the stupid seat belt warning beep to shut up.

Cracked open my beer in the back seat.

Cranked the stereo.

Listen.

Listen.

Did everything sound alright?

Yes.

YES!

Finally fucking finished!

OK, shut up, time to back the files up.
