EQ is not fundamentally "bad". There is nothing inherently "evil" about EQ - in fact it is a rather wonderful thing. The real problem here is just bad engineering - or rather "overengineering" (in the negative sense - like "overacting").
One of the artists I work with - a very promising singer/songwriter - has recently produced a demo CD. Half the songs were recorded and mixed by a friend of mine, and these sound pretty good.
However, the rest of the songs sound abysmal. Guitars and keyboards sound distorted in an unnatural "twisted" way, the drums sound like they were recorded on an audio cassette, and the vocals sound like they've gone through a transistor radio. I was shocked to hear such bad quality from recording sessions done in 1999.
What's particularly surprising is that the songs were recorded in *digital* studios - so there's no technical excuse for it.
Part of the problem is that less-experienced sound engineers can feel guilty if a recording session seems too easy. They think they haven't worked hard enough. They think that the magic only comes from a recording if the engineer has personally squeezed it out. So the first thing they reach for is the EQ.
But there's nothing intrinsically wrong with having the EQ punched into every signal path during tracking, mixing, and mastering - PROVIDED it is doing no more than absolutely necessary to achieve the right result.
Using EQ during tracklaying is pretty much essential if you're recording a complex track and want to be able to perform overdubs quickly and easily later on during production. Your ideal scenario is to create a multitrack recording that MIXES ITSELF on playback (some people are well-known for their ability to do this), so that rough monitor mixes of excellent quality can be produced in a matter of seconds at a new studio (much to the delight of the next engineer working on the project).
Recording such a multitrack involves recording through some carefully judged EQ, and also (and this takes a fair degree of skill) riding the levels DURING RECORDING to anticipate when things may be about to play too loudly or too softly.
This is sometimes referred to appropriately as "Manual compression", and obviously requires either a good knowledge of the song, or a good memory from the first couple of run-throughs. Most pro engineers (especially ones who've been around a few years) do this in ADDITION to using a compressor unit.
However, the problem is that many junior engineers go WAY OVER THE TOP with EQ (and indeed, compression) during recording, and it can be difficult - if not impossible - to undo the damage they have done later. This leads some people to conclude "EQ is bad". But this is only the case when it is inappropriately used by unskilled engineers.
Unfortunately, you have to take the comments of professionals who say "I never use EQ" with a pinch of salt, I'm afraid. I have worked with such producers and engineers before, and when you're "babysitting" a recording session for them, you frequently find that the EQ is punched in on every channel!
"I thought you said you didn't use EQ - isn't that what you said?" you ask them. "Oh - by that I simply meant that I don't use EQ to "create" my sound - I use mic placement and instrument choices. But of course I use EQ for all the "normal" things, naturally - I mean, who doesn't?" - Aha!
So in fact they DO use EQ after all. The "normal things" they are referring to are the standard things that ALL well-brought-up sound engineers do as a matter of course throughout the working day. These include:
*) Rolling the bass off all signals that don't need it (including vocals)
The trick here is NOT to affect the desired sound at all, but simply to protect yourself against things you're not interested in recording, such as accidental microphone bumps (kicking the mic stand etc.), underground train rumbles (hard to get away from in any part of central London), and air conditioning noise (professionally damped air conditioning systems don't "roar" like normal ones, but they *still* have an audible distant bass rumble).
This rolling off of LF (low frequency) also provides a degree of protection against vocal "popping", and (if you do outside recording for sound effects) against signal overloading caused by wind noise. Rolling off the LF is so important, and so practical, that high-end microphones (such as the famous Neumann U87 - the "standard" vocal microphone) have a low-frequency cut-off switch built in, to stop the low frequencies from even getting into your desk in the first place.
*) Rolling the treble off all signals that don't need it.
Again, you are trying to have no audible effect on the desired sound here, but to eliminate certain things - noise being the main one. Rolling the very top end off things like basses, (some) guitars, synthesizers and certain other amplified instruments can dramatically cut down noise without otherwise affecting the final mix at all. Also, some instruments (such as electric pianos) are susceptible to ultrasonic interference from radio and other sources. Nasty things can happen if this goes unchecked (especially with analog recording, where other ultra-high frequencies such as tape bias, and low-frequency artifacts like tape modulation, come into the equation).
Rolling off the stuff you don't need at either end of the spectrum is a very practical thing to do, and leads to a "cleaner" sounding result. Of course everyone knows that this isn't a "hi-fi" thing to do, but we simply don't live in that kind of perfect world. Top-of-the-range professional desks like Solid State Logic consoles, costing several hundred thousand dollars, even have an extra switch to place the low-frequency and high-frequency cut-off filters right at the very *start* of the channel - well before the signal goes into the (inbuilt) compressor/gate, insert point, and main channel equaliser.
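The two "trim" filters described above can be sketched in code. Here is a minimal illustration using the well-known RBJ Audio EQ Cookbook second-order high-pass and low-pass formulas - the 80Hz and 12kHz corner frequencies are my own arbitrary examples, not a recommendation:

```python
import cmath, math

def biquad_mag(b, a, f, fs):
    """Magnitude of a biquad's frequency response at f Hz."""
    z = cmath.exp(-2j * math.pi * f / fs)
    return abs((b[0] + b[1] * z + b[2] * z * z) /
               (a[0] + a[1] * z + a[2] * z * z))

def highpass(fc, fs, q=0.7071):
    """RBJ cookbook second-order high-pass (Butterworth shape at q=0.7071)."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    return ([(1 + cw) / 2, -(1 + cw), (1 + cw) / 2],
            [1 + alpha, -2 * cw, 1 - alpha])

def lowpass(fc, fs, q=0.7071):
    """RBJ cookbook second-order low-pass."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    return ([(1 - cw) / 2, 1 - cw, (1 - cw) / 2],
            [1 + alpha, -2 * cw, 1 - alpha])

fs = 48000
hp = highpass(80, fs)    # keep train rumble and mic-stand bumps out
lp = lowpass(12000, fs)  # keep hiss and ultrasonic junk out

def trim_gain(f):
    """Combined gain of both trim filters at f Hz."""
    return biquad_mag(*hp, f=f, fs=fs) * biquad_mag(*lp, f=f, fs=fs)

for f in (30, 1000, 18000):
    print(f, round(trim_gain(f), 3))
```

The point the numbers make is exactly the one above: the rumble region and the very top end are heavily attenuated, while the midrange - where the "desired sound" lives - passes through essentially untouched.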
Trimming the high and low frequencies of desk channels without intending to change the sound of the instrument being recorded is also important when trying to reduce "spill". "Spill" is the sound leaking into the microphones from other musicians' instruments - drums being the classic example: they are so goddamn loud in real life that they tend to leak into anything in the studio that even vaguely resembles a microphone! You have to try to eliminate at least *some* of the spill from the drums, otherwise the drums will sound too distant and echoey.
This often takes the form - in addition to frequency "trimming" - of putting blankets and screens all over everything (effectively building lots of baby rooms and cubicles within your recording area) to keep the drums out. Unfortunately (and unsurprisingly), this "suffocates" everyone's sound. That was OK in the seventies, but as radios and records got better in the eighties and became more able to reproduce "spectacular" sound, people wanted to capture a more open sound, so the blankets went into the dustbin. This meant the drums were more echoey and less controllable (and tended to "spill" into everything), but records could reproduce that ambience faithfully too, and folks started to like the "bigger" sound of echoey drums - the ultimate being, of course, to stick a compressor/gate across some overhead mics to get that "Genesis" drum sound.
Anyway, the key point here is that all good engineers simply use their EARS to tell them if something is right or wrong - there are simply no rules beyond that - you're meant to be in a creative industry after all. If it sounds good then you're doing the right thing - which sometimes means doing nothing, and sometimes means doing a great deal - even taking EQ to extremes (just make sure that it actually sounds GOOD and that you're not just doing it because you think you should).
A further use of EQ is when you are well and truly stuck with a "ringy" sound, such as a metallic snare drum. You just suck out the ringiness with a tight EQ carefully tuned to the offensive frequency. You find this frequency by BOOSTING first, and sweeping through the frequency range until you hit the ring. You'll know when you've hit it, because it leaps out painfully (keep the monitors low). Once you've found the offending frequency, cut it back to a respectable level. This technique is also useful if a particular guitar note leaps out way too much, which can often happen with a mic in front of a real guitar amp and loudspeaker.
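The boost-sweep-then-cut procedure can be simulated numerically. This is only a toy sketch using the RBJ cookbook peaking-EQ formula: the "snare" is a made-up set of partials with a strong ring at 900Hz (a frequency the code pretends not to know), and the sweep finds the band where the boosted signal "leaps out" - i.e. where the energy jumps most:

```python
import cmath, math

def peaking(fc, fs, gain_db, q):
    """RBJ cookbook peaking-EQ biquad coefficients."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    return ([1 + alpha * A, -2 * cw, 1 - alpha * A],
            [1 + alpha / A, -2 * cw, 1 - alpha / A])

def mag(b, a, f, fs):
    """Magnitude of a biquad's frequency response at f Hz."""
    z = cmath.exp(-2j * math.pi * f / fs)
    return abs((b[0] + b[1] * z + b[2] * z * z) /
               (a[0] + a[1] * z + a[2] * z * z))

fs = 48000
# Hypothetical snare spectrum: broadband-ish partials plus a
# strong metallic ring at 900 Hz.
partials = [(200, 0.3), (500, 0.3), (900, 1.0), (2000, 0.3)]

def energy(filt):
    """Total energy of the partials after passing through filt."""
    return sum((amp * mag(*filt, f=f, fs=fs)) ** 2 for f, amp in partials)

# Step 1: sweep a tight +12 dB boost across the range; the sweep
# position that maximises energy is where the ring "leaps out".
sweep = [200 * (4000 / 200) ** (i / 39) for i in range(40)]
ring = max(sweep, key=lambda fc: energy(peaking(fc, fs, +12, q=8)))

# Step 2: flip the same tight band to a cut at the frequency we found.
notch = peaking(ring, fs, -12, q=8)
print(round(ring))  # lands near the 900 Hz ring
```

The cut only bites around the offending frequency; well away from it (say at 100Hz) the gain is still essentially unity, which is why a carefully tuned tight cut leaves the rest of the drum sound alone.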
It's "theoretically" correct, of course, to try and get everything right by mic placement, but on a real session with paying customers your options are in practice somewhat limited. You tend to experiment over the course of your career, not within the confines of a single session. On a session you will likely get - at best - two shots at getting the mic position right if you just have a single mic. Try any more than that, and people will start getting mad at you for taking up too much (expensive) studio time.
In addition, if you are recording several musicians all at the same time, you tend to place the microphones in a compromise between the position that "sounds" best and the position that prevents too much of the sound from the other musicians leaking into the microphone in the form of "spill". And it *is* a very careful balance (and you have very little time to figure it out). Engineers who resort to far too much EQ will prefer to place the microphones in the "safest" position from a "spill" point of view, because they are terrified that when they get up to their "bad practices" of extreme EQ whilst mixing, they will end up screwing up the sound of all the surrounding instruments as well (and they are right - they will!).
However, more experienced engineers are surprisingly relaxed about "spill", and place the mics in the position where, from an OVERALL perspective and with little EQ, the entire band sounds great (although this can severely limit your ability to "overdub" (replace) those instrumental parts later). When you're inexperienced and working with such a cool producer or engineer, you can feel very nervous about the situation - irrespective of the fact that it sounds great. "You'd better be right about this..." you mumble to them. "I'm taking an awful risk here..."
However, if you have several good microphones and lots of spare tracks, then it is sensible practice to record using several mics in different positions onto separate tracks simultaneously. You can then decide - after the artists have gone - which mic position worked best, and make a mental note of it for the next time you're working under similar circumstances.
Ultimately, the trick to knowing how to "trim" the frequency spectrum properly for different instruments is to know your studio (aircon/train problems?), your artist (do they tend to accidentally kick mic stands?), the environment (do local taxis get picked up on the Fender Rhodes here?), and most importantly of all - your EQ (so you know what "damage" it can accidentally do).
Other EQ "tricks" include:
*) "Sucking out" 800Hz
Something that mix engineers regularly do, is to use EQ to "suck out" (cut) the area around 800Hz on tracks that aren't "cutting through" the mix properly. When a mix gets "muddy", this trick can make each instrument sound a lot clearer.
But why does this trick work?
You don't have to think too hard to realise why this works. What exactly is there at 800Hz? Well, it's the "notes", isn't it? - the fundamental musical frequencies of most of the instruments playing. When too many things are playing at once, it all gets cluttered up around there. By "sucking out" that area on a given instrument, you effectively increase the overall *harmonic* content of that instrument's sound - the very thing that defines its character and uniqueness. So you start to hear the "character" of each instrument clearly, and the mix gets "cleaner" overall.
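A quick way to see the arithmetic behind this: apply a broad cut centred on 800Hz to a tone whose fundamental sits at 800Hz, and compare the harmonic-to-fundamental balance before and after. This toy sketch again uses the RBJ cookbook peaking-EQ formula, and the partial amplitudes and the -6 dB depth are made-up illustration values:

```python
import cmath, math

def peaking(fc, fs, gain_db, q):
    """RBJ cookbook peaking-EQ biquad coefficients."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    return ([1 + alpha * A, -2 * cw, 1 - alpha * A],
            [1 + alpha / A, -2 * cw, 1 - alpha / A])

def mag(b, a, f, fs):
    """Magnitude of a biquad's frequency response at f Hz."""
    z = cmath.exp(-2j * math.pi * f / fs)
    return abs((b[0] + b[1] * z + b[2] * z * z) /
               (a[0] + a[1] * z + a[2] * z * z))

fs = 48000
# An instrument whose fundamental sits near 800 Hz, plus two harmonics.
fundamental = (800, 1.0)
harmonics = [(1600, 0.5), (2400, 0.25)]

cut = peaking(800, fs, -6, q=1.0)  # broad, gentle "suck out"

def harmonic_ratio(filt=None):
    """Harmonic level relative to the fundamental, optionally filtered."""
    g = (lambda f: mag(*filt, f=f, fs=fs)) if filt else (lambda f: 1.0)
    fund = fundamental[1] * g(fundamental[0])
    harm = sum(a * g(f) for f, a in harmonics)
    return harm / fund

before = harmonic_ratio()
after = harmonic_ratio(cut)
print(round(before, 3), round(after, 3))
```

Because the cut is deepest right at its centre, the fundamental loses more level than the harmonics do, so the relative harmonic content - the "character" - goes up, just as described above.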
But - as I'm sure Bill Wall would rightly point out - such activity is undoubtedly a "bodge" - you shouldn't be writing such messy musical arrangements in the first place - at least, not if it is going to take dramatic EQ tricks to untangle them in the mix. You should use your MUSICAL arranging skills to give each instrument an appropriate musical range where it doesn't interfere too much with anything else.
Nevertheless, if you are "just the engineer" and not directly involved in the arranging or production process, you're not really in a position to influence this. Similarly, if you are a remix engineer and consider it important to keep all the original parts, the "suck out 800Hz" trick is well worth remembering in order to prevent everything muddying up the mix. Again, use the figure of 800Hz just as a rough guide, and let your ears find the exact spot to cut on each instrument.
*) "Scooping out" the mix
Another trick - beloved of American producers - is to "scoop out" frequency areas of the basic mix in order to place the final "key" elements, like guitars and vocals, in their own frequency "space" in the mix. People frequently use an expensive tool such as a Klark Teknik spectrum analyser to give a complete picture of the frequency balance of the song. They identify the content of (eg) the lead vocal by itself, and then literally try to scoop a hole at that frequency range out of the rest of the mix (perhaps using something as vicious as a graphic equaliser), to leave a space for the vocal to sit in.
Don't try this with the five-element spectrum analysers built into your Hi-Fi - that's simply not enough resolution. Pro models have many, many bands showing content all at once, like a big graph - very pretty.
If you feel that this is philosophically extremely dodgy, then I'm right behind you. I agree totally. Perhaps it was appropriate back in the days of Tamla Motown (where the technique is generally credited as being invented), but back then record quality (and indeed radio quality) was extremely poor, and engineers needed to use every trick in the book to make their records sound better than everyone else's on the same equipment of limited ability. I'm not sure it's valid in today's digital world, though; twisting sound to such extremes is something I try my best to avoid, and I'm really unhappy about resorting to a spectrum analyser to solve these problems.
As you can tell, I'm really not very keen on such a "mixing by numbers" approach. I don't mind too much if people achieve this using entirely their ears, because if they have good ears then they will always get a good result. What I don't like is when I see people doing it purely by looking at the graphical display of a spectrum analyser.
If mixing was *really* that simple, then everyone would do a good job.