THE 10 MOST COMMON MIXING MISTAKES - AND HOW TO AVOID THEM

The quality of a master always depends on the mix, or more precisely on the quality of the mixdown. We often receive finished mixes for production, and it is important to us that these are of the highest possible quality so that we can deliver the perfect online master. That's why we've compiled the most common mixing mistakes for you. Alternatively, of course, we also offer our online mixing service.

1. LIMITING / MAXIMIZING

The final step in mastering is maximizing the track to its optimal loudness. If this step has already been carried out during mixing, the options for controlling loudness during mastering are limited, which can lead to distortion of individual frequencies.

Our advice:

For optimal mastering there should be no limiter on the master bus (the sum); leave some headroom instead.
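
If you want to check whether the exported mixdown still has headroom, a quick sample-peak measurement helps. Here is a minimal Python sketch, assuming the numpy and soundfile packages and a placeholder file name; the -3 dBFS figure is only a rule of thumb, not a fixed standard:

  import numpy as np
  import soundfile as sf

  # Placeholder file name - replace with your own mixdown.
  data, samplerate = sf.read("mixdown.wav")

  # Sample peak in dBFS; a maximized/limited mix usually sits right at 0 dBFS.
  peak = np.max(np.abs(data))
  peak_dbfs = 20 * np.log10(peak) if peak > 0 else float("-inf")
  print(f"Sample peak: {peak_dbfs:.2f} dBFS")

  if peak_dbfs > -3.0:
      print("Little or no headroom - check whether a limiter is still on the master bus.")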

2. FILE FORMAT

After the mixdown is finished, it has to be exported and encoded in a file format. With MP3 encoding a good deal of quality is lost, even if not everyone can hear it.

Our advice:

Desirable file formats are WAV, AIFF, SD2, Wave64 or Broadcast Wave. The bit depth should be as high as possible: 32-bit floating point, or 24-bit in sequencers such as Logic or Pro Tools. The sample rate also matters and should be 44.1 kHz for a Red Book CD, or 48/96 kHz for DVD-Audio, streaming or video production.
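
To double-check what you are about to send off, the container, bit depth and sample rate of a file can be read out quickly. A minimal sketch, assuming the Python soundfile package and a placeholder file name:

  import soundfile as sf

  # Placeholder file name - the mixdown you plan to deliver.
  info = sf.info("mixdown.wav")

  print("Container:  ", info.format)      # e.g. WAV, AIFF
  print("Encoding:   ", info.subtype)     # e.g. PCM_24 (24 bit) or FLOAT (32-bit float)
  print("Sample rate:", info.samplerate)  # e.g. 44100, 48000 or 96000
  print("Channels:   ", info.channels)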

3. HI-HAT IN THE MIX

The hi-hat sits in exactly the frequency range to which the human ear is most sensitive, roughly 1,000 to 4,000 Hz. It is often mixed too dominantly, which is difficult or impossible to compensate for later in mastering, since other tracks such as vocals, synthesizers or the snare drum also occupy this frequency range.

Our advice:

Pull the hi-hat back a little in the mix; it can always be brought forward again later in the master.

4. VOICE VOLUME

It is fundamentally not easy to strike a balance between the voice and the various instruments. Mastering is about getting the best overall sound out of a song, and changing the voice or individual instruments in a finished mix is only possible to a limited extent. It is therefore not always possible in mastering to bring a voice that is too quiet to the foreground and make it louder.

Our advice:

Make sure during production that the balance between the voices and the instruments is right. It also helps to create two mixdowns: a vocal-up version and a vocal-down version.

5. REVERB LEVEL OF THE VOICE

How much reverb the lead vocal should get in a track usually causes lengthy discussions during the mix. The decision is partly a matter of taste, but it is also tied to the tempo of the song. If maximum loudness is to be achieved at the end of mastering, compressors and limiters have to be used. These make the quiet passages louder while the loud passages cannot get any louder, so the dynamics are reduced. The reverb normally belongs to those quiet passages, so it comes up with them. To avoid too much reverberation in the master, the only options are to reduce the compression or to narrow the stereo image, and both compromise the overall result.

Our advice:

Be aware of this problem, keep it in mind during the mixdown and, when in doubt, keep the reverb level on the low side.

6. SNARE DRUM VOLUME

The snare drum contains quite strong transients (the first, very fast impulses of a percussion instrument), which is why it often fades into the background during mastering. Transients are important for shaping a sound; they carry a high level and a lot of energy, and unfortunately that is exactly what a limiter cuts into.

Our advice:

The snare drum should not be too quiet in the mix, otherwise it will unfortunately get lost during mastering. You can safely make it clearly audible, which is particularly important if the loudness is to be increased during mastering.

7. BASS DRUM VOLUME

If you start the mix with the bass drum and add more and more instruments in the following steps, the attack of the bass drum (between 1,000 and 3,000 Hz) falls further and further into the background. This is because many other sounds live in this mid-frequency range, e.g. vocals, guitars in a rock mix, synths in EDM. They cover up the attack of the bass drum that was still clearly audible at the beginning; this is known as the masking effect. It is almost impossible to make the attack more audible during mastering, since an equalizer mainly boosts the instruments that are already in the foreground.

Our advice:

If boosting the corresponding frequencies in the mix is not enough, it is better to use two bass drums: one kick is responsible only for the low frequencies (everything below about 200 Hz), the other for the upper frequencies (the attack). If you remove everything below about 500 Hz from the attack kick with a high-pass filter, the two layers can be balanced in the mix so that the attack is loud enough.
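
As an illustration of such a layer split, here is a minimal Python sketch, assuming numpy, scipy and soundfile, a mono kick sample with a placeholder file name, and the 200 Hz / 500 Hz cut-offs mentioned above:

  import soundfile as sf
  from scipy.signal import butter, sosfilt

  # Placeholder input - a single mono kick sample.
  kick, sr = sf.read("kick.wav")

  # Sub layer: a low-pass at roughly 200 Hz keeps only the low end.
  sos_low = butter(4, 200, btype="lowpass", fs=sr, output="sos")
  sub_layer = sosfilt(sos_low, kick)

  # Attack layer: a high-pass at roughly 500 Hz removes the low end entirely.
  sos_high = butter(4, 500, btype="highpass", fs=sr, output="sos")
  attack_layer = sosfilt(sos_high, kick)

  # The two layers can now be leveled independently in the mix.
  sf.write("kick_sub.wav", sub_layer, sr)
  sf.write("kick_attack.wav", attack_layer, sr)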

8. SUBBASS

One of the most important tasks in mastering is the bass range. Layering the low frequencies of the bass requires a lot of sensitivity. In addition, about 90% of the files delivered to us unfortunately still carry too much energy below 100 Hz in the side signal.

Our advice:

Make sure that the low-frequency range between 20 and 150 Hz does not get lost; this is especially important for hip-hop, rock and pop productions. And make sure that your bass range stays mono!
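
To get a rough idea of how much energy below 100 Hz sits in the side signal, you can decode the stereo mix into mid and side and compare their levels. A minimal sketch, assuming numpy, scipy and soundfile and a placeholder file name:

  import numpy as np
  import soundfile as sf
  from scipy.signal import butter, sosfilt

  # Placeholder file name - your stereo mixdown.
  stereo, sr = sf.read("mixdown.wav")
  left, right = stereo[:, 0], stereo[:, 1]

  # Mid/side decoding: mid = (L + R) / 2, side = (L - R) / 2.
  mid = (left + right) / 2
  side = (left - right) / 2

  # Keep only the range below 100 Hz and compare RMS levels.
  sos = butter(4, 100, btype="lowpass", fs=sr, output="sos")
  mid_low = sosfilt(sos, mid)
  side_low = sosfilt(sos, side)

  rms = lambda x: np.sqrt(np.mean(x ** 2))
  ratio_db = 20 * np.log10(rms(side_low) / rms(mid_low))
  print(f"Side vs. mid below 100 Hz: {ratio_db:.1f} dB")
  # A value well below 0 dB is what you want; anything close to 0 dB means
  # a lot of the sub-bass lives in the sides and is not mono.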

9. "S" IN THE VOICE

Many vocal recordings suffer from "s" sounds that are too sharp. This is particularly problematic when voices are doubled, as the "s" sounds can drift against each other. A de-esser, essentially a compressor working in a fairly narrow frequency band, can minimize the problematic "s" sounds. It can also be used in mastering, but there it affects the entire track and not just the problematic parts, so all instruments in the same frequency range are compressed as well, mainly the snare drum and the hi-hat.

Our advice:

If there is a problem with the "s" sounds, a de-esser should be used during the mixdown. This must be done with great care, however, because too much reduction turns an "s" into a dull "sh" and compromises the overall sound of the voice.
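
Purely as an illustration of the principle, here is a minimal broadband de-esser sketch in Python (band-limited detection, gain reduction applied to the whole signal). The 5-9 kHz detection band, threshold and ratio are assumptions, a mono vocal file with a placeholder name is assumed, and a real de-esser would additionally smooth the gain with attack and release times:

  import numpy as np
  import soundfile as sf
  from scipy.signal import butter, sosfilt

  # Placeholder file name - a mono vocal track.
  vocal, sr = sf.read("vocal.wav")

  # Detection signal: isolate the sibilance band (roughly 5-9 kHz, an assumption).
  sos = butter(4, [5000, 9000], btype="bandpass", fs=sr, output="sos")
  sibilance = sosfilt(sos, vocal)

  # Envelope of the sibilance band via a short moving RMS window (about 5 ms).
  win = max(1, int(0.005 * sr))
  envelope = np.sqrt(np.convolve(sibilance ** 2, np.ones(win) / win, mode="same"))
  env_db = 20 * np.log10(envelope + 1e-12)

  # Gain reduction only where the band exceeds the threshold (placeholder values).
  threshold_db = -30.0
  ratio = 4.0
  over = np.maximum(env_db - threshold_db, 0.0)
  reduction_db = over * (1.0 - 1.0 / ratio)

  # Broadband de-essing: turn the whole vocal down during sibilant moments.
  deessed = vocal * 10 ** (-reduction_db / 20)
  sf.write("vocal_deessed.wav", deessed, sr)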

10. PHASE CANCELLATION

When two waves of the same frequency that are only slightly offset in time meet, phase cancellation occurs. The resulting interference can reduce the level of the affected frequency or even cancel it completely. This problem mainly shows up in the low-frequency range. It can be caused by technical errors, by modulation effects (e.g. flanger, chorus, phaser) or by recording a single sound source with several microphones. It is therefore important to also check the mix in mono to make sure the bass has not become too quiet; otherwise the track is simply not mono-compatible.

Our advice:

Pay attention to mono compatibility during mixing; this can be checked, for example, with a phase correlation meter. Small phase shifts can hardly be avoided and are often even accepted to make the sound more interesting. What matters is that no shifts occur in the low frequencies (below 200 Hz), as that has significant consequences.
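
If no correlation meter is at hand, an overall correlation value can be computed directly from the two channels. A minimal sketch, assuming numpy and soundfile and a placeholder file name; note that a real meter works on short time windows rather than the whole file:

  import numpy as np
  import soundfile as sf

  # Placeholder file name - your stereo mixdown.
  stereo, sr = sf.read("mixdown.wav")
  left, right = stereo[:, 0], stereo[:, 1]

  # Phase correlation: +1 = fully correlated (mono), 0 = uncorrelated, -1 = out of phase.
  correlation = np.sum(left * right) / np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
  print(f"Phase correlation: {correlation:+.2f}")

  if correlation < 0:
      print("Warning: the channels are largely out of phase - the mix will collapse in mono.")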

IS YOUR MIX READY?

In our free mix analysis, you can find out whether your song is ready to be refined on analogue equipment.

Chris Jones

CEO, mixing and mastering engineer. Has been running Peak Studios, the first online service provider for audio services, since 2006.
