
Editing

How to Perfect Your Audio Levels For Video (Advice from Engineers)

Drew Gula

Aug 16, 2021

At Soundstripe, we spend a lot of time thinking about how audio affects video production. Most of the time, that’s focused on the artistic side of things — how music and sound effects can help a creator add emotion, power, or energy to a project. For this piece, though, we consulted some of Soundstripe’s professional audio engineers for advice on the topic. Let’s dive in.

The relationship between audio and video goes much deeper than most people realize. There’s also the technical side of things: how sound quality can make or break a video, regardless of how beautiful the footage is.

A lot of things can go wrong when you’re editing a video. But one thing we’ve all experienced is watching something and not being able to hear the dialogue or the music. And let’s not even talk about the absurd volume levels of ads on most streaming platforms.  

All of these issues come down to a single thing: audio levels. And when you’re thinking about sound design for your next project, a great place to start is how you approach audio levels for video content.

To really cover everything you need to know about working with audio levels, we’ll have to split this topic into two very different stages: capturing the audio (the tracking process), and then editing that recording (the mixing process).

 

Two things to always watch out for with your audio levels

When it comes to mixing audio, it can seem like you’ve got a lot to keep an eye on. But the two things that you should always watch out for are peaks and compression.

Now, both of those things are natural parts of working with audio. But audio peaks create distortion, which means you’re losing both quality and the content itself. And the same goes for compression — a lot of people treat it as routine loudness normalization. But you have to be careful: go overboard and you’ll degrade the quality so much that the highest and lowest ends are lost, leaving you with a squashed audio file that hurts to listen to (and quietly crushes the souls of audio engineers everywhere). There are other potential drawbacks to applying too much compression, like amplifying background noise, exaggerating the perceived loudness of the resulting sound, or even causing unnatural-sounding fluctuations in your audio meters. Long story short, use it sparingly when you’re in post.

You didn’t do all that work just to throw it away. But if you find yourself doing a lot of cleanup, it may be that you’re trying to overcompensate for problems that could have been avoided entirely. And that comes back to how you’re capturing the audio in the first place.

What to consider when tracking audio

The only clean solution to a distorted audio track is to re-record the audio, which is obviously a hassle and something that you might not even be able to do depending on the project. And that makes what you do during production just as (if not more) important than what you do in post.

Headroom, headroom, headroom

You can’t salvage a mix from a badly recorded track, so giving yourself headroom while recording opens up more flexibility in the mixing stage. You’ll see plenty of recommendations online (especially on Reddit) that 3 dB of headroom is enough to work with while tracking.

Let’s clear the air now: 3 dB is likely not enough headroom. Even relatively quiet laughter can push the audio signal up by more than 3 dB. And if your cushion isn’t big enough, you’ll be stuck with clipped audio that makes post-production a nightmare.

Having a “gold standard” for audio levels is great in theory, but the truth is that any live performance — even a casual podcast conversation — is never as predictable as a mic test. We recommend giving yourself a good cushion of 10–12 dB. That should protect you against big peaks when they occur, and raising levels in post without sacrificing quality is easier now than it has ever been.
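To make those numbers concrete, here’s a minimal Python sketch of checking how much headroom a recording actually left. The function names are our own for illustration, and it assumes float samples normalized to the -1.0 to 1.0 range, where 0 dBFS is the loudest level the file can hold.

```python
import math

def peak_dbfs(samples):
    """Peak level in dBFS, assuming float samples normalized to [-1.0, 1.0]."""
    peak = max(abs(s) for s in samples)
    return float("-inf") if peak == 0 else 20 * math.log10(peak)

def headroom_db(samples):
    """How much room (in dB) remains between the loudest peak and 0 dBFS."""
    return -peak_dbfs(samples)

# A 440 Hz test tone recorded with its peak at -12 dBFS:
amp = 10 ** (-12 / 20)  # convert dB to a linear gain factor
tone = [amp * math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
```

With this setup, `headroom_db(tone)` comes back as roughly 12 — the cushion we recommend aiming for while tracking.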

You don’t want to risk an entire project because you underestimated how much headroom you would need to mix the audio, so give yourself more space than you think you’ll ever need.

Use backup mics whenever possible

One way to anticipate this (and generally avoid complications) is to always use backup mics. And since microphones are pretty affordable now, this isn’t a big investment, but it will be a huge asset.

You can give yourself options, which is always good. But you’ll have a better chance of mitigating clipping if you have multiple sources of audio at different distances, angles, etc. 

That could be a room mic paired with a mounted shotgun as the backup. Or maybe you want to use lav mics for everyone involved, but still keep that shotgun mic on hand for a different sound or a fallback audio source.

Headroom and backup mics aren’t lucky charms that will ward off every flaw. But they are tried-and-true methods that professional sound engineers rely on and recommend to others.

What to consider in your audio mix (post-production)

Good audio levels for video are meant to highlight any dialogue or spoken word. Whether you consider yourself a content creator, a filmmaker, or a marketing video guru, this is sort of a universal truth.

But what is not universal is audio mixing. There is no magic recipe for which volume each audio track should target. So take everything here with a grain of salt, and think of these tips as a good place to start rather than some sort of divine mandate.

Compression is a tool, not a solution

Let’s start off by talking about compression a little more thoroughly. 

First and foremost, compression is good for a lot of things in audio. It can actually help you deal with steep audio peaks or low valleys. But overdoing compression makes audio sound squashed or muddy — in this situation, too much of a good thing is a bad thing.
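As a rough illustration of what a compressor actually does to your levels — this is a deliberately simplified sketch, since real compressors also smooth their gain changes with attack and release times — here’s the core math in Python:

```python
import math

def compress(samples, threshold_db=-18.0, ratio=4.0):
    """Very simplified compressor: any sample whose level exceeds the
    threshold has its overshoot divided by `ratio`; everything below the
    threshold passes through untouched."""
    out = []
    for s in samples:
        level_db = -120.0 if s == 0 else 20 * math.log10(abs(s))
        if level_db > threshold_db:
            over = level_db - threshold_db
            gain_db = -(over - over / ratio)  # shrink the overshoot
            s *= 10 ** (gain_db / 20)
        out.append(s)
    return out
```

For example, with a -18 dB threshold and a 4:1 ratio, a sample peaking at 0 dBFS (18 dB over) comes out at -13.5 dBFS (4.5 dB over). Crank the ratio high enough and every peak gets flattened to the threshold — that’s the squashed sound you’re trying to avoid.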

Create space with EQ

And if you’re spending a lot of time balancing audio levels, you can use EQ to create space and draw attention to the different parts of your mix. Let’s say you’re mixing different components together — like a song from our library as well as a voiceover track —  and you’re having trouble hearing one.

If you pull up both tracks and look at where their frequency content peaks, you’ll be able to see where the two sources are competing for space in the mix. Cut those peaks in the song, and suddenly the voiceover comes through clearly because you made some space for it.
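In code terms, “reducing the peaks in the song” is a narrow EQ cut. Here’s a sketch of a standard biquad peaking filter (coefficients from the widely used RBJ Audio EQ Cookbook); dipping the music a few dB around the voice’s range is one way to carve out that space:

```python
import math

def peaking_eq(samples, fs, f0, gain_db, q=1.0):
    """Biquad peaking EQ (RBJ cookbook formulas). A negative gain_db cuts
    a band centered on f0 Hz; fs is the sample rate in Hz."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:  # direct form I difference equation
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out
```

Something like `peaking_eq(music, 48000, 2000, -4.0)` gently cuts the music around 2 kHz — roughly where spoken voice intelligibility lives — so the voiceover no longer has to fight for that range.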

Using stereo and mono tracks

Another common mistake is adding a stereo music track onto a mono channel. Not only does it clutter the mix, but it also destroys the stereo capabilities of the song.

Always be sure music is added to a stereo track — by taking advantage of that single aspect, you will create space in the mix so you can layer in sound effects or voiceovers without muddying things.
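Here’s a tiny Python sketch of why that matters. Putting a mono source on a stereo track just duplicates the signal to both channels, but forcing a stereo song onto a mono channel sums the sides, and any left/right separation in the mix is lost:

```python
def mono_to_stereo(mono):
    """A mono source on a stereo track: the same signal feeds both channels."""
    return [(s, s) for s in mono]

def stereo_to_mono(stereo):
    """A stereo song forced onto a mono channel: left and right are averaged,
    destroying whatever separation the song's mix relied on."""
    return [(left + right) / 2 for left, right in stereo]
```

In the extreme case, a sound panned hard left and its inverse panned hard right cancel to complete silence when collapsed to mono — the stereo image doesn’t just get cluttered, it disappears.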

(This video tutorial is kind of corny, but the visualization of stereo vs. mono does a great job of explaining what this is and why it matters in the audio mixing process.)

The listening experience

Panning can help you deal with this problem too. If multiple audio tracks are occupying the same space in the stereo field (the direction a sound “comes from” through a speaker), they’ll compete with each other.

So find the thing you want to focus on, pan it a little bit, and create some space so it stands out from everything else. You’ll want to use headphones for this, because if you pan too much, it’ll sound janky. 
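If you’re curious about the arithmetic behind the pan knob, here’s one common choice — a constant-power pan law — sketched in Python (DAWs differ on the exact law they use):

```python
import math

def pan(sample, position):
    """Constant-power pan. position runs from -1.0 (hard left) through 0.0
    (center) to 1.0 (hard right); returns (left, right) sample values.
    The cos/sin pair keeps perceived loudness steady as the source moves."""
    angle = (position + 1) * math.pi / 4  # maps -1..1 onto 0..pi/2
    return sample * math.cos(angle), sample * math.sin(angle)
```

A subtle offset like `pan(s, -0.2)` is usually enough to separate two competing sources; pushing toward the hard-panned extremes is where things start to sound janky.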

And that’s true for a lot of audio level mixing techniques. You’ll want to reference your mixes on multiple speakers every time. It’s common for people to have completely different listening experiences, because a laptop speaker can’t reproduce as much as a studio setup can.

When to ask for help

Unless you’re an experienced sound engineer, you probably can’t do everything you wish you could with audio. One of the biggest ways to make a difference is to mix your audio separately from the video footage — a digital audio workstation (DAW) has tools that Adobe’s products or DaVinci Resolve just don’t offer.

This is the point when outsourcing can make your life easier and make your audio better. Connecting with a professional audio engineer is worth the investment — they’ll be able to tweak and adjust in an hour what might take you a week.

As an added bonus to saving yourself time and hitting that professional tier quality, you’ll also learn some tips and tricks along the way. That can help you in future projects too.

Where to begin with your audio levels

Sound design is a blend between science and art. And like any other creative project, setting the audio levels will vary based on the project — there’s no “industry standard” to follow. But there are some pretty common places where you can start. 

Think of this starting point as the spotlight on a stage. If people are speaking in the video, that’s the content you want front and center in your audience’s attention. For that reason, you should make sure the audio track from your video footage is the highest audio level in your timeline.

Let’s say you’ve decided to work in the -22 dB to -12 dB range to make sure you’ve got a good amount of headroom. In that case, your dialogue can sit anywhere from -15 dB to -12 dB.

Most of the time, the loudest audio track will be speech: voiceover, character dialogue, music video vocals, etc. And you’ll want to keep background music well below that to keep from muddying the sound. (In our previous example, you’d probably put music close to -20 dB.)
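Those targets translate into simple gain math: every 20 dB of change is a factor of 10 in level. As a sketch in Python (the function name is ours, and the numbers are just the example targets above, treated as peak dBFS):

```python
def gain_for_target(current_db, target_db):
    """Linear gain factor that moves a track from its current level to the
    target level (both in dBFS). +20 dB is a 10x boost, -20 dB a 10x cut."""
    return 10 ** ((target_db - current_db) / 20)

# Dialogue recorded peaking around -24 dBFS, aimed at -12 dBFS:
dialogue_gain = gain_for_target(-24.0, -12.0)  # ~3.98x boost
# Music bounced at -14 dBFS, pulled down to sit near -20 dBFS:
music_gain = gain_for_target(-14.0, -20.0)     # ~0.5x cut
```

Multiplying every sample in a track by the returned factor moves its peak to the target, which is all a channel fader is doing under the hood.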

When you think about shot composition or color grading, you’re trying to layer things in a way that draws the viewer’s eye. Your audio levels can do the same thing for your viewer’s ear. 

But if you aren’t super familiar with sound design and audio mixing, all of these numbers might remind you of high school math.

Here’s a video that should clear things up...or at least explain it with visuals:

The sound team on Ford v Ferrari won one Academy Award and got nominated for a second. This video and their explanation provide a pretty fantastic example of why audio levels matter, and what they sound like once you’ve got things in perfect harmony.

Sound effects add a new level of immersion for the audience experience. But if they aren’t mixed in appropriately, they can quickly overpower each other. In a worst case scenario, they’ll form a blanket of noise that completely overwhelms the viewer.

At the end of the day, audio engineering is a very subjective science. There aren’t official rules to learn that will help you get the perfect audio levels every time. You just have to know how to use the tools available to you, or when to reach out for help.

And if it all sounds good from multiple systems, then you’re probably okay.

"What should my audio levels be?"

This is far and away the most common question filmmakers have about mixing audio. The reality here is that there’s no hard answer to use in every situation.

Really, the best we can offer is starting points and advice to protect yourself from big problems. Don’t set your audio levels so loud that they fight each other and create distortion. Don’t let them hit your compressor so hard that it squashes the sound by flattening the peaks.

But the main thing to remember is this: Develop your mix based on what you’re hearing from your speakers. It’s possible that things will sound strange if you just copy and paste the suggestions you read here, on Reddit, or in a message from a friend.

Audio engineering is a creative process, an art form in and of itself. So as long as what you’ve got sounds good, you should feel free to experiment, keep searching for the best gear, and seek out more ways to develop the audio experience.

In other words, do what sounds good.

The only rule is that you don’t want to hit your limits and lose the quality or clarity of the audio. And that’s something that only you will be able to judge and work around.