What is digital audio? A guide for music producers

Image Credit: Rev.com

A music producer spends hours behind their computer screen manipulating audio. But not just any audio. They’re playing with digital audio.

Digital audio has been a huge driving force in making music production cheaper, and therefore more accessible. If you’re spending so much time using digital synthesizers and making music, it pays to understand what digital audio is. More specifically, it pays to know the difference between analog and digital audio.

Analog and digital audio are two very different types of audio. In this article, we’re exploring everything you need to know about digital audio: how it differs from analog audio, and how it works.


What is the difference between analog and digital audio?

Analog technology replicates an acoustic soundwave perfectly. For example, tape recorders and vinyl players reproduce what you record. Well, until your vinyl is all scratched up after however many plays.

More examples of analog technology include modular synths, which use control voltages to produce audio signals and apply effects. Likewise, compressors like Warm Audio’s WA-2A Optical Compressor use light-dependent resistors to determine how much compression to apply. Analog equipment like this sends a direct current around its circuitry, where it interacts with onboard transistors, transformers, and other electrical circuit components. The end result is an electrically generated audio signal or effect.

Analog recording uses a microphone that converts acoustic soundwaves (vocals, guitar strumming, etc.) into an electrical signal. Then this signal is imprinted directly onto analog master tapes (somewhat like a cassette tape).

In contrast, digital technology converts your acoustic soundwave into binary data. Rather than electrical signals that rely on control voltages and circuit technology, digital technology relies on binary 1s and 0s.


Analog to digital conversion

To illustrate, we record our acoustic signal with a microphone. Microphones still convert our soundwaves into electrical signals – nothing has changed there. But rather than imprinting our electrical signal on analog tape, digital conversion dices the signal up, sampling it at a sample rate and bit depth that we can specify. Sample rate and bit depth work together to define the resolution of our digital signal. In turn, that total resolution defines how accurately our digital signal matches the original.

Therefore, more resolution = more accurate reproduction. It’s the same principle with digital images too. You can’t enlarge a low-resolution image to see it in better quality, can you?
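To make this concrete, here’s a minimal sketch in Python (using NumPy) of the sampling half of the process. The 440 Hz sine wave standing in for the “acoustic” signal, and the numbers chosen, are just illustrative assumptions:

```python
import numpy as np

# A hypothetical "acoustic" signal: a 440 Hz sine wave. A real analog
# signal is continuous; here we just evaluate it at the instants the
# converter would sample it.
frequency = 440.0       # Hz
sample_rate = 44_100    # samples per second (44.1 kHz)
duration = 0.01         # seconds of audio to capture

# One sample instant every 1/44,100th of a second.
t = np.arange(0, duration, 1 / sample_rate)

# Each sample is a snapshot of the waveform's amplitude at that instant.
samples = np.sin(2 * np.pi * frequency * t)

print(f"{len(samples)} samples taken in {duration} s")   # 441 samples
```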

Image Credit: Center Point Audio

Here, the analog soundwave represents the original soundwave in every way. But you’ll notice that the digital soundwave only represents segments of the original soundwave – those that have been sampled! This example shows a sample rate and bit depth that are less than ideal.

Here’s a closer look at how increasing the sample rate gives you a more accurate reproduction of an analog soundwave.

Increasing the sample rate gives you a more accurate reproduction of an analog soundwave.
Image Credit: iZotope

So, the accuracy of digital audio conversions depends on both sample rate and bit depth. In contrast, the accuracy of analog gear depends on the sensitivity of the equipment!

Unlike analog gear, digital technology cannot reproduce an acoustic soundwave perfectly. The amplitude values of acoustic soundwaves don’t always match the digital values (bits) available inside digital systems.

But fear not! Even the most trained ears may not be able to call a digital soundwave out for its imperfections.


Digital quantisation: sample rate vs bit depth

With all this talk of sample rate and bit depth, it’s a good idea to explore these topics in more detail.

Quantisation is the process we have just discussed: converting a continuous audio signal into a digital signal made up of discrete numerical values.

Characteristics like the frequency and amplitude of acoustic signals are converted into binary data that computers can read. This digital audio is what allows you to edit and manipulate your signal inside your DAW. But before your computer receives any binary 1s & 0s it can read, your acoustic signal must be converted into that data through a series of snapshot measurements – samples.


What is audio sample rate?

To reproduce any signal near-accurately, thousands of samples must be taken from your original signal every second. Each sample measures the signal’s amplitude at a particular instant in time (more on this soon). To summarise this brief introduction to sample rates: by measuring enough amplitude values, extremely quickly, we can reconstruct your signal in detail.

And the speed of those sample measurements is the sample rate: the number of samples taken per second. We measure the sample rate of a signal in kilohertz – a 44.1 kHz sample rate, for example, means 44,100 samples every second.


How the sample rate affects your digital signal

A soundwave is made up of cycles. One cycle has one positive and one negative amplitude peak. To find its wavelength – the length of an individual cycle – we need to measure both the positive and the negative half. To do so, each cycle must be sampled at least twice: once for each half.

This means that the sample rate determines the frequency range captured in the analog to digital conversion. By sampling a soundwave twice per cycle, we can work out the frequency of the final waveform itself. As a result, we can reconstruct a soundwave in digital form as long as the sample rate is at least double its highest frequency – because we are sampling every cycle twice. (This relationship is known as the Nyquist theorem.)
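In code, that doubling is all there is to it. A quick sketch, assuming we want to capture everything up to roughly 20 kHz, the upper limit of human hearing:

```python
# Nyquist: the sample rate must be at least double the highest
# frequency we want to capture.
highest_frequency = 20_000             # Hz, roughly the top of human hearing
minimum_sample_rate = 2 * highest_frequency

print(minimum_sample_rate)             # 40,000 samples per second
# CD audio samples at 44,100 Hz, comfortably above this minimum,
# so it can represent frequencies up to 22,050 Hz.
```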


Bit depth in audio

Bit depth sets the number of potential amplitude values that a digital system can read. In turn, this determines the dynamic range of the digital audio.

Acoustic soundwaves are continuous waves, meaning they have a countless number of possible amplitude values. So in order to measure soundwaves accurately, we need to round each sampled amplitude to one of a fixed set of binary values.

The most common bit depths are 16-bit, 24-bit, and 32-bit because of the audio resolution they offer.

But these are just binary terms that represent the number of possible amplitude values that we measure.

16-bit = 65,536 amplitude values

24-bit = 16,777,216 amplitude values

32-bit = 4,294,967,296 amplitude values

A higher bit depth gives you a higher audio resolution. This is because more amplitude values are available!
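Those counts are simply powers of two, so you can work them out yourself. A quick sketch below – note that the ~6 dB-per-bit dynamic range figure is the standard rule of thumb for integer formats, and 32-bit audio in DAWs is usually floating point, which behaves differently:

```python
# Each extra bit doubles the number of available amplitude values.
for bits in (16, 24, 32):
    levels = 2 ** bits
    # Rule of thumb: each bit adds about 6.02 dB of dynamic range.
    dynamic_range_db = 6.02 * bits
    print(f"{bits}-bit: {levels:,} values, ~{dynamic_range_db:.0f} dB dynamic range")

# 16-bit: 65,536 values, ~96 dB dynamic range
# 24-bit: 16,777,216 values, ~144 dB dynamic range
# 32-bit: 4,294,967,296 values, ~193 dB dynamic range
```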

Think of bit depth as a big box filled with smaller boxes. The big box is your dynamic range; the smaller boxes are the available amplitude values. A higher bit depth means more, smaller boxes packed into the same big box.

A cheesy analogy, we know. But we think it serves its purpose: with more boxes available, the actual amplitude of the original soundwave always lands closer to an available value. Therefore, you’ll get a more accurate reproduction of the original soundwave!

Image Credit: iZotope

If you know the sample rate and bit depth of a song, you can work out the bit rate of that song.

How to work out the bit rate of a song: bit rate = sample rate × bit depth × number of channels.

Streaming services use bit rate to describe audio stream quality, so it’s pretty handy to know!
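For example, here’s the formula applied to CD-quality stereo audio (44.1 kHz, 16-bit, two channels):

```python
# Bit rate = sample rate x bit depth x number of channels
sample_rate = 44_100   # samples per second (44.1 kHz)
bit_depth = 16         # bits per sample
channels = 2           # stereo

bit_rate = sample_rate * bit_depth * channels
print(f"{bit_rate:,} bits per second")   # 1,411,200 bps, i.e. ~1,411 kbps
```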


What is audio dithering?

A higher bit depth and sample rate will reproduce an acoustic soundwave in digital form with far more detail. However, there is a catch: the original soundwave, with its fluid shape, doesn’t always line up with a digital value – no matter how much resolution there may be.

At lower bit depths, rounding your original soundwave to amplitude values that don’t accurately represent it causes quantisation distortion. This is because of the limited number of digital values available.

The quantisation process can create certain patterns in the error, which we call correlated noise: particular resonances in the noise floor, at certain frequencies, in definitive parts of the audio. In these parts, the noise floor is higher than elsewhere and takes up amplitude values that the recorded signal can no longer access.

Quantisation noise usually sounds just like white noise, because that’s effectively what it is. However, it starts to sound like distortion when you severely lower the bit depth of the signal, rounding the signal off in a fashion not so different from a square wave.

Dithering is the process of masking quantisation errors when mapping an analog wave to digital bits. We prevent noise patterns by randomising how the final digital bit gets rounded, masking the errors with uncorrelated (randomised) noise.

In practice, audio dithering consists of adding a little more noise to our audio signal, which makes quantisation noise far less perceivable. Quantisation noise – the difference between your original signal and your quantised signal – occurs, for example, whenever we reduce the bit depth of an audio signal.
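You can compute that difference directly. A minimal sketch, assuming a sine wave quantised down to a deliberately low 4-bit depth so the error is obvious:

```python
import numpy as np

# A smooth "original" signal: a 440 Hz sine, sampled at 44.1 kHz.
t = np.arange(0, 0.01, 1 / 44_100)
original = np.sin(2 * np.pi * 440 * t)

# Quantise to 4 bits: only 2**4 = 16 amplitude values available.
steps = 2 ** 4 / 2 - 1          # 7 steps either side of zero
quantised = np.round(original * steps) / steps

# Quantisation noise is literally the difference between the two.
noise = original - quantised
print(f"peak quantisation error: {np.max(np.abs(noise)):.4f}")
```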


What does dithering do?

Dithering is the noise we deliberately introduce to an audio signal to mask the noise generated by quantisation.

So, we’re masking quantisation errors by randomising how the final digital bit gets rounded. This creates uncorrelated noise (randomised noise) rather than correlated noise, leaves more amplitude values free for the signal, and keeps the noise floor at the bottom of our dynamic range.

Image Credit: Sage Audio

Now that we’ve randomised the noise that quantisation has created, it’s much harder to hear it in the audio. It’s still there, but our ears can’t pick it out as well because there are no definitive patterns anymore.

So, you’re just swapping harmonic distortion that quantisation has created… for more noise!
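Here’s a minimal sketch of that trade, assuming simple uniform dither added just before the rounding step (real mastering tools typically use shaped or TPDF dither):

```python
import numpy as np

rng = np.random.default_rng(0)   # seeded so the sketch is repeatable

t = np.arange(0, 0.01, 1 / 44_100)
signal = np.sin(2 * np.pi * 440 * t)

steps = 2 ** 8 / 2 - 1           # reduce to 8-bit: 127 steps either side of zero

# Without dither: plain rounding, whose error follows the signal (correlated).
undithered = np.round(signal * steps) / steps

# With dither: add up to half a step of random noise *before* rounding,
# so the rounding error becomes uncorrelated, hiss-like noise instead.
dither = rng.uniform(-0.5, 0.5, size=signal.shape)
dithered = np.round(signal * steps + dither) / steps

print(f"undithered peak error: {np.max(np.abs(signal - undithered)):.4f}")
print(f"dithered peak error:   {np.max(np.abs(signal - dithered)):.4f}")
```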


Digital clipping vs analog distortion

Digital clipping occurs when you breach the digital ceiling of 0 dBFS. 0 dBFS (decibels relative to Full Scale) is the loudest point our digital audio can reach before it begins to digitally distort (clip).

Digital clipping sounds like a squared-off soundwave because that’s exactly what it is. As your signal hits and breaches the 0 dBFS ceiling, its peaks are flattened, and the waveform starts to resemble a square wave, with gritty character and extra harmonic content.
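A quick sketch of that squaring-off, assuming a floating-point system where 0 dBFS corresponds to an amplitude of 1.0:

```python
import numpy as np

t = np.arange(0, 0.01, 1 / 44_100)

# Push a sine wave 6 dB past full scale (a gain of roughly 2x).
too_loud = 2.0 * np.sin(2 * np.pi * 440 * t)

# Anything beyond the 0 dBFS ceiling (+/-1.0) is simply chopped off,
# flattening the peaks so the wave starts to resemble a square wave.
clipped = np.clip(too_loud, -1.0, 1.0)

print(f"samples flattened at the ceiling: {np.sum(np.abs(clipped) == 1.0)}")
```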

Analog distortion, on the other hand, occurs when your input signal is pushed louder than the peak voltage of a particular piece of analog gear can handle.

Image Credit: Modern Mixing

Digital clipping sounds the same no matter what plugin, gear, or sound has overdriven the signal. But analog distortion will sound different on different pieces of analog hardware. 

In short, this means that digital clipping doesn’t inherit any unique characteristics of a particular piece of gear or software. But analog distortion does! Analog distortion can change based on the unique circuitry of the particular hardware that the signal is travelling through.

For example, analog distortion from a tube-based circuit would sound different to analog distortion from a voltage-controlled amplifier.

