What is “TikTok Music”? TikTok trademark the name

TikTok look to take on Spotify and Apple Music, as “TikTok Music” is discovered in a US Patent and Trademark Office filing.

Business Insider found the trademark application, which was filed back in May by TikTok's parent company ByteDance. This is the second public reference to the name from the Chinese technology company, after a "TikTok Music" trademark was first filed in Australia in November last year. The application references the following:

“Downloadable mobile applications for mobile phones and tablet computers allowing users to purchase, play, share, download music, songs, albums, lyrics, quotes, create, recommend, share his/her playlists, lyrics, quotes, take, edit and upload photographs as the cover of playlists, comment on music”

“Allowing users to live stream audio and video”

US Patent and Trademark Office

These details aren't too surprising. ByteDance launched music streaming service Resso in Brazil, India and Indonesia in 2020. While offering typical music streaming features like playlists, Resso stands out from the crowd with a heavy focus on social media, allowing for easy sharing across channels, as well as a built-in community. Thanks in part to a push from TikTok in Brazil, where videos link out to the full track on Resso, the music streaming service has already seen huge success. Reports show Resso with over 40 million monthly users as of November 2021, and 304% growth between January 2021 and January 2022 in India alone.

Will TikTok Music have a similar impact on the music industry as TikTok had on social media and video? Only time will tell.


Get discovered and earn revenue from your music with free uploads to TikTok and Resso (and maybe even TikTok Music one day) thanks to RouteNote.

Spotify app update launches Home Feeds so you can avoid podcasts

Image Credit: Spotify

New Home Feeds on Spotify’s latest Android update keep music and podcasts separate and improve the discovery features of the app.

Spotify is splitting music and podcasts into two separate feeds on the app. Whilst the current Home page remains, the new scrollable feeds will be focused on recommendations for the user, as the audio company aims to make the Spotify app even more personalised for each user.

The changes are coming to Spotify on Android first, with iOS users’ apps updating too in the near future. There’ll be two new feeds, accessed from buttons at the top of the home screen, separated into Podcasts & Shows and Music.

The feeds will have playlist and album suggestions and podcasts Spotify thinks you’ll like, based on your listening history and interactions on the app. Discovery on Spotify is one of the biggest draws of a Premium subscription to the streaming platform. Provided you’re listening to a range of content, its recommendations are spot on.

On the Podcasts & Shows feed you can see descriptions, save episodes of podcasts to listen to later by hitting the plus icon, or go ahead and press play straight away.

Over on the Music feed, suggestions of new playlists and albums will be made based on your listening taste. A button makes sharing to social media and other platforms accessible.

If you don’t like to listen to podcasts on Spotify, that feed will be empty, and you can scroll the Music feed without being interrupted by the episode recommendations you’d get if combined.


Want your own music on Spotify? Check out RouteNote’s quick and easy free distribution. Since 2007 we’ve been helping unsigned artists and indie record labels to release their music to all the major digital music streaming platforms.

Keep the rights to your songs with no sneaky small print or hidden fees. Find out more here!

How to add music to a Facebook Story or Instagram Stories in Bangladesh, Ukraine, Vatican City, and San Marino

Image Credit: Meta

Facebook and Instagram Music has arrived in new regions including Bangladesh and Ukraine.

No more messages saying “Instagram Music isn’t available in your region” when adding music to Stories on Instagram and Facebook in Ukraine, San Marino, Bangladesh, and Vatican City. Instagram and Facebook users in these regions can now put music on their content for the first time.


How to add music to a Facebook story or Instagram story


Fans can now show off their favourite songs, from local artists and further afield.

Simply select Stickers in your Story window and tap Music to browse tracks and search for songs.


In another expansion, Instagram Reels have also finally arrived in Pakistan and Nepal as well as in Ukraine, Vatican City, San Marino and Bangladesh.

You can find out more about adding music to Reels and Stories, as well as how to upload your music to Instagram and Facebook, here:

Social media is increasingly becoming the main place that listeners are discovering new music. Having a song on a viral social media post can give your music streams a massive boost.

With RouteNote you can get your tracks into Instagram Reels and Facebook and Instagram Stories for free, so your music can soundtrack creations across the world. No matter where you are, you can upload your tracks to TikTok, Instagram, Facebook, and YouTube Shorts and monetize your music.


Head here to find out how RouteNote’s award winning music distribution can help you get your music heard.

YouTube may let creators use copyright protected music while keeping revenue

YouTube are expanding their "partnership with the music industry" to let video creators use licensed music and keep a portion of the revenue.

Just three days after Meta announced Music Revenue Sharing on Facebook, YouTube have also announced expanded partnerships with the music industry. As we previously mentioned, smaller creators (who don't have the contacts or budget to license copyright-protected music) are usually stuck with two less-than-ideal options when including music in their videos: royalty-free music from unrecognised artists, which lets them keep all of the revenue generated, or copyright-protected music from higher-profile artists, where all of the revenue goes to the music rightsholder. Facebook's Music Revenue Sharing hopes to strike a balance, allowing creators to use copyrighted music while keeping 20% of the revenue.

While the details are currently limited, Google's updated "YouTube test features and experiments" support article and the video below hint that the video platform is set to implement a similar system in the near future:

[July 28, 2022] Expanding our partnership with the music industry to give creators more options to include music in their videos: We are starting to experiment with ways to grow creators’ music options for their content. This includes introducing the ability for creators to access our partners’ music while still being able to earn revenue on their videos. Right now we’re still building and testing with a limited set of creators, and will have more news to share in the coming months. Stay tuned!


As with Facebook's feature, this could be seen as controversial from the artist's point of view, with less revenue ultimately going to the artist. But I think it makes sense given the direction of the music streaming industry, as long as artists have control over which tracks are included and know exactly what percentage they are giving up in exchange for the increased exposure. It's also worth noting that creators' content must hit certain standards in order to qualify. For example, with Facebook's Music Revenue Sharing, the audio must not be the primary purpose of the video.

These moves from Facebook and YouTube will undoubtedly be noticed by other video platforms like TikTok, and will surely shake up the sync industry, forcing companies to adapt in order to stay relevant against some of the biggest players in tech.

No word on which artists/creators the feature will roll out to first, the catalogue size, or the percentage split yet, as YouTube continue to build and test the feature. Google says there will be more news in the coming months.


While we don’t currently offer this feature, RouteNote artists can claim 100% of the royalties generated by YouTube videos that use their music. Click here to learn more and sign up for free!

How to play at a music festival

Read our top tips for how to play a music festival, from how to get on a festival lineup, preparing to play, and advice for the day itself.

Playing your first music festival feels like a big, exciting step. You can reach new fans who’ve just happened to wander past the stage, play to bigger crowds than you ever have before, meet other artists, and get paid.

Plus – you never know which industry professionals might be watching from the side. Music festivals are also about networking. It’s work – but it’s a lot of fun, too.

Want to see your name on a lineup? Check out these seven tips for how to perform at a music festival.


Different festivals have different booking processes

Big, famous music festivals are filled through artists’ booking agents liaising with festival bookers. Many festivals, such as SXSW, will have their own submission process.

In the UK, for example, BBC Introducing host stages at festivals and book artists who are just starting out and need that initial push up onto the next rung of the industry ladder.


Getting a music booking agent

Some artists employ a booking agent to approach festival organisers. They’ll have the contacts needed to pitch your act as worthy of being included on lineups.

When you introduce yourself, send an Electronic Press Kit featuring examples of your music, stats on your social following and streaming numbers, and details of where you've played previously, including footage of you performing live if you have it.

If you present yourself to the booker with a good idea of your genre and vibe, they’ll be able to choose appropriate festivals. You need to make it as easy as possible for them to take you on.


How to play a festival without a booking agent

For an artist without a booking agent, the DIY approach is the only route. You’ll need to research how each festival books artists, and contact them to introduce yourself and persuade them to put you on their lineup.

For smaller festivals that don’t have a submission process, be sure to find out exactly who the festival booker is for each event and reach out personally, introducing yourself and sending an EPK.

Find out where the booker is based, and you could even invite them along if you play a show nearby, so they can see you in action.

After the initial contact, if you’ve heard nothing, be sure to follow up every couple of months, especially if you have new releases going out.


Build hype first to secure your first festival booking

Release your music to streaming services to start building a fanbase and get your artist name out there. Bookers need to see that you can draw a crowd. At RouteNote our music distribution is free and unlimited forever, so you can put your music on Spotify, Apple Music and platforms around the world and people can start discovering your music.

Alongside this is making sure you have a solid brand, both onstage and online. You want to be memorable to festivalgoers and the promoters who will get you there. That means a good logo and a solid social media presence.

Once you’re on festival lineups, use social media marketing to generate hype and encourage your followers to grab tickets to support you on the day. With RouteNote you can get your music on TikTok, as well as Facebook and Instagram to use in videos and give fans a taste of what to expect.


When to contact music festivals

If you’re aiming for summer festivals, start reaching out to festival bookers in November. The beginning of the next year is the time that festival lineups start taking shape. You don’t want to get left behind.

Bigger festivals want to see progression – that you’ve had experience playing, and you’ve moved to bigger venues as you’ve grown as an artist. Larger venues mean a bigger fanbase – which means ticket sales for the festival, which is of course what the promoter is interested in.

You should also try and focus on festivals within your music genre, especially at first. If you’re an EDM producer, for example, research dance festivals with smaller stages.


Start local

When you’re starting out as an artist, music festivals feel like a step up, and require more professionalism than your local bar. You need to make a good impression and start making a name for yourself, to get booked year after year and play a wider circuit.

Aim for local festivals first. The earlier, opening slots will go to local bands and solo artists, for one thing because this encourages their local fanbase to buy tickets.

Once you’ve performed at a festival and made a fantastic impression, you can reach out in subsequent years and already be on their radar.


What to expect as an artist at a music festival

On the day, be prepared with everything you need to get onstage and set up quickly. There’ll be quick changeovers and an unfamiliar setup. Practise getting on and offstage and make sure everyone knows what the setlist is.

When you play a festival it's easy to get swept up in the party atmosphere. But you're there to play a show, not to attend the festival, and if you act like you would as a ticketholder there's a danger you'll have too much fun.

Turn up drunk or late and you’re going to ruffle some feathers, and get a reputation that will make life harder for yourself in both the short and long term.


Start building hype for your first festival appearance by getting your music onto streaming platforms. RouteNote can put your music on Spotify and all the major services and stores quickly, easily and for FREE. Find out more here and sign up today.

Perks of Spotify Premium now include separate shuffle and play buttons

Image Credit: Spotify

How to shuffle on Spotify Premium is changing as users finally get separate play and shuffle buttons on the Spotify app.

Spotify is separating the shuffle and play buttons for Spotify Premium users. Previously the icons were one button – now they’ll be two, giving users a clear choice of whether or not to randomise music on playlists and albums within the Spotify app.

In a statement Spotify said it was a move aimed at “improving the listening experience.” Android and iOS Spotify users will see the new buttons roll out in the next few weeks.

The change is small, sure, but one that will put a stop to the grumbles from Spotify Premium users who have been asking for a separate shuffle button for years. Spotify Free users will miss out – it’s a feature you’ll have to subscribe to access, along with ad-free listening and other perks.

Shuffle on Spotify sparked a debate in the music industry last year when superstar Adele criticised the service for making albums shuffle by default. The company listened, and subsequently amended the feature so albums could automatically be listened to in the running order artists intended.


Release your own music on Spotify for free with RouteNote. Our award-winning music distribution covers 95% of the digital market, putting your songs on all the big music streaming services and stores around the world.

And you keep all the rights to your music. Find out more here!

How to put a sleep timer on Spotify

Image Credit: Spotify

Can you put a sleep timer on Spotify, to listen to music while sleeping? Yes, and it’s easy – here’s how to set it up.

What is a Spotify Sleep Timer? Designed to help you drift off whilst listening to a podcast or music, the feature shuts off whatever’s playing at a time of your choosing.

If you’re one of those people who just can’t fall asleep without some relaxing tunes or a podcast nightcap then the feature is perfect for you. It’s available on the Spotify app on iOS and Android. Hopefully when your timer for Spotify runs out you’ll be fast asleep, and won’t even notice the audio has stopped playing.


How to use Sleep Timer on Spotify

You can access Sleep Timer from the Now Playing screen. Select the Now Playing bar to see the full-screen view, then:

  1. To put a timer on podcast episodes, tap the moon icon on the righthand side of the play button.
  2. To set a timer when playing songs, tap the three-dot icon, then scroll down to find Sleep Timer at the bottom of the menu.
  3. Pick a time limit for when you want the music or episodes to finish: End of Track; 5, 10, 15, 30, or 45 minutes; or 1 hour.
  4. To check how long is left on your countdown, tap the three dots and scroll down the menu to find the time remaining next to the Sleep Timer logo.


If you’re lacking inspiration for something relaxing to listen to as you drop off then check out Spotify’s Sleep playlist, with a range of selections made for late night listening.


Use RouteNote to upload your songs to Spotify for free and who knows – people around the world could soon be falling asleep to your tracks. Find out about our digital music distribution here.

The Spotify Car Thing has been discontinued

Image Credit: Spotify

RIP Car Thing. Spotify has stopped production of its in-car hardware device, quietly announcing it will no longer be making the dashboard Bluetooth accessory.

News of the axing of the Spotify Car Thing Bluetooth player came as part of Spotify’s quarterly earnings report. It’s been a long road to a dead end for the now-written off podcast and music hands-free device.

The hardware device had an initial invite-only release in April 2021, followed by an eventual US rollout this February that saw the car music player retailing for $89.99. Updates were still rolling out as recently as April this year.

So why is Car Thing being discontinued? According to TechCrunch, Spotify said in a statement: “Based on several factors, including product demand and supply chain issues, we have decided to stop further production of Car Thing units.”

“Existing devices will perform as intended. This initiative has unlocked helpful learnings, and we remain focused on the car as an important place for audio.”

Car Thing is still available – on sale at a reduced price of $49.99, presumably until stocks run out, at which point it will be dead forever. According to Android Police you can also get a discount on the device for a limited time only. How long support for the device will continue hasn’t been announced.

When so many vehicles have entertainment systems built in that easily connect to multiple platforms, Spotify’s device, locked to their platform alone, was always going to be a hard-sell.


Did you know you can release your own music on Spotify for FREE with RouteNote? Find out more here.

Top 20 all time best album covers

From the weirdest to the coolest, what’s the best album cover? In an attempt to find the best album cover of all time, we’ve compiled a list of the top 20 – check it out here.

Whether it's hidden Easter eggs, a controversial political point, crazy fonts, stylish design decisions, famous artworks, or just a really well-positioned band photo, the best album covers catch the eye and make you think.

Scroll on for a collection of what we consider to be the coolest album art. And once you’re suitably inspired, head here for tips on how to design your own album artwork! 


Nicki Minaj – The Pinkprint


Joanna Newsom – Have One On Me


Rage Against the Machine – Rage Against the Machine


The Beatles – Abbey Road


The Beach Boys – Pet Sounds


Pink Floyd – Dark Side of the Moon


Miles Davis – Bitches Brew


Fleetwood Mac – Rumours


The Clash – London Calling


Kanye West – Graduation


Patti Smith – Horses


Grace Jones – Island Life


Sex Pistols – Never Mind The Bollocks, Here’s the Sex Pistols


Janet Jackson – Rhythm Nation 1814


Blur – Best of


Lizzo – Cuz I Love You


Run the Jewels – Run the Jewels 3


Neil Young – On The Beach


Meat Loaf – Bat Out Of Hell


Nirvana – Nevermind


At RouteNote we give you everything you need to release your own music. For FREE you can get your songs on Spotify, Apple Music, and all the major streaming services and stores around the world, keeping all the rights to your music and 85% of revenue.

For quick and easy unlimited music distribution with no hidden fees, sign up to RouteNote today.

What is digital audio? A complete guide for artists

Independent artists around the world use digital audio to make music in their home studios every day.

Thanks to digital audio, home recording equipment is far more affordable than it was in the days of analog. Rewind 30 to 40 years and recording music was a drastically different process.

Back in the 70s, you’d have needed some chunky equipment like a reel-to-reel tape deck to record anything professionally at all. Now you just need a semi-decent computer and an audio interface!

To understand why digital audio has made such a positive impact on the music industry and created an ever-growing web of independent artists, we’re going to explore the following topics:

Let’s get stuck in!


What is the difference between analog and digital audio?

Digital audio technology converts soundwaves into digital/binary data that a computer can read.

Meanwhile, analog technology like a microphone converts sound waves into electrical signals that rely on circuit technology.

To record at home today, you’ll need an audio interface and a laptop. However, if you were recording at home in the 70s then you’d have needed a reel-to-reel tape deck or a cassette.

‘Analog’ recording imprints a waveform onto tape via charges or onto vinyl via grooves.

Before digital audio, you’d need a reel-to-reel tape deck to record at home. The 1981 NAGRA-TA-A analog tape deck is one such example. Image Credit: Nagra Audio

Then, we can play the recording back through speakers via an electromechanical transducer.

For example, a tape deck reproduces an electrical signal from the tape for the speakers to play after amplification.

In contrast, a vinyl stylus (the needle) generates an electrical signal as it scratches the vinyl grooves and transports the signal via wire to an amplifier.

With that said, vinyl is only a means of playback while a tape deck is a means of recording and playback.

It's often said that digital recording can't reproduce a soundwave as accurately as analog.

This isn't true, for reasons we will explore together throughout this post – both analog and digital recording tech can reproduce the original audio faithfully.

Analog to digital conversion (ADC)

The two processes of recording with analog and digital technology are incredibly different.

While analog recording equipment imprints soundwaves as electrical charges on magnetic tape, at the core of digital audio is sampling.

For example, a digital device like an audio interface will take snapshot samples of an input signal at specific intervals.

Altogether, this process of sampling audio and converting it to digital is called Analog to Digital Conversion (ADC).

Here the analog soundwave (top right) represents the original soundwave (left) in every way, but the digital soundwave (bottom right) only represents the sampled segments of the original – an example of a less-than-ideal sample rate and bit depth.

A fitting name, don’t you think?

So rather than imprinting a soundwave onto analog tape or vinyl, ADC samples a microphone or instrument’s electrical signal at specific points in time to reconstruct it in the digital realm.

And once the ADC process is finished, you can store or send this digital audio file to a digital device such as a computer, an iPad, or a smartphone.

Moreover, on computers, we can process digital audio files in a DAW in what seems like an unlimited amount of ways. And this is called digital signal processing (DSP).

To illustrate this concept, the diagram below shows how DSP is used in an MP3 audio player during the recording phase.

Analog to Digital Conversion (ADC): the mic converts our acoustic soundwave into an electrical signal, and a receiving input device like an audio interface samples that signal at specific intervals to reconstruct it as a digital (binary) signal.

After ADC, DSP captures the digitized signal and processes it. Once we’re done, we can send the signal back out into the real world, so to speak.

However, when you listen back to the digital audio through headphones or speakers another process must occur before the audio reaches your ear.

This process is the reverse of Analog to Digital – it’s Digital to Analog Conversion (DAC). Through the DAC process, digital equipment like an audio interface reconstructs the original analog (electrical) signal from the digital signal and powers the speaker cones.

Digital to Analog Conversion (DAC): when you click 'Play', your audio interface receives the digital signal back from your computer, reconstructs the electrical signal from the binary data, and sends it to your speakers, which amplify the signal before playing it back.

Finally, both the ADC and DAC processes use filters that smooth the final audio throughout recording and playback.

A low-pass anti-aliasing filter removes frequencies that are too high to be captured at the chosen sample rate. In other words, an anti-aliasing filter ensures all frequencies in the signal are sampled correctly during ADC.

Don’t worry if that last bit is a bit confusing. We’re coming back to aliasing later on with lots more detail!

Meanwhile, a smoothing filter interpolates the waveform between the samples during the digital to analog conversion. In other words, the interpolating filter ensures the segments between the sample intervals represent the original signal.


Digital audio resolution

If you’ve been shopping for audio interfaces before then I know you’ve seen the phrases “sample rate” and “bit depth”.

Well, those are concepts that determine the quality of the digital reconstruction of an analog signal, or audio resolution for short.

Digital image resolution and color information depend on pixel amount and bit depth. Likewise, the resolution of a digital audio signal depends on sample rate and bit depth.

In digital audio, both sample rate & bit depth work to create a total bandwidth that defines the quality of our digital audio signal.

Therefore it’s the bandwidth that defines how accurate our digital signal is compared to our original signal.

More bandwidth = more accurate reproduction

In contrast, the accuracy of analog gear depends on the sensitivity of the recording equipment you’re using!
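
Curious what resolution your own recordings actually have? Here's a minimal sketch using Python's built-in wave module that reads those properties straight from an uncompressed WAV file (the file name is just a placeholder):

```python
import wave

# "my_recording.wav" is a placeholder – point this at any uncompressed WAV file.
with wave.open("my_recording.wav", "rb") as wav_file:
    sample_rate = wav_file.getframerate()     # samples per second, e.g. 44100
    bit_depth = wav_file.getsampwidth() * 8   # bytes per sample converted to bits
    channels = wav_file.getnchannels()        # 1 = mono, 2 = stereo
    seconds = wav_file.getnframes() / sample_rate

print(f"{sample_rate} Hz, {bit_depth}-bit, {channels} channel(s), {seconds:.2f} seconds")
```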

Sample rate

The number of sample measurements taken per second, and therefore also the speed at which they’re taken, is the sample rate.

To reproduce any signal near-accurately, thousands of samples must be taken from an audio signal per second.

In the image below, you can see that taking more samples per second gives us a smoother waveform.

Taking more samples of the soundwave gives you a more accurate digital reconstruction.

The digital result in the first example is blocky and doesn't capture the smoothness of the original audio signal.

The second example takes more samples of the audio signal and gives a closer representation.

In the final example, we've taken enough samples to reconstruct the audio signal accurately.

So the sample rate of a digital signal describes the number of samples taken and the speed at which they were taken.

As you’ll notice when shopping for an interface, the unit of measurement for sample rate is kilohertz (kHz).

This means that the common sample rate of 44.1 kHz translates to 44,100 samples taken per second.

A list and a brief history of sample rates

To understand why 44.1 kHz is the most common sample rate, we have to go back in time.

When digital audio was first entering the scene, file size and storage capacity was a big concern. After all, computers were not what they are now – storage was quoted in megabytes, not gigabytes or terabytes.

That’s one reason the CD standard of 44.1 kHz was and is a decent halfway line between audio quality and mitigating storage issues.

But the main reason it became the standard CD format is because a sample rate of 44.1kHz is enough to reproduce the frequencies humans can hear – for reasons we will shortly explore.

Below is a list of the common sample rates:

  1. 44.1 kHz – 44,100 samples per second
  2. 48 kHz – 48,000 samples per second
  3. 88.2 kHz – 88,200 samples per second
  4. 96 kHz – 96,000 samples per second
  5. 192 kHz – 192,000 samples per second

While shopping for any recording gear, it’s essential to note that you can’t record in any resolution that exceeds that of your interface.

To illustrate, you can’t record at 96kHz if your interface has a limited sample rate of 44.1kHz. Make sure you look at the tech specs!

How the sample rate affects your digital signal

If you're looking to record a stereo signal, there are two channels to capture, and each channel is sampled at your chosen sample rate.

Recording in stereo therefore gives you double the data to capture. The sample rate itself stays at 44.1 kHz, but the converter takes 44,100 samples per second for each channel.

In other words, recording a 1-second stereo file at 44.1 kHz means taking 88,200 samples in total.

A soundwave is made up of cycles. Furthermore, each cycle has one positive and one negative amplitude value – the measure of its signal strength.

Measuring the positive and negative amplitude values of a wave cycle allows us to determine the length of the cycle.

To find its wavelength – the length of an individual cycle – we need to measure both the positive and negative amplitudes of said cycle.

As a result, we must sample every cycle’s positive and negative amplitudes in the ADC process to accurately measure and reconstruct the amplitude of the original signal.

And sampling every cycle twice is the minimum amount required to avoid aliasing.

Aliasing: why do we take at least two samples per cycle?

But the process of sampling can go wrong.

Taking too few samples per cycle leads to the DSP unit interpreting the soundwave incorrectly and creating an unwanted ‘alias‘ of the original.

An alias is an unwanted false representation of a soundwave. To demonstrate, let's take a look at these examples:

In the first image, the reconstructed soundwave (3) resembles the original soundwave (1) perfectly. That’s because enough samples were taken (2) to capture the frequency of the soundwave.

But in the second image, we haven’t increased how many samples we want to take (5) despite rapidly increasing the frequency of the audio (4).

As a result, our reconstructed waveform (6) resembles that of the first example with a much lower frequency.
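
If you'd like to see aliasing in action, here's a minimal sketch (assuming you have Python and NumPy installed) that samples a 30 kHz cosine at 44.1 kHz. The samples it produces are identical to those of a 14.1 kHz cosine – the alias – because 30 kHz sits above the limit we're about to meet:

```python
import numpy as np

fs = 44_100              # sample rate in Hz
n = np.arange(fs)        # one second's worth of sample indices
f_real = 30_000          # a tone above the 22,050 Hz limit for this sample rate
f_alias = fs - f_real    # 14,100 Hz – where the tone "folds back" to

above_limit = np.cos(2 * np.pi * f_real * n / fs)
folded_back = np.cos(2 * np.pi * f_alias * n / fs)

# The two sets of samples are indistinguishable: the converter would record
# a 14.1 kHz tone even though the original tone was 30 kHz.
print(np.allclose(above_limit, folded_back))   # True
```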

Nyquist-Shannon Theorem: the method behind the madness

With these above examples in mind, you can see why we need to take at least two samples per wave cycle.

Taking two per cycle gives us a digital waveform with the frequency information of the original wave intact.

So taking fewer than two samples per cycle leads to a digitized signal with a lower frequency than our original signal.

Therefore we can conclude that we need a sample rate of at least double the highest frequency of our signal. And this is what we call the Nyquist-Shannon theorem.

Using the most common rate as an example, the highest frequency we can record at a sample rate of 44.1 kHz is 22.05 kHz. This is perfect for us humans as our ears can only hear frequencies between 20 Hz – 20 kHz.

Though we could use a higher sample rate, we do still have the issue of a bigger file size. As we discussed earlier, 44.1 kHz became the standard sample rate due to (lack of) storage space and audio quality.

However, with the Nyquist-Shannon theorem in mind, it's clear that 44.1 kHz is a sample rate high enough to capture all frequencies that humans can hear.

Using a higher sampling rate does have its uses though. Sampling frequencies above the threshold of human hearing allows you to capture the nuances in the upper-frequency range which can add value to the listening experience of a track.

Unfortunately, fans will only be able to play your track at 44.1 kHz on streaming services like Spotify anyway.

No matter how high a sample rate you use while recording, CD quality audio is the standard for streaming services.

Bit depth

After sampling the audio, your computer needs to store your recording. And computers store information in binary 1s and 0s, otherwise known as bits.

The number of available bits determines the number of amplitude values that a digital system can store. By way of illustration, more bits allow the system to store more possible amplitude values.

Soundwaves are continuous waves, meaning they have a countless number of possible amplitude values. 

So to measure soundwaves accurately we need to establish their amplitude values as a set of binary values.

Going back to our digital image example, if you had a 4K image but a low bit depth then the colours of the image wouldn't be very rich, despite all the detail those thousands of pixels could potentially hold!

Likewise, recording a 1-second audio signal with a sample rate of 96kHz but an 8-bit depth really limits how your gear can map those 96,000 samples to digital bits. What you’d get is something like this:

To be able to map the amplitude of a wave cycle accurately, a digital system must have enough available bits.

Consequently, too few bits leads to a jagged reconstruction that isn't a true reconstruction of your recording.

You can see how sample rate and bit depth work together to reconstruct one wave cycle in the image above.

As a result of more available bits, the reconstructed wave cycle is much more accurate thanks to the correct mapping of amplitude values.

The popular bit depths are 16-bit, 24-bit, and 32-bit, but bear in mind that these phrases are just binary terms that represent the number of possible amplitude values available.

  • 16-bit = 65,536 amplitude values
  • 24-bit = 16,777,216 amplitude values
  • 32-bit = 4,294,967,296 amplitude values
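
Those numbers aren't arbitrary – each is simply 2 raised to the power of the bit depth. A quick Python sketch, purely for illustration:

```python
# Every extra bit doubles the number of amplitude values a system can represent.
for bit_depth in (16, 24, 32):
    print(f"{bit_depth}-bit = {2 ** bit_depth:,} amplitude values")

# 16-bit = 65,536 amplitude values
# 24-bit = 16,777,216 amplitude values
# 32-bit = 4,294,967,296 amplitude values
```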

While 24-bit depth provides more bits than 16-bit depth, there is almost no point in recording with any higher than 16-bit due to the standards of streaming stores.

As with the 44.1 kHz sample rate issue, stores require tracks to be in 16-bit too. So streaming stores quite literally ask for CD quality audio (44.1 kHz sample rate with 16-bit depth).

But a higher bit depth gives you:

  1. Greater dynamic range
  2. A greater signal-to-noise ratio (SNR)
  3. More headroom before clipping occurs while recording

Take note, though, that a higher sample rate and bit depth will also give you a bigger audio file.

How to calculate the file size of your digital recording

Using the sample rate, bit depth, and the number of channels in our project, we can calculate what its file size will be when we export it as an audio file.

The following formula gives you the bit rate of your audio file in bits per second, which you can then convert to megabits per second (Mbps).

Sample rate x bit depth x number of channels = bit rate (bits per second)

The bit rate of your audio file determines how much processing your computer must undertake to play your audio file.

So let’s say we record one guitar riff with a sample rate of 44.1 kHz and 24-bit depth.

44,100 x 24 x 1 = 1,058,400 bits per second – roughly 1.1 megabits per second (Mbps)

Then, after calculating the bit rate of your recording, multiply it by the number of seconds of your recording. Let’s say we recorded for 20 seconds.

1.1 x 20 = 22 megabits – roughly 2.75 MB once converted to bytes (divide by 8)
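
If you'd rather let the computer do the maths, here's the same calculation as a small Python sketch. The inputs are just the guitar-riff example above, and the result comes out a touch lower than the rounded figure because it uses the exact bit rate:

```python
def recording_size(sample_rate, bit_depth, channels, seconds):
    """Estimate the bit rate and file size of an uncompressed recording."""
    bits_per_second = sample_rate * bit_depth * channels   # the bit rate
    total_bits = bits_per_second * seconds
    megabytes = total_bits / 8 / 1_000_000                 # 8 bits in a byte
    return bits_per_second / 1_000_000, megabytes

bit_rate_mbps, size_mb = recording_size(44_100, 24, 1, 20)
print(f"Bit rate: {bit_rate_mbps:.2f} Mbps")   # ~1.06 Mbps
print(f"File size: {size_mb:.2f} MB")          # ~2.65 MB (using the exact, unrounded bit rate)
```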

What is the Signal-to-Noise Ratio (SNR)?

SNR refers to the difference in level between the maximum signal strength your gear can output relative to its inherent electrical noise, measured in dB.

All electrical equipment produces noise. But a higher bit depth in recording gear allows for a louder recording before the noise becomes apparent.

An SNR can be a negative number, a positive one, or even zero. An SNR over 0 dB tells us that the signal level is greater than the noise level, so a higher ratio means better signal quality!

So if your interface has an SNR of 100 dB, you can record a signal up to 100 dB above its noise floor before that noise becomes noticeable in your recording.

This is why we wouldn’t push our input gain as hard as we can while recording. Doing so may only make the noise floor of our device more apparent.

In summary, higher bit depths do give us a lower noise floor.

What is dynamic range?

The dynamic range of a signal is the difference between its loudest and softest parts, measured in decibels (dB).

And if a digital system like an audio interface has 100 dB of dynamic range, this means it can accurately register signal strengths between -100 dBFS to 0 dBFS (decibels at Full Scale).

Dynamic range is important in music due to how our hearing works. To perceive something as loud, we need to perceive something as quiet – like a frame of reference.

So a greater dynamic range in recording gear allows for a bigger contrast between loud and quiet sounds in your recording.

And we can calculate the maximum dynamic range of a digital recording like this:

Maximum dynamic range in decibels = number of bits x 6

8 x 6 = 48dB of dynamic range – less than the dynamic range of analog tape recorders.

16 x 6 = 96dB of dynamic range – ample room for recording. But for hi-res audio, we need more.

24 x 6 = 144 dB of dynamic range

From this brief maths session, we can see that a higher bit depth allows us to record a louder signal with more clarity.

With these two rather confusing concepts in mind, we only really need a bit depth with an SNR high enough to accommodate the dynamic range between the softest and loudest signal strength we want to record accurately.
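
As a quick sanity check, here's that rule of thumb as a Python sketch (the commonly quoted, slightly more precise figure is about 6.02 dB per bit):

```python
def max_dynamic_range_db(bit_depth, db_per_bit=6):
    """Approximate the maximum dynamic range of a digital system."""
    return bit_depth * db_per_bit

for bits in (8, 16, 24):
    print(f"{bits}-bit gives roughly {max_dynamic_range_db(bits)} dB of dynamic range")

# 8-bit  -> 48 dB
# 16-bit -> 96 dB
# 24-bit -> 144 dB
```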

Additional headroom

Finally, headroom is the difference between your loudest peak level and the 0dBFS ceiling.

For illustration, driving a 4-meter high bus through a 5-meter high tunnel gives you 1-meter of headroom.

With 144 dB of dynamic range available with a bit depth of 24-bit, we can avoid clipping with ease while practically eliminating our noise floor too!


Digital distortions

You may be thinking that using a higher bit depth to achieve a higher dynamic range means we can record as loud as we can. As fun as this would be, recording as loud as you can is not necessary. In fact, it’s counterproductive.

If you do then you’re likely to ‘clip’ your signal in your DAW.

But before we dive into that, let's take a closer look at the 'dBFS' metric we've been mentioning.

What is dBFS?

Decibels relative to full scale (dBFS or dB FS) is a unit of measurement in digital systems. More specifically, we use dBFS to measure amplitude levels like those in pulse-code modulation (PCM).

All digital audio gear, whether an audio interface or a DAW, uses dBFS to measure the amplitude of signals.

Ableton dBFS peak meters. Image Credit: Ableton

0 dBFS is the maximum level a digital signal can reach before clipping interferes with your signal.

Furthermore, a signal half of the maximum level is −6 dBFS (6 dB below full scale).

This is because a change of 6 dB corresponds to roughly doubling (or halving) the amplitude of your signal.

Peak measurements smaller than the maximum are always negative levels.

And when your signal exceeds 0dBFS it will clip – a form of digital distortion.
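
To make that concrete, here's a tiny Python sketch that converts a peak amplitude (where 1.0 is full scale) into dBFS – notice how half of full scale lands at roughly −6 dBFS, and anything over 1.0 would simply clip:

```python
import math

def to_dbfs(amplitude, full_scale=1.0):
    """Convert a linear peak amplitude into decibels relative to full scale."""
    return 20 * math.log10(abs(amplitude) / full_scale)

print(round(to_dbfs(1.0), 2))   # 0.0   – full scale
print(round(to_dbfs(0.5), 2))   # -6.02 – half of full scale
print(round(to_dbfs(0.1), 2))   # -20.0
```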

Clipping

Before I confuse you, a higher bit depth does allow for more headroom before clipping the output on your recording gear. However, it does not allow for more headroom in your DAW.

Recording at a higher bit depth only allows for more headroom on your recording device.

By recording at a level higher than 0 dBFS you’ll hear a noise sounding like horrible static in your signal. In actual fact, clipping squares off the soundwave – where the ceiling and floor flatten the positive and negative amplitudes.

If you are clipping your signal while recording then that static sound will be in your recording. And once it’s recorded, you can’t get it out.

No matter how much dynamic range your audio interface gives you, your dBFS ceiling is the final boss – and it’s unbeatable. Therefore we must use gain staging while recording and mixing if we want to keep our signal clean and preserve its dynamics.

Keep those faders down!
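
Here's a quick illustration of that squaring-off, assuming NumPy is installed: any sample that tries to go beyond full scale simply gets flattened to the ceiling or the floor.

```python
import numpy as np

fs = 44_100
t = np.arange(fs) / fs
too_loud = 1.5 * np.sin(2 * np.pi * 440 * t)   # a sine that peaks at 1.5x full scale

# A digital system can't store values beyond full scale (±1.0 here), so the
# tops and bottoms of the wave get flattened – that's clipping.
clipped = np.clip(too_loud, -1.0, 1.0)

print(too_loud.max(), clipped.max())   # ~1.5 vs 1.0
```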

Digital clipping vs analog distortion

Analog distortion, on the other hand, occurs when your input signal is pushed louder than the peak voltage of a particular piece of analog gear can handle.

Digital clipping sounds the same no matter what plugin, gear, or sound has overdriven the signal. But analog distortion will sound different on different pieces of analog hardware. 

In short, this means that digital clipping doesn’t inherit any unique characteristics of a particular piece of gear or software. But analog distortion can change based on the unique circuitry of the hardware that the signal is traveling through!

Quantisation error/noise

Due to their fluid shape, soundwaves won't always line up exactly with an available digital value. Especially at bit depths below 24-bit, an amplitude value of the soundwave will often sit between two of the available values.

As a result, the ADC converter in your interface will round it off to the nearest value – known as quantizing.

And rounding a soundwave to a bit that doesn’t accurately represent its amplitude leads to quantization distortion.

This is because of the limited number of digital bits available.

Information gets deleted and ‘re-quantized’ when reducing the bit depth of a file, resulting in a grainy static sound – quantization noise.

This is mainly a problem when decreasing the bit depth of a pre-recorded file.

So when a 24-bit file is reduced to CD – 16-bit/44.1kHz – that’s when quantization distortion becomes a problem.

Therefore, quantization noise is the difference between the analog signal and the closest available digital value at each sampling instance.

Quantization noise sounds just like white noise because it is just that. However, it sounds more like distortion when you lower the bit depth severely.

This process of re-quantization creates certain patterns in the noise – correlated noise. Correlated noise shows up as resonances in particular parts of the signal and at particular frequencies too.

These resonances mean our noise floor is higher at these points than elsewhere in our track. In other words, the noise swallows low-level detail that our digital signal can no longer represent.

But correlated noise isn’t the kind of gatekeeper we want to come across. Thankfully, there is a way of reducing correlated noise so it’s unnoticeable to listeners upon playback.

What is audio dithering?

In our above example of bit depth reduction from 24-bit to 16-bit, the dithering process adds random noise to the lowest 8-bits of the signal before the reduction. Though this fail-safe is in place, avoid alternating between bit depths!

Increasing the bit depth to 24-bit after you have reduced it will only add more distortion to your signal and you will lose quality.

As we mentioned earlier, severely reducing the bit depth of an audio file will add severe quantisation noise. And applying dither to that signal would look something like this:

In practice, dithering hides quantization error by randomizing how the final digital bit gets rounded and masks correlated noise with uncorrelated noise (randomized noise).

As a result, the dithering process preserves low-level detail and masks the correlated noise with a smooth, benign hiss that's far less noticeable to listeners.

We should say that DAWs automatically utilize algorithms to take care of dithering. You don’t need to worry about it so much, but it’s handy to know about.
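
For the curious, here's a rough Python sketch of the idea, assuming NumPy is installed. The same quiet tone is reduced to 8-bit once by plain rounding and once with a little random (TPDF) dither added first; the rounding error in the dithered version is no longer correlated with the signal, which is exactly what makes it less objectionable to the ear.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44_100
t = np.arange(fs) / fs
signal = 0.3 * np.sin(2 * np.pi * 220 * t)   # a quiet 220 Hz tone

def reduce_bit_depth(x, bits, dither=False):
    """Quantise a float signal (in the ±1.0 range) to the given bit depth."""
    step = 2.0 / (2 ** bits)                 # size of one quantisation step
    if dither:
        # TPDF dither: two uniform noises summed, scaled to one step.
        x = x + (rng.random(x.shape) - rng.random(x.shape)) * step
    return np.round(x / step) * step

plain = reduce_bit_depth(signal, 8)
dithered = reduce_bit_depth(signal, 8, dither=True)

# Plain rounding leaves an error that follows the signal (correlated distortion);
# the dithered version's error behaves like low-level broadband noise instead.
print(np.abs(signal - plain).mean(), np.abs(signal - dithered).mean())
```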


Making music with digital audio

All of that technical information can be a little jarring, right?

Well, hold on to your hat. Now it’s time to discuss the equipment and software that makes music-making so accessible for independent artists and home studios: DAWs, audio interfaces, MIDI keyboards, virtual instruments and plugins.

What is a DAW?

A Digital Audio Workstation (DAW) is software that allows us to record and edit our audio, apply a huge range of effects to instruments or groups of instruments, and mix all of our instruments so they work together in a song.

Each DAW has its own quirks and unique features, but the principle is the same for all. We use them for recording and editing audio & MIDI, mixing, and mastering.

Ableton Live 11 DAW. Image Credit: Sound On Sound
Logic Pro X DAW. Image Credit: 9to5Mac

After the ADC process, your computer receives your digital audio and feeds it into your DAW. Once inside, you’ll find numerous DAW controls and tools that allow you to manipulate your recording too.

Moreover, every DAW, whether Reaper, Pro Tools, Ableton, Cubase, etc, has its own project file format for storing projects.

Audio interfaces: ADC converters for home studios

After touching on them briefly, it’s a good idea to talk more about audio interfaces.

An audio interface is the most common example of a digital recording device. While recording with an audio interface, the ADC converter utilizes every principle we have discussed – sample rate, bit depth, dynamic range, etc. – and converts your performance into a digital signal.

Interfaces connect to computers via USB Type B cable, so no auxiliary cable is required for computer connectivity. Furthermore, interfaces leverage XLR and 1/4″ jack inputs for connecting microphones and instruments.


To get the most out of your interface, it's important to investigate the tech specs when you're shopping – different interfaces offer different audio/recording resolutions. Below is a list of the important specifications to look out for:

  1. Sample rate
  2. Bit depth
  3. Dynamic range
  4. THD+N – all output leaving your interface that isn’t your signal (electrical noise)
  5. Frequency response

Most interfaces have a frequency response between 20Hz and 20kHz. The frequency response of an interface determines how an interface captures and processes the frequencies of a given signal.

On a product page, you'll notice the frequency response displayed like this:

20Hz – 20kHz, ± 0.01 dB

The value that follows the ± represents the maximum volume cut or boost the interface applies at any frequency in that range. So in this brief example, the interface will only deviate from a flat response by 0.01 dB.

Now that we understand how computers receive signals, and how audio interfaces and DAWs make home music production possible, let’s talk about another digital production tool. It’s time to talk about MIDI.

Recording with MIDI

Akai MPK Mini MK 3. Image Credit: MusicTech

MIDI stands for Musical Instrument Digital Interface, and it's become a staple of modern music production.

A MIDI controller provides you with a physical interface for recording virtual notes in a DAW.

You can also use them to control virtual instruments, plugins, and your DAW itself – not to mention other MIDI controllers!

A MIDI recording only captures information that describes how you use the MIDI keyboard, which in turn manipulates software instruments, your DAW, etc.

This requires a simpler setup than recording audio – all you need is:

  • A MIDI keyboard
  • A USB Type-B cable
  • A DAW
  • A computer
MIDI piano roll with MIDI notes. Image Credit: Beat Lab Academy

As we play, our DAW draws little blocks in its piano roll that virtually represent the notes we press on our hardware.

Don’t worry, I’ll come back with more detail on these little blocks.

In contrast, converting an audio signal into a digital one gives us a visual waveform of our recording.

When we record audio with a microphone, our DAW creates a visual soundwave that represents what our recording looks like.

The waveform of an audio recording in FL Studio's Edison Audio Editor.

Via a USB Type B cable, sometimes known as a MIDI over USB cable, we can connect MIDI keyboards (and audio interfaces) straight to our computer and software.

With that said, some audio interfaces even provide MIDI inputs now too, so we can connect a MIDI controller straight to our interface instead.

On the other hand, we can use 5-pin DIN MIDI cables to connect different MIDI controllers to one another in the same signal chain, if our MIDI controller has the correct port.

Both cables send MIDI data. The only difference is that USB is for computer connectivity, while 5-pin DIN MIDI is for connecting multiple controllers together, allowing you to control them all from a single controller.

But no matter which cable is connected, both send the same kind of information. And recording with MIDI isn't the same as recording audio either way.

USB and 5-pin DIN cables are digital cables, meaning they transport binary/digital data.

So whether you’re pressing a key, adjusting a fader or a knob on your MIDI keyboard, these cables transport that information via digital data to your computer. But it’s not just any digital data…

That's right, it's sending MIDI data – the same data your DAW stores in MIDI files.

MIDI files (.MID)

MIDI files (.MID) house the digital data that we send and receive when playing with parameters on our keyboard.

.MID files do not contain any information about how a track, channel, or instrument sounds. More specifically, they only contain information about your playing of the hardware.

Whereas analog equipment relies on Control Voltage (CV) to transport information about parameter changes, MIDI control surfaces, keyboards, and pads rely on MIDI data for control information that describes how you're playing the MIDI hardware – which in turn affects the sound of a virtual instrument or software synthesizer, or adjusts plugin/DAW parameters.

Then your computer reads and executes the commands in these files; playing equivalent notes or adjusting settings based on your commands.

Examples of data sent in MIDI files:

  1. Note duration
  2. Note velocity (how fast and how much pressure)
  3. Global controls: play, record, stop
  4. Pitch bend
  5. Modulation amount
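
To see just how little a MIDI file actually contains, here's a minimal sketch that writes a single note to a .MID file. It assumes the third-party mido library is installed (pip install mido) – you don't need it for music-making, it's just a way to peek under the hood:

```python
import mido  # third-party library: pip install mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

# A note-on and a note-off message: just a note number, a velocity and timing.
# There's no audio here, and nothing about which instrument will eventually play it.
track.append(mido.Message('note_on', note=60, velocity=90, time=0))
track.append(mido.Message('note_off', note=60, velocity=0, time=480))

mid.save('one_note.mid')   # the entire file is only a few dozen bytes
```
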
Composing with MIDI

As we saw earlier, a MIDI recording looks very different to an audio recording. Where an audio recording gives us a waveform, a MIDI recording gives us blocks.

These blocks are MIDI notes. While they couldn't look more computer-like, these notes are full of the MIDI data that represents your musical technique while playing on the MIDI equipment.

When the DAW's playhead reaches a MIDI sequence, the DAW will process this data and output the resulting sound.

MIDI keyboards vs standalone keyboards

I think this makes for a good time to highlight the difference between MIDI keyboards and standalone keyboards. That is, MIDI keyboards generally don’t have any onboard sounds of their own whereas standalone keyboards do.

And even controllers that come bundled with sounds, like the AKAI MPK Mini MK 3, still need accompanying software on your computer to play them.

So rather than playing a characteristic sound of the keyboard itself, what you hear is the virtual instrument it's controlling – including the audible result of any parameter changes, like pitch bend, made while recording.

So MIDI keyboards do emulate standalone keyboards and pianos with some added digital functionality – I have never seen a pitch bend wheel on a piano. They are just a means of controlling sounds on your computer.

Virtual instruments

A virtual instrument is software that emulates the sound of an instrument or multiple instruments.

Developers sample physical instruments with different recording techniques and in different locations to capture different characteristics of the sound too.

Then, they turn these recordings into a virtual recreation of that instrument!

If you wanted to be a musician 100 years ago you'd have needed a physical instrument. And unless you had a wallet full of spare cash when the first synths entered the market, you'd have been hard pressed to find one you could afford.

But thanks to digital audio we now have (much cheaper) virtual instruments and software synthesizers.

As a result, we can write music with instruments that we don't own in the physical realm – but do in the digital space. The most well-known virtual instrument software is Kontakt by Native Instruments.

Kontakt virtual instruments. Image Credit: Native Instruments

Software synthesizers

Like a hardware synth, a software synthesizer is a program for generating your own sounds.

Soft synths are far cheaper than hardware synths – many cost less than $150!

There are a number of different types of synthesizers that use different styles of synthesis to produce sound. The most popular today is wavetable synthesis.

xFer Serum, a wavetable synthesizer. Image Credit: xFer

A software synthesizer utilizes the same basic principles as a hardware synthesizer, but each synth has its own characteristic tools and effects too.

Universal features include:

  • Oscillators for generating a signal
  • Amplitude envelopes for shaping the sound
  • Filters for refining the sound
  • LFOs and additional envelopes for applying various modulation to multiple settings

When the first consumer hardware synths entered the market – like the Roland Juno – electronic music grew in popularity with speed.

Now, electronic music is a huge part of the music industry and it’s more affordable than ever to make your own sounds without breaking the bank.

FX plugins

Plugins are pieces of software that we can use to apply effects to our audio.

Before digital, all music production was hardware based with analog technology. Effects like compression and EQ would be standalone pieces of equipment (outboard gear) which were expensive and certainly not small.

Plugin emulations of expensive outboard gear are a good illustration. In the late 60s, Universal Audio's 1176 compressor rack unit was (and still is) all the rage. But it's a pretty expensive unit.

Image Credit: Universal Audio

However, today you can pick up plugin emulations of the unit for a fifth of the price!

Image Credit: Gear News

Plugins digitally emulate specific or multiple effects, and they live solely on our computers. Developers build plugins to integrate with our DAWs so we can insert them on single channels or groups of channels.

Whereas today we can simply insert a reverb plugin onto a channel strip, recording engineers once had to capture time-based effects like reverb and delay manually, by having artists perform in a room with lots of natural reverb.

The price of FX plugins can vary depending on what they offer, but they’re much more affordable than hardware units!

What is latency in digital audio recording?

Despite their differences, audio & MIDI recording have a common enemy: latency.

Latency is the time delay between your input – such as hitting 'Play', striking a MIDI note, or hitting 'Record' – and your computer executing the command or registering that input. So when latency occurs with an instruction such as 'Play', there is a delay before you hear any playback.

If you've ever tried typing on a slow laptop, you'll have experienced latency first-hand. The seemingly endless delay between your key press and the letter appearing on screen is latency!

Due to all of the calculations involved in digital audio, latency can easily disrupt your creative process. It can accumulate anywhere in your signal chain and range from a few milliseconds to a few seconds.

Here are some examples:

  • 0-11ms is an unnoticeable amount of latency.
  • 11-22 ms of delay is audible and sounds like a slapback delay effect.
  • Anything more than 22 ms makes it impossible to perform in time with a track.

If you do experience latency in your digital audio setup there are a number of things you can do to solve the problem.

  1. Windows users can install the ASIO4ALL universal low-latency driver.
  2. Decrease your DAW’s buffer size to the shortest setting your computer can process without crashing, distorting the audio, or freezing (see the quick calculation after this list).
  3. While performing, use the ‘direct monitor’ feature on your audio interface to hear your input signal as you play it, before it ever reaches the computer.
  4. If you’re using low-latency drivers on Windows, make sure your interface is running its dedicated driver rather than a generic plug-and-play one.
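
Here’s the quick calculation behind that buffer-size advice – a minimal sketch that assumes a single buffer’s worth of delay, which is a simplification of what happens across a full signal chain:

```python
# Rough input-to-output delay introduced by one audio buffer.
def buffer_latency_ms(buffer_samples, sample_rate=44100):
    return buffer_samples / sample_rate * 1000

for size in (64, 128, 256, 512, 1024):
    print(f"{size:>5} samples -> {buffer_latency_ms(size):5.1f} ms")

#    64 samples ->   1.5 ms  (tight timing, but heavy on the CPU)
#  1024 samples ->  23.2 ms  (easy on the CPU, but audibly late)
```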

Exporting your digital music production

Exporting your project is the final step of production. Often referred to as “bouncing” or “freezing”, exporting your audio is simply converting the DAW project into an audio file.

In addition to exporting the final master project, there are scenarios in the production process where producers export unfinished projects too!

For instance, the more processor-hungry effect plugins you insert, the more your system will slow down as the project plays – especially on machines with less than 16GB of RAM.

In that case it helps to export a heavily processed loop, with all of its effects, to a single audio file. That way you can reinsert the audio, play it in the context of the track, and eat up far less processing power.

Additionally, it’s standard practice to “bounce” a finished project down to a handful of grouped audio files, known as “stems”.

And collaborating with artists across different DAWs is made possible by exporting “multitracks” rather than stems.

Multitracks

Multitracks are exports of every DAW channel of a project.

They’re the individual elements of a track, from a single kick sample to a 16-bar guitar loop. Each multitrack file is named after its channel in the project. For example:

  • Kick 1
  • Kick 2
  • Bass slide
  • Guitar loop 1
  • Snare
  • Crash Hat 1
  • Crash Hat 2

Further examples of multitracks include clap loops, drum fills, and vocal adlibs.

Multitracks can be mono or stereo – it all depends on what the multitrack is. After all, you wouldn’t export a bass sample in stereo!

We would use multitracks when collaborating with musicians who work in different DAWs. Because each DAW has its own project file type, we need to export our channels to audio files if we want a collaborator to be able to do anything with them.

When our collaborator imports the multitrack files into their DAW, the audio will sit exactly where it sat in your own DAW’s timeline – provided each file was exported from the very start of the project.

Stems

If multitracks are individual channel exports, stems are groups of channels. They’re effectively the layers of your project!

The channel groupings themselves are dictated by instrument groups and their place in the mix. For example:

  • ‘Drums’
  • ‘Leads’ – lead instruments
  • ‘Vocals’
  • ‘Bass’
  • ‘Pads’

So playing every stem together simultaneously plays your whole project as you made it. Though there aren’t really any written rules, a mastering engineer may have a preference for what should be grouped with what to get the best results.
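
To picture that claim, here’s a toy sketch (an assumed example, with made-up four-sample “channels”) of how multitracks, stems, and the full mix relate to one another when you treat each channel as an array of samples.

```python
# Multitracks are individual channels; stems are groups of channels;
# summing either set reproduces the same full mix.
import numpy as np

# Pretend these are the exported multitracks (individual channels).
kick  = np.array([0.9, 0.0, 0.0, 0.0])
snare = np.array([0.0, 0.0, 0.7, 0.0])
bass  = np.array([0.5, 0.5, 0.5, 0.5])
vocal = np.array([0.0, 0.3, 0.0, 0.3])

# Stems: each one is the sum of the channels in its group.
drums_stem = kick + snare
bass_stem  = bass
vocal_stem = vocal

# Playing every stem together reproduces the whole mix...
mix_from_stems = drums_stem + bass_stem + vocal_stem
# ...which is the same as summing every multitrack directly.
mix_from_multitracks = kick + snare + bass + vocal

print(np.allclose(mix_from_stems, mix_from_multitracks))  # True
```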

A mastering engineer may ask for project stems rather than a single whole-project export because stems give them the control to make more precise adjustments.

Digital audio file formats

Finally, let’s talk about digital audio file formats. Different file types have different characteristics, and the best one to export to depends on your needs.

You’ve heard of MP3 files, and possibly FLAC and WAV too. These are three of the most common file types, though streaming stores typically want MP3 or FLAC uploads, for reasons we’ll soon explore.

.MP3

First, let’s talk about the most popular file format.

MP3 files are a lossy file format. Lossy compression is one of the main techniques for shrinking a digital audio file down for storage.

Lossy formats such as MP3 reduce file size by removing audio content that human ears struggle to hear, or can’t hear at all. Parts of the audio that make next to no difference to the overall sound of the project are discarded too.

While the quality trade-off is often a subject of debate, an advantage of MP3 files is the drastic reduction in file size, making them a good choice for preserving storage space on limited-capacity devices.

But their reduced size also gives MP3s the advantage of fast delivery over the internet. And that, you guessed it, is why streaming services like Spotify deliver music in compressed, lossy formats.

So if your project is finished and ready for the ears of your fans, export it to an MP3 and get it online!

.FLAC

FLAC files are the most popular lossless file format.

Lossless file formats don’t cut any of the audio away like lossy formats do. However, they do pack the data down into a smaller package.

FLAC holds second place on the podium of general file type popularity – just behind MP3. However, because no audio is cut away in the compression process, the file size can be double that of an MP3.

To illustrate how lossless compression works, let’s get hands-on.

Moving house can require fitting a king-size bed through small doors. After trying every possible angle, you’ll realise that it’s far better to disassemble the bed and transport its pieces to your new place before reassembling it.

And this is how lossless compression works! The audio data is packed down into smaller pieces to save storage space, then reassembled perfectly when you load and play the file. In contrast to lossy files, lossless compression maintains full sound quality and accounts for every piece of audio data.

Though they can be double the file size of MP3s – and therefore slower to move around the internet – it’s a price worth paying if you’re a die-hard audiophile.
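
To see the “disassemble and reassemble” idea in action, here’s a minimal sketch using Python’s built-in zlib. FLAC uses its own audio-specific algorithm rather than zlib, and “mixdown.wav” is just a placeholder filename, but the principle is the same: the data shrinks for storage and decompresses back to an identical copy.

```python
# Lossless round trip: compress a file, decompress it, and confirm that
# every single byte survives.
import zlib

with open("mixdown.wav", "rb") as f:      # any audio file will do
    original = f.read()

packed = zlib.compress(original, 9)       # 9 = maximum compression
restored = zlib.decompress(packed)

print(f"original:   {len(original):,} bytes")
print(f"compressed: {len(packed):,} bytes")
print(f"bit-perfect after decompression: {restored == original}")
```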

.WAV

Finally, we have WAV files. A hard drive’s worst nightmare, but an audiophile’s best friend.

WAV files are often lumped in with lossless formats, but strictly speaking they’re something different: uncompressed PCM files.

PCM (Pulse Code Modulation) files store audio in exactly the form produced by the ADC process we’ve discussed at length: a stream of amplitude samples. The difference when exporting, of course, is that your DAW is rendering an already digital signal rather than converting an electrical analog one.

PCM files take tens of thousands of snapshot samples every second and so represent the original audio with great accuracy. As the Nyquist–Shannon sampling theorem tells us, the sample rate needs to be at least twice the highest frequency you want to capture.

Because none of that data is compressed away, PCM files can be roughly double the size of lossless files. They’re the biggest storage-space swallowers of the lot, but in return they offer incredible audio fidelity.

And it’s for that audio fidelity that PCM file types like WAV are the industry’s favourite choice for stems and multitracks. When the time comes, export your stems and multitracks to WAV to preserve the audio quality for mixing and mastering.
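
Here’s a small sketch of PCM in practice, using only Python’s standard library: it samples a 440 Hz sine wave 44,100 times a second and writes the raw samples into a WAV file (the “tone.wav” filename is just an example).

```python
# Generate two seconds of a 440 Hz tone as 16-bit PCM and save it as a WAV.
import math
import struct
import wave

SAMPLE_RATE = 44100      # samples per second
DURATION = 2             # seconds
FREQ = 440.0             # A4

frames = bytearray()
for n in range(SAMPLE_RATE * DURATION):
    # Each sample is a snapshot of the wave's amplitude at this instant,
    # stored as a signed 16-bit integer.
    amplitude = math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
    frames += struct.pack("<h", int(amplitude * 32767))

with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)          # mono
    wav.setsampwidth(2)          # 16-bit PCM
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))

# 2 seconds of mono 16-bit audio at 44.1 kHz = 44,100 * 2 * 2 bytes ≈ 172 KB,
# before any compression – which is why WAV files get big quickly.
```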


Final thoughts

It was in the late 80s and early 90s that digital music production and recording started to become truly accessible. A few decades on, a jaw-dropping number of artists record and release their music independently of record labels.

Just 50 years ago, reel-to-reel tape decks were revolutionary tools for professional recording. But thanks to the seemingly unlimited possibilities of digital, the landscape of the music industry has changed – and continues to change – forever.

Now that you understand how digital audio works and how it empowers anyone with a computer to express themselves musically, isn’t it time you joined the millions of independent artists around the world in changing the music industry?



The tireless search for an exclusive record deal is no longer necessary. RouteNote will distribute your music globally, and you’ll keep 85% of all revenues.