We’re uncovering the biggest pinch-me moments that the world’s most esteemed producers, engineers and mixers have experienced, from working with childhood heroes to winning awards or landing their first hit record. Hear the stories that make you #PinchYourself.
In the latest episode of #PinchYourself we sit down with number-one-selling songwriter, record producer, and co-founder of LYRE Music, Alina Smith (ITZY, Red Velvet, Fall Out Boy) in her LA home studio.
Alina spoke to us about the importance of staying grounded, appreciating where you’re at in your career, and knowing that what once made you #PinchYourself might not do so now. It might just feel earned and even normal.
WATCH FULL VIDEO
TRANSCRIPT
When you’re very young, you think you’re gonna feel a certain way about your successes.
You think, wow, I’m just gonna be so happy if this thing happens for me. But when that thing happens, the thing is, you’ve been working toward that moment for so long that it isn’t necessarily anti-climactic.
It just feels very, like, normal. So with all of my successes, I’ve never been like, wow, I can’t believe this is happening, cuz I’ve been building toward it and, you know, really going through the grind to get to that spot.
So when you’re there you’re like, okay, great, I’m so glad I’m here. As long as I’ve been doing this, everything feels kind of calm and nice, and honestly I prefer it at this point.
How did the Abbey Road Studios’ recording engineers stream orchestral sessions before LISTENTO?
In this #TilYouMakeIt episode, Senior Recording Engineer, Andrew Dudman, explains the expensive, unorthodox method that was used.
In this example, it was a session for ‘Star Wars: The Phantom Menace’ taking place in Studio One where the engineers streamed over satellite to John Williams, George Lucas and the entire ‘Star Wars’ team based in LA while running lengthy cables to a broadcast truck in the studios’ car park.
“LISTENTO has revolutionized how we work on these remote projects.” – Andrew Dudman
WATCH FULL VIDEO
TRANSCRIPT
Cables running through the doors, out to the car park, streaming over satellite to get to them.
This was before the rise of internet streaming and quick, simple, easy methods to stream music. On a certain Star Wars session, that was just a day’s recording, so no one came over. Producers John Williams and George Lucas, everyone stayed in America to make it work. We actually had an outside broadcast truck, paying through the nose for bandwidth, and the quality wasn’t great.
Nowadays, obviously, we don’t have to worry about getting trucks in or ISDN lines.
It’s so much easier these days using LISTENTO for streaming the audio, whether it be stereo stems or 5.1. It literally has revolutionized how we work on these remote projects.
#DreamCollabs uncovers the reasons behind these collaborations and why it’s essential to dream big, whether meeting their childhood music heroes, working with an emerging artist that excites them or collaborating with an artist whose sound has influenced the creative choices these industry powerhouses make.
In this latest episode, we speak to the number-one-selling songwriter, record producer, and co-founder of LYRE Music, Alina Smith (ITZY, Red Velvet, Fall Out Boy) in her LA home studio.
Watch to discover who her dream collaborator is or, rather, who they could be. Creating magical collaborations does not always involve having the world’s biggest names in music. More often than not, it boils down to connection and what feels natural in the moment.
WATCH FULL VIDEO
TRANSCRIPT
A lot of people ask me, what’s your dream collaborator like? What artist would you work with if you had all the access in the world, no restrictions? And I have kind of a funny answer.
I don’t have anyone in mind because my modus operandi is I will work with who I’m meant to work with.
Yes, it could be maybe a very well-known artist, or it could be a person no one knows yet that’s gonna become this amazing person.
You will push sometimes for collaborations cuz somebody’s successful, or they have this thing going on or that thing going on. But if it’s not natural, it doesn’t gel, it doesn’t work. And sometimes a person comes in and it doesn’t make logical sense, but you try it and it’s like the most beautiful thing.
To give an example: when we were starting LYRE with my partner Ellie, my publisher at the time really tried to discourage me from working with her cuz she was quite young. She was eighteen, I believe, when we started.
And they were like, oh, why are you working with these young kids? Like, you could be working with more established people. And I just felt something really special there, so I pursued it regardless.
I just have an open mind to anything that feels right. Whoever is right will be right.
‘The first take you record could be the magic take. So you just need to have everything slick’ – Andrew Dudman.
In the latest #TilYouMakeIt episode Abbey Road Studios’ Senior Recording Engineer, Andrew Dudman speaks on the importance of precise planning ahead of a recording session and how the support of Abbey Road Studios’ team of assiduous engineers, recordists and runners enables big orchestral recordings to run like a well-oiled machine.
Many films are as famous for their soundtracks as they are for anything that happens on screen, and Andrew has been responsible for recording some of the most iconic and beloved scores in cinema, with credits including the Star Wars prequels, the Lord of the Rings trilogy, Hacksaw Ridge, Baby Driver and Gravity, to name just a few.
WATCH FULL VIDEO
TRANSCRIPT
The first take you record could be the magic take. So you just need to have everything slick.
Certain sessions, certain band recordings, you can kind of adjust as you go. Get the band in, everyone gets a feel of the vibe, and then you kind of work out how you’re gonna do it. But when you’ve got a big orchestra, it needs to be organized.
That’s the joy of having the staff of this building, where, you know, we’ve got six runners and nine assistants, and seven or eight engineers.
We’re very well organized, very well planned. When you come in with an empty studio, everyone has a detailed setup sheet, and we draw floor diagrams so you know where everything’s gonna be set out.
It would be chaos if you didn’t have a plan. You need to find out all the information in advance. Plan it all out. Get it ready.
Over the last few years, there has been rapid growth in demand for immersive audio content, transforming the way we experience entertainment.
Immersive audio has been utilised in various forms of media, from music and motion pictures, to video games and virtual reality.
However, immersive audio is still very much the wild west: there’s no established right or wrong way to mix for it.
When it comes to mixing, and especially mastering, for immersive audio, it’s valuable to A/B your mix or master against similar tracks, just as you would when working in mono or stereo.
Using OMNIBUS and the Apple Music application, you can quickly and easily reference your immersive mixes against available immersive audio content from Apple Music.
Let’s walk through the setup process.
Mac & Apple Music audio settings:
First, set your computer’s audio playback device to one of the OMNIBUS drivers. I’ll use OMNIBUS A as my system’s audio device for this demonstration. Any audio from your Mac will now pass through OMNIBUS, where you can designate its output — we’ll come to this later.
Next, open the ‘Audio MIDI Setup’ application and navigate to the OMNIBUS driver you’re using as your system’s audio device.
Click ‘Configure Speakers’ and select 7.1.4 as your speaker configuration, making sure the speakers are mapped to channels 1-12 as seen below.
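For orientation, a 7.1.4 layout assigns twelve discrete channels to speakers: seven ear-level, one LFE, and four height channels. A minimal Python sketch of such a mapping follows; note the exact channel order here is an assumption for illustration, so always confirm the order shown in your own Audio MIDI Setup configuration:

```python
# Hypothetical 7.1.4 channel map: channel number -> speaker position.
# The ordering below is an assumption; verify against Audio MIDI Setup.
SPEAKERS_714 = {
    1: "Left", 2: "Right", 3: "Center", 4: "LFE",
    5: "Left Surround", 6: "Right Surround",
    7: "Left Rear Surround", 8: "Right Rear Surround",
    9: "Top Front Left", 10: "Top Front Right",
    11: "Top Rear Left", 12: "Top Rear Right",
}

# 7.1.4 = 7 ear-level + 1 LFE + 4 height channels = 12 channels in total.
assert len(SPEAKERS_714) == 12
```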
Apple Music playback settings:
In the Apple Music application, navigate to Music > Preferences > Playback, where you will see the Dolby Atmos playback settings. Select ‘Automatic’. These steps allow the 7.1.4 discrete channels to output through channels 1-12 of your selected OMNIBUS device.
DAW’s playback engine:
In your chosen DAW or Dolby renderer (if using), select one of the other OMNIBUS drivers as your playback engine. I’m using OMNIBUS B, but you could use any of the OMNIBUS drivers as long as it is different from the one selected as your Mac’s audio device.
Repeat the previous speaker-configuration steps for your DAW’s OMNIBUS driver in Audio MIDI Setup.
Routing and Snapshots:
Now that the appropriate steps are complete, we can open OMNIBUS and begin our routing.
Route channels 1-12 of OMNIBUS A to channels 1-12 of your designated monitor outputs. This allows the audio from Apple Music to travel through OMNIBUS into your 7.1.4 monitor outputs.
Now that you’ve finished routing your audio to your designated monitor outputs, you can save this routing configuration as a snapshot. Go to the Snapshots tab in OMNIBUS, add a new snapshot, and save. You can label this snapshot if you wish; I’ve labelled mine ‘7.1.4 Reference’.
Next, go to Settings and clear the routing configuration. Now route channels 1-12 from OMNIBUS B (or the driver you selected for your DAW) to your designated monitor outputs and save this as a second snapshot. I’ve named this one ‘7.1.4 DAW’.
Once you play back your session and the track you wish to reference, you can toggle between these two snapshots to hear your immersive mix and your reference track separately, with only a few clicks.
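OMNIBUS is driven entirely from its GUI, but the A/B workflow above is easy to picture as two routing maps and a recall. The Python sketch below is purely illustrative (OMNIBUS has no scripting API; all names are hypothetical) and just shows the logic of what each snapshot stores:

```python
# Two hypothetical snapshots: (source driver, channel) -> (destination, channel).
# Both route channels 1-12 straight through; only the source driver differs.
snapshots = {
    "7.1.4 Reference": {("OMNIBUS A", ch): ("Monitor", ch) for ch in range(1, 13)},
    "7.1.4 DAW":       {("OMNIBUS B", ch): ("Monitor", ch) for ch in range(1, 13)},
}

def recall(name: str) -> dict:
    """Return the routing map stored under the named snapshot."""
    return snapshots[name]

# Toggling between Apple Music reference and DAW playback is just a recall.
active = recall("7.1.4 Reference")
assert ("OMNIBUS A", 1) in active
active = recall("7.1.4 DAW")
assert ("OMNIBUS B", 12) in active
```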
By using a combination of LISTENTO and OMNIBUS, you can now seamlessly transmit your Dolby Atmos session’s audio to collaborators for remote sign-off. For this, you need a LISTENTO Pro subscription and OMNIBUS, and your client needs the LISTENTO iOS Player and headphones.
WATCH VIDEO
Getting Started
Firstly, in the Dolby Atmos Renderer application, assign your output to one of the OMNIBUS virtual drivers – for this tutorial we will be using OMNIBUS B.
You’ll see that when we now start playback, the 7.1.4 discrete channels will travel through OMNIBUS, which can then be routed to any audio device on your Mac.
Let’s start off by routing the discrete channels from the Dolby Renderer to Hardware outputs. In this case it’s channels 1-12 of a UAD Apollo.
Now that we’ve routed the Dolby Renderer’s discrete channels to our hardware output and can monitor ourselves, let’s get ready to transmit to our remote collaborator.
In the LISTENTO application, select your audio input device – in this case we’ll be using OMNIBUS B. Navigate to the transmitter region and create 12 channels for the 7.1.4 channels which will be travelling into the LISTENTO application.
In OMNIBUS, route channels 1-12 from the OMNIBUS B outputs into OMNIBUS B input channels 1-12. Now if we go back into the LISTENTO application you’ll see that our 7.1.4 channels from the Dolby renderer are travelling through OMNIBUS into the LISTENTO application.
Upon receiving a 7.1.4 LISTENTO stream, paste the link into a 7.1.4 instance of the receiver plugin or into the LISTENTO App’s receiver. You can then assign the output destination of incoming LISTENTO streams on a track-by-track basis, allowing you to correctly assign the 12 channels to your preferred speaker layout.
Today we take you on a step-by-step journey of how to utilize INJECT and OMNIBUS for rapid sampling. Discover how to source the right sample, route audio from browser to DAW, and unlock creativity and inspiration in ways you didn’t expect.
Step 1: Find a sample and make sure it’s right for you
The first step is to find a sample that you like: a drum beat, vocal riff, sound effect, whatever you want. You’ll want a feel for the sample’s tempo, rhythm, and key to ensure it fits your composition; otherwise, you may waste time on something that doesn’t benefit your track.
You may also be wondering where you can find samples to use in your tracks. Firstly, it is essential to ensure that you have the right to use the audio that you are sampling or that it is within the public domain.
Public domain work is all creative work that has no exclusive intellectual property rights applied to it. Therefore, you can use samples of this audio within your tracks without encountering any copyright issues.
Many resources are available online to find royalty-free samples to use in your production.
Step 2: Recording the sample
This is where we are here to help. Both INJECT and OMNIBUS allow you to record sampled audio straight from your web browser into your DAW.
Sampling audio with INJECT
Set your computer’s audio output to the INJECT driver.
In your DAW, open an aux track, insert the INJECT plugin on it, set the external plugin input to the INJECT driver, and choose channels Stereo 1 & 2.
To record the audio into your DAW from INJECT, you can either route the aux to a bus and then into an audio track or use the built-in recorder, which supports up to 16 channels and allows you to drag and drop recordings directly into your DAW.
Step 3: Chop and arrange the sample
After you’ve listened to the sample and understand its structure well, it’s time to either “chop it up” or arrange it. Chopping a sample means cutting it into smaller pieces, usually in a DAW of your choice. The goal is to extract the parts of the sample that you want to use in your own music.
Once you’ve chopped up the sample, you can start to arrange it in your DAW. This involves placing the chopped pieces of the sample onto your project timeline, where you can sequence them in any order you want. To enhance the sample, producers often use effects and other processing tools to manipulate the sample further.
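Working out where to chop is mostly arithmetic: at a given tempo, one beat lasts 60/BPM seconds, which converts directly to sample offsets in your DAW. A small sketch, using example values for tempo and sample rate:

```python
def chop_points(bpm: float, sample_rate: int, beats: int) -> list[int]:
    """Return the sample offset of each beat boundary across `beats` beats."""
    samples_per_beat = int(round(sample_rate * 60.0 / bpm))
    return [i * samples_per_beat for i in range(beats + 1)]

# At 120 BPM and 44.1 kHz, one beat lasts 0.5 s = 22,050 samples.
points = chop_points(bpm=120, sample_rate=44100, beats=4)
print(points)  # [0, 22050, 44100, 66150, 88200]
```

Slicing the audio at these offsets gives you evenly spaced, beat-aligned chops to rearrange on your timeline.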
Step 4: Add your own elements and mix it together
After you’ve arranged the sample, it’s time to add your own elements to the mix. This can include additional instruments, vocals, or effects. The goal is to create a unique piece of music that creatively incorporates the sample, rather than an exact recreation.
In conclusion, sampling can be an innovative and rewarding way to create new music. Remember to always ensure you have the right to use the audio you’re sampling, and if not, use royalty-free or public-domain samples.
We recently caught up with drummer, producer, mixer and mastering engineer extraordinaire, Sam Brawner, also known as SammyB, at his Blue Dream Studios in Los Angeles.
Sam owns and operates the studio, offering a range of services including producing, recording, mixing, mastering and post-production, as well as songwriting, filming services and dedicated workshops.
Sam has worked with the likes of Anderson .Paak, Mac Ayres, Moonchild, Allen Stone and many more. His secret weapon for many of these sessions? LISTENTO.
With LISTENTO, Sam can collaborate in real-time with lossless audio, maintaining the momentum, connection and communication he has in real life whilst working with artists including Gareth Donkin and Dux.
“LISTENTO is this cool plug-in which transmits high-quality audio across the world.”
Head here to see how Sam uses the plug-in whilst on remote sessions with artists from around the world.
Introduction
If you collaborate on music remotely via Zoom, ensuring maximum audio quality is key to productive, effective collaboration. Below, we walk through how to use Audiomovers’ industry-standard plugin LISTENTO alongside Zoom, so your remote music collaborations reach the next level of excellence and mimic the standards of an in-person recording session.
First, let’s ensure you have everything you need for the remote recording session.
A stable internet connection
The most up-to-date version of Zoom
The DAW of your choice
The LISTENTO plugin (or desktop application)
To get the best audio quality for your remote sessions, the person transmitting audio will need an active LISTENTO license. Don’t fret if you don’t have one: you can grab a free 2-day trial to test this setup and see how it works for you before committing to a paid subscription.
Setting up your session
Step 1: Start your video call
Your video call will be your main form of communication during remote sessions. We recommend starting with this step to ensure you can easily communicate with each other during the next steps.
When using Zoom, you can see the latency of your video call by going into Settings, then Statistics, and clicking across to Video. You can then match your latency preferences in LISTENTO.
Step 2: Setting up your DAW
Now that you can freely communicate with each other, you can begin setting up the session within your DAW. Open your DAW of choice and insert an instance of the LISTENTO plugin on the master bus of the project you’re collaborating on.
Launch the plugin, enter your Audiomovers username/email and password, and click ‘Login’. Once logged in, the ‘Start Transmission’ button becomes available and you can begin streaming your audio.
Set your session name: go with the default, type your own, or select a random session name. Bear in mind that if you run multiple sessions under the same session name, anyone with the link will still be able to listen in to your stream.
This is great if you’re collaborating with the same people over a number of days or weeks, but if you’re jumping from project to project, use a session name that relates to each project, or use the random session name generator and share new links each time.
Click “Copy Link” to copy a stream session link to your clipboard.
Step 3: Getting started and testing your stream
Press “Start Transmission” to begin streaming. You can send your session link to anyone you want to share your stream with.
Before you share the link, you’ll likely want to test that it’s working. Copy the stream link and test it yourself: simply hit ‘Thru Mute’ to mute the audio from the DAW, and check that the stream is audible by pasting the link into your own web browser or mobile app.
NB: LISTENTO streaming links have been heavily tested in multiple web browsers, but we recommend Google Chrome for optimum performance.
Factors that affect audio quality when collaborating via Zoom
When collaborating on music remotely via Zoom, there are several factors that can affect audio quality. Firstly, the quality of the internet connection can have a significant impact on the clarity and consistency of the audio. A poor internet connection can lead to dropouts, high latency, and other distortions in the sound.
Secondly, the type and quality of microphones being used by the collaborators can also affect the audio quality. Low-quality microphones may produce muffled or distorted sound, while high-quality microphones can capture the nuances of the music accurately.
Additionally, the software and equipment used to record and mix the music can also have an impact on the final sound quality. It is essential to ensure that all collaborators are using compatible software and hardware, and that the recording and mixing processes are carried out carefully to produce the best possible sound.
Overall, careful consideration of these factors can help ensure that remote music collaborations over Zoom result in high-quality audio output.
What is Zoom’s High Fidelity audio feature? And why should I still opt for LISTENTO?
Zoom’s High Fidelity Audio feature is an advanced audio codec that is designed to improve the audio quality of Zoom meetings. This feature uses a new audio codec called Opus, which provides high-quality, low-latency audio for real-time communication.
The High Fidelity Audio feature is particularly useful for remote music collaborations or other situations where high-quality audio is essential. It supports sample rates of up to 48 kHz, allowing for higher-resolution audio transmission than standard meeting audio.
Although Zoom’s High Fidelity Audio feature is useful for remote music collaboration, it does not trump the power, accessibility, and versatility of LISTENTO.
LISTENTO creates a remote streaming and recording experience that reflects many of the benefits of the in-person studio experience. With LISTENTO you can stream uncompressed, lossless audio in real-time to anyone, anywhere in the world, collaborating on projects while prioritizing audio quality and user experience in tandem. It’s easy to use and compatible with most DAWs, so you can collaborate seamlessly with your team regardless of their location.
For faultless remote audio recording, Audiomovers easily outshines other remote audio collaboration tools on the market, supporting lossless multichannel audio, delivering up to 7.1.4 surround sound, and offering stability and the unique ability to adjust latency and bit rate. It turns your DAW into an online recording studio, letting you stream lossless audio with latency as low as 0.1 seconds.
The buzz around lossless audio is on the rise in the music streaming industry, as major players such as Apple Music and Spotify upgrade their platforms to allow lossless quality audio streaming. This move towards lossless audio by these streaming giants has garnered significant attention, highlighting the increasing demand for high-quality sound among music enthusiasts.
The standard lossy audio formats used by most streaming services may not be sufficient. This is where lossless audio comes in. In this article, we will explore what lossless audio is, how it differs from lossy formats, and how you can stream lossless audio to get the most out of your listening experience. Whether you’re a music enthusiast, a sound engineer, or just curious about the world of high-quality audio, read on to learn more about lossless audio and how you can start streaming it today.
What does ‘lossless’ mean?
In the process of converting audio to a digital file format, compression is often used to minimize file size. However, this compression can discard audio data, such as frequencies at the highest and lowest ends of the recording, which is why the result is called “lossy” audio. In contrast, when no audio data is lost, the resulting digital copy is considered “lossless,” meaning that it’s identical to the original recording.
Common lossy audio formats include MP3, MP4, WMA, and AAC, while lossless audio formats include WAV, AIFF, ALAC and FLAC. The compressed lossless formats, ALAC and FLAC, reduce data storage yet still maintain the full waveform of the audio.
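The “no data is lost” property is easy to demonstrate: a lossless compressor always gives back exactly the bytes you put in. A quick sketch using Python’s built-in zlib module, a general-purpose lossless compressor standing in here for FLAC/ALAC:

```python
import zlib

# Stand-in for raw 16-bit PCM audio data.
pcm = bytes(range(256)) * 100

compressed = zlib.compress(pcm)
restored = zlib.decompress(compressed)

# Lossless: the round trip is bit-for-bit identical to the original,
# even though the compressed copy takes up less space.
assert restored == pcm
assert len(compressed) < len(pcm)
```

A lossy codec like MP3, by contrast, would fail the first assertion by design: it throws away data it judges inaudible and cannot reconstruct it.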
Which lossless audio format is the best?
FLAC, short for Free Lossless Audio Codec, is a popular open-source audio format used by various brands such as Tidal and Amazon Music. In contrast Apple Music uses its proprietary lossless format called ALAC (Apple Lossless Audio Codec).
Ultimately, the choice of lossless format may not matter for most users, as it depends on the specific service and device they prefer. For Apple users, ALAC is the default format, whereas other streaming platforms typically offer FLAC.
Is lossless audio the same as high-resolution audio?
Lossless audio and high-resolution audio are often used interchangeably, but they are not the same thing. While both offer better sound quality than standard compressed audio formats, there are some key differences between the two.
As mentioned above, lossless audio refers to digital audio compression that preserves all the original data and information of the audio file without losing any quality. Lossless audio retains all of this valuable information and delivers a file identical in quality to the original audio recording.
On the other hand, high-resolution audio refers to audio files that have a higher sampling rate and/or bit depth than standard CD-quality audio (44.1 kHz / 16-bit). High-resolution audio typically has a sampling rate of 96 kHz and a bit depth of 24 bits. This means that high-resolution audio has a greater frequency range and dynamic range, capturing more of the nuances and details of the original recording.
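Those numbers translate directly into data rate: uncompressed PCM bit rate = sample rate × bit depth × channels. A quick check in Python, comparing CD quality with typical 96 kHz / 24-bit hi-res (stereo assumed):

```python
def pcm_bitrate(sample_rate: int, bit_depth: int, channels: int = 2) -> int:
    """Uncompressed PCM bit rate in bits per second."""
    return sample_rate * bit_depth * channels

cd = pcm_bitrate(44_100, 16)     # CD quality: 44.1 kHz / 16-bit stereo
hires = pcm_bitrate(96_000, 24)  # hi-res: 96 kHz / 24-bit stereo

print(cd)     # 1411200 bps  (~1.41 Mbps)
print(hires)  # 4608000 bps  (~4.61 Mbps)
```

So a hi-res stream carries over three times the raw data of CD quality, which is exactly why lossless compression and adequate bandwidth matter for streaming it.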
So, while lossless audio preserves all the information of the original recording, high-resolution audio captures more detail by recording at higher sample rates and bit depths. It is worth noting that high-resolution audio is typically delivered in lossless formats, while not all lossless audio is high-resolution.
Can I collaborate remotely on lossless audio?
Yes, it is possible to collaborate remotely on lossless audio, but it does require some additional setup and consideration compared to collaborating on lossy audio or other standard file types.
LISTENTO allows you to stream uncompressed, lossless audio in real-time to anyone, anywhere in the world. Whether you’re a musician, producer, audio engineer or voice-over artist, LISTENTO allows you to collaborate on projects in real-time, prioritising audio quality and user experience in tandem. It’s easy to use and compatible with most DAWs, so you can collaborate seamlessly with your team regardless of their location.
LISTENTO transmits uncompressed audio as PCM (Pulse Code Modulation). PCM is a method of translating analogue signals into digital data, using binary values to store information about an audio signal in a digital medium. The process has three stages:
Sampling – Samples are snapshots of the incoming signal which record the amplitude of the signal at that given moment.
Quantization – Quantization rounds those amplitude values to the nearest available value in the digital system, based on its bit depth.
Encoding – Encoding is the final stage where the newly sampled audio information is written to a hard drive or other digital storage medium in a given format to be used elsewhere.
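The first two stages above can be sketched in a few lines of Python: sample a sine wave at regular intervals, then quantize each amplitude to the nearest level a 16-bit system can store (the final encoding-to-disk stage is omitted here):

```python
import math

SAMPLE_RATE = 48_000  # samples per second
BIT_DEPTH = 16        # bits per sample

def sample_sine(freq_hz: float, n_samples: int) -> list[float]:
    """Sampling: snapshot the signal's amplitude at regular intervals."""
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

def quantize(samples: list[float]) -> list[int]:
    """Quantization: round each amplitude (in the range -1.0..1.0) to the
    nearest integer level available at the given bit depth."""
    max_level = 2 ** (BIT_DEPTH - 1) - 1  # 32767 for 16-bit signed
    return [round(s * max_level) for s in samples]

# 1 ms of a 440 Hz tone, quantized to 16-bit values.
pcm = quantize(sample_sine(440.0, 48))
assert all(-32768 <= v <= 32767 for v in pcm)
```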
For bandwidth details, please refer to page 9 of the LISTENTO User Guide under ‘Resources’ to see our recommended streaming and internet settings.
Conclusion
Lossless audio streaming is a game-changer in the music industry, allowing users to enjoy high-quality uncompressed audio without sacrificing any of the original audio quality or information, in essence acting as a replica of the original audio recording.
With the popularity of, and necessity for, high-quality remote music collaboration, LISTENTO exceeds industry standards and provides a seamless, intuitive platform for music creators to collaborate in an efficient and controlled manner. By facilitating real-time lossless audio streaming, it negates the need to worry about technical issues and flows as naturally as an in-person recording session would.