Musicians and producers, amateurs and professionals alike, have to manage audio signals constantly. In amplified music, live performance and music production, audio signals are everywhere; they’re unavoidable. Building up the knowledge and competency to process and manage audio signals effectively and efficiently takes practice. In this article, you will learn the basics of audio signals and how to manage them. You will also learn about the different types of audio signals and audio connections, and how to put together signal chains correctly.
Audio signals are impressions of sound, captured either digitally or electronically.
What do you mean impressions of sound?
Well, exactly that. They are digital or electrical representations of sound. In reality, sound is vibrational energy: it makes air particles vibrate and surrounding objects resonate, which makes more things vibrate in turn. These vibrations reach our ears, and our brains organise them into sound.
When it comes to capturing and reproducing sound, we can do it either electronically, with a voltage, or digitally, with a computer. Signals carried as a voltage are referred to as analogue audio signals; signals stored as binary data on a computer are referred to as digital audio signals. Analogue and digital signals each have their own unique history.
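To make the analogue/digital distinction concrete, here is a minimal Python sketch (the function and constant names are invented for illustration, not a real audio API) that turns a continuous sine wave into discrete digital samples, much as an analogue-to-digital converter in an audio interface does:

```python
import math

SAMPLE_RATE = 44_100  # samples per second, the CD standard
FREQ = 440.0          # A4 concert pitch, in Hz

def sample_sine(duration_s, sample_rate=SAMPLE_RATE, freq=FREQ):
    """Sample a continuous sine wave into a list of discrete values."""
    n_samples = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq * n / sample_rate)
            for n in range(n_samples)]

samples = sample_sine(0.01)  # 10 ms of audio
print(len(samples))          # 441 samples
```

The continuous voltage of an analogue signal becomes a list of numbers; everything a computer does to audio is done to lists like this one.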
Analogue signals are old compared to digital audio signals: they have been around for roughly a century. From the early 1900s onwards, all music and audio was recorded, processed and reproduced with analogue technology.
Digital audio was pioneered in the 1970s and reached consumers with the CD in the early 1980s. It became increasingly popular with the rise of home computers, digital stereo systems and the internet, and began to overtake analogue audio once mp3 players became common in the late 1990s.
In 2020, analogue and digital audio signals and technologies co-exist, and producers and consumers alike profit from and enjoy both.
So, in your day to day audio signal management, you will be dealing with both analogue and digital audio signals. You can’t have one without the other.
The pathway an analogue or digital signal flows through on an audio summing device, like a mixer for example, is called a channel.
A mono audio signal flows through one channel; a stereo audio signal flows through two. For a long time, analogue signals were captured, produced and played back in mono. Stereo made its way onto music recordings in the late 1950s and eventually became the standard way to listen to music.
So, when you are working with audio signals, be aware that you will be working with both mono and stereo.
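In digital audio, the mono/stereo distinction is easy to picture in code. A minimal Python sketch (all names are illustrative, not a real audio library):

```python
# Mono: one channel, one list of samples.
mono = [0.1, 0.3, -0.2]

# Stereo: two channels; here, one (left, right) pair per sample frame.
stereo = [(0.1, 0.1), (0.3, 0.2), (-0.2, -0.1)]

def mono_to_stereo(signal):
    """Duplicate a mono signal into identical left and right channels."""
    return [(s, s) for s in signal]

def stereo_to_mono(signal):
    """Sum a stereo signal down to mono by averaging the two channels."""
    return [(left + right) / 2 for left, right in signal]
```

Converting between the two is routine: a mono vocal gets duplicated to both speakers, and a stereo mix gets summed to mono for a single-speaker PA.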
You can’t talk about audio signals without mentioning what carries them – cables!
The two common types of cable used to carry audio signals are XLR cables and TRS cables. XLR cables, or XLRs for short, are used to connect microphones. We like XLRs because they lock in place and carry balanced signals, which reject electrical interference over long cable runs.
TRS jack cables (and their unbalanced two-conductor version, TS) are used to connect electro-magnetic, electro-acoustic and electronic instruments and devices. For example, electric guitars, synthesisers, keyboards and FX pedals.
There are lots of reasons why the industry uses these two types of audio cable instead of just one. Some have to do with the physics of sound, some with tradition and some with practicality. But there isn’t a single, definitive explanation for why we use two different cable types.
Signal flow is the movement of an audio signal from a sound source to an output, like a speaker or headphones.
The basic idea of signal flow is input and output. Sound comes in (input) at one end, is transformed into an electrical voltage, travels through a cable, and comes out (output) somewhere else as sound.
A guitarist’s pedal board setup for a live performance is a typical example of signal flow. Think of an electric guitar hooked up to a pedal board, amplifier, microphone and PA system. The signal flow will look like this:
Guitar → Pedal 1 → Pedal 2 → Pedal 3 → Amplifier → Microphone → Mixer (mixing desk) → PA System → sound

(each device’s output feeds the next device’s input)
The above is an analogue signal from start to finish.
Remember, analogue signals are made of electrical signals that eventually turn into sound at the output. Another example of an analogue signal could be a vocalist’s audio signal in a live performance setting:
Vocals → Microphone → Mixer → PA System → sound
In a live performance setting, all audio signals are analogue (unless you are using digital equipment, of course). Everyday bedroom producers and singer-songwriters recording at home, on the other hand, will almost certainly be working with a mixed analogue and digital signal flow: they will use an audio interface to turn analogue signals into digital signals for their computer to process. This is how all commercial studios operate too (unless the artist or producer wants to record everything traditionally with analogue equipment).
A bedroom producer’s audio signal flow would typically look like this:
Instrument → Audio Interface → Computer → Audio Interface → Studio Monitors/Speakers → sound
The examples above are very basic signal flows that any beginner can set up. Things get complicated once you start adding instruments.
Imagine the first example of a guitarist’s pedal board setup in a live performance setting. It isn’t that complicated. Except now, add the other band members: the drummer, the other guitarist with a pedal board, bassist with a pedal board and a vocalist. Managing all these audio signals effectively is quite difficult.
Now imagine this setup on a larger scale, at a festival for example, where you can have 10 acts performing one after the other, each with different set-ups and signal flows, some more digital or analogue than others, and all of this audio being summed (added together) through mixers and output through multiple speaker systems…
You begin to understand why audio engineers are called audio engineers.
A signal chain refers to effects inserted into a signal flow, one after the other, to affect the final output of the audio.
For example, our guitarist’s pedal board setup (above) has a signal chain within its signal flow: pedal 1, pedal 2 and pedal 3. Signal chains can be made up of audio effects and audio processors, like reverbs, delays, compressors, equalisers and noise gates. In short, a signal flow is the flow of an audio signal in its entirety, from input to output and all the processing along the way. A signal chain is the chain of audio processors inserted into that signal flow.
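One way to picture a signal chain is as a series of functions applied one after another. A hedged Python sketch (the "pedals" here are toy functions invented for illustration, not real effects algorithms):

```python
def boost(signal, gain=2.0):
    """A simple gain stage: multiply every sample by a fixed amount."""
    return [s * gain for s in signal]

def overdrive(signal):
    """A crude soft clip: squash anything louder than +/-0.5."""
    return [max(-0.5, min(0.5, s)) for s in signal]

def run_chain(signal, chain):
    """Feed the signal through each processor in the chain, in order."""
    for processor in chain:
        signal = processor(signal)
    return signal

dry = [0.1, 0.4, 0.9, -0.8]
wet = run_chain(dry, [boost, overdrive])
print(wet)  # [0.2, 0.5, 0.5, -0.5]
```

The chain `[boost, overdrive]` plays the role of pedals 1 and 2 on the board: each one’s output becomes the next one’s input, which is exactly what the patch cables between pedals do.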
Putting together a chain of audio effects and processors is part of learning to manage a signal flow. Be aware that “signal flow” and “signal chain” are sometimes used interchangeably to mean the same thing; whether you treat them as two different things will depend on context.
When you put together a signal chain of audio processors, you need to be aware of what is happening to the audio along the chain.
For example, if you put a processor at the beginning of the chain that damages the audio, say by clipping it or adding noise, you can bet that every processor or FX unit you add after it will make things worse. Even if the audio is only slightly damaged, the subsequent processors and FX in the chain will have a compounding effect on the final sound. More often than not, this damages your audio even more.
But the opposite is also true: you can add audio processors to a chain that enhance the audio’s sound and aesthetics. So, how do you know what to put where in the chain? Just remember to always put audio processors before FX. That’s it. Always put processors like EQs, compressors and noise gates in the signal path before FX pedals or plugins.
If you think about it, it makes sense. Processors are built to attenuate and fix problems with the audio: too muddy, use an EQ; too much bleed, use a noise gate; too dynamic, use a compressor. FX are used to enhance the aesthetics of the audio, and you don’t want to enhance the aesthetics before fixing the problems. It’s like putting the milk in before brewing the tea, so always remember: audio processors before FX.
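The order matters because the two operations don’t commute. In this toy Python sketch (the “gate” and “reverb” are deliberately crude stand-ins, not real plugins), a noise gate placed before the reverb removes a quiet hum cleanly, while placing the reverb first smears the hum loud enough that the gate can no longer catch it:

```python
def gate(signal, threshold=0.1):
    """Noise gate: silence any sample quieter than the threshold."""
    return [s if abs(s) >= threshold else 0.0 for s in signal]

def reverb(signal):
    """Toy 'reverb': each sample bleeds half its level into the next one."""
    out = list(signal)
    for i in range(len(out) - 1):
        out[i + 1] += out[i] * 0.5
    return out

noisy = [0.09, 0.09, 0.8, 0.0]  # a quiet hum, then a note

clean = reverb(gate(noisy))   # processor first: the hum is gone before the reverb
muddy = gate(reverb(noisy))   # FX first: the reverb pushes the hum over the
                              # gate's threshold, so it survives into the mix
```

With the gate first, the output is silence followed by the note and its tail. With the reverb first, the hum gets reinforced above the gate’s threshold and leaks through, which is exactly the “enhancing a problem you haven’t fixed” trap described above.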
All good productions have to start with good, high quality signal flows and signal chains. Learning to manage and process audio signals takes time, and a lot of trial and error.
You will most likely make a few dodgy recordings and mixes along the way. But once you have got the hang of it, it becomes second nature and your productions only get better. The best thing you can do though, is to pay attention and listen to the audio. If it doesn’t sound good, stop doing it and reevaluate your signal.
Now you have learned all about audio signal flow, you will hopefully be utilising your new skills when producing music! Allow us to help you amplify your music, collaborate with others, and even get your music in TV, film and more. Why not try Music Gateway for free?