One of the things that I find interesting about music is that recordings can be classified into two distinct types: analog and digital. An analog recording stores sound as a continuous physical trace, such as the groove of a vinyl record or the magnetic pattern on a cassette tape. A digital recording stores sound as data, a stream of numbers that only becomes audible when a device converts it back into sound waves. A CD is a digital format, but MP3 is far more common these days.
A whole world of different techniques and technologies exists within these two categories. There are also many other ways of thinking about and classifying the sounds we hear every day, and the way we classify sounds defines our relationship to them. This can have profound implications for the way we think of our own bodies and minds, as well as how we engage with society at large. For example, when we hear a bird singing in the woods, do we hear an act of communication or simply an expression of joy?
When I was growing up in the 1990s, I had never really thought much about these distinctions until one day my father brought home an old record player from work and showed me how to use it.
Most music today is digital, and most of that is electronic.
The first thing to understand about music is that it’s sound, and the sound waves you hear are analog: continuous waves with peaks and valleys that an oscilloscope can display. Analog synthesizers (and other electronic instruments) make these sounds by generating waveforms and then running them through filters and other effects to shape the sound into something interesting and useful.
Digital synthesizers create their waveforms by storing numbers in a table, called a wavetable, which is then accessed in a variety of ways to generate all sorts of sounds. A digital waveform can be shaped and processed in the same way as an analog one (though each does it differently), but because it’s “numbers” instead of “voltage” or “current” it can be more easily manipulated and controlled than its analog counterpart.
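To make the wavetable idea concrete, here is a minimal sketch in Python. The table size, names, and nearest-entry lookup are illustrative choices, not taken from any particular synth engine (real ones typically interpolate between table entries):

```python
import math

# Build a wavetable: one cycle of a sine wave stored as 256 numbers.
TABLE_SIZE = 256
wavetable = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def wavetable_oscillator(freq_hz, sample_rate, n_samples):
    """Read through the table at a rate that produces the requested pitch."""
    phase = 0.0
    step = freq_hz * TABLE_SIZE / sample_rate  # table entries to advance per sample
    out = []
    for _ in range(n_samples):
        index = int(phase) % TABLE_SIZE        # nearest-entry lookup, no interpolation
        out.append(wavetable[index])
        phase += step
    return out

samples = wavetable_oscillator(440.0, 44100, 100)  # 100 samples of a 440 Hz tone
```

Swapping in a different table (a sawtooth shape, a sampled single cycle of a real instrument) changes the timbre without changing any of the playback code, which is exactly the flexibility the “numbers instead of voltage” approach buys you.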
The main difference between analog and digital synthesis is how they manipulate waveforms to create sound. This is why analog synths are often called subtractive synthesizers—they take harmonics out of a rich waveform through filters to create new timbres. A digital synth, on the other hand, can also do things like resynthesis and frequency modulation that are difficult or impossible to build from analog circuitry.
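The subtractive idea can be sketched in a few lines of Python: start with a harmonically rich sawtooth, then remove high frequencies with a filter. The one-pole filter and the 0.05 coefficient here are illustrative simplifications, not a model of any real synth circuit:

```python
def sawtooth(freq_hz, sample_rate, n_samples):
    """A naive sawtooth: rich in harmonics, the classic subtractive starting point."""
    out = []
    for n in range(n_samples):
        t = (n * freq_hz / sample_rate) % 1.0
        out.append(2.0 * t - 1.0)  # ramps from -1 to +1 each cycle
    return out

def one_pole_lowpass(samples, alpha):
    """A one-pole low-pass filter: smaller alpha removes more high frequencies."""
    out, y = [], 0.0
    for x in samples:
        y = y + alpha * (x - y)   # smooth the output toward the input
        out.append(y)
    return out

raw = sawtooth(110.0, 44100, 1000)
filtered = one_pole_lowpass(raw, 0.05)  # darker, rounder version of the same note
```

An analog synth does the same thing with capacitors and resistors acting on a voltage; here the identical operation is just arithmetic on numbers.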
A digital audio recording is a series of numbers that represents the air pressure changes detected by the microphone. The higher the sampling rate, the better the digital recording can approximate what is actually happening. When you play back this stream of numbers, a digital-to-analog converter turns it back into a continuous signal that your speakers reproduce as sound.
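As a rough sketch of what “a series of numbers” means, the snippet below “records” a continuous pressure signal by evaluating it at regular instants. The tone and the `record` helper are hypothetical stand-ins for a microphone and an analog-to-digital converter:

```python
import math

def record(pressure, sample_rate, duration_s):
    """'Record' a continuous pressure signal as a plain list of numbers."""
    n = int(sample_rate * duration_s)
    return [pressure(i / sample_rate) for i in range(n)]

# A hypothetical sound source: a 440 Hz pure tone.
tone = lambda t: math.sin(2 * math.pi * 440 * t)

cd_quality = record(tone, 44100, 0.01)  # 441 numbers for 10 ms of audio
lo_fi      = record(tone, 8000, 0.01)   # 80 numbers: a much coarser approximation
```

The two lists describe the same 10 milliseconds of sound; the higher-rate one simply captures it in finer steps.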
Analog and digital recording have different strengths and weaknesses. Analog tape produces a warm, natural sound, but it degrades a little every time it’s played. Digital recordings don’t wear out, but they can sound cold and lifeless if not recorded well. Many engineers today record digitally while running signals through analog equipment to get the best of both worlds.
In blind tests, most people cannot tell the difference between analog and digital recordings. If you want to hear for yourself, search for comparison videos such as “Analog vs Digital” on YouTube.
Analog and digital signals are the two ways we represent audio. Both can represent the same underlying sound; the difference lies in how they encode it, and a handful of technical advantages make digital audio the more convenient choice today.
I’m going to be honest with you: this article is not likely to make you a better musician. The difference between analog and digital signals is kind of esoteric and technical. But understanding it will give you a better understanding of how electronic music works, so read on if you dare…
Imagine that you have a microphone set up in front of a singer. As the singer sings, the microphone converts the sound waves into a continuous electrical signal. To store that signal digitally, an analog-to-digital converter measures it thousands of times per second and turns it into a stream of numbers. I know that sounds impossible, but it’s not. It just requires a lot of fast, precise measurement.
The first step in converting that electrical signal into digital data is sampling—a process by which we record the amplitude of the wave at regular intervals. The second step, quantization, rounds each measured amplitude to the nearest value the format can store.
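The two steps above can be sketched together in Python. The 1 kHz input tone is a hypothetical stand-in for the microphone signal; the 16-bit range matches CD audio:

```python
import math

SAMPLE_RATE = 44100  # samples per second (the CD standard)

def sample_and_quantize(signal, n_samples, bits=16):
    """Sample a continuous signal, then round each amplitude to an integer level."""
    max_level = 2 ** (bits - 1) - 1   # 32767 for 16-bit audio
    samples = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        amplitude = signal(t)                          # continuous value in [-1, 1]
        samples.append(round(amplitude * max_level))   # now just an integer
    return samples

# A hypothetical input: a 1 kHz tone, as a microphone might pick it up.
numbers = sample_and_quantize(lambda t: math.sin(2 * math.pi * 1000 * t), 8)
```

What comes out is exactly the “stream of numbers” described above: eight plain integers, each one a snapshot of the wave’s amplitude at a fixed instant.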
Analog and digital signals are both used to transmit information, usually as electrical signals: in both systems, information such as audio or video is transformed into an electrical form. The main difference is that an analog signal carries its information as a continuous electrical signal, whereas digital data is transmitted as a series of discrete pulses.
The concept of analog and digital is best explained with the help of an example. An analog clock, with an hour hand and a minute hand, uses an analog system. The position of the hands at any point in time completely represents the time; for example, at quarter past five the hour hand sits just past the 5 and the minute hand points at the 3. Because the hands sweep continuously, they pass through infinitely many positions in a day—between any two positions there is always another one. Analog systems use such continuous signals to represent information, as they can take any value between two points.
On the other hand, digital clocks use a digital system to represent time. Here you have four digits representing hours and minutes, and the display jumps from one value to the next in discrete steps: with minute resolution there are only 24 × 60 = 1,440 distinct readings in a day, and even adding seconds gives just 24 × 60 × 60 = 86,400. A digital system always represents information as a finite set of discrete values.
While the world is full of digital devices, analog devices are not band-limited by any sampling process: the signal that comes out is a continuous copy of what went in (though real circuits still have finite bandwidth). Digital devices, on the other hand, approximate the audio signal in small pieces and can only represent frequencies up to half the sampling rate—the Nyquist limit. What this means is that the higher the sampling rate, the closer one gets to an accurate representation of audible sounds. It also means that as you increase the sampling rate beyond what is required to capture audible sound (usually taken as 20 kHz, so a rate somewhat above 40 kHz), you are increasing accuracy at the extremes but not adding anything you can actually hear.
In short, if a sound is out of range of human hearing, it cannot be perceived no matter how high the sampling rate or how good your speakers are.
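The flip side of the Nyquist limit is that frequencies above half the sampling rate don’t just disappear—they fold back (alias) into the audible range. The sketch below shows this with pure math: at a 44.1 kHz sampling rate, the samples of a 30 kHz tone are indistinguishable from those of a phase-inverted 14.1 kHz tone (44,100 − 30,000 = 14,100):

```python
import math

SAMPLE_RATE = 44100

def sample_tone(freq_hz, n_samples):
    """Sample a pure sine tone of the given frequency."""
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

# A 30 kHz tone lies above the Nyquist limit (44100 / 2 = 22050 Hz)...
above_nyquist = sample_tone(30000, 32)
# ...so its samples match those of a 14.1 kHz tone with inverted sign.
alias = sample_tone(SAMPLE_RATE - 30000, 32)
```

This is why real converters put a low-pass filter in front of the sampler: anything above the Nyquist limit must be removed before sampling, or it comes back as an audible ghost.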
This is a question that many people wonder about but few can answer correctly. The difference between analog and digital recordings relates to the way the signal is recorded and stored. The waveform of an analog recording is a continuous wave, representing the air pressure changes caused by vibrating objects (the sound source). The waveform of a digital recording is a series of numbers that record the amplitude of the wave at regular intervals.
To understand better the difference between these two methods, imagine you were able to hold a piece of graph paper up to your stereo speaker while it was playing music. The vibrations in the air pressure would cause the graph paper to move up and down, tracing out a pattern that represents the original sound wave. If you could measure how far up or down the graph paper moved for each point in time, and then connect all those points with a line, you would have what’s known as an analog recording. Since this is pretty difficult to do with real-world graph paper and speakers, we’ll just draw it here:
Analog Recording: Continuous Waveform
The second method is known as sampling; in this case, we will still use our imaginary piece of graph paper, but instead of tracing its movement continuously, we’ll measure it only at regular intervals—say, 44,100 times per second—and write down a number for each measurement.