Synthesis modulation

The oscillator, filter, amplifier, envelope and LFO are the basic elements of primary and secondary modulation in sound synthesis. Although these modules exist in every synthesizer, their exact shape differs a little from one instrument to another. I’ll explain what they look like in a SonicCell synthesizer, using a brass patch and a piano patch.

BRASS PATCH  PIANO PATCH

How is a patch structured?

In this synthesizer, patches are the basic sound configurations. Each patch can be configured by combining up to four tones. A tone is the smallest unit of sound; however, it is not possible to play a tone by itself. The patch is the unit of sound that can be played, and the tones are the basic building blocks that make up the patch.

Screenshot 2015-06-04 11.45.43

How the four tones are combined is determined by the Structure Type parameter.

Screenshot 2015-06-04 11.40.48

As we can see in the scheme, we have three modules: OSCILLATOR (WG), FILTER (TVF) and AMPLIFIER (TVA). The audio signal is created in the oscillator, goes through the filter and reaches the amplifier.

An ENVELOPE can be applied to each of these modules to modulate the signal.

Two LFOs are available in the SonicCell patches to modulate the signal in each of the modules (oscillator, filter and amplifier). A rough sketch of this chain in code follows.
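To make the chain concrete, here is a minimal sketch in Python/NumPy of the WG → TVF → TVA path. The module names mirror the SonicCell diagram, but the filter is a generic one-pole low-pass stand-in, not Roland’s actual TVF, and all parameter values are illustrative.

```python
# A minimal sketch of the WG -> TVF -> TVA chain described above.
import numpy as np

SR = 44100  # sample rate in Hz

def wg(freq, dur):
    """Wave Generator: a sawtooth oscillator as the raw waveform."""
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * ((t * freq) % 1.0) - 1.0

def tvf(x, cutoff):
    """Time Variant Filter: a one-pole low-pass with a fixed cutoff (a stand-in)."""
    a = np.exp(-2.0 * np.pi * cutoff / SR)
    y, out = 0.0, np.empty_like(x)
    for i, s in enumerate(x):
        y = (1.0 - a) * s + a * y
        out[i] = y
    return out

def tva(x, envelope):
    """Time Variant Amplifier: multiply the signal by an amplitude envelope."""
    return x * envelope

# Audio path: oscillator -> filter -> amplifier, as in the scheme.
raw = wg(220.0, 1.0)
filtered = tvf(raw, 1200.0)
env = np.linspace(1.0, 0.0, len(filtered))   # crude decay-only envelope
voice = tva(filtered, env)
```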

– WG (Wave Generator) (= VCO, voltage-controlled oscillator)

This is the synth’s oscillator. It creates a specific waveform that is the basis of the sound and determines how the pitch of the sound will change.

BRASS OSCILLATOR       PIANO WG

Here you can see the envelope of the four tones in each patch. In both patches, brass and piano, the same envelope is applied to each tone.

– TVF (Time Variant Filter) (= VCF, voltage-controlled filter). Specifies how the frequency components of the sound will change.

BRASS FILTER       PIANO TVF

The same envelope is applied to the four piano tones. In the brass patch, on the other hand, two different envelopes are applied, one to each pair of tones.

– TVA (Time Variant Amplifier) (= VCA, voltage-controlled amplifier). Specifies the volume changes and the sound’s position in the stereo sound field.

BRASS AMPLIFIER      PIANO TVA

We can observe the difference between the envelopes in the brass and piano patches. The shape of the curve shows the attack time, the decay time and the release time.

– Envelope

You use an envelope to shape changes that occur to a sound over time. There are separate envelopes for pitch, the TVF (filter), and the TVA (volume), as we have seen above.
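As an illustration, here is a hedged sketch of a classic ADSR envelope, the kind applied to pitch, filter and volume. The linear segments are a simplification (hardware envelopes are often exponential), and all the times below are made-up values.

```python
# A sketch of a classic ADSR (attack-decay-sustain-release) envelope.
import numpy as np

SR = 44100

def adsr(attack, decay, sustain, release, note_len):
    """Amplitude envelope for a note held for note_len seconds; times in seconds."""
    a = np.linspace(0.0, 1.0, int(SR * attack), endpoint=False)     # rise to peak
    d = np.linspace(1.0, sustain, int(SR * decay), endpoint=False)  # fall to sustain
    held = max(note_len - attack - decay, 0.0)
    s = np.full(int(SR * held), sustain)                            # hold while key is down
    r = np.linspace(sustain, 0.0, int(SR * release))                # fade after key release
    return np.concatenate([a, d, s, r])

# Illustrative values: fast attack, moderate decay, 70% sustain, long release.
env = adsr(attack=0.01, decay=0.2, sustain=0.7, release=0.5, note_len=1.0)
```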

– LFO (Low Frequency Oscillator)

Use the LFO to create cyclic changes (modulation) in a sound. The waveform applied in both patches, brass and piano, is a triangle wave. The SonicCell has two LFOs; either one or both can be applied to affect the WG (pitch), the TVF (filter) and/or the TVA (volume). The sketch after the list below shows all three routings.

  • When an LFO is applied to the WG pitch, a vibrato effect is produced.
  • When an LFO is applied to the TVF cutoff frequency, a wah effect is produced.
  • When an LFO is applied to the TVA volume, a tremolo effect is produced.
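Here is a rough sketch of those three routings with a 5 Hz triangle LFO. The depths and frequencies are made-up values, and the wah is shown only as the swept cutoff, not a full filter.

```python
# The three LFO destinations above, sketched with a triangle LFO.
import numpy as np
from scipy.signal import sawtooth

SR = 44100
t = np.arange(SR) / SR                             # one second of time values
lfo = sawtooth(2.0 * np.pi * 5.0 * t, width=0.5)   # triangle wave at 5 Hz

# 1) LFO -> WG pitch: vibrato (pitch swings +/- 6 Hz around 440 Hz).
phase = 2.0 * np.pi * np.cumsum(440.0 + 6.0 * lfo) / SR
vibrato = np.sin(phase)

# 2) LFO -> TVF cutoff: a wah effect (the cutoff frequency sweeps with the
#    LFO; feed this into any low-pass filter).
cutoff = 800.0 + 600.0 * lfo

# 3) LFO -> TVA volume: tremolo (amplitude rises and falls with the LFO).
tremolo = np.sin(2.0 * np.pi * 440.0 * t) * (0.5 + 0.25 * lfo)
```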

BRASS LFO     PIANO LFO

EQ FILTERS

I’m trying a multi-band EQ on one of my works. It’s a piano performance with an introduction performed with some chords (basically harmony at the beginning), followed by melody and accompaniment.

What I’d like to do is:

  • highlight the melody; at some points the melody and accompaniment are not quite clear, and I want to give the different frequencies their own space.
  • boost the low frequencies to give more support and warmth to the performance.

Let’s listen to the performance before any filter:


I’ve chosen a multi-band EQ from WAVES. We have seven types of filters here (a generic code sketch of the two filter families follows the list):

  • The five in the middle are all SHELVING FILTERS. We can manipulate the frequency (F) and the width of the curve (Q).
    • the three in the middle, LMF, MF and HMF, are mid-range filters.
    • the other two are the LF (low-range) filter and the HF (high-range) filter.
  • The two at the edges are the LOW CUT (high-pass) and HIGH CUT (low-pass) FILTERS. We can manipulate the frequency but not the shape of the curve.
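The exact curves of the Waves plugin aren’t public, so here is a generic sketch of the two filter families using SciPy: a low cut (high-pass) like the edge bands, and an RBJ-cookbook low shelf like the LF band. The cutoff frequencies and gain are placeholder values.

```python
# Generic versions of the two filter families: a cut filter (edge bands)
# and a shelving filter (LF band), using textbook designs.
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

SR = 44100

def low_cut(x, f0):
    """Low cut (high-pass): removes rumble below f0 Hz."""
    sos = butter(2, f0, btype="highpass", fs=SR, output="sos")
    return sosfilt(sos, x)

def low_shelf(x, f0, gain_db):
    """Low shelf (RBJ cookbook biquad): boost everything below f0 by gain_db."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / SR
    cosw, alpha = np.cos(w0), np.sin(w0) / np.sqrt(2.0)  # shelf slope S = 1
    sqA = np.sqrt(A)
    b = np.array([A * ((A + 1) - (A - 1) * cosw + 2 * sqA * alpha),
                  2 * A * ((A - 1) - (A + 1) * cosw),
                  A * ((A + 1) - (A - 1) * cosw - 2 * sqA * alpha)])
    a = np.array([(A + 1) + (A - 1) * cosw + 2 * sqA * alpha,
                  -2 * ((A - 1) + (A + 1) * cosw),
                  (A + 1) + (A - 1) * cosw - 2 * sqA * alpha])
    return lfilter(b / a[0], a / a[0], x)

# The FIRST step below, as code: cut below 40 Hz, then boost +3 dB below 120 Hz.
# piano = low_shelf(low_cut(piano, 40.0), 120.0, 3.0)
```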

MULTIBAND-EQ

FIRST

I’m going to activate the LOW CUT (high-pass) FILTER in order to eliminate noise or rumble below the fundamental of the sound. At the same time I want to use the LF SHELVING filter to boost my lowest frequencies and give warmth to my performance.

LOW CUT (HIGH PASS) FILTER                    LOW SHELVING

Let’s listen to how it sounds after passing through the low filters:


SECOND

One of my goals was to give different space to the melody and the accompaniment, so I’m setting the mid-range SHELVING filters to give the proper space to the accompaniment.

MID-RANGE SHELVING

Let’s listen to how it sounds after applying the mid-range filters:

FINALLY

I want to boost the highest frequencies to guide the listener to the melody. To achieve that, I’m using the HF SHELVING FILTER together with the HIGH CUT (low-pass) FILTER.

HIGH SHELVING                                     HIGH CUT (LOW PASS) FILTER

All in all, this is the final performance after applying the different EQ bands. They seem to have enhanced the good qualities of the instrument, and now it’s clearer and easier to listen to.


Now we can save the settings as a preset to use on another occasion. It’s as simple as going to the options, picking «save as», naming it, and it’s done.

SAVING EQ PRESET

Dynamics: compression process

I’m going to explain the main parts of a compressor and how I have worked with compressors in one of my projects.

I have a percussion track recorded with some samplers. I need to adjust the performance in two senses: reducing the dynamic differences between some of the hits and shortening the tail of the sound on some of the hits.

This is how it sounds before compression

To do the adjustment, I have used two different types of compressors:

  • a downward compressor
  • a limiter

Screenshot 2015-05-21 12.46.40

Reducing the dynamic differences

First I want to reduce the dynamic differences to make the sound more compact. I have manipulated four parameters in the downward compressor. I need to set:

  1. The point where the compressor starts to work: THRESHOLD
  2. How much the level above that point is reduced: RATIO
  3. The time the compressor takes to react: ATTACK
  4. The time the compressor takes to come back to unity gain: RELEASE

Once I have set up all of these parameters, the compressor will reduce the dynamic differences I don’t want to be heard in my music.
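As a sketch of what those four settings do, here is a minimal downward compressor in Python/NumPy. Real plugins add a knee, look-ahead and make-up gain, which are left out, and the parameter values in the usage comment are illustrative, not my actual settings.

```python
# A minimal downward compressor: THRESHOLD, RATIO, ATTACK and RELEASE.
import numpy as np

SR = 44100

def compress(x, threshold_db, ratio, attack_s, release_s):
    atk = np.exp(-1.0 / (SR * attack_s))    # smoothing while the level rises
    rel = np.exp(-1.0 / (SR * release_s))   # smoothing while the level falls
    env, out = 0.0, np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        coeff = atk if level > env else rel  # ATTACK going up, RELEASE coming back
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = max(level_db - threshold_db, 0.0)   # amount above THRESHOLD
        gain_db = -over * (1.0 - 1.0 / ratio)      # reduce the overshoot by RATIO
        out[i] = s * 10.0 ** (gain_db / 20.0)
    return out

# Illustrative use on a percussion track loaded as a NumPy array "drums":
# squashed = compress(drums, threshold_db=-18.0, ratio=4.0,
#                     attack_s=0.005, release_s=0.1)
```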

Shortening the tail of the sound

The second thing I want is to shorten the part of the sound that keeps ringing for a long time after the hit is produced.

To do that, I am using a Limiter and setting its Release parameter very short. I have set it to 2.0 ms, which means the limiter stops acting two milliseconds after the signal falls back below the threshold.
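In the sketch above, a limiter is just the same compressor with a very high, effectively brick-wall ratio; here is how my 2.0 ms release would look with it («drums» and the threshold value are placeholders):

```python
# A limiter as the compress() sketch above with a brick-wall ratio and the
# fast 2.0 ms release described in the text (threshold is a placeholder).
limited = compress(drums, threshold_db=-6.0, ratio=100.0,
                   attack_s=0.0005, release_s=0.002)
```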

This is how it sounds after compression

By clicking on the picture below you can see both devices, the specific settings I have used, and the explanations.

dynamics

The Signal Flow in the «La Puela» project

I’m explaining the Signal Flow in one of my projects and why I have configured it that way.

To explain it, I’m alternating between two pictures that represent two windows in my DAW: the tracks’ window and the mixing board’s window.

TRACKS’ WINDOW

Project tracks

MIXING BOARD’S WINDOW

Project mixing board

In this project I have two different routings in the signal flow:

  • track stack
  • sends

TRACK STACK

A «track stack» is a group of tracks. We create a track stack to collect instruments that have similarities in sound, or to group tracks that we want to process together with similar effects.

I have created this track stack to accomplish both: grouping instruments with similar features and applying similar effects.

When a track stack is created, the system creates a BUS. Through this BUS we send the signal of each single track in the group to the STACK. So the output of each individual track is set to the track stack’s bus number.

Track stack with single tracks, explained

Track Stack mixing board, explained

In these track stacks I have inserted the dynamics effects I want to apply to the tracks: EQ, COMPRESSOR and LIMITERS. The signal goes through these effects and finally reaches the output.

SENDS

This is another kind of routing I have used in this project. It is a back-and-forth routing, sketched in code after the numbered list below.

  1. I have created a Track Stack to collect all the instruments with sound similarities together and to apply the same effect to all of them. In this case they are all string instruments.
  2. Inside this «String Group» there are sub-groups: Violin I, Violin II, Viola, Cello and Bass. I have created five Auxiliary Channels, one to receive the send from each of these sub-groups.
  3. In each of these Auxiliary Channels I have inserted the dynamics effects I want to apply to that sub-group.
  4. I have sent the signal back to the Track Stack, where I have inserted a delay effect: REVERB.
  5. Finally the signal goes to the output.
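Here is a conceptual sketch of that back-and-forth routing in Python/NumPy. The signals are dummy noise and the «dynamics» and «reverb» functions are crude placeholders, but the flow matches the five steps: sub-groups → aux channels with dynamics → back to the stack with the shared reverb → output.

```python
# The send/return flow above, as a runnable sketch with dummy signals.
import numpy as np

SR = 44100
rng = np.random.default_rng(0)

# Steps 1-2: five sub-groups (Violin I, Violin II, Viola, Cello, Bass),
# faked here as quiet noise so the sketch runs on its own.
subgroups = [rng.standard_normal(SR) * 0.1 for _ in range(5)]

def dynamics(x):
    """Placeholder for the EQ/compressor inserted on each aux channel."""
    return np.clip(x, -0.5, 0.5)

def reverb(x):
    """Placeholder send effect: one 45 ms echo, not a real reverb."""
    d = int(0.045 * SR)
    return x + 0.3 * np.concatenate([np.zeros(d), x[:-d]])

# Step 3: each sub-group passes through its own aux with dynamics.
auxes = [dynamics(g) for g in subgroups]

# Step 4: the auxes sum back into the String Group stack, which carries the
# shared reverb; step 5: the stack feeds the main output.
string_stack = np.sum(auxes, axis=0)
output = reverb(string_stack)
```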

sends

Sends mixing board

I have other tracks in my project. They contain six SAMPLER instruments that I have added to make the performance more realistic.

Sampler tracks

I have created an Auxiliary Channel and inserted a delay effect (reverb) on it to process them. So the signal flows to the Auxiliary Channel, through the effect, and then to the output.

Sampler mixing board

OUTPUT

Finally, I have another two Single Tracks in my project that go directly to the output.

Single tracks

In the output I have inserted a plugin that checks the energy of my output signal, and a compressor in case I need to raise the energy.

OUTPUT

Here you have a video where I explain all these steps.

PREPRODUCTION PROCESS CHECKLIST

I’ve created this Prezi presentation to show you five steps in the Preproduction Process we need to be aware of. These steps are:

  1. Proper Project Name and Location
  2. Set Digital Audio Preferences
  3. Set the Recording File Type
  4. Hardware Settings
  5. Buffer Size

When watching the presentation, you can play a YouTube video with a short tutorial for each step. In these videos I explain how to set up all of these options in my Logic DAW.

Click on the image below to see the presentation.

Screenshot 2015-05-07 00.13.18

Sonic Cell: Expandable synthesizer module with Audio Interface

My name is Ana Silva and I’m from Asturias in the north of Spain. I’ve been producing music for some years since my first Berklee online experience in the Orchestration course.

My audio interface is a device called «Sonic Cell». It’s not only an interface but a synthesizer module with a MIDI interface and other features.

Although I’ve been using my device for some years, I feel I’m not informed enough about its audio interface properties, so I’ve decided to investigate them.

While going through these properties, I’ll mention some of the concepts we’ve covered during the first week of this Music Production Coursera MOOC.

In my opinion the SonicCell is ideal for musicians who use a PC as the core of their writing, recording, and performing universe. More than a mere sound module, the SonicCell is equipped with a built-in USB audio interface.

I have simply connected the SonicCell directly to my computer’s USB port, and I can record and create music with no additional hardware required.

In addition, I can plug a microphone, guitar, or other instruments into the SonicCell and record my live audio tracks directly into my computer. And since the SonicCell can help minimize the burden on the computer’s processor, I get more efficient and stable performance.


These are the features of my device in its AUDIO INTERFACE SECTION:

– Number of Audio Input/Output Channels

Alternatively referred to as an input channel, an I/O channel is a line of communication between the input/output bus or memory and the CPU or computer peripherals.

An audio channel or audio track is an audio signal communications channel in a storage device, used in operations such as multi-track recording and sound reinforcement.

An audio signal is a representation of sound, typically as an electrical voltage. Audio signals have frequencies in the audio frequency range of roughly 20 to 20,000 Hz (the limits of human hearing). Audio signals may be synthesized directly, or may originate at a transducer such as a microphone, musical instrument pickup, phonograph cartridge, or tape head. Loudspeakers or headphones convert an electrical audio signal into sound. Digital representations of audio signals exist in a variety of formats.

Input: one stereo pair (MIC, GUITAR: mono / LINE: stereo)
Output: one stereo pair

– Signal Processing

PC interface: 24 bits
AD/DA Conversion: 24 bits

DA (DAC, D/A, D2A or D-to-A), or digital-to-analog converter, is a device that converts digital data (usually binary) into an analog signal (current, voltage, or electric charge).

An analog-to-digital converter, AD (ADC), performs the reverse function. Unlike analog signals, digital data can be transmitted, manipulated, and stored without degradation, albeit with more complex equipment.

We need a DAC in our DAW to convert the digital signal to analog to drive an earphone or loudspeaker amplifier in order to produce sound (analog air pressure waves).

What does 24-bit resolution mean?

Resolution in this context refers to the conversion of an analog voltage to a digital value in a computer (and vice versa). A computer is a digital machine and thus stores a number as a series of ones and zeroes.

If you are storing a digital 2-bit number you can store 4 different values: 00, 01, 10, or 11. Now, say you have a device which converts an analog voltage between 0 and 10 volts into a 2-bit digital value for storage in a computer. This device will give digital values as follows:

Voltage            2-Bit Digital Representation
0 to 2.5                          00
2.5 to 5                          01
5 to 7.5                          10
7.5 to 10                         11

So in this example, the 2-bit digital value can represent 4 different numbers, and the voltage input range of 0 to 10 volts is divided into 4 pieces, giving a voltage resolution of 2.5 volts per step.

A 3-bit digital value can represent 8 (2³) different numbers. A 12-bit digital value can represent 4096 (2¹²) different numbers. A 16-bit digital value can represent 65,536 (2¹⁶) different numbers. It might occur to you at this point that a digital input could be thought of as a 1-bit analog-to-digital converter: low voltages give a 0 and high voltages give a 1.
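The same idea in code: a tiny quantizer for the 0-10 V example, showing how the step size (the resolution) shrinks as the bit depth grows.

```python
# Map a 0-10 V analog value to an n-bit code and report the step size.
def quantize(voltage, bits, v_min=0.0, v_max=10.0):
    levels = 2 ** bits
    step = (v_max - v_min) / levels                        # volts per step
    code = min(int((voltage - v_min) / step), levels - 1)  # clamp to top code
    return code, step

for bits in (2, 12, 16, 24):
    code, step = quantize(6.3, bits)
    print(f"{bits:>2}-bit: {2**bits:>8} levels, {step:.7f} V per step")

# At 2 bits, 6.3 V lands in the 5-7.5 V band, i.e. code 2 (binary 10),
# exactly as in the table above.
```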

What is the difference between 12-bit, 16-bit, and 24-bit resolution?

When you see analog input DAQ devices from various manufacturers called 12-bit, 16-bit, or 24-bit, it generally just means they have an ADC (analog-to-digital converter) that returns that many bits. When an ADC chip returns 16 bits, it is probably better than a 12-bit converter, but not always. The simple fact that a converter returns 16 bits says little about the quality of those bits.

It is hard to simply state «the resolution» of a given device. What really matters is actual measured data that tells you the resolution of a device, including its typical inherent noise.

If you look at a device called «24-bit» just because it has a converter that returns 24 bits of data per sample, you will find that it typically provides around 20 effective bits, or 18 noise-free bits.

With these devices, a manufacturer might mention a 24-bit ADC (as that is what people look and search for), but it is better not to call them «24-bit» and to stick with the effective resolution.

Another interesting thing about a typical 24-bit sigma-delta converter is that you can look at it as having only a 1-bit ADC inside; with timing and math, it can produce 24-bit readings.

– Sampling Frequency

AD/DA Conversion: 44.1/48/96 kHz

In signal processing, sampling is the reduction of a continuous signal to a discrete signal. A common example is the conversion of a sound wave (a continuous signal) to a sequence of samples (a discrete-time signal).

A sample is a value or set of values at a point in time and/or space.

A sampler is a subsystem or operation that extracts samples from a continuous signal.

A theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points.

Sampling frequency (or sample rate) is the number of samples per second in a sound. For example, if the sampling frequency is 44,100 hertz, a recording with a duration of 60 seconds will contain 2,646,000 samples.
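The arithmetic behind that example is simply samples = sample rate × duration:

```python
sample_rate = 44100            # Hz (CD quality)
duration = 60                  # seconds
print(sample_rate * duration)  # 2646000 samples (per channel)
```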

Usual values for the sampling frequency are 44,100 Hz (CD quality) and 22,050 Hz (just enough for speech, since speech does not contain relevant frequencies above 11,025 Hz; see aliasing).

– Nominal Level

Usually a NOMINAL input level (or output level, or ambient temperature, or power supply voltage, etc.) is a characteristic of a device within which its other properties are guaranteed.

There is some confusion over the use of the term «nominal», which is often used incorrectly to mean «average or typical». The relevant definition in this case is «as per design»; gain is applied to make the average signal level correspond to the designed, or nominal, level.

Nominal implies that something is according to plan, with only insignificant differences. A nominal level implies a “normal” or, perhaps, typical level in equipment. The nominal operating level of a piece of equipment is thought of as the typical signal level with which it operates. Though this is somewhat vague, the phrase often gets generically used in audio to specify a signal level. For example, on equipment with +4 dBu inputs and outputs the nominal operating level is said to be +4 dBu. This level, which is also its zero reference level, is what it is designed to deal with in terms of typical audio program material. There is sufficient headroom above this level to accommodate peaks or loud sections of audio without distortion. When we refer to nominal levels in audio equipment we are generally referring to zero reference levels. The two phrases are often used interchangeably even though “zero reference” is much more precise.

Nominal level is the operating level at which an electronic signal processing device is designed to operate. The electronic circuits that make up such equipment are limited in the maximum signal they can output and the low-level internally generated electronic noise they add to the signal. The difference between the internal noise and the maximum output level is the device’s dynamic range. When a signal is chained improperly through many devices, the dynamic range of the signal is reduced. The nominal level is the level that these devices were designed to operate at, for best dynamic range.

In audio, a related measurement, signal-to-noise ratio, is usually defined as the difference between the nominal level and the noise floor, leaving the headroom as the difference between nominal and maximum output. It is important to realize that the measured level is a time average, meaning that the peaks of audio signals regularly exceed the measured average level. The headroom measurement defines how far the peak levels can stray from the nominal measured level before clipping. The difference between the peaks and the average for a given signal is the crest factor.
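These relationships are easy to verify in code. The sketch below measures the RMS (average) level, the peak level and the crest factor of a test tone; for a pure sine the crest factor is about 3 dB, while percussive material is usually much larger.

```python
# RMS level, peak level and crest factor of a signal, all in dB.
import numpy as np

SR = 44100
x = np.sin(2.0 * np.pi * 440.0 * np.arange(SR) / SR)  # a 440 Hz test tone

rms_db = 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))    # time-averaged level
peak_db = 20.0 * np.log10(np.max(np.abs(x)))          # highest instantaneous level
crest_db = peak_db - rms_db                           # ~3 dB for a sine

print(f"RMS {rms_db:.1f} dBFS, peak {peak_db:.1f} dBFS, crest {crest_db:.1f} dB")
```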

Nominal Input Level

Input jack (MIC/GUITAR/LINE (L)):
Mic: -50 to -30 dBu
Guitar: -30 to -10 dBu
Line: -30 to -10 dBu

Input jack (LINE (R)):
Line: -30 to -10 dBu

– Nominal Output Level

Output jacks: -10 dBu



All in all, after this research I know my audio interface more deeply, by understanding some concepts that I’m going to find in this and other devices.

I hope it was useful for you too. Another little step in the vast world of audio and music production.