General - Sound Perception
Sound is the perception of acoustic waves, which means that if a tree falls and no one is there to hear it, it doesn’t make a sound!
Understanding how we perceive sound is an important part of being a thought leader in the space and of being able to regulate sound well. In this section we hope to answer:
- What is A-Weighting?
- What is C-Weighting?
- Why have A and C weightings?
- What is Sound Localisation?
- What is a Head Related Transfer Function (HRTF)?
What is A-Weighting?
A-Weighting is a filter that attempts to mimic the sensitivity of human hearing at different frequencies.
If a sound level has been recorded after applying the A-weighting filter (often described as A-weighting the signal), the measurement will have an A in the unit and be displayed as 80 dB(A), or will have an A in the metric, such as L_A = 80 dB or ‘A-weighted sound pressure level 80 dB’. The A-weighting is used for the majority of noise exposure regulations and for regulations relating to average sound levels.
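As a rough illustration, here is a minimal Python sketch of that filter, using the analytic A-weighting magnitude response defined in IEC 61672 and normalising it to 0 dB at 1 kHz (the function name a_weight_db is ours, not from any library):

```python
import numpy as np

def a_weight_db(freq_hz):
    """A-weighting gain in dB at freq_hz (Hz): the analytic magnitude
    response from IEC 61672, normalised to 0 dB at 1 kHz."""
    def r_a(f):
        f = np.asarray(f, dtype=float)
        return (12194.0**2 * f**4) / (
            (f**2 + 20.6**2)
            * np.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
            * (f**2 + 12194.0**2)
        )
    return 20 * np.log10(r_a(freq_hz) / r_a(1000.0))

# Sanity check against the published weighting table:
print(a_weight_db([50, 100, 1000, 4000]))  # ≈ [-30.2, -19.1, 0.0, +1.0]
```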
What is C-Weighting?
C-Weighting is a filter that is similar to the A-weighting but lets much more low-frequency sound through.
If a sound level has been recorded after applying the C-weighting filter (often described as C-weighting the signal), the measurement will often have a C in the unit and be displayed as 80 dB(C), or will have a C in the metric, such as L_C = 80 dB or ‘C-weighted sound level 80 dB’. The C-weighting is often used for peak sound levels, such as L_Cpeak in the noise at work regulations.
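The C-weighting can be sketched the same way, again assuming the IEC 61672 analytic response; note in the printed values how little it attenuates 50 Hz compared with the A-weighting above:

```python
import numpy as np

def c_weight_db(freq_hz):
    """C-weighting gain in dB at freq_hz (Hz): the analytic magnitude
    response from IEC 61672, normalised to 0 dB at 1 kHz."""
    def r_c(f):
        f = np.asarray(f, dtype=float)
        return (12194.0**2 * f**2) / ((f**2 + 20.6**2) * (f**2 + 12194.0**2))
    return 20 * np.log10(r_c(freq_hz) / r_c(1000.0))

# The C-weighting rolls off gently at low frequency, where the
# A-weighting applies roughly -30 dB at 50 Hz:
print(c_weight_db([50, 100, 1000, 4000]))  # ≈ [-1.3, -0.3, 0.0, -0.8]
```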
Why have A and C weightings?
At most noise levels the A-weighting is a good approximation; however, at very loud sound levels the C-weighting is often used as a better approximation, because hearing sensitivity flattens out across frequency as levels rise.
If we did not weight the signal before working out how loud it is, we would overestimate the impact of low frequencies on human hearing. For example, a 50 Hz tone measured at 80 dB SPL contributes only about 50 dB(A) once A-weighted.
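To make that concrete, here is a small sketch (with a hypothetical helper, sum_levels_db) that combines two incoherent tone levels, first unweighted and then with the tabulated A-weighting correction applied:

```python
import numpy as np

def sum_levels_db(levels_db):
    """Combine incoherent sound levels (dB) into a single total level."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10 * np.log10(np.sum(10.0 ** (levels_db / 10)))

# Two tones at 80 dB SPL each: one at 50 Hz, one at 1 kHz.
print(sum_levels_db([80.0, 80.0]))          # unweighted:  ≈ 83.0 dB
# Apply the tabulated A-weighting correction (-30.2 dB at 50 Hz) first:
print(sum_levels_db([80.0 - 30.2, 80.0]))   # A-weighted: ≈ 80.0 dB(A)
```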
What is Sound Localisation?
Localisation is the ability to understand where a sound is coming from. Humans are particularly good at this and can locate a sound source to within only a few degrees of error. There are three main cues that humans use to localise sound:
- Interaural Level Difference (ILD) - The SPL difference between the two ears.
- Interaural Time Delay (ITD) - The time delay between the two ears.
- Spectral Cues - Particular frequencies that are amplified or missing from the signal.
When a sound arrives from an angle, it will be louder at the closest ear and quieter at the furthest ear. This is particularly true for high frequency sounds, where the head creates a sound ‘shadowing’ effect. The brain understands the impact of the angle and distance of the sound source on the Interaural Level Difference (ILD) and can decode this into a location.
When a sound arrives from an angle, it will also hit the closest ear before the furthest ear. When the sound is very transient (like a clap) or complex (like speech), humans can decode this Interaural Time Delay (ITD) into a location, as sketched below.
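As a rough sketch of the size of these delays, the classic Woodworth spherical-head approximation below estimates the ITD as a function of source azimuth; the head radius is an assumed average, not a measured value:

```python
import numpy as np

HEAD_RADIUS_M = 0.0875   # assumed average head radius (~8.75 cm)
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C

def itd_seconds(azimuth_deg):
    """Woodworth spherical-head estimate of the interaural time delay
    for a distant source (0° = straight ahead, 90° = directly to one side)."""
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (np.sin(theta) + theta)

for az in (0, 30, 60, 90):
    print(f"{az:3d}°  ITD ≈ {itd_seconds(az) * 1e6:4.0f} µs")
# Maximum at 90° is ≈ 660 µs, i.e. well under a millisecond, which is
# the scale of delay the brain resolves when localising sound.
```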
If the human head was a sphere with only holes for ears it would be really hard to work out if a sound was coming from above you or below you, as sounds inside of the ‘cone of confusion’ would have the same ITD and ILD. Thankfully, the pinna 👂 (part of the human outer ear) encodes even more information into the signal based on the source location.
Sound will reflect off the features of the pinna and cancel out at specific frequencies depending on the angular height of the sound source. Sound from behind you will also get a small amount of shadowing from the pinna, which helps differentiate sounds front to back. All of these sound localisation cues are wrapped up in a package called a Head Related Transfer Function (HRTF), which is discussed next.
What is a Head Related Transfer Function (HRTF)?
Head Related Transfer Functions (HRTFs) or Head Related Impulse Responses (HRIRs) are a concept from sound localisation (see Sound Perception | What is Sound Localisation? above). An HRTF describes how the human body, head and ears change a sound based on the source location.
For every possible sound source position, there is a unique HRTF to describe how the sound is changed and received at each ear. We can use these to create an artificial sound environment and to trick your brain into thinking it is in a 3D space when it isn’t. If you haven’t experienced this, put on some headphones and close your eyes and listen to this: Virtual Barber Shop (Audio...use headphones, close your eyes).
The recording above was taken using a dummy head which aims to replicate all of the HRTFs using realistic geometry for the head, shoulders and ears. As each person’s head is a slightly different size and our ears are all different, every person will have their own personal HRTFs. The more similar you are to the dummy head used in the recording, the more convincing the effect will be.
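As a sketch of how HRTFs/HRIRs are actually applied, rendering is just convolution of a mono signal with the left-ear and right-ear impulse responses for the chosen direction. The HRIR pair below is a crude hypothetical stand-in (a level difference plus a ~0.5 ms delay), not real measured data:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a left/right HRIR pair to produce
    a 2-channel binaural signal of shape (samples, 2)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

fs = 44100
mono = np.random.randn(fs // 10)                   # 100 ms of noise
hrir_left = np.zeros(64);  hrir_left[0] = 1.0      # direct, full level
hrir_right = np.zeros(64); hrir_right[22] = 0.5    # ~0.5 ms later, ~6 dB down
stereo = render_binaural(mono, hrir_left, hrir_right)
print(stereo.shape)                                # (num_samples + 63, 2)
```

Swapping in a real measured HRIR pair for the chosen direction would leave the convolution step unchanged; only the impulse responses differ.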
Your own personal HRTFs can be measured using specialist equipment to create virtual 3D sound environments that sound very realistic. But for now, that technology is for the consumers of the future!