‘Pipgate’, Radio Four’s Pips, what happened?

‘The Pips’ are a series of six short tone bursts transmitted on Radio 4. Known as the Greenwich Time Signal, they are intended to mark the start of the hour accurately. They have been transmitted since 1924 and originate from an atomic clock.

On 21 July 2014 a listener wrote to the Radio 4 programme ‘PM’ to ask why the pips had been changed. The programme played the offending pips alongside the originals (here is a link to the programme; the item is at 28m 31s: http://www.bbc.co.uk/programmes/b049y9pn).

Here is an ‘old’ pip:

and here is a ‘new’ pip:

You may think that the ‘new’ pip sounds harsher. By looking at the waveforms and spectra we can begin to understand what has happened. Here are the waveforms of the two pips,

Waveforms of the two pips

and the two spectra.

Frequency Spectra of the two pips

We can see from the spectra that there are additional lines in the ‘new’ pip’s spectrum, known as harmonics. Comparing the two waveforms, we can see that the ‘new’ pip appears to be similar to the older one except that the peaks of the waveform have been flattened, or ‘clipped’, a little.

This clipping is a form of distortion. It occurs when the gain applied to the signal is too great, or when a fault in a preamp means the amplifier is no longer able to properly replicate the signal at its input. We can clearly hear the difference between the two signals, and according to the concerned listener (and his cat) it has a very negative impact on the sound quality. Denis Nolan, the network manager for Radio 4, identified the fault as being due to a particular desk the signal was going through.
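To see why clipping produces those extra spectral lines, here is a small illustrative sketch (the tone frequency, clipping level, and sample rate are our own choices, not the BBC’s actual signal chain): hard-clipping a pure tone introduces odd harmonics, which a single-bin DFT can pick out.

```python
import math

# Illustrative sketch: hard-clip a pure 1 kHz tone and measure the
# harmonics this introduces, using a naive single-frequency DFT.
SAMPLE_RATE = 48000
FREQ = 1000           # pip-like tone frequency in Hz
N = 4800              # 0.1 s: an exact number of cycles, so bins line up

clean = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE) for n in range(N)]
# Hard clipping: any sample beyond +/-0.7 is flattened at that level
clipped = [max(-0.7, min(0.7, s)) for s in clean]

def magnitude_at(signal, freq):
    """Magnitude of the signal's DFT at one frequency (naive correlation)."""
    re = sum(s * math.cos(2 * math.pi * freq * n / SAMPLE_RATE)
             for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
             for n, s in enumerate(signal))
    return math.hypot(re, im) / len(signal)

for f in (1000, 3000, 5000):  # fundamental, then odd harmonics
    print(f, "Hz:", round(magnitude_at(clean, f), 4),
          "->", round(magnitude_at(clipped, f), 4))
```

The clean tone has energy only at 1 kHz; the clipped version gains new components at 3 kHz and 5 kHz, the same kind of extra lines visible in the ‘new’ pip’s spectrum.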

In our project we are writing an algorithm to perform a similar function to the upset listener. We don’t mean that our algorithm will write pithy letters to Eddie Mair; we want to build an algorithm that automatically detects when something like this has gone wrong and the sound is being distorted. Our approach is to simulate many kinds of fault on many different types of sound, and then look for ‘features’ of the audio which seem to be strongly dependent on these faults. We can then build automated systems that look for occurrences of these features to locate faults, and try to estimate how bad the error is from the features themselves.
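As a toy version of that feature-based approach, here is a sketch in which we simulate a fault (hard clipping) on a clean signal and compute two features that respond to it. The signal, the fault, and the two features are illustrative choices of ours, not the project’s actual feature set.

```python
import math

def crest_factor(signal):
    """Peak amplitude divided by RMS level; clipping lowers this."""
    peak = max(abs(s) for s in signal)
    rms = math.sqrt(sum(s * s for s in signal) / len(signal))
    return peak / rms

def fraction_at_peak(signal, tolerance=1e-6):
    """Fraction of samples sitting at the peak level: near zero for
    undistorted audio, large when peaks have been flattened."""
    peak = max(abs(s) for s in signal)
    return sum(abs(abs(s) - peak) < tolerance for s in signal) / len(signal)

# One second of a clean 440 Hz tone, then a hard-clipped copy of it
clean = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
clipped = [max(-0.5, min(0.5, s)) for s in clean]

print(crest_factor(clean))        # ~1.414 for an undistorted sine
print(crest_factor(clipped))      # noticeably lower after clipping
print(fraction_at_peak(clipped))  # large fraction of flattened samples
```

The crest factor of an undistorted sine is √2 (about 1.414); clipping flattens the peaks and lowers it, while the fraction of samples stuck at the peak level jumps from near zero to a large value. An automated detector could threshold features like these to flag distorted passages and gauge their severity.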

You too choose two YouTubes…

In two previous blog posts we discussed a mixed picture of findings for the relationship between audio quality and real-world usage/popularity of audio files on the website Freesound. In one of our Web experiments, Audiobattle, we found that the number of downloads for recordings of birdsong predicted independent ratings of quality reasonably well. In a follow-up experiment, however, we found that this effect did not generalise well to other categories of sound – there was almost no relationship between quality ratings and the number of plays or downloads for recordings of thunderstorms or church bells, for example.

For our next Web test, Qualitube, we reasoned that people might find it easier to compare samples if they were recordings of the same event. Continue reading

What you told us about recording audio: an overview of our web survey.

In an earlier blog post we presented some findings from our web survey on the differences between iPhones and other brands of mobile phone. In this post we look beyond mobiles and give a brief overview of some of the other findings from the survey. Continue reading

The Listening Machine

One aspect of the Good Recording project is to develop algorithms which will be able to ‘listen’ to audio and make judgements about its quality.  I thought it would be interesting to look into the history of machines which can listen and act upon audio.  This application area is known as machine audition.  The best-known modern algorithm is Apple’s speech recognition personality Siri.  But there are other aspects of our lives where machine audition is carried out.  Think of the song identification applications Shazam and SoundHound.  These applications are great for identifying a song you just heard on the radio.  These devices and algorithms are sound identifiers or classifiers, where a sound is recorded and then classified: perhaps identified as a particular piece of music or a particular word, or classified as being in a particular style of music or language. Continue reading

What you told us about recording audio with mobile phones (and what your phone says about you…).

Early on in the project we put a survey on the web to ask questions about where and how people make audio recordings, and what they make recordings of. We also wanted to know what issues people reported as having the biggest impact on audio quality in their recordings (you can still take part in the survey by clicking here; it only takes a couple of minutes). Three months on, over 150 people have taken part and we have begun to analyse the data. One of many interesting trends to emerge is a series of differences between iPhones and other brands of mobile. Continue reading

Wind noise – a starting point

So for the past few months we have been investigating microphone wind noise.  We chose microphone wind noise because it came very high in our online survey on the main issues that can degrade audio quality. The survey is ongoing, so please do take the time to complete it if you are interested.

To investigate microphone wind noise, the first task is to understand how it is generated.  Luckily, significant research has already been carried out to this aim, so a thorough literature review was conducted.  The dominant source of wind noise in outdoor microphones is turbulent velocity fluctuations in the wind, which interact with the microphone and are converted to pressure fluctuations.  There are other, less significant factors which can contribute; for example, when the microphone is embedded in a device that is placed in a flow, this can cause vortex shedding and other resonant-type behaviours. Continue reading