Research 22.11.24

Salford improves audio on BBC Casualty

BRITISH TV took a technological leap forward in June thanks to engineers at the University of Salford.

For the first time, a programme – Casualty – was sound-mixed with a technique called object-based audio, meaning all the elements you hear on screen – dialogue, sound effects, music – are delivered as separate streams. Viewers using iPlayer could control these elements themselves, making it easier to enjoy the drama and follow the dialogue.

“Our hearing is very personal and we know that not everyone wants the same thing out of TV audio. The perfect cinematic mix for one viewer might be too loud or too busy for another,” explained project lead and PhD researcher Lauren Ward.

Hearing impairment

“For many people the level of background sound in a TV programme can make understanding the dialogue difficult, particularly for viewers who have a hearing impairment.”

The advance is called ‘Accessible & Enhanced’ (A&E) audio, and it follows well-publicised controversies over viewers’ ability to follow dialogue on shows like SS-GB and Taboo.

The BBC R&D unit aims to make it easier for people with hearing impairment to enjoy their dramas and follow all the details.


So the unit teamed up with Lauren and Dr Ben Shirley in Acoustic Engineering to put some of their research to the test.

The result was this weekend’s trial, in which viewers can, for the first time on the BBC, customise the sound of their favourite programme and then supply feedback to the experts.

Try it yourself

Dr Matt Paradis of BBC R&D said: “We’re asking Casualty fans to try out a new feature where you can set the audio mix to suit your needs and preferences.”

Lauren explained: “At the right-hand side of the slider, you get the same mix as heard on TV. At the left, the dialogue is enhanced and some of the other sounds are quieter. You can adjust between these two extremes to get the right balance of dialogue, important sounds and atmosphere for you.”
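The slider Lauren describes can be pictured as a crossfade between two gain settings: dialogue stays at its broadcast level throughout, while the background elements fade towards a quieter floor as the slider moves left. The sketch below is illustrative only – the gain values, element names and `slider_gains` function are assumptions for the example, not the trial's actual implementation.

```python
def slider_gains(position):
    """Per-element gains for a slider position in [0.0, 1.0].

    position = 1.0 -> the broadcast TV mix (everything at full level).
    position = 0.0 -> enhanced dialogue with quieter background.
    The 0.3 floor is a made-up value for illustration.
    """
    dialogue = 1.0                       # dialogue never drops below broadcast level
    floor = 0.3                          # quietest the background is allowed to go
    background = floor + (1.0 - floor) * position
    return {"dialogue": dialogue, "effects": background, "music": background}

# Fully right: identical to the TV mix; fully left: background reduced.
tv_mix = slider_gains(1.0)
accessible_mix = slider_gains(0.0)
```

Any position between the two extremes simply interpolates the background level, which matches the “adjust between these two extremes” behaviour described in the quote.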

And she added: “Currently programmes are transmitted with a single pre-mixed audio stream. This includes speech, sound effects, background music and everything in between. However, this makes trying to change the volume of the individual elements, like dialogue, virtually impossible. It’s like trying to take the eggs out of an already baked cake.

“Object-based audio sends all the sounds as separate streams, with instructions – known as metadata – on how to play each element.”
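The idea above can be sketched in a few lines: each audio object carries its samples plus metadata, and the renderer mixes them at playback time, so a dialogue object can be boosted while everything else keeps its authored level. This is a minimal toy model, not the BBC's actual renderer; the `render` function, the `role`/`gain` metadata fields and the sample values are all assumptions for illustration.

```python
def render(objects, dialogue_boost=1.0):
    """Mix audio objects into a single output buffer.

    Each object is a dict of raw samples plus metadata. Objects whose
    metadata tags them as dialogue are scaled by dialogue_boost; all
    other objects keep the gain authored in their metadata.
    """
    length = max(len(obj["samples"]) for obj in objects)
    out = [0.0] * length
    for obj in objects:
        gain = obj["metadata"]["gain"]
        if obj["metadata"].get("role") == "dialogue":
            gain *= dialogue_boost
        for i, sample in enumerate(obj["samples"]):
            out[i] += gain * sample
    return out

# Two toy objects: a dialogue stream and a background-music stream.
objects = [
    {"samples": [0.5, 0.5, 0.5], "metadata": {"role": "dialogue", "gain": 1.0}},
    {"samples": [0.2, 0.2, 0.2], "metadata": {"role": "music", "gain": 1.0}},
]

broadcast_mix = render(objects)                       # as authored for TV
accessible_mix = render(objects, dialogue_boost=1.5)  # dialogue lifted above the rest
```

Because the streams stay separate until playback, changing the dialogue level is just a different render – the “eggs” never get baked into the cake in the first place.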