Carol Chermaz

Computer scientist specialised in signal processing, sound engineer, and ham radio operator. Sound geek from birth.
Committed to medical research and keen on science communication.


United Kingdom

Speech playback is part of our everyday lives: from TV to laptops, from radio to PA systems in public spaces. Understanding the message being played is not always easy, because of noise and reverberation. There are several ways to tackle this problem: my research focuses on algorithms that modify speech signals before they are played, so that they are more intelligible to the listener.
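To make the idea concrete, here is a toy sketch of this kind of pre-processing: it boosts the frequency band where consonant cues live while keeping the overall energy unchanged (an equal-energy constraint is a typical requirement for such algorithms). This is only an illustrative example, not any of the algorithms discussed on this page; the cutoff and boost values are arbitrary.

```python
import numpy as np

def spectral_tilt_enhance(speech, fs, cutoff_hz=1000.0, boost_db=6.0):
    """Toy pre-processing for intelligibility: boost the band above
    `cutoff_hz` (rich in consonant cues) and renormalise so the total
    RMS energy of the signal is unchanged."""
    spectrum = np.fft.rfft(speech)
    freqs = np.fft.rfftfreq(len(speech), d=1.0 / fs)
    gain = np.ones_like(freqs)
    gain[freqs >= cutoff_hz] = 10 ** (boost_db / 20.0)
    shaped = np.fft.irfft(spectrum * gain, n=len(speech))
    # equal-energy constraint: scale back to the input RMS
    shaped *= np.sqrt(np.mean(speech ** 2) / np.mean(shaped ** 2))
    return shaped

# toy "speech": a vowel-like low tone plus a weak high-frequency component
fs = 16000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 220 * t) + 0.1 * np.sin(2 * np.pi * 3000 * t)
out = spectral_tilt_enhance(sig, fs)
# `out` has the same RMS as `sig`, but relatively more high-frequency energy
```

The redistribution of energy towards high frequencies at constant overall level is what makes the same playback loudness carry more of the cues listeners need in noise.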

Such algorithms (known as NELE, Near-End Listening Enhancement) are often tested in controlled “lab” settings, which can lead to inaccurate predictions of their real-world performance. In my first study I tested three state-of-the-art NELE algorithms in two common realistic environments: a living room and a busy cafeteria (listen to the samples here). The resulting paper won the best student paper award at Interspeech 2019 in Graz, Austria.
Building on what I learned from this study, I am currently developing my own NELE algorithm, meant to bridge expert knowledge in sound engineering and science. ASE (the Automatic Sound Engineer) analyses the speech signal to decide how best to process it. Its goal is to enhance intelligibility in noise and reverberation, while producing a signal that remains pleasant to listen to in quiet conditions or over headphones.
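ASE itself is not described in detail here, so as a stand-in, the sketch below shows one standard sound-engineering tool of the kind such a system might automate: frame-based dynamic range compression, which analyses the level of each frame and attenuates the loud ones, evening out the signal. All parameter values are illustrative; this is not the ASE algorithm.

```python
import numpy as np

def simple_compressor(x, fs, frame_ms=20, threshold_db=-20.0, ratio=4.0):
    """Toy frame-based dynamic range compressor: frames whose RMS level
    exceeds `threshold_db` (dBFS) have their overshoot reduced by
    `ratio`, a classic sound-engineering move to even out levels."""
    frame = int(fs * frame_ms / 1000)
    y = x.astype(float).copy()
    for start in range(0, len(y) - frame + 1, frame):
        seg = y[start:start + frame]          # view: edits apply in place
        rms = np.sqrt(np.mean(seg ** 2)) + 1e-12
        level_db = 20 * np.log10(rms)
        if level_db > threshold_db:
            # bring the level partway back towards the threshold
            target_db = threshold_db + (level_db - threshold_db) / ratio
            seg *= 10 ** ((target_db - level_db) / 20.0)
    return y
```

A quiet passage below the threshold passes through untouched, while a loud burst is pulled down, which is why compression helps soft speech cues survive next to loud ones.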
The beta version of the algorithm has been submitted as an entry to the Hurricane Challenge 2.0.

In the meantime, I am running an evaluation of NELE algorithms in realistic conditions for hearing aid users, in collaboration with Hörzentrum (Oldenburg, Germany). We have set up a very versatile platform in which the listening test can be delivered via headphones, as the platform includes a simulation of hearing aids. Our participants listen to speech playback in simulated realistic scenarios while the openMHA compensates for their hearing loss (tailored to each user, as a real hearing aid would be). We presented a poster about this work in progress at SPIN 2020 in Toulouse, France.
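To give a feel for what "tailored compensation" means, here is a deliberately simplified sketch: per-band gains derived from an audiogram with the classic half-gain rule of thumb, applied in the frequency domain. Real prescriptive fitting rules (NAL, DSL) and openMHA's dynamic compression are far more sophisticated; the audiogram values below are invented for illustration.

```python
import numpy as np

# Invented example audiogram: hearing loss in dB HL per frequency (Hz)
audiogram = {500: 20.0, 1000: 30.0, 2000: 45.0, 4000: 60.0}

def half_gain_fit(audiogram):
    """Half-gain rule of thumb: amplify each band by half the measured
    loss. Only illustrates the idea of individually tailored gains."""
    return {f: hl / 2.0 for f, hl in audiogram.items()}

def apply_band_gains(x, fs, gains_db):
    """Apply frequency-dependent gains (interpolated between the
    audiogram frequencies) to a signal via the FFT."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    bands = sorted(gains_db)
    g_db = np.interp(freqs, bands, [gains_db[f] for f in bands])
    return np.fft.irfft(spectrum * 10 ** (g_db / 20.0), n=len(x))

gains = half_gain_fit(audiogram)   # e.g. 22.5 dB of gain at 2 kHz
```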

I am also looking into strategies for embedding data into the acoustic signal, in a way that is inaudible to the human ear but readable by hearing devices. The data should carry information about the speech signal itself, and help the devices better separate speech from noise.
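One very simple way to picture such embedding (not the strategy under investigation, just a toy) is on/off keying of a faint carrier near the top of the audible range: each bit window either contains the carrier or not, and a device recovers the bits by correlating each window against the carrier. The carrier frequency, amplitude, and bit duration below are illustrative guesses.

```python
import numpy as np

def embed_bits(audio, fs, bits, carrier_hz=18000.0, amp=0.002, bit_ms=50):
    """Hide bits as a faint high-frequency tone: '1' adds the carrier
    to that bit window, '0' leaves it untouched."""
    n_bit = int(fs * bit_ms / 1000)
    out = audio.astype(float).copy()
    t = np.arange(n_bit) / fs
    carrier = amp * np.sin(2 * np.pi * carrier_hz * t)
    for i, b in enumerate(bits):
        if b:
            out[i * n_bit:(i + 1) * n_bit] += carrier
    return out

def decode_bits(audio, fs, n_bits, carrier_hz=18000.0, bit_ms=50):
    """Recover the bits by measuring each window's correlation with
    the carrier and thresholding halfway between min and max."""
    n_bit = int(fs * bit_ms / 1000)
    t = np.arange(n_bit) / fs
    ref = np.sin(2 * np.pi * carrier_hz * t)
    scores = [abs(np.dot(audio[i * n_bit:(i + 1) * n_bit], ref))
              for i in range(n_bits)]
    thresh = (max(scores) + min(scores)) / 2
    return [int(s > thresh) for s in scores]
```

In practice the payload must survive loudspeakers, room acoustics, and microphones, which is exactly what makes the real problem hard; a robust scheme would spread the data across frequency and time rather than rely on a single tone.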

As regards science communication, I recently took part in an event at the London College of Communication for the “Points of Listening” series: I explained to the audience how my listening tests are made (with a live demo) and what the purpose of my research within the ENRICH network is.
In September 2019 I was awarded the third prize in the “5 minute research story” competition at the International Congress on Acoustics in Aachen, Germany.