We have now recruited Early Stage Researchers (ESR) for most of these positions. There will be a further recruitment stage for remaining positions later in 2017.
Note that unless otherwise stated, each position is funded for 36 months.
Develop and validate metrics for listening effort, and apply them to generate speech enriched to reduce its cognitive processing burden.
Design data- or knowledge-driven voice/speech transformations to enable communication in the context of disordered speech, enriching it by boosting intelligibility and pleasantness.
Investigate how speech perception in challenging situations affects cognitive load and physical stress, examining the effects of speaker age, gender, language and hearing.
Investigate effects on discourse comprehension of visual clarity, phonetic composition, syntactic structure, lexical familiarity, degree of predictability and word meaning.
Discover what additional information would assist enhancement algorithms running on hearing aids. Use the speech signal to carry this overtly or covertly (audio watermarking).
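As a hedged illustration of the covert channel mentioned above, the sketch below embeds a short bit string into an audio carrier with additive spread-spectrum watermarking and recovers it by correlating against a shared pseudorandom chip sequence. All names are illustrative; detection here is informed (the detector knows the clean carrier), a simplification that a deployed hearing-aid scheme could not assume.

```python
import math
import random

def pn_chips(n, seed):
    """Pseudorandom +/-1 chip sequence shared by embedder and detector."""
    rng = random.Random(seed)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(carrier, bits, chip_len=256, alpha=0.01, seed=7):
    """Spread each bit over chip_len chips and add the scaled chip
    sequence to the carrier; alpha keeps the perturbation small."""
    chips = pn_chips(chip_len * len(bits), seed)
    out = list(carrier)
    for i, bit in enumerate(bits):
        sign = 1.0 if bit else -1.0
        for j in range(chip_len):
            k = i * chip_len + j
            out[k] += alpha * sign * chips[k]
    return out

def extract(watermarked, carrier, n_bits, chip_len=256, seed=7):
    """Informed (non-blind) detection: subtract the known carrier, then
    correlate the residual with the shared chips; the sign of each
    correlation recovers one bit."""
    chips = pn_chips(chip_len * n_bits, seed)
    bits = []
    for i in range(n_bits):
        corr = sum((watermarked[k] - carrier[k]) * chips[k]
                   for k in range(i * chip_len, (i + 1) * chip_len))
        bits.append(1 if corr > 0 else 0)
    return bits

# Toy carrier: a 440 Hz tone at a 16 kHz sample rate.
fs = 16000
msg = [1, 0, 1, 1, 0, 0, 1, 0]
carrier = [0.5 * math.sin(2 * math.pi * 440 * t / fs)
           for t in range(256 * len(msg))]
marked = embed(carrier, msg)
recovered = extract(marked, carrier, len(msg))
```

A blind detector would instead have to treat the carrier itself as noise, which is why real audio watermarks use much longer chip sequences and perceptual shaping of alpha.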
Investigate the cognitive load imposed by synthetic speech, which listeners often report to be 'hard to listen to'. Create novel forms of synthetic speech that impose the lowest possible cognitive load.
Quantify cognitive speaking effort, discover why speech enrichment is easier for some people, and measure effectiveness of different strategies.
Discover how non-native speakers enrich their speech, the effort needed to do this, and the efficiency of different strategies for different listener groups.
Design new approaches for enhancing the intelligibility of distorted speech, under loudness constraints.
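One simple shape such an approach can take, sketched below under the crude assumption that RMS energy stands in for loudness: redistribute energy toward high frequencies (a first-order pre-emphasis filter, since consonant cues sit there), then rescale the output so its RMS matches the input's. Function names are illustrative; a real system would use a perceptual loudness model rather than RMS.

```python
import math

def rms(x):
    """Root-mean-square level, used here as a crude loudness proxy."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def preemphasis(x, coeff=0.95):
    """First-order high-frequency boost: y[t] = x[t] - coeff * x[t-1]."""
    return [x[0]] + [x[t] - coeff * x[t - 1] for t in range(1, len(x))]

def enhance_equal_energy(x, coeff=0.95):
    """Boost high frequencies, then rescale so overall RMS matches the
    unprocessed input, i.e. enhancement under a loudness constraint."""
    y = preemphasis(x, coeff)
    scale = rms(x) / rms(y)
    return [scale * v for v in y]

# A 200 Hz tone at an 8 kHz sample rate as a stand-in for speech.
fs = 8000
x = [math.sin(2 * math.pi * 200 * t / fs) for t in range(800)]
y = enhance_equal_energy(x)
```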
Devise parametric mappings of temporal envelope and fine structure modulations from conversational to clear speech.
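The envelope/fine-structure split that such mappings operate on can be sketched as follows, using a naive DFT-based analytic signal in place of scipy.signal.hilbert so the example stays dependency-free. The mapping itself is left out, and all function names are illustrative.

```python
import cmath
import math

def analytic_signal(x):
    """Analytic signal via a naive DFT (stdlib stand-in for
    scipy.signal.hilbert): double the positive frequencies, zero the
    negative ones, leave DC (and Nyquist for even n) unchanged."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    for k in range(n):
        if k == 0 or (n % 2 == 0 and k == n // 2):
            pass
        elif k < (n + 1) // 2:
            X[k] *= 2
        else:
            X[k] = 0
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def envelope_and_tfs(x):
    """Split x into temporal envelope |a(t)| and temporal fine structure
    cos(arg a(t)); an enrichment mapping could then modify the envelope
    (e.g. deepen its modulations) before resynthesising env * tfs."""
    a = analytic_signal(x)
    env = [abs(v) for v in a]
    tfs = [math.cos(cmath.phase(v)) for v in a]
    return env, tfs

# A 100 Hz tone amplitude-modulated at 5 Hz, sampled at 1 kHz.
fs, n = 1000, 200
x = [(1 + 0.5 * math.sin(2 * math.pi * 5 * t / fs))
     * math.cos(2 * math.pi * 100 * t / fs) for t in range(n)]
env, tfs = envelope_and_tfs(x)
```

Because the test tone is exactly periodic over the window, the recovered envelope matches the 5 Hz modulator and the fine structure matches the 100 Hz carrier.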
Develop visual materials (gestures, face models, body movements) to enrich speech emotion information, for training hearing-impaired children and adults in better emotion identification.
Investigate positive effects of musical training on speech intelligibility and cognitive effort of speech recognition, then determine if this applies to hearing-impaired children.
Improve hearing aids through multimodal analysis of the acoustic environment and of the user's head and eye movements, both to control the signal processing and to provide additional visual cues to the user.
Improve synthetic speech and sound output for augmentative and alternative communication (AAC) devices, so that it is easier to comprehend in real environments.