Research Project

Helping to Aid Hearing using Fuzzy Logic

Principal Investigator
Dr Faiyaz Doctor

Context-Aware Audio-Visual Speech Enhancement Based on Computational Intelligence and User Preference Learning

Adaptive Speech Enhancement (SE) strategies in speech and hearing technologies are essential due to the diverse needs of users and the dynamic environments in which they communicate.

Traditional approaches often prove inadequate because they fail to consider individual listener preferences and specific contextual challenges. By incorporating user preferences and environmental context, adaptive strategies can tailor SE to the specific situation and individual, thereby optimising battery life and overall sound quality while improving clarity and comprehension.

This personalisation ensures a more effective communication experience, overcoming the limitations of standard methods that may not suit all scenarios or meet the needs of individual users.

Outcomes

The research developed a novel application of neuro-fuzzy modelling for personalised Audio-Visual (AV) speech processing. The AV SE model improved speech clarity and intelligibility by contextually leveraging audio and visual information in a user-aware manner.

The system was able to dynamically adapt SE strategies based on user-specific contexts, the listener's environment and their preferences, and was capable of learning from user feedback to enhance the listening experience.
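To illustrate the general idea of fuzzy-rule-based adaptation described above, the sketch below maps an ambient noise level and a user "clarity preference" to a noise-suppression gain via simple Mamdani-style fuzzy rules. This is a minimal illustration of the technique, assuming triangular membership functions and illustrative rule values; the variable names, rules and parameters are hypothetical and not taken from the project's actual model.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suppression_gain(noise_db, preference):
    """Infer a noise-suppression gain in [0, 1] from fuzzy rules.

    noise_db:   ambient noise level in dB SPL (roughly 0-100), hypothetical input
    preference: user clarity preference in [0, 1] (0 = natural sound, 1 = maximum clarity)
    """
    # Fuzzify the inputs (membership degrees for each linguistic term)
    quiet   = tri(noise_db, -1, 0, 50)
    loud    = tri(noise_db, 40, 100, 101)
    natural = tri(preference, -0.1, 0.0, 0.7)
    clear   = tri(preference, 0.3, 1.0, 1.1)

    # Each rule: (firing strength using min as AND, output suppression gain)
    rules = [
        (min(quiet, natural), 0.1),  # quiet room, prefers natural -> light suppression
        (min(quiet, clear),   0.4),
        (min(loud,  natural), 0.5),
        (min(loud,  clear),   0.9),  # loud room, prefers clarity -> strong suppression
    ]

    # Defuzzify by weighted average of rule outputs
    num = sum(w * g for w, g in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

In a neuro-fuzzy system such as the one the project describes, the rule parameters would additionally be tuned from user feedback rather than fixed by hand.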

Partners

This research was a joint collaboration with Edinburgh Napier University as part of their EPSRC-funded "Towards cognitively-inspired 5G-IoT enabled, multi-modal Hearing Aids" (COG-MHEAR) programme.

Publications