
Oticon (Denmark)
4 Projects
Project (2015 - 2019)
Partners: Oticon (Denmark), Oticon Eriksholm Research Centre, Imperial College London
Funder: UK Research and Innovation | Project Code: EP/M026698/1 | Funder Contribution: 983,623 GBP

Age-related hearing loss affects over half the UK population aged over 60. Hearing loss makes communication difficult and so has severe negative consequences for quality of life. The most common treatment for mild-to-moderate hearing loss is the use of hearing aids. However, even with aids, hearing-impaired listeners are worse at understanding speech in noisy environments because their auditory system is less effective at separating wanted speech from unwanted noise. One solution is to use speech enhancement algorithms that selectively amplify the desired speech signal while attenuating the unwanted background noise.

It is well known that normal-hearing listeners understand speech in noise better when listening with two ears rather than one. Differences between the signals at the two ears allow the speech and noise to be separated according to their spatial locations, resulting in improved intelligibility. Technological advances now make it feasible to use two hearing aids that share information via a wireless link. By sharing information in this way, the speech enhancement algorithms within the hearing aids can localize sound sources more accurately and, by jointly processing the signals for both ears, ensure that the spatial cues present in the acoustic signals are retained. The goal of this project is to exploit these binaural advantages by developing speech enhancement algorithms that jointly enhance the speech received by the two ears.

Most current speech enhancement techniques have evolved from the telecommunications industry and are designed to act only on monaural signals. Many of these techniques can improve the perceived quality of already intelligible speech, but binary masking is one of the few that has been shown to improve the intelligibility of noisy speech for both normal-hearing and hearing-impaired listeners. In the binary masking approach, regions of the time-frequency domain that contain significant speech energy are left unchanged, while regions that contain little speech energy are muted. In this project we will extend existing monaural binary masking techniques to provide binaural speech enhancement while preserving the interaural time and level differences that are critical for the spatial separation of sound sources.

To train and tune our binaural speech enhancement algorithm, we will also develop an intelligibility metric that predicts the intelligibility of a speech signal for a binaural listener with normal or impaired hearing in the presence of competing noise sources. This metric is the key to automatically finding the optimum settings for an individual listener's hearing aids in a particular environment. The final evaluation of the binaural enhancement algorithm will assess speech perception in noise in a panel of hearing-impaired listeners, who will also be asked to assess the quality of the enhanced speech signals.
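To illustrate the binary masking principle described in this abstract, the sketch below builds an ideal binary mask from separate speech and noise signals and applies it to their noisy mixture. It is a minimal, monaural example only, assuming access to the clean speech and noise (as one would have when training or tuning such an algorithm); the threshold, window length and placeholder signals are illustrative and are not taken from the project.

```python
# Minimal ideal-binary-mask sketch (monaural, illustrative only).
# Assumes separate speech and noise signals are available, e.g. in a training setting.
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(speech, noise, fs, snr_threshold_db=0.0, nperseg=512):
    """Keep time-frequency cells whose local SNR exceeds the threshold; mute the rest."""
    _, _, S = stft(speech, fs=fs, nperseg=nperseg)   # speech spectrogram
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)    # noise spectrogram
    local_snr_db = 20 * np.log10(np.abs(S) + 1e-12) - 20 * np.log10(np.abs(N) + 1e-12)
    return (local_snr_db > snr_threshold_db).astype(float)

def apply_mask(mixture, mask, fs, nperseg=512):
    """Apply the binary mask to the noisy mixture and resynthesise a waveform."""
    _, _, X = stft(mixture, fs=fs, nperseg=nperseg)
    _, enhanced = istft(X * mask, fs=fs, nperseg=nperseg)
    return enhanced

# Usage with synthetic placeholders standing in for real recordings:
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 440 * t)      # stand-in for a clean speech recording
noise = 0.5 * np.random.randn(fs)         # stand-in for background noise
mask = ideal_binary_mask(speech, noise, fs)
enhanced = apply_mask(speech + noise, mask, fs)
```

The binaural extension described in the project would additionally need to constrain the masks applied at the two ears so that interaural time and level differences are preserved; that step is not shown here.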
Project (2024 - 2028)
Partners: Macquarie University, Lancashire Teaching Hospitals NHS Foundation Trust, Oticon (Denmark), Lancaster University, National Deaf Children's Society, British Association of Teachers of the Deaf, East Lancashire Hospitals NHS Trust
Funder: UK Research and Innovation | Project Code: MR/X035999/1 | Funder Contribution: 1,509,280 GBP

Five percent of children have disabling hearing loss. These children often experience delayed speech and language development. Although the majority of these children attend mainstream schools in the UK, only 34% achieve two A-levels (or the equivalent), compared to 55% of their hearing peers. Mild-to-moderate hearing loss (MMHL) is the most common hearing impairment in children, yet despite its effect on development it is the least understood form of hearing loss in children. There is therefore an urgent need for research on this group in order to meet the goal set by the National Deaf Children's Society (the UK's biggest children's hearing charity and a partner on this project) of making sure that "by 2030, no deaf child will be left behind".

Children with MMHL are prescribed auditory technology (AT) to assist them. Hearing aids are more advanced and accessible than ever, and assisted listening devices, in which a talker's speech is streamed directly to the hearing aid to reduce the effects of a noisy background, are now common in classrooms. However, AT is designed around how adults communicate: adults generally look at the person they are talking with and ask for information to be repeated when they do not hear clearly. Children with normal hearing, on the other hand, do not reliably look at the talker, and it is unknown whether children with MMHL look at the talker while they listen. This has an impact on the effectiveness of the AT algorithms.

PI Stewart has shown that children with MMHL do not gain the same improvements in attention, memory and learning as adults do when using AT. This could be because 1) the children are not wearing their AT; 2) the ATs are "too much of a good thing" and have short- or long-term effects on key hearing and listening skills (e.g. children find that they can hear without turning to look at the talker); or 3) the ATs are not appropriate for children.

To test these hypotheses, we will first systematically review children's AT usage across the UK. Second, we will gather data on the developmental impact of ATs over an 18-month period, assessing key hearing and listening skills including working out where a sound came from and combining audio with visual information. Third, we will assess how children with MMHL communicate with adults and with other children. We will do this in a research lab set up as a classroom, where eye and head movements and brain activity can be measured. This will allow iCAT to evaluate whether AT algorithms (e.g. those that assume the listener looks at the talker) are appropriate for children. iCAT will work with industry, audiologists and teachers of the deaf throughout the project to drive change towards child-appropriate ATs for the benefit of children with MMHL. Through the publication of white papers, iCAT will work with UK-based charities and professional bodies to create evidence-based recommendations for policy regarding the use and fitting of AT in children with MMHL.
Project (2019 - 2021)
Partners: Imperial College London, Oticon Eriksholm Research Centre, Oticon (Denmark), Imperial College Healthcare NHS Trust, UCL, Sorbonne University, Google (United States), HSG
Funder: UK Research and Innovation | Project Code: EP/R032602/1 | Funder Contribution: 1,029,420 GBP

There are more than 10 million people in the U.K., one in six, with some form of hearing impairment. The only assistive technology currently available to them is hearing aids. However, these can only help people with a particular type of hearing impairment, and hearing aid users still have major problems understanding speech in noisy backgrounds. A lot of effort has therefore been devoted to signal processing that reduces the background noise in complex sounds, but this has not yet been able to significantly improve speech intelligibility.

The research vision of this project is to develop a radically different technology for assisting people with hearing impairments to understand speech in noisy environments, namely through simplified visual and tactile signals that are engineered from a speech signal and presented congruently with the sound. Visual information such as lip reading can improve speech intelligibility significantly. Haptic information, such as a listener touching the speaker's face, can enhance speech perception as well. However, touching a speaker's face in real life is often not an option, and lip reading is often not available, for example when a speaker is too far away or outside the field of view. Moreover, natural visual and tactile stimuli are highly complex and difficult to substitute when they are not available naturally.

In this project I will engineer simple visual and tactile signals from speech, designed to enhance the neural response to the rhythm of speech and thereby its comprehension. This builds on recent breakthroughs in our understanding of the neural mechanisms for speech processing, which have uncovered a mechanism by which neural activity in the auditory areas of the brain tracks the speech rhythm, set by the rates of syllables and words, and thus parses speech into these functional constituents. Strikingly, this speech-related neural activity can be enhanced by visual and tactile signals, improving speech comprehension. These remarkable visual-auditory and somatosensory-auditory interactions thus open an efficient, non-invasive way of increasing the intelligibility of speech in noise by providing congruent visual and tactile information. The required visual and tactile stimuli need to be engineered to efficiently drive the cortical response to the speech rhythm. Since the speech rhythm is evident in the speech envelope, a single temporal signal, either from a single channel or a few channels (low density), will suffice for the required visual and tactile signals. They can therefore later be integrated with non-invasive wearable devices such as hearing aids. Because this multisensory speech enhancement will employ existing neural pathways, the developed technology will not require training and will therefore be able to benefit young and elderly people alike.

My specific aims are (1) to engineer synthetic visual stimuli from speech to enhance speech comprehension, (2) to engineer synthetic tactile stimuli from speech to enhance speech comprehension, (3) to develop a computational model of speech enhancement through multisensory integration, (4) to integrate the engineered synthetic visual and tactile stimuli paired with speech presentation, and (5) to evaluate the efficacy of the developed multisensory stimuli for aiding patients with hearing impairment. I will achieve these aims by working together with six key industrial, clinical and academic partners. Through inventing and demonstrating a radically new approach to hearing-aid technology, this research will lead to novel, efficient ways of improving speech-in-noise understanding, the key difficulty for people with hearing impairment. The project is closely aligned with the recently founded Centre for Neurotechnology at Imperial College, as well as more generally with the current major U.S. and E.U. initiatives on brain research.
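Since the abstract notes that the speech rhythm is evident in the speech envelope, a simple way to picture the kind of signal that could drive a visual or tactile actuator is to extract a slow amplitude envelope from the waveform. The sketch below is a generic, hypothetical illustration of that idea and not the project's actual stimulus design; the 8 Hz cutoff, filter order and normalisation are assumptions chosen to roughly match the syllabic rate.

```python
# Illustrative speech-envelope extraction (assumed parameters, not the project's design).
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def speech_envelope(speech, fs, cutoff_hz=8.0, order=4):
    """Extract a slow amplitude envelope that tracks the syllabic rhythm of speech."""
    envelope = np.abs(hilbert(speech))                            # instantaneous amplitude
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")  # low-pass at ~syllabic rate
    return sosfiltfilt(sos, envelope)

def to_drive_signal(envelope):
    """Normalise the envelope to [0, 1] so it could modulate a visual or tactile actuator."""
    env = envelope - envelope.min()
    return env / (env.max() + 1e-12)

# Usage with a placeholder waveform standing in for recorded speech:
fs = 16000
speech = np.random.randn(5 * fs)          # stand-in for a 5-second speech recording
drive = to_drive_signal(speech_envelope(speech, fs))
```

Because the envelope is a single slow temporal signal, it is the kind of low-density signal the abstract argues could later be delivered through wearable devices alongside the acoustic speech.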
Project (2021 - 2025)
Partners: Blackrock Microsystems (United States), GripAble, Philips Neuro, Ottobock (Germany), Huawei Technologies (United Kingdom), ASU, Imperial College London, Oticon (Denmark), BIOS Health Ltd, Brainbox Ltd, Oticon Eriksholm Research Centre, Guger Technologies (Austria), CTRL-labs Corporation, Fourier Intelligence, Rippleneuro
Funder: UK Research and Innovation | Project Code: EP/T020970/1 | Funder Contribution: 5,593,020 GBP

We propose the development of a new technology for Non-Invasive Single Neuron Electrical Monitoring (NISNEM). Current non-invasive neuroimaging techniques, including electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), provide indirect measures of the activity of large populations of neurons in the brain. However, it is becoming apparent that information at the single-neuron level may be critical for understanding, diagnosing, and treating increasingly prevalent neurological conditions such as stroke and dementia. Current methods to record single-neuron activity are invasive: they require surgical implants. Implanted electrodes risk damaging the neural tissue and/or provoking a foreign-body reaction that limits long-term stability. Understandably, this approach is not chosen by many patients; in fact, implanted electrode technologies are limited to animal preparations or tests on a handful of patients worldwide. Measuring single-neuron activity non-invasively will transform how neurological conditions are diagnosed, monitored, and treated, and pave the way for the broad adoption of neurotechnologies in healthcare.

We propose to develop NISNEM by pushing frontier engineering research in electrode technology, ultra-low-noise electronics, and advanced signal processing, iteratively validated through extensive pre-clinical trials. We will design and manufacture arrays of dry electrodes, to be mounted on the skin, with an ultra-high density of recording points. Through aggressive miniaturization, we will develop microelectronics chips that record from thousands of channels with beyond state-of-the-art noise performance. We will pursue breakthrough developments in unsupervised blind source identification of the activity of tens to hundreds of neurons from tens of thousands of recordings. This research will be supported by iterative pre-clinical studies in humans and animals, which will be essential for defining requirements and refining designs. We intend to demonstrate the feasibility of the NISNEM technology and its potential to become a routine clinical tool that transforms all aspects of healthcare. In particular, we expect it to drastically improve how neurological diseases are managed. Given that these diseases are a massive burden and limit the quality of life of millions of patients and their families, the impact of NISNEM could be almost unprecedented.

We envision the NISNEM technology being adopted on a routine clinical basis for: 1) diagnostics (epilepsy, tremor, dementia); 2) monitoring (stroke, spinal cord injury, ageing); 3) intervention (closed-loop modulation of brain activity); 4) advancing our understanding of the nervous system (identifying pathological changes); and 5) the development of neural interfaces for communication (brain-computer interfaces for locked-in patients), control of (neuro)prosthetics, or replacement of a "missing sense" (e.g., auditory prosthetics). Moreover, by accurately detecting the patient's intent, this technology could be used to drive neural plasticity (the brain's ability to reorganize itself), potentially enabling cures for currently incurable disorders such as stroke, spinal cord injury, or Parkinson's disease. NISNEM also provides the opportunity to extend treatment from the hospital to the home. For example, rehabilitation after a stroke occurs mainly in hospitals and for a limited period of time; home rehabilitation is absent. NISNEM could provide continuous rehabilitation at home through the use of therapeutic technologies. The neural engineering, neuroscience and clinical neurology communities will all benefit greatly from this radically new perspective and complementary knowledge base. NISNEM will foster a revolution in neuroscience and neurotechnology, strongly impacting these large academic communities and the clinical sector. Even more importantly, if successful, it will improve the lives of millions of patients and their relatives.
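To convey what "unsupervised blind source identification" means in this context, the sketch below applies a generic independent component analysis (ICA) to simulated multi-channel recordings that mix a few spike-like sources. This is only a toy illustration under simplifying assumptions (instantaneous linear mixing, far fewer channels and sources than the project targets); the project's actual decomposition of ultra-high-density surface recordings is substantially more sophisticated.

```python
# Toy blind source separation on simulated recordings (generic ICA, illustrative only).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples, n_sources, n_channels = 20000, 3, 64

# Simulated "single neuron" sources: sparse spike trains smoothed with a simple waveform.
spikes = (rng.random((n_samples, n_sources)) < 0.002).astype(float)
kernel = np.hanning(20)
sources = np.column_stack([np.convolve(spikes[:, i], kernel, mode="same")
                           for i in range(n_sources)])

# Each surface channel records a different linear mixture of the sources, plus sensor noise.
mixing = rng.normal(size=(n_sources, n_channels))
recordings = sources @ mixing + 0.01 * rng.normal(size=(n_samples, n_channels))

# Unsupervised decomposition of the recordings back into putative source activities.
ica = FastICA(n_components=n_sources, random_state=0)
estimated_sources = ica.fit_transform(recordings)   # shape: (n_samples, n_sources)
```

The recovered components are unordered and arbitrarily scaled, which is a standard property of blind source separation; matching them to physiological sources is part of what makes the real problem hard.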
For further information contact us at helpdesk@openaire.eu