Amazon Development Center Germany

7 Projects, page 1 of 2
  • Funder: UK Research and Innovation Project Code: EP/S031448/1
    Funder Contribution: 371,114 GBP

    One in six people in the UK have a hearing impairment, and this number is certain to increase as the population ages. Yet only 40% of people who could benefit from hearing aids have them, and most people who have the devices don't use them often enough. A major reason for this low uptake and use is the perception that hearing aids perform poorly. Perhaps the most serious problem is hearing speech in noise. Even the best hearing aids struggle in such situations. This might be in the home, where a boiling kettle forces conversations with friends to stop, or in a railway station, where noise makes it impossible to hear announcements. If such difficulties force the hearing impaired to withdraw from social situations, the risk of loneliness and depression increases. Moreover, recent research suggests hearing loss is a risk factor for dementia. Consequently, improving how hearing devices deal with speech in noise has the potential to improve many aspects of health and well-being for an ageing population, and making the devices more effective should increase the uptake and use of hearing aids.

    Our approach is inspired by the latest science in speech recognition and synthesis. These are very active and fast-moving areas of research, especially now with the development of voice interfaces like Alexa, yet most of this research overlooks users who have a hearing impairment. Innovative approaches being developed in speech technology and machine learning could be the basis for revolutionising hearing devices, but such radical advances require more researchers to consider hearing impairments. To achieve this, we will run a series of signal processing competitions ("challenges") dealing with increasingly difficult scenarios of hearing speech in noise. Such competitions are a proven technique for accelerating research, especially in the fields of speech technology and machine learning.

    We will develop the simulation tools, models and databases needed to run the challenges. These will also lower the barriers that currently prevent speech researchers from considering hearing impairment. The data will include the results of listening tests that characterise how real people perceive speech in noise, along with a comprehensive characterisation of each test subject's hearing ability, because hearing aid processing needs to be personalised. We will develop simulators to create different listening scenarios, and models to predict how the hearing impaired perceive speech in noise. Together, these data and tools will form a test-bed that allows other researchers to develop their own algorithms for hearing aid processing in different listening scenarios. We will also challenge researchers to improve our models of perception. The scientific legacy of the project will be improved algorithms for hearing aid processing; a test-bed that readily allows further development of algorithms; and more speech researchers considering the hearing abilities of the whole population.
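    As an illustration of one building block such listening-scenario simulators need, the sketch below mixes clean speech with background noise at a chosen signal-to-noise ratio. It is a minimal, hedged Python example (assuming NumPy); the function and signal names are illustrative and are not the project's own tooling.

    import numpy as np

    def mix_at_snr(speech, noise, snr_db):
        """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then mix."""
        # Tile or trim the noise so it covers the whole speech signal.
        if len(noise) < len(speech):
            noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
        noise = noise[:len(speech)]
        speech_power = np.mean(speech ** 2)
        noise_power = np.mean(noise ** 2)
        # Gain that brings the noise to the desired level relative to the speech.
        gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
        return speech + gain * noise

    # Example: a stand-in "boiling kettle" scene mixed at 0 dB SNR.
    rng = np.random.default_rng(0)
    speech = rng.standard_normal(16000)   # placeholder for 1 s of speech at 16 kHz
    kettle = rng.standard_normal(8000)    # placeholder for recorded kettle noise
    noisy_scene = mix_at_snr(speech, kettle, snr_db=0.0)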

  • Funder: UK Research and Innovation Project Code: EP/L016710/1
    Funder Contribution: 4,280,290 GBP

    The Oxford-Warwick Statistics Programme (OxWaSP) will train a new cohort of at least 50 graduates in the theory, methods and applications of Statistical Science for 21st-century data-intensive environments and large-scale models. It is a joint project led by the Statistics Departments of Oxford and Warwick. These two departments, ranked first and second for world-leading research in the last UK research assessment exercise, provide a highly stimulating training environment for doctoral students in statistics. The Centre's pool of supervisors is known for significant international research contributions in modern computational statistics and related fields, contributions recognized by over 20 major national and international awards since 2008. Oxford and Warwick attract students with competitively won international scholarships; the programme leaders expect to expand the cohort to 11 or 12 per year by bringing these students into the CDT and raising their funding to CDT level using £188K in support from industry and £150K from donors.

    The need to engage with large-scale, highly structured statistical models has been recognized for some time within areas like genomics and brain-imaging technologies. However, the UK's leading industries and sciences are now also increasingly aware of the enormous potential that data-driven analysis holds. These industries include the engineering, manufacturing, pharmaceutical, financial, e-commerce, life-science and entertainment sectors. The analysis bottleneck has moved from being able to collect and record relevant data to being able to interpret and exploit vast data collections. These and other businesses are critically dependent on the availability of future leaders in Statistics who are able to design and develop statistical approaches that scale to massive data. The UK, as a recognized international leader in Statistics, can take a world lead in this field, and OxWaSP is ideally placed to realize the potential of this opportunity.

    The Centre is focused on a new type of training for a new type of graduate statistician, in statistical methodology and computation that is scalable to big data. We will bring a new focus on training for research by teaching directly from the scientific literature: students will be thrown straight into reading and summarizing journal papers. Lecture-format contact is used sparingly, with peer-to-peer learning central to the training approach. This is teaching and learning for research by doing research. Cohort learning will be enhanced via group visits to companies, small groups reproducing results from key papers, student-orientated paper discussions, annual workshops and a three-day off-site retreat. From the second year, students will join their chosen supervisors in Warwick and Oxford, five at each Centre, coming together regularly for research group meetings spanning Oxford and Warwick, for workshops and retreats, and for teaching and mentoring students in earlier years. The Centre is timely and ambitious, designed to attract and nurture the brightest graduate statisticians, broadening their skills to meet the new challenge and allowing them to flourish in a focused, communal, research-training environment. The strategic vision is to train the next generation of statisticians who will enable the new data-intensive sciences and industries.

    The Centre will also offer a vehicle to bring together industrial partners from across the two departments to share ideas and provide an important perspective to our students on the research challenges and opportunities within commercial and social enterprises. Students' training will be considerably enhanced through the Centre's visits, lectures, internships and co-supervision from global partners including Amazon, Google, GlaxoSmithKline, MAN and Novartis, as well as the smaller entrepreneurial start-ups DeepMind and Optimor.

  • Funder: UK Research and Innovation Project Code: EP/R021643/2
    Funder Contribution: 44,608 GBP

    Research in natural language processing (NLP) is driving advances in many applications such as search engines and personal digital assistants, e.g. Apple's Siri and Amazon's Alexa. In many NLP tasks the output to be predicted is a graph representing the sentence, e.g. a syntax tree in syntactic parsing or a meaning representation in semantic parsing. In other tasks, such as natural language generation and machine translation, the predicted output is text, i.e. a sequence of words. Both types of NLP task have been tackled successfully with incremental modelling approaches, in which prediction is decomposed into a sequence of actions that construct the output. Despite this success, a fundamental limitation of incremental modelling is that the actions considered typically construct the output monotonically; in natural language generation, for example, each action adds a word to the output but never removes or changes a previously predicted one. Relying exclusively on monotonic actions can decrease accuracy, since the effect of incorrect actions cannot be amended; worse, these incorrect actions are used to predict the following ones, which is likely to result in an error cascade.

    We propose an 18-month project to address this limitation and learn non-monotonic incremental language processing models, i.e. incremental models whose actions can "undo" the outcome of previously predicted ones. The challenge in incorporating non-monotonic actions is that, unlike their monotonic counterparts, they are not straightforward to infer from the labelled data typically available for training, rendering standard supervised learning approaches inapplicable. To overcome this we will develop novel algorithms under the imitation learning paradigm to learn non-monotonic incremental models without assuming action-level supervision, relying instead on instance-level loss functions and the model's own predictions in order to learn how to recover from incorrect actions and avoid error cascades. To succeed in this goal, the proposal has the following research objectives:

    1) To model non-monotonic incremental prediction of structured outputs in a generic way that can be applied to a variety of tasks with natural language text as output.
    2) To learn non-monotonic incremental predictors using imitation learning and improve upon the accuracy of monotonic incremental models, both in terms of automatic measures such as BLEU and in human evaluation.
    3) To extend the proposed approach to structured prediction tasks with graphs as output.
    4) To release software implementations of the proposed methods to facilitate reproducibility and wider adoption by the research community.

    The proposed research addresses a fundamental limitation of incremental language processing models, which have been applied successfully to a variety of natural language processing tasks, so we anticipate the proposal will have wide academic impact. Furthermore, the tasks we will evaluate it on, namely natural language generation and semantic parsing, are essential components of natural language interfaces and personal digital assistants, and improving these technologies will enhance accessibility to digital information and services. We will demonstrate the benefits of our approach through our collaboration with our project partner Amazon, who are supporting the proposal both by providing cloud computing credits and by hosting the research associate in order to apply the outcomes of the project to industry-scale datasets.
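    To make the action space concrete, the minimal sketch below shows an incremental generator whose actions can either append a word or "undo" the previously appended one. It is an illustrative Python toy, not the project's method, and all names in it are hypothetical.

    from typing import List, Tuple

    def apply_action(output: List[str], action: Tuple[str, str]) -> List[str]:
        """Apply one incremental action to the partial output."""
        kind, word = action
        if kind == "APPEND":              # monotonic action: extend the output
            return output + [word]
        if kind == "UNDO" and output:     # non-monotonic action: revoke the last word
            return output[:-1]
        return output

    # A toy action sequence in which an early mistake ("cats") is later corrected,
    # something a purely monotonic decoder could never do.
    actions = [("APPEND", "the"), ("APPEND", "cats"), ("UNDO", ""),
               ("APPEND", "cat"), ("APPEND", "sat")]
    partial: List[str] = []
    for a in actions:
        partial = apply_action(partial, a)
    print(" ".join(partial))              # -> "the cat sat"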

  • Funder: UK Research and Innovation Project Code: EP/N005538/1
    Funder Contribution: 658,569 GBP

    The field of mathematical optimization experienced a paradigm shift in the last decade: while the 20 years prior to about 2005 were dominated by the development of interior-point methods, research activity since has focused almost entirely on first-order methods. This was caused by several factors. Most notably, there has been a surge in demand from practitioners, in fields such as machine learning, signal processing and data science, for new methods able to cope with new large-scale problems. Moreover, an important role in the transition was played by the fact that accuracy requirements in many modern applications (such as classification and image denoising) were only moderate or low, in sharp contrast with the preceding focus on applications in classical domains such as engineering and physics, where accuracy requirements were typically high. The paradigm shift would not have been possible, however, were it not for the development and success of modern gradient methods, whose complexity improved upon classical results by an order of magnitude, using sophisticated tools such as the estimate sequence method and smoothing.

    At the moment, mathematical optimization is experiencing yet another revolution, related to the introduction of randomization as an algorithmic design and analysis tool, much as probabilistic reasoning has recently begun to transform several other "continuous" fields, including numerical linear algebra and control theory. The import of randomization is at least twofold: it makes it possible to design new algorithms that scale to extreme dimensions, and at the same time it often leads to improved theoretical complexity bounds. This project focuses on the design, complexity analysis and high-performing implementations of efficient randomized algorithms suitable for extreme convex optimization.
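    For a concrete flavour of the kind of randomized first-order method the abstract refers to, the sketch below implements textbook randomized coordinate descent for a convex least-squares problem, min_x 0.5*||Ax - b||^2. It is a generic Python/NumPy illustration under those assumptions, not one of the project's algorithms.

    import numpy as np

    def randomized_coordinate_descent(A, b, iters=5000, seed=0):
        """Minimise 0.5 * ||A x - b||^2 by updating one random coordinate per step."""
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        x = np.zeros(n)
        residual = A @ x - b                    # kept up to date incrementally
        col_norms_sq = (A ** 2).sum(axis=0)     # coordinate-wise Lipschitz constants
        for _ in range(iters):
            i = rng.integers(n)                 # sample a coordinate uniformly at random
            step = (A[:, i] @ residual) / col_norms_sq[i]   # exact minimisation along e_i
            x[i] -= step
            residual -= step * A[:, i]          # O(m) update instead of recomputing A @ x
        return x

    # Small example; each iteration costs O(m) regardless of n, which is what
    # makes such randomized methods attractive at very large scale.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    b = rng.standard_normal(200)
    x_hat = randomized_coordinate_descent(A, b)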

  • Funder: UK Research and Innovation Project Code: EP/S023151/1
    Funder Contribution: 6,463,860 GBP

    The CDT will train the next generation of leaders in statistics and statistical machine learning, who will be able to develop widely applicable novel methodology and theory, as well as create application-specific methods, leading to breakthroughs on real-world problems in government, medicine, industry and science. The research will focus on the development of applicable modern statistical theory and methods as well as on the underpinnings of statistical machine learning, and will be strongly linked to applications.

    There is an urgent national need for graduates from this CDT. Large volumes of complicated data are now routinely collected in all sectors of society, encompassing electronic health records, massive scientific datasets, governmental data, and data collected through the advent of the digital economy. The underpinning techniques for exploiting these data come from statistics and machine learning, and exploiting such data is crucial for future UK prosperity. However, several reports from government and learned societies have identified a lack of individuals able to exploit this data. In many situations, existing methodology is insufficient: off-the-shelf approaches may be misleading due to a lack of reproducibility or sampling biases that they do not correct. Furthermore, understanding the underlying mechanisms is often desired: scientifically valid, interpretable and reproducible results are needed to understand scientific phenomena and to justify decisions, particularly those affecting individuals. Bespoke, model-based statistical methods are needed, and they may need to be blended with statistical machine learning approaches to deal with large data. The individuals who can fulfil these more sophisticated demands are doctoral-level graduates in statistics who are well versed in the foundations of machine learning. Yet the UK graduates only a small number of statistics PhDs per year, and many of these graduates will not have been exposed to machine learning.

    The Centre will bring together Imperial and Oxford, two top statistics groups, as equal partners, offering an exceptional training environment and the direct involvement of research leaders in their fields. The supervisor pool will include outstanding researchers in statistical methodology and theory as well as in statistical machine learning. We will use innovative and student-led teaching, focussing on PhD-level training. Teaching cuts across years and thus creates strong cohort cohesion, not just within a year group but also between year groups. We will link theoretical advances to application areas through partner interactions as well as through placements of students with users of statistics. The CDT has a large number of high-profile partners that helped shape our application priority areas (digital economy, medicine, engineering, public health, science) and that will co-fund and co-supervise PhD students, as well as co-deliver teaching elements.

