
Vector Institute

2 Projects
  • Funder: UK Research and Innovation
    Project Code: EP/Y028783/1
    Funder Contribution: 8,576,840 GBP

    Probabilistic AI involves the embedding of probability models, probabilistic reasoning and measures of uncertainty within AI methods. The ProbAI hub will build a world-leading, diverse and UK-wide research programme in probabilistic AI, developing the next generation of mathematically rigorous, scalable and uncertainty-aware AI algorithms. It will have far-reaching impact across many aspects of AI, including:

    (1) The sudden and rapid growth of AI systems has created a new impetus for businesses, governments and creators of AI tools to understand and convey the inherent uncertainties in their systems. A probabilistic approach to AI provides a framework to represent and manipulate uncertainty about models and predictions, and already plays a central role in scientific data analysis, robotics and cognitive science. The consequential impact of such developments has the potential to be wide-ranging and substantial: from using a probabilistic approach for effective resource allocation (healthcare), prioritisation of actions (infrastructure planning) and pattern recognition (cyber security), to the development of robust strategies to mitigate risks (finance).

    (2) It is possible to gain important theoretical insights into AI models and algorithms by studying their, often probabilistic, limiting behaviour in different asymptotic scenarios. Such results can help explain why AI methods work and how best to choose appropriate architectures, with the potential to substantially reduce the computational cost and carbon footprint of AI.

    (3) Recent breakthroughs in generative models are based on simulating stochastic processes (a minimal illustration is sketched after this summary). There is huge potential both to use these ideas to develop efficient and scalable probabilistic AI methods more generally, and to improve and extend current generative models. The latter may lead to more computationally efficient and robust methods, to generative models that use different stochastic processes and are suitable for different types of data, or to novel approaches that can attach a level of certainty to the output of a generative model.

    (4) Models from AI are increasingly being used as emulators. For example, fitting a deep neural network to realisations of a complex computer model for the weather can lead to more efficient approaches to forecasting. However, in most applications, using such emulators reliably requires that they report a measure of uncertainty, so the user knows when the output can be trusted (see the second sketch below). Building on recent generalisations of Bayes updates also gives new ways to incorporate known physical constraints and other structure into these neural network emulators, leading to more robust methods that generalise better outside the training sample, have fewer parameters and are easier to fit.

    Developing these new, practical, general-purpose probabilistic AI methods requires overcoming substantial challenges, many of which are mathematical at heart. The hub will unify a fragmented community with interests in probabilistic AI, bringing together UK researchers from across Applied Mathematics, Computer Science, Probability and Statistics. It will promote probabilistic AI widely, encouraging and facilitating cross-disciplinary mathematics research in AI, and has substantial flexibility to fund the involvement of researchers from across the UK during its lifetime. ProbAI will draw on the UK's well-established, world-leading strength in areas relevant to probabilistic AI across Mathematics and Computer Science, with the aim of making the UK the world leader in probabilistic AI.
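    To make point (3) concrete: the sketch below is an editorial illustration of the idea, not code from the hub. It draws samples from a simple one-dimensional Gaussian mixture by simulating unadjusted Langevin dynamics with the analytic score function; score-based diffusion models follow the same recipe but learn the score with a neural network. The target density, step size and iteration count are arbitrary illustrative choices.

```python
# Minimal sketch (assumption-laden, not the hub's method): sampling by
# simulating a stochastic process. Unadjusted Langevin dynamics targets a
# two-component 1-D Gaussian mixture whose score we can write analytically;
# in a score-based generative model the score would be a trained network.
import numpy as np

def mixture_score(x, means=(-2.0, 2.0), sigma=0.7):
    """d/dx log p(x) for an equal-weight two-component Gaussian mixture."""
    comps = np.stack([np.exp(-0.5 * ((x - m) / sigma) ** 2) for m in means])
    weights = comps / comps.sum(axis=0)            # component responsibilities
    grads = np.stack([-(x - m) / sigma ** 2 for m in means])
    return (weights * grads).sum(axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=5000)                          # start from pure noise
step = 0.01
for _ in range(2000):
    # Langevin update: drift up the score plus injected Gaussian noise.
    x = x + step * mixture_score(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)

print("sample mean/std:", x.mean(), x.std())       # bimodal around -2 and +2
```

    Starting from noise and ending with samples from the target is precisely the structure that diffusion-style generative models scale up to images and other data types.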
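    For point (4), one common way (again an editorial sketch, not the grant's method) to make a neural-network emulator report uncertainty is a deep ensemble: train several identical networks from different random initialisations and use their disagreement as the uncertainty signal. The toy simulator, architecture and ensemble size below are all assumptions for illustration.

```python
# Minimal deep-ensemble emulator sketch: the ensemble mean is the emulator's
# prediction and the ensemble spread is a (heuristic) measure of uncertainty.
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulator(x):                    # stand-in for an expensive computer model
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(1)
X_train = rng.uniform(-2, 2, size=(200, 1))        # training design points
y_train = simulator(X_train).ravel()

# Identical networks, different random initialisations.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=s)
    .fit(X_train, y_train)
    for s in range(5)
]

X_test = np.array([[0.5], [3.0]])                  # in- vs out-of-range inputs
preds = np.stack([m.predict(X_test) for m in ensemble])
print("mean:", preds.mean(axis=0))
print("std :", preds.std(axis=0))  # larger spread at x=3.0 flags low trust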

  • Funder: UK Research and Innovation
    Project Code: EP/S023151/1
    Funder Contribution: 6,463,860 GBP

    The CDT will train the next generation of leaders in statistics and statistical machine learning, who will be able to develop widely applicable novel methodology and theory, as well as create application-specific methods, leading to breakthroughs in real-world problems in government, medicine, industry and science. The research will focus on the development of applicable modern statistical theory and methods, as well as on the underpinnings of statistical machine learning, and will be strongly linked to applications.

    There is an urgent national need for graduates from this CDT. Large volumes of complicated data are now routinely collected in all sectors of society, encompassing electronic health records, massive scientific datasets, governmental data, and data collected through the advent of the digital economy. The underpinning techniques for exploiting these data come from statistics and machine learning, and exploiting such data is crucial for future UK prosperity. However, several reports from government and learned societies have identified a lack of individuals able to exploit this data. In many situations, existing methodology is insufficient: off-the-shelf approaches may be misleading due to a lack of reproducibility or to sampling biases which they do not correct. Furthermore, understanding the underlying mechanisms is often desired: scientifically valid, interpretable and reproducible results are needed to understand scientific phenomena and to justify decisions, particularly those affecting individuals. Bespoke, model-based statistical methods are needed, which may have to be blended with statistical machine learning approaches to deal with large data. The individuals who can meet these more sophisticated demands are doctoral-level graduates in statistics who are well versed in the foundations of machine learning. Yet the UK graduates only a small number of statistics PhDs per year, and many of these graduates will not have been exposed to machine learning.

    The Centre will bring together Imperial and Oxford, two top statistics groups, as equal partners, offering an exceptional training environment and the direct involvement of leading researchers in their fields. The supervisor pool will include outstanding researchers in statistical methodology and theory as well as in statistical machine learning. We will use innovative and student-led teaching, focussing on PhD-level training. Teaching cuts across years, creating strong cohort cohesion not just within a year group but also between year groups. We will link theoretical advances to application areas through partner interactions as well as through placements of students with users of statistics. The CDT has a large number of high-profile partners that helped shape our application priority areas (digital economy, medicine, engineering, public health, science) and that will co-fund and co-supervise PhD students, as well as co-deliver teaching elements.

