
DeepMind

8 Projects, page 1 of 2
  • Funder: UK Research and Innovation
    Project Code: EP/Y028783/1
    Funder Contribution: 8,576,840 GBP

    Probabilistic AI involves the embedding of probability models, probabilistic reasoning and measures of uncertainty within AI methods. The ProbAI hub will develop a world-leading, diverse and UK-wide research programme in probabilistic AI that will develop the next generation of mathematically rigorous, scalable and uncertainty-aware AI algorithms. It will have far-reaching impact across many aspects of AI, including:

    (1) The sudden and rapid growth of AI systems has given businesses, governments and creators of AI tools a new impetus to understand and convey the inherent uncertainties in their systems. A probabilistic approach to AI provides a framework to represent and manipulate uncertainty about models and predictions, and already plays a central role in scientific data analysis, robotics and cognitive science. The consequential impact of such developments has the potential to be wide-ranging and substantial: from using a probabilistic approach for effective resource allocation (healthcare), to prioritisation of actions (infrastructure planning), pattern recognition (cyber security) and the development of robust strategies to mitigate risks (finance).

    (2) It is possible to gain important theoretical insights into AI models and algorithms by studying their, often probabilistic, limiting behaviour in different asymptotic scenarios. Such results can help explain why AI methods work and how best to choose appropriate architectures, with the potential to substantially reduce the computational cost and carbon footprint of AI.

    (3) Recent breakthroughs in generative models are based on simulating stochastic processes. There is huge potential both to use these ideas to develop efficient and scalable probabilistic AI methods more generally, and to improve and extend current generative models. The latter may lead to more computationally efficient and robust methods, to generative models that use different stochastic processes and are suitable for different types of data, or to novel approaches that can attach a level of certainty to the output of a generative model.

    (4) Models from AI are increasingly being used as emulators. For example, fitting a deep neural network to realisations of a complex computer model for the weather can lead to more efficient approaches to forecasting. In most applications, however, using such emulators reliably requires that they report a measure of uncertainty, so the user knows when the output can be trusted. Building on recent generalisations of Bayes updates also gives new approaches to incorporating known physical constraints and other structure into these neural network emulators, leading to more robust methods that generalise better outside the training sample, have fewer parameters and are easier to fit.

    Developing these new, practical, general-purpose probabilistic AI methods requires overcoming substantial challenges, and at their heart many of these challenges are mathematical. The hub will unify a fragmented community with interests in probabilistic AI and bring together UK researchers across the breadth of Applied Mathematics, Computer Science, Probability and Statistics. It will promote the area of probabilistic AI widely, encouraging and facilitating cross-disciplinary mathematics research in AI, and has substantial flexibility to fund the involvement of researchers from across the UK during its lifetime. ProbAI will draw on well-established, world-leading strength in areas relevant to probabilistic AI across Mathematics and Computer Science, with the aim of making the UK the world leader in probabilistic AI.
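
    To make the probabilistic approach concrete, the sketch below shows Bayesian linear regression in Python: a Gaussian prior over model weights yields a closed-form posterior and a predictive variance that separates observation noise from parameter uncertainty. The toy data and all constants are illustrative assumptions, not methods of the ProbAI hub itself.

```python
import numpy as np

# Minimal sketch of probabilistic prediction: Bayesian linear regression
# with a conjugate Gaussian prior, so the posterior over weights and the
# predictive uncertainty are available in closed form. Toy data only.

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + noise
X = rng.uniform(-1, 1, size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 0.3, size=50)

Phi = np.hstack([np.ones((50, 1)), X])     # design matrix with bias term
alpha, beta = 1.0, 1.0 / 0.3**2            # prior precision, noise precision

# Posterior over weights: N(m, S)
S_inv = alpha * np.eye(2) + beta * Phi.T @ Phi
S = np.linalg.inv(S_inv)
m = beta * S @ Phi.T @ y

# Predictive mean and variance at a new input
x_new = np.array([1.0, 0.5])               # bias term + feature value
pred_mean = x_new @ m
pred_var = 1.0 / beta + x_new @ S @ x_new  # noise + parameter uncertainty
print(f"prediction: {pred_mean:.2f} +/- {np.sqrt(pred_var):.2f}")
```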

  • Funder: UK Research and Innovation
    Project Code: EP/V025279/1
    Funder Contribution: 1,283,430 GBP

    Machine learning (ML) systems are increasingly being deployed across society, in ways that affect many lives. We must ensure that there are good reasons to trust their use; that is, as Baroness Onora O'Neill has said, we should aim for reliable measures of trustworthiness. Three key measures are: fairness, measuring and mitigating undesirable bias against individuals or subgroups; transparency/interpretability/explainability, improving our understanding of how ML systems work in real-world applications; and robustness, aiming for reliably good performance even when a system encounters settings different from those in which it was trained.

    This fellowship will advance work on key technical underpinnings of fairness, transparency and robustness of ML systems, and develop timely applications that work at scale in real-world health and criminal justice settings, focusing on the interpretability and robustness of medical imaging diagnosis systems and on criminal recidivism prediction. The project will connect with industry, social scientists, ethicists, lawyers, policy makers, stakeholders and the broader public, aiming for two-way engagement: to listen carefully to needs and concerns in order to build the right tools, and in turn to inform policy, users and the public in order to maximise the benefits for society.

    This work is of national importance for the core UK strategy of being a world leader in safe and ethical AI. As the Prime Minister said in his first speech to the UN, "Can these algorithms be trusted with our lives and our hopes?" If we get this right, we will help ensure fair, transparent benefits across society while protecting citizens from harm, and avoid the potential for a public backlash against AI developments. Without trustworthiness, people will have reason to be afraid of new ML technologies, presenting a barrier to responsible innovation. Trustworthiness removes frictions that prevent people from embracing new systems, with great potential to spur economic growth and prosperity in the UK while delivering equitable benefits for society. Trustworthy ML is a key component of Responsible AI, just announced as one of four key themes of the new Global Partnership on AI. Further, this work is needed urgently: ML systems are already being deployed in ways that affect many lives. In particular, healthcare and criminal justice are crucial areas with timely potential to benefit from new technology to improve outcomes, consistency and efficiency, yet they raise important ethical concerns which this work will address. The current Covid-19 pandemic and the Black Lives Matter movement underline the urgency of these issues.
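
    As a concrete example of the fairness measures mentioned above, the sketch below computes a demographic parity difference, the gap in positive-prediction rates between two subgroups, on synthetic data. The scores, group labels and decision threshold are placeholders for illustration only, not the fellowship's actual systems.

```python
import numpy as np

# Minimal sketch of one common group-fairness measure: demographic parity
# difference, i.e. the gap in positive-decision rates between subgroups.
# All data here is synthetic and deliberately biased for illustration.

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)                    # protected attribute (0/1)
scores = rng.normal(0.0, 1.0, size=1000) - 0.5 * group   # scores skewed against group 1
y_pred = (scores > 0).astype(int)                        # binary decisions

rate_0 = y_pred[group == 0].mean()                       # positive rate, group 0
rate_1 = y_pred[group == 1].mean()                       # positive rate, group 1
print(f"positive rates: {rate_0:.2f} vs {rate_1:.2f}")
print(f"demographic parity difference: {abs(rate_0 - rate_1):.2f}")
```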

  • Funder: UK Research and Innovation
    Project Code: EP/Y028732/1
    Funder Contribution: 7,691,560 GBP

    Artificial intelligence (AI) is on the verge of widespread deployment in ways that will impact our everyday lives. It might do so in the form of self-driving cars, or of navigation systems optimising routes on the basis of real-time traffic information. It might do so through smart homes, in which the usage of high-power devices is timed intelligently based on real-time forecasts of renewable generation. It might do so by automatically coordinating emergency vehicles in the event of a major incident, natural or man-made, or by coordinating swarms of small robots collectively engaged in some task, such as search-and-rescue.

    Much of the research on AI to date has focused on optimising the performance of a single agent carrying out a single well-specified task. There has been little work so far on the emergent properties of systems in which large numbers of such agents are deployed, and on the resulting interactions. Such interactions could end up disturbing the environments for which the agents have been optimised. For instance, if a large number of self-driving cars simultaneously choose the same route based on real-time information, they could overload roads on that route. If a large number of smart homes simultaneously switch devices on in response to an increase in wind energy generation, they could destabilise the power grid. If a large number of stock-trading algorithmic agents respond similarly to new information, they could destabilise financial markets. The emergent effects of interactions between autonomous agents thus inevitably modify their operating environment, raising significant concerns about the predictability and robustness of critical infrastructure networks. At the same time, they offer the prospect of optimising distributed AI systems to take advantage of cooperation, information sharing and collective learning. The key future challenge is therefore to design distributed systems of interacting AIs that can exploit synergies in collective behaviour while being resilient to unwanted emergent effects. Biological evolution has addressed many such challenges, with social insects such as ants and bees being an example of highly complex and well-adapted responses emerging at the colony level from the actions of very simple individual agents.

    The goal of this project is to develop the mathematical foundations for understanding and exploiting the emergent features of complex systems composed of relatively simple agents. While there has already been considerable research on such problems, the novelty of this project lies in the use of information theory, a branch of mathematics ideally suited to such questions, to study fundamental mathematical limits on learning and optimisation in such systems. Insights from this study will inform the development of new algorithms for artificial agents operating in environments composed of large numbers of interacting agents. The project will bring together mathematicians working in information theory, network science and complex systems with engineers and computer scientists working on machine learning, AI and robotics. The aim is to translate theoretical insights into algorithms deployed in real-world systems; lessons learned from deploying and testing the algorithms in interacting systems will be used to refine models and algorithms in a virtuous circle.
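
    The route-choice example above can be made concrete with a toy simulation: when every agent reacts identically to the same real-time signal, the shared resource is overloaded and delays oscillate, whereas a probabilistic (mixed) choice settles near a balanced load. All numbers and the delay model below are illustrative assumptions, not results from the project.

```python
import numpy as np

# Toy illustration of the emergent effect described above: agents who all
# react to the same real-time signal overload the resource they share.
# 1000 "cars" either herd onto whichever route was faster last step, or
# choose probabilistically in proportion to how fast each route looks.

rng = np.random.default_rng(2)
n_agents, n_steps = 1000, 20
delay0 = np.array([0.4, 0.6])                 # initial travel times, routes A and B

for policy in ("herd", "probabilistic"):
    d = delay0.copy()
    for _ in range(n_steps):
        if policy == "herd":
            choice = np.full(n_agents, d.argmin())   # everyone takes the "fast" route
        else:
            p = d[::-1] / d.sum()                    # softly prefer the faster route
            choice = rng.choice(2, size=n_agents, p=p)
        load = np.bincount(choice, minlength=2) / n_agents
        d = 0.2 + load                               # travel time grows with load
    print(policy, "final delays:", np.round(d, 2))
# The herding policy leaves one route overloaded at every step, while the
# probabilistic policy converges towards equal delays on both routes.
```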

  • Funder: UK Research and Innovation
    Project Code: BB/W013770/1
    Funder Contribution: 1,259,580 GBP

    Our vision for this Transition Award is to leverage and combine key emerging technologies in Artificial Intelligence (AI) and Engineering Biology (EB) to enable and pioneer a new era of world-leading advances that will directly contribute to the objectives of the National Engineering Biology Programme. Realising the benefits of Engineering Biology technologies depends on increasing our capability for the predictive design and optimisation of engineered biosystems across different biological scales. Such a scaled approach would significantly accelerate the translation of scientific research and innovation into applications of wide commercial and societal impact.

    Synthetic Biology has developed rapidly over the past decade, and we now have the core tools and capabilities required to modify and engineer living systems. However, our ability to predictably design new biological systems is still limited by the complexity, noise and context dependence inherent to biology. Achieving the full capability of Engineering Biology requires a change in capacity and scope, including lab automation to deliver high-throughput workflows. With this comes the challenge of managing and utilising the data-rich environment of biology that has emerged from recent advances in data collection, including high-throughput genomics, transcriptomics and metabolomics. Such approaches produce datasets that are too large for direct human interpretation, so deep statistical learning and inference methods are needed to uncover patterns and correlations within these data. Meanwhile, steady improvements in computing power, combined with recent advances in data and computer science, have fuelled a new era of AI-driven methods and discoveries that are progressively permeating almost all sectors and industries. However, the data we can gather from biological systems do not match the requirements of the off-the-shelf ML/AI methods and tools currently available. This calls for new, bespoke AI/ML methods adapted to the specific features of biological measurement data. AI approaches have the potential both to learn from complex data and, when coupled to appropriate systems design and engineering methods, to provide the predictive power required for the reliable engineering of biological systems with desired functions.

    As the field develops, there is an opportunity to focus strategically on data-centric approaches and AI-enabled methods appropriate to the challenges and themes of the National Engineering Biology Programme. Closing the Design-Build-Test-Learn loop using AI to direct the "learn" and "design" phases will provide a radical intervention that fundamentally changes the way we design, optimise and build biological systems; a sketch of such a loop follows below. Through this AI-4-EB Transition Award we will build a network of inter-connected, inter-disciplinary researchers to both develop and apply next-generation AI technologies to biological problems. This will be achieved through a combination of leading-light inter-disciplinary pilot projects for application-driven research, meetings to build the scientific community, and sandpits supported by seed funding to generate novel ideas and new collaborations around AI approaches for real-world use. We will also develop a responsible research and innovation (RRI) strategy to address the complex issues arising at the confluence of these two critical and transformative technologies. Overall, AI-4-EB will provide the necessary step change in the analysis of large, heterogeneous biological datasets and in the AI-based design and optimisation of biological systems, with sufficient predictive power to accelerate Engineering Biology.
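
    As a rough illustration of closing the Design-Build-Test-Learn loop with AI, the sketch below alternates between fitting a surrogate model to measured designs ("learn") and choosing the next design where the model is most optimistic ("design"). The hidden response curve, the quadratic surrogate and all constants are toy stand-ins under stated assumptions, not methods from the AI-4-EB programme itself.

```python
import numpy as np

# Minimal sketch of an AI-directed Design-Build-Test-Learn (DBTL) loop.
# A hidden 1-D response curve plays the role of the engineered biosystem;
# each round fits an ensemble of surrogates and builds the most promising
# candidate design. Everything here is an illustrative toy.

rng = np.random.default_rng(3)
truth = lambda x: np.exp(-(x - 0.7) ** 2 / 0.02)      # hidden expression yield

designs = list(rng.uniform(0, 1, size=5))             # initial designs
yields = [truth(x) + rng.normal(0, 0.02) for x in designs]
candidates = np.linspace(0, 1, 101)

for _ in range(5):                                    # five DBTL rounds
    x_arr, y_arr = np.array(designs), np.array(yields)
    # "Learn": a perturb-and-fit ensemble of quadratic surrogates gives a
    # predictive mean and a rough uncertainty over candidate designs.
    preds = [np.polyval(np.polyfit(x_arr, y_arr + rng.normal(0, 0.05, len(y_arr)), 2),
                        candidates) for _ in range(20)]
    mean, std = np.mean(preds, axis=0), np.std(preds, axis=0)
    # "Design": pick the candidate with the best optimistic estimate.
    x_next = candidates[np.argmax(mean + std)]
    designs.append(x_next)                            # "Build"
    yields.append(truth(x_next) + rng.normal(0, 0.02))  # "Test"

print(f"best design found: x = {designs[int(np.argmax(yields))]:.2f}")
```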

  • Funder: UK Research and Innovation
    Project Code: EP/Y028805/1
    Funder Contribution: 10,250,200 GBP

    Generative models are AI models that can generate data. Researchers have recently shown that, by training these models on large amounts of data (text and images from the internet), they learn the regularities of our textual and visual world so well that they can generate responses to questions and create new images with surprising fidelity. This heralds a new era in which computers can assist humans in carrying out tasks more efficiently than ever, with significant opportunities for society, science and industry. However, these advances still need significant research: how to make such models train efficiently on different problems, and how to understand their reliability and adherence to ethical norms.
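
    A minimal sketch of the train-then-generate pattern behind such models: a character-level bigram model is estimated by counting on a toy corpus, then sampled to produce new text. Real generative models use deep networks at vastly larger scale; the corpus and all details below are illustrative only.

```python
import numpy as np

# Minimal sketch of a generative model: learn the regularities of the
# training data (here, bigram character statistics), then sample new data
# one character at a time from the learned distribution.

corpus = "the cat sat on the mat. the dog sat on the log. "
chars = sorted(set(corpus))
idx = {c: i for i, c in enumerate(chars)}

# "Training": estimate P(next char | current char) from bigram counts
counts = np.ones((len(chars), len(chars)))     # add-one smoothing
for a, b in zip(corpus, corpus[1:]):
    counts[idx[a], idx[b]] += 1
probs = counts / counts.sum(axis=1, keepdims=True)

# "Generation": sample from the learned conditional distributions
rng = np.random.default_rng(4)
c = idx["t"]
out = ["t"]
for _ in range(60):
    c = rng.choice(len(chars), p=probs[c])
    out.append(chars[c])
print("".join(out))
```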

