
Microsoft

Country: United Kingdom
10 Projects, page 1 of 2
  • Funder: UK Research and Innovation | Project Code: EP/V056522/1
    Funder Contribution: 3,156,740 GBP

    The ambition of this partnership between NATS and The Alan Turing Institute is to develop the fundamental science to deliver the world's first AI system to control a section of airspace in live trials. Our research will take a hierarchical approach to air traffic control (ATC) by developing a digital twin alongside a multi-agent machine-learning control system for UK airspace. Furthermore, the partnership will develop technical approaches to deploy trustworthy AI systems, considering how safety, explainability and ethics are embedded within our methods, so that we can deliver new tools which work in harmony with human air traffic controllers in a safety-critical environment. Little has changed in the fundamental infrastructure of UK airspace in the past 50 years, but demand for aviation has increased a hundredfold. Aviation 2050, a recent government green paper, underlines the importance of the aviation network to the prosperity of the UK, to the value of £22 billion annually. Yet our nation is at risk without rapid action to modernise our airspace and control methods, to ensure they can handle a future increase in UK passenger traffic of over 50% by 2050 and new challenges arising from unmanned aircraft, both against a backdrop of increasing global pressure to transform the sector's environmental impact. The augmentation of live air traffic control through AI agents which can handle the complexity and uncertainties in the system has transformative potential for NATS's business. It will positively impact live operations and also serve as a research tool and training facility for new air traffic controllers (ATCOs). Correspondingly, NATS's research vision is to exploit new approaches to AI that enable increases in safety, capacity and environmental sustainability while streamlining air traffic controller training. The anticipated benefits of AI systems to air traffic control come at a critical time, providing an opportunity to respond effectively to the unprecedented challenges arising from a triad of crises: the Covid-19 pandemic, Brexit and global warming. The UK must develop independent technical advances in the sector without compromising sustainability targets. The Alan Turing Institute is positioned at the rapidly evolving frontiers of probabilistic machine learning, safe and trustworthy AI and reproducible software engineering. Matching this with the world-leading expertise of NATS, supported by a world-first data set of more than 20 million flight records, puts this partnership in a unique position to build the first multi-agent AI system to deliver tactical control of UK airspace.
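
    To make the architecture described above more concrete, the sketch below shows, in schematic form, how learned per-aircraft agents might interact with a digital twin of an airspace sector in a simple observe-decide-step loop. This is a minimal illustration only: the class names, observation fields and instruction format are invented for this sketch and do not describe the partnership's actual system.

# Illustrative sketch only: a hypothetical observe-decide-step loop in which
# learned agents issue heading/level instructions to aircraft simulated by a
# digital twin of an airspace sector. All names are invented for illustration.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Instruction:
    callsign: str
    heading_deg: float   # commanded heading in degrees
    flight_level: int    # commanded flight level (hundreds of feet)


class SectorDigitalTwin:
    """Toy stand-in for a simulated airspace sector."""

    def observations(self) -> Dict[str, dict]:
        # One observation per aircraft currently in the sector.
        return {"BAW123": {"heading": 270.0, "flight_level": 350, "conflict_risk": 0.1}}

    def step(self, instructions: List[Instruction]) -> None:
        # Advance the simulation one time step under the given clearances (stub).
        pass


class ControllerAgent:
    """One learned agent per aircraft; here a trivial placeholder policy."""

    def act(self, callsign: str, obs: dict) -> Instruction:
        # A trained policy would map the observation to a new clearance;
        # this placeholder simply maintains the current heading and level.
        return Instruction(callsign, obs["heading"], obs["flight_level"])


def control_loop(twin: SectorDigitalTwin, agent: ControllerAgent, steps: int = 10) -> None:
    for _ in range(steps):
        obs = twin.observations()
        instructions = [agent.act(cs, o) for cs, o in obs.items()]
        twin.step(instructions)


if __name__ == "__main__":
    control_loop(SectorDigitalTwin(), ControllerAgent())

    In a real system, a trained policy would replace the placeholder logic in ControllerAgent.act and the twin's step function would advance a full airspace simulation rather than a stub.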

  • Funder: UK Research and Innovation | Project Code: EP/T026723/1
    Funder Contribution: 1,155,320 GBP

    There is an unprecedented integration of AI assistants into everyday life, from the personal AI assistants running in our smart phones and homes, to enterprise AI assistants for increased productivity in the workplace, to health AI assistants. In the UK alone, 7M users interact with AI assistants every day, and 13M on a weekly basis. A crucial issue is how secure AI assistants are, as they make extensive use of AI and learn continually. AI assistants are also complex systems, with different AI models interacting with each other, with the various stakeholders and with the wider ecosystem in which AI assistants are embedded. Threats range from adversarial settings, where malicious actors exploit vulnerabilities that arise from the use of AI models to make AI assistants behave in an insecure way, to accidental ones, where negligent actors introduce security issues or use AI assistants (AIS) insecurely. Beyond the technical complexities, users of AI assistants are known to have highly incomplete mental models of them and do not know how to protect themselves. SAIS (Secure AI assistantS) is a cross-disciplinary collaboration between the Departments of Informatics, Digital Humanities and The Policy Institute at King's College London, and the Department of Computing at Imperial College London, working with non-academic partners: Microsoft, Humley, Hospify, Mycroft, policy and regulation experts, and the general public, including non-technical users. SAIS will provide an understanding of attacks on AIS considering the whole AIS ecosystem, the AI models used in them, and all the stakeholders involved, particularly focusing on the feasibility and severity of potential attacks on AIS from a strategic threat and risk approach. Based on this understanding, SAIS will propose methods to specify, verify and monitor the security behaviour of AIS using model-based AI techniques, which are known to provide richer foundations than data-driven ones for explaining the behaviour of AI-based systems. This will result in a multifaceted approach, including: a) novel specification and verification techniques for AIS, such as methods to verify the machine learning models used by AIS; b) novel methods to dynamically reason about the expected behaviour of AIS in order to audit and detect any degradation or deviation from that expected behaviour, based on normative systems and data provenance; and c) co-created security explanations following a techno-cultural method to increase users' literacy in AIS security in a way that users can comprehend.
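
    As one way to picture the normative-systems idea mentioned above, the sketch below checks an assistant's proposed actions against simple declarative norms at run time and reports any violations for auditing. It is a hedged illustration only; the norms, data categories and class names are hypothetical and are not taken from the SAIS project.

# Illustrative sketch only: runtime monitoring of an AI assistant's actions
# against simple declarative norms, loosely in the spirit of the
# normative-systems approach described above. All names are hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Action:
    skill: str          # which assistant skill produced the action
    data_shared: set    # categories of user data the action would expose


@dataclass
class Norm:
    description: str
    permits: Callable[[Action], bool]


NORMS: List[Norm] = [
    Norm("Health data may only leave the device via the 'health' skill",
         lambda a: "health" not in a.data_shared or a.skill == "health"),
    Norm("No skill may share location and contacts in the same action",
         lambda a: not {"location", "contacts"} <= a.data_shared),
]


def audit(action: Action) -> List[str]:
    """Return the norms this action would violate (empty list = compliant)."""
    return [n.description for n in NORMS if not n.permits(action)]


if __name__ == "__main__":
    suspicious = Action(skill="shopping", data_shared={"health", "location"})
    for violation in audit(suspicious):
        print("VIOLATION:", violation)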

  • Funder: UK Research and Innovation | Project Code: EP/V02678X/1
    Funder Contribution: 1,272,140 GBP

    The proposed programme of research will establish the machine learning foundations and artificial intelligence methodologies for Digital Twins. Digital Twins are digital representations of real-world physical phenomena and assets that are coupled with the corresponding physical twin through instrumentation and live flows of data and information. This research programme will establish next-generation Digital Twins that enable decision makers to perform accurate but simulated "what-if" scenarios in order to better understand the real-world phenomena and improve overall decision making and outcomes.
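
    As a toy illustration of the coupling described above, the sketch below calibrates a one-parameter twin of a physical asset from (invented) live measurements and then uses it to answer a simulated "what-if" question. The model, parameters and data are all assumptions made for this sketch, not part of the proposed research programme.

# Illustrative sketch only: a minimal digital twin that is calibrated from
# live sensor readings and then used to run a simulated "what-if" scenario.
# The model, parameters and data are invented for illustration.

from statistics import mean
from typing import List


class BridgeTwin:
    """Toy twin of a structural asset: load (tonnes) -> deflection (mm)."""

    def __init__(self, stiffness: float = 50.0):
        self.stiffness = stiffness  # mm of deflection per 100 t of load

    def calibrate(self, loads: List[float], deflections: List[float]) -> None:
        # Fit the single parameter to live measurements (crude toy fit).
        self.stiffness = mean(d / (l / 100.0) for l, d in zip(loads, deflections))

    def what_if(self, load: float) -> float:
        # Predicted deflection under a hypothetical load.
        return self.stiffness * load / 100.0


if __name__ == "__main__":
    twin = BridgeTwin()
    twin.calibrate(loads=[80, 120, 150], deflections=[4.1, 6.2, 7.4])
    print(f"Predicted deflection at 200 t: {twin.what_if(200):.1f} mm")

    A real twin would of course use a far richer physical or learned model and a continuous data feed, but the calibrate/what_if split captures the basic idea of a live-coupled model used for simulated decision support.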

  • Funder: UK Research and Innovation | Project Code: ES/V003666/1
    Funder Contribution: 3,570,740 GBP

    Technological advances have done, and will do, much to improve cybersecurity. But a technological approach is only part of the solution: achieving digital security is inherently a socio-technical endeavour. By combining world-leading research with challenge fellows from across the social sciences, expert working groups, innovative approaches to networking and agile, industry-facing commissioning, the DiScriBe Hub+ will not only address the challenges faced by the ISCF Digital Security by Design (DSbD) initiative, but will fundamentally reshape the ways in which social sciences and STEM disciplines work together to address the challenges of digital security by design in the 21st century. The core mission of the DiScriBe Hub+ is to provide interdisciplinary leadership to realise digital security by design by connecting social science to a hardware layer that rarely receives support or engagement from social science. This social science input will help to unleash the transformational potential that the hardware innovations within Digital Security by Design make possible. The Hub+ has five main ways of doing this: 1) running a series of deep engagements with DSbD stakeholders, using techniques from the arts and humanities, in order to elicit a shared view of 'Digital Security by Design Futures'; 2) conducting an innovative programme of interdisciplinary research to improve our understanding of the barriers and incentives around adoption of new secure architectures, business readiness levels and adoption, regulatory opportunities and challenges, and the ways these are experienced and understood across diverse sectors; 3) commissioning a range of agile, responsive, industry-facing projects and 'connecting capabilities' grants to address specific DSbD challenges; 4) establishing a network of 'challenge fellows' tasked with synthesising research outcomes (core and commissioned), connecting insights to the wider Digital Security by Design initiative, and ensuring impact, alongside expert working groups comprising industry and researchers to tackle specific problems in a sharp, focussed way; and 5) building a community of social scientists, hardware engineers, software developers, industry and policy makers who are deeply engaged in applying a socio-technical lens to digital security by design. DiScriBe is unique in its focus on the benefits of connecting security architecture innovation with leading social science, and will provide a step change in how cybersecurity is treated as an interdisciplinary, social as well as technical, problem. Many of the lessons on cross-disciplinary working will be tested and embedded through close working with the Bristol Digital Futures Institute, a £70m investment in how our ways of working will need to change in the digital future. Our expert challenge fellows are leading social scientists applying their work to cybersecurity for the first time. These fellows will also lead working groups on specific topics connecting industry, policy and academia, which in turn will lead to a range of open calls for commissioned, industry-facing research. This research will be theoretically rigorous within social science while remaining responsive and agile enough to meet the needs of the wider DSbD programme. As a consequence, a major outcome of DiScriBe will be not only a vibrant new community, but also novel insights that can be applied to the development and implementation of new security-related developments.

  • Funder: UK Research and Innovation | Project Code: EP/S023992/1
    Funder Contribution: 5,492,190 GBP

    We live in a society dominated by information. The collection of data is an ongoing and continuous process, covering all aspects of life, and the amount of data available has exploded in recent years. In order to make sense of this data, utilise it, gain insights and draw conclusions, new computational methods for analysis and inference have been developed. These are often described by the general term "artificial intelligence" (AI), which includes "machine learning" and "deep learning", and rely on the processing of information by computers to extract non-trivial information without explicit models. Highly visible are developments driven by social media, as these affect every person in a very explicit manner. However, AI is widely adopted across the industrial sectors and hence underpins the successful growth of the UK's economy. Moreover, in academic research AI has become a toolset used across the disciplines, beyond the traditional realms of computer and data science. Research in science, health and engineering relies on AI to support a wide range of activities, from the discovery of the Higgs boson and gravitational waves, via the detection of breast cancer and diabetic retinopathy, to autonomous decision-making and human-machine interaction. In order to sustain this industrial growth, it is necessary to train the next generation of highly skilled AI users and researchers. In this Centre for Doctoral Training (CDT), we deliver a training programme for doctoral researchers covering a broad range of scientific and medical topics, with external partners engaged at every level, from large international companies via government agencies to SMEs and start-ups. AI relies on computing, and with data sets growing ever larger, advanced computing skills, such as optimisation, parallelisation and scalability, become a necessity for the bigger tasks. For that reason, the CDT has joined forces with Supercomputing Wales (SCW), a new £15 million national supercomputing programme of investment, part-funded by the European Regional Development Fund. The CDT will connect researchers working at Swansea, Aberystwyth, Bangor, Cardiff and Bristol universities with regional and national industrial partners and with SCW. Our CDT is therefore ideally placed to link AI and high-performance computing in a coordinated fashion. The academic foundation of our training programme is built on research excellence. We focus on three broad multidisciplinary scientific, medical and computational areas, namely:
    - data from large science facilities, such as the Large Hadron Collider, the Square Kilometre Array and the Laser Interferometer Gravitational-Wave Observatory;
    - biological, health and clinical sciences, including access to electronic health records maintained in the Secure Anonymised Information Linkage databank;
    - novel mathematical, physical and computer science approaches, driving future developments in e.g. visualisation, collective intelligence and quantum machine learning.
    Our researchers will therefore be part of cutting-edge global science activities, be able to modernise public health and help determine the future landscape of AI. We recognise that AI is a multidisciplinary activity which extends far beyond single disciplines or institutions. Training and engagement will hence take place across the universities and industrial partners, which will stimulate interaction. Ideally, a doctoral researcher should be able to apply their skills to a research topic in, say, health informatics, particle physics or deep learning, and be able to contribute equally. To ensure our training is aligned with the demands of industry, the CDT's industrial partners will co-create the training programme, provide input on research problems and highlight industrial challenges. As a result, our researchers will grow into flexible and creative individuals who will be fluent in AI skills and well placed for both industry and academia.
