
BMT Defence Services

BMT DEFENCE SERVICES LIMITED
Country: United Kingdom

9 Projects, page 1 of 2
  • Funder: UK Research and Innovation
    Project Code: EP/X030156/1
    Funder Contribution: 887,960 GBP

    Mobile autonomous robots offer huge potential to help humans and reduce risk to life in a variety of potentially dangerous defence and security (as well as civilian) applications. However, there is an acute lack of trust in robot autonomy in the real world - in terms of operational performance, adherence to the rules of law and safety, and human values. Furthermore, poor transparency and lack of explainability (particularly with popular deep learning methods) add to the mistrust when autonomous decisions do not align with human "common sense". All of these factors are preventing the adoption of autonomous robots and creating a barrier to the future vision of seamless human-robot cooperation. The crux of the problem is that autonomous robots do not perform well under the many types of ambiguity that arise commonly in the real world. These can be caused by inadequate sensing information or by conflicting objectives of performance, safety, and legality. Humans, on the other hand, are very good at recognising and resolving these ambiguities.

    This project aims to imbue autonomous robots with a human-like ability to handle real-world ambiguities. This will be achieved through the logical and probabilistic machine learning approach of Bayesian meta-interpretive learning (BMIL). In simple terms, this approach uses a set of logical statements (e.g., propositions and connectives) that are akin to human language, in contrast to the popular approach of deep learning, which uses complex multi-layered neural networks with millions of numerical connections. It is through the logical representation and human-like reasoning of BMIL that it will be possible to encode expert human knowledge into the perceptive "world model" and deliberative "planner" of the robot's "artificial brain". The human-like decision-making will be encoded in a variety of ways: A) by design, from operational and legal experts in the form of initial logical rules; B) through passive learning of new logical representations and rules when a human override intervenes because the robot is not behaving as expected; and C) through recognising ambiguities before they arise and actively learning rules to resolve them with human assistance.

    A general autonomy framework will be developed to incorporate the new approach. It is intended that this will be applicable to all forms of autonomous robots in all applications. However, as a credible and feasible case study, we are focusing our real-world experiments on aquatic applications using an uncrewed surface vehicle (USV), or "robot boat", with underwater acoustic sensors (sonar) for searching underwater spaces. This problem is relevant in several areas of defence and security, including water gap crossing, naval mine countermeasures, and anti-submarine warfare. Specifically, our application focus will be the police underwater search problem, which has challenging operational goals (i.e., finding small and potentially concealed objects underwater and amidst clutter), as well as considerations for the safety of human divers and other users of the waterway (e.g., akin to the International Regulations for Preventing Collisions at Sea), and legal obligations relating to preservation of the evidence chain and timeliness due to custodial constraints.

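    To make the flavour of the approach concrete, the sketch below shows rule-based action selection with an ambiguity check and a human-assistance fallback. It is only a minimal illustration, not the project's BMIL system: BMIL itself is a logical and probabilistic learning technique, whereas this Python toy merely mimics the if-then character of such rules, and the predicates (`sonar_confidence`, `diver_distance_m`), rule names and functions are invented for this example.

```python
# Hypothetical sketch: expert-authored rules (A), plus learning a new rule
# from a human resolution when the situation is ambiguous (B/C).
# Not the project's BMIL implementation; all names are illustrative.

from dataclasses import dataclass
from typing import Callable, Dict, List

Observation = Dict[str, float]

@dataclass
class Rule:
    name: str                                   # human-readable label
    condition: Callable[[Observation], bool]    # logical premise
    action: str                                 # conclusion (action to take)

# A) Initial rules "by design" from operational and legal experts.
rules: List[Rule] = [
    Rule("clear_contact", lambda o: o["sonar_confidence"] > 0.8, "mark_and_report"),
    Rule("diver_nearby",  lambda o: o["diver_distance_m"] < 10.0, "hold_position"),
]

def choose_action(obs: Observation) -> str:
    """Fire all rules; treat zero or conflicting matches as an ambiguity."""
    fired = [r for r in rules if r.condition(obs)]
    actions = {r.action for r in fired}
    if len(actions) == 1:
        return actions.pop()
    # B/C) Ambiguity: defer to a human operator and record the resolution
    # as a new rule so the same situation is handled autonomously next time.
    print(f"Ambiguous situation ({[r.name for r in fired]}); asking operator.")
    resolved = "hold_position"  # stand-in for a real operator response
    rules.append(Rule("learned_override", lambda o, snap=dict(obs): o == snap, resolved))
    return resolved

if __name__ == "__main__":
    print(choose_action({"sonar_confidence": 0.9, "diver_distance_m": 50.0}))
    print(choose_action({"sonar_confidence": 0.9, "diver_distance_m": 5.0}))
```
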
  • Funder: UK Research and Innovation
    Project Code: EP/M023281/1
    Funder Contribution: 3,994,060 GBP

    The Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA) will build on and extend existing impactful relationships between leading researchers at the University of Bath, supported by investment from the University and from external partners, and with the close participation of Bath's EPSRC Doctoral Training Centre for Digital Entertainment (CDE). Building on existing expertise in Applied Visual Technology and closely linked with the CDE, CAMERA will draw on knowledge, skills and outputs across multi-disciplinary research areas. These include Computer Vision, Graphics, Motion-Capture, Human-Computer Interaction, Biomechanics and Healthcare, underpinned by a strong portfolio of DE research funding from RCUK and other funders. CAMERA will deliver Applied Visual Technology into our partner companies and their industries, to achieve high economic, societal and cultural impact. Bath leads the UK in innovative creative industry research and training for postgraduates through our CDE, which is contractually partnered with 35 innovative UK companies. Growing from our established core strength in the area of Visual Technology - capturing, modelling and visualising the real world - and our strong historical foundation of entertainment-delivered research, CAMERA will focus on high-impact work in movies, TV visual effects (VFX) and video games with partners The Imaginarium and The Foundry, two of the world's leading visual entertainment companies. This focused collaboration will push the boundaries of technology in human motion capture, understanding and animation, and artist-driven visual effects production, feeding into our existing CDE partnerships.

    From this strong foundation, we will extend and apply visual technology to new areas of high economic, societal and cultural impact within the digital economy theme. These include Human Performance Enhancement, with partners in British Skeleton and BMT Defence Services, and Health, Rehabilitation and Assistive Technologies, with partners in the Ministry of Defence. CAMERA is well placed to lead the application of Visual Technology in these new directions: Bath researchers have helped athletes to win 15 Olympic and World Championship medals in the last 10 years and have contributed significantly to national efforts in integrating ex-soldiers with disabilities into civilian life.

  • Funder: European Commission
    Project Code: 218599
  • Funder: UK Research and Innovation
    Project Code: EP/S023437/1
    Funder Contribution: 7,062,520 GBP

    Research Area: ART-AI is a multidisciplinary CDT, bringing together computer science, social science and engineering so that its graduates will be specialists in one subject, but have substantial training and experience in the others. The ART-AI management team brings together research in AI, HCI, politics/economics, and engineering, while the CDT as a whole has a team of >40 supervisors across seven departments in three faculties and the institutes for policy research (IPR) and for mathematical innovation (IMI). This is not a marriage of convenience: many CDT members have experience of interdisciplinary working and, together with CDT cohorts and partners, we will create accessible, transparent and intelligible AI, driven by ethical and responsible principles, to address issues in, for example, policy design and political decision-making, development of trust in AI for humans and organisations, autonomous systems, sensing and data analysis, explanation of machine decision-making, public service design, social simulation and the ethics of socio-technical systems.

    Need: Hardly a day passes without a news article on the wonders and dangers of AI. But decisions - by individuals, organisations, society and government - on how to use or not use AI should be informed and ethical. We need policy experts to recognise both opportunities and threats, engineers to extend our technical capabilities, and scientists to establish what is tractable and to predict likely outcomes of policies and innovations. We need mutually informed decisions taking account of diverse needs and perspectives. This need is expressed in measured terms by a slew of major reports (see Case for Support) and Commons and Lords committees, all reflecting the UKCES Sector Insights (Evidence Report #92, 2015) prediction of a need by 2022 for >0.5M additional workers in the digital sector, against just a third of that number graduating annually. To realise the government vision for AI (White Paper), a critical fraction of those 0.5M workers need to be leaders and innovators with in-depth scientific and technical knowledge to make the right calls on what is possible, what is desirable, and how it can be most safely deployed. Beyond the UK, a 2018 PwC report indicates AI will impact ~10% of jobs, or ~326 million globally, by 2030, with ~33% in high-skill jobs across most economic sectors. The clear conclusion is a need for a significant cadre of high-skill workers and leaders with a detailed knowledge of AI, an understanding of how to utilise it, and an appreciation of its political, social and economic implications. ART-AI is designed to deliver these in collaboration and co-creation with stakeholders in these areas.

    Approach: ART-AI will produce interdisciplinary graduates and interdisciplinary research by (i) exposing its students to all three disciplines in the taught elements, (ii) fostering the development of multi-discipline perspectives throughout the doctoral research process, and (iii) establishing international and stakeholder perspectives, whilst contributing to immediate, real-world problems, through a programme of visiting lecturers, research visits to leading institutions and internships. The CDT will use some conventional teaching, but the innovations in doctoral training are: (i) multi-disciplinary team projects; (ii) structured and facilitated horizontal (intra-cohort) peer learning, vertical (inter-cohort) mentoring, and interdisciplinary cross-cohort activities in years 2-4; (iii) demonstrated contextualisation of the primary-discipline research in the other disciplines, both at transfer (confirmation) at the end of year 2 and in the final dissertation. Each student will have a primary supervisor from their main discipline, a co-supervisor from at least one of the other two and, where appropriate, one from a CDT partner, reflecting the interdisciplinarity and co-creation that underpin the CDT.

  • Funder: UK Research and Innovation
    Project Code: EP/R008787/1
    Funder Contribution: 1,143,860 GBP

    The overarching aim is to develop a facility, called Structures 2025, for the testing and evaluation of large structures. Constructing such a facility requires the purchase of specialist equipment, comprising imaging, loading and control systems. Structures 2025 will provide a novel integrated imaging and loading system that is flexible and can be used for the testing and assessment of a wide range of structures across industry sectors. The unique feature of Structures 2025 is that it will, for the first time, enable data-rich studies of the behaviour of large components and structures subjected to realistic loading scenarios mimicking the behaviour of a structure in service. It will be possible to model the loads felt by aircraft in flight, railway structures, bridges and cars, and to understand better how the structure supports the load experienced in service. Structures 2025 will enable the introduction of new lightweight materials into transport systems, allowing energy savings and a more sustainable approach to design.

    The uniqueness of Structures 2025 is predicated on imaging, where large amounts of data can be collected to provide information about the structural response. The imaging will be based on both visible-light and infra-red camera systems, which capture data from the loaded structure that is then used to evaluate strains and deformations. Traditional sensors take only point readings, whereas images provide data over a wide field of view; since each sensor element in the imaging device provides a measurement, the term 'data-rich' is applied. A complete system integration will be developed and implemented that combines load application, using a multi-actuator loading system, with the imaging systems. The combination of techniques into a single integrated system will be unique internationally and will enable accurate assessment of the interactions between material failure mechanisms/modes and structural stiffness/strength-driven failure modes at a hitherto unattainable level of physical realism. Structures 2025 will provide what can be termed high-fidelity, data-rich testing of structural components, to integrate with multi-scale computational modelling, provide better predictive models of structural failure and create safer and more efficient structures. Structures 2025 will be developed in close collaboration with 16 industry partners, representing the rail infrastructure, civil engineering, experimental technique development, energy systems, marine and offshore, and aerospace sectors, as well as several university partners.

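    The sketch below illustrates the "data-rich" contrast with point sensors: every pixel of a measured displacement field yields a strain estimate, in the general style of image-correlation post-processing. The synthetic displacement field, grid size and pixel spacing are invented for illustration; this is not the Structures 2025 processing pipeline.

```python
# Illustrative sketch (assumptions labelled): a made-up full-field
# displacement map stands in for an imaging measurement, and strain is
# estimated at every pixel, versus one reading from a point gauge.

import numpy as np

# Synthetic in-plane displacement field u_x(x, y) on a 1 mm pixel grid.
ny, nx = 200, 300                      # image size in pixels (invented)
pixel_size_mm = 1.0                    # grid spacing (invented)
x = np.arange(nx) * pixel_size_mm
y = np.arange(ny) * pixel_size_mm
X, Y = np.meshgrid(x, y)
u_x = 1e-3 * X + 5e-4 * np.sin(Y / 20.0)   # mm, a made-up deformation

# Full-field normal strain eps_xx = d(u_x)/dx, one estimate per pixel
# (small-strain approximation), rather than a single gauge reading.
eps_xx = np.gradient(u_x, pixel_size_mm, axis=1)

print(f"{eps_xx.size} strain estimates from one image pair")
print(f"strain at a single 'gauge' location: {eps_xx[100, 150]:.2e}")
print(f"mean / max strain over the field:    {eps_xx.mean():.2e} / {eps_xx.max():.2e}")
```
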
