
Bloc Digital
2 Projects
Project (2021 - 2024)
Partners: Guidance Automation Ltd, Shadow Robot (United Kingdom), Shadow Robot Company Ltd, Connected Places Catapult, D-RisQ Ltd, D-RisQ (United Kingdom), Consequential Robotics Ltd, Consequential Robots, KCL, Scoutek Ltd, Sheffield Children's NHS Foundation Trust, ClearSy, Amazon Web Services, Inc., Amazon (United States), British Telecommunications plc, BT Group (United Kingdom), BAE Systems (UK), BAE Systems (Sweden), Bloc Digital, Cyberselves Universal Limited
Funder: UK Research and Innovation
Project Code: EP/V026801/2
Funder Contribution: 2,621,150 GBP

Autonomous systems promise to improve our lives; driverless trains and robotic cleaners are examples of autonomous systems that are already among us and work well within confined environments. It is time to ensure that developers can design trustworthy autonomous systems for dynamic environments and provide evidence of their trustworthiness. Due to the complexity of autonomous systems, which typically involve AI components, low-level hardware control, and sophisticated interactions with humans and an uncertain environment, evidence of any nature requires effort from a variety of disciplines. To tackle this challenge, we have gathered a consortium of experts in AI, robotics, human-computer interaction, systems and software engineering, and testing. Together, we will establish the foundations and techniques for verification of properties of autonomous systems to inform designs, provide evidence of key properties, and guide monitoring after deployment.
Currently, verifiability is hampered by several issues: difficulties in understanding how evidence provided by techniques that focus on individual aspects of a system (control engineering, AI, or human interaction, for example) composes to provide evidence for the system as a whole; difficulties of communication between stakeholders who use different languages and practices in their disciplines; difficulties in dealing with advanced concepts in AI, control and hardware design, and software for critical systems; and others. As a consequence, autonomous systems are often developed using advanced engineering techniques but outdated approaches to verification. We propose a creative programme of work that will enable fundamental changes to the current state of the art and of practice. We will define a mathematical framework that enables a common understanding of the diverse practices and concepts involved in the verification of autonomy. Our framework will provide the mathematical underpinning, required by any engineering effort, to accommodate the notations used by the various disciplines. With this common understanding, we will justify translations between languages, compositions of artefacts (engineering models, tests, simulations, and so on) defined in different languages, and system-level inferences from verifications of components. With such a rich foundation and wealth of results, we will transform the state of practice. Currently, developers build systems from scratch, or reuse components without any evidence of their operational conditions. The resulting systems are deployed under constrained conditions (reduced speed or a contained environment, for example) or offered for deployment at the user's own risk. Instead, we envisage the future availability of a store of verified autonomous systems and components.
In such a future, users will find in the store not just system implementations, but also evidence of their operational conditions and expected behaviour (engineering models, mathematical results, tests, and so on). When a developer checks in a product, the store will require all these artefacts, described in well-understood languages, and will automatically verify the evidence of trustworthiness. Developers will also be able to check in components for use by other developers; equally, these will be accompanied by the evidence required to permit confidence in their use. In this changed world, users will buy applications with clear guarantees of their operational requirements and profile. Users will also be able to ask for verification of adequacy for customised platforms and environments, for example. Verification will no longer be an issue. Working with the EPSRC TAS Hub, the other nodes, and our extensive range of academic and industrial partners, we will collaborate to ensure that the notations, verification techniques, and properties that we consider contribute to our common agenda of bringing autonomy into our everyday lives.
Project (2020 - 2021)
Partners: Shadow Robot Company Ltd, Shadow Robot (United Kingdom), University of Leicester, British Telecommunications plc, BT Group (United Kingdom), ClearSy, D-RisQ Ltd, D-RisQ (United Kingdom), Consequential Robotics Ltd, Consequential Robots, Sheffield Children's NHS Foundation Trust, Guidance Automation Ltd, Cyberselves Universal Limited, BAE Systems (United Kingdom), BAE Systems (UK), BAE Systems (Sweden), Connected Places Catapult, Amazon Web Services, Inc., Amazon (United States), Scoutek Ltd, Bloc Digital
Funder: UK Research and Innovation
Project Code: EP/V026801/1
Funder Contribution: 2,923,650 GBP

The project description is identical to that of EP/V026801/2 above.
For further information contact us at helpdesk@openaire.eu