AI applications have become pervasive: from mobile phones and home appliances to stock markets, autonomous cars, robots and drones. As AI takes over a wider range of tasks, we are approaching a time when security laws, or policies, akin to Isaac Asimov's "Three Laws of Robotics", will need to be established for all deployed AI systems. A near-homophone of Asimov's first name, the project AISEC ("Artificial Intelligence Secure and Explainable by Construction") aims to build a sustainable, general-purpose, multi-domain methodology and development environment for policy-to-property, secure-and-explainable-by-construction development of complex AI systems. We will create and deploy a novel framework for documenting, implementing and developing policies for complex deep learning systems, using types as a unifying language to embed security and safety contracts directly into the programs that implement AI. The project will produce a development tool, AISEC, with infrastructure (user interface, verifier, compiler) catering for different domain experts: from lawyers working with security experts to verification experts and system engineers designing complex AI systems. AISEC will be built, tested and used in collaboration with industrial partners in two key AI application areas: autonomous vehicles and natural language interfaces. AISEC will catalyse a step change from the pervasive use of deep learning in AI to the pervasive use of methods for deep understanding of the intended policies and latent properties of complex AI systems, and for deep verification of such systems.
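As a flavour of what "types as a unifying language to embed security and safety contracts" can mean in practice, here is a minimal, hypothetical sketch (not AISEC's actual framework or API): a branded type whose values can only be produced by a smart constructor, so a safety bound on a steering command becomes a contract the type checker tracks. The name `SteeringAngle` and the ±30° limit are illustrative assumptions, not part of the project.

```typescript
// Branded type: plain numbers are NOT assignable to SteeringAngle,
// so any function taking SteeringAngle knows the contract was checked.
type SteeringAngle = number & { readonly __brand: "SteeringAngle" };

// Smart constructor: the safety contract (|angle| <= 30 degrees,
// an assumed illustrative bound) is enforced at the only entry point.
function mkSteeringAngle(degrees: number): SteeringAngle | null {
  return Math.abs(degrees) <= 30 ? (degrees as SteeringAngle) : null;
}

// Downstream code can rely on the contract without re-checking it.
function actuate(angle: SteeringAngle): string {
  return `steering to ${angle} degrees`;
}

const ok = mkSteeringAngle(12.5);
if (ok !== null) {
  console.log(actuate(ok)); // steering to 12.5 degrees
}
console.log(mkSteeringAngle(95)); // null: contract violation rejected
```

The point of the pattern is that the unsafe value never enters the system: once a `SteeringAngle` exists, every consumer may assume the bound holds, which is the "by construction" discipline the abstract describes, here in miniature.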