
Intelligent Ultrasound
2 Projects
Project 2020 - 2026
Partners: Nielson, Continental Teves AG & Co. oHG, Continental (Germany), Plexalis Ltd, University of Oxford, British Broadcasting Corporation (BBC), Samsung (South Korea), Intelligent Ultrasound, Toshiba (Japan)
Funder: UK Research and Innovation
Project Code: EP/T028572/1
Funder Contribution: 5,912,100 GBP

With the advent of deep learning and the availability of big data, it is now possible to train machine learning algorithms for a multitude of visual tasks, such as tagging personal image collections in the cloud, recognizing faces, and 3D shape scanning with phones. However, each of these tasks currently requires training a neural network on a very large image dataset specifically collected and labelled for that task. The resulting networks are good experts for the target task, but they only understand the 'closed world' experienced during training and can 'say' nothing useful about other content, nor can they be applied to other tasks without retraining, nor do they have an ability to explain their decisions or to recognise their limitations. Furthermore, current visual algorithms are usually 'single modal': they 'close their ears' to the other modalities (audio, text) that may be readily available. The core objective of the Programme is to develop the next generation of audio-visual algorithms that do not have these limitations. We will carry out fundamental research to develop a Visual Transformer capable of visual analysis with the flexibility and interpretability of a human visual system, and aided by the other 'senses' - audio and text.
It will be able to continually learn from raw data streams without requiring the traditional 'strong supervision' of a new dataset for each new task, and deliver and distill semantic and geometric information over a multitude of data types (for example, videos with audio, very large scale image and video datasets, and medical images with text records). The Visual Transformer will be a key component of next generation AI, able to address multiple downstream audio-visual tasks, significantly superseding the current limitations of computer vision systems, and enabling new and far reaching applications. A second objective addresses transfer and translation. We seek impact in a variety of other academic disciplines and industry which today greatly under-utilise the power of the latest computer vision ideas. We will target these disciplines to enable them to leapfrog the divide between what they use (or do not use) today, which is dominated by manual review and highly interactive frame-by-frame analysis, to a new era where automated visual analytics of very large datasets becomes the norm. In short, our goal is to ensure that the newly developed methods are used by industry and academic researchers in other areas, and turned into products for societal and economic benefit. To this end open source software, datasets, and demonstrators will be disseminated on the project website. The ubiquity of digital images and videos means that every UK citizen may potentially benefit from the Programme research in different ways. One example is smart audio-visual glasses that can pay attention to a person talking by using their lip movements to mask out other ambient sounds. A second is an app that can answer visual questions (or retrieve matches) for text queries over large scale audio-visual collections, such as a person's entire personal videos. A third is AI-guided medical screening, which can aid a minimally trained healthcare professional to perform medical scans.
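The text-query retrieval application described above is commonly built on a shared embedding space: text queries and video clips are mapped into the same vector space, and matches are ranked by cosine similarity. The following is a toy sketch of that ranking step only, not the Programme's actual system; the embeddings are invented 4-dimensional vectors standing in for what a learned audio-visual model would produce.

```python
# Toy sketch of text-to-video retrieval by cosine similarity in a shared
# embedding space. Real systems would use learned, high-dimensional
# embeddings; these hand-made 4-D vectors are purely illustrative.
import numpy as np

def cosine_rank(query_vec, clip_vecs):
    """Return clip indices ranked by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    c = clip_vecs / np.linalg.norm(clip_vecs, axis=1, keepdims=True)
    scores = c @ q               # one dot product per clip
    return np.argsort(-scores)   # best match first

# Hypothetical clip embeddings: clip 0 ~ "dog in park",
# clip 1 ~ "person talking", clip 2 ~ "ultrasound scan".
clips = np.array([[0.9, 0.1, 0.0, 0.2],
                  [0.1, 0.8, 0.3, 0.0],
                  [0.0, 0.2, 0.9, 0.1]])
query = np.array([0.1, 0.9, 0.2, 0.0])  # text query ~ "a person speaking"

print(cosine_rank(query, clips))  # clip 1 should rank first
```

Because scoring reduces to a single matrix-vector product, a query over a pre-embedded collection is cheap at search time; the expensive learning happens once, offline.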
For further information contact us at helpdesk@openaire.eu
Project 2015 - 2020
Partners: Microsoft Research (United Kingdom), Intelligent Ultrasound, Yotta Ltd, British Broadcasting Corporation (BBC), Oxford University Hospitals NHS Foundation Trust, Max-Planck-Gymnasium, Max Planck Institutes, MirriAd (United Kingdom), Mirada Medical (United Kingdom), University of Oxford, Wellcome Sanger Institute, Qualcomm (United States), Skolkovo Institute of Science and Technology (Skoltech), General Electric (Germany), GE Global Research, BP (United States)
Funder: UK Research and Innovation
Project Code: EP/M013774/1
Funder Contribution: 4,467,650 GBP

The Programme is organised into two themes. Research theme one will develop new computer vision algorithms to enable efficient search and description of vast image and video datasets - for example the entire video archive of the BBC. Our vision is that anything visual should be searchable, in the manner of a Google search of the web: by specifying a query, and having results returned immediately, irrespective of the size of the data. Such enabling capabilities will have widespread application both for general image/video search - consider how Google's web search has opened up new areas - and also for designing customized solutions for searching. A second aspect of theme one is to automatically extract detailed descriptions of the visual content.
The aim here is to achieve human-like performance and beyond, for example in recognizing configurations of parts and spatial layout, counting and delineating objects, or recognizing human actions and interactions in videos, significantly superseding the current limitations of computer vision systems, and enabling new and far reaching applications. The new algorithms will learn automatically, building on recent breakthroughs in large scale discriminative and deep machine learning. They will be capable of weakly-supervised learning, for example from images and videos downloaded from the internet, and require very little human supervision. The second theme addresses transfer and translation. This also has two aspects. The first is to apply the new computer vision methodologies to 'non-natural' sensors and devices, such as ultrasound imaging and X-ray, which have different characteristics (noise, dimension, invariances) to the standard RGB channels of data captured by 'natural' cameras (iPhones, TV cameras). The second aspect of this theme is to seek impact in a variety of other disciplines and industry which today greatly under-utilise the power of the latest computer vision ideas. We will target these disciplines to enable them to leapfrog the divide between what they use (or do not use) today, which is dominated by manual review and highly interactive frame-by-frame analysis, to a new era where automated efficient sorting, detection and mensuration of very large datasets becomes the norm. In short, our goal is to ensure that the newly developed methods are used by academic researchers in other areas, and turned into products for societal and economic benefit. To this end open source software, datasets, and demonstrators will be disseminated on the project website. The ubiquity of digital imaging means that every UK citizen may potentially benefit from the Programme research in different ways.
One example is an enhanced iPlayer that can search for where particular characters appear in a programme, or intelligently fast forward to the next 'hugging' sequence. A second is wider deployment of lower cost imaging solutions in healthcare delivery. A third, also motivated by healthcare, is the employment of new machine learning methods for validating targets for drug discovery based on microscopy images.
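The 'fast forward to the next sequence' example above presupposes that automated tagging has already run over the archive. One common way to make such queries instant, regardless of archive size, is an inverted index mapping each detected label to its timestamps. The sketch below is hypothetical (the programme names, labels, and timestamps are invented), not the Programme's implementation.

```python
# Toy inverted index for 'jump to the next detected action' queries:
# detected action labels map to sorted timestamps, so a lookup costs a
# dictionary access plus a binary search, independent of archive size.
from bisect import bisect_right
from collections import defaultdict

index = defaultdict(list)  # label -> sorted list of seconds, per programme

def tag(label, programme, second):
    """Record that `label` was detected in `programme` at `second`."""
    times = index[(label, programme)]
    times.insert(bisect_right(times, second), second)  # keep sorted

def next_occurrence(label, programme, after_second):
    """First time `label` is detected in `programme` after `after_second`."""
    times = index[(label, programme)]
    i = bisect_right(times, after_second)
    return times[i] if i < len(times) else None

# Hypothetical detections produced by an offline tagging pass.
tag("hugging", "ep1", 120)
tag("hugging", "ep1", 310)
tag("hugging", "ep2", 45)

print(next_occurrence("hugging", "ep1", 130))  # -> 310
```

The design mirrors the abstract's point: the heavy lifting (detection over the whole archive) happens once offline, so interactive queries stay fast.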