
Mirriad (United Kingdom)
2 Projects
Project (2015–2020)
Partners: Microsoft Research (United Kingdom), Intelligent Ultrasound, Yotta Ltd, British Broadcasting Corporation (BBC), Oxford University Hospitals NHS Foundation Trust, Max-Planck-Gymnasium, Mirriad (United Kingdom), Mirada Medical (United Kingdom), University of Oxford, Wellcome Sanger Institute, Max Planck Institutes, Qualcomm (United States), Skolkovo Institute of Science and Technology (Skoltech), General Electric (Germany), GE Global Research, BP (United States)
Funder: UK Research and Innovation
Project Code: EP/M013774/1
Funder Contribution: 4,467,650 GBP

The Programme is organised into two themes. Research theme one will develop new computer vision algorithms to enable efficient search and description of vast image and video datasets - for example, the entire video archive of the BBC. Our vision is that anything visual should be searchable, in the manner of a Google search of the web: by specifying a query and having results returned immediately, irrespective of the size of the data. Such enabling capabilities will have widespread application both for general image/video search - consider how Google's web search has opened up new areas - and for designing customised search solutions. A second aspect of theme one is to automatically extract detailed descriptions of the visual content.
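The query-driven search capability described above is typically built on feature embeddings and nearest-neighbour retrieval. The sketch below is a minimal, illustrative version only: it assumes images have already been mapped to fixed-length feature vectors (the random "archive", and the function names `build_index` and `search`, are hypothetical, not the project's actual method).

```python
import numpy as np

def build_index(features: np.ndarray) -> np.ndarray:
    """L2-normalise feature vectors so dot products give cosine similarity."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.maximum(norms, 1e-12)

def search(index: np.ndarray, query: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k items most similar to the query vector."""
    q = query / max(np.linalg.norm(query), 1e-12)
    scores = index @ q
    return np.argsort(-scores)[:k]

# Toy "archive" of 5 images, each described by a 4-dimensional feature vector.
rng = np.random.default_rng(0)
archive = rng.normal(size=(5, 4))
idx = build_index(archive)
hits = search(idx, archive[2], k=2)
print(hits)  # the query image itself ranks first
```

At archive scale, the exhaustive dot product would be replaced by an approximate nearest-neighbour index, so query time stays roughly constant as the collection grows.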
The aim here is to achieve human-like performance and beyond, for example in recognizing configurations of parts and spatial layout, counting and delineating objects, or recognizing human actions and interactions in videos, significantly surpassing the current limitations of computer vision systems and enabling new and far-reaching applications. The new algorithms will learn automatically, building on recent breakthroughs in large-scale discriminative and deep machine learning. They will be capable of weakly-supervised learning, for example from images and videos downloaded from the internet, and will require very little human supervision. The second theme addresses transfer and translation, which also has two aspects. The first is to apply the new computer vision methodologies to 'non-natural' sensors and devices, such as ultrasound imaging and X-ray, which have different characteristics (noise, dimension, invariances) from the standard RGB channels of data captured by 'natural' cameras (iPhones, TV cameras). The second aspect is to seek impact in a variety of other disciplines and industries that today greatly under-utilise the power of the latest computer vision ideas. We will target these disciplines to enable them to leapfrog from today's practice, dominated by manual review and highly interactive frame-by-frame analysis, to a new era where automated, efficient sorting, detection and mensuration of very large datasets becomes the norm. In short, our goal is to ensure that the newly developed methods are used by academic researchers in other areas and turned into products for societal and economic benefit. To this end, open-source software, datasets and demonstrators will be disseminated on the project website. The ubiquity of digital imaging means that every UK citizen may potentially benefit from the Programme's research in different ways.
One example is an enhanced iPlayer that can search for where particular characters appear in a programme, or intelligently fast-forward to the next 'hugging' sequence. A second is wider deployment of lower-cost imaging solutions in healthcare delivery. A third, also motivated by healthcare, is the use of new machine learning methods for validating targets for drug discovery based on microscopy images.
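The "counting and delineating objects" goal mentioned above reduces, in its simplest form, to finding connected regions in a segmentation mask. The toy sketch below (the `count_objects` helper and the hand-written mask are illustrative assumptions, not the Programme's algorithms) counts 4-connected components in a binary grid.

```python
from collections import deque

def count_objects(mask):
    """Count 4-connected components of 1s in a binary grid,
    a toy stand-in for delineating objects in a segmentation mask."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                     # new object found
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                       # flood-fill its extent
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

mask = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
]
print(count_objects(mask))  # → 3
```

The research challenge, of course, lies in producing such masks reliably from raw images; the counting step itself is the easy part.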
Project (2021–2026)
Partners: Intel (United States), Figment Productions, BT Group (United Kingdom), To Play For Ltd, Audioscenic, Sony (Europe), Foundry (United Kingdom), Mirriad (United Kingdom), Boris FX (United Kingdom), Dimension Studios, Telefonica I+D (Spain), Framestore, University of Surrey, Imagination Technologies (United Kingdom), Network Media Communications, Imagineer Systems Ltd, British Broadcasting Corporation (BBC), Synthesia, SalsaSound
Funder: UK Research and Innovation
Project Code: EP/V038087/1
Funder Contribution: 3,003,240 GBP

Personalisation of media experiences for the individual is vital for audience engagement of young and old, allowing more meaningful encounters tailored to their interests, making them part of the story and increasing accessibility. The goal of the BBC Prosperity Partnership is to realise a transformation to personalised content creation and delivery at scale, for the public at home or on the move. The evolution of mass-media audio-visual 'broadcast' content (news, sports, music, drama) towards Internet delivery creates exciting potential for hyper-personalised media experiences delivered at scale to mass audiences. This radical new user-centred approach to media creation and delivery has the potential to disrupt the media landscape by directly engaging individuals, placing them at the centre of their experience, rather than predefining the content as with existing media formats (radio, TV, film).
This will allow a new form of user-centred media experience which dynamically adapts to the individual, their location, the media content and the producer's storytelling intent, together with the platform/device and the network/compute resources available for rendering the content. The BBC Prosperity Partnership will position the BBC at the forefront of this 'Personalised Media' revolution, enabling the creation and delivery of new services and positioning the UK creative industry to lead future personalised media creation and intelligent network distribution, rendering personalised experiences for everyone, anywhere. Realisation of personalised experiences at scale presents three fundamental research challenges: capture of object-based representations of the content, to enable dynamic adaptation for personalisation at the point of rendering; production, to create personalised experiences which enhance the perceived quality of experience for each user; and delivery at scale, with intelligent utilisation of the available network, edge and device resources for mass audiences. The BBC Prosperity Partnership will address the major technical and creative challenges of delivering user-centred personalised audience experiences at scale. Advances in audio-visual AI for machine understanding of captured content will enable the automatic transformation of captured 2D video streams into an object-based media (OBM) representation. OBM will allow adaptation for efficient production, delivery and personalisation of the media experience whilst maintaining the perceived quality of the captured video content. Delivering personalised experiences to audiences of millions requires transforming media processing and distribution architectures into a hybrid, distributed, low-latency computation platform, allowing flexible deployment of compute-intensive tasks across the network.
This will achieve efficiency in terms of cost and energy use, while providing optimal quality of experience for the audience within the technical constraints of the system.
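The core idea of object-based media described above - a programme as a set of independently deliverable objects, assembled per user at render time - can be sketched as follows. This is a minimal illustration only: the `MediaObject` structure, the tag-matching rule in `personalise`, and the example objects are assumptions for the sketch, not the BBC's actual OBM format.

```python
from dataclasses import dataclass

@dataclass
class MediaObject:
    """One independently deliverable element of a programme."""
    name: str
    kind: str          # e.g. "video", "audio", "subtitle"
    tags: frozenset    # descriptive labels used for personalisation

def personalise(objects, preferences):
    """Select the objects whose tags match a user's preferences;
    untagged 'core' objects are always kept."""
    selected = []
    for obj in objects:
        if not obj.tags or obj.tags & preferences:
            selected.append(obj.name)
    return selected

programme = [
    MediaObject("main_video", "video", frozenset()),
    MediaObject("commentary_en", "audio", frozenset({"english"})),
    MediaObject("commentary_cy", "audio", frozenset({"welsh"})),
    MediaObject("audio_description", "audio", frozenset({"accessible"})),
]

print(personalise(programme, {"welsh", "accessible"}))
# → ['main_video', 'commentary_cy', 'audio_description']
```

In a real OBM pipeline this selection would run at the rendering point - device, edge or network - chosen to balance cost, energy use and quality of experience, as the abstract describes.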