
DNEG (United Kingdom)

Country: United Kingdom

9 Projects, page 1 of 2
  • Funder: UK Research and Innovation
    Project Code: EP/M021793/1
    Funder Contribution: 99,139 GBP

    Scene modelling is central to many applications in our society, including quality control in manufacturing, robotics, medical imaging, visual effects production, cultural heritage and computer games. It requires accurate estimation of the scene's shape (its 3D surface geometry) and reflectance (how its surface reflects light). However, there is currently no method capable of capturing the shape and reflectance of dynamic scenes with complex surface reflectance (e.g. glossy surfaces). This lack of generic methods is problematic, as it limits the applicability of existing techniques to scene categories that are not representative of the complexity of natural scenes and materials. This project will introduce a general framework to enable the capture of shape and reflectance of complex dynamic scenes, thereby addressing an important gap in the field.

    Current image- or video-based shape estimation techniques rely on the assumption that the scene's surface reflectance is diffuse (it reflects light uniformly in all directions) or assume it is known a priori, limiting their applicability to simple scenes. Reflectance estimation requires estimating a 6-dimensional function (the BRDF) which describes how light is reflected at each surface point as a function of incident light direction and viewpoint direction. Due to this high dimensionality, reflectance estimation remains limited to static scenes or requires expensive specialist equipment. At present, there is no method capable of accurately capturing both shape and reflectance of general dynamic scenes, yet scenes with complex unknown reflectance properties are omnipresent in our daily lives.

    The proposed research will address this gap by introducing a novel framework which enables estimation of shape and reflectance for arbitrary dynamic scenes. The approach is based on two key scientific advances which tackle the high dimensionality of shape and reflectance estimation. First, a general methodology for decoupling shape estimation from reflectance estimation will be proposed; this will allow decomposition of the original high-dimensional problem, which is ill-posed, into smaller sub-problems that are tractable. Second, a space-time formulation of reflectance estimation will be introduced; this will use dense surface tracking to extend reflectance estimation to the temporal domain and thereby overcome the inherently low number of observations available at a single time instant. This will build on the PI's pioneering research in 3D reconstruction of scenes with arbitrary unknown reflectance properties and his expertise in dynamic scene reconstruction, surface tracking/animation and reflectance estimation. This research represents a radical shift in scene modelling which will result in several major technical contributions: 1) a reflectance-independent shape estimation methodology for dynamic scenes, 2) a non-rigid surface tracking method suitable for general scenes with complex and unknown reflectance, and 3) a general and scalable reflectance estimation method for dynamic scenes.

    This will benefit all areas requiring accurate acquisition of shape and reflectance for real-world scenes with complex dynamic shape and reflectance, without the need for complex and restrictive hardware setups. Such scenes are common in natural environments, manufacturing (metallic surfaces) and medical imaging (human tissue), but accurate capture of shape is not possible with existing approaches, which assume diffuse reflectance and fail dramatically in such cases. This will achieve, for the first time, accurate modelling of dynamic scenes with arbitrary surface reflectance properties, thus opening up novel avenues in scene modelling. The application of this technology will be demonstrated in digital cinema, in collaboration with industrial partners, to support the development of the next generation of visual effects.
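    For orientation, the dimensionality claim can be made concrete in standard graphics notation (a sketch using textbook formulas, not notation taken from the project itself). The spatially-varying BRDF is

        f_r(\mathbf{x}, \omega_i, \omega_o), \qquad \mathbf{x} \in \mathbb{R}^2 \text{ (surface point)},\ \omega_i, \omega_o \in S^2 \text{ (2D directions)} \;\Rightarrow\; 6 \text{ dimensions},

    and the radiance a camera observes mixes shape and reflectance through the reflection integral

        L_o(\mathbf{x}, \omega_o) = \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i,

    where the surface normal \mathbf{n} encodes the shape. Both unknowns sit inside the same observed quantity L_o, which is why joint estimation is ill-posed and why decoupling the two sub-problems, as proposed above, makes each tractable.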

  • Funder: European Commission
    Project Code: 316564
  • Funder: UK Research and Innovation
    Project Code: EP/R019606/1
    Funder Contribution: 100,963 GBP

    Estimating integrals of functions forms the cornerstone of many general classes of problems such as optimisation, sampling and normalisation; these problems, in turn, are central tools for a plethora of applications across fields such as computer graphics, computer vision and machine learning. The integrand, or function to be integrated, is complicated and rarely available in closed form, and its domain spans spaces of arbitrarily high dimensionality. Exact integration is hopeless and approximation is unavoidable in practice. An estimate of the integral is typically constructed using evaluations of the integrand at a number of sampled locations in the domain; the set of points where the function is sampled is often referred to collectively as a sampling pattern. For computer graphics applications, a modern animated feature film of length 1.5 h typically involves generating a few hundred trillion high-dimensional samples that are mapped into light paths.

    Although a number of strategies have been proposed for generating samples, measuring the quality of high-dimensional sampling patterns is an open problem. Sampling strategies are currently compared on a case-by-case basis by explicitly computing errors in the context of each application independently. The computation associated with measures such as discrepancy and Fourier analysis scales exponentially with dimensionality and is therefore not practicable for samples in high-dimensional domains.

    The proposed work seeks to quantify the equidistribution of high-dimensional point sets using an alternative measure to discrepancy that is tractable. This project will establish mathematical connections between computational topology, stochastic geometry and error analysis for Monte Carlo integration. The goal is to develop a measure for assessing the quality of sampling-based estimators purely from the samples used. The derived theory will be evaluated and applied to Monte Carlo rendering for computer graphics applications.
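    To make the baseline concrete, here is a minimal sketch in Python of plain Monte Carlo integration over a high-dimensional unit cube (the function names and the test integrand are ours, for illustration only; the project concerns far more sophisticated sampling patterns than the i.i.d. uniform samples used here):

        import numpy as np

        def mc_estimate(f, dim, n, seed=None):
            """Plain Monte Carlo estimate of the integral of f over [0,1]^dim.

            f   : callable mapping an (n, dim) array of sample points to n values
            dim : dimensionality of the integration domain
            n   : number of samples (here the "sampling pattern" is i.i.d. uniform)
            """
            rng = np.random.default_rng(seed)
            x = rng.random((n, dim))              # the sampling pattern
            y = f(x)                              # integrand evaluations
            estimate = y.mean()                   # unbiased estimator of the integral
            std_err = y.std(ddof=1) / np.sqrt(n)  # error decays as O(n^-1/2) in any dimension
            return estimate, std_err

        # Example: a smooth 6-D integrand
        est, err = mc_estimate(lambda x: np.exp(-(x ** 2).sum(axis=1)), dim=6, n=100_000)
        print(f"estimate = {est:.5f} +/- {err:.5f}")

    The O(n^-1/2) error rate, unlike discrepancy-based quality measures, does not depend on the dimension; the open problem described above is assessing how much better a given structured sampling pattern does, purely from the samples themselves.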

  • Funder: UK Research and Innovation
    Project Code: AH/W009323/1
    Funder Contribution: 80,237 GBP

    The UK is the fourth largest film market worldwide, and UK films (25% of global box office revenues) have become an essential driver of economic growth and a key cultural export. The Chinese box office (20bn CNY box office revenue) has now overtaken North America as the largest film market in the world. Although revenues dropped sharply due to Covid-19, forecasts indicate that the post-pandemic market will recover at a fast pace, driven by new creative technology such as cloud-based virtual film production, in which different production teams work simultaneously through cloud-based remote collaboration and practitioners work in, and interact directly with, a virtual set. Virtual Production (VP) reduces the need to move crews and equipment to location and enables remote working in VR, reducing Covid-19 risks, the environmental footprint and production costs. The global virtual production market is expected to reach £2.2b by 2026, rising at a rate of 14.3%.

    The industry in both countries has attracted enormous investment in this direction; successful examples include DNEG (UK) and Mirror Pictures (China). Since 2019, Mirror Pictures in Shanghai has developed a cloud platform for virtual production comprising state-of-the-art professional equipment and cloud computing resources. A similar film production platform was developed by our UK partner, DNEG, though with a stronger focus on virtual production pipeline development, integrating the UK's world-leading film production techniques into the new system framework. Interestingly, businesses in this sector in the UK and China have developed in two different directions: UK companies focus more on virtual production, whereas their Chinese counterparts have put more effort into cloud system development, pointing to an important basis for complementarity and collaboration between the two. As this technology is only just emerging, there are undoubtedly many challenges and difficulties from various perspectives, e.g. technical issues, standardisation of the pipeline, intellectual property, data security, cultural impact and policy development.

    This project seeks to address these challenges by bringing together academic researchers, system developers, film production practitioners and market researchers from both countries to apply their complementary expertise to explore this next-generation film production technology, identify the challenges and develop strategic plans for solutions. The project matches one specific theme of this AHRC call, Sector Mapping, which is to offer market intelligence and "horizon scanning" for the sector in the UK and China in terms of RD&I. It aims to investigate the key market forces and industry dynamics shaping the evolution of the film production industry in the UK and China. The market research will analyse the differences and intersections between the two markets, identify policy trends and barriers to overcome, investigate how businesses from the two countries can collaborate to maximise market potential, and develop new guidance and strategy for potential collaboration on research and innovation. The project will produce a report for AHRC outlining recommendations for UK-China collaboration, informing UKRI's emerging strategy.

    The project team includes higher education institutions, independent research organisations and industry companies from both the UK and China. The UK team includes researchers from the National Centre for Computer Animation (Bournemouth University, project lead), the University of Durham (UK market research) and industry lead DNEG. The China side includes researchers from Shanghai Jiaotong University (China market research), Shanghai Film Academy (film production), Shanghai Institute for Advanced Study of Zhejiang University (artificial intelligence) and Mirror Pictures (China industry lead).

  • Funder: UK Research and Innovation
    Project Code: EP/S016260/1
    Funder Contribution: 409,880 GBP

    Consumers enjoy the immersive experience of 3D content in cinema, TV and virtual reality (VR), but it is expensive to produce. Filming a 3D movie requires two cameras to simulate the two eyes of the viewer. A common but expensive alternative is to film a single view, then use video artists to create the left and right eyes' views in post-production. What if a computer could automatically produce a 3D model (and binocular images) from 2D content, 'lifting images into 3D'? This is the overarching aim of this project. Lifting into 3D has multiple uses, such as route planning for robots and obstacle avoidance for autonomous vehicles, alongside applications in VR and cinema.

    Estimating 3D structure from a 2D image is difficult because, in principle, the image could have been created from an infinite number of 3D scenes. Identifying which of these possible worlds is correct is very hard, yet humans interpret 2D images as 3D scenes all the time. We do this every time we look at a photograph, watch TV or gaze into the distance, where binocular depth cues are weak. Although we make some errors in judging distances, our ability to quickly understand the layout of any scene enables us to navigate through and interact with any environment.

    Computer scientists have built machine vision systems for lifting to 3D by incorporating scene constraints. A popular technique is to train a deep neural network with a collection of 2D images and associated 3D range data. However, to be successful, this approach requires a very large dataset, which can be expensive to acquire. Furthermore, performance is only as good as the dataset is complete: if the system encounters a type of scene or geometry that does not conform to the training dataset, it will fail. Most methods have been trained for specific situations - e.g. indoor or street scenes - and these systems are typically less effective for rural scenes, and less flexible and robust than humans. Finally, such systems provide a single reconstructed output, without any measure of uncertainty; the user must assume that the 3D reconstruction is correct, which will be a costly assumption in many cases.

    Computer systems are designed and evaluated based upon their accuracy with respect to the real world. However, the ultimate goal of lifting into 3D is not perfect accuracy; rather, it is to deliver a 3D representation that provides a useful and compelling visual experience for a human observer, or to guide a robot whilst avoiding obstacles. Importantly, humans are expert at interacting with 3D environments, even though our perception can deviate substantially from true metric depth. This suggests that human-like representations are both achievable and sufficient, in any and all environments.

    ROSSINI will develop a new machine vision system for 3D reconstruction that is more flexible and robust than previous methods. Focussing on static images, we will identify key structural features that are important to humans. We will combine neural networks with computer vision methods to form human-like descriptions of scenes and 3D scene models. Our aims are to (i) produce 3D representations that look correct to humans even if they are not strictly geometrically correct, (ii) do so for all types of scene, and (iii) express the uncertainty inherent in each reconstruction. To this end we will collect data on human interpretation of images and incorporate this information into our network. Our novel training method will learn from humans and from existing ground-truth datasets, with the training algorithm selecting the most useful human tasks (e.g. judging depth within a particular image) to maximise learning. Importantly, the inclusion of human perceptual data should reduce the overall quantity of training data required, while mitigating the risk of over-reliance on a specific dataset. Moreover, when fully trained, our system will produce 3D reconstructions alongside information about the reliability of the depth estimates.
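    As one concrete illustration of how human depth judgements can enter network training, here is a minimal sketch in Python (PyTorch) of a pairwise ordinal ranking loss, loosely following the 'Depth in the Wild' formulation of Chen et al. (2016); the function and tensor names are ours, and this is an assumption about the general approach, not the project's actual method:

        import torch
        import torch.nn.functional as F

        def ordinal_depth_loss(pred_depth, i_idx, j_idx, relation):
            """Ranking loss over human ordinal depth judgements.

            pred_depth   : (B, H*W) float tensor, predicted depth flattened per image
            i_idx, j_idx : (B, K) long tensors, the two queried pixels per judgement
            relation     : (B, K) tensor in {+1, -1, 0}: +1 if annotators judged
                           pixel i farther than pixel j, -1 if nearer, 0 if equal
            """
            d_i = pred_depth.gather(1, i_idx)
            d_j = pred_depth.gather(1, j_idx)
            diff = d_i - d_j
            ordered = relation != 0
            # log-hinge term pushes ordered pairs apart in the judged direction...
            loss_order = F.softplus(-relation[ordered] * diff[ordered])
            # ...while a squared term pulls "equal depth" pairs together
            loss_equal = diff[~ordered] ** 2
            return torch.cat([loss_order, loss_equal]).mean()

    A loss of this form constrains the prediction only up to monotonic transformations of depth, which matches the point above: a reconstruction that orders the scene as humans do can be useful even where it deviates from true metric depth.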

