Siemens Healthcare Ltd

6 Projects, page 1 of 2
  • Funder: UK Research and Innovation
    Project Code: EP/X026922/1
    Funder Contribution: 265,251 GBP

    Cement production is responsible for 8% of global CO2 emissions, which mainly come from the processing of limestone. CO2Valorize proposes a new approach to drastically reduce these emissions by replacing part of the limestone content with supplementary cementitious materials (SCMs). These materials are additionally carbonated using captured CO2, so the replacement process itself utilises captured CO2. Promising, calcium-silicate-rich SCMs can be obtained from waste materials such as mine tailings and recycled concrete, which are available in large quantities. The carbonation process of such materials is complex and barely understood to date. Our network aims to lay the scientific foundations by creating fundamental knowledge on the mechanisms, reaction kinetics, physico-chemical subprocesses, and the performance of the modified cement, in order to provide a proof of concept and show that a CO2 reduction of 50% per tonne of cement produced is feasible. The project is driven by leading companies that represent important parts of the value chain and ensure fast uptake of the results, with the potential to commercialise new equipment, processes and software during and after the project. The structured approach combines complementary research for each individual project across the academic and industrial sectors. This is accompanied by a balanced mix of high-level scientific courses and transferable-skills training delivered by each partner locally and in dedicated training schools and workshops at network level. In this way, each doctoral candidate builds up deep scientific expertise and interdisciplinary knowledge to deliver game-changing cleantech innovations during and after the project. CO2Valorize is impact-driven and strives for a portfolio of high-quality joint publications in leading journals and patents. The transfer of the results into first-of-their-kind engineering solutions will contribute to the next generation of cement processes that can mitigate climate change.

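    As a purely illustrative note (not taken from the project), the carbonation of a calcium-silicate-rich material can be pictured through the idealised reaction of a simple calcium silicate such as wollastonite, which binds CO2 as stable calcium carbonate; the SCMs, mineralogy and kinetics studied in CO2Valorize are considerably more complex:

        \mathrm{CaSiO_3} + \mathrm{CO_2} \longrightarrow \mathrm{CaCO_3} + \mathrm{SiO_2}
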
  • Funder: UK Research and Innovation
    Project Code: EP/T013133/1
    Funder Contribution: 352,920 GBP

    Since the early 1990s, we have been able to use imaging methods such as functional MRI (fMRI) to look into the brain and see how it works. This non-invasive technology has transformed the way that doctors and neuroscientists can answer questions about how the brain is organised and how it processes information, both in the healthy brain and in illness and disease. However, fMRI data are susceptible to corruption from motion and physiological fluctuations that reduce image quality, particularly as technological progress leads to imaging at higher spatial resolutions and higher magnetic field strengths, stretching the capabilities of our MRI systems. Nearly everyone has had the experience of trying to capture images of moving objects in poor lighting conditions (e.g. people in a dimly lit room), often resulting in blurry, terrible-looking photos. Now imagine trying to take pictures, using a camera that operates quite slowly and indirectly (i.e. an MRI scanner), of a living, breathing human brain that won't sit still. Even when the head is motionless, physiological factors like breathing and heartbeats cause the brain inside to pulse and move, producing unwanted image corruption. This is particularly problematic in lower parts of the brain, like the brain-stem, which is involved in important physiological functions such as processing pain and modulating blood pressure. Coupled with the fact that the brain activity signals we want to extract are quite subtle, these physiological corruptions can significantly degrade the quality of the imaging data we can acquire in these clinically important brain regions. There are two primary ways of dealing with this problem using existing methods. The first modifies the acquisition of data through a process referred to as "gating", which synchronises imaging with a certain part of the cardiac cycle. The second uses image post-processing to try to "correct" the corrupted images. However, gating is inefficient and image post-processing can be imperfect, leaving substantial room for improvement in the efficiency and quality of functional brain imaging data. This proposal brings new developments in multi-dimensional ("tensor") signal processing to bear on this problem. Tensor-based methods allow us to represent and manipulate signals of higher dimensionality, so that more features in the data can be resolved. For example, a black-and-white movie has dimensions corresponding to space and time, but a colour movie has dimensions of space, time and colour, where the extra dimension allows us to capture more information about the signals of interest. For our physiological corruption problem, we use these new tools to represent our 3D brain images not only over time, but also across different points in the breathing and heartbeat cycles, to effectively separate, rather than mix together, all of these signal contributions. To do this, we will combine sophisticated new methods for acquiring the raw MRI data with advances in image reconstruction to develop a technique for producing imaging data free of physiological corruption in a time-efficient way. This project brings together knowledge and resources across a broad spectrum of fields, ranging from hardware control of MRI systems to nonlinear signal processing and image analysis, to provide better tools for medical and neuroscientific study of the human brain-stem.

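    The following is a minimal, hypothetical sketch (in Python/NumPy) of the "extra dimensions" idea described above: synthetic data are binned by cardiac and respiratory phase so that physiological fluctuations occupy their own tensor dimensions, and a truncated SVD of the unfolded tensor models the dominant phase-locked structure. The array sizes, frequencies, bin counts and rank are illustrative assumptions, not the project's acquisition or reconstruction method.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic "fMRI-like" data: 500 voxels, 400 time points, TR = 0.7 s (assumed values)
        n_vox, n_t = 500, 400
        t = np.arange(n_t) * 0.7

        # Slow "neural" drift plus cardiac (~1.1 Hz) and respiratory (~0.23 Hz) components
        cardiac_phase = (2 * np.pi * 1.1 * t) % (2 * np.pi)
        resp_phase = (2 * np.pi * 0.23 * t) % (2 * np.pi)
        neural = np.outer(rng.standard_normal(n_vox), np.sin(2 * np.pi * 0.01 * t))
        cardiac = np.outer(rng.standard_normal(n_vox), np.cos(cardiac_phase))
        resp = np.outer(rng.standard_normal(n_vox), np.cos(resp_phase))
        data = neural + cardiac + resp + 0.1 * rng.standard_normal((n_vox, n_t))

        # Bin each time point by its cardiac and respiratory phase:
        # (voxels, time) -> tensor (voxels, cardiac bin, respiratory bin)
        n_cbin, n_rbin = 8, 6
        c_bin = np.floor(cardiac_phase / (2 * np.pi) * n_cbin).astype(int) % n_cbin
        r_bin = np.floor(resp_phase / (2 * np.pi) * n_rbin).astype(int) % n_rbin
        tensor = np.zeros((n_vox, n_cbin, n_rbin))
        counts = np.zeros((n_cbin, n_rbin))
        for k in range(n_t):
            tensor[:, c_bin[k], r_bin[k]] += data[:, k]
            counts[c_bin[k], r_bin[k]] += 1
        tensor /= np.maximum(counts, 1)  # average within each phase bin

        # A low-rank (truncated SVD) model of the unfolded tensor captures the dominant
        # phase-locked structure, which a correction scheme could then account for.
        unfolded = tensor.reshape(n_vox, -1)
        U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
        rank = 2
        phase_locked_model = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        print("energy captured by rank-2 model:", np.sum(s[:rank] ** 2) / np.sum(s ** 2))
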
  • Funder: UK Research and Innovation
    Project Code: EP/W03722X/1
    Funder Contribution: 2,177,760 GBP

    The IDEA Fellowship is a 5-year programme to pave the way for the UK's industrial decarbonisation and digitalisation by applying emerging AI and digital transformations to fundamental electrochemical engineering research. Electrochemical engineering is at the heart of many key energy technologies for the 21st century, such as H2 production, CO2 reduction and energy storage. Further developments in all these areas require a better understanding of the electrode-electrolyte interfaces in electrochemical systems, because almost all critical phenomena occur at such interfaces, which ultimately determine the kinetics, thermodynamics and long-term performance of the systems. Designing the next generation of electrochemical interfaces to fulfil future requirements is a common challenge across all types of electrochemical applications. Interface design has traditionally relied on high-throughput screening experiments or simulations. Given the complexity of the design space, it comes as no surprise that this brute-force approach is highly iterative with low success rates, which has become a common challenge faced by the electrochemical research community. The vision of the fellowship is to make a paradigm shift in how future electrochemical interfaces can be designed, optimised and self-evolved throughout their entire life cycle via novel Explainable AI (XAI) and digital solutions. It will create an inverse design framework in which a set of desired performance indicators is used as input to XAI models that generate electrochemical interface designs satisfying the requirements, in a physically meaningful way that is interpretable by researchers. The methodology, once developed, will tackle exemplar challenges of central importance to the net zero roadmap, including improving current systems such as H2 production/fuel cells and CO2 reduction, but also developing new electrochemical systems that do not yet exist at industrial scale, such as N2 reduction and multi-ion energy storage.

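    A minimal, hypothetical sketch (in Python/NumPy) of the inverse-design idea described above, simplified to a forward surrogate inverted by grid search rather than a generative XAI model: a cheap surrogate is fitted on screening data mapping two design parameters to a performance indicator, and candidate designs are then sought whose predicted performance matches a target. The toy performance function, parameter names and search strategy are illustrative assumptions, not the fellowship's models.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy forward model: a performance indicator (e.g. current density) as a function of
        # two hypothetical interface design parameters (e.g. catalyst loading, electrolyte
        # concentration), standing in for expensive experiments or simulations.
        def forward(x):
            loading, conc = x[..., 0], x[..., 1]
            return np.sin(3 * loading) * np.exp(-((conc - 0.5) ** 2) / 0.1)

        # "High-throughput screening" data
        X = rng.uniform(0, 1, size=(200, 2))
        y = forward(X) + 0.02 * rng.standard_normal(200)

        # Simple surrogate: cubic polynomial features fitted by least squares
        def features(X):
            load, conc = X[:, 0], X[:, 1]
            return np.column_stack([np.ones_like(load), load, conc, load * conc,
                                    load**2, conc**2, load**2 * conc, load * conc**2,
                                    load**3, conc**3])

        w, *_ = np.linalg.lstsq(features(X), y, rcond=None)

        def predict(X):
            return features(X) @ w

        # Inverse design: search the design space for candidates whose predicted
        # performance is closest to a desired target value.
        target = 0.8
        grid = np.stack(np.meshgrid(np.linspace(0, 1, 200),
                                    np.linspace(0, 1, 200)), axis=-1).reshape(-1, 2)
        err = np.abs(predict(grid) - target)
        best = grid[np.argsort(err)[:5]]
        print("candidate designs (loading, concentration) closest to the target performance:")
        print(np.round(best, 3))
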
  • Funder: UK Research and Innovation
    Project Code: EP/T028270/1
    Funder Contribution: 740,115 GBP

    Aperture synthesis by interferometry in radio astronomy is a powerful technique allowing observation of the sky with antenna arrays at otherwise inaccessible angular resolutions and sensitivities. Image formation is, however, a complicated problem. Radio-interferometric measurements provide incomplete linear information about the sky, defining an ill-posed inverse imaging problem. Powerful computational imaging algorithms are needed to inject prior information into the reconstruction and recover the underlying image. The transformational science envisaged from radio astronomical observations over the next decades has triggered the development of new gigantic radio telescopes, such as the Square Kilometre Array (SKA), capable of imaging the sky at much higher resolution and with much higher sensitivity than current instruments, over wide fields of view. In this context, wide-band image cubes will exhibit rich structure and reach sizes between 1 Terabyte (TB) and 1 Petabyte (PB), while associated data volumes will reach the Exabyte (EB) scale. Endowing the SKA and its pathfinders with their expected acute vision requires image formation algorithms capable of transforming the data and providing the target imaging precision (i.e. resolution and dynamic range), while simultaneously being robust (i.e. addressing calibration and uncertainty quantification challenges) and scalable to the extreme image sizes and data volumes at stake. The imaging algorithm commonly used in the field, dubbed CLEAN, owes its success to its simplicity and computational speed. CLEAN, however, crucially lacks the versatility to handle complex signal models, thereby limiting the achievable resolution and dynamic range of the formed images. The same holds for the existing associated calibration methods that need to correct for instrumental and ionospheric effects affecting the data. Another major limitation in radio-interferometric imaging is the absence of a proper methodology to quantify the uncertainty around the image estimate. A decade of research pioneered by Wiaux and his collaborators suggests that the theory of optimisation is a powerful and versatile framework for designing new radio-interferometric imaging algorithms. In the optimisation framework, an objective function is defined as the sum of a data-fidelity term and a regularisation term promoting a given prior signal model. Our research hypothesis is that algorithmic structures currently emerging at the interface of optimisation and deep learning can take up the challenge of delivering the expected generation of algorithms for precise, robust and scalable radio-interferometric imaging in a wide-band, wide-field, polarisation context. A novel approach will be developed in this context, based on the decomposition of the data into blocks and of the image cube into small, regular, overlapping 3D facets. Facet-specific regularisation terms and block-specific data-fidelity terms will all be handled in parallel through so-called proximal splitting optimisation methods, thereby unlocking simultaneously the image- and data-size bottlenecks. Injecting prior information into the inverse imaging problem at facet level also offers the potential to better promote local spatio-spectral correlation and eventually deliver the target imaging precision.
    Sophisticated prior models based on advanced regularisation, simultaneously promoting sparsity, correlation, positivity, etc., will be considered first, to be substituted by learned priors using deep neural networks in a second stage, with the aim of further improving precision and scalability. Facets and neural networks will percolate from the imaging module to calibration and uncertainty quantification for robustness. Our algorithms will be validated up to 10 TB image size on High Performance Computing (HPC) machines. A technology transfer at 1 GB image size will be performed in medical imaging, specifically 3D magnetic resonance and ultrasound imaging, as proof of their wider applicability.

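    Schematically, and in our own notation rather than the project's, the faceted objective described above can be written as a sum of block-specific data-fidelity terms and facet-specific regularisation terms, all of which proximal splitting methods can update in parallel through their proximity operators:

        \min_{x} \;\; \sum_{b=1}^{B} d_b\!\left( \Phi_b x,\, y_b \right) \;+\; \sum_{f=1}^{F} r_f\!\left( S_f x \right),

    where x denotes the wide-band image cube, y_b the b-th data block with measurement operator \Phi_b and data-fidelity term d_b, S_f the operator extracting the f-th small, regular, overlapping 3D facet, and r_f the facet-specific regulariser (promoting, for instance, sparsity, local spatio-spectral correlation or positivity, or learned by a deep neural network in the second stage).
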
  • Funder: UK Research and Innovation
    Project Code: EP/T017961/1
    Funder Contribution: 1,295,780 GBP

    In our work in the current edition of the CMIH we have built up a strong pool of researchers and collaborations spanning mathematics, statistics, engineering, medical physics and clinical practice. Our work has also confirmed that imaging data are an important diagnostic biomarker, that non-imaging data in the form of health records, memory tests and genomics are valuable predictive resources, and that, combined in appropriate ways, these should form the basis of AI-based healthcare of the future. Following this philosophy, the new CMIH brings together researchers from mathematics, statistics, computer science and medicine, with clinicians and relevant industrial stakeholders, to develop rigorous and clinically practical algorithms for analysing healthcare data in an integrated fashion, for personalised diagnosis and treatment as well as target identification and validation at a population level. We will focus on three medical streams: Cancer, Cardiovascular disease and Dementia, which remain the top three causes of death and disability in the UK. Whilst applied mathematics and mathematical statistics are still commonly regarded as separate disciplines, there is an increasing understanding that a combined approach, removing historic disciplinary boundaries, is the only way forward. This is especially the case when addressing methodological challenges in data science using multi-modal data streams, such as the research we will undertake at the Hub. This holistic approach will support the Hub's aim of bringing AI for healthcare decision-making to clinical end users.
