
Science and Technology Facilities Council

357 Projects, page 1 of 72
  • Funder: UK Research and Innovation Project Code: ST/Y003152/1
    Funder Contribution: 10,277 GBP

    Discovering the nature of particle dark matter (DM) is a key priority in Physics - and for STFC. Direct detection of electroweak-scale DM in our galaxy is the primary goal of the XLZD consortium, formed by the coming together of the foremost collaborations in the field: XENONnT, LUX-ZEPLIN (LZ) and DARWIN. XLZD is proposing a large underground experiment based on the leading liquid xenon (LXe) technology: the definitive search for WIMP DM, able to discover WIMPs, or rule them out, across the accessible parameter space remaining above the irreducible neutrino background. The scientific potential of such a "rare event observatory" is detailed in a comprehensive white paper signed by 600 authors worldwide. LZ and XENONnT are the leading experiments in the field at present. Discovery of DM at XLZD would have profound implications for our understanding of the universe, its birth and its structure. STFC is currently considering a proposal to host the XLZD experiment in a state-of-the-art new facility at the Boulby Underground Laboratory in North Yorkshire (XLZD@Boulby). A liquid xenon rare event observatory might well be the largest project hosted at Boulby, supported by the largest international collaboration visiting the facility, and hence it would drive the facility design to a significant degree. This project will engage UK industry in planning for industrialising the construction of XLZD underground at Boulby, identify cost-effective routes to the supply of the necessary xenon stock, and plan for training of the skilled workforce in the local area that will be required to construct, install and operate the experiment. This project will prepare the way for a major investment in goods, services and people in the local area that will contribute substantially to the levelling-up of the North-East.

  • Funder: UK Research and Innovation Project Code: ST/Y003675/1
    Funder Contribution: 308,693 GBP

    Abstracts are not currently available in GtR for all funded research. This is normally because the abstract was not required at the time of proposal submission, but may be because it included sensitive information such as personal details.

  • Funder: UK Research and Innovation Project Code: ST/T006358/1
    Funder Contribution: 378,144 GBP

    See main case

  • Funder: UK Research and Innovation Project Code: EP/Y008200/1
    Funder Contribution: 33,852 GBP

    Randomised linear algebra is an exciting branch of computational mathematics that has had a profound impact in a number of applications where large-scale matrix computation is required; randomised low-rank approximation of matrices is a primary example. In most algorithms in randomised linear algebra, a key idea is to perform a randomised sketch of the matrix. This project aims to develop efficient techniques for sketching a large-scale matrix or (high-order) tensor. The sketches have a tensor structure that allows them to be applied faster than unstructured (e.g. Gaussian) sketches, while maintaining sufficient randomness to allow algorithms to succeed with high probability. We will establish theoretical justifications for such sketches and identify limitations (if any), so that we can make theoretically justified recommendations on when such sketches should be employed. We expect the new sketch to be competitive in many settings. We will keep a close eye on applications, in particular problems involving tensors. A specific application we will investigate is rounding tensors in the tensor-train (TT) format, a key computation required when manipulating TT tensors, a popular decomposition for compressing tensors that are large scale and high order. In addition to employing the new sketch, we aim to devise efficient and stable algorithms for rounding. We will implement the new sketches and algorithms in MATLAB and Fortran, and make the codes publicly available. The Fortran code will be part of RAL's HSL Mathematical Software Library.
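    The project's structured tensor sketches are not specified in the abstract. As a point of comparison, here is a minimal NumPy sketch of the standard unstructured-Gaussian-sketch randomised low-rank approximation (the "primary example" the abstract mentions) that such structured sketches aim to accelerate; the function name `randomized_low_rank` and the oversampling parameter `p` are chosen here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_low_rank(A, k, p=5):
    """Rank-k approximation of A via a Gaussian sketch.

    A @ Omega captures the dominant range of A with high probability;
    p is a small oversampling parameter that boosts the success probability.
    """
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))  # unstructured Gaussian sketch
    Y = A @ Omega                            # sample the range of A
    Q, _ = np.linalg.qr(Y)                   # orthonormal basis for the sample
    B = Q.T @ A                              # project A onto that basis
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_hat
    return U[:, :k], s[:k], Vt[:k, :]

# Illustration on a matrix that is exactly rank k, so the sketch
# recovers it to machine precision with high probability.
m, n, k = 200, 150, 10
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
U, s, Vt = randomized_low_rank(A, k)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
```

    A structured (e.g. tensor-product) choice of `Omega`, as proposed in the project, would replace the dense Gaussian draw so the product `A @ Omega` can be formed faster, while the rest of the algorithm is unchanged.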

  • Funder: UK Research and Innovation Project Code: ST/Y004183/1
    Funder Contribution: 1,314,550 GBP

    Cancer is one of the main causes of mortality, disease, and disability worldwide. While targeted therapy with monoclonal antibodies and small-molecule drugs has changed the natural history of some types of cancer, there are still vast areas of unmet need, where the mechanisms that allow cancer to start, progress and resist therapy are still unknown or are only beginning to be discovered. Hospitals worldwide hold in their archives and biobanks a treasure trove of data about cancer patients. These "cancer motherlodes" have only been mined very superficially so far, as the data that makes them up is very complex, and many factors need to be examined at the same time. The same challenge applies to the biobanks: generally only a very small part of the sample taken from each patient is examined, because the process is very labour-intensive and requires the input of experts. Machine Learning, the process by which computers can be made to classify large amounts of data very quickly using training sets of pre-analysed data as templates, provides a unique opportunity to finally mine cancer databases to the fullest extent. However, by itself this is not enough: the data added to the databases needs to be of the highest quality and has to be produced fast from enormous amounts of samples. To do so, we will develop new imaging techniques which will allow us to image biobank samples quickly and deeply, using innovative methods to look at fluorescent molecules which can tell doctors about the growth rate of cancer, or the penetration of drugs inside a tumour. We will also develop software that will allow us to "do more with less" in radiology, producing better quality images from lower doses of X-rays. The mining of these datasets will tell us which proteins and chains of reactions are responsible for making cancer patients sick and preventing them from getting better with therapy. However, knowing which proteins are responsible is not enough. We need to know how these proteins work at different scales to make cells go rogue and become tumours. We will do this by developing a set of fluorescence and electron microscopy techniques which will work together to allow us to look at tumour proteins at increasingly larger magnification and from different perspectives, so we can look at their structure and their function together, understand how they work and figure out how to make them stop working. To do this, we will need to develop new instruments that work at very cold temperatures and in vacuum, the software needed to run them, and mechanisms to handle very cold samples without damage to them - or to us! All of this is very delicate work, and at the moment it is very slow and tricky, but cancer patients cannot wait, so we will work together with Machine Learning experts to make taking the data and moving the samples and data between instruments quick, reliable and automatic. This way we will be able to go through more samples in a shorter amount of time, and we will be able to tell the doctors in our team how the proteins they have found work and how to put a spanner in their works. By working together, we will be able to attack the problem from multiple fronts and make headway faster.
