
DiRAC (Distributed Research utilising Advanced Computing)
8 Projects, page 1 of 2
Project 2024 - 2026
Partners: eQUANS, DiRAC (Distributed Research utilising Advanced Computing), Durham University, University of Edinburgh
Funder: UK Research and Innovation
Project Code: EP/Z530682/1
Funder Contribution: 1,276,420 GBP

High Performance Computers use substantial quantities of energy to keep them cool and operating efficiently. Buildings in the UK require substantial quantities of energy to keep their occupants warm. Today, energy for heating and cooling is carbon intensive, and nationally the supply of heat and cooling is responsible for a third of the UK's greenhouse gases. Heat from cooling an HPC system can be used for space heating elsewhere, and heat storage can hold excess heat, retrieving it when required. This proposal seeks to do just that: efficient cooling of HPC systems, and investigating the storage of the resulting excess heat in flooded coal mines - legacy assets from the UK's mining past acting as carbon-zero heat stores for a zero-carbon future. Here we examine the sustainable energy potential of combining complementary energy demands - meeting societal energy needs without costing the Earth.

Current UK large HPC systems all use direct liquid cooling (DLC), where a cooling fluid is piped onto heat sinks within compute nodes; heat is extracted and passed through a heat exchanger before being released into the atmosphere. These systems can consume multiple megawatts of power. Previous air-cooled systems were even less efficient. Immersion cooling is the natural progression in this technology: specially adapted servers are fully immersed in a heat transfer liquid, typically a mineral oil, which removes heat from all components (not just CPUs), simplifying server design (no fans or heat pipes) and reducing embodied CO2. Heat is then extracted from the system at a higher temperature. This technology uses as little as 5% of system power for dealing with waste heat, much better than the ~20% for DLC systems and 40-100% for air-cooled systems, yielding a significant power saving.

Current UK HPC data centres do not have any direct experience with this relatively new technology, and it is therefore seen as a significant risk for adoption in new HPC systems. We therefore propose installing a commercial immersion cooling tank in Durham University's data centre (which also hosts the EPSRC BEDE N8 Tier-2 system). This immersion test bed will become a national testing facility for other data centre staff to visit to gain experience with the technology, and for vendors to demonstrate their server technologies. We will analyse the performance of kit hosted within the tank, and investigate properties such as fluid temperature and server energy consumption. A further benefit of immersion cooling is that the waste heat is at a higher (more usable) temperature than from conventional systems. Durham is the ideal location for such a facility, currently hosting national ExCALIBUR hardware and enabling software systems, and both Tier-1 and Tier-2 facilities.

We furthermore propose to study data centre waste heat reuse by investigating the storage of HPC waste heat in the abandoned and flooded coal mine workings beneath the data centre site. By storing the waste heat, it can be reused when required. This requires drilling several boreholes into the mine workings to abstract the water, add or extract heat, and re-inject the water back into the mine. In short, we would be investigating using the mine workings as a heat battery. This proposal is timely, since it fits well with the university's current plan to install a heat network across its campus. The site would be used as a living lab and exemplar for other potential systems.
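To put the quoted cooling overheads in perspective, the short sketch below compares the annual energy and cost spent on waste-heat handling for the three technologies mentioned in this record (roughly 5% of system power for immersion cooling, ~20% for DLC, 40-100% for air cooling). The 2 MW IT load, round-the-clock operation, 70% midpoint for air cooling and electricity price are illustrative assumptions, not figures from the project.

```python
# Illustrative comparison of the cooling overheads quoted in the abstract.
# Assumptions (not from the project): 2 MW IT load, 24/7 operation,
# 0.25 GBP/kWh electricity price, 70% taken as the air-cooling midpoint.

IT_LOAD_MW = 2.0          # assumed IT power draw
HOURS_PER_YEAR = 8760
PRICE_GBP_PER_KWH = 0.25  # assumed electricity price

# Cooling power as a fraction of IT power, from the figures in the text.
overheads = {"immersion": 0.05, "direct liquid cooling": 0.20, "air cooling": 0.70}

for tech, frac in overheads.items():
    cooling_mwh = IT_LOAD_MW * frac * HOURS_PER_YEAR      # MWh per year spent on cooling
    cost_gbp = cooling_mwh * 1000 * PRICE_GBP_PER_KWH     # GBP per year
    print(f"{tech:>22}: {cooling_mwh:8.0f} MWh/yr on cooling, ~{cost_gbp / 1e6:.2f} M GBP/yr")
```

Under these assumptions the gap between air cooling and immersion cooling is of order millions of pounds per year for a single facility, before counting any value recovered by reusing the waste heat.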
Project 2023 - 2025
Partners: Epistemic AI (EAI), UCL, KUANO LTD, Nvidia (United States), DiRAC (Distributed Research utilising Advanced Computing), CECAM (Euro Ctr Atomic & Molecular Comp), Google (United States), Evotec (UK) Ltd, Frederick National Laboratory for Cancer Research, Alces Flight
Funder: UK Research and Innovation
Project Code: EP/Y008731/1
Funder Contribution: 210,360 GBP

Computational biomedicine offers many avenues for taking full advantage of emerging exascale computing resources and provides a wealth of benefits as one of the use cases within the wider ExCALIBUR initiative. The CompBioMedEE project aims to promote and support the use of computational biomedical modelling and simulation at the exascale within the biomedical research community. We shall demonstrate to our community how to develop and deploy applications on emerging exascale machines to achieve increasingly high-fidelity descriptions of the proteins and small molecules of the human body in health and disease.

Within the biomedical research domain, we will focus on the discipline of structural and molecular biology. This will enable us to provide the support needed to achieve a wide range of outcomes, from developing a functional and mechanistic understanding of how molecular components in a biological system interact, to applying drug discovery methods to the design of novel therapeutics for a diversity of inherited and acquired diseases.

CompBioMedEE will use the IMPECCABLE drug discovery workflow from the UKRI-funded CompBioMedX project. The IMPECCABLE software has been taken through extreme scaling and is eminently suited to bringing computational biomedicine researchers, particularly those from experimental backgrounds who do molecular modelling, to the exascale. The molecular dynamics engine that is part of the IMPECCABLE code is suited to standalone use, enabling biomedical researchers new to HPC to perform molecular dynamics simulations and, through this, to develop the computational expertise required for peta- and exascale use of the IMPECCABLE code.

The CompBioMedEE project will engage with biomedical researchers at all career stages, providing them with the compute resource needed to support computational research projects. Through proactive engagement with medical and undergraduate biosciences students, we will illustrate the benefits of using modelling and supercomputers, and establish a culture and practice of using computational methods to inform experimental and clinical work from bench to bedside.
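The record above mentions standalone molecular dynamics as an on-ramp to HPC. As a purely generic illustration of what a molecular dynamics step involves (emphatically not the IMPECCABLE engine or its API), the sketch below integrates two Lennard-Jones particles with the velocity-Verlet scheme; the reduced units, masses and starting positions are assumptions.

```python
import numpy as np

# Generic velocity-Verlet step for two particles with a Lennard-Jones
# interaction; illustrative only, not the IMPECCABLE molecular dynamics engine.
EPS, SIGMA, MASS, DT = 1.0, 1.0, 1.0, 1e-3  # reduced units (assumed)

def lj_force(r_vec):
    """Force on particle 0 due to particle 1 under a Lennard-Jones potential."""
    r = np.linalg.norm(r_vec)
    mag = 24 * EPS * (2 * (SIGMA / r) ** 12 - (SIGMA / r) ** 6) / r
    return mag * r_vec / r

pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])  # assumed initial positions
vel = np.zeros_like(pos)

for _ in range(1000):
    f = lj_force(pos[0] - pos[1])
    forces = np.array([f, -f])            # Newton's third law
    vel += 0.5 * DT * forces / MASS       # first half kick
    pos += DT * vel                       # drift
    f = lj_force(pos[0] - pos[1])
    forces = np.array([f, -f])
    vel += 0.5 * DT * forces / MASS       # second half kick

print("final separation:", np.linalg.norm(pos[0] - pos[1]))
```

Production engines add neighbour lists, thermostats, constraints and parallel decomposition, but the update loop above is the core step that exascale workflows repeat billions of times.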
Project 2020 - 2022
Partners: DiRAC (Distributed Research utilising Advanced Computing), Leiden University, NVIDIA Limited (UK), Durham University, IBM (United Kingdom), ARM (United Kingdom)
Funder: UK Research and Innovation
Project Code: EP/V001523/1
Funder Contribution: 294,665 GBP

SPH (smoothed particle hydrodynamics), and Lagrangian approaches to hydrodynamics in general, are a powerful approach to hydrodynamics problems. In this scheme, the fluid is represented by a large number of particles moving with the flow. The scheme does not require a predefined grid, making it very suitable for tracking flows with moving boundaries, particularly flows with free surfaces, and problems that involve flows with physically active elements or large dynamic range. The range of applications of the method is growing rapidly, and it is being adopted by an expanding range of commercial companies including Airbus, Unilever, Shell, EDF, Michelin and Renault. The widespread use of SPH, and its potential for adoption across a wide range of science domains, make it a priority use case for the ExCALIBUR project. Massively parallel simulations with billions to hundreds of billions of particles have the potential to revolutionise our understanding of the Universe and will empower engineering applications of unprecedented scale, ranging from the end-to-end simulation of transients (such as a bird strike) in jet engines to the simulation of tsunami waves over-running a series of defensive walls.

The working group will identify a path to the exascale computing challenge. The group has expertise across both Engineering and Astrophysics, allowing us to develop an approach that satisfies the needs of a wide community. The group will start from two recent codes that already highlight the key issues and will act as the working group's starting point.

- SWIFT (SPH with Interdependent Fine-grained Tasking) implements a cutting-edge approach to task-based parallelism. Breaking the problem into a series of inter-dependent tasks allows for great flexibility in scheduling, and allows communication to be entirely overlapped with computation. The code uses a timestep hierarchy to focus computational effort where it is most needed.
- DualSPHysics draws its speed from effective use of GPU accelerators to execute the SPH operations on large groups of identical particles, allowing the code to benefit from exceptional parallel execution. The challenge is to effectively connect multiple GPUs across large numbers of inter-connected computing nodes.

The working group will build on these codes to identify the optimal approach to massively parallel execution on exascale systems. The project will benefit from close connections to the ExCALIBUR Hardware Pilot working group in Durham, driving the co-design of code and hardware. The particular challenges that we will address are:

- Optimal algorithms for exascale performance. In particular, we will address the best approaches to adaptive time-stepping and out-of-time integration, and adaptive domain decomposition. The first allows different spatial regions to be integrated forward in time optimally; the second allows the regions to be optimally distributed over the hardware.
- Modularisation and separation of concerns. Future codes need to be flexible and modularised, so that a separation can be achieved between integration routines, task scheduling and physics modules. This will make the code future-proof and easy to adapt to new science domain requirements and computing hardware.
- CPU/GPU performance optimisation. Next-generation hardware will require specific (and possibly novel) techniques to be developed to optimally advance particles in the SPH scheme. We will build on the programming expertise gained in DualSPHysics to allow efficient GPU use across multiple nodes.
- Communication performance optimisation. Separated computational regions need to exchange information at their boundaries. This can be done asynchronously, so that the time-lag of communication does not slow computation. While this has been demonstrated on current systems, the scale of ExCALIBUR will overload current subsystems, and a new solution is needed.
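The SPH scheme summarised in this record estimates fluid quantities as kernel-weighted sums over neighbouring particles. The sketch below shows a deliberately naive O(N^2) density estimate with the standard 3D cubic-spline kernel; it is a didactic illustration with assumed particle data and smoothing length, not code from SWIFT or DualSPHysics, which rely on cell/tree neighbour finding, task-based scheduling and GPU offload to reach scale.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 3D cubic-spline SPH smoothing kernel W(r, h) with support radius h."""
    q = r / h
    sigma = 8.0 / (np.pi * h**3)                 # 3D normalisation constant
    w = np.zeros_like(q)
    inner = q <= 0.5
    mid = (q > 0.5) & (q <= 1.0)
    w[inner] = 1 - 6 * q[inner]**2 + 6 * q[inner]**3
    w[mid] = 2 * (1 - q[mid])**3
    return sigma * w

def density(positions, masses, h):
    """Naive O(N^2) SPH density estimate; real codes use neighbour lists instead."""
    n = len(positions)
    rho = np.zeros(n)
    for i in range(n):
        r = np.linalg.norm(positions - positions[i], axis=1)
        rho[i] = np.sum(masses * cubic_spline_kernel(r, h))
    return rho

# Illustrative particle distribution (assumed): 1000 unit-mass particles in a unit box.
rng = np.random.default_rng(0)
pos = rng.random((1000, 3))
print(density(pos, np.ones(1000), h=0.1)[:5])
```

The per-particle neighbour sums are exactly the kind of fine-grained, data-dependent work that the task-based (SWIFT) and GPU-batched (DualSPHysics) strategies described above are designed to schedule efficiently at scale.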
Project 2022 - 2025
Partners: Intel (United States), University of Cambridge, Durham University, Imperial College London, University of Exeter, European Centre for Medium-Range Weather Forecasts (ECMWF), Nvidia (United States), UCL, DiRAC (Distributed Research utilising Advanced Computing)
Funder: UK Research and Innovation
Project Code: EP/X019497/1
Funder Contribution: 592,092 GBP

Multigrid (MG) algorithms are among the fastest linear algebra solvers available and are used within simulators for time-dependent partial differential equations to invert the elliptic operators arising from implicit time stepping or constraint equations. Multigrid solvers are available off the shelf as black-box software components. However, it is not clear whether a mathematically optimal MG algorithm will be able to deliver optimal performance on exascale hardware, and some scientists might not be able to use such black-box software because they have to stick to an existing (MG) software landscape. Most importantly, the optimality of off-the-shelf MG for one elliptic problem does not imply that the same algorithm behaves excellently for time-dependent PDEs, i.e. if we simply cast a time-dependent problem into a series of elliptic snapshots.

In MGHyPE, we will develop novel multigrid ingredients and implementation techniques to integrate state-of-the-art algorithms for time-dependent partial differential equations (PDEs) with state-of-the-art MG solvers. A bespoke MG, co-designed with the hardware and the PDE solver workflow, can unleash the potential of exascale: without outsourcing the linear algebra to black-box libraries, and with the right ingredients, skills and techniques, we can optimise the whole simulation pipeline in a holistic way rather than tuning individual algorithmic phases in isolation. To demonstrate the potential of our ideas, we will extend ExaHyPE, a code for purely hyperbolic (wave) equations that was behind ExCALIBUR's DDWG ExaClaw, to support elliptic constraints and implicit time stepping. An outreach and knowledge transfer programme ensures that innovations find their way into the RSE community, align with vendor roadmaps, and can be used by other ExCALIBUR projects.
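To make the multigrid idea concrete, the sketch below implements a textbook geometric V-cycle for the 1D Poisson problem -u'' = f with weighted-Jacobi smoothing, full-weighting restriction and linear interpolation. The grid size, smoother settings and right-hand side are assumptions chosen for illustration; this is not the MGHyPE or ExaHyPE implementation, which targets time-dependent PDEs on exascale hardware.

```python
import numpy as np

def smooth(u, f, h, iters=3, omega=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(iters):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction onto the coarse grid (every other point)."""
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def v_cycle(u, f, h):
    """One V-cycle: pre-smooth, restrict residual, recurse, prolong correction, post-smooth."""
    u = smooth(u, f, h)
    if len(u) <= 3:                               # coarsest grid: smoothing suffices
        return u
    r_coarse = restrict(residual(u, f, h))
    e_coarse = v_cycle(np.zeros_like(r_coarse), r_coarse, 2 * h)
    e = np.zeros_like(u)                          # linear interpolation of the correction
    e[::2] = e_coarse
    e[1::2] = 0.5 * (e_coarse[:-1] + e_coarse[1:])
    u += e
    return smooth(u, f, h)

n = 129                                           # assumed grid: 2**7 + 1 points on [0, 1]
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                  # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
```

Each V-cycle reduces the error by a grid-independent factor, which is what makes multigrid attractive as the inner solver for implicit time stepping; the co-design questions raised in this record are about keeping that property while mapping the cycle onto exascale hardware.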
Project 2021 - 2025
Partners: Duke University, Nvidia (United States), Numerical Algorithms Group (United Kingdom), DiRAC (Distributed Research utilising Advanced Computing), The University of Manchester, Durham University, National Renewable Energy Laboratory (NREL), Leiden University, UCL, ETH Zurich, University of Salford
Funder: UK Research and Innovation
Project Code: EP/W026775/1
Funder Contribution: 3,041,190 GBP

Many recent breakthroughs would not have been possible without access to the most advanced supercomputers. For example, the 2013 Chemistry Nobel Prize winners used supercomputers to develop powerful computing programs and software to understand and predict complex chemical processes, and for the 2017 Physics Nobel Prize supercomputers helped make the complex calculations needed to detect hitherto theoretical gravitational waves. The advent of exascale systems is the next dramatic step in this evolution. Exascale supercomputing will enable new scientific endeavour in wide areas of UK science, including advanced materials modelling, engineering and astrophysics. For instance, solving atomic and electronic structures with increasing realism to address major societal challenges - quantum mechanically detailed simulation and steering the design of batteries, electrolytic cells, solar cells, computers, lighting, and healthcare solutions - as well as enabling end-to-end simulation of transients (such as a bird strike) in a jet engine, the simulation of tsunami waves over-running a series of defensive walls, or understanding the universe on a cosmological scale.

The level of detail needed to describe these challenging problems accurately can be achieved using particle-based models, in which particles interact in a complicated dance that can be visualised or analysed to see how our model of nature would react in various situations. To model problems as complex as those outlined, the ways the particles interact must be flexible and tailored to the problem, and vast quantities of particles (and/or complicated interactions) are needed. This proposal takes on the challenge of efficiently calculating the interacting particles on vast numbers of computer cores. The density of particles can be massively different at different locations, and it is imperative to find a way for the compute engines to have similar amounts of work - novel algorithms to distribute the work over different types of compute engines will be developed and used to develop and run frontier simulations of real-world challenges.

There is a high cost to both purchasing and running an exascale system, so it is imperative that appropriate software is developed before users gain access to exascale facilities. By definition, exascale supercomputers will be three orders of magnitude more powerful than current UK facilities, which will be achieved by a larger number of cores and the use of accelerators (based on gaming graphics cards, for example). This transition in computer power represents both an anticipated increase in hardware complexity and heterogeneity, and an increase in the volume of communication between cores that will hamper algorithms used on the UK's current supercomputers. Many, if not all, of our software packages will require major changes before the hardware architectures can be fully exploited.

The investigators of this project are internationally leading experts in developing (enabling new science) and optimising (making simulations more efficient) state-of-the-art particle-based software for running simulations on supercomputers, based here and abroad. Software that we have developed is used both in academia and in industry. In this project we will develop solutions and implement them in our software and, importantly, train Research Software Engineers to become internationally leading in the art of exploiting exascale supercomputers for scientific research.
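One common way to attack the load-balancing problem described above (particles clustered very unevenly in space, but each compute engine needing a similar amount of work) is to order particles along a space-filling curve and cut the ordered list into chunks of roughly equal estimated cost. The sketch below illustrates that idea with a Morton (Z-order) key and a trivial per-particle cost model; the cost model, clustered test data and rank count are assumptions for illustration, not the project's algorithm.

```python
import numpy as np

def morton_key(ix, iy, iz, bits=10):
    """Interleave the bits of 3D integer cell coordinates into a Morton (Z-order) key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

def partition(positions, costs, n_ranks, bits=10):
    """Assign particles to ranks: sort by Morton key, then cut into equal-cost chunks."""
    cells = np.clip((positions * (1 << bits)).astype(int), 0, (1 << bits) - 1)
    keys = np.array([morton_key(*c) for c in cells])
    order = np.argsort(keys)                      # spatial ordering keeps neighbours together
    cum = np.cumsum(costs[order])
    targets = cum[-1] * (np.arange(1, n_ranks) / n_ranks)
    cuts = np.searchsorted(cum, targets)          # split points giving ~equal work per rank
    owner = np.empty(len(positions), dtype=int)
    for rank, chunk in enumerate(np.split(order, cuts)):
        owner[chunk] = rank
    return owner

# Illustrative clustered particle set (assumed): a dense blob plus a diffuse background.
rng = np.random.default_rng(1)
pos = np.clip(np.vstack([rng.normal(0.5, 0.02, (5000, 3)), rng.random((1000, 3))]), 0.0, 1.0)
cost = np.ones(len(pos))                          # assumed equal cost per particle
owner = partition(pos, cost, n_ranks=8)
print(np.bincount(owner))                         # particles per rank, roughly balanced
```

The spatial ordering matters as much as the equal-cost cuts: particles that interact are likely to end up on the same rank, which limits the growth in inter-node communication that the record identifies as a key exascale bottleneck.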
For further information contact us at helpdesk@openaire.eu