
Numerical Algorithms Group Ltd (NAG) UK
23 Projects, page 1 of 5
Project (2019 - 2023)
Partners: Numerical Algorithms Group (NAG) UK; University of Edinburgh
Funder: UK Research and Innovation
Project Code: EP/S027785/1
Funder Contribution: 231,607 GBP

What accurately describes such real-world processes as fluid flow mechanisms, or chemical reactions for the manufacture of industrial products? What mathematical formalism enables practitioners to guarantee a specific physical behaviour or motion of a fluid, or to maximise the yield of a particular substance? The answer lies in the important scientific field of PDE-constrained optimisation. PDEs (partial differential equations) are mathematical tools that enable us to model and predict the behaviour of a wide range of real-world physical systems. From the optimisation point of view, a particularly important set of such problems are those in which the dynamics may be controlled in some desirable way, for instance by applying forces to a domain in which fluid flow takes place, or by inserting chemical reactants at certain rates. By influencing a system in this way, we are able to generate an optimised outcome of a real-world process. It is hence essential to study and understand PDE-constrained optimisation problems. The possibilities offered by such problems are immense, influencing groundbreaking research in applied mathematics, engineering, and the experimental sciences. Crucial real-world applications arise in fluid dynamics, chemical and biological mechanisms, weather forecasting, image processing (including medical imaging), financial markets and option pricing, and many others.

Although a great deal of theoretical work has been undertaken for such problems, it is only in the past decade or so that a focus has been placed on solving them accurately and robustly on a computer, by tackling the matrix systems of equations which result. Much of the research underpinning this proposal involves constructing powerful iterative methods accelerated by 'preconditioners', which are built by approximating the relevant matrix accurately, such that the preconditioner is much cheaper to apply than solving the matrix system itself. Applying our methodology can then open the door to scientific challenges which were previously out of reach, by only storing and working with matrices that are tiny compared to the systems being solved overall.

Recently, PDE-constrained optimisation problems have found crucial applicability to problems from data analysis. This is due to the vast computing power that is available today, meaning that there exists the potential to store and work with huge-scale datasets arising from commercial records, online news sites, or health databases, for example. In turn, this has led to a number of data-driven processes being successfully modelled by optimisation problems constrained by PDEs. It is essential that algorithms for solving problems from these applications of data science can keep pace with the explosion of data which arises from real-world processes. Our novel numerical methods for solving the resulting huge-scale matrix systems aim to do exactly this. In this project, we will examine PDE-constrained optimisation problems in the presence of uncertain data, image processing problems, bioinformatics applications, and deep learning processes.
For each problem, we will devise state-of-the-art mathematical models to describe the process, for which we will then construct potent iterative solvers and preconditioners to tackle the resulting matrix systems. Our new algorithms will be validated theoretically and numerically, whereupon we will then release an open source code library to maximise their applicability and impact on modern optimisation and data science problems.
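As an illustration of what a preconditioner buys an iterative method (a minimal sketch of our own, not this project's solvers, which target far larger PDE-constrained systems), the snippet below solves a made-up symmetric positive-definite system with the conjugate gradient method, once unpreconditioned and once with a simple Jacobi (diagonal) preconditioner supplied through SciPy's LinearOperator interface; the matrix, its coefficients, and the tolerance are assumptions chosen for the demonstration.

```python
# A minimal sketch of preconditioned iterative solution. The matrix A is a 1D
# Laplacian plus a reaction term whose coefficients vary over several orders of
# magnitude, standing in for a discretised PDE operator (invented for this demo).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
rng = np.random.default_rng(0)
c = 10.0 ** rng.uniform(0, 4, n)                      # widely varying coefficients
A = (sp.diags([-1.0, -1.0], [-1, 1], shape=(n, n)) + sp.diags(2.0 + c)).tocsr()
b = np.ones(n)

iterations = {"plain": 0, "jacobi": 0}

def counter(key):
    def callback(xk):
        iterations[key] += 1
    return callback

# Unpreconditioned conjugate gradients.
x_plain, info_plain = spla.cg(A, b, callback=counter("plain"))

# Jacobi preconditioner: applying diag(A)^{-1} is far cheaper than solving with A,
# yet for this matrix it captures enough of A to cut the iteration count sharply.
d_inv = 1.0 / A.diagonal()
M = spla.LinearOperator((n, n), matvec=lambda r: d_inv * r)
x_jacobi, info_jacobi = spla.cg(A, b, M=M, callback=counter("jacobi"))

print("CG iterations, no preconditioner:    ", iterations["plain"])
print("CG iterations, Jacobi preconditioner:", iterations["jacobi"])
```

The same design principle, an approximation of the matrix that is accurate yet cheap to apply, underlies the far more sophisticated preconditioners the project develops for PDE-constrained optimisation.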
Project (2015 - 2018)
Partners: Numerical Algorithms Group (NAG) UK; QUB
Funder: UK Research and Innovation
Project Code: EP/M01147X/1
Funder Contribution: 963,928 GBP

Moore's Law and Dennard scaling have led to dramatic performance increases in microprocessors, the basis of modern supercomputers, which consist of clusters of nodes that include microprocessors and memory. This design is deeply embedded in parallel programming languages, the runtime systems that orchestrate parallel execution, and computational science applications. Some deviations from this simple, symmetric design have occurred over the years, but now we have pushed transistor scaling to the extent that simplicity is giving way to complex architectures. Dennard scaling has not held for about a decade; its absence, together with the atomic dimensions of transistors, has profound implications for the architecture of current and future supercomputers.

Scalability limitations will arise from insufficient data access locality. Exascale systems will have up to 100x more cores and commensurately less memory space and bandwidth per core. However, in-situ data analysis, motivated by decreasing file system bandwidths, will increase the memory footprints of scientific applications. Thus, we must improve per-core data access locality and reduce contention and interference for shared resources.

Energy constraints will fundamentally limit the performance and reliability of future large-scale systems. These constraints lead many to predict a phenomenon of "dark silicon" in which half or more of the transistors on each chip must be powered down for safe operation. Low-power processor technologies based on sub-threshold or near-threshold voltage operation are a viable alternative. However, these techniques dramatically decrease the mean time to failure at scale and thus require new paradigms to sustain throughput and correctness.

Non-deterministic performance variation will arise from design process variation that leads to asymmetric performance and power consumption in architecturally symmetric hardware components. The manifestations of the asymmetries are non-deterministic and can vary with small changes to system components or software. This performance variation produces non-deterministic, non-algorithmic load imbalance.

Reliability limitations will stem from the massive number of system components, which proportionally reduces the mean time to failure, but also from component wear and from low-voltage operation, which introduces timing errors. Infrastructure-level power capping may also compromise application reliability or create severe load imbalances.

The impact of these changes on technology will travel as a shockwave throughout the software stack. For decades, we have designed computational science applications based on very strict assumptions that performance is uniform and processors are reliable. In the future, hardware will behave unpredictably, at times erratically. Software must compensate for this behaviour. Our research anticipates this future hardware landscape. Our ecosystem will combine binary adaptation, code refactoring, and approximate computation to prepare computational science and engineering (CSE) applications.
We will provide them with scale-freedom, the ability to run well at scale under dynamic execution conditions, with at most limited, platform-agnostic code refactoring. Our software will provide automatic load balancing and concurrency throttling to tame non-deterministic performance variations. Finally, our new form of user-controlled approximate computation will enable execution of CSE applications on hardware with low supply voltages, or any form of faulty hardware, by selectively dropping or tolerating erroneous computation that arises from unreliable execution, thus saving energy. Cumulatively, these tools will enable non-intrusive reengineering of major computational science libraries and applications (2DRMP, Code_Saturne, DL_POLY, LB3D) and prepare them for the next generation of UK supercomputers. The project partners with NAG, a leading UK HPC software and service provider.
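As a toy illustration of one of these ingredients, automatic load balancing under non-deterministic performance variation (our own sketch, not the project's software stack), the snippet below compares a static block assignment of tasks to workers against dynamic self-scheduling from a shared pool; the task durations and slowdown factor are invented to mimic erratic hardware behaviour.

```python
# Dynamic self-scheduling vs static partitioning under unpredictable task times.
import random
import time
from concurrent.futures import ThreadPoolExecutor

random.seed(1)
N_WORKERS = 4
# Nominally equal tasks, but a quarter of them suffer a random 5x slowdown.
durations = [0.01 * (1 + random.choice([0, 0, 0, 5])) for _ in range(64)]

def run(task_time):
    time.sleep(task_time)          # stands in for real computation
    return task_time

def static_schedule():
    # Fixed block partitioning: worker i gets a contiguous chunk decided in
    # advance, so an unlucky chunk full of slow tasks delays the whole run.
    chunk = len(durations) // N_WORKERS
    blocks = [durations[i * chunk:(i + 1) * chunk] for i in range(N_WORKERS)]
    start = time.perf_counter()
    with ThreadPoolExecutor(N_WORKERS) as pool:
        list(pool.map(lambda block: [run(t) for t in block], blocks))
    return time.perf_counter() - start

def dynamic_schedule():
    # Shared pool: each worker grabs the next task the moment it becomes idle,
    # so the slow tasks are spread across workers automatically.
    start = time.perf_counter()
    with ThreadPoolExecutor(N_WORKERS) as pool:
        list(pool.map(run, durations))
    return time.perf_counter() - start

print("static assignment :", round(static_schedule(), 3), "s")
print("dynamic scheduling:", round(dynamic_schedule(), 3), "s")
```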
Project (2023 - 2026)
Partners: University of Strathclyde; Trinity College Dublin; Numerical Algorithms Group (NAG) UK
Funder: UK Research and Innovation
Project Code: EP/W035561/1
Funder Contribution: 355,811 GBP

Accurate mathematical models of scientific phenomena provide insights into, and solutions to, pressing challenges in, for example, climate change, personalised healthcare, renewable energy, and high-value manufacturing. Many of these models use groups of interconnected partial differential equations (PDEs) to describe the phenomena. These equations describe the phenomena by abstractly relating relevant quantities of scientific interest, and how they change in space and time, to one another. These equations are human-readable, but since they are abstract, computers cannot interpret them. As the use of computers is fundamental to effective and accurate modelling, a process of "discretisation" must be undertaken to approximate these equations by something understandable to the computer.

Scientific phenomena are generally modelled as occurring in a particular space and during a particular time-span. Discretisation samples both the physical space and the time at a discrete set of points. Instead of considering the PDEs over the whole space and time, we approximate the relationships communicated abstractly by the PDEs only at these discrete points. This transforms abstract, human-readable PDEs into a set of algebraic equations whose unknowns are approximations of the quantities of interest at these points. In order that the solution to these equations approximates the solution to the PDEs well enough, the discretisation generally must have a high resolution, meaning there are often hundreds of millions of unknowns or more. These algebraic equations are thus large-scale and must be treated by efficient computer programs. As the equations themselves can often be stored in a compressed manner, iterative methods that do not require a direct representation of the equations are often the most attractive. These methods produce a sequence of approximate solutions and are stopped when the accuracy is satisfactory for the model in question. The work in this proposal concerns analysing, predicting, and accelerating these iterative methods so that they produce a satisfactorily accurate solution more rapidly.

It is quite common that the algebraic equations arising from the aforementioned discretisation have an additional structure known as "Toeplitz". A great deal of work has gone into understanding the behaviour of iterative methods applied to these Toeplitz-structured problems. In this proposal, we will extend this understanding further and develop new accelerated methods to treat these problems. Furthermore, a wider class of structured problems, called Generalised Locally Toeplitz (GLT) problems, can be used to describe the equations arising from an even larger class of mathematical models. We will extend much of the analysis of Toeplitz problems to the GLT setting.

The work in this proposal will lead to faster, more accurate modelling of phenomena with lower energy costs, as the computations will not require as much time running on large supercomputers.
Our proposal spans new mathematical developments, the design of efficient iterative methods, their application to models of wave propagation and wind turbines, and the production of software for end-users.
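To make the computational benefit of Toeplitz structure concrete (a minimal sketch of our own, assuming a symmetric, well-conditioned Toeplitz matrix invented for the demonstration), the snippet below stores only the first column of the matrix, applies it to vectors in O(n log n) time via SciPy's FFT-based matmul_toeplitz, and hands the resulting matrix-free operator to GMRES.

```python
# Matrix-free solution of a Toeplitz system: the whole n-by-n matrix is
# determined by its first column, so storage is O(n) and each matrix-vector
# product costs O(n log n) via the FFT. The stencil values are made up and
# chosen to keep the system well conditioned for this example.
import numpy as np
from scipy.linalg import matmul_toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

n = 4096
col = np.zeros(n)
col[0], col[1] = 3.0, -1.0        # shifted second-difference stencil

# Matrix-free operator: only `col` is ever stored, never the full n x n matrix.
A = LinearOperator((n, n), matvec=lambda x: matmul_toeplitz((col, col), x))

b = np.ones(n)
x, info = gmres(A, b)
print("converged:", info == 0, " residual:", np.linalg.norm(A.matvec(x) - b))
```

Analysing how quickly such iterations converge, and accelerating them for the much broader GLT class, is where the mathematical content of the proposal lies.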
Project (2008 - 2011)
Partners: University of Hertfordshire; Qioptiq Ltd; Qinetiq (United Kingdom); Numerical Algorithms Group (NAG) UK
Funder: UK Research and Innovation
Project Code: EP/F069383/1
Funder Contribution: 236,783 GBP

Given a Fortran program which evaluates numerically a scalar output y = f(x) from a vector x of input values, we are frequently interested in evaluating the gradient vector g = f'(x) whose components are the derivatives (sensitivities) dy/dx. Automatic Differentiation is a set of techniques for automatically transforming the program for evaluating f into a program for evaluating f'. In particular, the adjoint, or reverse, mode of Automatic Differentiation can produce numerical values for all components of the gradient g at a computational cost of about three evaluations of f, even if there are millions of components in x and g. This is done by using the chain rule from calculus (but applied to floating-point numerical values, rather than to symbolic expressions) so as to evaluate numerically the sensitivity of the output with respect to each floating-point calculation performed. However, doing this requires making the program run backwards, since these sensitivities must be evaluated starting with dy/dy = 1 and ending with dy/dx = g, which is the reverse order to the original calculation. It also requires the intermediate values calculated by f to be either stored on the forward pass, or recomputed on the reverse pass by the adjoint program.

Phase II of the CompAD project has already produced the first industrial-strength Fortran compiler in the world able to perform this adjoint transformation (and reverse program flow) automatically. Previous Automatic Differentiation tools used either overloading (which was hard to optimise) or source transformation (which could not directly utilise low-level compiler facilities).

The adjoint Fortran compiler produced by Phase II is perfectly adequate for small to medium-sized problems (up to a few hundred input variables), and meets the objectives of the second phase of the project. However, even moderately large problems (many thousands of input variables) require the systematic use and placement of checkpoints, in order to manage efficiently the trade-off between storage on the way forward and recomputation on the way back. With the present prototype, the user must place and manage these checkpoints explicitly. This is almost acceptable for experienced users with very large problems which they already understand well, but it is limiting and time-consuming for users without previous experience of using Automatic Differentiation, and represents a barrier to the uptake of numerical methods based upon Automatic Differentiation.

The objective of Phase III of the CompAD project is to automate the process of trading off storage and recomputation in a way which is close to optimal. Finding a trade-off which is actually optimal is known to be an NP-hard problem, so we are seeking solutions which are almost optimal in a particular sense. Higher-order derivatives (e.g. directional Hessians) can be generated automatically by feeding back into the compiler parts of its own output during the compilation process.
We intend to improve the code transformation techniques used in the compiler to the point where almost optimally efficient higher-order derivative code can be generated automatically in this way. A primary purpose of this project is to explore alternative algorithms and representations for program analysis and code transformation in order to solve certain hard problems and lay the groundwork for future progress with others. But we will be using some hard, leading-edge numerical applications from our industrial partners to guide and prove the new technology we develop, and the Fortran compiler resulting from this phase of the project is designed to be of widespread direct use in scientific computing.
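The following is a minimal sketch of reverse-mode (adjoint) differentiation, written with operator overloading in Python rather than the project's compiler-based source transformation, purely to illustrate the mechanism described above: each floating-point operation records its local partial derivatives on a tape during the forward pass, and the reverse pass propagates sensitivities from dy/dy = 1 back to the inputs, yielding the whole gradient for a small constant multiple of the cost of one evaluation of f. The function f and its inputs are invented for the example.

```python
# Tape-based reverse-mode automatic differentiation via operator overloading.
import math

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents      # [(parent Var, local partial derivative), ...]
        self.adjoint = 0.0          # dy/d(this variable), filled on the reverse pass

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def sin(self):
        return Var(math.sin(self.value), [(self, math.cos(self.value))])

def gradient(y, xs):
    # Reverse pass: order the tape topologically, then accumulate adjoints from
    # the output back to the inputs (the chain rule applied to numerical values).
    order, seen = [], set()
    def visit(v):
        if id(v) not in seen:
            seen.add(id(v))
            for parent, _ in v.parents:
                visit(parent)
            order.append(v)
    visit(y)
    y.adjoint = 1.0                              # dy/dy = 1
    for v in reversed(order):
        for parent, local in v.parents:
            parent.adjoint += v.adjoint * local
    return [x.adjoint for x in xs]

# y = f(x1, x2) = x1 * x2 + sin(x1); the gradient is (x2 + cos(x1), x1).
x1, x2 = Var(0.5), Var(3.0)
y = x1 * x2 + x1.sin()
print(gradient(y, [x1, x2]))                     # approximately [3.8776, 0.5]
```

Note that this overloading sketch keeps the entire tape in memory; the checkpointing problem the project automates is precisely the trade-off between storing such intermediate values and recomputing them on the reverse pass.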
Project (2018 - 2020)
Partners: University of Strathclyde; British Geological Survey (NERC); Numerical Algorithms Group (NAG) UK
Funder: UK Research and Innovation
Project Code: EP/R009821/1
Funder Contribution: 92,739 GBP

Mathematical models of the diffusion of signals and particles are important for gaining insights into many key challenges facing the UK today. These problems include understanding how fluid flows through the ground, which helps to ensure that we have safe drinking water; characterising the propagation of electrical impulses through the heart, which aids our understanding of heart disease; and developing accurate models of financial processes, which improve our economy by providing better predictions of financial markets.

This project focuses on so-called fractional diffusion problems, which occur when the diffusion process involves a number of different flow rates or long-range effects. Fractional diffusion occurs in many applications, including the groundwater flow, cardiac electrical propagation, and finance problems listed above.

Solving mathematical models of fractional diffusion is challenging, and typically requires a numerical method, i.e. a computer simulation. Usually, the most time-consuming part of this simulation is solving thousands, or even millions, of interdependent linear equations on a computer. Indeed, the time required to solve this system of equations may be so large that we are prevented from simulating fractional diffusion problems that capture the true complexity of real-world applications. Reducing this solve time is thus crucial if we are to generate new scientific insights in important applications involving fractional diffusion.

This project will develop new methods for solving these huge systems of equations that are guaranteed to be fast. We will focus on iterative solvers, which are well suited to the class of numerical methods (computer simulations) on which we focus. Iterative solvers compute a new approximation to the solution at each step, and so are fast if a good approximation is found after only a few iterations. However, this is generally only possible if we apply a convergence accelerator, called a preconditioner, which captures the 'essence' of the linear system but is cheap to use. For many fractional diffusion problems, this preconditioner is currently chosen heuristically, i.e. without theoretical justification. Consequently, the preconditioner may fail to reduce the (very large) computation time needed to solve the linear system.

The goal of this project is to propose new preconditioners and iterative methods that are theoretically justified, and hence guaranteed to converge quickly, for a range of fractional diffusion problems. We will develop new software that will enable people with fractional diffusion problems to use our improved solvers easily. Additionally, we will apply these fast preconditioners and iterative solvers in a fractional diffusion model of groundwater flow in an important UK aquifer. Solving this model quickly will enable us to better track our drinking water, and identify possible sources of contamination.
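As a hedged illustration of the kind of system and preconditioner involved (our own sketch, not the project's theoretically justified methods), the snippet below discretises a 1D space-fractional derivative with shifted Grunwald-Letnikov weights, which yields a dense Toeplitz matrix stored via its first column and row only, and solves one implicit time step with GMRES accelerated by a simple circulant (Strang-type) preconditioner applied through the FFT; the fractional order alpha, the grid size, and the step ratio r are made-up parameters for the demonstration.

```python
# Fractional diffusion step: dense Toeplitz system, matrix-free GMRES,
# circulant (Strang-type) preconditioner applied via the FFT.
import numpy as np
from scipy.linalg import matmul_toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

n, alpha, r = 2048, 1.5, 10.0      # assumed grid size, fractional order, step ratio

# Grunwald-Letnikov weights: g_0 = 1, g_k = g_{k-1} * (k - 1 - alpha) / k.
g = np.ones(n + 1)
for k in range(1, n + 1):
    g[k] = g[k - 1] * (k - 1 - alpha) / k

# Implicit-step matrix M = I - r * A with A_{ij} = g_{i-j+1} (shifted stencil).
# Only the first column and first row of this dense Toeplitz matrix are stored.
col = -r * g[1:n + 1]
col[0] += 1.0                      # diagonal entries equal 1 + r * alpha
row = np.zeros(n)
row[0], row[1] = col[0], -r * g[0]

def matvec(x):
    return matmul_toeplitz((col, row), x)   # O(n log n) product via the FFT

A = LinearOperator((n, n), matvec=matvec)

# Strang-type circulant preconditioner: copy the central diagonals into a
# circulant matrix, whose inverse costs two FFTs per application.
s = np.zeros(n)
m = n // 2
s[:m + 1] = col[:m + 1]
s[m + 1:] = row[n - np.arange(m + 1, n)]
eig = np.fft.fft(s)

def apply_precond(v):
    return np.real(np.fft.ifft(np.fft.fft(v) / eig))

M = LinearOperator((n, n), matvec=apply_precond)

b = np.ones(n)
x, info = gmres(A, b, M=M)
print("converged:", info == 0, " residual:", np.linalg.norm(matvec(x) - b))
```

Whether such a heuristic circulant choice is provably effective for a given fractional problem is exactly the kind of question this project answers, by designing preconditioners whose fast convergence is guaranteed in theory.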