Powered by OpenAIRE graph

Discovering rare, extreme behaviour in large-scale computational models

Funder: UK Research and Innovation (UKRI)
Project code: MR/T041862/1
Funded under: Future Leaders Fellowships (FLF)
Funder contribution: 1,106,090 GBP

Description

The construction of high-fidelity digital models of complex physical phenomena, and more importantly their deployment as investigation tools for science and engineering, are among the most critical undertakings of scientific computing today. Without computational models, the study of spatially irregular, multi-scale, or highly coupled, nonlinear physical systems would simply not be tractable. Even when computational models are available, however, tuning their physical and geometrical parameters (sometimes referred to as control variables) for optimal exploration and discovery is a colossal endeavour.

In addition to the technological challenges inherent to massively parallel computation, the task is complicated by the scientific complexity of large-scale systems, where many degrees of freedom can combine to generate emergent, anomalous, resonant features that become increasingly pronounced as the model's fidelity is increased (e.g., in turbulent scenarios). These features may correspond to highly interesting system configurations, but they are often too short-lived or too isolated in the control space to be found by brute-force computation alone. Yet most computational surveys today are guided by random (albeit somewhat instinct-educated) guesses, so the potential for missed phenomenology is simply unquantifiable.

In many domains, anomalous solutions could describe life-threatening events such as extreme weather. A digital model of an industrial system may reveal, under special conditions, an anomalous response to the surrounding environment that could lead to decreased efficiency, material fatigue, and structural failure. Precisely because of their singular and catastrophic nature, as well as their infrequency and short life, these configurations are also the hardest to predict. Any improvement in our capacity to locate where anomalous dynamics may unfold could therefore tremendously strengthen our ability to protect against extreme events.
More fundamentally, establishing whether the set of equations implemented in a computational model can reproduce specific, exotic solutions (such as rare astronomical transients [1]) for certain configuration parameters can expose (or exclude) the manifestation of new physics, and shed light on the laws that govern our Universe.

Recently, the long-lived but sparse attempts [2] to instrument simulations with optimisation algorithms have grown into a mainstream effort. Current trends in intelligent-simulation orchestration stress the need for computational surveys to learn from previous runs, but they do not address the question of which information is most valuable to extract. A theoretical formalism for classifying the information processed by large computational models is simply absent.

The main objective of this project is to develop a roadmap for the definition of such a formalism. The key question is how one can learn optimally from large computational models. This is a deep, overarching issue affecting experimental as well as computational science, and it has recently been proven to be an NP-hard problem [3].

Correspondingly, the common approach to simulation data reduction is often pragmatic rather than formal: if solutions with specific properties (such as a certain aerodynamic drag coefficient) are sought, those properties are turned directly into objective functions, taking the control variables as input arguments. This is reasonable when those properties depend only mildly on the input; for anomalous solutions, however, this is often not the case, so one wonders whether more powerful predictors of a simulation's behaviour could be extracted from other, apparently unrelated information contained in the digital model. If so, exposing this information to machine-learning algorithms could arguably lead to more efficient and exhaustive searches.
The investigation of this possibility is the core task that this project aims to undertake.
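The "property as objective function" approach described above, and why random surveys struggle with rare features, can be illustrated with a small sketch. Everything here is hypothetical: `run_simulation` is a toy stand-in for an expensive model whose response contains one narrow anomalous window in the control space, and `survey` is a naive random search that treats the model output directly as the objective.

```python
import random

def run_simulation(control):
    """Hypothetical stand-in for an expensive computational model.
    Maps a control variable in [0, 1] to a scalar response: a smooth,
    unremarkable background plus one narrow anomalous peak that a
    sparse, random survey is likely to miss entirely."""
    background = 0.1 * control                        # mild dependence on input
    anomaly = 2.0 if 0.420 < control < 0.425 else 0.0 # rare, short-lived feature
    return background + anomaly

def survey(n_runs, seed=0):
    """Random ('educated guess') survey of the control space: each run
    evaluates the objective directly on the model output and keeps the
    best configuration seen so far."""
    rng = random.Random(seed)
    best_control, best_value = None, float("-inf")
    for _ in range(n_runs):
        c = rng.random()
        v = run_simulation(c)
        if v > best_value:
            best_control, best_value = c, v
    return best_control, best_value

# A sparse survey samples the anomalous window (width 0.005) only rarely,
# so it usually reports just the smooth background; a much denser survey
# is needed before the anomaly is found by chance.
print(survey(n_runs=50))
print(survey(n_runs=10000))
```

The point of the sketch is the cost asymmetry: because the anomaly occupies ~0.5% of the control space, a brute-force survey needs on the order of hundreds of runs just to land in it once — which is exactly why the project argues for learning richer predictors from the model instead of sampling blindly.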

