
CVUT

ČESKÉ VYSOKÉ UČENÍ TECHNICKÉ V PRAZE (Czech Technical University in Prague)
Country: Czech Republic
222 Projects, page 1 of 45
  • Funder: EC · Project Code: 649043
    Overall Budget: 1,499,500 EUR · Funder Contribution: 1,499,500 EUR

    The goal of the AI4REASON project is a breakthrough in what is considered a very hard problem in AI and automation of reasoning, namely the problem of automatically proving theorems in large and complex theories. Such complex formal theories arise in projects aimed at verification of today's advanced mathematics such as the Formal Proof of the Kepler Conjecture (Flyspeck), verification of software and hardware designs such as the seL4 operating system kernel, and verification of other advanced systems and technologies on which today's information society critically depends. Designing an explicitly programmed solution to this problem appears extremely complex and unlikely to succeed. However, we have recently demonstrated that the performance of existing approaches can be multiplied by data-driven AI methods that learn reasoning guidance from large proof corpora. The breakthrough will be achieved by developing such novel AI methods. First, we will devise suitable Automated Reasoning and Machine Learning methods that learn reasoning knowledge and steer the reasoning processes at various levels of granularity. Second, we will combine them into autonomous self-improving AI systems that interleave deduction and learning in positive feedback loops. Third, we will develop approaches that aggregate reasoning knowledge across many formal, semi-formal and informal corpora and deploy the methods as strong automation services for the formal proof community. The expected outcome is our ability to prove automatically at least 50% more theorems in high-assurance projects such as Flyspeck and seL4, bringing a major breakthrough in formal reasoning and verification. As an AI effort, the project offers a unique path to large-scale semantic AI. The formal corpora concentrate centuries of deep human thinking in a computer-understandable form on which deductive and inductive AI can be combined and co-evolved, providing new insights into how humans do mathematics and science. (A toy sketch of the deduction-and-learning loop follows below.)

    Views: 855 · Downloads: 2,079
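
    The deduction-and-learning loop above can be illustrated in miniature. The following toy Python sketch, which is hypothetical and in no way AI4REASON's actual system, runs a propositional given-clause prover whose clause selection is steered by perceptron-style feature weights; after each successful proof the weights are updated, so later problems are searched with better guidance, a small-scale analogue of the positive feedback loop the project describes.

```python
"""Toy sketch of learning-guided proof search (hypothetical, not AI4REASON
code): a propositional given-clause prover whose clause selection is steered
by learned per-literal weights, retrained after every successful proof."""

from collections import defaultdict

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All propositional resolvents of two clauses (frozensets of literals)."""
    return [(c1 - {l}) | (c2 - {neg(l)}) for l in c1 if neg(l) in c2]

class LearnedSelector:
    """Scores clauses by learned per-literal weights plus a size penalty."""
    def __init__(self):
        self.w = defaultdict(float)

    def score(self, clause):
        return sum(self.w[l] for l in clause) - 0.1 * len(clause)

    def update(self, used, unused, lr=0.1):
        # Reward literals of clauses that appeared in the proof,
        # penalize literals of clauses that were available but went unused.
        for cs, sign in ((used, +lr), (unused, -lr)):
            for c in cs:
                for l in c:
                    self.w[l] += sign

def prove(clauses, selector, limit=1000):
    """Given-clause loop; returns the set of clauses used in the proof."""
    parents = {c: () for c in clauses}
    processed, unprocessed = set(), set(clauses)
    while unprocessed and limit:
        limit -= 1
        given = max(unprocessed, key=selector.score)  # learned guidance here
        unprocessed.discard(given)
        processed.add(given)
        for other in list(processed):
            for r in resolve(given, other):
                if r in parents:
                    continue
                parents[r] = (given, other)
                if not r:  # empty clause derived: collect the proof's clauses
                    used, stack = set(), [r]
                    while stack:
                        c = stack.pop()
                        if c not in used:
                            used.add(c)
                            stack.extend(parents[c])
                    return used
                unprocessed.add(r)
    return None

# Deduction/learning feedback: each proof improves guidance for the next.
selector = LearnedSelector()
problems = [
    [frozenset({"p"}), frozenset({"~p", "q"}), frozenset({"~q"})],
    [frozenset({"a", "b"}), frozenset({"~a"}), frozenset({"~b"})],
]
for prob in problems:
    used = prove(prob, selector)
    if used:
        selector.update(used, set(prob) - used)
        print("proved; learned from", len(used), "proof clauses")
```
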
  • Funder: EC · Project Code: 101081989
    Funder Contribution: 150,000 EUR

    Unevaluated science is not worth funding. Gone are the days when a scientific breakthrough could be based on scribbles made on a few loose sheets of paper reviewed by a single attentive reader. Most disciplines rely on experimental data that is collected, analyzed, and presented using powerful computational tools. The scientific adventure hinges on our ability to openly and widely share and reproduce such results. The goal of this PoC is to market a tool, R4R, that lets non-programmer scientists make their archival work easily reproducible, and to offer it to them under an affordable licence. Affordable reproducibility is key to independent evaluation of previously published results. We will focus on reproducibility of data analysis pipelines written in R with RMarkdown or Jupyter. Creating a reproducible environment is hard, labor-intensive and error-prone, and requires expertise that data analysts lack. We propose to use dynamic program analysis techniques to track dependencies, data inputs, and other sources of non-determinism needed for reproducibility. R4R will synthesize metadata to generate self-contained, portable, fully reproducible environments based on Docker images. (A rough sketch of this idea follows below.)

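    As a rough illustration of the approach, the hypothetical sketch below pins the package versions a run actually used and synthesizes a minimal Dockerfile from them. It is written in Python for brevity; the real R4R targets R/RMarkdown and Jupyter pipelines and uses dynamic program analysis rather than an explicit module list, so every name and the Dockerfile layout here are assumptions.

```python
"""Hypothetical sketch of the R4R idea (Python stand-in; the real tool
targets R/RMarkdown and Jupyter): pin the dependencies a run actually used,
then synthesize a self-contained Dockerfile that reproduces the run."""

import importlib.metadata
from pathlib import Path

def pin_versions(modules):
    """Map imported top-level modules to installed distribution versions."""
    pins = {}
    for m in modules:
        try:
            # Simplification: import name == distribution name (not always
            # true, e.g. "sklearn" vs "scikit-learn"); a real tool resolves it.
            pins[m] = importlib.metadata.version(m)
        except importlib.metadata.PackageNotFoundError:
            pass  # stdlib or local module: nothing to pin
    return pins

def synthesize_dockerfile(pins, data_files, entrypoint, python="3.11"):
    """Emit a Dockerfile that freezes the observed environment and inputs."""
    lines = [f"FROM python:{python}-slim", "WORKDIR /app"]
    if pins:
        reqs = " ".join(f"{m}=={v}" for m, v in sorted(pins.items()))
        lines.append(f"RUN pip install --no-cache-dir {reqs}")
    for f in data_files:  # the exact input files the traced run touched
        lines.append(f"COPY {f} {f}")
    lines += [f"COPY {entrypoint} {entrypoint}", f'CMD ["python", "{entrypoint}"]']
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    pins = pin_versions(["numpy", "pandas"])  # as if recorded from a traced run
    Path("Dockerfile").write_text(
        synthesize_dockerfile(pins, ["data/input.csv"], "analysis.py"))
```
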
  • Funder: EC · Project Code: 239455
  • Funder: EC · Project Code: 101097822
    Overall Budget: 2,499,820 EUR · Funder Contribution: 2,499,820 EUR

    Computer vision is beginning to see a paradigm shift with large-scale foundational models that demonstrate impressive results on a wide range of recognition tasks. Despite these successes, such models learn only static 2D image representations based on observed correlations between still images and natural language. However, our world is three-dimensional, full of dynamic events and causal interactions. We argue that the next scientific challenge is to invent foundational models for embodied perception, that is, perception for systems that have a physical body, operate in a dynamic 3D world and interact with the surrounding environment. The FRONTIER proposal addresses this challenge by:

    1. developing a new class of foundational model architectures grounded in the geometrical and physical structure of the world that seamlessly combine large-scale neural networks with learnable, differentiable physical simulation components to achieve generalization across tasks, situations and environments (a toy illustration follows below);
    2. designing new learning algorithms that incorporate the physical and geometric structure as constraints on the learning process to achieve new levels of data efficiency, with the aim of bringing intelligent systems closer to humans, who can often learn from only a few available examples;
    3. developing new federated learning methods that will allow sharing and accumulating learning experiences across different embodied systems, thereby achieving new levels of scale, accuracy, and robustness not achievable by learning in any individual system alone.

    Breakthrough progress on these problems would have profound implications for our everyday lives as well as science and commerce, with safer cars that learn from each other, intelligent production lines that collaboratively adapt to new workflows, or a new generation of smart assistive robots that automatically learn new skills from the Internet and each other, enabled by the advances from this project.

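    Point 1 above hinges on making the physical simulation component differentiable, so that gradients flow from a task loss back through the simulated dynamics. The PyTorch toy below, an illustration rather than anything from FRONTIER itself, recovers the stiffness and damping of a spring by backpropagating a trajectory loss through a physics rollout.

```python
"""A PyTorch toy (illustration, not FRONTIER's architecture): fit the unknown
stiffness k and damping c of a damped spring by backpropagating a trajectory
loss through a differentiable physics rollout."""

import torch

def rollout(x0, v0, k, c, steps=100, dt=0.01):
    """Differentiable semi-implicit Euler rollout of a unit-mass spring."""
    xs, x, v = [], x0, v0
    for _ in range(steps):
        a = -k * x - c * v      # F = -k*x - c*v
        v = v + dt * a
        x = x + dt * v
        xs.append(x)
    return torch.stack(xs)

x0, v0 = torch.tensor(1.0), torch.tensor(0.0)
with torch.no_grad():           # "observed" trajectory from the true physics
    target = rollout(x0, v0, torch.tensor(4.0), torch.tensor(0.5))

# Learnable physical parameters, recovered by gradients through the simulator.
k = torch.tensor(1.0, requires_grad=True)
c = torch.tensor(0.1, requires_grad=True)
opt = torch.optim.Adam([k, c], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = ((rollout(x0, v0, k, c) - target) ** 2).mean()
    loss.backward()
    opt.step()
print(f"k={k.item():.2f} (true 4.0), c={c.item():.2f} (true 0.5)")
```

    Semi-implicit Euler is used here because it stays stable over the rollout while remaining a plain composition of differentiable operations, so autograd can trace straight through it.
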
  • Funder: EC · Project Code: 101102708
    Funder Contribution: 166,279 EUR

    Experimental evidence indicates that fluid filtration through unsaturated porous media exhibits hysteretic behavior, originating at the microscopic level from surface tension at the point of contact between water and air in the pores. As a result, the pressure-saturation constitutive relation turns out to be of hysteresis type, accurately described by the Preisach operator after a thorough fitting procedure. The main objective of the MulPHys project is to expand the knowledge about Preisach hysteresis for fluid filtration and to build new mathematical models for unsaturated porous media, employing a multiscale approach. The suitability of the Preisach operator for describing the hysteretic behavior of unsaturated porous solids sparked an intense research effort in the community of experts in PDEs with hysteresis, with the goal of including the Preisach operator in mathematical models. In these endeavors, the presence of a microstructure was neglected, the approach being directly macroscopic. Another branch of research has considered porous media as objects with a microstructure and has derived the macroscopic description of fluid flow from local behavior. In this body of research, however, porous media are often assumed to be completely saturated, so that no hysteresis can occur. With MulPHys, we will fill the gap between these two research areas, employing homogenization techniques to provide a justification of the Preisach operator as the correct tool for describing filtration. Particular attention will be paid to including gravity effects and to understanding solid-liquid interactions at the microscopic level. Numerical simulations and experimental data will be instrumental in achieving these objectives. Potential applications in the preservation of historic buildings are foreseen, thus addressing a European priority, with significant impact not only for the scientific community but also for professional sectors and society as a whole. (A toy discretization of the Preisach operator follows below.)

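    For readers unfamiliar with it, the Preisach operator represents hysteresis as a weighted superposition of elementary two-state relays R_{a,b} with thresholds b < a: a relay switches up when the input reaches a, switches down when it falls to b, and otherwise remembers its state. The toy discretization below (uniform weights on a triangular threshold grid, both assumptions made for illustration) shows the ascending and descending branches separating at the same input value, the loop behavior seen in pressure-saturation data.

```python
"""Toy discretization of the Preisach operator: a weighted sum of relays
R_{a,b} over a triangular grid of thresholds b < a. The uniform weights and
the grid are illustrative assumptions, not a fitted model."""

import itertools

class Relay:
    """Two-state hysteron: up at u >= a, down at u <= b, memory in between."""
    def __init__(self, a, b):
        assert b < a
        self.a, self.b, self.state = a, b, -1.0

    def step(self, u):
        if u >= self.a:
            self.state = 1.0
        elif u <= self.b:
            self.state = -1.0
        return self.state

class Preisach:
    """P[u](t) = sum_i w_i * R_{a_i,b_i}[u](t)."""
    def __init__(self, n=20, lo=-1.0, hi=1.0):
        grid = [lo + (hi - lo) * i / n for i in range(n + 1)]
        self.relays = [Relay(a, b) for b, a in itertools.combinations(grid, 2)]
        self.w = 1.0 / len(self.relays)

    def step(self, u):
        return sum(self.w * r.step(u) for r in self.relays)

# Sweep the input up then down: the two branches disagree at u = 0,
# which is the hysteresis loop seen in pressure-saturation curves.
p = Preisach()
up = [p.step(u / 50) for u in range(-50, 51)]
down = [p.step(u / 50) for u in range(50, -51, -1)]
print(f"at u=0: ascending {up[50]:+.2f}, descending {down[50]:+.2f}")
```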
