
Charles University

15 Projects, page 1 of 3
  • Funder: UK Research and Innovation Project Code: EP/P019749/1
    Funder Contribution: 444,719 GBP

    Almost all modern electronic devices require memory for large-scale data storage, with the ability to write, store and access information. There are strong commercial drivers for increased speed of operation, energy efficiency, storage density and robustness of such memories. Most large-scale data storage devices, including hard drives, rely on the principle that two different magnetisation orientations in a ferromagnet represent the "zeros" and "ones". By applying a magnetic field to a ferromagnet one can reversibly switch the direction of its magnetisation between different stable directions and read out these states / bits from the magnetic fields they produce. This is the basis of ferromagnetic media used from the 19th century to current hard drives. Today's magnetic memory chips (MRAMs) do not use magnetic fields to manipulate magnetisation; instead, the writing process is done by current pulses which can reverse magnetisation directions via the spin-torque effect. In the conventional version of the effect, switching is achieved by electrically transferring spins from a fixed reference permanent magnet. More recently, it was discovered that the spin torque can be triggered without a reference magnet, by a relativistic effect in which the motion of electrons results in effective internal magnetic fields. Furthermore, the magnetisation state is read electrically in such MRAMs. Therefore the sensitivity of ferromagnets to external magnetic fields, and the magnetic fields they produce, are not utilised. In fact they become problems: data can be accidentally wiped by stray magnetic fields, and can be read from the fields produced, making data insecure. The fields produced also limit how closely data elements can be packed. Recently we have shown that antiferromagnetic materials can be used to perform all the functions required of a magnetic memory element.
Antiferromagnets have the north poles of half of the atomic moments pointing in one direction and the other half in the opposite direction, leading to no net magnetisation and no external magnetic field. For antiferromagnets with specific crystal structures we predicted and verified that current pulses produce effective fields which can rotate the two types of moments in the same direction. We were able to reverse the moment orientation in antiferromagnets by a current-induced torque and to read out the magnetisation state electrically. Since antiferromagnets do not produce a net magnetic field, they do not have the associated problems discussed above. The dynamics of the magnetisation in antiferromagnets occur on timescales orders of magnitude faster than in ferromagnets, which could lead to much faster and more efficient operation. Finally, the antiferromagnetic state is readily compatible with metal, semiconductor or insulator electronic structures, so their use greatly expands the materials basis for such applications. This proposal aims to develop a detailed understanding of current-induced switching in antiferromagnets through a programme of extensive experimental and theoretical studies, and to pave the way to exploitation of this effect in future magnetic memory technologies. We will develop high-quality antiferromagnetic materials and smaller and faster devices. We aim to achieve devices in which the antiferromagnetic state has not disordered (single-domain behaviour), which will have improved technical parameters and will be ideal for advancing fundamental understanding. We also aim to demonstrate and study the manipulation of regions of antiferromagnets in which there is a transition between two types of moment orientation (domain walls) using current-induced torques.
As well as electrical measurements we will directly study the magnetic order in the antiferromagnetic devices using X-ray imaging techniques and we will carry out extensive theoretical modelling.
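The two-sublattice picture described above is often summarised with two standard textbook quantities (a general formulation, not specific to this proposal):

```latex
% Sublattice magnetisations M_A and M_B point in opposite directions.
% Net magnetisation (vanishes in an antiferromagnet):
\mathbf{m} = \tfrac{1}{2}\left(\mathbf{M}_A + \mathbf{M}_B\right) \approx \mathbf{0}
% Neel vector (the order parameter that stores the bit):
\mathbf{n} = \tfrac{1}{2}\left(\mathbf{M}_A - \mathbf{M}_B\right) \neq \mathbf{0}
```

Because the stored information lives in the Néel vector n rather than in a net magnetisation m, the memory element produces no stray field and is insensitive to external magnetic fields.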

  • Funder: UK Research and Innovation Project Code: NE/Y002636/1
    Funder Contribution: 82,343 GBP

    Chlorophyte "snow algae" and Streptophyte "glacier algae" are found across the cryosphere, forming widespread algal blooms in snowpacks and on glacier ice surfaces during spring/summer melt seasons. These blooms hold significant potential to exacerbate the already rapid loss of snowpack and glacial ice resources driven by climate change, because they establish albedo feedbacks that amplify melt. Their presence also leads to the construction of active microbial food-webs that provide important ecosystem functions, e.g. carbon sequestration, nutrient cycling and export of resources to downstream systems. The algae themselves are also important analogues for what life was like on Earth during past mass glaciations, and for how life may exist on other frozen planets across our solar system. Driven by these novelties, the snow and glacier algal research community has expanded significantly over recent years, with active projects now spanning Arctic, Alpine and Antarctic regions of the cryosphere. To date, however, research projects have tended to work in isolation, employing different methods for the analysis of blooms. This has prevented comparisons of findings between regions of the cryosphere and an overall appreciation of the global role and impacts of blooms at present. In turn, we cannot yet project the fate of snow and glacier algal blooms into the future under climate change, or back to the past during key periods of Earth's history. Yet the critical mass achieved in the snow and glacier algal research community also presents an opportunity to pool knowledge and resources, and to align methods to drive the field to new achievements. The CASP-ICE project brings together leaders in the field of snow and glacier algal research (2 UK investigators and 12 international partners) to undertake the foundational work needed to align efforts across the research community and unlock the next generation of science on snow and glacier algal blooms cryosphere-wide.
Specifically, we will tackle the following four major tasks:
    1. Define consistent methods for sampling and mapping snow and glacier algal blooms within field sites, so that datasets produced into the future will be completely comparable across different regions and times of sampling.
    2. Apply these methods in study sites in which the CASP-ICE team are currently working, to produce the first set of standardized samples and maps of blooms for the community to work with.
    3. Undertake the nuts-and-bolts validation of both laboratory-based methods for analyzing field samples and computational methods for integrating field measurements and mapping datasets with the larger-scale satellite imagery needed to monitor blooms at global scales.
    4. Establish a list of field sites that can form the backbone of an ongoing cryospheric algal bloom monitoring network, and secure the funding to continue monitoring into the future.
    CASP-ICE will achieve these tasks through a series of networking and knowledge exchange activities as well as hands-on science. An initial workshop in spring 2024 will provide the platform to define best-practice methods for the community and start talks on future network structure and direction. All partners will then undertake sampling and sample/data analysis across their respective study regions to produce the first fully validated datasets on snow and glacier algal blooms across the cryosphere. The protocols defined and datasets produced will be leveraged in subsequent funding bids prepared during a series of networking visits and partner meetings led by the project PI, providing the support needed for ongoing monitoring of blooms into the future as climate change proceeds. CASP-ICE will provide the network and scientific foundation needed to tackle large-scale questions about the role of cryospheric algal blooms in the Earth System at present, into the future under climate change, and back into the past.

  • Funder: UK Research and Innovation Project Code: AH/T002859/1
    Funder Contribution: 794,148 GBP

    Every day, as we use language, we unconsciously select forms of words that feel "right" for what we want to say. Occasionally, however, multiple forms may compete for a slot, such as the participle of 'prove' (have proved? have proven?); here, users find both forms adequate, although each of us might only use one of them. Elsewhere, we lack a suitable form where one is expected: we may baulk at forming the past tense of the verb 'troubleshoot', where we have a "slot" (the past tense of a particular word) but no form that can adequately fill it (troubleshot? troubleshooted?). These examples of "feast" (multiple forms) and "famine" (no forms) show that selecting the "right" word form is not a process of mechanically mapping from function to form; instead, users weigh and select forms from a basket of those available to them, sometimes keeping around more forms than necessary and sometimes failing to find a form that works for them. Linguists term the first sort of mismatch 'overabundance', and the second sort 'defectivity'. These mismatches cause difficulty for traditional linguistic theories, which assume that each form we use fulfils one function, and that each function can be fulfilled by one form. For the most part, this is true: if adult native speakers need a past tense of the verb 'choose', they head unerringly for 'chose', never *choosed or *chost. This results in an assumption that inflection (form-selection) is automatic, judgement-free, and innate rather than learned. Language handbooks then write this assumption into practice, describing forms and features very differently from how a speaker might actually use them. Defectivity and overabundance have traditionally been treated as separate phenomena, arising in different circumstances that explain their divergent outcomes. Our project highlights the commonalities in these circumstances - it is the outcomes (multiple forms or none) that mark them as distinct.
Using data from morphologically complex languages - Czech, Croatian, Estonian and a further language chosen by the PDRA - we explore a fundamental question: which factors push users and models of use down one or the other path? We converge on an answer by considering multiple perspectives:
    • The Czech National Corpus Institute will use CORPUS DATA, particularly from more "naturalistic" subtitle corpora, to uncover new methods for identifying competition (overabundance) and gaps (defectivity) in the "real world" of texts; their methods (collocation, keymorph analysis, frequency distribution) will be refined, elaborated and tested on the project languages via experimental and computational methods.
    • Sheffield will use EXPERIMENTAL DATA from adult native speakers to examine language users' reactions to overabundance and defectivity by confronting them in similarly structured tests, showing in what way users' reactions to these two slot types are (or are not) measurably different. These data will also help assess aspects of SOCIOLINGUISTIC VARIATION, such as education and reading skills.
    • York will test, refine and develop COMPUTATIONAL MODELS OF INFLECTIONAL MORPHOLOGY to see how they handle and predict overabundant and defective slots.
    • Zagreb and York will collect LANGUAGE ACQUISITION DATA in naturalistic and experimental settings to illuminate how learners make sense of "messy" data such as those found in overabundant and defective slots.
    Overabundance and defectivity will be examined as potentially differing, temporary responses to structures of uncertain predictability. As an integral part of the project, staff in Zagreb and Prague will translate our findings into GUIDELINES for language users, feeding directly into public-facing language resources such as online dictionaries and handbooks of the target languages.
Instead of seeing idealised systems that linguists project for them, as at present, speakers will get information on how their language is used in a clear and user-friendly format.
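The corpus-based detection of competing forms can be pictured with a minimal sketch (an invented illustration, not the project's actual method or data): count the attested variants competing for one slot; when two or more variants both occur with non-trivial frequency, the slot is a candidate for overabundance.

```python
import re
from collections import Counter

# Toy "corpus": a handful of invented sentences in which the participle
# slot of 'prove' is filled by two competing variants.
corpus = (
    "It has proven difficult. She has proved her point. "
    "They have proven reliable. He has proved it before. "
    "The method has proved robust."
)

# Tokenise crudely and count only the two competing variant forms.
tokens = re.findall(r"[a-z']+", corpus.lower())
counts = Counter(t for t in tokens if t in {"proved", "proven"})

# Both variants are attested, so this slot looks overabundant; a slot
# with one attested form suggests regular inflection, and a slot with
# none (given enough data) is a candidate for defectivity.
print(counts)
```

Real corpus methods would of course control for lemma frequency, register and speaker variation; the point here is only the shape of the frequency-distribution evidence.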

  • Funder: UK Research and Innovation Project Code: EP/V00252X/1
    Funder Contribution: 215,868 GBP

    (Numerical) optimization problems lie at the heart of many modern applications in artificial intelligence (AI), machine learning (ML), and computer science (CS) in general. Crucially, many of these problems can be naturally translated into only a handful of very powerful frameworks. One of the most prominent such frameworks is mixed integer linear programming (MILP), which has long developed into an invaluable tool for commercial and academic applications across a wide range of industries and research areas. For instance, according to HG Insights (https://discovery.hgdata.com/), the IBM ILOG Suite, which contains the MILP solver CPLEX, is used by 2953 companies in the USA, of which more than 1000 have revenues above 1 billion USD. The fact that so many numerical optimization problems can be naturally translated into (different fragments/classes of) MILP has made MILP invaluable for the theoretical and practical analysis and solution of numerical optimization problems. Indeed, MILP has long been an important part of the algorithmic toolbox for researchers in AI, ML, and TCS, since a translation into MILP is often the only way to analyse the complexity of their numerical optimization problems. Moreover, the wide availability of surprisingly efficient academic and commercial MILP solvers means that a translation into MILP is often the easiest, most efficient, and sometimes even the only known way to solve many real-world optimization problems in practice. Despite all this, our understanding of the fine-grained complexity, a.k.a. the parameterized complexity (PC), of MILP is still in its infancy. This is in stark contrast to the situation for related non-numerical decision problems such as Boolean satisfiability (SAT) and constraint satisfaction (CSP), where the introduction of PC has led to an almost comprehensive understanding of the complexity of these problems under a wide variety of restrictions.
There are two main reasons for our lack of understanding of the PC of MILP. First, MILP is an extremely challenging computational problem, requiring very different tools and algorithmic techniques than non-numerical problems such as SAT and CSP that have been the traditional focus of PC and TCS. Second, the tools required for the adequate definition of tractable classes for MILP have only recently been developed by the PC community. This situation has, however, recently started to change, with my collaborators and me laying the foundations for the study of the PC of MILP by pioneering the analysis of MILP in terms of graphical representations of the constraint matrix. Our initial study, which focuses solely on decompositional methods and parameters, has already been picked up by several leading research groups and has led to the development of novel algorithmic techniques for MILP. Notably, the obtained results have also led to novel algorithms and various algorithmic breakthroughs for combinatorial problems in areas such as scheduling, stringology and social choice, as well as for the travelling salesman problem. Building upon these promising initial results, the overarching vision of this project is to obtain a comprehensive understanding of which structural and numerical properties of MILP instances are responsible for computational hardness or tractability. Towards this aim, we will develop novel ways to measure the structural and numerical properties of MILP instances, in terms of so-called parameters, and we will then analyse the impact of these parameters on the complexity of MILP using the framework of PC. The main outcomes of this project will be novel and very general tractable classes, as well as novel algorithmic upper-bound and lower-bound techniques for MILP, that will have far-reaching consequences for a wide range of optimization problems and will potentially also influence the future development of academic and commercial MILP solvers.
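To make the MILP framework concrete, here is a deliberately tiny instance and solver sketch (an invented toy, not the project's methods): maximise x + 5y subject to x + 2y <= 5, with x a non-negative integer and y a non-negative continuous variable. Because only one variable is integral, we can enumerate its finitely many feasible values and solve the remaining one-variable LP in closed form; real solvers such as CPLEX replace this enumeration with branch-and-bound.

```python
# Toy MILP:  maximize x + 5*y
#            subject to x + 2*y <= 5,  x integer >= 0,  y >= 0.
# Mixed-integer character: x is discrete, y is continuous, so the
# optimum may place y at a fractional value.

def solve_toy_milp():
    best = None
    for x in range(0, 6):        # x + 2*y <= 5 with y >= 0 forces x <= 5
        y = (5 - x) / 2          # best feasible continuous y for this x
        obj = x + 5 * y
        if best is None or obj > best[2]:
            best = (x, y, obj)
    return best                  # (x, y, objective value)

x, y, obj = solve_toy_milp()
print(x, y, obj)                 # x=0, y=2.5, objective 12.5
```

Even this toy shows why MILP is hard in general: the feasible set is a union of slices (one per integer assignment), so structural parameters of the constraint matrix, as studied in this project, govern how much of that combinatorial explosion a solver must explore.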

  • Funder: UK Research and Innovation Project Code: EP/V05645X/1
    Funder Contribution: 227,201 GBP

    Over the past few months, we have laid the groundwork for the ReproHum project (summarised in the 'pre-project' column in the Work Plan document) with (i) a study of 20 years of human evaluation in NLG which reviewed and labelled 171 papers in detail, (ii) the development of a classification system for NLP evaluations, (iii) a proposal for a shared task for reproducibility of human evaluation in NLG, and (iv) a proposal for a workshop on human evaluation in NLP. We have built an international network of 20 research teams currently working on human evaluation who will actively contribute to this project (see Track Record section), making combined contributions in kind of over £80,000. This pre-project activity has created an advantageous starting position for the proposed work, and means we can 'hit the ground running' with the scientifically interesting core of the work. In this foundational project, our key goals are the development of a methodological framework for testing the reproducibility of human evaluations in NLP, and of a multi-lab paradigm for conducting such tests in practice, carrying out the first study of this kind in NLP. We will (i) systematically diagnose the extent of the human evaluation reproducibility problem in NLP and survey related current work to address it (WP1); (ii) develop the theoretical and methodological underpinnings for reproducibility testing in NLP (WP2); (iii) test the suitability of the shared-task paradigm (uniformly popular across NLP fields) for reproducibility testing (WP3); (iv) create a design for multi-test reproducibility studies, and run the ReproHum study, an international large-scale multi-lab effort conducting 50+ individual, coordinated reproduction attempts on human evaluations in NLP from the past 10 years (WP4); and (v) nurture and build international consensus regarding how to address the reproducibility crisis, via technical meetings and growing our international network of researchers (WP5).

