
LLR

Leprince-Ringuet Laboratory
13 Projects, page 1 of 3
  • Funder: French National Research Agency (ANR) Project Code: ANR-13-IS05-0001
    Funder Contribution: 208,208 EUR

    Diffuse emission is the most prominent observational signature of the gamma-ray sky at Gigaelectronvolt (GeV) energies. Galactic diffuse emission was established before individual gamma-ray sources started to emerge, and it has been a prime source of knowledge about cosmic-ray particle interactions and radiation processes ever since. Diffuse GeV gamma-ray emission still constitutes the systematic limit for source detection near the instrumental threshold. In contrast to the GeV domain, the search for diffuse emission at Teraelectronvolt (TeV) energies, the domain of Very High Energy (VHE) gamma rays, is still in its infancy, largely due to the predominant charged-particle background that constitutes a principal instrumental challenge of the atmospheric Cherenkov technique.

    Diffuse emission is expected in the VHE domain, too: on Galactic scales, primarily from hadronic particle interactions with interstellar gas and inverse Compton scattering of high-energy electrons on interstellar radiation fields, but also where escaping particles encounter intense radiation fields or dense molecular clouds in the local vicinity of cosmic accelerators. Both processes are indicative of particle escape from the acceleration regions. This last, most energetic window for astronomical investigation was unveiled by systematic observations with the H.E.S.S. telescope array, a breakthrough recognized by the award of the Descartes Prize in 2006 and the Rossi Prize in 2010. One of the major achievements of H.E.S.S. was the survey of the inner regions of our Galaxy, which led to the discovery of more than 50 new energetic sources.

    The proposed project aims at establishing the existence and the spatial and spectral signature of diffuse emission at TeV energies. H.E.S.S. observations are to be compared with predictions from a model of diffuse VHE emission that will be developed specifically for the project. On the instrumental side, the investigation will push the limits of atmospheric Cherenkov imaging in sensitivity and energy through the development of more precise reconstruction techniques and more effective background-subtraction methods. Advanced modelling of the isotropic charged-particle background and the development of a likelihood-based analysis technique are proposed, the latter being a novelty for investigating VHE data. Systematics induced on the instrument response by the geomagnetic field and by inhomogeneities of the night-sky background will be addressed with particular care. The construction of a model of diffuse emission at TeV energies is demanding due to competing phenomena, such as the energy-dependent escape of charged particles from the acceleration region vs. particle transport on larger scales inside our Galaxy.

    Detection and study of diffuse VHE emission will constitute a major scientific breakthrough, allowing the community to further understand particle propagation in the Galaxy up to the knee (10^15 eV) and how particles are released into the interstellar medium. It will allow a closer connection to GeV measurements, benefiting from orthogonal observational techniques (satellite-based direct pair conversion vs. ground-based indirect air-shower detection) deployed in a large-scale investigation not tied to individual sources. Consequently, the intensity and energy dependence of the different constituents of the diffuse emission will extend our understanding of common physics processes to the most energetic end of the electromagnetic spectrum. Through an assessment of the irreducible background, the project will prepare for the advent of the Cherenkov Telescope Array by establishing the hard detection limit for gamma-ray sources, and it will allow investigation of a putative dark matter component in the suspected WIMP rest-mass region. The project results will allow generalizing from single-source detection to source-population studies and, for the first time, estimating the unresolved source component in a comprehensive way.
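    One of the proposed novelties is a likelihood-based analysis of VHE data. As a rough illustration of the general idea (a minimal sketch, not the project's actual method), the following fits the strength of a diffuse-emission template on top of an isotropic background template by maximizing a binned Poisson likelihood; all templates and numbers are invented.

        # Minimal sketch of a binned Poisson likelihood fit for a diffuse
        # component on top of an isotropic background template.
        # Templates and numbers are illustrative, not H.E.S.S. data.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(seed=1)
        n_bins = 50
        background = np.full(n_bins, 100.0)                      # isotropic background template
        diffuse = 20.0 * np.exp(-np.linspace(0.0, 3.0, n_bins))  # toy diffuse template

        # Pseudo-data: background plus a diffuse component of true strength 0.8.
        counts = rng.poisson(background + 0.8 * diffuse)

        def neg_log_likelihood(params):
            """Binned Poisson negative log-likelihood (constant terms dropped)."""
            s, b = params                              # diffuse and background strengths
            mu = b * background + s * diffuse          # expected counts per bin
            return np.sum(mu - counts * np.log(mu))

        fit = minimize(neg_log_likelihood, x0=[1.0, 1.0],
                       method="L-BFGS-B", bounds=[(1e-6, None), (1e-6, None)])
        s_hat, b_hat = fit.x
        print(f"diffuse strength: {s_hat:.2f}, background strength: {b_hat:.2f}")

    In a real analysis, the templates would come from the emission and background models described above, and the fit would run over sky coordinates as well as energy.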

  • Funder: French National Research Agency (ANR) Project Code: ANR-24-CE31-7744
    Funder Contribution: 480,121 EUR

    Since the discovery of the Higgs boson at the CERN LHC, the hunt for new physics, i.e. new particles that are not predicted by the standard model (SM) of particle physics, has continued. In parallel, increasingly precise measurements of the Higgs boson and other SM particles are being performed. So far, this search for new physics has not led to a discovery. It is therefore important to consider the possibility that the new particles we are looking for are heavier than can be produced at the LHC. In this case, it is only through precise measurements of the Higgs boson and other SM particles that we can learn about the presence of new physics, and so this avenue must be explored. In this project, we propose to develop a new method for measurements of the Higgs and electroweak sectors at the LHC that is as sensitive as possible to the effects of new physics while providing sufficient information for the results to be included in a global combination of measurements, which provides the best constraints on the presence of new physics. The novel measurement approach will be demonstrated through measurements of Higgs bosons and Z bosons, paving the way for wider adoption of this method.
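    To illustrate why combinable measurements constrain new physics, the sketch below performs a simple chi-square combination of hypothetical signal-strength measurements whose dependence on a single new-physics coefficient is linearized; the measurements, covariance, and sensitivities are all invented and are unrelated to the project's actual method.

        # Sketch: combining measurements to constrain one new-physics coefficient c.
        # Each measured signal strength m_i is predicted as 1 + k_i * c (linearized);
        # all numbers below are invented.
        import numpy as np

        m = np.array([1.05, 0.92, 1.10])       # measured signal strengths
        C = np.diag([0.08, 0.10, 0.12]) ** 2   # measurement covariance (uncorrelated here)
        k = np.array([0.5, -0.3, 0.8])         # linearized sensitivity of each measurement to c

        # chi2(c) = (m - 1 - k*c)^T C^-1 (m - 1 - k*c); the minimum is analytic.
        Cinv = np.linalg.inv(C)
        c_hat = (k @ Cinv @ (m - 1.0)) / (k @ Cinv @ k)
        sigma_c = 1.0 / np.sqrt(k @ Cinv @ k)
        print(f"c = {c_hat:.3f} +/- {sigma_c:.3f}")

    Publishing the measurements together with their covariance is what allows such a combination; a measurement optimized only for one interpretation could not be reused this way.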

  • Funder: French National Research Agency (ANR) Project Code: ANR-21-CE31-0030
    Funder Contribution: 215,338 EUR

    Most of the recent discoveries in particle physics are linked to increases in detector volume and/or granularity, giving access to complex phenomena that were previously out of reach for lack of precision. This approach increases the available statistics and precision by multiple orders of magnitude, which facilitates the detection of rare events, at the price of a significant increase in the number of channels. The challenge is that most standard reconstruction and triggering techniques no longer work in such a context. For example, energy-threshold-based triggers fail to handle the complexity of high pile-up collisions.

    Neural-network methods are known to handle noisy and complex data well and to deliver high-level classification and regression. In particular, convolutional techniques have enabled outstanding progress in computer vision. Unfortunately, they do not cope with the peculiar topologies of particle detectors and the irregular distribution of their sensors. Alternatives have been developed to obtain the same classification power in this kind of non-Euclidean environment, for example spatial graph convolution, which applies adapted convolution kernels to data represented as an undirected graph labelled by the sensor measurements. These techniques have been shown to give excellent results on particle-detector data at the Large Hadron Collider, and also in neutrino experiments. They allow particle identification and continuous parameter regression, but also segmentation of entangled data, a typical concern in secondary particle showers.

    However, the operations that transform the data into a graph are often very computationally expensive. In particular, all techniques in which graph construction is based on learned parameters (in the machine-learning sense) prevent the system from being used where computational time or latency is constrained: triggering electronics, real-time data monitoring systems, or even offline systems facing too large a data volume. For example, in the Super-Kamiokande neutrino experiment, a sophisticated shape identifier could advantageously replace the current energy cut in the reconstruction phase, which rejects many low-energy events despite their physics interest. Another example is the future high-granularity endcap calorimeter (HGCal) of CMS, for which it becomes crucial to extract high-level trigger primitives directly from the electronics in order to handle the complexity of high-luminosity collisions and take accurate triggering decisions. It is therefore of utmost importance to design high-performance versions of these algorithms that can operate in all such constrained situations and allow their deployment in the detectors.

    The objective of this project is to develop and implement new, efficient selection algorithms for constrained computational environments by combining three main ideas:
    • Reducing the graph-construction complexity by developing algorithms based on pre-calculated graph connectivity, which would give an almost linear complexity for the online part by exploiting the intrinsic parallelism of the problem. This is made possible by the fixed positioning of the sensors in particle detectors (a minimal sketch of this idea follows the abstract).
    • Developing a segmented version of graph convolution, allowing it to be distributed over multiple computational units.
    • Optimizing the size and the nature of the convolution networks with advanced derivative-free optimization techniques, and adapting them to the electronic implementation.
    These objectives will be pursued in three experimental contexts: offline HGCal reconstruction, online HGCal level-1 trigger, and Super-Kamiokande reconstruction of the Diffuse Supernova Neutrino Background (DSNB).
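    As referenced in the first idea above, here is a minimal sketch (with invented sizes and a toy geometry) of a graph convolution whose connectivity is precomputed offline from fixed sensor positions, so that the online pass reduces to linear algebra over a stored adjacency. It illustrates the general technique only, not the project's actual algorithms.

        # Sketch: graph convolution with connectivity precomputed offline from
        # fixed sensor positions; the online step is plain matrix algebra.
        # Sizes, k, and the random "measurements" are illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        n_sensors, k = 200, 6

        # Offline: sensor positions are fixed, so the k-nearest-neighbour
        # adjacency is computed once and stored.
        positions = rng.uniform(size=(n_sensors, 3))
        d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
        neighbours = np.argsort(d2, axis=1)[:, 1:k + 1]   # k nearest, excluding self

        A = np.zeros((n_sensors, n_sensors))
        A[np.repeat(np.arange(n_sensors), k), neighbours.ravel()] = 1.0
        A = np.maximum(A, A.T) + np.eye(n_sensors)        # undirected, with self-loops
        d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
        A_norm = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2

        # Online: one graph-convolution layer, H' = relu(A_norm @ H @ W).
        H = rng.normal(size=(n_sensors, 4))               # per-sensor measurements
        W = rng.normal(size=(4, 8)) * 0.5                 # learned weights (random here)
        H_next = np.maximum(A_norm @ H @ W, 0.0)
        print(H_next.shape)                               # (200, 8)

    Because A_norm is fixed and, in practice, sparse, the online cost scales with the number of edges rather than with an on-the-fly graph search, which is what makes a trigger-level implementation conceivable.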

  • Funder: French National Research Agency (ANR) Project Code: ANR-18-CE31-0007
    Funder Contribution: 295,777 EUR

    Major discoveries in high-energy physics have always relied on the development of innovative detectors and data-acquisition techniques, and these technologies have also found numerous applications in other domains (e.g. health, energy). New calorimetry techniques developed for future high-energy and high-luminosity accelerators are providing more granular detectors that give access to a 3D view of particle showers. This is in particular the case for the calorimeters being developed for the very high luminosities of the LHC (HL-LHC). The fine segmentation of these calorimeters is a powerful tool for reconstructing the very busy collision events produced in such colliders, made of the products of more than a hundred proton-proton interactions (pile-up). But the amount of data produced by such detectors is enormous and raises new challenges for the trigger systems, which need to transfer and process these data in the most effective way. The architecture of these systems and the algorithmic techniques they implement need to be completely redesigned to make use of this unprecedented data flow.

    In addition, a more global picture of the collision events is needed at trigger level to maintain an efficient selection of interesting physics events at high luminosities. This is why the information from the inner trackers will be included in the level-1 (L1) trigger systems of the ATLAS and CMS experiments for the HL-LHC, making it possible to develop so-called Particle Flow algorithms already at the electronics level of the trigger system. The topologies of interesting collision events need to be identified rapidly despite the extremely harsh environment induced by the pile-up of more than a hundred collisions. There are currently no algorithms that can identify electrons, photons, tau leptons, and hadron jets in 3D calorimeters and trackers within the time window available to L1 trigger systems.

    The objective of this project is to develop and implement innovative event-reconstruction techniques for L1 trigger systems, based on highly granular calorimeters coupled with trackers. These techniques will make it possible to exploit the full potential of the upgraded detectors for the HL-LHC, such as the new CMS highly granular endcap calorimeter (HGCal) and track trigger. This relies on overcoming technological obstacles at several points of the trigger chain in order to ensure that the best trigger decisions are made.
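    To give a flavour of the Particle Flow idea mentioned above, the toy sketch below links tracks to calorimeter clusters by angular distance and then splits the cluster energies into charged and neutral candidates; the matching threshold and all values are invented, and real L1 algorithms are far more constrained.

        # Toy sketch of particle-flow linking: match tracks to calorimeter
        # clusters by angular distance; matched clusters are treated as
        # charged particles, unmatched ones as neutrals. Values are invented.
        import numpy as np

        def delta_r(eta1, phi1, eta2, phi2):
            """Angular distance in the (eta, phi) plane, phi wrapped to [-pi, pi]."""
            dphi = (phi1 - phi2 + np.pi) % (2 * np.pi) - np.pi
            return np.hypot(eta1 - eta2, dphi)

        tracks = [(0.50, 1.20), (-1.10, -0.40)]                # (eta, phi) of tracks
        clusters = [(0.52, 1.18, 35.0), (-1.05, -0.45, 20.0),  # (eta, phi, energy)
                    (1.80, 2.90, 12.0)]

        charged, neutral = [], []
        for eta_c, phi_c, energy in clusters:
            matched = any(delta_r(eta_t, phi_t, eta_c, phi_c) < 0.1
                          for eta_t, phi_t in tracks)
            (charged if matched else neutral).append(energy)

        print(f"charged: {sum(charged)} GeV, neutral: {sum(neutral)} GeV")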

  • Funder: French National Research Agency (ANR) Project Code: ANR-23-CE31-0017
    Funder Contribution: 386,312 EUR

    The project aims to study the impact of timing accuracy in calorimeters on the reconstruction quality of Particle Flow Algorithms (PFA), and in particular to determine what timing accuracy is necessary to improve the separation of close-by hadronic and electromagnetic showers. To this end, the timing response of the prototype calorimeters SiWECAL (electromagnetic) and SDHCAL (hadronic), the latter equipped with MGRPC detectors (Multi-layer Glass Resistive Plate Chambers), will be modelled and included as input to the ARBOR and APRIL PFA algorithms.
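    As a back-of-the-envelope illustration of why timing resolution matters for shower separation (a toy model, unrelated to the actual SiWECAL/SDHCAL simulation), the sketch below smears the hit times of two showers arriving 100 ps apart with different Gaussian resolutions and counts how many hits a simple midpoint time cut assigns to the correct shower.

        # Toy model: two overlapping showers 100 ps apart; hit times are smeared
        # with a Gaussian resolution sigma_t and assigned by a midpoint time cut.
        # The separation, resolutions, and cut are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(7)
        n_hits, dt = 10_000, 100.0                   # hits per shower; separation (ps)

        for sigma_t in (20.0, 50.0, 100.0):          # resolution hypotheses (ps)
            t1 = rng.normal(0.0, sigma_t, n_hits)    # hit times of shower 1
            t2 = rng.normal(dt, sigma_t, n_hits)     # hit times of shower 2
            cut = dt / 2.0                           # assign hits by midpoint cut
            correct = 0.5 * (t1 < cut).mean() + 0.5 * (t2 >= cut).mean()
            print(f"sigma_t = {sigma_t:5.1f} ps -> correctly assigned: {correct:.3f}")

    With a resolution well below the arrival-time separation, nearly all hits are assigned correctly; once the resolution approaches the separation, time alone no longer disentangles the showers.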

