
Leiden University, Leiden Institute of Advanced Computer Science
64 Projects, page 1 of 13
Project 2016 - 2022
Partners: Leiden University, Leiden Institute of Advanced Computer Science; Leiden University; Adelante Zorggroep; Technische Universiteit Eindhoven - Eindhoven University of Technology, Faculteit Bouwkunde - Department of the Built Environment, Structural Design of Concrete Structures; Technische Universiteit Eindhoven - Eindhoven University of Technology, Faculteit Bouwkunde - Department of the Built Environment; Maastricht University, Faculty of Psychology and Neuroscience, Clinical Psychological Science (CPS); Maastricht University; Technische Universiteit Eindhoven - Eindhoven University of Technology
Funder: Netherlands Organisation for Scientific Research (NWO)
Project Code: 451-15-032

Tinnitus, or ringing of the ears, is defined as the perception of a continuous sound in the absence of a corresponding acoustic stimulus in the external environment. An estimated 70 million people in Europe experience tinnitus, and for 7 million it is a chronic, incapacitating condition that haunts them to the point where it interferes with every aspect of daily living. Residing within, and confined to, the individual's subjective perceptual experience, tinnitus is not measurable or quantifiable by objective physical recordings, nor is it traceable to disease, injury, or pathology in the brain or elsewhere. Empirical evidence is lacking both for the effectiveness of curative tinnitus treatments and for audiological interventions such as hearing aids and sound-generating devices that mask the sound.

Moreover, the audiometric characteristics of the tinnitus sound (loudness/pitch) hardly predict the severity of the condition or treatment outcomes. Contrary to the scientific evidence, masking or attenuating the tinnitus sound is still the most widespread treatment approach in clinical practice. I propose the counterintuitive conjecture that it is not the sound itself that is so devastating, but rather the fear-conditioned responses and associated threat appraisals that maintain severe tinnitus disability. Indeed, empirical evidence is growing for the effectiveness of a cognitive-behavioral approach, and our recent findings support the importance of addressing tinnitus-related fear and fear responses in the management of patients with disabling tinnitus. In this project I will experimentally test the idea that initial threat appraisal and fearful responses predict increased tinnitus suffering. In addition, I will test the idea that exposure to the tinnitus sound is an effective way of decreasing fear of tinnitus and disability in the long term, whereas masking the sound is counterproductive. My research may provide an important impetus for the development of novel tinnitus-treatment approaches.

Keywords: tinnitus, threat appraisal, fear conditioning, exposure, masking
Project 2021 - 2023
Partners: Leiden University, Leiden Institute of Advanced Computer Science; Leiden University
Funder: Netherlands Organisation for Scientific Research (NWO)
Project Code: VI.Veni.202.195

Reproducibility is the Achilles' heel of computer systems, a largely experimental field. At a time when performance (i.e., fast response time) is key to our society, common experimental practice is not sufficient to achieve reproducible performance results. In performance-sensitive systems, such behavior is not only unwanted but dangerous. This holds especially for cloud computing, which is powered by large datacenters that offer seamless access to data and compute power to all members of our society. Clouds are widely known to be unable to offer performance guarantees and thus suffer from performance variability: similar operations, when run multiple times, show different response times. My research has shown that irreproducible experiments that ignore this variability are very common at all levels of scientific publication on cloud systems. As our society relies more and more on clouds, the coupling of performance variability with irreproducible experimental practice is a grand challenge that must be addressed now. To address this grand challenge, I propose the design and implementation of the first practical approach to reproducibility that takes cloud performance variability into consideration. My research radically changes the way we currently approach reproducible experiment design.

I will make three key contributions: (1) I will extend the fundamental knowledge we have about the extent and impact of performance variability in clouds; (2) I will break new ground by providing a principled approach for variability-proof experimentation, a fundamental departure from how researchers reason about reproducible experiments in this field; and (3) I will create a novel method for variability-proof cloud operation, focusing on one emerging cloud paradigm, serverless computing. Through my prior research experience and international collaboration network, I am uniquely qualified to solve this challenge. I will disseminate this work as open science: FOSS, FAIR data, and open-access publications.
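The core observation of this project, that the same operation run repeatedly in a cloud can show different response times, can be illustrated with a small measurement sketch. This is not code from the project; the helper names are hypothetical, and a jittery local function stands in for a real cloud operation:

```python
import random
import statistics
import time


def measure_variability(operation, repetitions=10):
    """Run `operation` repeatedly and summarize response-time variability.

    Returns the mean latency in seconds and the coefficient of variation
    (CV = stdev / mean): a CV near 0 means stable performance, while a
    high CV is the run-to-run variability the project targets.
    """
    latencies = []
    for _ in range(repetitions):
        start = time.perf_counter()
        operation()
        latencies.append(time.perf_counter() - start)
    mean = statistics.mean(latencies)
    cv = statistics.stdev(latencies) / mean if mean > 0 else 0.0
    return mean, cv


def noisy_operation():
    # Stand-in for a cloud request: artificial jitter between 1 and 3 ms.
    time.sleep(random.uniform(0.001, 0.003))


mean_latency, cv = measure_variability(noisy_operation)
print(f"mean latency: {mean_latency * 1000:.2f} ms, CV: {cv:.2f}")
```

Reporting only a single measurement (or only the mean) hides exactly the variability described above; a variability-aware experiment would report the distribution across repetitions.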
Project 2018 - 2018
Partners: Leiden University, Leiden Institute of Advanced Computer Science; Leiden University
Funder: Netherlands Organisation for Scientific Research (NWO)
Project Code: 040.11.633

The computing-infrastructure aspect has three equally important contexts. In the first context, the algorithms are implemented in a numerical library with a specific computational capability, e.g., numerical linear algebra or the solution of elliptic partial differential equations. The main consideration is an efficient and robust implementation for relatively knowledgeable users. In the second context, the numerical algorithms and the associated library are integrated into a more general, typically interactive, computing environment, along with other libraries that provide a suitable range of algorithmic capabilities and support users whose expertise is concentrated in a specific application, with little or none in the underlying algorithms and computations. The computing platform for this context mixes low and moderate computational power. The third context involves computing at a much larger scale with regard to the amount of data, the complexity of the problems, and the complexity of the software interface, which usually supports both interactive and automated user interaction. A much larger set of application tasks and users is supported.
Project 2010 - 2016
Partners: Leiden University, Leiden Institute of Advanced Computer Science; Leiden University
Funder: Netherlands Organisation for Scientific Research (NWO)
Project Code: 612.071.305

We request a total of 110,000 euros to purchase the hardware for building a GPU-based supercomputer at the Science Faculty of Leiden University. We propose to assemble and use a multipurpose parallel computer for scientific production and for the further development of GPU-accelerated algorithms. By attaching graphics processing units (GPUs) to a cluster of workstations, we gain flexibility, communication characteristics, and raw supercomputer power comparable to the national supercomputer Huygens-II, at a fraction of the cost. Normally the GPUs in a computer are used for visualization only, but by programming them smartly, their enormous compute power can be used to accelerate general user applications by a factor of 100 or more. The electricity used by such a computer remains comparable to that of a single PC, reducing the CO2 output per TeraFLOP by more than a factor of 100 compared to a general-purpose supercomputer. By connecting several such GPU-equipped computers, we can build a low-cost parallel computer that matches today's fastest supercomputers. We plan to build and use an 8-node low-latency Beowulf cluster with 4 GPUs per node, providing a total compute power of around 64 TFLOP/s using, for example, four NVIDIA GTX 295s. We call the computer the Little Green Machine (LGM), to indicate that its footprint and power consumption are tiny compared to a regular supercomputer.
Project From 2024
Partners: Leids Universitair Medisch Centrum, Divisie 3, Neurologie; LUMC; Amsterdam UMC; Universiteit Leiden, Faculteit der Wiskunde en Natuurwetenschappen, Leiden Institute of Advanced Computer Science (LIACS), Imaging & BioInformatics; Leiden University, Leiden Institute of Advanced Computer Science; Leiden University; Amsterdam UMC - Locatie AMC, Neurologie & Klinische Neurofysiologie; Amsterdam UMC - Locatie AMC, Radiologie en Nucleaire Geneeskunde; Leids Universitair Medisch Centrum, Divisie 3, Neurochirurgie
Funder: Netherlands Organisation for Scientific Research (NWO)
Project Code: 20852

Neuromuscular disorders, which affect millions of people in Europe alone, lead to (progressive) muscle weakness or sensory deficits that gravely affect life expectancy and quality of life. To diagnose these disorders, needle electromyography (nEMG) data must be assessed audio-visually by experts, which is subjective and time-consuming. In this project, experts in clinical neurophysiology, data science, and instrumentation will develop an artificial-intelligence platform to automatically, objectively, and accurately interpret nEMG data. They will validate the method using real nEMG data from around the world, and take first steps towards integrating the platform into existing software for clinical use.