Powered by OpenAIRE graph


Country: Norway
329 Projects, page 1 of 66
  • Funder: EC
    Project Code: 101126560
    Funder Contribution: 2,865,600 EUR

    LEAD AI at UiB will provide an exceptional inter- and multidisciplinary research and training programme for 19 experienced researchers (ERs) who are versed in, or curious about, AI. The recruitment process will be inclusive, transparent and merit-based, and aims to attract excellent researchers both locally and internationally. The researchers will be employed in postdoctoral positions for 3-4 years, covering both incoming and outgoing mobility. In LEAD AI, UiB will exploit its unique cross-disciplinary strengths in AI research, providing ERs in fields as diverse as computer science, medicine, law, psychology, information science and the humanities with both disciplinary expertise and the broader knowledge of AI research that is needed to develop trustworthy and ethical AI that benefits society. The programme will promote institutional transformation and pioneering research on AI subjects, while providing exceptional AI research and training opportunities, personal supervision and mentoring for postdoctoral researchers, structured skill-based training, and high-quality working conditions. The programme will equip the next generation of leaders in research and society with the AI knowledge and skills to collaborate across disciplines, sectors and borders, while offering academic freedom, equal opportunities, and a well-connected regional AI ecosystem.

  • Funder: EC
    Project Code: 101044464
    Overall Budget: 2,000,000 EUR
    Funder Contribution: 2,000,000 EUR

    In the last decade, societies across the world have been challenged by a fragmenting public debate, fueled by algorithmically steered social media and new threats of propaganda and misinformation. The dual tendencies of political apathy and polarization pose grave problems for a well-functioning democracy. As the social sciences appear unable to mitigate the challenges posed by a seemingly ignorant and passive citizenry, PREPARE proposes a radically new approach. PREPARE fills a research gap on the impact of algorithmic media and datafied everyday life on citizens' potential for political engagement. Current research on people's connection to the public is predominantly interested in measuring the political knowledge of so-called "informed citizens", or in studying the everyday micro-aspects of news media use. The leap proposed by PREPARE shifts the focus from each citizen's "informedness" to the development and testing of a groundbreaking theory of distributed preparedness, building a cohesive theory for a fragmented field. PREPARE will stake out a new path for research on citizens' role in democracies. The project will develop a feasible, normative theory of citizens' orientations to the sphere of politics in datafied societies: their networks for public connection. PREPARE's research questions concern 1) how people stay prepared to engage with public issues, and 2) what resources they need to move from stand-by to engagement. PREPARE substantiates the new theory through a "thickening" of big data: three ethnographies of so-called disconnected citizens, integrated with digital methods. The three groups are young urban immigrants, rural manual workers, and women outside the labour market. The path cleared by PREPARE allows research to constructively engage with improving democratic societies and civic awareness.

  • Funder: EC
    Project Code: 101001133
    Overall Budget: 2,000,000 EUR
    Funder Contribution: 2,000,000 EUR

    At the heart of contemporary politics in the old democracies in Europe and North America is a significant puzzle. How is it that the far right, advocating a nativist agenda particularly opposed to Muslims and Islam, is advancing at a time when public opinion research documents stability or decline in illiberal values in these populations at large? Current studies understandably focus on accounting for exclusion: opposition to Muslims, prejudice, islamophobia, and nativism. In the INCLUDE project, I propose to expand the scope of inquiry beyond drivers of exclusion to investigate the openness of non-Muslim majorities to the inclusion of Muslim minorities. I ask, under what conditions, and on what terms, are they open to inclusion? This research question brings conceptual and empirical attention to different aspects of public opinion, and to different segments within the public, than are currently at the center of attention in research on intergroup attitudes and support for the far right. A major program of new data collection is needed to test the new hypotheses and implications raised. To manage the risks involved in taking research in a new direction, I propose to collect data in sequences of survey experiments in a few countries where we already have a solid base of knowledge to build upon: Norway, Germany, France, the UK, the Netherlands, and Sweden. The potential gains are considerable: the proposed new framework narrows down the conditions under which exclusionary actors, such as the far right, are likely to gain political influence. It highlights different aspects of the strategies and policies adopted by inclusive actors than are currently in focus. It identifies polarization traps that can fuel mistrust between "people" and "elites." All in all, the project can bring forth vital new knowledge needed by those who seek to address one of the most significant societal challenges of our time: how to live peacefully together as diverse societies.

  • Funder: EC
    Project Code: 638467
    Overall Budget: 1,877,210 EUR
    Funder Contribution: 1,877,210 EUR

    The Earth's climate system contains a highly complex interplay of numerous components, such as atmospheric greenhouse gases, ice sheets, and ocean circulation. Due to nonlinearities and feedbacks, changes to the system can result in rapid transitions to radically different climate states. In light of rising greenhouse gas levels there is an urgent need to better understand climate at such tipping points. Reconstructions of profound climate changes in the past provide crucial insight into our climate system and help to predict future changes. However, all proxies we use to reconstruct past climate depend on assumptions that, moreover, become increasingly uncertain further back in time. A new kind of temperature proxy, the carbonate 'clumped isotope' thermometer, has great potential to overcome these obstacles. The proxy relies on thermodynamic principles, taking advantage of the temperature dependence of the binding strength between different isotopes of carbon and oxygen, which makes it independent of other variables. Yet widespread application of this technique in paleoceanography is currently prevented by the large sample amounts required, which are difficult to obtain from ocean sediments. If applied to the minute carbonate shells preserved in the sediments, this proxy would allow robust reconstructions of past temperatures in the surface and deep ocean, as well as global ice volume, far back in time. Here I propose to considerably decrease the sample amount requirements of clumped isotope thermometry, building on recent successful modifications of the method and ideas for further analytical improvements. This will enable my group and me to thoroughly ground-truth the proxy for application in paleoceanography and, for the first time, apply it to past climate change across major climate transitions, where clumped isotope thermometry can immediately contribute to solving long-standing first-order questions and allow for major progress in the field.
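    The thermodynamic idea behind the proxy can be sketched numerically: the clumped isotope signal (Δ47) varies with 1/T², so a measured Δ47 maps directly to a formation temperature without reference to other variables. The constants and the helper name `clumped_temperature` below are illustrative assumptions for demonstration only, not a published calibration.

    ```python
    import math

    # Illustrative constants for a calibration of the generic form
    # D47 = A / T^2 + B (values chosen for demonstration only;
    # real calibrations are determined empirically in the lab).
    A = 0.039e6   # K^2, slope of the 1/T^2 dependence
    B = 0.154     # per mil, calibration intercept

    def clumped_temperature(delta47):
        """Convert a carbonate clumped isotope value D47 (per mil)
        into a formation temperature in degrees Celsius."""
        if delta47 <= B:
            raise ValueError("D47 must exceed the calibration intercept B")
        t_kelvin = math.sqrt(A / (delta47 - B))
        return t_kelvin - 273.15
    ```

    Because the signal depends only on temperature, higher Δ47 values correspond to colder formation temperatures, and no assumption about the isotopic composition of the source water is needed.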

  • Funder: EC
    Project Code: 658602
    Overall Budget: 196,400 EUR
    Funder Contribution: 196,400 EUR

    Climate models of the sort used by the Intergovernmental Panel on Climate Change (IPCC) all predict global warming over the next century, but differ widely in their detailed predictions for any specific region of the globe. The state of the art is simply to run the models separately and form a weighted average of their outputs. A new approach put forward by the applicant is that of "supermodeling": instead of merely averaging the outputs of the models, the models are allowed to influence each other at run time. One must specify how much weight a given model gives to the corresponding data in each other model. In a supermodel, these weights, or "connection coefficients", are given by a machine learning algorithm. That is, one would use a collection of historical data to train the connections in the supermodel, so that the most reliable dynamical features of each model are combined. Supermodeling is an instance of "chaos synchronization", the phenomenon wherein chaotic systems can be made to follow corresponding trajectories by exchanging surprisingly little information. In prior investigations with supermodels, it was determined that they are particularly useful for predicting variability, like that in the El Niño cycle in the Pacific. The proposed project would use a supermodel to predict variability in the Atlantic sector due to changes in the Atlantic Meridional Overturning Circulation (AMOC), which has a large effect on climate in the surrounding region on multi-decadal time scales. Existing climate models differ widely in their predictions for the AMOC. The proposed application will require changes in the way supermodels are formed and trained, so as to focus on the positions and gross characteristics of coherent structures such as ocean currents. The models used to build the supermodel will be a) a collection of European models, and b) a combination of U.S. and European models from which a supermodel is already being built.
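    The coupling idea can be illustrated with a toy sketch: two "imperfect" Lorenz-63 models, each with a perturbed parameter, are nudged toward each other through a connection coefficient, and their average serves as the supermodel output. In a real supermodel the coefficients would be trained on historical data; here the fixed coefficient `c`, the parameter perturbations, and the function names are all assumptions made for illustration.

    ```python
    import numpy as np

    def lorenz_rhs(state, sigma, rho, beta):
        """Time derivative of the Lorenz-63 system."""
        x, y, z = state
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def run_supermodel(n_steps=2000, dt=0.01, c=50.0):
        """Integrate two imperfect Lorenz models that exchange state
        information via a connection coefficient c.  With c = 0 the
        models run independently; with c > 0 each is nudged toward
        the other's state (chaos synchronization)."""
        rng = np.random.default_rng(0)
        s1 = rng.standard_normal(3)              # state of model 1
        s2 = rng.standard_normal(3)              # state of model 2
        p1 = (10.0, 28.0 * 1.1, 8.0 / 3.0)       # model 1: rho too high
        p2 = (10.0, 28.0 * 0.9, 8.0 / 3.0)       # model 2: rho too low
        for _ in range(n_steps):
            d1 = lorenz_rhs(s1, *p1) + c * (s2 - s1)
            d2 = lorenz_rhs(s2, *p2) + c * (s1 - s2)
            s1 = s1 + dt * d1                    # forward Euler step
            s2 = s2 + dt * d2
        # The supermodel output is a (here unweighted) average of the
        # synchronized states; trained weights would replace the 0.5.
        return s1, s2, 0.5 * (s1 + s2)
    ```

    With a sufficiently large `c` the two trajectories shadow each other despite the parameter mismatch, which is the property a trained supermodel exploits: the combined trajectory inherits the most reliable dynamics of each member.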

