
OFFICE FOR NATIONAL STATISTICS
51 Projects, page 1 of 11
Project (2022 - 2026)
Partners: ECMWF (UK), START Network, US Geological Survey (USGS), Environment Agency, University of Colorado at Boulder, Arup Group, Insurance Development Group, Global Floods Partnership (GFP), Free (VU) University of Amsterdam, Ministry of Water Resources & Meteorol, NERC British Antarctic Survey, Academy of Social Sciences ACSS, Oasis Loss Modelling Framework Ltd, Jacobs Consultancy UK Ltd, Cardiff University, Nat Oceanic and Atmos Admin NOAA, H R Wallingford Ltd, East China Normal University, University of Leeds, National University of the Littoral, Office for National Statistics, Newcastle University, Uni of Illinois at Urbana Champaign, NERC CEH (Up to 30.11.2019), Loughborough University, Royal Geographical Society with IBG, University of Glasgow, Guy Carpenter & Co Ltd
Funder: UK Research and Innovation
Project Code: NE/S015795/2
Funder Contribution: 448,106 GBP

Flooding is the deadliest and most costly natural hazard on the planet, affecting societies across the globe. Nearly one billion people are exposed to the risk of flooding in their lifetimes and around 300 million are impacted by floods in any given year. The impacts on individuals and societies are extreme: each year there are over 6,000 fatalities and economic losses exceed US$60 billion. These problems will become much worse in the future. There is now clear consensus that climate change will, in many parts of the globe, cause substantial increases in the frequency of occurrence of extreme rainfall events, which in turn will generate increases in peak flood flows and therefore flood vast areas of land. Meanwhile, societal exposure to this hazard is compounded still further as a result of population growth and encroachment of people and key infrastructure onto floodplains.
Faced with this pressing challenge, reliable tools are required to predict how flood hazard and exposure will change in the future. Existing state-of-the-art Global Flood Models (GFMs) are used to simulate the probability of flooding across the Earth, but unfortunately they are highly constrained by two fundamental limitations. First, current GFMs represent the topography and roughness of river channels and floodplains in highly simplified ways, and their relatively low resolution inadequately represents the natural connectivity between channels and floodplains. This severely restricts their ability to predict flood inundation extent and frequency, how it varies in space, and how it depends on flood magnitude. The second limitation is that current GFMs treat rivers and their floodplains essentially as 'static pipes' that remain unchanged over time. In reality, river channels evolve through processes of erosion and sedimentation, driven by the impacts of diverse environmental changes (e.g., climate and land use change, dam construction), leading to changes in channel flow conveyance capacity and floodplain connectivity. Until GFMs are able to account for these changes, they will remain fundamentally unsuitable for predicting the evolution of future flood hazard, understanding its underlying causes, or quantifying associated uncertainties. To address these issues we will develop an entirely new generation of Global Flood Models by: (i) using Big Data sets and novel methods to substantially enhance their representation of channel and floodplain morphology and roughness, thereby making GFMs more morphologically aware; (ii) including new approaches to representing the evolution of channel morphology and channel-floodplain connectivity; and (iii) combining these developments with tools for projecting changes in catchment flow and sediment supply regimes over the 21st century.
These advances will enable us to deliver new understanding of how the feedbacks between climate, hydrology, and channel morphodynamics drive changes in flood conveyance and future flooding. Moreover, we will connect our next generation GFM with innovative population models that are based on the integration of satellite, survey, cell phone and census data. We will apply the coupled model system under a range of future climate, environmental and societal change scenarios, enabling us to fully interrogate and assess the extent to which people are exposed, and dynamically respond, to evolving flood hazard and risk. Overall, the project will deliver a fundamental change in the quantification, mapping and prediction of the interactions between channel-floodplain morphology and connectivity, and flood hazard across the world's river basins. We will share models and data on open source platforms. Project outcomes will be embedded with scientists, global numerical modelling groups, policy-makers, humanitarian agencies, river basin stakeholders, communities prone to regular or extreme flooding, the general public and schoolchildren.
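The abstract above notes that erosion and sedimentation alter a channel's flow conveyance capacity. As a hedged, back-of-the-envelope illustration of that dependence (not the project's GFM code), Manning's equation shows how a channel's discharge capacity falls when sedimentation reduces flow depth; the channel dimensions and roughness coefficient below are purely hypothetical:

```python
import math

def manning_discharge(width_m, depth_m, slope, n_roughness):
    """Steady discharge (m^3/s) of a rectangular channel from
    Manning's equation: Q = (1/n) * A * R**(2/3) * sqrt(S)."""
    area = width_m * depth_m                    # cross-sectional flow area A
    wetted_perimeter = width_m + 2.0 * depth_m  # bed plus both banks
    hydraulic_radius = area / wetted_perimeter  # R = A / P
    return (area * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope)) / n_roughness

# Sedimentation that halves flow depth sharply cuts conveyance capacity,
# so the same flood flow spills onto the floodplain sooner.
q_deep = manning_discharge(width_m=50.0, depth_m=4.0, slope=0.0005, n_roughness=0.035)
q_shallow = manning_discharge(width_m=50.0, depth_m=2.0, slope=0.0005, n_roughness=0.035)
print(round(q_deep), round(q_shallow))
```

The point of the sketch is only the qualitative sensitivity: capacity scales faster than linearly with depth, which is why channel change matters for flood hazard.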
Project (2024 - 2026)
Partners: Office for National Statistics, University of Strathclyde, University of Stirling, University of Glasgow, University of Edinburgh
Funder: UK Research and Innovation
Project Code: ES/Z502881/1
Funder Contribution: 472,443 GBP

Aims and Objectives
We aim to improve and empower three census-linked products for the 2022 Scottish census:
1. Improving the Equalities Protected Characteristics data
2. Annual Survey of Hours and Earnings (ASHE) to census linkage (ASHE-census)
3. A high-fidelity synthetic data set for ASHE-census
These census-linked products will be among the first to be created and accessed by researchers under the new "ready and reuse" model in Scotland. During four interactive training events, we will promote the census-linked products and disseminate knowledge to researchers on the system transformation happening in Scotland.

Potential applications and benefits
We will achieve our aims by completing four work packages (WPs).
WP0: Promoting and maximising user engagement and census-linked data value. This work package will underpin all census-linked products. Our approach will enable researchers to understand the original sources of census-linked data, the processes required to provision it, and how they can access and work with it efficiently.
WP1: Improving the Equalities Protected Characteristics Research Dataset (henceforth Equalities). Administrative Data Research Scotland (ADR-Scotland) will deliver a standard update to the Equalities 2022 dataset, which will include data for people living in Scotland in 2022 who were not in Scotland in 2011 and, for the first time, records of sexual orientation and trans status.
While ADR-Scotland processes these standard updates, we will use the proposed ESRC grant to commission critical public engagement work on the acceptability of protected characteristics data and future research use. After the update has been completed, and as part of this ESRC proposal, we will analyse and explore potential improvements to the Equalities 2022 linkage to capture longitudinal changes in protected characteristics, particularly disability status. The exploratory longitudinal analysis and public engagement work are not currently planned by ADR-Scotland.
WP2: Empowering a cutting-edge census-linked product with the potential to improve our understanding of Scotland's workforce and start informing a future business spine. We will link the ASHE for Scotland with the 2022 census (hereafter ASHE-census). This data product will independently enable researchers to understand Scotland's workforce around the COVID-19 pandemic. ASHE-census will then enable future linkages to other data, empowering researchers to better understand the health and social experiences of Scotland's workforce. The ASHE-census represents added value and a unique opportunity to test the probabilistic linkage methodology required to inform a future business spine for Scotland.
WP3: Developing a synthetic data set. Creating a high-fidelity synthetic dataset for the ASHE-census linkage will maximise use by raising awareness of census-linked data and support researchers in working with the data prior to full application.

Impact on enabling excellent social science led research
All collaborators are leading the changes to Scotland's data linkage landscape, moving from "create and destroy" to "ready and reuse". We will use the proposed ESRC grant to accelerate these census-linked products into the Scottish National Safe Haven using a streamlined technical process and the recently launched Researcher Access Service.
This represents time and resource efficiency savings for the provision of census data to researchers.
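The abstract mentions testing a probabilistic linkage methodology for ASHE-census. As a minimal sketch of the Fellegi-Sunter scoring idea that underlies most probabilistic record linkage, the code below sums log-likelihood-ratio weights over agreeing and disagreeing fields; the field names and m/u agreement probabilities are illustrative assumptions, not ADR-Scotland's actual parameters or pipeline:

```python
import math

# Hypothetical per-field m- and u-probabilities: the chance a field agrees
# for true matches (m) versus for non-matching pairs (u). Values illustrative.
FIELD_PARAMS = {
    "surname":       (0.95, 0.01),
    "date_of_birth": (0.97, 0.002),
    "postcode":      (0.90, 0.05),
}

def match_weight(record_a, record_b):
    """Sum log2 likelihood ratios across fields (a Fellegi-Sunter score)."""
    weight = 0.0
    for field, (m, u) in FIELD_PARAMS.items():
        if record_a.get(field) == record_b.get(field):
            weight += math.log2(m / u)               # agreement adds evidence
        else:
            weight += math.log2((1 - m) / (1 - u))   # disagreement subtracts it
    return weight

a = {"surname": "SMITH", "date_of_birth": "1985-03-02", "postcode": "EH1 1AA"}
b = {"surname": "SMITH", "date_of_birth": "1985-03-02", "postcode": "EH1 1AB"}
print(round(match_weight(a, b), 2))  # strongly positive despite one disagreement
```

In practice, pairs scoring above an upper threshold are accepted as links, pairs below a lower threshold are rejected, and the band in between goes to clerical review.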
Project (2016 - 2023)
Partners: Lancaster University, Yale University, Shell Global Solutions UK, Shell Research UK, Office for National Statistics, AstraZeneca, British Telecommunications plc (BT Group)
Funder: UK Research and Innovation
Project Code: EP/N031938/1
Funder Contribution: 2,750,890 GBP

We live in the age of data. Technology is transforming our ability to collect and store data on unprecedented scales. From the use of Oyster card data to improve London's transport network, to the Square Kilometre Array astrophysics project that has the potential to transform our understanding of the universe, Big Data can inform and enrich many aspects of our lives. Due to the widespread use of sensor-based systems in everyday life, with even smartphones having sensors that can monitor location and activity level, much of the explosion of data is in the form of data streams: data from one or more related sources that arrive over time. It has even been estimated that there will be over 30 billion devices collecting data streams by 2020. The important role of Statistics within "Big Data" and data streams has been clear for some time. However, the current tendency has been to focus purely on algorithmic scalability, such as how to develop versions of existing statistical algorithms that scale better with the amount of data. Such an approach ignores the fact that fundamentally new issues often arise when dealing with data sets of this magnitude, and highly innovative solutions are required. Model error is one such issue. Many statistical approaches are based on the use of mathematical models for data.
These models are only approximations of the real data-generating mechanisms. In traditional applications, this model error is usually small compared with the inherent sampling variability of the data, and can be overlooked. However, there is an increasing realisation that model error can dominate in Big Data applications. Understanding the impact of model error, and developing robust methods that have excellent statistical properties even in the presence of model error, are major challenges. A second issue is that many current statistical approaches are not computationally feasible for Big Data. In practice we will often need to use less efficient statistical methods that are computationally faster, or require less computer memory. This introduces a statistical-computational trade-off that is unique to Big Data, leading to many open theoretical questions, and important practical problems. The strategic vision for this programme grant is to investigate and develop an integrated approach to tackling these and other fundamental statistical challenges. In order to do this we will focus in particular on analysing data streams. An important issue with this type of data is detecting changes in the structure of the data over time. This will be an early area of focus for the programme, as it has been identified as one of seven key problem areas for Big Data. Moreover it is an area in which our research will lead to practically important breakthroughs. Our philosophy is to tackle methodological, theoretical and computational aspects of these statistical problems together, an approach that is only possible through the programme grant scheme. Such a broad perspective is essential to achieve the substantive fundamental advances in statistics envisaged, and to ensure our new methods are sufficiently robust and efficient to be widely adopted by academics, industry and society more generally.
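Detecting changes in the structure of a data stream, named above as an early focus of the programme, can be illustrated with a classic sequential method. The sketch below implements a two-sided CUSUM detector for shifts in the mean, one standard streaming technique (not necessarily the programme's own methods); the threshold and drift parameters are illustrative:

```python
def cusum_detect(stream, target_mean, threshold=5.0, drift=0.5):
    """Two-sided CUSUM: return the index at which a mean shift is first
    detected, or None. `drift` is the slack subtracted before evidence
    accumulates; `threshold` is the evidence level that triggers detection."""
    g_pos = g_neg = 0.0
    for i, x in enumerate(stream):
        g_pos = max(0.0, g_pos + (x - target_mean) - drift)  # upward shifts
        g_neg = max(0.0, g_neg + (target_mean - x) - drift)  # downward shifts
        if g_pos > threshold or g_neg > threshold:
            return i
    return None

# The stream's mean jumps from roughly 0 to 3 at index 10;
# CUSUM flags the shift a couple of observations later.
stream = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.0, -0.3, 0.1, 0.0] + [3.0] * 10
print(cusum_detect(stream, target_mean=0.0))
```

Because each observation is processed once with constant memory, this kind of detector scales to streams where storing the full history is infeasible, which is exactly the computational constraint the abstract highlights.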
Project (2021 - 2022)
Partners: King's College London (KCL), Office for National Statistics
Funder: UK Research and Innovation
Project Code: ES/S012729/2
Funder Contribution: 498,125 GBP

Our Management and Expectations Survey (MES), cited in the ESRC call, arose from a partnership between the ONS and the ESCoE: it is the largest ever survey of UK management capabilities, executed on a population of 25,000 firms across industries, regions, firm sizes and ages, documenting the variable quality of management practices across UK businesses. Our analysis found a significant relationship between management practices and labour productivity amongst UK firms, and examined whether certain types of firms have poor management practices and stagnant productivity, drawing conclusions about the links between them (ONS, 2018). This team, with two seminal contributors to management practice and performance research (Bloom, Stanford, and Van Reenen, MIT), who initiated the World Management Survey, partners from the ONS (Awano, Dolby, Vyas, Wales), and the Director and Fellows of the ESCoE (Riley, Mizen, Senga, Sleeman) at the NIESR, will investigate five issues:
1. Longitudinal changes in management practices and performance. The initial MES offers a cross-section of variation in management practices and expectations between firms, but it does not explore variations within businesses through time because the data lack a longitudinal dimension. A second wave of the MES will expand our scope of analysis so that we can interpret how management practices in the UK have varied over time. This extension addresses the 'broad consensus' from the recent ESRC-ONS workshop that 'there is not enough longitudinal data around productivity that allows for consistent, ongoing analysis, and in particular data that enables researchers to identify, isolate and accurately measure changes over time.'
2.
International comparisons. Drawing on our links through Bloom and Van Reenen with the US Management and Organizational Practices Survey (MOPS) at the US Census Bureau will enable us to: i) test identical hypotheses using their methods and variables to draw research insights that help identify causal drivers of productivity at the firm level, and compare and contrast the UK and US data; ii) draw together a unique joint ONS-Census Bureau methodological forum for collecting the most useful micro-data for measuring management, investment and hiring intentions for UK and US firms. Similar data collection exercises have been taking place in other countries. We have established links with German and Japanese teams and we intend to discuss key differences, e.g. between the US and European business environments, and similarities, e.g. the Japanese experience of low productivity.
3. Analysis of linked business surveys and administrative data. Partnership between academic researchers and the ONS facilitates the matching of data from other sources to answer key questions around: a) management and firms' ability to cope with uncertainty, by linking MES responses to trade data, administrative data on VAT, R&D expenditure, and patenting data, and exploiting variation across firms in exposure to EU markets through supply chains and export destination of goods; b) evidence of superior innovation, R&D and export performance, drawn from how business innovation and exporting vary across firms and over time in response to management practices and cultures. This will directly inform practical lessons for UK businesses.
4. Experimental analysis using big data. We will use natural language processing and machine learning to investigate big data from job-search companies to objectively identify the factors that affect staff satisfaction and performance in the UK. By matching to the MES and other micro datasets, we will examine links between mental health and management practices.
5.
Randomised control trials. Nearly 9,000 responding businesses in the MES sought 'feedback' on their management score. By varying the feedback given to respondents, we will observe, in collaboration with BIT (the 'Nudge Unit') and the CMI, the impact on firms' subsequent adaptation and performance.
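The experimental big-data strand above proposes natural language processing over job-review text. As a hedged, much-simplified illustration of the first step of such a pipeline, the sketch below builds bag-of-words term frequencies over hypothetical review snippets (invented here, not MES or job-site data):

```python
from collections import Counter
import re

# Hypothetical employee-review snippets (illustrative only).
reviews = [
    "good management, flexible hours, supportive team",
    "poor management and no feedback from managers",
    "supportive managers, clear feedback and training",
]

# A tiny illustrative stopword list; real pipelines use curated lexicons.
STOPWORDS = {"and", "no", "from", "the", "a"}

def term_frequencies(texts):
    """Count word occurrences across documents (a bag-of-words baseline)."""
    counts = Counter()
    for text in texts:
        counts.update(w for w in re.findall(r"[a-z]+", text.lower())
                      if w not in STOPWORDS)
    return counts

freq = term_frequencies(reviews)
print(freq.most_common(3))
```

Real analyses would go well beyond raw counts (topic models, supervised classifiers linked to satisfaction scores), but term frequencies are the common starting representation.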
Project (2020 - 2023)
Partners: Newcastle University, Office for National Statistics, National Inst for Space Research (INPE), Ordnance Survey, University of Liverpool, MCTI
Funder: UK Research and Innovation
Project Code: ES/T005238/1
Funder Contribution: 346,532 GBP

This project will propose an urban grammar to describe urban form and will develop artificial intelligence (AI) techniques to learn such a grammar from satellite imagery. Urban form has critical implications for economic productivity, social (in)equality, and the sustainability of both local finances and the environment. Yet current approaches to measuring the morphology of cities are fragmented and coarse, impeding their appropriate use in decision making and planning. This project will aim to:
1) conceptualise an urban grammar to describe urban form as a combination of "spatial signatures", computable classes describing a unique spatial pattern of urban development (e.g. "fragmented low density", "compact organic", "regular dense");
2) develop a data-driven typology of spatial signatures as building blocks;
3) create AI techniques that can learn signatures from satellite imagery; and
4) build a computable urban grammar of the UK from high-resolution trajectories of spatial signatures that helps us understand its future evolution.
This project proposes to make the conceptual urban grammar computable by leveraging satellite data sources and state-of-the-art machine learning and AI techniques. Satellite technology is undergoing a revolution that is making more and better data available to study societal challenges. However, the potential of satellite data can only be unlocked through the application of refined machine learning and AI algorithms.
In this context, we will combine geodemographics, deep learning, transfer learning, sequence analysis, and recurrent neural networks. These approaches expand and complement traditional techniques used in the social sciences by allowing researchers to extract insight from highly unstructured data such as images. In doing so, the methodological side of the project will develop methods that set the foundations for other applications in the social sciences. The framework of the project unfolds in four main stages, or work packages (WPs):
1) Data acquisition - two large sets of data will be brought together and spatially aligned in a consistent database: attributes of urban form, and satellite imagery.
2) Development of a typology of spatial signatures - using the urban form attributes, geodemographics will be used to build a typology of spatial signatures for the UK at high spatial resolution.
3) Satellite imagery + AI - the typology will be used to train deep learning and transfer learning algorithms to identify spatial signatures automatically and in a scalable way from medium-resolution satellite imagery, which will allow us to backcast this approach to imagery from the last three decades.
4) Trajectory analysis - using sequences of spatial signatures generated in the previous package, we will use machine learning to identify an urban grammar by studying the evolution of urban form in the UK over the last three decades.
Academic outputs include journal articles, open source software, and open data products in an effort to reach as wide an academic audience as possible, and to diversify the delivery channels so that outputs provide value in a range of contexts. The impact strategy is structured around two main areas: establishing constant communication with stakeholders through bi-directional dissemination; and broadcasting data insights, which will ensure the data and evidence generated reach their intended users.
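The typology work package describes clustering urban form attributes, geodemographics-style, into a set of spatial signatures. The sketch below runs a minimal k-means clustering over invented two-feature grid cells (building density, mean street width) to show the core idea; the features, values, and naive deterministic initialisation are illustrative assumptions, not the project's method:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: group feature vectors into k clusters ('signatures')."""
    # Naive deterministic initialisation: evenly spaced input points (k >= 2).
    centroids = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its members.
        for c, members in enumerate(clusters):
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return centroids, clusters

# Invented urban-form features per grid cell: (building density, street width m).
dense_cells = [(0.85 + 0.01 * i, 6.0) for i in range(10)]
sparse_cells = [(0.15 + 0.01 * i, 14.0) for i in range(10)]
centroids, clusters = kmeans(dense_cells + sparse_cells, k=2)
print(sorted(len(c) for c in clusters))  # each signature captures one group
```

Production geodemographic classifications use many more attributes and more robust initialisation, but the assign-then-update loop is the same.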