
Office for National Statistics (ONS)
43 Projects, page 1 of 9
  • Funder: UK Research and Innovation Project Code: EP/S023437/1
    Funder Contribution: 7,062,520 GBP

    Research Area: ART-AI is a multidisciplinary CDT, bringing together computer science, social science and engineering so that its graduates will be specialists in one subject, but have substantial training and experience in the others. The ART-AI management team brings together research in AI, HCI, politics/economics, and engineering, while the CDT as a whole has a team of >40 supervisors across seven departments in three faculties and the institutes for policy research (IPR) and for mathematical innovation (IMI). This is not a marriage of convenience: many CDT members have experience of interdisciplinary working and, together with CDT cohorts and partners, we will create accessible, transparent and intelligible AI, driven by ethical and responsible principles, to address issues in, for example, policy design and political decision-making, development of trust in AI for humans and organisations, autonomous systems, sensing and data analysis, explanation of machine decision-making, public service design, social simulation and the ethics of socio-technical systems.

    Need: Hardly a day passes without a news article on the wonders and dangers of AI. But decisions - by individuals, organisations, society and government - on how to use or not use AI should be informed and ethical. We need policy experts to recognise both opportunities and threats, engineers to extend our technical capabilities, and scientists to establish what is tractable and to predict likely outcomes of policies and innovations. We need mutually informed decisions that take account of diverse needs and perspectives. This need is expressed in measured terms by a slew of major reports (see Case for Support) and Commons and Lords committees, all reflecting the UKCES Sector Insights (Evidence report #92, 2015) prediction of a need by 2022 for >0.5M additional workers in the digital sector, against just a third of that number graduating annually. To realise the government vision for AI (White Paper), a critical fraction of those 0.5M workers need to be leaders and innovators with in-depth scientific and technical knowledge to make the right calls on what is possible, what is desirable, and how it can be most safely deployed. Beyond the UK, a 2018 PwC report indicates AI will impact ~10% of jobs, or ~326 million globally by 2030, with ~33% in high-skill jobs across most economic sectors. The clear conclusion is a need for a significant cadre of high-skill workers and leaders with a detailed knowledge of AI, an understanding of how to utilise it, and of its political, social and economic implications. ART-AI is designed to deliver these graduates in collaboration and co-creation with stakeholders in these areas.

    Approach: ART-AI will produce interdisciplinary graduates and interdisciplinary research by (i) exposing its students to all three disciplines in the taught elements, (ii) fostering development of multi-discipline perspectives throughout the doctoral research process, and (iii) establishing international and stakeholder perspectives whilst contributing to immediate, real-world problems through a programme of visiting lecturers, research visits to leading institutions and internships.

    The CDT will use some conventional teaching, but the innovations in doctoral training are: (i) multi-disciplinary team projects; (ii) structured and facilitated horizontal (intra-cohort) peer learning and vertical (inter-cohort) mentoring, including interdisciplinary cross-cohort activities in years 2-4; and (iii) demonstrated contextualisation of the primary-discipline research in the other disciplines, both at transfer (confirmation) at the end of year 2 and in the final dissertation. Each student will have a primary supervisor from their main discipline, a co-supervisor from at least one of the other two, and, where appropriate, one from a CDT partner, reflecting the interdisciplinarity and co-creation that underpin the CDT.

  • Funder: UK Research and Innovation Project Code: EP/N031938/1
    Funder Contribution: 2,750,890 GBP

    We live in the age of data. Technology is transforming our ability to collect and store data on unprecedented scales. From the use of Oyster card data to improve London's transport network, to the Square Kilometre Array astrophysics project that has the potential to transform our understanding of the universe, Big Data can inform and enrich many aspects of our lives. Due to the widespread use of sensor-based systems in everyday life, with even smartphones having sensors that can monitor location and activity level, much of the explosion of data is in the form of data streams: data from one or more related sources that arrive over time. It has even been estimated that there will be over 30 billion devices collecting data streams by 2020.

    The important role of Statistics within "Big Data" and data streams has been clear for some time. However, the current tendency has been to focus purely on algorithmic scalability, such as how to develop versions of existing statistical algorithms that scale better with the amount of data. Such an approach ignores the fact that fundamentally new issues often arise when dealing with data sets of this magnitude, and highly innovative solutions are required. Model error is one such issue. Many statistical approaches are based on the use of mathematical models for data. These models are only approximations of the real data-generating mechanisms. In traditional applications, this model error is usually small compared with the inherent sampling variability of the data, and can be overlooked. However, there is an increasing realisation that model error can dominate in Big Data applications. Understanding the impact of model error, and developing robust methods that have excellent statistical properties even in the presence of model error, are major challenges. A second issue is that many current statistical approaches are not computationally feasible for Big Data. In practice we will often need to use less efficient statistical methods that are computationally faster, or require less computer memory. This introduces a statistical-computational trade-off that is unique to Big Data, leading to many open theoretical questions and important practical problems.

    The strategic vision for this programme grant is to investigate and develop an integrated approach to tackling these and other fundamental statistical challenges. In order to do this we will focus in particular on analysing data streams. An important issue with this type of data is detecting changes in the structure of the data over time. This will be an early area of focus for the programme, as it has been identified as one of seven key problem areas for Big Data. Moreover, it is an area in which our research will lead to practically important breakthroughs. Our philosophy is to tackle the methodological, theoretical and computational aspects of these statistical problems together, an approach that is only possible through the programme grant scheme. Such a broad perspective is essential to achieve the substantive fundamental advances in statistics envisaged, and to ensure our new methods are sufficiently robust and efficient to be widely adopted by academics, industry and society more generally.
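
    Purely as an illustration of the kind of streaming change-detection problem described above, the sketch below implements a classical one-sided CUSUM detector in Python. The simulated stream, target mean, drift and threshold are arbitrary assumptions for the example; this is not the programme's own methodology.

```python
import numpy as np

def cusum_detect(stream, target_mean=0.0, drift=0.5, threshold=8.0):
    """Return the first index at which an upward shift in the mean is flagged,
    or None. The per-observation update is O(1), which is what makes
    CUSUM-style statistics attractive for data streams."""
    s = 0.0
    for i, x in enumerate(stream):
        # accumulate evidence that observations exceed target_mean + drift
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i
    return None

rng = np.random.default_rng(0)
# simulated stream: mean 0 for 500 points, then the mean shifts to 2
stream = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])
print(cusum_detect(stream))  # typically flags an index shortly after 500
```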

  • Funder: UK Research and Innovation Project Code: ES/T005238/1
    Funder Contribution: 346,532 GBP

    This project will propose an urban grammar to describe urban form and will develop artificial intelligence (AI) techniques to learn such a grammar from satellite imagery. Urban form has critical implications for economic productivity, social (in)equality, and the sustainability of both local finances and the environment. Yet current approaches to measuring the morphology of cities are fragmented and coarse, impeding their appropriate use in decision making and planning. This project aims to: 1) conceptualise an urban grammar to describe urban form as a combination of "spatial signatures", computable classes describing a unique spatial pattern of urban development (e.g. "fragmented low density", "compact organic", "regular dense"); 2) develop a data-driven typology of spatial signatures as building blocks; 3) create AI techniques that can learn signatures from satellite imagery; and 4) build a computable urban grammar of the UK from high-resolution trajectories of spatial signatures that helps us understand its future evolution.

    This project proposes to make the conceptual urban grammar computable by leveraging satellite data sources and state-of-the-art machine learning and AI techniques. Satellite technology is undergoing a revolution that is making more and better data available to study societal challenges. However, the potential of satellite data can only be unlocked through the application of refined machine learning and AI algorithms. In this context, we will combine geodemographics, deep learning, transfer learning, sequence analysis, and recurrent neural networks. These approaches expand and complement traditional techniques used in the social sciences by making it possible to extract insight from highly unstructured data such as images. In doing so, the methodological side of the project will develop methods that set the foundations for other applications in the social sciences.

    The framework of the project unfolds in four main stages, or work packages (WPs):
    1) Data acquisition - two large sets of data will be brought together and spatially aligned in a consistent database: attributes of urban form, and satellite imagery.
    2) Development of a typology of spatial signatures - using the urban form attributes, geodemographics will be used to build a typology of spatial signatures for the UK at high spatial resolution.
    3) Satellite imagery + AI - the typology will be used to train deep learning and transfer learning algorithms to identify spatial signatures automatically and in a scalable way from medium-resolution satellite imagery, which will allow us to back-cast this approach to imagery from the last three decades.
    4) Trajectory analysis - using sequences of spatial signatures generated in the previous package, we will use machine learning to identify an urban grammar by studying the evolution of urban form in the UK over the last three decades.

    Academic outputs include journal articles, open source software, and open data products, in an effort to reach as wide an academic audience as possible and to diversify the delivery channels so that outputs provide value in a range of contexts. The impact strategy is structured around two main areas: establishing constant communication with stakeholders through bi-directional dissemination; and broadcasting data insights, to ensure the data and evidence generated reach their intended users.
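
    For illustration only, the sketch below shows the transfer-learning idea behind WP3 in its simplest form: an ImageNet-pretrained backbone with a new classification head predicting signature classes. The class names echo examples from the abstract, the data are random tensors standing in for satellite patches, and nothing here reflects the project's actual models; a recent torchvision is assumed.

```python
import torch
import torch.nn as nn
from torchvision import models

# hypothetical signature classes, echoing examples given in the abstract
SIGNATURES = ["fragmented low density", "compact organic", "regular dense"]

# transfer learning: reuse a pretrained backbone, train only a new head
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                      # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, len(SIGNATURES))  # new head

# one random batch standing in for 224x224 RGB satellite image patches
patches = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(SIGNATURES), (8,))

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss = nn.CrossEntropyLoss()(model(patches), labels)
loss.backward()
optimiser.step()
print(f"one training step completed, loss = {loss.item():.3f}")
```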

  • Funder: UK Research and Innovation Project Code: ES/V005456/1
    Funder Contribution: 160,737 GBP

    National Statistical Institutes (NSIs) are directing resources into advancing the use of administrative data in official statistics systems. This is a top priority for the UK Office for National Statistics (ONS) as they are undergoing transformations in their statistical systems to make more use of administrative data for future censuses and population statistics. Administrative data are defined as secondary data sources since they are produced by other agencies as a result of an event or a transaction relating to administrative procedures of organisations, public administrations and government agencies. Nevertheless, they have the potential to become important data sources for the production of official statistics by significantly reducing the cost and burden of response and improving the efficiency of such systems.

    Embedding administrative data in statistical systems is not without costs, and it is vital to understand where potential errors may arise. The Total Administrative Data Error Framework sets out all possible sources of error when using administrative data as statistical data, depending on whether it is a single data source or integrated with other data sources such as survey data. For a single administrative data source, one of the main sources of error is coverage and representativeness relative to the target population of interest. This is particularly relevant when administrative data are delivered over time, such as tax data for maintaining the Business Register. In sub-project 1 of this research project, we develop quality indicators that allow the statistical agency to assess whether the administrative data are representative of the target population and which sub-groups may be missing or over-covered. This is essential for producing unbiased estimates from administrative data.

    Another priority at statistical agencies is to produce a statistical register for population characteristic estimates, such as employment statistics, from multiple sources of administrative and survey data. Using administrative data to build a spine, survey data can be integrated using record linkage and statistical matching approaches on a set of common matching variables. This will be the topic of sub-project 2, which will be split into several topics of research. The first topic is whether adding statistical predictions and correlation structures improves the linkage and data integration. The second topic is to research a mass imputation framework for imputing missing target variables in the statistical register where the missing data may be due to multiple underlying mechanisms. The third topic will aim to improve the mass imputation framework to mitigate against possible measurement errors, for example by adding benchmarks and other constraints into the approaches. On completion of a statistical register, estimates for key target variables at local areas can easily be aggregated. However, it is essential to also measure the precision of these estimates through mean square errors, and this will be the fourth topic of the sub-project. Finally, this new way of producing official statistics is compared with the more common method of incorporating administrative data through survey weights and model-based estimation approaches. In other words, we evaluate whether it is better 'to weight' or 'to impute' for population characteristic estimates - a key question under investigation by survey statisticians over the last decade.
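
    In their simplest form, the representativeness indicators described for sub-project 1 could compare subgroup counts in an administrative source against known population totals. The sketch below illustrates that idea with entirely made-up figures; the grouping variable, categories and numbers are assumptions, not ONS data or methodology.

```python
import pandas as pd

# invented population totals and administrative-source counts by age group
population = pd.Series({"0-15": 11_000, "16-64": 41_000, "65+": 12_000})
admin_count = pd.Series({"0-15": 9_500, "16-64": 40_200, "65+": 13_100})

indicators = pd.DataFrame({
    "population": population,
    "admin_count": admin_count,
    # ratio < 1 suggests under-coverage of a subgroup, > 1 over-coverage
    "coverage_ratio": admin_count / population,
})
# gap between each subgroup's share in the admin source and in the population
indicators["share_gap"] = (admin_count / admin_count.sum()
                           - population / population.sum())
print(indicators.round(3))
```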

  • Funder: UK Research and Innovation Project Code: ES/XX00005/1
    Funder Contribution: 12,668,900 GBP

    ADR UK (Administrative Data Research UK) is a partnership transforming the way researchers access the UK’s wealth of public sector data, to enable better informed policy decisions that improve people’s lives. By linking together data held by different parts of government, and by facilitating safe and secure access for accredited researchers to these newly joined-up data sets, ADR UK is creating a sustainable body of knowledge about how our society and economy function – tailored to give decision makers the answers they need to solve important policy questions. ADR UK is made up of three national partnerships (ADR Scotland, ADR Wales, and ADR NI) and the Office for National Statistics (ONS), which ensures data provided by UK government bodies is accessed by researchers in a safe and secure form with minimal risk to data holders or the public. The partnership is coordinated by a UK-wide Strategic Hub, which also promotes the benefits of administrative data research to the public and the wider research community, engages with UK government to secure access to data, and manages a dedicated research budget. ADR UK is funded by the Economic and Social Research Council (ESRC), part of UK Research and Innovation. To find out more, visit adruk.org or follow @ADR_UK on Twitter.

    The Office for National Statistics (ONS) plays a crucial role in sourcing, linking and curating public sector data for ADR UK, ensuring that all data is accessed by researchers in a safe and secure form. To support the ADR UK partnership, ONS is expanding and improving its established Secure Research Service (SRS) – the organisation’s facility for providing secure access to de-identified public sector data for research – and significantly increasing the range of administrative data available. ONS will focus on increased data reuse to deliver efficiencies to government departments (who only need to provide data once), and maximise the use of this data by identifying shared priorities and objectives with government departments.
