
Office for National Statistics (ONS)

43 Projects, page 1 of 9
  • Funder: UK Research and Innovation
    Project Code: ES/S007105/1
    Funder Contribution: 1,786,230 GBP

    The Urban Big Data Centre aims to promote innovative research methods and the use of big data to improve social, economic and environmental well-being in cities. Traditionally, quantitative urban analysis relied on data designed for research purposes: the Census and social surveys, in particular. Their qualities are well understood, and the skills needed for extracting knowledge from them are widely shared among social researchers. With the arrival of the digital age, we produce an ever-increasing volume of data as we go about our daily lives, from physical sensors, business and public administrative systems, and social media platforms, for example. These data have the potential to provide valuable insights into urban life, but there are many more challenges in extracting useful knowledge from them. Some are technical, arising from the volume and variety of the data and their less structured nature. Some are legal and ethical, concerning data ownership rights and individual privacy rights. Above all, there are important social science issues in the use of big data. We need to shape the questions we ask of these data with an informed perspective on urban problems and contexts, and not let the data drive the research. There is a need to ask questions about the data themselves and how they affect the resulting representations of urban life. And there is a need to examine the ways in which these data are taken up by policy makers and used in decision making.

    UBDC is a research centre which brings together an outstanding multi-disciplinary team to address these complex and varied challenges. We are a unique combination of four capacities: social scientists with expertise from a range of disciplinary backgrounds relevant to urban studies; data scientists with expertise in programming, data management, information retrieval and spatial information systems, as well as in legal issues around big data use; a data infrastructure comprising a substantial data collection and secure data management and analysis systems; and an academic group with strong connections to policy, industry and civil society organisations, developed over the course of phase one and wider work.

    In the second phase, our objectives are to maximise the social and economic benefits of activities from phase one. We will do this in particular through partnerships with industrial and government stakeholders, working together to produce analyses which meet their needs as well as having wider application. We will continue to publish world-leading scientific papers across a range of disciplines. We will work to enhance data collections and develop new methods of analysis. We will conduct research to understand the quality of these new data, how well they represent or misrepresent particular aspects of life, and how they are and could be used by policy makers in practice. Lastly, we will build capacity for researchers and others to work with this kind of data in future.

    Our work programme comprises four thematic work packages. The first focuses on understanding the sustainability, equity and efficiency of urban transport systems and on evaluating the impacts of infrastructure investments on these, with a particular focus on public transport accessibility as well as active travel and hence health outcomes. The second examines the changing residential structure of cities, or patterns of spatial segregation, and their consequences for social equity, with a particular focus on the re-growth of private renting. The third studies how urban systems shape skills development and productivity and, in particular, how home and school environments combine to shape secondary educational attainment. The fourth explores how big data are being taken up by policy makers: it asks what the barriers are to more effective use of these data, but also whether they distort the picture of needs which a public body may form.

  • Funder: UK Research and Innovation
    Project Code: ES/V001035/1
    Funder Contribution: 15,033,200 GBP

    IMPACT stands for 'Improving Adult Care Together'. It is a new £15 million UK centre for implementing evidence in adult social care, co-funded by the ESRC and the Health Foundation. It is led by Professor Jon Glasby at the University of Birmingham, with a Leadership Team of 12 other academics, people drawing on care and support, and policy and practice partners, along with a broader consortium of key stakeholders from across the sector and across the four nations of the UK. IMPACT is an 'implementation centre', not a research centre, drawing on evidence gained from different types of research, the lived experience of people drawing on care and support and their carers, and the practice knowledge of social care staff. It will work across the UK to make sure that it is embedded in, and sensitive to, the very different policy contexts in each of the four nations, as well as being able to share learning across the UK as a whole. As it gets up and running, IMPACT will seek to:

    • Provide practical support to implement evidence in the realities of everyday life and front-line services
    • Overcome the practical and cultural barriers to using evidence in such a pressured, diverse and fragmented sector
    • Bring key stakeholders together to share learning and co-design our work in inclusive and diverse 'IMPACT Assemblies' (based in all four nations of the UK to reflect different policy and practice contexts)
    • Work over three phases of development ('co-design', 'establishment' and 'delivery') to build a centre that creates sustainable change and becomes a more permanent feature of the adult social care landscape

  • Funder: UK Research and Innovation
    Project Code: EP/V00641X/1
    Funder Contribution: 281,269 GBP

    Missing data are a common problem in many application areas. The presence of missing values complicates analyses and, if not dealt with properly, can result in incorrect conclusions being drawn from the data. It is often helpful to assume there is a process that produces the missing values, typically called a missing data mechanism. A particularly problematic scenario is when this mechanism is in part determined by some other unknown variables, such as the missing values themselves. This is known as a missing not at random (MNAR) mechanism. If missing values arise due to an MNAR mechanism then conclusions drawn from the data will typically be biased. Importantly, it is also not possible to know whether or not this problem occurs in the data. This is the challenging problem area that this proposal seeks to address, namely developing procedures that can best test whether or not MNAR occurs in the data.

    The proposal will consider scenarios where it is possible to estimate some of the missing values through a follow-up sample. The main purpose of this is to learn about the missing data mechanism and, specifically, to test whether or not the MNAR assumption is valid. Further, the recovered data will also help to correct for the effect the missing data have on conclusions. The proposal makes use of optimal design techniques to decide which missing values to follow up. Essentially, certain missing values might yield more information about the type of missing data mechanism than others; in addition, some values might be more likely than others to be recovered. In this way we would ensure that maximum information is obtained from the recovered data. This will allow data analysts to determine whether the presence of MNAR is likely and take appropriate action.

    We will collaborate with our project partners, the Office for National Statistics and NHS Blood and Transplant, in the development of these methods. Our project partners will provide relevant data for us to consider realistic scenarios, and we will discuss interim results with them to ensure our methods are most useful for practitioners. We will also present the work as part of a missing data course at the African Institute of Mathematical Sciences (AIMS) to maximise the global benefit of the work. The methods developed in this proposal will be disseminated through papers and presentations. In addition, we will create a free-to-use R package that will implement the methods to allow easy uptake by users. We will provide training in using this R package as part of a two-day workshop where we will describe our methods to users. A dedicated website will be updated throughout the project to describe developments and facilitate engagement with interested parties.
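    To make the problem concrete, the toy Python sketch below is purely illustrative: the variable names, distributions and sample sizes are invented here, and it is not the project's method. It simulates a missing-not-at-random mechanism, shows the bias this introduces in a naive complete-case estimate, and uses a small follow-up sample of the originally missing values to expose the discrepancy.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Hypothetical variable of interest; larger values are more likely to be
        # missing, so the mechanism depends on the unobserved values themselves (MNAR).
        y = rng.normal(loc=50.0, scale=10.0, size=10_000)
        p_missing = 1.0 / (1.0 + np.exp(-(y - 55.0) / 5.0))
        missing = rng.random(y.size) < p_missing

        observed = y[~missing]
        print(f"True mean:          {y.mean():.2f}")
        print(f"Complete-case mean: {observed.mean():.2f} (biased downwards)")

        # Follow up a small random subset of the missing cases and compare the
        # recovered values with the observed ones.
        follow_up_idx = rng.choice(np.flatnonzero(missing), size=200, replace=False)
        recovered = y[follow_up_idx]
        print(f"Follow-up mean:     {recovered.mean():.2f}")

        # A clear gap between the observed and recovered means suggests the data
        # are not missing at random; a Welch t-test quantifies the discrepancy.
        t_stat, p_value = stats.ttest_ind(observed, recovered, equal_var=False)
        print(f"Welch t-test p-value: {p_value:.3g}")

    In the project's setting the follow-up cases would be chosen using optimal design rather than at random, and the comparison would feed into formal tests of the MNAR assumption and into bias corrections, rather than the crude check shown here.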

  • Funder: UK Research and Innovation
    Project Code: EP/S022074/1
    Funder Contribution: 5,312,500 GBP

    The vision of this CDT is to enhance society's resilience to changes in our environment through the development of Environmental Intelligence (EI): using the integration of data from multiple inter-related sources and Artificial Intelligence (AI) to provide evidence for informed decision-making, increase our understanding of environmental challenges and provide information that is required by individuals, policy-makers, institutions and businesses. Many of the most important problems we face today are related to the environment. Climate change, healthy oceans, water security, clean air, biodiversity loss and resilience to extreme events all play a crucial role in determining our health, wealth, safety and future development. The UN's 2030 Agenda for Sustainable Development calls for a plan of action for people, planet and prosperity, aiming to take the bold and transformative steps that are urgently needed to shift the world onto a sustainable and resilient path. Developing a clear understanding of the challenges and identifying potential solutions, both for ourselves and our planet, requires high-quality, accessible, timely and reliable data to support informed decision making. Beyond quantifying the need for change and tracking developments, EI has another important role to play in facilitating change: through the integration of cutting-edge AI technology in energy, water, transport, agricultural and other environmentally related systems, and by empowering individuals, organisations and businesses through the provision of personalised information that will support behavioural change.

    Students will receive training in the range of skills they will require to become leaders in EI: (i) the computational skills required to analyse data from a wide variety of sources; (ii) environmental domain-specific expertise; (iii) an understanding of governance, ethics and the potential societal impacts of collecting, mining, sharing and interpreting data, together with the ability to communicate and engage with a diverse range of stakeholders. The training programme has been designed to be applicable to students with a diverse range of backgrounds and experiences. Graduates of the CDT will be equipped with the skills they need to become tomorrow's leaders in identifying and addressing interlinked social, economic and environmental risks. Having highly trained individuals with a wide range of expertise, together with the skills to communicate with a diverse range of stakeholders and communities, will have far-reaching impact across a wide number of sectors. Traditionally, PhD students trained in the technical aspects of AI have been distinct from those trained in policy and business implementation. This CDT will break that mould by integrating students with a diverse range of backgrounds and interests and providing them with the training, in conjunction with external partners, that will ensure that they are well versed in both cutting-edge methodology and on-the-ground policy and business implementation.

    The University of Exeter's expertise in inter- and trans-disciplinary environmental, climate, sustainability, circular economy and health research makes it uniquely placed to lead an inter-disciplinary CDT that will pioneer the use of AI in understanding the complex interactions between the environment, climate, natural ecosystems, human social and economic systems, and health. Students will benefit from the CDT's strong relationships with its external partners, including the Met Office. Many of these partners are employers of doctoral graduates in AI and see an increasing need for employees with skills from across multiple disciplines. Their involvement in the planning and ongoing management of the CDT will ensure that, in this rapidly changing domain, the CDT delivers leading-edge research that will enable partners and others to participate effectively in EI and lead to optimal employment opportunities for its graduates.

  • Funder: UK Research and Innovation
    Project Code: EP/V025961/1
    Funder Contribution: 597,263 GBP

    The field of Natural Language Processing (NLP) has made unprecedented progress over the last decade, fuelled by the introduction of increasingly powerful neural network models. These models have an impressive ability to discover patterns in training examples and to transfer these patterns to previously unseen test cases. Despite their strong performance in many NLP tasks, however, the extent to which they "understand" language is still remarkably limited. The key underlying problem is that language understanding requires a vast amount of world knowledge, which current NLP systems are largely lacking.

    In this project, we focus on conceptual knowledge and, in particular, on: (i) capturing what properties are associated with a given concept (e.g. lions are dangerous, boats can float); (ii) characterising how different concepts are related (e.g. brooms are used for cleaning, bees produce honey). Our proposed approach relies on the fact that Wikipedia contains a wealth of such knowledge. A key problem, however, is that important properties and relationships are often not explicitly mentioned in text, especially if, for a human reader, they follow straightforwardly from other information (e.g. if X is an animal that can fly then X probably has wings). Apart from learning to extract knowledge expressed in text, we thus also have to learn how to reason about conceptual knowledge.

    A central question is how conceptual knowledge should be represented. Current NLP systems rely heavily on vector representations, in which each concept is represented by a single vector. It is now well understood how such representations can be learned, and they are straightforward to incorporate into neural network architectures. However, they also have important theoretical limitations in terms of what knowledge they can capture, and they only allow for shallow and heuristic forms of reasoning. In contrast, in symbolic AI, conceptual knowledge is typically represented using facts and rules. This enables powerful forms of reasoning, but symbolic representations are harder to learn and to use in neural networks. Moreover, symbolic representations are also limited because they cannot capture aspects of knowledge that are matters of degree (e.g. similarity and typicality), which is especially restrictive when modelling commonsense knowledge.

    The solution we propose relies on a novel hybrid representation framework, which combines the main advantages of vector representations with those of symbolic methods. In particular, we will explicitly represent properties and relationships, as in symbolic frameworks, but these properties and relations will be encoded as vectors. Each concept will thus be associated with several property vectors, while pairs of related concepts will be associated with one or more relation vectors. Our vectors will thus intuitively play the same role that facts play in symbolic frameworks, with associated neural network models then playing the role of rules.

    The main output from this project will be a comprehensive resource in which conceptual knowledge is encoded in this hybrid way. We expect that our resource will play an important role in NLP, given the importance of conceptual knowledge for language understanding and its highly complementary nature to existing resources. To demonstrate its usefulness, we will focus on two challenging applications: reading comprehension and topic/trend modelling. We will also develop three case studies. In one case study, we will learn representations of companies, by using our resource to summarise the activities of companies in a semantically meaningful way. In another, we will use our resource to identify news stories that are relevant to a given theme. Finally, we will use our methods to learn semantically coherent descriptions of emerging trends in patents.
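    To make the proposed representation concrete, the short Python sketch below is purely illustrative: the class names, example concepts and random vectors are invented here and are not part of the project's resource. It shows one way explicit properties and relations could each be stored as vectors, so that a concept carries several named property vectors rather than a single embedding.

        from dataclasses import dataclass, field
        import numpy as np


        @dataclass
        class ConceptEntry:
            name: str
            # Several property vectors per concept, keyed by the property they encode.
            properties: dict[str, np.ndarray] = field(default_factory=dict)


        @dataclass
        class KnowledgeStore:
            concepts: dict[str, ConceptEntry] = field(default_factory=dict)
            # Relation vectors attached to ordered pairs of concepts.
            relations: dict[tuple[str, str], list[np.ndarray]] = field(default_factory=dict)

            def add_property(self, concept: str, prop: str, vec: np.ndarray) -> None:
                self.concepts.setdefault(concept, ConceptEntry(concept)).properties[prop] = vec

            def add_relation(self, head: str, tail: str, vec: np.ndarray) -> None:
                self.relations.setdefault((head, tail), []).append(vec)


        # Hypothetical usage, with random vectors standing in for learned encodings.
        rng = np.random.default_rng(0)
        store = KnowledgeStore()
        store.add_property("lion", "dangerous", rng.normal(size=8))
        store.add_property("boat", "floats", rng.normal(size=8))
        store.add_relation("bee", "honey", rng.normal(size=8))       # "produces"
        store.add_relation("broom", "cleaning", rng.normal(size=8))  # "used for"

        print(sorted(store.concepts["lion"].properties))  # ['dangerous']
        print(len(store.relations[("bee", "honey")]))     # 1

    In the project these vectors would be learned from text such as Wikipedia, and neural models operating on them would play the role that rules play in symbolic frameworks; the sketch only shows the shape of the hybrid representation.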

