Ocado Limited

9 Projects, page 1 of 2
  • Funder: UK Research and Innovation Project Code: EP/T027746/1
    Funder Contribution: 373,312 GBP

    It is well known from biomechanics and ergonomics research that material handling tasks in industry often cause harmful working postures, potentially leading to musculoskeletal disorders and occupational injuries. Wearable robotic systems such as supernumerary (additional) robotic limbs augment the human body with extra mobility and manipulation capabilities; they can increase efficiency in bulky material handling tasks and allow older workers to remain in their jobs. This project aims to create novel techniques to address the ergonomics and safety of supernumerary robotic limbs. A novel posture and balance support wearable robotic system will be created, and its control will be integrated with the supernumerary robotic limbs for material handling. The scope of the project is to study how the ergonomics of supernumerary limbs for material handling can be improved through additional back and balance support. The implementation will be based on creating and using innovative mechatronic technologies (soft robotic actuation and sensing; lightweight cable-driven active mechanisms; haptic feedback; human-centred interactive control) and posture assessment and data processing methods (distributed wireless sensing; cloud data storage; personalised machine-learning-based data analysis and decision-making). The outcomes of the project will have a direct impact on the UK manufacturing, logistics and agriculture industries (>15% of GDP, employing more than 10 million people), through the development and evaluation of efficient and safe material handling robotic assistive technologies.
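
    The abstract mentions personalised machine-learning-based posture assessment. Purely as an illustration of that idea, a minimal sketch of flagging risky lifting postures from wearable joint-angle features might look like the following; the features, the labelling rule, and the model choice are all hypothetical stand-ins, not the project's actual pipeline.

```python
# Hypothetical sketch: classifying harmful lifting postures from wearable
# joint-angle features. Data, feature names, and thresholds are invented
# for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic features per sample: [trunk_flexion_deg, trunk_twist_deg, load_kg]
X = np.column_stack([
    rng.uniform(0, 90, 1000),   # trunk flexion angle
    rng.uniform(0, 45, 1000),   # trunk twist angle
    rng.uniform(0, 25, 1000),   # handled load
])
# Toy labelling rule standing in for an ergonomic risk score:
# deep trunk flexion combined with a heavy load is flagged as high risk.
y = ((X[:, 0] > 60) & (X[:, 2] > 10)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```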

  • Funder: UK Research and Innovation Project Code: EP/S00453X/1
    Funder Contribution: 310,597 GBP

    Over the past 50 years, the use of robots in industry has increased monotonically, and it has boomed in the last 10 years. In 2016, the average robot density (i.e. the number of robot units per 10,000 employees) in manufacturing industries worldwide was 74; by region, this was 99 units in Europe, 84 in the Americas and 63 in Asia, with an average annual growth rate (between 2010 and 2016) of 9% in Asia, 7% in the Americas and 5% in Europe. From 2018 to 2020, global robot installations are estimated to increase by at least 15% on average per year. The main market so far has been the automotive industry (an example of heavy manufacturing), where simple and repetitive robotic manipulation tasks are performed in very controlled settings by big and expensive robots, in dedicated areas of the factories that human workers are not allowed to enter for safety reasons. New growing markets for robots are consumer electronics and food/beverages (examples of light manufacturing) as well as other small and medium-sized enterprises (SMEs): in particular, the food and beverage industry increased robot orders by 12% each year between 2011 and 2015, and by 20% in 2016. However, in many cases the production processes of these industries require delicate handling and fine manipulation of many different items, posing serious challenges to the current capabilities of commercial robotic systems. With 71 robot units per 10,000 employees (in 2016), the UK is the only G7 country with a robot density below the world average of 74, ranking 22nd.

    Industry and the SME sector are in urgent need of a modernization that would increase productivity and improve the working conditions (e.g. safety, engagement) of human workers: this requires the development and deployment of novel robotic technologies that can meet the needs of those businesses in which current robots are not yet effective. One of the main reasons why robots are not effective in those applications is the lack of robot intelligence: the ability to learn and adapt that is typical of humans. Indeed, robotic manipulation can be enhanced by relying on humans, both through interaction (humans as direct teachers) and through inspiration (humans as models). Therefore, the aim of this project is to develop a system for natural human demonstration of robotic manipulation tasks, combining immersive Virtual Reality technologies and smart wearable devices (to interface the human with the robot) with robot sensorimotor learning techniques and multimodal artificial perception (inspired by the human sensorimotor system). The robotic system will include a set of sensors that allow it to reconstruct the real world, in particular by integrating 3D vision with tactile information about contacts; the human user will access this artificial reconstruction through an immersive Virtual Reality that combines both visual and haptic feedback. In other words, the user will see through the eyes of the robot and feel through the hands of the robot. Users will also be able to move the robot just by moving their own limbs. This will allow human users to easily teach complex manipulation tasks to robots, and robots to learn efficient control strategies from the human demonstrations, so that they can repeat the task autonomously in the future.

    Human demonstration of simple robotic tasks has already found its way into industry (e.g. robotic painting, simple pick-and-place of rigid objects), but it still cannot be applied to the dexterous handling of generic objects (e.g. soft and delicate objects), which would result in much wider applicability (e.g. food handling). Therefore, the expected results of this project will boost productivity in a large number of industrial processes (economic impact) and improve the working conditions and quality of life of human workers in terms of safety and engagement (social impact).
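
    The teach-then-repeat pipeline described above is, in essence, learning from demonstration. A minimal behaviour-cloning sketch follows, with synthetic data standing in for the VR-teleoperation recordings; the observation and action dimensions, the network size, and the data itself are hypothetical illustrations, not the project's system.

```python
# Hypothetical behaviour-cloning sketch: fit a policy mapping robot
# observations to demonstrated actions. The "demonstrations" below are
# synthetic stand-ins for VR-teleoperation recordings.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Each demonstration step: an observation (e.g. object pose plus fingertip
# contact signals) paired with the commanded action (e.g. hand velocity).
obs = rng.normal(size=(5000, 9))           # 9-D observation vector
true_policy = rng.normal(size=(9, 3))      # unknown mapping to recover
acts = obs @ true_policy + 0.01 * rng.normal(size=(5000, 3))

policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                      random_state=0).fit(obs, acts)

# At run time the robot replays the learned mapping autonomously.
new_obs = rng.normal(size=(1, 9))
print("predicted action:", policy.predict(new_obs))
```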

  • Funder: UK Research and Innovation Project Code: EP/R020833/1
    Funder Contribution: 100,809 GBP

    A dexterous manipulation robotic system consists of a multifingered robotic hand and an object to be grasped and manipulated by that hand. The dexterous manipulation problem can then be defined as the act of determining how to alter the grasp of an object, through the coordinated motion of the fingers, to achieve a desired change in its position and orientation. In structured environments, that is, in worlds whose characteristics are well known in advance, solving the dexterous manipulation problem reduces to optimising hardware and software for the specific objects and constraints present in the task. Since in these cases all possible ramifications are documented, the design optimisation usually concludes that multi-degree-of-freedom robot arms with simple two-finger jaw or vacuum grippers are enough to position and orient the manipulated objects, thus avoiding the difficulties associated with implementing robot hand dexterity. In other words, fine manipulation, which refers to the manipulation of objects by small robot parts such as robotic fingers, hands and wrists, is absorbed by gross manipulation, which refers to the manipulation of objects by large robot parts such as robotic arms or other types of limbs. This is the typical situation in many current industrial applications, as demonstrated by the design of the most recent collaborative industrial robots.

    The above reasoning offers a simple explanation for the lack of multifingered dexterous robotic hands in industrial settings, an aspect that has recently been the subject of discussion in the robotic manipulation community, and it raises the question of why this technology, and research on the dexterous manipulation problem, is a pressing need. The answer is simple: solving some of the most relevant social, environmental and economic challenges of this century and beyond (e.g., efficient healthcare, coping with an ageing population, the management of megacities) requires robots that cooperate with humans to manipulate objects designed for human hands. Given the diversity and uncertainty inherent in such settings, robot manipulation technologies require the cooperation, not the absorption, of gross and fine manipulation. Solving the problem of manipulating objects dexterously in unstructured environments is therefore a must. However, despite the substantial progress made in robotics over the last 35-40 years, performing reliable dexterous manipulation under both shape diversity and shape uncertainty with a robot hand remains an open problem.

    The aim of this research is to help solve this problem and shape the next generation of robot hand technologies by investigating novel morphologies and low-level control schemes that drastically enhance the dexterous manipulation capabilities of current solutions. Specifically, this research focuses on devising robot hands based on flexible and adaptive mechanical components that generate non-trivial, predictable behaviours of the hand-object system which can be controlled in open loop, that is, without feedback control and without knowing the particularities of the object beforehand, while remaining robust to the size and shape of the object being manipulated. This novel approach, called 'trustable dexterous manipulation', departs from traditional hand-centred strategies to embrace a holistic view that takes into account the manipulated bodies without losing generality; it has the potential to redefine current practice in the design of dexterous robot hands. The success of this project will benefit researchers and practitioners working on technologies that involve robots collaborating with humans in dynamic and uncertain settings across multiple domains, including agriculture, healthcare, manufacturing and extreme environments.
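
    The problem definition above has a standard textbook kinematic formulation: fingertip contact velocities must agree on the hand side and the object side, giving J(q) qdot = G^T xdot, where J is the hand Jacobian, G the grasp matrix, and xdot the object twist. The sketch below illustrates this relation numerically; the matrices are random placeholders, not a model of any real hand, and the dimensions are chosen only for the example.

```python
# Illustrative sketch of the standard grasp kinematics constraint
# J(q) qdot = G^T xdot (J: hand Jacobian, G: grasp matrix, xdot: object
# twist). The matrices below are random placeholders, not a real hand model.
import numpy as np

rng = np.random.default_rng(2)

n_contacts, n_joints = 3, 9        # e.g. three fingertips, nine joints
J = rng.normal(size=(3 * n_contacts, n_joints))   # hand Jacobian
G = rng.normal(size=(6, 3 * n_contacts))          # grasp matrix

qdot = rng.normal(size=n_joints)   # coordinated finger joint velocities

# Least-squares estimate of the object twist (linear + angular velocity)
# induced by the finger motion: solve G^T xdot ~= J qdot.
xdot, *_ = np.linalg.lstsq(G.T, J @ qdot, rcond=None)
print("object twist [vx vy vz wx wy wz]:", np.round(xdot, 3))
```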

  • Funder: UK Research and Innovation Project Code: EP/R031193/1
    Funder Contribution: 303,126 GBP

    How do you grasp a bottle of milk, nestling behind some yoghurt pots, within a cluttered fridge? Whilst humans are able to use visual information to plan and select such skilled actions with external objects with great ease and rapidity - a facility acquired over the history of the species and as a child develops - *robots struggle*. Indeed, whilst artificial intelligence has made great leaps in beating the best of humanity at tasks such as chess and Go, the planning and execution abilities of today's robotic technology are trumped by the average toddler. Given the complex and unpredictable world within which we find ourselves situated, these apparently trivial tasks are the product of highly sophisticated neural computations that generalise and adapt to changing situations, continually engaging in a process of selecting between multiple goals and action options. Our aim is to investigate how such computations could be transferred to robots to enable them to manipulate objects more efficiently, in a more human-like way than is presently the case, and to perform manipulation presently beyond the state of the art.

    Let us return to the fridge example. You need first to decide which yoghurt pot is best to remove to allow access to the milk bottle, and then generate the appropriate movements to grasp the pot safely - the *pre-contact* phase of prehension. You then need to decide what type of forces to apply to the pot (push it to the left or the right, nudge it, or possibly lift it up and place it on another shelf, etc.), i.e. the *contact* phase. Whilst these steps happen with speed and automaticity in real time, we will probe these processes in controlled laboratory situations to systematically examine the pre-contact and contact phases of prehension, and determine what factors (spatial position, size of pot, texture of pot, etc.) bias humans to choose one action (or series of actions) over other possibilities.

    We hypothesise that we can extract a set of high-level rules, expressed using qualitative spatio-temporal formalisms, which capture the essence of such expertise, in combination with more quantitative lower-level representations and reasoning. We will develop a computational model to provide a formal foundation for testing hypotheses about the factors biasing behaviour, and ultimately use this model to predict the behaviour most likely to occur in response to a given perceptual (visual) input in this context. We reason that a computational understanding of how humans perform these actions can bridge the robot-human skill gap. State-of-the-art robot motion/manipulation planners use probabilistic methods (random sampling, e.g. RRTs and PRMs, is the dominant motion planning approach in the field today). Hence, planners are not able to explain their decisions, similar to the "black box" machine learning methods mentioned in the call, which produce inscrutable models. However, if robots can generate human-like interactions with the world, and if they can use knowledge of human action selection for planning, then this would allow robots to explain why they perform manipulations in a particular way, and would also facilitate "legible manipulation" - i.e. action which is predictable by humans because it closely corresponds to how humans would behave - a goal of some recent research in the robotics community.

    The work will shed light on the use of perceptual information in the control of action - a topic of great academic interest - while having direct relevance to a number of practical problems facing roboticists seeking to control robots in cluttered environments: from a robot picking items in a warehouse, to novel surgical technologies requiring discrimination between healthy and cancerous tissue.
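
    Since the abstract names RRTs explicitly, a minimal 2D RRT sketch is given below to make the "random sampling" idea concrete. The workspace bounds, circular obstacle, goal, and step size are invented for illustration, and the collision check is deliberately simplified.

```python
# Minimal 2D RRT sketch illustrating the sampling-based planners (RRTs)
# mentioned above; workspace, obstacle, and goal are invented examples.
import math
import random

random.seed(0)
STEP = 0.5
GOAL = (9.0, 9.0)
OBSTACLE = (5.0, 5.0, 1.5)   # circular obstacle: (cx, cy, radius)

def collision_free(p):
    # Point check only; a full planner would also check the connecting edge.
    cx, cy, r = OBSTACLE
    return math.hypot(p[0] - cx, p[1] - cy) > r

def steer(near, rand):
    # Move from the nearest node towards the sample, at most STEP far.
    d = math.hypot(rand[0] - near[0], rand[1] - near[1])
    t = min(1.0, STEP / d) if d > 0 else 0.0
    return (near[0] + t * (rand[0] - near[0]), near[1] + t * (rand[1] - near[1]))

nodes, parent = [(0.0, 0.0)], {0: None}
for _ in range(5000):
    rand = (random.uniform(0, 10), random.uniform(0, 10))
    i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], rand))
    new = steer(nodes[i], rand)
    if collision_free(new):
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, GOAL) < STEP:      # close enough: extract the path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            print(f"found path with {len(path)} waypoints")
            break
```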

  • Funder: UK Research and Innovation Project Code: ES/W010542/1
    Funder Contribution: 510,280 GBP

    It is becoming clear that robotics will be an integral part of the design, planning and operation of future cities and urban infrastructure. This is most evident in the development of driverless cars and drones, but there is potential for a much broader application of robotics in the delivery of goods and the management of people. The use of robots in the public realm of cities has previously been constrained by technological limitations and concerns about human safety. However, that is changing rapidly as the technology develops and governments recognize the potential social and environmental benefits. Interest in urban robotics has certainly increased because of COVID-19 and the potential of robotics to provide essential goods and services with reduced human contact. There could be significant public benefits from using robotics in the public realm, but also social and ethical concerns about employment impacts and extended surveillance and social control, especially when robotics is combined with facial recognition and profiling.

    Despite this growing interest, research on the wider urban impacts of robotics has so far been limited. The aim of the proposed project is to fill that gap by undertaking new research on the unfolding development of urban robotics in the UK and internationally. The proposal is therefore for an internationally leading 30-month research project to help understand the potential impacts of urban robotics and provide the knowledge needed to inform public policy and academic research at this critical phase in its development. That includes supporting the development of urban robotic technology and services in the UK by linking social science and robotic engineering, and understanding how innovation is shaped by opportunities for real-world testing. The research will include (i) a review of international urban robotic research and development; (ii) a detailed analysis of the context for urban robotic innovation in the UK; (iii) case studies of urban robotic experiments in the USA (San Francisco), Australia (Brisbane) and Japan (Yokohama); and (iv) a structured programme of policy support and awareness-raising. The research will lead to a landmark book and other publications that will help define and develop this new and important field of interdisciplinary study.
