
Ocado Group
9 Projects, page 1 of 2
Project (2019 - 2022)
Partners: Shadow Robot Company Ltd, Queen Mary University of London, Ocado Group, DeepMind Technologies Limited, Ocado Limited
Funder: UK Research and Innovation. Project Code: EP/S00453X/1. Funder Contribution: 310,597 GBP

Over the past 50 years, the use of robots in industry has increased monotonically, and it has boomed in the last 10 years. In 2016, the average robot density (i.e. number of robot units per 10,000 employees) in the manufacturing industries worldwide was 74; by region, this was 99 units in Europe, 84 in the Americas and 63 in Asia, with an average annual growth rate (between 2010 and 2016) of 9% in Asia, 7% in the Americas and 5% in Europe. From 2018 to 2020, global robot installations are estimated to increase by at least 15% on average per year. The main market so far has been the automotive industry (an example of heavy manufacturing), where simple and repetitive robotic manipulation tasks are performed in very controlled settings by big and expensive robots, in dedicated areas of the factories that human workers are not allowed to enter for safety reasons. New growing markets for robots are consumer electronics and food/beverages (examples of light manufacturing) as well as other small and medium-sized enterprises (SMEs): in particular, the food and beverage industry increased robot orders by 12% each year between 2011 and 2015, and by 20% in 2016. However, in many cases the production processes of these industries require delicate handling and fine manipulation of many different items, posing serious challenges to the current capabilities of commercial robotic systems.
With 71 robot units per 10,000 employees (in 2016), the UK is the only G7 country with a robot density below the world average of 74, ranking 22nd. The industry and SME sector is highly in need of a modernization that would increase productivity and improve the working conditions (e.g. safety, engagement) of human workers: this requires the development and deployment of novel robotic technologies that can meet the needs of those businesses in which current robots are not yet effective. One of the main reasons why robots are not effective in those applications is the lack of robot intelligence: the ability to learn and adapt that is typical of humans. Indeed, robotic manipulation can be enhanced by relying on humans, both through interaction (humans as direct teachers) and through inspiration (humans as models). Therefore, the aim of this project is to develop a system for natural human demonstration of robotic manipulation tasks, combining immersive Virtual Reality technologies and smart wearable devices (to interface the human with the robot) with robot sensorimotor learning techniques and multimodal artificial perception (inspired by the human sensorimotor system). The robotic system will include a set of sensors that allow it to reconstruct the real world, in particular by integrating 3D vision with tactile information about contacts; the human user will access this artificial reconstruction through an immersive Virtual Reality that combines both visual and haptic feedback. In other words, the user will see through the eyes of the robot and feel through the hands of the robot. Users will also be able to move the robot simply by moving their own limbs. This will allow human users to easily teach complex manipulation tasks to robots, and robots to learn efficient control strategies from the human demonstrations, so that they can then repeat the task autonomously in the future.
Human demonstration of simple robotic tasks has already found its way to industry (e.g. robotic painting, simple pick-and-place of rigid objects), but it still cannot be applied to the dexterous handling of generic objects (e.g. soft and delicate objects), which would greatly widen its applicability (e.g. food handling). Therefore, the expected results of this project will boost productivity in a large number of industrial processes (economic impact) and improve the working conditions and quality of life of human workers in terms of safety and engagement (social impact).
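The core idea of the project - record what a human does, then fit a policy that imitates it - can be illustrated with a deliberately minimal, hypothetical sketch. The 1-D reaching task, the scripted "demonstrator", and the nearest-neighbour imitation policy below are all invented for illustration; they merely stand in for the sensorimotor learning techniques the project proposes:

```python
import random

# Hypothetical 1-D reaching task: state = gripper position, action = velocity.
# A scripted "demonstrator" moves the gripper toward a target at 1.0.
def demonstrator(state, target=1.0, gain=0.5):
    return gain * (target - state)

# Record (state, action) pairs from demonstrations at random start positions.
random.seed(0)
demos = [(s, demonstrator(s))
         for s in (random.uniform(-2.0, 2.0) for _ in range(200))]

# Simplest possible imitation policy: copy the action of the nearest
# demonstrated state (a stand-in for any learning-from-demonstration method).
def learned_policy(state):
    nearest = min(demos, key=lambda sa: abs(sa[0] - state))
    return nearest[1]

# Roll out the learned policy from a new start: it converges near the target,
# up to the quantisation error of the nearest-neighbour lookup.
s = -1.5
for _ in range(30):
    s += learned_policy(s)
print(round(s, 2))
```

The same structure scales up in the project's setting: the "state" becomes the multimodal (visual plus tactile) reconstruction, the "action" the teleoperated robot motion, and the lookup a proper learned model.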
Project (2021 - 2024)
Partners: Queen Mary University of London, Shadow Robot Company Ltd, Ocado Group, Adept Ergonomics, Ocado Limited
Funder: UK Research and Innovation. Project Code: EP/T027746/1. Funder Contribution: 373,312 GBP

It is well known from biomechanics and ergonomics research that material handling tasks in industry can often cause harmful working postures, potentially leading to musculoskeletal disorders and occupational injuries. Wearable robotic systems such as supernumerary (additional) robotic limbs augment human bodies with extra mobility and manipulation capabilities; they can increase efficiency in bulky material handling tasks and allow older workers to remain in their jobs. This project aims to create novel techniques to address the ergonomics and safety of supernumerary robotic limbs. A novel posture and balance support wearable robotic system will be created, and its control will be integrated with the supernumerary robotic limbs for material handling. The scope of the project is to study how the ergonomics of supernumerary limbs for material handling can be improved through additional back and balance support. The implementation will be based on creating and using innovative mechatronic technologies (soft robotic actuation and sensing; lightweight cable-driven active mechanisms; haptic feedback; human-centred interactive control) and posture assessment and data processing methods (distributed wireless sensing; Cloud data storage; personalised machine-learning-based data analysis and decision-making).
The outcomes of the project will have a direct impact on the UK manufacturing, logistics and agriculture industries (>15% of GDP, employing more than 10 million people), through the development and evaluation of efficient and safe robotic material handling assistive technologies.
Project (2023 - 2028)
Partners: QinetiQ, Naimuri, Queen Mary University of London, Ocado Group, Thales Group (UK), Thales Aerospace, The Alan Turing Institute
Funder: UK Research and Innovation. Project Code: EP/X02542X/1. Funder Contribution: 2,579,840 GBP

BEIS recently launched the Innovation Strategy, through which the Government will establish 'innovation missions' seeking to address global and UK challenges through innovation. The Government wants to focus on exploiting seven technology areas where the UK has global competitive strengths. The proposed research covers four of these seven areas: advanced materials and manufacturing; AI, digital and advanced computing; electronics, photonics and quantum; and robotics and smart machines. Together with QinetiQ, QMUL has developed a radically new and broad concept, "software defined materials (SDMs)", whose properties can be modified simply by uploading and updating computer software. The potential impact of SDMs is huge: they enable the tight integration of sensing, actuation and computation that biological systems exhibit to achieve shape and appearance changes, and tactile sensing at very high dynamic range (like birds in flight). The vision of the DREAM Partnership is therefore to unlock the fundamental research opportunities promised by SDMs through a digital transformation centred on the design and manufacture of novel electromagnetic materials for the automation and reconfigurability of future wireless systems.
The DREAM Partnership will provide added value to both organisations, including:
- Benefits for QinetiQ: refresh of their technology portfolio using state-of-the-art materials and devices; securing new business by enhancing applications in wireless communications; greater international competitiveness through innovation insertion into systems; co-development of IP for enduring benefits in multiple markets.
- Benefits for QMUL: stronger and more engaged industrial partnerships; enhanced supervision by using external specialists from QinetiQ; potential licensing income through innovation; enhanced knowledge transfer and an applications-centric focus aligned to UK industry requirements.
- Benefits for the UK: a foothold in the marketplace with a new technology; a large supply chain established through QinetiQ; access to export markets through new products, with routes to market established via QinetiQ; and positioning of the UK as a leader in a key growth sector to compete with overseas incumbents.

QMUL has agreed a property deal with the Department of Health and Social Care (DHSC) that paves the way for the development of a Whitechapel Life Sciences Cluster in East London, a truly inclusive environment with cultural diversity. We envisage that this new space will house a number of cross-faculty research centres, including the Centre of the Internet of Medical Things, which aligns strongly with areas of existing strength in the DREAM Partnership. QMUL was one of the first universities to offer degree-level apprenticeships. We have been awarded £28m to lead an Institute of Technology offering degree-level apprenticeships in data science and engineering with over 30 industrial partners. This provides QinetiQ with a ready framework to trial our pilot with people from non-academic routes. QMUL has recently established the Institute for the Digital Environment, investing £3m to establish a University Enterprise Zone incubating digital-health businesses.
This provides the space and connectivity with QinetiQ, and offers a critical mass to test our approach. We will invest a significant amount of time and effort in developing a body of innovative work on equality, diversity and inclusion (EDI) and responsible research and innovation (RRI), particularly on the impact of safe AI and digital manufacturing on the future workforce, linking with QinetiQ's ethics and code-of-conduct approaches. Finally, the DREAM Partnership will provide the UK with the opportunity not only to sustain this talented group, with its legacy of more than 50 years of antenna and electromagnetics research innovation, but also to develop technologies relevant to wireless communications and resilient infrastructures, which are beneficial to all citizens in the UK.
Project (2018 - 2021)
Partners: Dubit Limited, Shadow Robot Company Ltd, Ocado Group, University of Leeds, Ocado Limited
Funder: UK Research and Innovation. Project Code: EP/R031193/1. Funder Contribution: 303,126 GBP

How do you grasp a bottle of milk, nestling behind some yoghurt pots, within a cluttered fridge? Whilst humans are able to use visual information to plan and select such skilled actions with external objects with great ease and rapidity - a facility acquired in the history of the species and as a child develops - *robots struggle*. Indeed, whilst artificial intelligence has made great leaps in beating the best of humanity at tasks such as chess and Go, the planning and execution abilities of today's robotic technology are trumped by the average toddler. Given the complex and unpredictable world within which we find ourselves, these apparently trivial tasks are the product of highly sophisticated neural computations that generalise and adapt to changing situations: continually engaging in a process of selecting between multiple goals and action options. Our aim is to investigate how such computations could be transferred to robots to enable them to manipulate objects more efficiently, in a more human-like way than is presently the case, and to perform manipulations presently beyond the state of the art. Let us return to the fridge example: you need first to decide which yoghurt pot is best to remove to allow access to the milk bottle, and then generate the appropriate movements to grasp the pot safely - the *pre-contact* phase of prehension. You then need to decide what type of forces to apply to the pot (push it to the left or the right, nudge it, or possibly lift it up and place it on another shelf, etc.), i.e. the *contact* phase.
Whilst these steps happen with speed and automaticity in real time, we will probe these processes in laboratory-controlled situations, systematically examining the pre-contact and contact phases of prehension to determine what factors (spatial position, size of pot, texture of pot, etc.) bias humans to choose one action (or series of actions) over other possibilities. We hypothesise that we can extract a set of high-level rules, expressed using qualitative spatio-temporal formalisms, which can capture the essence of such expertise, in combination with more quantitative lower-level representations and reasoning. We will develop a computational model to provide a formal foundation for testing hypotheses about the factors biasing behaviour, and ultimately use this model to predict the behaviour most likely to occur in response to a given perceptual (visual) input in this context. We reason that a computational understanding of how humans perform these actions can bridge the robot-human skill gap. State-of-the-art robot motion/manipulation planners use probabilistic methods (random sampling, e.g. RRTs and PRMs, is the dominant motion-planning approach in the field today). Hence, planners are not able to explain their decisions, similar to the "black box" machine learning methods mentioned in the call, which produce inscrutable models. However, if robots can generate human-like interactions with the world, and if they can use knowledge of human action selection for planning, then this would allow robots to explain why they perform manipulations in a particular way, and also facilitate "legible manipulation" - i.e. action which is predictable by humans because it closely corresponds to how humans would behave, a goal of some recent research in the robotics community.
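To make the "planners cannot explain their decisions" point concrete, here is a toy sketch of an RRT (Rapidly-exploring Random Tree) in a free 2-D square, not the project's planner: the world, coordinates and parameters are all illustrative. The tree grows by extending the nearest existing node toward random samples (with a small goal bias), so the path it returns is a product of chance as much as reasoning:

```python
import math, random

random.seed(1)
start, goal = (0.0, 0.0), (9.0, 9.0)
step = 1.0                      # how far each tree extension travels
nodes = [start]
parent = {start: None}          # tree structure: child -> parent

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Grow the tree: draw a random sample (occasionally the goal itself),
# find the nearest existing node, and extend it one step toward the sample.
reached = None
for _ in range(2000):
    sample = goal if random.random() < 0.1 else \
        (random.uniform(0, 10), random.uniform(0, 10))
    near = min(nodes, key=lambda n: dist(n, sample))
    d = dist(near, sample)
    if d < 1e-9:
        continue
    new = (near[0] + step * (sample[0] - near[0]) / d,
           near[1] + step * (sample[1] - near[1]) / d)
    nodes.append(new)
    parent[new] = near
    if dist(new, goal) < step:  # close enough to the goal: stop
        reached = new
        break

# Recover the path by walking the parent links back to the start.
path, n = [], reached
while n is not None:
    path.append(n)
    n = parent[n]
path.reverse()
print(len(path))
```

The recovered path depends on the random seed, not on any human-legible rule about the scene, which is exactly the contrast the project draws with rule-based, explainable action selection.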
The work will shed light on the use of perceptual information in the control of action - a topic of great academic interest - and simultaneously has direct relevance to a number of practical problems facing roboticists seeking to control robots in cluttered environments: from a robot picking items in a warehouse to novel surgical technologies requiring discrimination between healthy and cancerous tissue.
Project (2018 - 2019)
Partners: University of Bristol, The Shadow Robot Company, Ocado Group, Imperial College London, Ocado Limited
Funder: UK Research and Innovation. Project Code: EP/R020833/1. Funder Contribution: 100,809 GBP

A multifingered robotic hand and an object to be grasped and manipulated by that hand are the components of a dexterous manipulation robotic system. The dexterous manipulation problem can then be defined as the act of determining how to alter the grasp of an object, through the coordinated motion of the fingers, to reach a desired change in its position and orientation. In structured environments, that is, in worlds whose characteristics are well known in advance, solving the dexterous manipulation problem reduces to optimising hardware and software for the specific objects and constraints present in the task at hand. Since in these cases all possible ramifications are documented, the design optimisation usually concludes that multi-degree-of-freedom robot arms with simple two-finger jaw or vacuum grippers are enough to position and orient the manipulated objects, thus avoiding the difficulties associated with implementing robot hand dexterity. In other words, as a result of this analysis, fine manipulation, which refers to the manipulation of objects by small robot parts such as robotic fingers, hands and wrists, is absorbed by gross manipulation, which refers to the manipulation of objects by large robot parts such as robotic arms or other types of limbs. This is certainly the typical situation observed in many current industrial applications, as demonstrated by the design of the most recent collaborative industrial robots.
The above reasoning gives a simple explanation for the lack of multifingered dexterous robotic hands in industrial settings, an aspect that has recently been the subject of discussion in the robotic manipulation community, and clearly raises the question of why this technology and research on the dexterous manipulation problem are nevertheless a pressing need. The answer is simple: the solution of some of the most relevant social, environmental and economic challenges of this century and beyond (e.g. efficient healthcare, coping with an ageing population, management of megacities) requires robots that cooperate with humans to manipulate objects designed for human hands. Thus, given the diversity and uncertainty inherent in such settings, robot manipulation technologies require the cooperation, not the absorption, of gross manipulation and fine manipulation. Solving the problem of manipulating objects dexterously in unstructured environments is then a must. However, despite the substantial progress made in robotics over the last 35-40 years, performing reliable dexterous manipulation operations under both shape diversity and shape uncertainty with a robot hand is still an open question. The aim of this research is to help solve this problem and shape the next generation of robot hand technologies by investigating novel morphologies and low-level control schemes that drastically enhance the dexterous manipulation capabilities of current solutions. Specifically, this research focuses on devising robot hands based on flexible and adaptive mechanical components that generate non-trivial, predictable behaviours of the hand-object system which can be controlled in open loop, that is, without feedback control and without knowing the particularities of the object beforehand, while still being robust to the size or shape of the object being manipulated.
This novel approach, called 'trustable dexterous manipulation', departs from traditional hand-centred strategies to embrace a holistic view that takes the manipulated bodies into account without losing generality; it has the potential to redefine current practice in the design of dexterous robot hands. The success of this project will benefit researchers and practitioners working on technologies that involve robots collaborating with humans in dynamic and uncertain settings across multiple domains, including agriculture, healthcare, manufacturing and extreme environments.