Powered by OpenAIRE graph

MOA: High Efficiency Deep Learning for Embedded and Mobile Platforms (Full EPSRC Fellowship Submission)

Funder: UK Research and Innovation
Project code: EP/S001530/1
Funded under: EPSRC
Funder contribution: 608,250 GBP


Description

In just a few short years, breakthroughs from the field of deep learning have transformed how computers perform a wide variety of tasks, such as recognizing a face, tracking emotions or monitoring physical activities. Unfortunately, the models and algorithms used by deep learning typically exert severe energy, memory and compute demands on local device resources, and this conventionally limits their adoption within mobile and embedded devices. Data perception and understanding tasks powered by deep learning are so fundamental to platforms like phones, wearables and home/industrial sensors that we must reach a point where current -- and future -- innovations in this area can be simply and efficiently integrated within even such resource-constrained systems. This research vector will lead directly to outcomes such as brand new types of sensor-based products in the home and workplace, and will increase the intelligence not only of consumer devices but also of systems in fields like medicine (smart stethoscopes) and autonomous systems (robotics/drones). The MOA fellowship aims to fund the basic research, development and eventual commercialization (through collaborations with a series of industry partners) of algorithms that enable general support for deep learning techniques on resource-constrained mobile and embedded devices. Primarily, this requires a radical reduction in the resources (viz. energy, memory and computation) consumed by these computational models -- especially at inference (i.e., execution) time. The proposal has two main thrusts.
First, build upon the PI's existing work towards this goal, which includes: sparse intra-model layer representations (resulting in small models), dynamic forms of compression (models that can be squeezed smaller or larger as needed), and scheduling of partitioned model architectures (splitting a model and running each part on whichever processor inside a mobile/embedded device suits that fraction best). This thrust will re-examine these methods to solve the key remaining issues that prevent such techniques from being used within products and as part of common practice. Second, investigate a new set of ambitious directions that seek to increase the utilization of emerging purpose-built small-form-factor hardware accelerators designed for deep learning algorithms (these accelerators are suitable for use within phones, wearables and drones). Like any piece of hardware, however, an accelerator is limited by how it is programmed, and the software toolchains that map deep learning models to accelerator hardware remain in their infancy. Our preliminary results show that existing approaches to optimizing deep models, conceived first for conventional processors (e.g., DSPs, GPUs, CPUs), make poor use of the new capabilities of these hardware accelerators. We will develop important new approaches that modify the representations and inference algorithms used within deep learning so that they can fully utilize the new hardware capabilities. Directions include: mixed-precision models and algorithms, low-data-movement representations (that can trade memory operations for compute), and enhanced parallelization.
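To make the first thrust concrete, the sketch below illustrates magnitude-based weight pruning, one common way to obtain the kind of sparse layer representations mentioned above. This is a generic illustration, not the fellowship's actual method; the function name and the simple global-threshold rule are assumptions made for the example.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight matrix.

    `sparsity` is the fraction of weights to remove (0.0-1.0).
    A sparse layer like this can be stored compactly (e.g., in CSR
    form), shrinking the model's memory footprint on-device.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune 90% of a random 256x256 layer.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256))
pruned = magnitude_prune(w, 0.9)
```

In practice, pruned models are usually fine-tuned afterwards to recover accuracy; the sketch shows only the sparsification step itself.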
