
Energy-Efficient Hardware Accelerator for Embedded Deep Learning

In this joint project, we aim to decrease the power consumption and computation load of the current image processing platform by employing the concept of computation reuse.

Start

2019-01-01

Research area

Research focus

Project manager at MDU


Goals of the project

This joint project aims to decrease the power consumption and computation load of the current image processing platform by employing the concept of computation reuse. Computation reuse means temporarily storing the result of a recent arithmetic operation and reusing it for anticipated subsequent operations with the same operands. Our proposal is motivated by the high degree of redundancy we observed in the arithmetic operations of neural networks, where we show that approximate computation reuse can eliminate up to 94% of the arithmetic operations of simple neural networks. This leads to a reduction in power consumption of up to 80%, which directly translates into a considerable increase in battery lifetime. In two UT-MDU joint works, we further presented a mechanism for building large neural networks by connecting basic units.
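The idea can be illustrated with a small software sketch rather than the project's actual hardware design: a reuse table keyed on quantized operands serves repeated multiplications from memory instead of recomputing them. The class name, table capacity, quantization step, and toy operand distributions below are illustrative assumptions, not details taken from the project.

```python
# Minimal sketch of approximate computation reuse for multiplications,
# assuming operands are quantized before the lookup so that nearly equal
# operand pairs map to the same table entry. All parameters are illustrative.
import random

class ReuseMultiplier:
    def __init__(self, frac_bits=3, capacity=256):
        self.scale = 1 << frac_bits      # quantization step for approximate matching
        self.capacity = capacity         # size of the reuse table (models a small cache)
        self.table = {}                  # (quantized a, quantized b) -> cached product
        self.hits = 0
        self.total = 0

    def multiply(self, a, b):
        self.total += 1
        qa = round(a * self.scale)       # quantize both operands
        qb = round(b * self.scale)
        key = (qa, qb)
        if key in self.table:
            self.hits += 1               # reuse: skip the arithmetic operation
            return self.table[key]
        result = a * b                   # miss: perform the multiplication
        if len(self.table) >= self.capacity:
            self.table.pop(next(iter(self.table)))  # evict the oldest entry (FIFO)
        self.table[key] = result
        return result

# Toy usage: a multiply-accumulate over operands drawn from a narrow value range,
# mimicking the operand redundancy observed in neural-network layers.
random.seed(0)
mul = ReuseMultiplier()
weights = [random.choice([-0.5, -0.25, 0.25, 0.5]) for _ in range(1000)]
acts = [random.choice([0.0, 0.25, 0.5, 0.75, 1.0]) for _ in range(1000)]
acc = sum(mul.multiply(w, x) for w, x in zip(weights, acts))
print(f"reuse rate: {mul.hits / mul.total:.0%}")
```

Because the toy operands take only a handful of distinct values, most multiplications hit the reuse table, which is the software analogue of the redundancy that makes eliminating a large fraction of arithmetic operations plausible in quantized neural-network layers.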