


2nd EMC2@HPCA 2019: Washington, DC, USA
- 2nd Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications, EMC2@HPCA 2019, Washington, DC, USA, February 17, 2019. IEEE 2019, ISBN 978-1-7281-6763-3

- Partha Maji, Andrew Mundy, Ganesh Dasika, Jesse G. Beu, Matthew Mattina, Robert D. Mullins: Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs. 1-5
- Sek M. Chai, Kilho Son, Jesse Hostetler: Bootstrapping Deep Neural Networks from Approximate Image Processing Pipelines. 6-10
- Xinfeng Xie, Xing Hu, Peng Gu, Shuangchen Li, Yu Ji, Yuan Xie: NNBench-X: A Benchmarking Methodology for Neural Network Accelerator Designs. 11-15
- Cheng-En Wu, Yi-Ming Chan, Chu-Song Chen: On Merging MobileNets for Efficient Multitask Inference. 16-20
- Farzad Farshchi, Qijing Huang, Heechul Yun: Integrating NVIDIA Deep Learning Accelerator (NVDLA) with RISC-V SoC on FireSim. 21-25
- Urmish Thakker, Jesse G. Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina: Run-Time Efficient RNN Compression for Inference on Edge Devices. 26-30
- Sree Harsha Nelaturu, Ziheng Wang, Saman P. Amarasinghe: Accelerated CNN Training through Gradient Approximation. 31-35
- Dawit Aboye, Dylan Kupsh, Maggie Lim, Jacqueline Mai, Deeksha Dangwal, Diba Mirza, Timothy Sherwood: PyRTLMatrix: An Object-Oriented Hardware Design Pattern for Prototyping ML Accelerators. 36-40
