17th INTERNATIONAL FORUM ON MPSoC
for software-defined hardware
Speaker's Profile
Frédéric Pétrot
Professor at TIMA Lab - Grenoble University, France
Scalable High-Performance Architecture for Convolutional Ternary Neural Networks
Abstract
Thanks to their excellent performance on typical artificial intelligence problems, deep neural networks have drawn a lot of interest lately. However, this comes at the cost of large computational needs and high power consumption. Reaching high accuracy on these difficult problems at an acceptable hardware cost is a challenge. To address it, we advocate the use of ternary neural networks (TNN) that, when properly trained, can reach results close to those of state-of-the-art floating-point implementations. We present a highly versatile architecture for TNN in which both the number of bits of the input data and the level of parallelism can be varied at synthesis time, allowing throughput to be traded for hardware resources and power consumption, and we demonstrate its efficiency with FPGA and ASIC implementations.
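To give an intuition of why ternary weights map so well to hardware, the minimal Python sketch below illustrates the general idea; the magnitude-threshold quantization rule and the function names are illustrative assumptions, not the specific training scheme or architecture presented in the talk. With weights restricted to {-1, 0, +1}, every multiply in a dot product collapses into an addition, a subtraction, or a skip, so no hardware multipliers are needed.

    import numpy as np

    def ternarize(weights, threshold=0.05):
        """Quantize real-valued weights to {-1, 0, +1} with a simple
        magnitude threshold (an illustrative rule only)."""
        t = np.zeros_like(weights, dtype=np.int8)
        t[weights > threshold] = 1
        t[weights < -threshold] = -1
        return t

    def ternary_dot(inputs, tweights):
        """Dot product with ternary weights: each term is an add,
        a subtract, or nothing, which keeps the datapath compact."""
        acc = 0
        for x, w in zip(inputs, tweights):
            if w == 1:
                acc += x
            elif w == -1:
                acc -= x
            # w == 0: the term is skipped entirely
        return acc

In a hardware realization, the accumulator width and the input bit width become synthesis-time parameters, which is what allows throughput to be traded against area and power as described above.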
Biography
Frédéric Pétrot received the PhD degree in Computer Science from Université Pierre et Marie Curie (Paris VI), Paris, France, in 1994, where he was an Assistant Professor in Computer Science until September 2004. From 1989 to 1996, F. Pétrot was one of the main contributors to the open-source Alliance VLSI CAD system, and from 1996 to 2004 he led a team focusing on the specification, simulation, and implementation of multiprocessor SoCs. He joined TIMA in September 2004, where he holds a professor position at Grenoble Institute of Technology, France. Since 2006, he has headed the System Level Synthesis group of TIMA. His research interests are in multiprocessor system-on-chip architectures, including both circuit and software aspects, and in CAD tools for the design and evaluation of hardware/software systems.