Neural Architecture Search (NAS) methodologies aim to find efficient Deep Neural Network (DNN) models for a given application under given system constraints. DNNs are compute-intensive as well as vulnerable to adversarial attacks. To address multiple design objectives, we propose HARNAS, a novel NAS framework that jointly optimizes the hardware efficiency and adversarial robustness of DNNs executed on specialized hardware accelerators. Besides traditional convolutional DNNs, HARNAS extends the search to more complex DNN types such as Capsule Networks. To reduce the exploration time, HARNAS selects appropriate values of the adversarial perturbations employed in the NAS algorithm. Our evaluations provide a set of Pareto-optimal solutions that leverage the tradeoffs between the above-discussed design objectives.
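The final step described above, selecting the set of Pareto-optimal solutions across competing objectives, can be illustrated with a minimal sketch. The candidate names, cost, and robustness values below are hypothetical placeholders, not results from the paper; the sketch only shows the standard Pareto-dominance filter that such a multi-objective NAS would apply.

```python
def pareto_front(candidates):
    """Keep candidates not dominated by any other candidate.

    Each candidate is (name, hw_cost, robustness), where lower
    hardware cost and higher adversarial robustness are better.
    A candidate is dominated if another one is at least as good
    in both objectives and strictly better in at least one.
    """
    front = []
    for name, cost, rob in candidates:
        dominated = any(
            c <= cost and r >= rob and (c < cost or r > rob)
            for _, c, r in candidates
        )
        if not dominated:
            front.append((name, cost, rob))
    return front

# Hypothetical architectures: (name, hardware cost, robust accuracy)
archs = [
    ("A", 10.0, 0.60),
    ("B", 12.0, 0.72),
    ("C", 15.0, 0.70),  # dominated by B: costlier and less robust
    ("D", 20.0, 0.80),
]
print(pareto_front(archs))  # A, B, and D survive; C is filtered out
```

In a real NAS loop, the cost objective would come from an accelerator latency/energy model and the robustness objective from evaluating each candidate under the selected adversarial perturbations.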
Author Information
Alberto Marchisio (Technische Universität Wien)
Vojtech Mrazek (Brno University of Technology)
Andrea Massa (Politecnico di Torino)
Beatrice Bussolino (Politecnico di Torino)
Maurizio Martina (Politecnico di Torino)
Muhammad Shafique (New York University Abu Dhabi)