Waseda University Research Activities

DSE-Based Hardware Trojan Attack for Neural Network Accelerators on FPGAs (Published in IEEE Transactions on Neural Networks and Learning Systems, October 2024)

Journal Title
IEEE Transactions on Neural Networks and Learning Systems
Publication Year and Month
October 2024
Paper Title
DSE-Based Hardware Trojan Attack for Neural Network Accelerators on FPGAs
DOI
10.1109/TNNLS.2024.3482364
Author of Waseda University
SHI, Youhua (Professor, Faculty of Science and Engineering, School of Fundamental Science and Engineering): Corresponding Author
Related Websites
Abstract
Over the past few years, the emergence and development of design space exploration (DSE) have shortened the deployment cycle of deep neural networks (DNNs). As a result, with these open-source DSE frameworks, we can automatically compute the optimal configuration and generate the corresponding accelerator intellectual properties (IPs) from pretrained neural network models and hardware constraints. However, to date, the security of DSE has received little attention. Therefore, we explore this issue from an adversarial perspective and propose an automated hardware Trojan (HT) generation framework embedded within DSE. The framework uses an evolutionary algorithm (EA) to analyze user-input data and automatically generate the attack code before placing it in the final output accelerator IPs. The proposed HT is sufficiently stealthy and suitable for both single- and multi-field-programmable gate array (FPGA) designs. It can also implement controlled accuracy-degradation attacks and specified-category attacks. We conducted experiments on LeNet, VGG-16, and YOLO. For the LeNet model trained on the CIFAR-10 dataset, attacking only one kernel caused 97.3% of images to be classified into the category specified by the adversary and reduced accuracy by 59.58%. For the VGG-16 model trained on the ImageNet dataset, attacking eight kernels caused up to 96.53% of images to be classified into the adversary-specified category and reduced the model's accuracy to 2.5%. For the YOLO model trained on the PASCAL VOC dataset, attacking eight kernels caused the model to identify targets as the specified category and introduced slight perturbations to the bounding boxes. Compared to uncompromised designs, the look-up table (LUT) overhead of the proposed HT design does not exceed 0.6%.
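The abstract names two moving parts: a payload that corrupts individual convolution kernels, and an evolutionary search that decides where to place it. The following is a minimal software-only sketch of that idea in PyTorch; everything in it (the TinyCNN model, the constant-overwrite payload in trojan_kernel, the targeted_rate fitness function, and the population/mutation scheme) is an illustrative assumption, not the paper's actual DSE framework or HT design.

```python
# Hypothetical sketch: emulate the *effect* of a kernel-level hardware Trojan
# in software, and use a toy evolutionary search to choose which kernel to
# corrupt. Model, data, payload, and fitness are illustrative assumptions.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyCNN(nn.Module):
    """Toy stand-in for a pretrained victim model (e.g., a LeNet-scale CNN)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)  # 8 attackable kernels
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Flatten(),
                                  nn.Linear(8, num_classes))

    def forward(self, x):
        return self.head(torch.relu(self.conv(x)))

def trojan_kernel(model, k, scale):
    """Emulate the HT payload: overwrite one convolution kernel with a constant."""
    with torch.no_grad():
        model.conv.weight[k] = scale

def targeted_rate(model, x, target):
    """Fitness: fraction of inputs classified into the adversary's category."""
    with torch.no_grad():
        return (model(x).argmax(dim=1) == target).float().mean().item()

base = TinyCNN().eval()              # plays the role of the pretrained model
x = torch.randn(256, 3, 32, 32)      # surrogate for the user-input data
target = 3                           # adversary-specified category

# Toy evolutionary loop over (kernel index, payload magnitude) candidates:
# score each candidate, keep the fittest seen so far, mutate the survivors.
population = [(k, s) for k in range(8) for s in (1.0, 5.0, 25.0)]
best_fit, best_cand = -1.0, None
for generation in range(3):
    scored = []
    for k, s in population:
        victim = copy.deepcopy(base)
        trojan_kernel(victim, k, s)
        scored.append((targeted_rate(victim, x, target), k, s))
    scored.sort(reverse=True)
    if scored[0][0] > best_fit:
        best_fit, best_cand = scored[0][0], scored[0][1:]
    # Mutation: perturb the payload magnitude of the four fittest candidates.
    population = [(k, s * m) for _, k, s in scored[:4] for m in (0.5, 1.0, 2.0)]

print(f"best (kernel, scale) = {best_cand}, targeted-class rate = {best_fit:.1%}")
```

Note that this only models the functional effect of the attack at the software level; per the abstract, the actual framework embeds the generated attack code in the output accelerator IPs, where it hides in hardware at an LUT overhead below 0.6%.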