Security of approximate neural networks against power side-channel attacks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Emerging low-energy computing technologies, in particular approximate computing, are becoming increasingly relevant in key applications. A significant use case for these technologies is reducing the energy consumption of Artificial Neural Networks (ANNs), an increasingly pressing concern with the rapid growth of AI deployments. It is essential that we understand the security implications of approximate computing in an ANN context before the practice becomes commonplace. In this work, we examine the test case of approximate ANN processing elements (PEs) in terms of information leakage via the power side channel. We perform a weight-extraction Correlation Power Analysis (CPA) attack under three approximation scenarios: overclocking, voltage scaling, and circuit-level bitwise approximation. We demonstrate that as the degree of approximation increases, the Signal-to-Noise Ratio (SNR) of the power traces rapidly degrades. We show that the Measurement to Disclosure (MTD) increases for all approximation techniques: an MTD of 48 under precise computing rises to at minimum 200 (bitwise approximate circuit at 25% approximation) and, under some approximation scenarios, to more than 1024, i.e. an increase in attack difficulty of at least 4× and potentially over 20×. A relative Security-Power-Delay (SPD) analysis reveals that, in addition to the across-the-board improvement over precise computing, voltage and clock scaling both significantly outperform bitwise approximate circuits, with voltage scaling being the highest-performing technique.
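As a rough illustration of the attack model the abstract describes, the following sketch shows a single CPA weight-recovery step: each candidate weight's hypothetical leakage (here a Hamming-weight model of the PE's multiply output, an illustrative assumption rather than the paper's exact leakage model) is correlated against the measured power samples, and the candidate with the highest absolute correlation is taken as the guess. All function names and the 8-bit datapath are hypothetical.

```python
# Minimal CPA weight-extraction sketch (illustrative, not the paper's setup).
from statistics import mean

def hamming_weight(x: int) -> int:
    # Number of set bits in the low byte of x.
    return bin(x & 0xFF).count("1")

def pearson(xs, ys):
    # Pearson correlation coefficient; 0.0 if either series is constant.
    mx, my = mean(xs), mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def cpa_recover_weight(inputs, traces, candidates=range(256)):
    # For each candidate weight, model the leakage of the multiply output
    # and correlate it with the observed trace samples; the candidate with
    # the strongest absolute correlation is the best guess.
    return max(
        candidates,
        key=lambda w: abs(pearson(
            [hamming_weight((x * w) & 0xFF) for x in inputs],
            traces)))

# Synthetic, noiseless demonstration: traces leak HW of (input * secret).
inputs = list(range(1, 40))
secret = 42
traces = [hamming_weight((x * secret) & 0xFF) for x in inputs]
print(cpa_recover_weight(inputs, traces))
```

In practice the traces would contain measurement noise, and the paper's result is that approximation (overclocking, voltage scaling, bitwise approximate circuits) degrades the SNR of these traces, so many more measurements (a higher MTD) are needed before the correct weight's correlation peak separates from the wrong candidates.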
Original language: English
Title of host publication: 2025 62nd ACM/IEEE Design Automation Conference (DAC): Proceedings
Publisher: IEEE Xplore
ISBN (Electronic): 9798331503048
ISBN (Print): 9798331503055
Publication status: Published - 15 Sept 2025
Event: 2025 Design Automation Conference - San Francisco, United States
Duration: 22 Jun 2025 - 25 Jun 2025

Publication series

Name: Design Automation Conference: Proceedings
ISSN (Print): 0738-100X

Conference

Conference: 2025 Design Automation Conference
Country/Territory: United States
City: San Francisco
Period: 22/06/2025 - 25/06/2025

Publications and Copyright Policy

This work is licensed under Queen’s Research Publications and Copyright Policy.

Keywords

  • security
  • approximate neural networks
  • power side-channel

