BrainLeaks: on the privacy-preserving properties of neuromorphic architectures against model inversion attacks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

With the mainstream integration of machine learning into security-sensitive domains such as healthcare and finance, concerns about data privacy have intensified. Conventional artificial neural networks (ANNs) have been found vulnerable to several attacks that can leak sensitive data. In particular, model inversion (MI) attacks enable the reconstruction of data samples that were used to train the model. Neuromorphic architectures have emerged as a paradigm shift in neural computing, enabling asynchronous and energy-efficient computation. However, little to no existing work has investigated the privacy of neuromorphic architectures against model inversion. Our study is motivated by the intuition that the non-differentiable aspect of spiking neural networks (SNNs) might result in inherent privacy-preserving properties, especially against gradient-based attacks. To investigate this hypothesis, we propose a thorough exploration of SNNs' privacy-preserving capabilities. Specifically, we develop novel inversion attack strategies that are comprehensively designed to target SNNs, offering a comparative analysis with their conventional ANN counterparts. Our experiments, conducted on diverse event-based and static datasets, demonstrate the effectiveness of the proposed attack strategies, thereby calling into question the assumption of inherent privacy preservation in neuromorphic architectures.
Original language: English
Title of host publication: 2024 International Conference on Machine Learning and Applications (ICMLA): Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 705-712
ISBN (Electronic): 9798350374889
ISBN (Print): 9798350374896
Publication status: Published - 04 Mar 2025
Event: 2024 International Conference on Machine Learning and Applications (ICMLA) - Hyatt Regency Coral Gables, Miami, United States
Duration: 18 Dec 2024 - 20 Dec 2024
https://www.icmla-conference.org/icmla24/

Publication series

Name: International Conference on Machine Learning and Applications (ICMLA): Proceedings
ISSN (Print): 1946-0740
ISSN (Electronic): 1946-0759

Conference

Conference: 2024 International Conference on Machine Learning and Applications (ICMLA)
Abbreviated title: ICMLA24
Country/Territory: United States
City: Miami
Period: 18/12/2024 - 20/12/2024
Internet address: https://www.icmla-conference.org/icmla24/

Publications and Copyright Policy

This work is licensed under Queen’s Research Publications and Copyright Policy.

Keywords

  • BrainLeaks
  • privacy-preserving properties
  • neuromorphic
  • model inversion attacks

