Are neuromorphic architectures inherently privacy-preserving? An exploratory study

Ayana Moshruba, Ihsen Alouani, Maryam Parsa

Research output: Contribution to journal › Article › peer-review

Abstract

While machine learning (ML) models are becoming mainstream, including in critical application domains, concerns have been raised about the increasing risk of sensitive data leakage. Various privacy attacks, such as membership inference attacks (MIAs), have been developed to extract sensitive information from trained ML models, posing significant risks to data confidentiality. While the predominant work in the ML community treats traditional Artificial Neural Networks (ANNs) as the default neural model, neuromorphic architectures, such as Spiking Neural Networks (SNNs), have recently emerged as an attractive alternative, mainly due to their significantly lower power consumption. These architectures process information through discrete events, i.e., spikes, mimicking the functioning of biological neurons in the brain. While privacy issues have been extensively investigated in the context of traditional ANNs, they remain largely unexplored in neuromorphic architectures, with little work dedicated to their privacy-preserving properties. In this paper, we investigate whether SNNs have inherent privacy-preserving advantages. Specifically, we examine SNNs’ privacy properties through the lens of MIAs across diverse datasets, in comparison with ANNs. We explore the impact of different learning algorithms (surrogate gradient and evolutionary learning), programming frameworks (snnTorch, TENNLab, and LAVA), and various parameters on the resilience of SNNs against MIAs. Our experiments reveal that SNNs demonstrate consistently superior privacy preservation compared to ANNs, with evolutionary algorithms further enhancing their resilience. For example, on the CIFAR-10 dataset, SNNs achieve an AUC as low as 0.59 compared to 0.82 for ANNs, and on CIFAR-100, SNNs maintain a low AUC of 0.58, whereas ANNs reach 0.88. Furthermore, we investigate the privacy-utility trade-off through Differentially Private Stochastic Gradient Descent (DPSGD), observing that SNNs incur a notably lower accuracy drop than ANNs under equivalent privacy constraints.
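To make the reported metric concrete, the following is a minimal, hypothetical sketch of a loss-threshold membership inference attack and its AUC computation in Python. It is not the authors' implementation; it only assumes the standard MIA setup the abstract describes: the attacker scores each sample by the trained model's per-sample loss (training-set members tend to have lower loss), and an attack AUC near 0.5 means the model leaks little membership information, as reported for the SNNs.

```python
# Minimal sketch of a loss-based membership inference attack (MIA).
# Illustrative reconstruction only, not the paper's code: members are
# scored by negated per-sample loss, and the attack's ROC AUC measures
# how well that score separates members from non-members (0.5 = chance).
import numpy as np
from sklearn.metrics import roc_auc_score

def mia_auc(member_losses: np.ndarray, nonmember_losses: np.ndarray) -> float:
    """AUC of a loss-based MIA: lower loss => higher membership score."""
    scores = -np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones_like(member_losses),
                             np.zeros_like(nonmember_losses)])
    return roc_auc_score(labels, scores)

# Toy illustration with synthetic loss distributions (hypothetical numbers):
rng = np.random.default_rng(0)
members = rng.gamma(shape=1.0, scale=0.3, size=1000)     # training-set losses
nonmembers = rng.gamma(shape=1.0, scale=0.6, size=1000)  # held-out losses
print(f"attack AUC: {mia_auc(members, nonmembers):.2f}")
```

In this framing, the CIFAR-10 figures quoted above (AUC 0.59 for SNNs vs. 0.82 for ANNs) reflect how separable the member and non-member loss distributions are for each architecture.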

Original language: English
Pages (from-to): 243–257
Number of pages: 15
Journal: Proceedings on Privacy Enhancing Technologies (PoPETs)
Publication status: Published - 01 Apr 2025

Keywords

  • neuromorphic
  • neuromorphic architectures
  • privacy-preserving

