Abstract
Deep Neural Networks (DNNs) handle compute- and data-intensive tasks and often rely on accelerators such as Resistive-switching Random-access Memory (RRAM) crossbars for energy-efficient in-memory computation. Although RRAM's inherent non-idealities cause deviations in DNN outputs, this study turns that weakness into a strength. By leveraging RRAM non-idealities, it enhances privacy protection against Membership Inference Attacks (MIAs), which reveal private information about the training data. The non-idealities disrupt the features that MIAs exploit, increasing model robustness and exposing a privacy-accuracy tradeoff. Empirical results with four MIAs and DNNs trained on different datasets demonstrate a significant reduction in privacy leakage at a minor accuracy cost (e.g., up to 2.8% for ResNet-18 on CIFAR-100).
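To make the mechanism concrete, the sketch below (in Python, assuming PyTorch) models crossbar non-idealities as multiplicative Gaussian noise on the weights of a trained toy classifier and measures how a simple confidence-threshold MIA degrades as the noise grows. The network, synthetic data, noise model, and threshold `tau` are illustrative assumptions, not the paper's setup or results.

```python
# Illustrative sketch only (not the authors' implementation): a toy MLP,
# synthetic data, and multiplicative Gaussian noise standing in for RRAM
# conductance variation, probed with a confidence-threshold MIA.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def make_data(n=512, d=32, classes=4):
    """Synthetic stand-ins for the private (member) and non-member sets."""
    return torch.randn(n, d), torch.randint(0, classes, (n,))

x_mem, y_mem = make_data()   # "training" (member) samples
x_non, y_non = make_data()   # samples the model never saw

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Deliberately overfit on the member set so the confidence gap that
# MIAs exploit actually shows up in this toy setting.
for _ in range(500):
    opt.zero_grad()
    F.cross_entropy(model(x_mem), y_mem).backward()
    opt.step()

def add_rram_noise(net, sigma):
    """Crude stand-in for crossbar non-idealities: perturb every weight by
    multiplicative Gaussian noise with relative standard deviation sigma."""
    noisy = copy.deepcopy(net)
    with torch.no_grad():
        for p in noisy.parameters():
            p.mul_(1.0 + sigma * torch.randn_like(p))
    return noisy

@torch.no_grad()
def max_confidence(net, x):
    return F.softmax(net(x), dim=1).max(dim=1).values

@torch.no_grad()
def mia_balanced_accuracy(net, tau=0.9):
    """Confidence-threshold MIA: guess 'member' when max softmax > tau.
    A score of 0.5 means the attacker does no better than random guessing."""
    tp = (max_confidence(net, x_mem) > tau).float().mean()
    tn = (max_confidence(net, x_non) <= tau).float().mean()
    return 0.5 * (tp + tn)

@torch.no_grad()
def task_accuracy(net):
    return (net(x_mem).argmax(dim=1) == y_mem).float().mean()

# Stronger non-idealities should push MIA accuracy toward 0.5 while
# gradually costing task accuracy (the privacy-accuracy tradeoff).
for sigma in (0.0, 0.05, 0.1, 0.2):
    noisy = add_rram_noise(model, sigma)
    print(f"sigma={sigma:.2f}  task acc={task_accuracy(noisy):.3f}  "
          f"MIA acc={mia_balanced_accuracy(noisy):.3f}")
```

Multiplicative (rather than additive) noise is used here only because conductance variation in a crossbar typically scales with the programmed value; any noise model that perturbs the stored weights would serve the same illustrative purpose.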
Original language | English |
---|---|
Title of host publication | Design, Automation & Test in Europe Conference & Exhibition (DATE 2024): proceedings |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Number of pages | 2 |
ISBN (Electronic) | 9783981926385 |
ISBN (Print) | 9798350348606 |
Publication status | Published - 10 Jun 2024 |
Event | Design, Automation and Test in Europe Conference 2024, Valencia, Spain, 25 Mar 2024 → 27 Mar 2024 (https://www.date-conference.com/) |
Publication series
Name | Design, Automation & Test in Europe Conference & Exhibition: proceedings |
---|---|
ISSN (Print) | 1530-1591 |
ISSN (Electronic) | 1558-1101 |
Conference
Conference | Design, Automation and Test in Europe Conference 2024 |
---|---|
Abbreviated title | DATE 2024 |
Country/Territory | Spain |
City | Valencia |
Period | 25/03/2024 → 27/03/2024 |
Internet address | https://www.date-conference.com/ |
Keywords
- harnessing ML privacy by design
- crossbar array non-idealities