TY - GEN
T1 - Hardware support for trustworthy machine learning: a survey
AU - Islam, Md Shohidul
AU - Alouani, Ihsen
AU - Khasawneh, Khaled N.
PY - 2024/5/16
Y1 - 2024/5/16
N2 - Machine Learning (ML) models are used in an increasing number of applications as they continue to deliver state-of-the-art performance across many areas, including computer vision, natural language processing (NLP), robotics, autonomous driving, and healthcare. While rapid progress is being made in all aspects of ML development and deployment, there is rising concern about the trustworthiness of these models, especially from the security and privacy perspectives. Several attacks that jeopardize ML models’ integrity (e.g., adversarial attacks) and confidentiality (e.g., membership inference attacks) have been investigated in the literature. This, in turn, has triggered substantial work to protect ML models and advance their trustworthiness. Defenses generally act on the input data, the objective function, or the network structure to mitigate adversarial effects. However, these proposed defenses require substantial changes to the model architecture or the retraining procedure, or incur additional input-processing overheads. In addition, these defenses often have high power and computational demands, which makes them challenging to deploy in embedded systems and edge devices. To address the need for robust ML at acceptable overheads, recent works have investigated hardware-based solutions to enhance ML security and privacy. In this paper, we summarize recent works in the area of hardware support for trustworthy ML. In addition, we provide guidelines for future research in the area by identifying open problems that remain to be addressed.
KW - Surveys
KW - Privacy
KW - Computational modeling
KW - Machine learning
KW - Medical services
KW - Linear programming
KW - Natural language processing
DO - 10.1109/ISQED60706.2024.10528373
M3 - Conference contribution
SN - 9798350309287
T3 - International Symposium on Quality Electronic Design (ISQED): proceedings
BT - 2024 25th International Symposium on Quality Electronic Design (ISQED): proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 25th International Symposium on Quality Electronic Design (ISQED)
Y2 - 3 April 2024 through 5 April 2024
ER -