DeepVM: A Deep Learning-based Approach with Automatic Feature Extraction for 2D Input Data Virtual Metrology

Marco Maggipinto, Alessandro Beghi, Sean McLoone, Gian Antonio Susto

Research output: Contribution to journal › Article › peer-review


Abstract

Industry 4.0 encapsulates methods, technologies, and procedures that transform data into informed decisions and added value in an industrial context. In this regard, technologies such as Virtual Metrology or Soft Sensing have gained much interest in the last two decades due to their ability to provide valuable knowledge for production purposes at limited added expense. However, these technologies have struggled to achieve wide-scale industrial adoption, largely due to the challenges associated with handling complex data structures and the feature extraction phase of model building. This phase is generally hand-engineered and based on specific domain knowledge, making it time-consuming, difficult to automate, and prone to loss of information, thus ultimately limiting portability. Moreover, in the presence of complex data structures, such as 2-dimensional input data, there are no established procedures for feature extraction. In this paper, we present a Deep Learning approach for Virtual Metrology, called DeepVM, that exploits semi-supervised feature extraction based on Convolutional Autoencoders. The proposed approach is demonstrated using a real-world semiconductor manufacturing dataset where the Virtual Metrology input data is 2-dimensional Optical Emission Spectrometry data. The feature extraction method is tested with different types of state-of-the-art autoencoders.
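To give a concrete picture of the kind of feature extractor described in the abstract, the following is a minimal sketch of a convolutional autoencoder for 2-dimensional input data, written in PyTorch. The input resolution (64x64), layer sizes, and latent dimension are illustrative assumptions and do not reflect the architecture reported in the paper; only the unsupervised reconstruction part is shown, whereas a semi-supervised setup would additionally combine this loss with a supervised regression loss on the labelled wafers.

```python
# Minimal sketch of a convolutional autoencoder for 2D input data
# (e.g. a time-by-wavelength OES map). All shapes and hyperparameters
# are illustrative assumptions, not the architecture used in the paper.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Encoder: compress a 1 x 64 x 64 input map into a low-dimensional feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # -> 16 x 32 x 32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # -> 32 x 16 x 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        # Decoder: reconstruct the input map from the latent features.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16),
            nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                               padding=1, output_padding=1),        # -> 16 x 32 x 32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2,
                               padding=1, output_padding=1),        # -> 1 x 64 x 64
        )

    def forward(self, x):
        z = self.encoder(x)       # latent features, usable as inputs to a VM regressor
        x_hat = self.decoder(z)   # reconstruction, used for the unsupervised loss
        return x_hat, z

# Usage: train with a reconstruction loss, then feed the latent features z
# to a downstream regression model that predicts the metrology target.
model = ConvAutoencoder()
x = torch.randn(8, 1, 64, 64)     # a batch of 2D input maps
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)
```

The design choice illustrated here is that the encoder output z replaces hand-engineered features: once trained, the same encoder can be reused across products or tools, which is the portability argument made in the abstract.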
Original language: English
Number of pages: 13
Journal: Journal of Process Control
Early online date: 16 Oct 2019
DOIs
Publication status: Early online date - 16 Oct 2019
