TY - GEN
T1 - ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks
AU - Wang, Xinshao
AU - Hua, Yang
AU - Kodirov, Elyor
AU - Clifton, David
AU - Robertson, Neil
PY - 2021/11/13
Y1 - 2021/11/13
N2 - To train robust deep neural networks (DNNs), we systematically study several target modification approaches, which include output regularisation, and self and non-self label correction (LC). Two key issues are discovered: (1) Self LC is the most appealing as it exploits the learner's own knowledge and requires no extra models; however, how to automatically decide the trust degree of a learner as training progresses is not well answered in the literature. (2) Some methods penalise low-entropy predictions while others reward them, prompting us to ask which is better. To resolve the first issue, building on two well-accepted propositions – deep neural networks learn meaningful patterns before fitting noise [3], and the minimum entropy regularisation principle [10] – we propose a novel end-to-end method named ProSelfLC, which is designed according to learning time and entropy. Specifically, given a data point, we progressively increase trust in its predicted label distribution over its annotated one if the model has been trained for long enough and the prediction is of low entropy (high confidence). For the second issue, based on ProSelfLC, we empirically demonstrate that it is better to redefine a meaningful low-entropy status and optimise the learner toward it. This serves as a defence of entropy minimisation. We demonstrate the effectiveness of ProSelfLC through extensive experiments in both clean and noisy settings.
AB - To train robust deep neural networks (DNNs), we systematically study several target modification approaches, which include output regularisation, and self and non-self label correction (LC). Two key issues are discovered: (1) Self LC is the most appealing as it exploits the learner's own knowledge and requires no extra models; however, how to automatically decide the trust degree of a learner as training progresses is not well answered in the literature. (2) Some methods penalise low-entropy predictions while others reward them, prompting us to ask which is better. To resolve the first issue, building on two well-accepted propositions – deep neural networks learn meaningful patterns before fitting noise [3], and the minimum entropy regularisation principle [10] – we propose a novel end-to-end method named ProSelfLC, which is designed according to learning time and entropy. Specifically, given a data point, we progressively increase trust in its predicted label distribution over its annotated one if the model has been trained for long enough and the prediction is of low entropy (high confidence). For the second issue, based on ProSelfLC, we empirically demonstrate that it is better to redefine a meaningful low-entropy status and optimise the learner toward it. This serves as a defence of entropy minimisation. We demonstrate the effectiveness of ProSelfLC through extensive experiments in both clean and noisy settings.
U2 - 10.1109/CVPR46437.2021.00081
DO - 10.1109/CVPR46437.2021.00081
M3 - Conference contribution
SN - 978-1-6654-4509-2
T3 - Conference on Computer Vision and Pattern Recognition (CVPR)
BT - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR): Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
ER -