Determining the precise boundaries of skin lesions in dermoscopic images must cope with many challenges, such as the presence of hair, inconspicuous lesion edges, low contrast, and variability in the colour, texture, and shape of lesions. Recently developed deep-learning-based skin lesion segmentation algorithms are computationally expensive in terms of time and memory; running them requires a powerful GPU and high memory bandwidth, which are not available in dermoscopy devices. This paper therefore aims at achieving highly precise segmentation with minimal resources by presenting a lightweight and efficient generative adversarial network (GAN) model, called MobileGAN. Specifically, MobileGAN combines 1-D kernel factorized networks, position and channel attention, and multiscale aggregation mechanisms with a GAN model. The 1-D kernel factorized network reduces the computational cost of 2-D filtering. The position and channel attention modules enhance the discriminability between lesion and non-lesion feature representations in the spatial and channel dimensions. In addition, a multiscale block is introduced to aggregate coarse-to-fine features of the input images and to reduce the effect of artifacts. MobileGAN is evaluated on two public datasets: ISBI 2017 and ISIC 2018. Although MobileGAN has only 2.35 million parameters, the experimental results demonstrate that the proposed model is comparable to state-of-the-art skin lesion segmentation methods, achieving an accuracy of 97.61%, a Dice coefficient of 90.63%, and a Jaccard index of 81.98%. Moreover, MobileGAN runs at over 110 frames per second (FPS) on a single GTX 1080 Ti GPU, faster than traditional deep learning models, making it suitable for practical applications.
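The parameter saving behind 1-D kernel factorization can be illustrated with a short sketch: a k × k convolution is replaced by a k × 1 convolution followed by a 1 × k convolution. The channel count and kernel size below are illustrative assumptions, not the paper's exact layer configuration:

```python
def conv_params(c_in, c_out, kh, kw):
    """Weight count of a convolution layer (bias terms ignored)."""
    return c_in * c_out * kh * kw

def factorized_params(c_in, c_out, k):
    """Weight count of a (k x 1) conv followed by a (1 x k) conv."""
    return conv_params(c_in, c_out, k, 1) + conv_params(c_out, c_out, 1, k)

# Illustrative example: 64 -> 64 channels, 3x3 kernel.
k, c = 3, 64
standard = conv_params(c, c, k, k)      # 64 * 64 * 9 = 36864 weights
factorized = factorized_params(c, c, k) # 64 * 64 * 3 * 2 = 24576 weights
print(standard, factorized)
```

With equal input and output channel counts, the factorized pair costs 2/k of the standard layer's weights (2/3 for k = 3), and the gap widens for larger kernels.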
Publication status: Published - 01 Jul 2019
30 pages; submitted to Expert Systems with Applications