Machine learning for radiomics-based multi-modality and multi-parametric modeling

Lise Wei, Sarah Osman, Mathieu Hatt, Issam El Naqa

Research output: Contribution to journal › Article

Abstract

Due to recent developments in both hardware and software technologies, multimodality medical imaging techniques have been increasingly applied in clinical practice and research studies. Previously, the application of multimodality imaging in oncology was mainly related to combining anatomical and functional imaging to improve diagnostic specificity and/or target definition, such as positron emission tomography/computed tomography (PET/CT) and single-photon emission computed tomography (SPECT)/CT. More recently, the fusion of various images, such as multi-parametric magnetic resonance imaging (MRI) sequences, different PET tracer images, and PET/MRI, has become more prevalent, enabling more comprehensive characterization of the tumor phenotype. In order to take advantage of these valuable multimodal data for clinical decision making using radiomics, we present two ways to implement multimodal image analysis, namely radiomics-based (handcrafted features) and deep learning-based (machine-learned features) methods. Applying advanced machine (deep) learning algorithms across multi-modality images has shown better results compared with single-modality modeling for prognosis and/or prediction of clinical outcomes. This holds great potential for providing more personalized treatment for patients and achieving better outcomes.
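The first of the two approaches mentioned in the abstract, handcrafted-feature radiomics across modalities, can be illustrated with a minimal sketch. This is a hypothetical example on synthetic arrays, not the authors' pipeline: in practice one would extract standardized features (e.g. with a dedicated radiomics library) from co-registered, segmented images, but the feature-level fusion idea, computing per-modality feature vectors and concatenating them before modeling, is the same.

```python
import numpy as np

def first_order_features(image):
    """A few simple handcrafted (first-order) radiomic features."""
    vals = image.ravel()
    return np.array([vals.mean(), vals.std(), np.percentile(vals, 90)])

# Synthetic stand-ins for co-registered PET and CT tumor regions (hypothetical data)
rng = np.random.default_rng(0)
pet = rng.random((32, 32))
ct = rng.random((32, 32))

# Feature-level (early) fusion: concatenate per-modality feature vectors
# into one multimodal descriptor that a downstream model can use.
fused = np.concatenate([first_order_features(pet), first_order_features(ct)])
print(fused.shape)  # one 6-dimensional multimodal feature vector
```

In the deep learning variant, the concatenation step would instead operate on features learned by modality-specific network branches rather than handcrafted statistics.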
Original language: English
Journal: The Quarterly Journal of Nuclear Medicine and Molecular Imaging
DOI: 10.23736/S1824-4785.19.03213-8
Publication status: Published - 13 Sep 2019

Fingerprint

Magnetic Resonance Imaging
Diagnostic Imaging
Software
Technology
Phenotype
Research
Neoplasms
Machine Learning
Therapeutics
Single Photon Emission Computed Tomography Computed Tomography
Positron Emission Tomography Computed Tomography
Clinical Decision-Making

Cite this

@article{5b9d2c6f0f134fd1868951bfcdf9d02d,
title = "Machine learning for radiomics-based multi-modality and multi-parametric modeling",
abstract = "Due to recent developments in both hardware and software technologies, multimodality medical imaging techniques have been increasingly applied in clinical practice and research studies. Previously, the application of multimodality imaging in oncology was mainly related to combining anatomical and functional imaging to improve diagnostic specificity and/or target definition, such as positron emission tomography/computed tomography (PET/CT) and single-photon emission computed tomography (SPECT)/CT. More recently, the fusion of various images, such as multi-parametric magnetic resonance imaging (MRI) sequences, different PET tracer images, and PET/MRI, has become more prevalent, enabling more comprehensive characterization of the tumor phenotype. In order to take advantage of these valuable multimodal data for clinical decision making using radiomics, we present two ways to implement multimodal image analysis, namely radiomics-based (handcrafted features) and deep learning-based (machine-learned features) methods. Applying advanced machine (deep) learning algorithms across multi-modality images has shown better results compared with single-modality modeling for prognosis and/or prediction of clinical outcomes. This holds great potential for providing more personalized treatment for patients and achieving better outcomes.",
author = "Lise Wei and Sarah Osman and Mathieu Hatt and {El Naqa}, Issam",
year = "2019",
month = "9",
day = "13",
doi = "10.23736/S1824-4785.19.03213-8",
language = "English",
journal = "The Quarterly Journal of Nuclear Medicine and Molecular Imaging",
issn = "1824-4661",
}

Machine learning for radiomics-based multi-modality and multi-parametric modeling. / Wei, Lise; Osman, Sarah; Hatt, Mathieu; El Naqa, Issam.

In: The Quarterly Journal of Nuclear Medicine and Molecular Imaging, 13.09.2019.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Machine learning for radiomics-based multi-modality and multi-parametric modeling

AU - Wei, Lise

AU - Osman, Sarah

AU - Hatt, Mathieu

AU - El Naqa, Issam

PY - 2019/9/13

Y1 - 2019/9/13

N2 - Due to recent developments in both hardware and software technologies, multimodality medical imaging techniques have been increasingly applied in clinical practice and research studies. Previously, the application of multimodality imaging in oncology was mainly related to combining anatomical and functional imaging to improve diagnostic specificity and/or target definition, such as positron emission tomography/computed tomography (PET/CT) and single-photon emission computed tomography (SPECT)/CT. More recently, the fusion of various images, such as multi-parametric magnetic resonance imaging (MRI) sequences, different PET tracer images, and PET/MRI, has become more prevalent, enabling more comprehensive characterization of the tumor phenotype. In order to take advantage of these valuable multimodal data for clinical decision making using radiomics, we present two ways to implement multimodal image analysis, namely radiomics-based (handcrafted features) and deep learning-based (machine-learned features) methods. Applying advanced machine (deep) learning algorithms across multi-modality images has shown better results compared with single-modality modeling for prognosis and/or prediction of clinical outcomes. This holds great potential for providing more personalized treatment for patients and achieving better outcomes.

AB - Due to recent developments in both hardware and software technologies, multimodality medical imaging techniques have been increasingly applied in clinical practice and research studies. Previously, the application of multimodality imaging in oncology was mainly related to combining anatomical and functional imaging to improve diagnostic specificity and/or target definition, such as positron emission tomography/computed tomography (PET/CT) and single-photon emission computed tomography (SPECT)/CT. More recently, the fusion of various images, such as multi-parametric magnetic resonance imaging (MRI) sequences, different PET tracer images, and PET/MRI, has become more prevalent, enabling more comprehensive characterization of the tumor phenotype. In order to take advantage of these valuable multimodal data for clinical decision making using radiomics, we present two ways to implement multimodal image analysis, namely radiomics-based (handcrafted features) and deep learning-based (machine-learned features) methods. Applying advanced machine (deep) learning algorithms across multi-modality images has shown better results compared with single-modality modeling for prognosis and/or prediction of clinical outcomes. This holds great potential for providing more personalized treatment for patients and achieving better outcomes.

U2 - 10.23736/S1824-4785.19.03213-8

DO - 10.23736/S1824-4785.19.03213-8

M3 - Article

JO - The Quarterly Journal of Nuclear Medicine and Molecular Imaging

JF - The Quarterly Journal of Nuclear Medicine and Molecular Imaging

SN - 1824-4661

ER -