Conference paper, 2020

Deep disentangled representation learning of PET images for lymphoma outcome prediction

Abstract

Image feature extraction based on deep disentangled representation learning of PET images is proposed for predicting lymphoma treatment response. Our method encodes PET images into spatial representations and modality representations by jointly performing supervised tumor segmentation and image reconstruction. In this way, whole-image features (global features) as well as tumor-region features (local features) can be extracted without the labor-intensive procedure of manual tumor segmentation and handcrafted feature calculation. The learned global and local image features are then combined with several prognostic factors evaluated by physicians from clinical information, and used as input to an SVM classifier to predict the outcomes of lymphoma patients. In this study, data from 186 lymphoma patients were used to train and test the proposed model, which was compared with a traditional straightforward feature extraction method. The improved prediction results demonstrate the effectiveness of the proposed method for extracting prognosis-related features from PET images.
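The final prediction stage described above — concatenating learned global and local image features with physician-assessed prognostic factors and feeding them to an SVM — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensionalities, the number of clinical factors, and the random placeholder values are all assumptions; only the cohort size (186 patients) comes from the abstract.

```python
# Hypothetical sketch of the SVM-based outcome prediction stage.
# Feature values are random placeholders, NOT real patient data;
# feature dimensions and the number of clinical factors are assumed.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 186                                     # cohort size from the abstract

global_feats = rng.normal(size=(n_patients, 64))     # whole-image (global) features (assumed dim)
local_feats = rng.normal(size=(n_patients, 64))      # tumor-region (local) features (assumed dim)
clinical = rng.normal(size=(n_patients, 5))          # physician-evaluated prognostic factors (assumed count)

# Concatenate image features with clinical prognostic factors.
X = np.concatenate([global_feats, local_feats, clinical], axis=1)
y = rng.integers(0, 2, size=n_patients)              # binary treatment-outcome labels (placeholder)

# Standardize features, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(X.shape, scores.shape)
```

Standardizing before the SVM matters here because the image features and clinical factors live on different scales; cross-validation stands in for the train/test protocol the paper uses.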
No file deposited

Dates and versions

hal-02922441 , version 1 (26-08-2020)

Identifiers

  • HAL Id : hal-02922441 , version 1

Cite

Yu Guo, Pierre Decazes, Stéphanie Becker, Hua Li, Su Ruan. Deep disentangled representation learning of PET images for lymphoma outcome prediction. International Symposium on Biomedical Imaging (IEEE-ISBI), Apr 2020, Iowa, United States. ⟨hal-02922441⟩
