Longitudinal autoencoder for multi-modal disease progression modelling

Raphael Couronne 1, 2 Maxime Louis 1, 2 Stanley Durrleman 1, 2
1 ARAMIS - Algorithms, models and methods for images and signals of the human brain
SU - Sorbonne Université, Inria de Paris, ICM - Institut du Cerveau et de la Moelle Épinière (Brain and Spine Institute)
Abstract: Imaging modalities and clinical measurements, as well as their progression over time, can be seen as heterogeneous observations of the same underlying disease process. Analyzing sequences of multi-modal observations, where not all modalities are present at each visit, is a challenging task. In this paper, we propose a multi-modal autoencoder for longitudinal data. The sequence of observations for each modality is encoded by a recurrent network into a latent variable. The variables for the different modalities are then fused into a common variable which describes a linear trajectory in a low-dimensional latent space. This latent space is mapped into the multi-modal observation space using separate decoders for each modality. We first illustrate the stability of the proposed model through simple scalar experiments. Then, we illustrate how information from one modality can be used to refine predictions about the future using the learned autoencoder. Finally, we apply this approach to the prediction of future MRI for Alzheimer's patients.
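The abstract's pipeline (per-modality encoders, fusion into a common latent pair, a linear latent trajectory, per-modality decoders) can be sketched in a few lines. This is a hypothetical illustration only: the linear encoders standing in for the recurrent networks, the averaging fusion rule, the dimensions, and all names (`encode_modality`, `fuse`, `decode`, `MODALITY_DIMS`) are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D_LATENT = 2                                # dimension of the shared latent space (assumed)
MODALITY_DIMS = {"mri": 5, "clinical": 3}   # hypothetical per-modality feature sizes

# Hypothetical linear encoders/decoders standing in for the learned networks.
enc = {m: rng.standard_normal((2 * D_LATENT, d)) for m, d in MODALITY_DIMS.items()}
dec = {m: rng.standard_normal((d, D_LATENT)) for m, d in MODALITY_DIMS.items()}

def encode_modality(name, seq):
    """Stand-in for the recurrent encoder: summarize a (T, d) visit sequence
    into a latent (position p, velocity v) pair describing the line p + t*v."""
    h = seq.mean(axis=0)          # crude sequence summary (an RNN in the paper)
    z = enc[name] @ h
    return z[:D_LATENT], z[D_LATENT:]

def fuse(latents):
    """Fuse per-modality latents into one common (p, v) pair.
    Averaging is an assumption; the paper only says the variables are fused."""
    ps, vs = zip(*latents)
    return np.mean(ps, axis=0), np.mean(vs, axis=0)

def decode(name, p, v, t):
    """Decode the linear latent trajectory at time t back into one modality."""
    return dec[name] @ (p + t * v)

# Usage: two modalities observed over 4 visits; predict MRI features at a later time.
obs = {m: rng.standard_normal((4, d)) for m, d in MODALITY_DIMS.items()}
p, v = fuse([encode_modality(m, x) for m, x in obs.items()])
future_mri = decode("mri", p, v, t=5.0)
print(future_mri.shape)   # (5,)
```

Because both modalities contribute to the fused `(p, v)`, the clinical observations influence the future MRI prediction, which is the cross-modal refinement the abstract describes.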

Cited literature: 12 references

Contributor: Raphael Couronne
Submitted on: Wednesday, April 17, 2019 - 11:29:13 AM
Last modification on: Tuesday, April 30, 2019 - 3:41:48 PM


Files produced by the author(s)


  • HAL Id: hal-02090886, version 2


Raphael Couronne, Maxime Louis, Stanley Durrleman. Longitudinal autoencoder for multi-modal disease progression modelling. 2019. ⟨hal-02090886v2⟩


