Description
Today, image acquisition in medical imaging is far more feasible, enabling longitudinal studies with excellent potential to reveal patterns of brain growth, deviations from normal developmental trajectories, disease progression, the effects of therapeutic intervention, and neurodegeneration in aging. Accurate and consistent segmentation of longitudinal image sequences is an essential processing step for understanding even subtle temporal changes in anatomy; the primary objective is to optimally exploit the inherent correlation among repeated scans of individual subjects for segmentation.

Depending on how images over time are correlated, we introduce three models with different levels of correlation assumptions. First, we assume that images over time are well aligned and present an intensity change model, which enforces the intensity contrast patterns among tissue classes over time to achieve more consistent segmentation. Next, we relax this assumption by integrating diffeomorphic registration with a novel linear appearance model, which eliminates the need for a preceding intensity change model; the jointly estimated appearance model parameters are then used for segmentation. Finally, to impose minimal constraints, assuming only that the images are correlated, and leveraging recent deep learning technology, we build a registration-free joint segmentation model that combines fully convolutional networks (for spatial, end-to-end pixel-wise segmentation) with recurrent neural networks (for temporal sequence-to-sequence modeling).

We demonstrate the feasibility of these new approaches through verification on synthetic data as well as a clinical longitudinal multimodal pediatric study with images spanning the age range from neonates to 24 months. The methodologies themselves, however, are generic with respect to application domains that require segmentation of serial image data.
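The structure of the third, registration-free model can be illustrated with a minimal sketch: per-pixel features from a convolutional stage feed a recurrent hidden state that is carried across timepoints, and a per-pixel softmax produces tissue-class probabilities at every timepoint. This is a hypothetical toy in NumPy, not the dissertation's implementation; all layer shapes, weights, and class counts are illustrative assumptions.

```python
# Toy sketch (hypothetical, not the dissertation's code) of joint longitudinal
# segmentation: a tiny "FCN-like" feature map plus an RNN-style per-pixel
# hidden state shared across timepoints, then a per-pixel softmax over classes.
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 8          # toy image size (assumed)
T = 3                # timepoints in the longitudinal sequence (assumed)
F = 4                # feature channels from the "FCN" stage (assumed)
C = 3                # tissue classes, e.g. WM / GM / CSF (assumed)

# Hypothetical "learned" weights, drawn at random for the sketch.
W_feat = rng.standard_normal((F, 1))   # 1x1 conv: intensity -> features
W_hid = rng.standard_normal((F, F))    # recurrent update of the hidden state
W_out = rng.standard_normal((C, F))    # per-pixel linear classifier

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def segment_sequence(images):
    """Jointly segment a T x H x W intensity sequence over time."""
    h = np.zeros((F, H, W))            # per-pixel recurrent hidden state
    probs = []
    for t in range(T):
        feat = np.tensordot(W_feat, images[t][None], axes=(1, 0))  # F x H x W
        h = np.tanh(feat + np.tensordot(W_hid, h, axes=(1, 0)))    # RNN step
        logits = np.tensordot(W_out, h, axes=(1, 0))               # C x H x W
        probs.append(softmax(logits, axis=0))
    return np.stack(probs)             # T x C x H x W class probabilities

seq = rng.standard_normal((T, H, W))   # synthetic stand-in for a scan sequence
p = segment_sequence(seq)
assert p.shape == (T, C, H, W)
assert np.allclose(p.sum(axis=1), 1.0)  # valid per-pixel distributions
```

Because the hidden state `h` persists across the loop, each timepoint's segmentation depends on all earlier scans, which is the sense in which the model exploits temporal correlation without any registration step.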