The challenge of accelerated MRI reconstruction lies in recovering high-quality images from undersampled k-space. Recently, the selective state space model (Mamba) has shown promising results on various tasks, balancing a global receptive field with computational efficiency and shedding new light on MRI reconstruction. However, existing approaches apply Mamba to vision tasks by directly flattening 2D images according to spatial position, failing to preserve and exploit the content properties of the image. In this paper, we posit that the key to unlocking Mamba's full potential for MRI reconstruction lies in content-aware sequence modeling. We investigate two fundamental challenges: (1) how to preserve semantic information when converting 2D images into 1D sequences, and (2) how to effectively identify and recover crucial high-frequency textures. To this end, we introduce CAM, a novel framework that shifts Mamba-based MRI reconstruction from position-based to content-aware sequence modeling. Specifically, we introduce three modules: (1) the Semantic Preservation Scanning Module (SPSM) introduces learnable clustering centers to group similar pixels, establishing a semantics-preserving sequence; (2) the Texture Extraction Scanning Module (TESM) acts as a differentiable local texture descriptor that estimates crucial high-frequency information, forming a texture-emphasizing sequence; (3) the Texture Enhancement Mamba Module (TEMM) further modulates the semantic sequence with texture-informed system matrices derived from the texture sequence, yielding sequential representations that are both context- and texture-aware. With these enhancements, CAM significantly outperforms state-of-the-art methods across various datasets and under-sampling masks. Code will be made available.
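To make the core idea of content-aware scanning concrete, the sketch below contrasts position-based flattening (plain raster order) with a content-based ordering: pixels are grouped by a toy 1-D k-means over intensities and then emitted cluster by cluster, so similar content becomes contiguous in the 1-D sequence. This is only an illustration of the general principle; it is not the paper's SPSM, and the function `content_aware_order` and its clustering scheme are hypothetical simplifications (the actual module uses learnable clustering centers trained end-to-end).

```python
import numpy as np

def content_aware_order(image, n_clusters=2, n_iters=10):
    """Reorder pixels by content cluster instead of raster position (toy 1-D k-means)."""
    pixels = image.reshape(-1, 1).astype(float)
    # Deterministic init: spread centers evenly over the intensity range.
    centers = np.linspace(pixels.min(), pixels.max(), n_clusters).reshape(-1, 1)
    for _ in range(n_iters):
        # Assign each pixel to its nearest center (by intensity distance).
        assign = np.argmin(np.abs(pixels - centers.T), axis=1)
        # Update each center to the mean of its assigned pixels.
        for k in range(n_clusters):
            members = pixels[assign == k]
            if members.size:
                centers[k] = members.mean()
    # Stable sort groups pixels by cluster while keeping raster order within each group.
    order = np.argsort(assign, kind="stable")
    return order, assign

# Toy "image": alternating smooth (0) and textured (9) intensities.
image = np.array([[0., 9., 0.],
                  [9., 0., 9.],
                  [0., 9., 0.]])
order, assign = content_aware_order(image)
sequence = image.ravel()[order]  # similar pixels are now contiguous in the 1-D sequence
```

A raster scan of this image would alternate between the two intensity populations at every step; the content-aware ordering instead yields one contiguous run per cluster, which is the property that lets a state space model process semantically coherent segments.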