Deep unrolling models (DUMs) have shown great potential in sparse-view CT reconstruction by combining iterative optimization with deep learning. However, most DUMs insufficiently account for the physical degradation introduced by sparse-view imaging, leading to slow convergence and persistent artifacts. To address this, we propose PAUM, a Physics-Aware Accelerated Unrolling Model that explicitly incorporates CT imaging physics into the iterative reconstruction. PAUM introduces a Dual-Domain Physics-Aware Extrapolation (DDPE) module. By modeling degradations in both domains, it performs row-wise extrapolation in the sinogram domain to improve recovery of missing views, and pixel-wise extrapolation in the image domain to address the spatially variant degradation caused by incomplete backprojection. This physics-aware extrapolation aligns the optimization dynamics with the underlying physical imaging degradation, significantly accelerating convergence. We further develop a lightweight Block-Attention Deformable Regularization Network (BDRN) that leverages deformable convolutions and block-wise attention to model the spatially variant, structured physical characteristics of artifacts. This enables spatially adaptive regularization of the extrapolated results, effectively improving reconstruction quality. Extensive experiments demonstrate that PAUM achieves over 1 dB PSNR improvement compared to state-of-the-art methods while reducing the iteration count by 50%. Code will be released.
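To make the dual-domain extrapolation idea concrete, the following is a minimal NumPy sketch of a Nesterov-style extrapolation step with spatially varying weights: per-row weights in the sinogram domain and per-pixel weights in the image domain. The function name, the weight parameterization, and the toy shapes are all hypothetical illustrations, not the authors' actual DDPE implementation.

```python
import numpy as np

def dual_domain_extrapolation(s_k, s_prev, x_k, x_prev, alpha_row, alpha_pix):
    """Hypothetical sketch of dual-domain physics-aware extrapolation.

    s_k, s_prev : sinogram iterates, shape (n_views, n_dets)
    x_k, x_prev : image iterates, shape (H, W)
    alpha_row   : row-wise extrapolation weights, shape (n_views, 1),
                  modeling per-view degradation in the sinogram domain
    alpha_pix   : pixel-wise extrapolation weights, shape (H, W),
                  modeling spatially variant degradation from
                  incomplete backprojection
    """
    # Nesterov-style momentum, but with spatially varying weights
    # instead of a single scalar step size.
    s_ext = s_k + alpha_row * (s_k - s_prev)  # row-wise (sinogram domain)
    x_ext = x_k + alpha_pix * (x_k - x_prev)  # pixel-wise (image domain)
    return s_ext, x_ext

# Toy example with made-up geometry (60 views, 128 detector bins, 64x64 image).
rng = np.random.default_rng(0)
n_views, n_dets, H, W = 60, 128, 64, 64
s_prev, s_k = rng.standard_normal((2, n_views, n_dets))
x_prev, x_k = rng.standard_normal((2, H, W))
alpha_row = np.linspace(0.1, 0.9, n_views)[:, None]  # stronger push on later views
alpha_pix = rng.uniform(0.0, 1.0, size=(H, W))
s_ext, x_ext = dual_domain_extrapolation(s_k, s_prev, x_k, x_prev, alpha_row, alpha_pix)
```

Setting the weights to a single scalar recovers ordinary momentum extrapolation; the claimed acceleration comes from letting the weights follow the view-wise and pixel-wise degradation pattern rather than being uniform.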
