Achieving a balance between low parameter count, reduced FLOPs, and high accuracy and throughput remains a central challenge in neural network design. To address this, we propose the partial channel mechanism (PCM), which exploits the inherent redundancy in feature map channels. PCM divides feature map channels into multiple groups, each processed by a distinct operation such as convolution, attention, pooling, or identity mapping. Building on this, we introduce partial attention convolution (PATConv), a novel module that efficiently fuses convolution and visual attention within a unified framework. Our results demonstrate that PATConv can fully replace both standard convolution and visual attention modules, yielding significant reductions in parameters and FLOPs. Furthermore, PATConv enables three efficient visual attention variants: Partial Channel Attention, Partial Spatial Attention, and Partial Self-Attention. To further optimize the allocation of channel splits, we propose dynamic partial convolution (DPConv), which adaptively learns the optimal split ratio for each layer, achieving a better trade-off between speed and accuracy. By integrating PATConv and DPConv, we develop a new hybrid network family, PartialNet, which achieves superior top-1 accuracy and inference speed on ImageNet-1K and demonstrates strong performance on COCO detection and segmentation tasks.
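The core idea of the partial channel mechanism can be sketched in a few lines: split the channel dimension into groups and route only a fraction of channels through an expensive operation, passing the rest through unchanged. The sketch below is a framework-free illustration under our own assumptions; the split ratio, the stand-in operation, and the function names are hypothetical and not the paper's exact configuration.

```python
def attention_stub(channel):
    # Hypothetical stand-in for the expensive branch (convolution or
    # visual attention in PATConv): here it simply scales every element.
    return [[2 * x for x in row] for row in channel]

def partial_channel_op(feature_map, split_ratio=0.25):
    """Apply a PCM-style split: feature_map is a list of C channels
    (each a 2-D list). The first split_ratio fraction of channels is
    processed by the expensive op; the remainder is an identity
    mapping, exploiting redundancy across channels."""
    c = len(feature_map)
    k = int(c * split_ratio)                     # channels sent to the expensive op
    processed = [attention_stub(ch) for ch in feature_map[:k]]
    untouched = feature_map[k:]                  # identity group: zero extra cost
    return processed + untouched                 # concatenate groups back together

# Example: 4 channels of 2x2 maps; with split_ratio=0.25 only channel 0
# is processed, so parameters/FLOPs scale with k rather than C.
fmap = [[[1, 1], [1, 1]] for _ in range(4)]
out = partial_channel_op(fmap, split_ratio=0.25)
```

In a real network the identity group would be a slice of the tensor along the channel axis, and DPConv's contribution is to make `split_ratio` a learned, per-layer quantity rather than a fixed hyperparameter as assumed here.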
