Feature coding has recently emerged as a key technique for efficient transmission of intermediate representations in distributed AI systems. Existing approaches largely follow a transform-quantization-entropy coding pipeline inherited from image and video coding, where the transform module is used to remove spatial structural redundancies in visual signals. However, our analysis indicates that such redundancies have already been removed during feature extraction, which reduces the necessity of the transform module. Building on this insight, we propose a new vector quantization-entropy coding pipeline that directly encodes the extracted features via a vector quantization module and an entropy model. The proposed transform-free framework jointly learns the quantization codebook and entropy model, enabling end-to-end optimization tailored to the inherent feature characteristics. Furthermore, the proposed method inherently avoids the computational complexity of the transform module. Experiments on features from diverse architectures and tasks demonstrate that our method achieves superior rate-distortion performance compared to transform-based baselines, while significantly reducing the encoding and decoding complexity.
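To make the vector quantization step of such a pipeline concrete, the following is a minimal, hypothetical sketch: each feature vector is mapped to the index of its nearest codeword, and the decoder reconstructs the feature by codebook lookup. The actual method jointly learns the codebook and an entropy model end-to-end; both the learning procedure and the entropy coder are omitted here, and all names and values are illustrative.

```python
def vq_encode(feature, codebook):
    """Return the index of the codeword closest to `feature` (squared L2)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda k: sq_dist(feature, codebook[k]))


def vq_decode(index, codebook):
    """Reconstruct the feature as its assigned codeword."""
    return codebook[index]


# Toy codebook with K=3 codewords of dimension D=2 (illustrative values).
codebook = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]
feature = [0.9, 1.2]

idx = vq_encode(feature, codebook)   # -> 1 (nearest codeword is [1.0, 1.0])
recon = vq_decode(idx, codebook)     # -> [1.0, 1.0]
```

Only the integer index `idx` would be entropy-coded and transmitted; since no transform precedes quantization, the encoder's per-vector cost is just the nearest-neighbor search over the codebook.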