Low-light object detection faces significant challenges due to the substantial domain shift between normal-light and low-light conditions. Prior works often enhance low-light images before detection, but this preprocessing can introduce artifacts that degrade detection performance since it focuses on human visual quality rather than task-specific features. Other methods incorporate illumination-aware modules for low-light feature learning, yet their scalability is limited by the scarcity of annotated low-light datasets. To overcome these limitations, we propose a unified Dual-Level Domain Adaptation (DLDA) framework that jointly addresses pixel-level and feature-level domain discrepancies for robust low-light object detection. Specifically, we introduce a luminance-aware contrastive translation module that synthesizes target-style low-light images while preserving structural details, enabling effective pixel-level adaptation. Building on this, we further design a multi-scale conditional adversarial alignment strategy that enforces semantic consistency across feature hierarchies to enhance domain-invariant feature extraction. Extensive experiments on multiple low-light detection benchmarks demonstrate that DLDA achieves state-of-the-art performance, exhibiting strong robustness and generalization.
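To make the feature-level alignment idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of the adversarial objective behind domain alignment: a domain discriminator is trained to tell source (normal-light) features from target (low-light) features, while the feature extractor is trained with the reversed gradient so that the two distributions become indistinguishable. A toy linear discriminator on two-dimensional features stands in for the paper's multi-scale conditional discriminators.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def discriminator(feat, weights, bias=0.0):
    """Toy linear discriminator: estimated P(domain = source | feat)."""
    return sigmoid(sum(w * f for w, f in zip(weights, feat)) + bias)

def domain_adv_loss(source_feats, target_feats, weights):
    """Binary cross-entropy the discriminator minimizes.

    Under gradient reversal, the feature extractor receives the
    negated gradient of this same loss, pushing source and target
    features toward a shared, domain-invariant distribution.
    """
    loss = 0.0
    for f in source_feats:
        loss += -math.log(discriminator(f, weights))        # source label = 1
    for f in target_feats:
        loss += -math.log(1.0 - discriminator(f, weights))  # target label = 0
    return loss / (len(source_feats) + len(target_feats))

# Well-separated (poorly aligned) features: the discriminator wins easily,
# so its loss is small and a large reversed gradient reaches the extractor.
src = [[2.0, 2.0]]
tgt = [[-2.0, -2.0]]
w = [1.0, 1.0]
print(domain_adv_loss(src, tgt, w))
```

In DLDA this adversarial game is applied across several feature scales and conditioned on semantic predictions, but the core minimax objective sketched here is the same.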