Accurate 3D object detection in real-world environments requires a large amount of high-quality annotated data. Acquiring such data is tedious and expensive, and the effort often must be repeated whenever a new sensor is adopted or the detector is deployed in a new environment.
We investigate a new scenario for constructing 3D object detectors: learning from the predictions of a nearby unit that is equipped with an accurate detector. For example, when a self-driving car enters a new area, it may learn from other traffic participants whose detectors have been optimized for that area. This setting is label-efficient, sensor-agnostic, and communication-efficient: nearby units only need to share their predictions with the ego agent (e.g., car).
Naively using the received predictions as ground truths to train the ego car's detector, however, leads to inferior performance. We systematically study the problem and identify viewpoint mismatches and mislocalization (due to synchronization and GPS errors) as the main causes, which unavoidably result in false positives, false negatives, and inaccurate pseudo labels.
We propose a distance-based curriculum: first learn from closer units with similar viewpoints, then progressively improve the quality of other units' predictions via self-training. We further demonstrate that an effective pseudo label refinement module can be trained with only a handful of annotated data, greatly reducing the amount of annotation needed to train an object detector.
We validate our approach on the recently released real-world collaborative driving dataset, using reference cars' predictions as pseudo labels for the ego car. Extensive experiments covering several scenarios (e.g., different sensors, detectors, and domains) demonstrate the effectiveness of our approach toward label-efficient learning of 3D perception from other units' predictions.
Mislocalized labels: Localization inaccuracies such as GPS errors and synchronization delays between agents are common in real-world applications. For example, a delay of just 0.1 seconds displaces a vehicle traveling at 60 mph by nearly three meters.
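A quick back-of-the-envelope check of that number (the constants below are just unit conversions, not values from the paper):

```python
# Displacement caused by a 0.1 s synchronization delay at 60 mph.
MPH_TO_MPS = 1609.344 / 3600   # one mile per hour in meters per second
speed = 60 * MPH_TO_MPS        # ~26.82 m/s
delay = 0.1                    # seconds of desynchronization
print(f"displacement: {speed * delay:.2f} m")  # -> displacement: 2.68 m
```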
Viewpoint-mismatched labels: The viewpoints of the two agents can differ significantly. An object visible to one agent might be occluded or out of range for the other, leading to false positives and false negatives in the received predictions.
Refining & Discovering Boxes for 3D Perception from Others’ Predictions (R&B-POP)!
The ego car first receives the reference car's predictions, which contain inherent noise. It refines their localization with the proposed label-efficient box ranker, and then creates high-quality pseudo labels via a distance-based curriculum for self-training (rough sketches of both steps follow below).
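For intuition, here is a minimal sketch of ranker-based refinement: jitter a received box into candidates, score each with a learned ranker, and keep the best one. All names and values here (refine_box_with_ranker, score_fn, num_candidates, the jitter sigmas) are illustrative assumptions; the paper's actual ranker design and training may differ.

```python
import numpy as np

def refine_box_with_ranker(box, score_fn, num_candidates=64,
                           xyz_sigma=0.5, yaw_sigma=0.1, rng=None):
    """Hypothetical sketch: perturb the received box, score each
    candidate with a ranker trained on a handful of annotated frames,
    and return the highest-scoring candidate.

    box:      (7,) array [x, y, z, l, w, h, yaw] in the ego frame.
    score_fn: callable mapping a (7,) box to a scalar quality score,
              e.g., a small network over the LiDAR points in the box.
    """
    rng = rng or np.random.default_rng(0)
    candidates = np.tile(box, (num_candidates, 1))
    candidates[:, :3] += rng.normal(0.0, xyz_sigma, (num_candidates, 3))
    candidates[:, 6] += rng.normal(0.0, yaw_sigma, num_candidates)
    candidates = np.vstack([box[None], candidates])  # keep the original too
    scores = np.array([score_fn(c) for c in candidates])
    return candidates[scores.argmax()]
```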
Check the paper for the details!
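And a similarly hedged sketch of the distance-based curriculum for self-training: early rounds keep only pseudo labels near the ego car, where the reference viewpoint is most similar, and later rounds widen the radius. The radius schedule (base_radius, step) is made up for illustration, not taken from the paper.

```python
import numpy as np

def curriculum_pseudo_labels(received_boxes, ego_position, round_idx,
                             base_radius=20.0, step=10.0):
    """Hypothetical sketch of the distance-based curriculum.

    received_boxes: (N, 7) boxes [x, y, z, l, w, h, yaw], already
                    refined and transformed into the ego frame.
    ego_position:   (3,) ego car position in the same frame.
    round_idx:      current self-training round (0, 1, 2, ...).
    """
    radius = base_radius + step * round_idx  # widen over rounds
    dists = np.linalg.norm(received_boxes[:, :3] - ego_position, axis=1)
    return received_boxes[dists <= radius]
```

In a self-training loop, the detector would be retrained each round on the boxes kept by this filter, so supervision starts where the labels are most reliable and gradually expands outward.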
With the proposed R&B-POP, we significantly close the gap to the upper bound that directly uses the ego car's ground-truth labels.
The quality of the pseudo labels improves step by step with the proposed R&B-POP: our ranker successfully fixes mislocalization errors, and the distance-based curriculum further discovers new objects from the ego car's view.
@article{yoo2024rnbpop,
title={Learning 3D Perception from Others' Predictions},
author={Yoo, Jinsu and Feng, Zhenyang and Pan, Tai-Yu and Sun, Yihong and Phoo, Cheng Perng and Chen, Xiangyu and Campbell, Mark and Weinberger, Kilian Q. and Hariharan, Bharath and Chao, Wei-Lun},
journal={arXiv preprint arXiv:2410.02646},
year={2024}
}