Abstract

Teaser Image

Autonomous vehicles that navigate in open-world environments may encounter previously unseen object classes. However, most existing LiDAR panoptic segmentation models rely on closed-set assumptions, failing to detect unknown object instances. In this work, we propose ULOPS, an uncertainty-guided open-set panoptic segmentation framework that leverages Dirichlet-based evidential learning to model predictive uncertainty. Our architecture incorporates separate decoders for semantic segmentation with uncertainty estimation, embedding with prototype association, and instance center prediction. During inference, we leverage uncertainty estimates to identify and segment unknown instances. To strengthen the model’s ability to differentiate between known and unknown objects, we introduce three uncertainty-driven loss functions: Uniform Evidence Loss encourages high uncertainty in unknown regions, Adaptive Uncertainty Separation Loss enforces a consistent difference in uncertainty estimates between known and unknown objects at a global scale, and Contrastive Uncertainty Loss refines this separation at the fine-grained level. To evaluate open-set performance, we extend benchmark settings on KITTI-360 and introduce a new open-set evaluation for nuScenes. Extensive experiments demonstrate that ULOPS consistently outperforms existing open-set LiDAR panoptic segmentation methods.

Technical Approach

Overview of our approach
Figure: Overview of the proposed ULOPS architecture. A shared stem and encoder map the LiDAR point cloud to a polar BEV representation, followed by three task-specific decoders for semantic segmentation with uncertainty estimation, instance-aware embeddings with prototypes, and class-agnostic instance centers.

Our framework is based on a backbone consisting of a stem that encodes the raw point cloud into a fixed-size 2D polar BEV representation, followed by a shared encoder that extracts high-level features. Next, we employ three decoders, each dedicated to a specific task: 1) a semantic segmentation decoder that generates semantic predictions along with uncertainty estimates, 2) an embedding decoder that learns instance-aware embeddings and prototypes, and 3) an instance center decoder that estimates class-agnostic object centers.
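To make the overall structure concrete, the following PyTorch sketch shows how three such decoders could sit on top of shared BEV features. The layer choices, channel sizes, class count, and softplus evidence activation are illustrative assumptions and not the released implementation; only the three-decoder layout and the Dirichlet-based uncertainty estimate follow the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ULOPSDecoders(nn.Module):
    """Minimal sketch of the three task-specific decoder heads.

    The shared stem/encoder producing `bev_features` is omitted; all layer
    choices and channel sizes here are illustrative assumptions.
    """

    def __init__(self, in_channels=128, num_classes=20, embed_dim=32):
        super().__init__()
        # Semantic decoder: per-class evidence for Dirichlet-based uncertainty.
        self.semantic_decoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 1),
        )
        # Embedding decoder: instance-aware embeddings for prototype association.
        self.embedding_decoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, embed_dim, 1),
        )
        # Instance center decoder: class-agnostic object center heatmap.
        self.center_decoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, bev_features):
        # Non-negative evidence -> Dirichlet concentration alpha = evidence + 1.
        evidence = F.softplus(self.semantic_decoder(bev_features))
        alpha = evidence + 1.0
        probs = alpha / alpha.sum(dim=1, keepdim=True)     # expected class probabilities
        uncertainty = alpha.shape[1] / alpha.sum(dim=1)    # evidential (vacuity) uncertainty
        embeddings = self.embedding_decoder(bev_features)
        centers = torch.sigmoid(self.center_decoder(bev_features))
        return probs, uncertainty, embeddings, centers
```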


During inference, we combine these outputs to yield the final open-set panoptic segmentation. We first use the uncertainty estimates to separate known and unknown regions, labeling high-uncertainty areas as potential unknown objects. In lower-uncertainty (i.e., known) regions, instance-aware embeddings are associated with instance center prototypes to segment thing objects, while majority voting over the semantic predictions assigns semantic classes to these class-agnostic instances. For high-uncertainty regions, embeddings are clustered to identify unknown objects. During training, we encourage higher uncertainty in unknown regions through a set of specialized losses: Uniform Evidence Loss, Contrastive Uncertainty Loss, and Adaptive Uncertainty Separation Loss. By jointly optimizing these losses alongside the core panoptic segmentation objectives, the network is better equipped to distinguish between known and unknown objects in LiDAR scenes.
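The sketch below illustrates one way these pieces could fit together: separating known and unknown regions by thresholding the uncertainty, clustering the embeddings of high-uncertainty points into unknown instances, and a plausible form of the Uniform Evidence Loss. The uncertainty threshold, the use of DBSCAN for clustering, and the KL-to-uniform-Dirichlet loss formulation are assumptions for illustration, not the paper's exact procedure; the prototype association for known regions is omitted.

```python
import torch
from sklearn.cluster import DBSCAN


def open_set_grouping(uncertainty, embeddings, tau=0.5, eps=0.5, min_samples=10):
    """Split points into known/unknown regions and group unknown points.

    uncertainty: (N,) per-point evidential uncertainty in [0, 1]
    embeddings:  (N, D) instance-aware embeddings
    tau, eps, min_samples are illustrative hyperparameters.
    """
    unknown_mask = uncertainty > tau          # high uncertainty -> potential unknown object
    unknown_ids = torch.full(uncertainty.shape, -1, dtype=torch.long,
                             device=uncertainty.device)
    if unknown_mask.any():
        # Cluster the embeddings of high-uncertainty points into unknown instances.
        feats = embeddings[unknown_mask].detach().cpu().numpy()
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
        unknown_ids[unknown_mask] = torch.from_numpy(labels).long().to(uncertainty.device)
    # Known (low-uncertainty) points are handled by prototype association (omitted).
    return ~unknown_mask, unknown_ids


def uniform_evidence_loss(alpha, unknown_mask):
    """Push Dirichlet parameters toward the uniform Dirichlet (alpha = 1) in
    unknown regions, i.e., encourage maximal uncertainty there.

    alpha: (N, K) per-point Dirichlet concentration parameters.
    This KL divergence to Dirichlet(1, ..., 1) is one plausible formulation.
    """
    alpha_u = alpha[unknown_mask]                         # (M, K)
    if alpha_u.numel() == 0:
        return alpha.new_zeros(())
    s = alpha_u.sum(dim=1, keepdim=True)
    k = torch.tensor(float(alpha_u.shape[1]), device=alpha.device)
    kl = (torch.lgamma(s).squeeze(1) - torch.lgamma(alpha_u).sum(dim=1)
          - torch.lgamma(k)
          + ((alpha_u - 1.0) * (torch.digamma(alpha_u) - torch.digamma(s))).sum(dim=1))
    return kl.mean()
```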

ULOPS Visualization Results

Code

For academic usage, a PyTorch-based implementation of this project can be found in our GitHub repository and is released under the GPLv3 license. For any commercial purpose, please contact the authors.

Publications

If you find our work useful, please consider citing our paper:

Rohit Mohan, Julia Hindel, Florian Drews, Claudius Gläser, Daniele Cattaneo, Abhinav Valada

Open-Set LiDAR Panoptic Segmentation Guided by Uncertainty-Aware Learning
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hangzhou, China, 2025.
(PDF) (BibTeX)

Authors

Rohit Mohan

University of Freiburg

Julia Hindel

University of Freiburg

Daniele Cattaneo

University of Freiburg

Abhinav Valada

University of Freiburg

Acknowledgment

This research was funded by Bosch Research as part of a collaboration between Bosch Research and the University of Freiburg on AI-based automated driving.