Improved Multi-Scale Grid Rendering of Point Clouds for Radar Object Detection Networks

authored by
Daniel Köhler, Maurice Quach, Michael Ulrich, Frank Meinl, Bastian Bischoff, Holger Blume
Abstract

Architectures that first convert point clouds to a grid representation and then apply convolutional neural networks achieve good performance for radar-based object detection. However, the transfer from irregular point cloud data to a dense grid structure is often associated with a loss of information due to the discretization and aggregation of points. In this paper, we propose a novel architecture, multi-scale KPPillarsBEV, that aims to mitigate the negative effects of grid rendering. Specifically, we propose a novel grid rendering method, KPBEV, which leverages the descriptive power of kernel point convolutions to improve the encoding of local point cloud contexts during grid rendering. In addition, we propose a general multi-scale grid rendering formulation to incorporate multi-scale feature maps into convolutional backbones of detection networks with arbitrary grid rendering methods. We perform extensive experiments on the nuScenes dataset and evaluate the methods in terms of detection performance and computational complexity. The proposed multi-scale KPPillarsBEV architecture outperforms the baseline by 5.37% and the previous state of the art by 2.88% in Car AP4.0 (average precision for a matching threshold of 4 meters) on the nuScenes validation set. Moreover, the proposed single-scale KPBEV grid rendering improves the Car AP4.0 by 2.90% over the baseline while maintaining the same inference speed.
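The information loss the abstract refers to can be illustrated with a minimal sketch of naive grid rendering: points are discretized into BEV cells and their features aggregated (here by max-pooling), so points sharing a cell are collapsed. This is a generic illustration under assumed grid parameters, not the paper's KPBEV method, which instead encodes local point contexts with kernel point convolutions during this step.

```python
import numpy as np

def render_bev_grid(points, features, grid_size=(4, 4), cell=1.0):
    """Scatter per-point features into a dense BEV grid by max-pooling
    per cell (naive baseline rendering; grid_size and cell are
    illustrative assumptions)."""
    H, W = grid_size
    grid = np.zeros((H, W, features.shape[1]), dtype=features.dtype)
    # discretize x/y coordinates into integer cell indices
    ix = np.clip((points[:, 0] / cell).astype(int), 0, W - 1)
    iy = np.clip((points[:, 1] / cell).astype(int), 0, H - 1)
    for x, y, f in zip(ix, iy, features):
        # aggregation step: points in the same cell are collapsed,
        # which is where information is lost
        grid[y, x] = np.maximum(grid[y, x], f)
    return grid

# toy example: the first two points fall into the same cell,
# so only the larger feature value survives the max-pool
pts = np.array([[0.2, 0.3], [0.4, 0.1], [2.5, 3.7]])
feats = np.array([[1.0], [2.0], [3.0]])
bev = render_bev_grid(pts, feats)
```

In this sketch, `bev[0, 0]` holds only the max of the two co-located points' features; richer renderers such as the paper's KPBEV aim to retain more of that local structure.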

Organisation(s)
Institute of Microelectronic Systems
External Organisation(s)
Robert Bosch GmbH
Type
Conference contribution
Publication date
2023
Publication status
Published
Peer reviewed
Yes
ASJC Scopus subject areas
Computer Networks and Communications, Computer Vision and Pattern Recognition, Signal Processing, Instrumentation
Electronic version(s)
https://doi.org/10.48550/arXiv.2305.15836 (Access: Open)
https://doi.org/10.23919/FUSION52260.2023.10224223 (Access: Closed)