
Radar4Motion: IMU-Free 4D Radar Odometry with Robust Dynamic Filtering and RCS-Weighted Matching

Journal: IEEE Transactions on Intelligent Vehicles
Manuscript ID: T-IV-24-07-4465
Manuscript Type: Full Length Article
Date Submitted by the Author: 08-Jul-2024


Soyeong Kim$^{1}$, Student Member, IEEE, Jiwon Seok$^{1}$, Student Member, IEEE, Jaehwan Lee$^{1}$, Student Member, IEEE, and Kichun Jo$^{1}$, Member, IEEE

Abstract

Accurate localization is crucial for autonomous vehicle navigation. In particular, there is active research in odometry, which estimates a vehicle's transformation over time. To achieve this, various environmental perception sensors are utilized, among which radar sensors stand out for their cost-effectiveness and robustness against adverse weather conditions compared to LiDAR sensors. However, traditional radar sensors provide only 2D spatial information ($x, y$, Doppler). Recent technical advancements have introduced 4D imaging radar, providing 3D spatial ($x, y, z$, Doppler) information. Nonetheless, radar data remains sparse and noisy when compared to LiDAR data, making it challenging to apply conventional LiDAR-based odometry algorithms. To address this challenge, we propose a radar point cloud odometry system that leverages Doppler, representing relative velocity, and Radar cross-section (RCS), a unique measure of an object's reflective ability. The IMU-free ego-motion estimation step utilizes Doppler data to generate an initial guess for registration. Additionally, to effectively extract meaningful points, we propose a polar-grid-based feature extraction algorithm utilizing RCS. To overcome the sparsity of radar point cloud data, we perform RCS-weighted accumulated scans-to-submap matching, where the weights for point cloud registration are modeled based on RCS values to achieve robust odometry. Our proposed algorithm was experimentally evaluated using the View-of-Delft dataset. The results demonstrate that our algorithm outperforms existing methods, providing superior odometry performance. Code is available at: https://github.com/ailab-konkuk/Radar4Motion

Index Terms-4D Imaging radar, odometry, radar processing, radar cross-section, registration.

I. INTRODUCTION

FOR safe autonomous driving, accurately estimating precise location is essential. In particular, odometry, one of the key components in achieving accurate localization, is a technique that estimates the transformation between consecutive time steps. Most odometry studies have focused on point cloud data from LiDAR (Light Detection and Ranging) sensors [1]-[5], which provide 3D information about the surrounding environment. However, LiDAR sensors are vulnerable to adverse weather conditions such as rain, snow, and fog. Additionally, their high cost presents a significant barrier to their widespread adoption in various applications. On the other hand, radar sensors offer advantages such as weather robustness and affordability, but they only provide 2D spatial information in terms of $x, y$, along with additional Doppler information.

Recent technological advancements have introduced 4D imaging radar sensors. Unlike conventional radars that only provide $x, y$, and Doppler information, 4D imaging radar sensors also offer height information, allowing them to capture rich point cloud data in three-dimensional space, similar to LiDAR sensors. Additionally, they provide relative velocity through Doppler frequency shift measurements and RCS (Radar cross-section) information. Despite efforts to offer abundant points, radar point cloud data still contains fewer points. In addition, radar data is more prone to clutter and noise than LiDAR data since it utilizes the radio frequency spectrum. Consequently, directly applying traditional LiDAR-based odometry algorithms to radar data can be challenging because they cannot handle the noise and sparsity characteristics effectively.
LiDAR odometry typically employs registration techniques on consecutive point cloud data to estimate transformations, including rotation and translation. Common registration algorithms include ICP (Iterative Closest Point) variants [6], such as point-to-point [7], point-to-plane [8], and generalized ICP [9], along with NDT (Normal Distributions Transform) [10]. Some adaptations of these algorithms extract feature points, such as planes and edges, from dense LiDAR point clouds before the registration process [1]. Providing an initial guess close to the ground truth is important to ensure that registration algorithms converge effectively. For this reason, IMU (Inertial Measurement Unit) sensors are commonly integrated in most research to estimate the initial transformation [11], [12]. In some cases, IMU sensors are not used in the odometry process; for example, in the study by [5], the initial guess is estimated based solely on velocity information derived from the estimated odometry, assuming constant-velocity motion.
Radar sensors utilizing FMCW (Frequency-Modulated Continuous Wave) technology, unlike LiDAR sensors based on time-of-flight (ToF), provide Doppler velocity information for the detected points. Due to this capability of directly measuring velocity, radar-based ego-velocity estimation research has advanced. For example, [13] demonstrated ego-motion estimation using the RANdom SAmple Consensus (RANSAC) algorithm and showed that single radar sensors could estimate angular velocity by utilizing vehicle non-holonomic constraints. Additionally, [14] proposed a three-point RANSAC-LSQ algorithm for 3D velocity estimation using 4D imaging radar in a three-dimensional environment. [15] developed a method to determine 3D rotation by applying non-holonomic constraints with a single sensor.
Using multiple sensors, radar-only rotation estimation can also be achieved. [16] extended the RANSAC-based least squares (LSQ) method to estimate both linear and angular velocities without motion constraints using only radar sensors. Moreover, [17] incorporated non-linear optimization into the RANSAC-LSQ-based velocity estimation to account for Doppler velocity measurements and target azimuth angle uncertainties, thereby enhancing estimation accuracy.
In the case of conventional radar-based odometry research, radar images have often been the input, requiring the application of image processing techniques [18]-[20]. Recently, odometry research related to 4D imaging radar has emerged. In the papers by [21], [22], a scan-to-submap matching technique is utilized instead of registering consecutive single scans to address the sparsity and noise characteristics of radar point clouds. This involves matching the current scan to a submap formed by accumulating several recent scans. However, these papers do not consider the unique measurement characteristics of radar sensors. Additionally, 4D imaging radar odometry research using deep learning techniques is actively being conducted [23], [24]. These studies show excellent performance on the View-of-Delft (VoD) dataset, which provides 4D imaging radar data [25]. However, deep learning-based studies may exhibit reduced performance in environments different from the training dataset. Moreover, the relatively few publicly available radar odometry datasets limit their general applicability.
We propose a robust odometry algorithm that leverages the unique characteristics of 4D imaging radar, specifically Doppler and RCS information. Firstly, we introduce an egomotion estimation technique called ME-LSQ (M-estimated LSQ), which utilizes Doppler information from a radar sensor without the need for an IMU sensor. This method enables robust ego-motion estimation and reduces the randomness of the 3D RANSAC-LSQ algorithm [14] by using previous estimates as the initial guess.
Secondly, we propose a polar-grid local RCS feature extraction technique to extract meaningful points from noisy radar point clouds. Radar sensors provide RCS information, a quantitative measure of how effectively objects reflect radar signals. Objects made of metallic materials or equipped with retro-reflective film typically exhibit higher RCS values [26]. These types of objects, such as lampposts, fences, bus stops, and bollards, often serve as prominent features in an on-road driving environment. In this context, RCS becomes highly valuable information for odometry. However, simply extracting high-RCS points from the entire point cloud may result in losing locally significant points within the sparse radar point cloud. Therefore, by voxelizing the radar point cloud space in polar coordinates and extracting locally high RCS points within each voxel, the local RCS filter technique allows us to extract meaningful points without losing spatially significant point information.
Finally, we propose the RCS-weighted accumulated scans-to-submap matching (RCSW-ASSM) algorithm. Sparse radar point clouds pose significant challenges for registration due to their noisiness and sparsity, which can lead to inaccurate correspondences. Therefore, we introduce the accumulated scans-to-submap matching (ASSM) method instead of the conventional scan-to-submap matching approach. We form accumulated scans using the transformation information estimated from ego-motion estimation. These accumulated scans are then matched with a submap generated using the transformation information refined through registration. In this process, after establishing correspondences during registration, we assign weights considering not only the distance to the sensor but also the RCS information of each point. This approach enables robust registration on sparse and noisy radar point clouds.
The contributions of our paper are as follows:
  • IMU-free ego-motion estimation: We propose the M-estimation-based least squares (ME-LSQ) algorithm for robust motion estimation and dynamic point removal. ME-LSQ shows robust performance against randomness.
  • Polar-grid local RCS feature extraction: We enhance odometry performance by extracting meaningful points from noisy radar point clouds using voxelization in polar coordinates and considering locally high RCS values.
  • RCS-weighted accumulated scans-to-submap matching: We enhance the registration algorithm by using accumulated scans-to-submap matching and weighting points based on RCS information.

II. SYSTEM OVERVIEW

The entire system comprises three main stages, as illustrated in Figure 1. First, the IMU-free ego-motion estimation step takes the radar point cloud as input, specifically $x, y, z$, Doppler, and RCS. The ME-LSQ-based ego-motion estimation step performs ego-motion estimation using the Doppler velocity from a radar sensor, utilizing the previously estimated ego-motion as an initial value. Ego-motion estimation extracts static points with the estimated velocity and refines the estimated velocity with the static points. As a result, ego-motion estimation provides static points and an initial pose for registration.
Secondly, in the feature extraction stage, we input the static point cloud from the previous module to extract meaningful points from the noisy and sparse radar point cloud. To achieve this, we perform polar-grid local RCS feature extraction. By voxelizing the point cloud space in polar coordinates and extracting the points with the highest RCS values within each voxel, we ensure that locally significant points are retained while outputting a meaningful feature point cloud.
Finally, odometry is performed using the feature point cloud and the initial transformation as inputs. We first perform accumulated scans and submap generation to mitigate the issue of incorrect or insufficient correspondences in the sparse radar point cloud. In this stage, we utilize the initial transformation information from the IMU-free ego-motion estimation to accumulate previous accumulated scans. The submap is created using a more precise transformation estimated from the entire process. Ultimately, we perform RCS-weighted accumulated scans-to-submap matching (RCSW-ASSM) using the accumulated scans, submap, and the initial transformation as inputs. This process outputs the estimated pose and motion information.

Fig. 1. The proposed system architecture incorporates inputs from a 4D imaging radar's point cloud and previously estimated odometry motion data. The point cloud data includes dimensions x, y, and z, along with Doppler and RCS data. In the first stage, IMU-free ego-motion estimation, an initial transformation for the odometry is generated. The second stage involves performing polar-grid local RCS feature extraction to sample locally meaningful points from noisy and sparse radar points. Finally, the system executes RCS-weighted accumulated scans-to-submap matching-based odometry using the initial transformation and the filtered point cloud from the previous two stages.

Fig. 2. Cyan points correspond to dynamic objects, and yellow points represent static objects. The purple arrows indicate the Doppler velocity measured from the radar sensor, while the black dashed arrows represent the estimated absolute Doppler velocity. It can be observed that the difference between the estimated static Doppler velocity and the measured Doppler velocity for static points is minimal. In contrast, there is a significant difference for dynamic points.

III. IMU-FREE EGO-MOTION ESTIMATION

The proposed method consists of two steps: IMU-free ego-motion estimation and angular velocity estimation. Additionally, using the estimated motion, we estimate the initial transformation for the registration and filter out dynamic points from the radar point cloud.
Fig. 2 shows a radar point cloud containing both static and dynamic points. The black dashed lines represent the estimated static Doppler velocity, and the purple lines represent the measured Doppler velocity; the point colors distinguish static from dynamic points (see the caption of Fig. 2). The difference between the estimated static Doppler velocity and the measured Doppler velocity is nearly zero for static points but significantly larger for dynamic points. Accurate ego-motion estimation is essential for accurate static point cloud extraction and registration, because the ego-motion determines the estimated static Doppler velocity.

A. M-estimated least square (ME-LSQ) for ego-motion estimation

1) Doppler velocity with ego-motion: Where the ego-motion is $\vec{v}_{\text{ego}}^{S} \in \mathbb{R}^{3}$ and the radar target is static, the Doppler velocity of the target $v_{d}$, shown as the estimated static Doppler velocity in Fig. 2, can be obtained as the dot product with the direction vector $\vec{u} \in \mathbb{R}^{3}$ of the measurement point (2). The symbols $\vec{v}^{S}$ and $\vec{u}^{S}$ represent the velocity and direction vector of the target in the sensor coordinate system, respectively. The superscript $S$ denotes the sensor coordinate system, and the superscript $T$ denotes transposition. The lowercase $v$ and $u$ denote the scalar Doppler velocity and the components of the direction vector, respectively. $\vec{p}^{S}$ is the target radar point in the sensor coordinate system.

$$
\vec{v}_{\text{ego}}^{S} = \left[v_{\text{ego},x}^{S},\, v_{\text{ego},y}^{S},\, v_{\text{ego},z}^{S}\right]^{T},\qquad
\vec{u}^{S} = \left[\frac{p_{x}^{S}}{\lVert\vec{p}^{S}\rVert},\, \frac{p_{y}^{S}}{\lVert\vec{p}^{S}\rVert},\, \frac{p_{z}^{S}}{\lVert\vec{p}^{S}\rVert}\right]^{T} = \left[u_{x}^{S},\, u_{y}^{S},\, u_{z}^{S}\right]^{T} \tag{1}
$$

$$
v_{d} = -\left(\vec{u}^{S}\right)^{T}\vec{v}_{\text{ego}}^{S} \tag{2}
$$
Doppler velocities $\vec{v}_{d} \in \mathbb{R}^{N}$ for $N$ radar measurement points are shown in (3). The $\mathbf{H}$ matrix stacks the direction vectors of the radar measurement points, and $v_{d,n}$ and $\vec{u}_{n}$ denote the Doppler velocity and direction vector of the $n$-th radar measurement point, respectively.

$$
\underbrace{\begin{bmatrix} v_{d,1} \\ v_{d,2} \\ \vdots \\ v_{d,N} \end{bmatrix} = -\begin{bmatrix} u_{1,x}^{S} & u_{1,y}^{S} & u_{1,z}^{S} \\ u_{2,x}^{S} & u_{2,y}^{S} & u_{2,z}^{S} \\ \vdots & \vdots & \vdots \\ u_{N,x}^{S} & u_{N,y}^{S} & u_{N,z}^{S} \end{bmatrix} \begin{bmatrix} v_{x}^{S} \\ v_{y}^{S} \\ v_{z}^{S} \end{bmatrix}}_{\vec{v}_{d} = -\mathbf{H}\vec{v}_{\text{ego}}^{S}} \tag{3}
$$

$$
\mathbf{H} = \left[\vec{u}_{1}^{S},\, \vec{u}_{2}^{S},\, \ldots,\, \vec{u}_{N}^{S}\right]^{T} \tag{4}
$$
To estimate the ego-motion $\vec{v}_{\text{ego}}^{S}$, a commonly used approach is the least squares (LSQ) solution [13], [27], as shown in (5).

$$
\vec{v}_{\text{ego}}^{S} = -\left(\mathbf{H}^{T}\mathbf{H}\right)^{-1}\mathbf{H}^{T}\vec{v}_{d} \tag{5}
$$
Equation (5) estimates the ego-motion $\vec{v}_{\text{ego}}^{S}$ under the assumption that all radar measurement points are static. However, the measurements also include dynamic points, which can degrade the ego-motion estimate. The actually measured Doppler velocity $v_{\text{meas}}$ is the projected difference between the target velocity $\vec{v}_{\text{target}}^{S}$ and the ego-motion $\vec{v}_{\text{ego}}^{S}$, as shown in (6). The $n$-th radar measurement's Doppler velocity $v_{\text{meas},n}$ is given by:

$$
\begin{aligned}
v_{\text{meas},n} &= \left(\vec{u}_{n}^{S}\right)^{T}\left(\vec{v}_{\text{target},n}^{S} - \vec{v}_{\text{ego}}^{S}\right) \\
&= \left(\vec{u}_{n}^{S}\right)^{T}\vec{v}_{\text{target},n}^{S} - \left(\vec{u}_{n}^{S}\right)^{T}\vec{v}_{\text{ego}}^{S}
\end{aligned} \tag{6}
$$
where $\vec{v}_{\text{target},n}^{S}$ is the radar target's velocity vector, $\vec{v}_{\text{ego}}^{S}$ is the radar sensor's velocity vector, and $\vec{u}_{n}^{S}$ is the target direction vector.
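As a concrete illustration of (3)-(5), the following is a minimal NumPy sketch (ours, for exposition only; not the reference implementation, and the function name is our own) that stacks the unit direction vectors into $\mathbf{H}$ and solves the least-squares problem:

```python
import numpy as np

def lsq_ego_velocity(points, doppler):
    """Least-squares ego-velocity from Doppler measurements, eq. (5).

    points:  (N, 3) radar detections [x, y, z] in the sensor frame.
    doppler: (N,) measured Doppler velocities v_d.
    Returns the estimated sensor velocity v_ego^S of shape (3,).
    """
    # Unit direction vectors u_n^S of each detection, stacked into H (eq. 4).
    H = points / np.linalg.norm(points, axis=1, keepdims=True)
    # v_d = -H v_ego  =>  v_ego = -(H^T H)^{-1} H^T v_d  (eq. 5).
    v_ego, *_ = np.linalg.lstsq(-H, doppler, rcond=None)
    return v_ego
```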

2) ME-LSQ for linear velocity estimation: Most other radar odometry methods use the RANSAC-LSQ method to estimate ego-motion [13], [14], [28]-[30]. The RANSAC-LSQ method randomly samples three points from the radar point cloud and estimates the ego-motion using the resulting inlier points. Consequently, it can produce incorrect results when dynamic points are in the vicinity.
On the other hand, the ME-LSQ method utilizes the previously estimated ego-motion as an initial value for the current ego-motion estimation. The ME-LSQ method is based on M-estimation [31] with a weighted LSQ. Since the ego-motion typically changes gradually, leveraging the previously estimated ego-motion as the initial value in ME-LSQ enables robust ego-motion estimation. By initializing ME-LSQ with the last known motion estimate, the algorithm can more robustly and effectively filter out noise and inconsistencies caused by moving objects, leading to a smoother and more reliable trajectory estimate.
The ME-LSQ method consists of the following steps:

1. Initialization:

  • Estimate the initial velocity $\hat{\vec{v}}_{\text{ego},k}^{S}$ at time-step $k$ using the previous odometry result.

2. Prediction:

  • Predict the expected static Doppler velocity $\hat{v}_{d,k,n}$ for each point at time-step $k$ using the initial velocity $\hat{\vec{v}}_{\text{ego},k}^{S}$.

3. M-estimation with Weighted LSQ:

  • Calculate the residuals $r_{n} = v_{\text{meas},n} - \hat{v}_{d,k,n}$ for all points, where $v_{\text{meas},n}$ is the observed Doppler velocity of the $n$-th point.
  • Compute the weights $w_{n}$ based on the chosen M-estimator loss function (Tukey's biweight) [31], where $c$ is a dynamic-point threshold for the radar points:

$$
w_{n} = \begin{cases} \left(1 - (r_{n}/c)^{2}\right)^{2} & \text{if } |r_{n}| < c \\ 0 & \text{otherwise} \end{cases} \tag{7}
$$

$$
\mathbf{W} = \operatorname{diag}(w_{1}, w_{2}, \ldots, w_{N}) \tag{8}
$$

  • Fit a weighted least squares model using the weights $\mathbf{W}$ to update the velocity estimate $\hat{\vec{v}}_{\text{ego},k}$:

$$
\hat{\vec{v}}_{\text{ego},k} = -\left(\mathbf{H}^{T}\mathbf{W}\mathbf{H}\right)^{-1}\mathbf{H}^{T}\mathbf{W}\,\vec{v}_{\text{meas}} \tag{9}
$$

  • The static point cloud $\mathrm{P}_{\text{static}}$ is obtained simultaneously by selecting the points whose weight $w_{n}$ is less than the threshold $c$:

$$
\mathrm{P}_{\text{static}} = \left\{\vec{p}_{n} \mid w_{n} < c\right\} \tag{10}
$$

4. Iterative Refinement:

  • Repeat steps 2 and 3 until the change in the velocity estimate $\hat{\vec{v}}_{\text{ego},k}$ falls below a predefined threshold.

5. Update:

  • Use the final velocity estimate $\hat{\vec{v}}_{\text{ego},k}$ for filtering dynamic points and for the next prediction step.

However, the ME-LSQ method can fail when the sensor's entire field of view (FOV) is occluded or when no static points can be detected; in these cases it cannot estimate the ego-motion accurately. To address this, we reuse the previously estimated motion when no point cloud is detected in the current frame. In addition, we fall back to the RANSAC-LSQ method [14] for ego-motion estimation when the number of static points is below a certain threshold.
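The steps above can be condensed into a short sketch (illustrative only; the threshold $c$, the tolerance, and the iteration cap are example values, and the static set is taken here as the inliers of the final fit):

```python
import numpy as np

def me_lsq_ego_velocity(points, doppler, v_init, c=0.5, tol=1e-4, max_iter=20):
    """M-estimated weighted LSQ ego-velocity with Tukey's biweight.

    points:  (N, 3) detections in the sensor frame.
    doppler: (N,) measured Doppler velocities v_meas.
    v_init:  (3,) previously estimated ego-velocity (initial guess).
    Returns (v_ego, static_mask).
    """
    H = points / np.linalg.norm(points, axis=1, keepdims=True)
    v = np.asarray(v_init, dtype=float)
    for _ in range(max_iter):
        # Step 2: predicted static Doppler velocity under the current estimate.
        v_hat = -H @ v
        # Step 3: residuals and Tukey biweights, eqs. (7)-(8).
        r = doppler - v_hat
        w = np.where(np.abs(r) < c, (1.0 - (r / c) ** 2) ** 2, 0.0)
        W = np.diag(w)
        # Weighted LSQ update, eq. (9).
        v_new = -np.linalg.solve(H.T @ W @ H, H.T @ W @ doppler)
        # Step 4: iterate until the velocity estimate stops changing.
        if np.linalg.norm(v_new - v) < tol:
            v = v_new
            break
        v = v_new
    static_mask = np.abs(doppler + H @ v) < c  # final-fit inliers as static points
    return v, static_mask
```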
3) Angular velocity estimation: As mentioned in Section III-A2, ME-LSQ estimates the linear velocity of the radar sensor. For robust registration, we need not only this linear velocity but also the angular velocity. While the angular velocity can be estimated with multiple sensors, it is not feasible with a single sensor [16]; the information is insufficient, as a single radar provides only three degrees of freedom of motion information. Therefore, we utilize the kinematic constraints of the vehicle to estimate the angular velocity, similar to the method proposed by [13]. This approach allows us to leverage the Doppler effect to infer rotational motion, which is crucial for accurate transformation estimation in our radar-based odometry system.
Using the vehicle's Ackermann condition and the radar's Doppler velocity, the rotational motion can be obtained. Typically, automotive radars are mounted at the front of the vehicle, with a wide horizontal field of view (FoV) and a narrow vertical FoV. This configuration provides insufficient data for estimating roll and pitch motions, so, considering these sensor limitations, we focus solely on estimating the yaw motion. This is reasonable given that vehicles move on ground surfaces. Based on the non-holonomic constraint, we assume there is no lateral velocity $v_{\text{ego},y}^{V}$ or vertical velocity $v_{\text{ego},z}^{V}$ at the vehicle's rear axle. The yaw rate $\omega_{z}^{V}$ can then be calculated using the Doppler-derived radar velocity $\vec{v}_{\text{radar}}^{V}$ and the position of the radar relative to the rear axle $\vec{p}_{\text{radar}}^{V}$, as shown in (11).
$$
\omega_{z}^{V} = p_{\text{radar},x}^{V}\, v_{\text{radar},y}^{V} - p_{\text{radar},y}^{V}\, v_{\text{radar},x}^{V} \tag{11}
$$
The superscript $V$ denotes the vehicle coordinate system, $\omega_{z}^{V}$ denotes the vehicle's yaw rate, and $\vec{p}_{\text{radar}}^{V}$ is the position from the rear axle to the radar. The velocity of the radar sensor in the vehicle coordinate system, $\vec{v}_{\text{radar}}^{V}$, can be determined using the rotational calibration from the sensor to the vehicle, denoted $\mathbf{R}_{\text{radar}}^{V}$:
$$
\vec{v}_{\text{radar}}^{V} = \mathbf{R}_{\text{radar}}^{V}\,\vec{v}_{\text{radar}}^{S} \tag{12}
$$
Algorithm 1 Polar-grid Local RCS Feature Extraction
Input:
    P ⊂ R³: set of points of the point cloud (x, y, z, RCS);
    Δθ: size of azimuth steps in degrees;
    Δφ: size of elevation steps in degrees;
    Δr: size of radius steps in meters;
    N_feature: number of top RCS points to extract per voxel;
Output:
    The set of top N_feature RCS points for each cell V(id)
    Initialize V(id) = ∅ for each cell
    for each point p in P do
        r ← sqrt(p_x² + p_y² + p_z²)
        θ_azimuth ← atan2(p_y, p_x)
        θ_elevation ← atan2(p_z, sqrt(p_x² + p_y²))
        idx_r ← ⌊r / Δr⌋
        idx_azi ← ⌊(θ_azimuth · 180/π) / Δθ⌋
        idx_ele ← ⌊((θ_elevation + π/2) · 180/π) / Δφ⌋
        id ← combine(idx_r, idx_azi, idx_ele)
        V(id) ← V(id) ∪ {p}
    end for
    for each cell V(id) do
        Sort points in V(id) by RCS in descending order
        V(id) ← top N_feature points from V(id)
    end for
The angular velocity in the sensor coordinate system $\vec{\omega}_{\text{radar}}^{S}$ is required for registration. It can be obtained by transforming the angular velocity in the vehicle coordinate system $\vec{\omega}_{\text{radar}}^{V}$ into the sensor coordinate system using $\mathbf{R}_{\text{radar}}^{V}$.

$$
\vec{\omega}_{\text{radar}}^{S} = \left(\mathbf{R}_{\text{radar}}^{V}\right)^{T}\vec{\omega}_{\text{radar}}^{V} \tag{13}
$$
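A small sketch of the chain (11)-(13), under the stated planar, non-holonomic assumption (the function name is ours; the lever arm and calibration rotation would come from the vehicle's extrinsic calibration, and the yaw-rate line follows (11) as printed above):

```python
import numpy as np

def angular_velocity_sensor_frame(v_ego_sensor, R_radar_V, p_radar_V):
    """Yaw-rate estimation for a single front-mounted radar, eqs. (11)-(13).

    v_ego_sensor: (3,) ego-velocity in the sensor frame (from ME-LSQ).
    R_radar_V:    (3, 3) rotation from the sensor frame to the vehicle frame.
    p_radar_V:    (3,) radar position relative to the rear axle.
    """
    # Eq. (12): express the radar velocity in the vehicle frame.
    v_radar_V = R_radar_V @ v_ego_sensor
    # Eq. (11): yaw rate under the non-holonomic (no lateral/vertical slip) constraint.
    w_z = p_radar_V[0] * v_radar_V[1] - p_radar_V[1] * v_radar_V[0]
    omega_V = np.array([0.0, 0.0, w_z])  # roll and pitch are not estimated
    # Eq. (13): transform the angular velocity back into the sensor frame.
    return R_radar_V.T @ omega_V
```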

B. Initial transform estimation

The initial transformation $\mathbf{T}_{i}^{(\text{init})} = (\mathbf{R}_{i}^{(\text{init})}, \vec{p}_{i}^{(\text{init})})$ at the current timestamp $i$ is determined from the pose at the previous timestamp $j$, denoted $\mathbf{T}_{j} = (\mathbf{R}_{j}, \vec{p}_{j}) \in \mathrm{SO}(3) \times \mathbb{R}^{3}$, together with the estimated velocity $\vec{v}_{\text{ego},i}$, the angular velocity $\vec{\omega}_{i}$, and the time interval $\Delta t_{i,j}$ between the two timestamps.

$$
\begin{aligned}
\mathbf{R}_{i}^{(\text{init})} &= \mathbf{R}_{j}\exp\!\left(\vec{\omega}_{i}\,\Delta t_{i,j}\right) \\
\vec{p}_{i}^{(\text{init})} &= \vec{p}_{j} + \mathbf{R}_{j}\,\vec{v}_{\text{ego},i}\,\Delta t_{i,j}
\end{aligned} \tag{14}
$$
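In code, (14) is a constant-velocity prediction on $\mathrm{SO}(3) \times \mathbb{R}^{3}$. A minimal sketch (ours) using Rodrigues' formula for the exponential map:

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: exponential of a rotation vector w (3,) into SO(3)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def predict_initial_pose(R_j, p_j, v_ego, omega, dt):
    """Constant-velocity pose prediction, eq. (14)."""
    R_init = R_j @ so3_exp(omega * dt)   # rotate by the integrated angular velocity
    p_init = p_j + R_j @ (v_ego * dt)    # translate along the body-frame velocity
    return R_init, p_init
```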

IV. FEATURE EXTRACTION
The radar point cloud has two notable disadvantages compared to LiDAR: lower point density and noise, including clutter from surrounding objects. Applying traditional LiDAR-based filters directly to radar point clouds risks retaining noisy points instead of meaningful points.

A. Radar cross-section (RCS)

An object's Radar cross-section (RCS) is a unique property of the object measurable by radar systems. RCS is influenced by the material, the physical size of the object, and the incident and reflection angles of the radar waves. Excluding the incident and reflection angles, the other factors are unique to the object, making RCS a distinctive landmark in radar odometry systems. In particular, columnar-shaped objects such as lampposts, fence poles, and bollards have consistent RCS measurements in all directions due to having the same incident and reflection angles [26].
RCS values can be estimated from the received power $Pwr_{r}$ measured by radar sensors. $Pwr_{r}$ is determined by the transmitted power $Pwr_{t}$, the transmit antenna gain $G_{t}$, the effective area of the receiving antenna $A_{\text{eff}}$, and the object's RCS $\sigma$; it is inversely proportional to the fourth power of the distance $r$ from the object (15).

$$
Pwr_{r} = \frac{Pwr_{t}\,G_{t}}{4\pi r^{2}}\,\sigma\,\frac{1}{4\pi r^{2}}\,A_{\text{eff}} = \frac{k}{r^{4}}\,\sigma \tag{15}
$$

Therefore, using (16) below, the object's RCS $\sigma$ can be calculated from the radar's received power $Pwr_{r}$ and the object's distance $r$.

$$
\sigma = \frac{r^{4}}{k}\,Pwr_{r} \tag{16}
$$

From the above equation, the RCS $\sigma$ is proportional to the received power $Pwr_{r}$ and to the fourth power of the distance $r$.
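In code form, (16) is a one-liner; here $k$ is the lumped radar constant $Pwr_{t} G_{t} A_{\text{eff}} / (4\pi)^{2}$ from (15), and the function name is ours:

```python
def rcs_from_received_power(pwr_r, r, k):
    """Eq. (16): recover the RCS sigma from received power and range.

    pwr_r: received power Pwr_r
    r:     range to the object in meters
    k:     lumped radar constant Pwr_t * G_t * A_eff / (4*pi)^2
    """
    return (r ** 4 / k) * pwr_r
```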

B. Polar-grid local RCS feature extraction

Imaging radar sensors employ signals in a lower frequency range than LiDAR sensors and thus intrinsically exhibit higher noise levels. This effect is especially marked for objects with a low Radar cross-section (RCS): such objects reflect less radar signal back to the sensor, which can result in noisier data. Furthermore, noise artifacts can also arise from signal scattering around objects with high RCS. When a radar signal encounters an object with a high RCS, especially in an environment with multiple reflective surfaces, the signal can bounce several times before returning to the sensor. This phenomenon, known as multipath scattering, can create ghost echoes or false targets that confuse the radar system, appearing as noise or artifacts in the radar image [32], [33]. Such noise points can lead to incorrect correspondences during registration, which may significantly impair the overall odometry performance. Therefore, extracting meaningful points from the radar point cloud before performing odometry is crucial to ensure the system's accuracy and reliability.
This paper introduces a polar-grid local RCS feature extraction method in response to this challenge; details are provided in Algorithm 1. First, points given in Euclidean coordinates are converted to the polar coordinate system using radius, azimuth, and elevation. The space is then divided using a polar grid of size $\Delta\theta, \Delta\phi, \Delta r$. Within each voxel, points are sorted in descending order of their RCS values, and the top N points with the highest RCS values are extracted. Figure 3 illustrates the result of this algorithm when only the top point is selected. This method extracts points with high RCS values while retaining locally significant point information.
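A NumPy rendering of Algorithm 1 follows (an illustrative sketch; the grid sizes and $N_{\text{feature}}$ below are example values, not the paper's tuned parameters):

```python
import numpy as np
from collections import defaultdict

def polar_grid_rcs_features(points, d_r=2.0, d_theta=5.0, d_phi=5.0, n_feature=2):
    """Keep the n_feature highest-RCS points per polar-grid voxel (Algorithm 1).

    points: (N, 4) array of [x, y, z, rcs].
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    azi = np.degrees(np.arctan2(y, x))               # azimuth in degrees
    ele = np.degrees(np.arctan2(z, np.hypot(x, y)))  # elevation in degrees

    # Bin every point into a (range, azimuth, elevation) voxel.
    voxels = defaultdict(list)
    for i in range(len(points)):
        vid = (int(r[i] // d_r),
               int(azi[i] // d_theta),
               int((ele[i] + 90.0) // d_phi))        # shift elevation by pi/2
        voxels[vid].append(i)

    # Within each voxel, keep only the top-RCS points.
    keep = []
    for idx in voxels.values():
        idx.sort(key=lambda i: points[i, 3], reverse=True)
        keep.extend(idx[:n_feature])
    return points[keep]
```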

Fig. 3. Each point's color in the 4D imaging radar point cloud indicates the magnitude of its Radar cross-section (RCS). The color spectrum transitions from red to purple, with colors towards purple indicating higher RCS values. Assuming a two-dimensional space, the X-Y domain is segmented into a polar grid. The two highest RCS values are extracted within each segment of this grid.

V. ODOMETRY

A. Iterative Closest Point (ICP)

In a variety of studies, various registration techniques have been proposed depending on the type (edge, surface, solid) of the LiDAR point cloud [4]. In the case of radar point clouds, however, it is difficult to capture morphological features such as edges and surfaces due to the sparsity and noise characteristics of radar, so we adopt the point-to-point ICP algorithm [7] as the base algorithm.
The ICP algorithm consists of two main steps: finding correspondences and minimizing distances. In the first step, ICP finds $m$ correspondences from the source points $\mathrm{S} = \{\vec{s}_{1}, \vec{s}_{2}, \ldots, \vec{s}_{m}\}$ to the target points $\mathrm{Q} = \{\vec{q}_{1}, \vec{q}_{2}, \ldots, \vec{q}_{n}\}$. Equation (17) shows finding, at iteration $k$, the correspondence point $\hat{q}_{j,i}^{k}$ of the $j$-th scan point $\vec{s}_{j,i} \in \mathrm{S}_{\text{inlier}}$ at time $i$. If $k$ is 1, $\mathbf{T}_{i}$ is set to $\mathbf{T}_{i}^{\text{init}}$. The rigid transformation $\mathbf{T}$ is composed of a rotation matrix $\mathbf{R} \in \mathrm{SO}(3)$ and a translation vector $\vec{t} \in \mathbb{R}^{3}$.

$$
\hat{q}_{j,i}^{k} = \underset{\vec{q}_{n} \in \mathrm{Q}}{\operatorname{argmin}}\left\lVert \mathbf{R}_{i}^{k}\vec{s}_{j,i} + \vec{t}_{i}^{k} - \vec{q}_{n} \right\rVert \tag{17}
$$

The second step updates the $k$-th step's rotation $\mathbf{R}_{i}^{k}$ and translation $\vec{t}_{i}^{k}$ to minimize the distance between all correspondences of $\mathrm{S}$; the proposed method minimizes this distance with the RCS weights described in Section V-B. Finally, ICP iteratively updates the transformation to minimize the L2 distance between all corresponding points.
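For reference, a bare point-to-point ICP iteration in the spirit of (17) is sketched below: correspondence search via a k-d tree, followed by the closed-form (SVD) update of the rigid transform. This is a generic, unweighted sketch of ours, not the RCS-weighted variant of Section V-B:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, tgt, R, t, n_iter=30):
    """Point-to-point ICP from an initial guess (R, t).

    src: (M, 3) source points; tgt: (N, 3) target points.
    """
    tree = cKDTree(tgt)
    for _ in range(n_iter):
        moved = src @ R.T + t
        _, nn = tree.query(moved)          # eq. (17): nearest target per point
        q = tgt[nn]
        # Closed-form minimisation of the summed L2 distances (Kabsch/SVD).
        mu_s, mu_q = moved.mean(axis=0), q.mean(axis=0)
        U, _, Vt = np.linalg.svd((moved - mu_s).T @ (q - mu_q))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:          # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        R, t = dR @ R, dR @ t + (mu_q - dR @ mu_s)
    return R, t
```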

B. RCS-weighted accumulated scans-to-submap matching

1) Accumulated scans and submap generation: 4D imaging radars provide sparse point data, leading many studies to employ scan-to-submap matching techniques to overcome this limitation. Scan-to-submap matching aligns individual scans with a submap created by accumulating point clouds around the vehicle from previous registration results. This helps prevent a sparse set of correspondences within the radar point cloud, thereby achieving stable registration results. However, the single radar scans that are matched to the submap remain sparse.
Algorithm 2 Local RCS Normalization
Input:
    𝕍: map of spherical voxel indices to sets of points (x, y, z, RCS);
    T: minimum RCS difference for normalization;
    σ_NF: normalization factor of RCS;
Output:
    Ω: set of points with normalized RCS values
    Initialize Ω ← ∅
    for all voxels V in 𝕍 do
        Extract P_V from V
        Find σ_min and σ_max in P_V
        if σ_max − σ_min > T then
            for all p_rcs in P_V do
                Ω ← Ω ∪ {σ_NF · (p_rcs − σ_min) / (σ_max − σ_min)}
            end for
        else
            for all p_rcs in P_V do
                Ω ← Ω ∪ {0}
            end for
        end if
    end for
return Ω
Fig. 4. Visualization of accumulated scans and submap formation for accumulated scans-to-submap matching, with M set to 5 and N set to 10. Accumulated scans are formed using the point cloud and initial transformation information within the orange box. Subsequently, the point cloud and the refined transformation information from registration within the light blue box are used to form the submap.
Unlike LiDAR sensors, radar sensors offer the powerful advantage of providing Doppler information. Most 4D imaging radar sensors deliver Doppler data with an accuracy between $0.01\,\mathrm{m/s}$ and $0.15\,\mathrm{m/s}$, which is sufficient for accumulating scans over short periods for odometry estimation. Leveraging this capability, this paper proposes accumulated scans-to-submap matching (ASSM), which matches M accumulated scans with a submap created from N scans. Initially, the accumulated scans $\mathbb{M}_{\text{AccuScans},i}$, comprising M individual scans, are accumulated using only the initial transformation information estimated through radar ego-motion estimation, as described in Section III.
$$
\mathbb{M}_{\text{AccuScans},i} = \mathrm{S}_{i} + \sum_{j=i-M}^{i-1}\left(\prod_{l=i-j}^{i-1}\left(\mathbf{T}_{l}^{(\text{init})}\right)^{-1}\right)\mathrm{S}_{j} \tag{18}
$$
Algorithm 3 Correspondence Weight Calculation with RCS
Input:
    S: set of source points in correspondences (x, y, z);
    Ω_source: set of normalized RCS values for the source points;
    Q: set of target points in correspondences (x, y, z);
    κ: kernel value for weighting;
Output:
    W: calculated weights for each correspondence
    Initialize W ← [ ]
    for each index i in S do
        Compute the residual r_i for the i-th correspondence
        Compute w_i ← 1.0 / (1.0 + exp(−Ω_{s_i}))
        Append w_i to W
    end for
return W
where $\mathbf{T}_{i}^{(\text{init})}$ is the initial transformation matrix estimated through radar ego-motion estimation for the $i$-th scan, and $\mathrm{S}_{i}$ represents the $i$-th radar scan.
Next, in the case of the submap $\mathbb{M}_{submap,i}$ created by accumulating N scans, it is formed using the refined transformation information $\mathbf{T}_{i}$ obtained through the registration process.
\[
\mathbb{M}_{submap,i}=\mathrm{S}_{i}+\sum_{j=i-N}^{i-1}\left(\prod_{l=j}^{i-1}\mathbf{T}_{l}^{-1}\right)\mathrm{S}_{j}
\]
In the case of the $\mathbb{M}_{AccuScans,i}$ created by accumulating M scans with initial transformations, drift can occur over time due to the use of the $\mathbf{T}_{j}^{(init)}$ obtained through ego-motion estimation only. Therefore, the number of accumulated scans is kept to 10 or fewer, given the 10 Hz frame rate of the radar. Additionally, M must be smaller than N to minimize drift errors, as initial transformations from radar Doppler may be less accurate. In contrast, the submap transformations (N) are refined through the entire system, providing more reliable data. Ensuring $\mathrm{M}<\mathrm{N}$ leverages these refined transformations to correct drift errors, resulting in a more robust odometry system.
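To make the accumulation concrete, the following Python sketch chains the relative transforms to express the most recent m scans in the frame of the newest scan, mirroring the two equations above. It is an illustration under our own assumptions about data layout (lists of numpy arrays and 4x4 relative transforms), not the authors' released implementation. Passing the Doppler-based initial transforms with m = M would yield $\mathbb{M}_{AccuScans,i}$, while passing the refined transforms with m = N would yield $\mathbb{M}_{submap,i}$.

```python
import numpy as np


def accumulate_scans(scans, rel_transforms, m):
    """Accumulate the last m scans into the frame of the newest scan.

    scans:          list of (N_k, 3) point arrays, ordered oldest to newest
    rel_transforms: list of 4x4 relative transforms; rel_transforms[k]
                    relates frame k to frame k+1 (initial guesses from
                    ego-motion for the accumulated scans, refined poses
                    for the submap)
    m:              number of past scans to accumulate
    """
    merged = [scans[-1]]            # S_i already lives in frame i
    chain = np.eye(4)
    last = len(scans) - 1           # index of the newest scan (frame i)
    for k in range(last - 1, last - 1 - m, -1):
        # Compose (T_l)^(-1) backwards from l = i-1 down to l = k
        chain = chain @ np.linalg.inv(rel_transforms[k])
        pts_h = np.hstack([scans[k], np.ones((len(scans[k]), 1))])
        merged.append((pts_h @ chain.T)[:, :3])
    return np.vstack(merged)
```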

2) RCS-weighted matching: In the operating environment of four-wheeled mobility, differentiating between objects with similar structures, such as building walls or road surfaces, within radar point cloud data can be challenging. Conversely, objects like lampposts, bollards, and fences, which are often metallic and possess high RCS, tend to have distinct features that facilitate the identification of key points and enhance matching accuracy during registration. We can potentially improve registration performance by incorporating these characteristics into the ICP (Iterative Closest Point) process.
To model RCS-based weights, we first normalize the RCS values of all points, as described in Algorithm 2. During RCS normalization, after polar-grid voxelization, each point's RCS value is normalized between 0 and $\sigma_{NF}$ based on the maximum and minimum RCS values within each polar grid. If the difference between the maximum and minimum RCS values in a voxel is smaller than a specific threshold $T$, all RCS values in that voxel are set to 0. Points with these normalized RCS values are then used to generate the weights of the correspondences.
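As a rough illustration of this per-voxel normalization (a sketch under our own assumptions about array layout, not the paper's code), each point's RCS is min-max scaled to $[0, \sigma_{NF}]$ inside its polar-grid voxel, and voxels whose RCS spread falls below $T$ are zeroed out; the default parameter values follow Table II:

```python
import numpy as np


def normalize_rcs(points, rcs, sigma_nf=10.0, t=10.0,
                  d_az=np.deg2rad(2.0), d_el=np.deg2rad(2.0), d_r=2.0):
    """Min-max normalize RCS per polar-grid voxel (illustrative sketch).

    points: (N, 3) xyz positions; rcs: (N,) raw RCS values.
    Returns (N,) normalized RCS values in [0, sigma_nf].
    """
    rng = np.linalg.norm(points, axis=1)
    az = np.arctan2(points[:, 1], points[:, 0])
    el = np.arcsin(np.clip(points[:, 2] / np.maximum(rng, 1e-9), -1.0, 1.0))
    # Integer polar-grid voxel key per point (azimuth, elevation, range bins)
    keys = np.stack([az // d_az, el // d_el, rng // d_r], axis=1).astype(int)

    omega = np.zeros_like(rcs, dtype=float)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    for v in range(inverse.max() + 1):
        idx = inverse == v
        lo, hi = rcs[idx].min(), rcs[idx].max()
        if hi - lo >= t:                # enough RCS spread within the voxel
            omega[idx] = sigma_nf * (rcs[idx] - lo) / (hi - lo)
        # otherwise all values in the voxel stay 0, per the threshold rule
    return omega
```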
As shown in Algorithm 3, we first determine the residual $r_{i}$ between the source and target points to compute the correspondence weight. Next, we apply a sigmoid function to the normalized RCS value computed in Algorithm 2, resulting in $w_{rcs}$, which is scaled between 0.5 and 1. Using the Geman-McClure robust kernel [34], as employed in KISS-ICP [5], we multiply the weight term for the residual by $w_{rcs}$ to obtain the final weight $w_{i}$.
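A compact sketch of this weighting is given below. It is an illustrative reconstruction under our own assumptions; in particular, the exact Geman-McClure scaling in the released code may differ from the form assumed here:

```python
import numpy as np


def correspondence_weights(src, tgt, omega_src, kappa):
    """RCS-weighted correspondence weights, following Algorithm 3 (sketch).

    src, tgt:  (N, 3) matched source/target points
    omega_src: (N,) normalized RCS values of the source points
    kappa:     robust-kernel scale parameter
    """
    r = np.linalg.norm(src - tgt, axis=1)        # residual per correspondence
    w_rcs = 1.0 / (1.0 + np.exp(-omega_src))     # sigmoid: in (0.5, 1] for omega >= 0
    w_gm = (kappa / (kappa + r ** 2)) ** 2       # Geman-McClure-style weight (assumed form)
    return w_gm * w_rcs
```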
Ultimately, optimization is performed as follows to minimize the residuals of the point-to-point correspondences, using the weight values $\mathbf{W}$ calculated by Algorithm 3:
\[
\mathbf{T}_{j}^{(k+1)}=\underset{\mathbf{T}}{\operatorname{argmin}} \sum_{(\vec{s}, \vec{q}) \in C,\, w \in \mathbf{W}}\|\mathbf{T} \vec{s}-\vec{q}\|_{2} \cdot w
\]
Here, $C$ denotes the set of correspondences between the source and target, and $\vec{s}$ and $\vec{q}$ represent the source and target points formed by the correspondences.
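In an iteratively reweighted scheme, each inner iteration of this robust objective reduces to a weighted least-squares alignment with fixed weights, which admits the closed-form solution sketched below (a generic weighted Kabsch step under our own assumptions, not the paper's actual solver):

```python
import numpy as np


def weighted_point_to_point(src, tgt, w):
    """Closed-form weighted alignment step (weighted Kabsch, sketch).

    Returns the 4x4 transform T minimizing sum_i w_i * ||T s_i - q_i||^2.
    """
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)         # weighted centroids
    mu_q = (w[:, None] * tgt).sum(axis=0)
    H = ((src - mu_s) * w[:, None]).T @ (tgt - mu_q)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # proper rotation (det = +1)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = mu_q - R @ mu_s
    return T
```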

VI. EXPERIMENTS

A. Experiment setup

For the performance evaluation of the proposed algorithm, we utilized the View-of-Delft (VoD) dataset, which provides 4D imaging radar sensor data and odometry information. This dataset was collected in 2022 in various regions of Delft, Netherlands (such as campus, suburb, and old town) using a vehicle equipped with multiple sensors, including the 4D imaging radar (ZF FR-Gen21) and RTK-GPS. The radar in the VoD dataset offers point-wise RCS information, which we leveraged for our algorithm. The dataset is divided into several sequences, each providing an average of 40 seconds of data. Recent odometry studies using 4D imaging radar typically utilize sequences 03, 04, 09, 17, 19, 22, and 24 as test sets; therefore, we also evaluated our algorithm on these sequences.
The algorithms for comparison include those that use only 4D imaging radar, such as classical methods compared in the 4DRVO-Net [24]: ICP (point-to-point, point-to-plane) [7], NDT [10], G-ICP [9], and learning-based methods such as RaFlow [35] and CMFlow [36]. Ablation studies are conducted on each module proposed in the paper, using KISS-ICP [5] as the baseline algorithm.
In the baseline algorithm KISS-ICP [5], the initial estimation does not utilize ego-motion estimation derived from the radar’s Doppler velocity. Instead, it relies solely on the pre-defined motion model (constant velocity) for the initial guess. Additionally, voxel-based sampling is applied, where only the first index point of each voxel is extracted. Odometry estimation is performed through scan-to-submap matching, where the submap is managed based on distance rather than on a per-sequence scan basis.
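For reference, the baseline's voxel sampling, which keeps only the first point that falls into each voxel, can be sketched as follows (our own minimal illustration, not KISS-ICP's actual code):

```python
import numpy as np


def voxel_first_point(points, voxel_size=1.0):
    """Keep only the first point per voxel (KISS-ICP-style sampling sketch)."""
    keys = np.floor(points / voxel_size).astype(int)
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first_idx)]
```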
In contrast, our proposed method uses ego-motion estimation for the initial guess and applies polar-grid-based local RCS feature extraction using RCS values. Furthermore, odometry is estimated through RCS-weighted accumulated scans-to-submap matching, managed on a per-scan basis. The parameter settings used in this experiment are shown in Table II.
TABLE I
EVALUATION RESULTS ON THE VIEW-OF-DELFT DATASET. BLUE REPRESENTS THE BEST PERFORMANCE, WHILE CYAN INDICATES THE SECOND-BEST PERFORMANCE. THE UNIT OF $t_{rel}$ IS METERS, AND $r_{rel}$ IS DEGREES, BOTH REPRESENTING THE RELATIVE POSE ERROR (RPE) PER 1 METER.

| Method | 03 | | 04 | | 09 | | 17 | | 19 | | 22 | | 24 | | Mean | |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| | $t_{rel}$ | $r_{rel}$ | $t_{rel}$ | $r_{rel}$ | $t_{rel}$ | $r_{rel}$ | $t_{rel}$ | $r_{rel}$ | $t_{rel}$ | $r_{rel}$ | $t_{rel}$ | $r_{rel}$ | $t_{rel}$ | $r_{rel}$ | $t_{rel}$ | $r_{rel}$ |
| ICP (point-to-point) | 0.39 | 1.00 | 0.21 | 1.14 | 0.15 | 0.72 | 0.16 | 0.53 | 0.14 | 4.70 | 0.44 | 0.76 | 0.24 | 0.77 | 0.43 | 1.37 |
| ICP (point-to-plane) | 0.42 | 2.19 | 0.37 | 1.83 | 0.50 | 1.32 | 0.23 | 0.68 | 3.04 | 5.62 | 0.42 | 1.20 | 0.35 | 0.67 | 0.76 | 1.93 |
| GICP | 0.46 | 0.68 | 0.30 | 0.39 | 0.51 | 0.32 | 0.40 | 0.10 | 0.51 | 1.23 | 0.34 | 0.57 | 0.15 | 0.30 | 0.38 | 0.51 |
| NDT | 0.55 | 1.60 | 0.47 | 0.91 | 0.46 | 0.56 | 0.44 | 0.40 | 1.33 | 2.58 | 0.47 | 1.10 | 0.36 | 1.84 | 0.58 | 1.28 |
| RaFlow [35] | 0.87 | 2.09 | 0.07 | 0.44 | 0.11 | 0.09 | 0.13 | 0.03 | 1.22 | 4.09 | 0.72 | 1.34 | 0.25 | 1.14 | 0.48 | 1.32 |
| CMFlow [36] | 0.06 | 0.10 | 0.05 | 0.09 | 0.09 | 0.14 | 0.06 | 0.03 | 0.28 | 0.94 | 0.14 | 0.29 | 0.12 | 0.58 | 0.11 | 0.31 |
| KISS-ICP [5] | 0.10 | 0.90 | 0.12 | 0.81 | 0.11 | 0.60 | 0.12 | 0.23 | 0.16 | 0.35 | 0.14 | 0.94 | 0.37 | 1.07 | 0.16 | 0.70 |
| Ours | 0.05 | 0.48 | 0.07 | 0.44 | 0.09 | 0.30 | 0.11 | 0.20 | 0.15 | 0.52 | 0.08 | 0.59 | 0.10 | 0.71 | 0.09 | 0.46 |
TABLE II
PARAMETER SETTINGS FOR THE VIEW-OF-DELFT DATASET.

| Parameter | Value |
| :--- | :---: |
| Polar-grid voxel azimuth $\Delta \theta$ | 2 deg |
| Polar-grid voxel elevation $\Delta \phi$ | 2 deg |
| Polar-grid voxel range $\Delta r$ | 2 m |
| Minimum RCS difference for normalization $T$ | 10 |
| RCS normalization factor $\sigma_{NF}$ | 10 |
| # of top RCS points per voxel $N_{feature}$ | 1 |
| # of scans used in the accumulated scans M | $1 \sim 5$ |
| # of scans used in the submap N | $10 \sim 15$ |

B. Error metric

The metric used to evaluate the proposed odometry method is the Relative Pose Error (RPE) [37].
\[
RPE_{i,j}=\delta_{est_{i,j}} \ominus \delta_{ref_{i,j}}=\left(\mathbf{T}_{ref,i}^{-1} \mathbf{T}_{ref,j}\right)^{-1}\left(\mathbf{T}_{est,i}^{-1} \mathbf{T}_{est,j}\right)
\]
where $\mathbf{T}_{i}$ represents the $i$-th pose. Using Eq. (21), the RPE between pose $i$ and pose $j$ is denoted as $RPE_{i,j}$. Timestamp $j$ is determined as the time at which the moving distance from timestamp $i$ reaches one meter. The $RPE_{i,j}$ is evaluated for both translation $t_{rel}$ and rotation $r_{rel}$, and all evaluations are conducted in the SE(3) space. Additionally, the evaluation was performed using the evaluation tool provided by [37].
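Concretely, the RPE of Eq. (21) for one pose pair can be computed as in the sketch below. This is a conceptual illustration of what the evo tool [37] evaluates; the helper and its signature are our own, not evo's API:

```python
import numpy as np


def relative_pose_error(T_ref_i, T_ref_j, T_est_i, T_est_j):
    """RPE between poses i and j: delta_est 'minus' delta_ref (4x4 matrices)."""
    delta_ref = np.linalg.inv(T_ref_i) @ T_ref_j
    delta_est = np.linalg.inv(T_est_i) @ T_est_j
    E = np.linalg.inv(delta_ref) @ delta_est
    t_err = np.linalg.norm(E[:3, 3])                      # translational part [m]
    cos_r = np.clip((np.trace(E[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_r))                  # rotational part [deg]
    return t_err, r_err
```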

C. Experiment results in View-of-Delft dataset

Table I shows the experimental results using this dataset. These results represent the RPE for both translation and rotation for each test sequence in the VoD dataset. All values represent the RMSE (Root Mean Square Error) of the RPE within a single sequence. In this table, RaFlow [35] and CMFlow [36] are deep learning-based algorithms, while the remaining algorithms are based on traditional rule-based registration methods. Overall, the proposed algorithm demonstrates the best performance in terms of average translation RPE ($t_{rel}$). Additionally, compared to the KISS-ICP [5] algorithm, which is referenced in this paper, the proposed algorithm shows an improvement of 77.8% in $t_{rel}$ and 52.2% in $r_{rel}$.
Fig. 5 visualizes the trajectory results for the VoD test sequences. It shows the ground truth (blue), KISS-ICP (yellow), the proposed ME-LSQ ego-motion estimation alone (red), the proposed method without accumulated scans-to-submap matching (purple), and the complete proposed method (green). Overall, in the magnified region, the trajectory of KISS-ICP (yellow) exhibits significant perturbations compared to the ground truth. This is attributed to the use of only the constant velocity model for the initial transformation and of scan-to-submap matching, leading to correspondence uncertainties in sparse radar data.
Examining the trajectory with only the proposed ME-LSQ for ego-motion estimation (red), we observe a smoother path than KISS-ICP but a gradual increase in drift error due to the absence of refinement through registration. However, this indicates that the ME-LSQ provides sufficient performance for forming accumulated scans using the estimated initial transformation in short sequences.
By analyzing the plot in purple, which represents the proposed method without accumulated scans-to-submap matching, it becomes apparent that the trajectory exhibits noise similar to KISS-ICP, appearing as perturbations. This observation suggests that one of the primary causes of perturbations in the odometry results is the formation of incorrect correspondences. Due to the inherent characteristics of radar data, where points may not be detected in the same location for the same object in every step, perfect "identical" correspondences may not be established during scan-to-submap matching. The noise in these correspondences translates into errors in the registration results, leading to the appearance of perturbations in the trajectory.
In contrast, the trajectory of the proposed method (in green) progresses smoothly without perturbations. Additionally, examining the results from all sequences, the trajectory closely follows and is most similar to the ground truth path. This indicates that the proposed method not only leverages the smooth trajectory characteristics of the ME-LSQ but also reduces the potential for correspondence errors through accumulated scans-to-submap matching, thereby minimizing registration errors.
However, an important consideration when using our algorithm is to accurately determine the calibration between the radar and the rear axle of the vehicle. During accumulated scans-to-submap matching, the accumulated scans rely solely on the initial transformation information derived from ego-motion estimation. Without precise calibration information, the accumulated scans may become blurred, leading to degraded performance.

Fig. 5. The visualization results of the trajectories for the test sequences (03, 04, 09, 17, 19, 22, 24) from the VoD dataset. The blue line shows the ground truth trajectory. The yellow line represents KISS-ICP, the red line shows ME-LSQ results, the purple line shows the proposed algorithm without accumulated scans-to-submap matching, and the green line represents the proposed method.

D. Computation time

The experiments were carried out on a system equipped with a 13th Gen Intel® Core™ i7-13700K CPU operating at 3.40 GHz and featuring 16 cores. When measuring the computation time of each proposed module, the ME-LSQ took 0.11 ms, local RCS feature extraction and RCS normalization together took 0.05 ms, and the RCSW-ASSM took 0.80 ms, resulting in a total computation time of 0.97 ms. Without applying local RCS feature extraction, the RCS normalization alone reduced that step to approximately 0.02 ms. However, the registration stage then utilized more points, increasing the total computation time to 1.23 ms. This demonstrates the effectiveness of the local RCS feature extraction module in reducing the overall computation time.

E. Ablation studies

To evaluate the effectiveness of the modules proposed in this paper, an ablation study was conducted using the View-of-Delft dataset. As shown in Table III, ME-LSQ refers to the results of performing only the ego-motion estimation proposed in Section III, local RCSF refers to the polar-grid-based local RCS feature extraction proposed in Section IV, and finally, RCSW-ASSM refers to the RCS-weighted accumulated scans-to-submap matching method proposed in Section V-B.
TABLE III
ABLATION STUDY RESULTS. THE CHECK MARK INDICATES THE APPLICATION OF EACH MODULE. THE RESULTS SHOW RPE FOR BOTH TRANSLATION AND ROTATION WHEN EACH MODULE IS APPLIED.

| ME-LSQ | Local RCSF | RCSW-ASSM | $t_{rel}$ | $r_{rel}$ |
| :---: | :---: | :---: | :---: | :---: |
| KISS-ICP (baseline) | | | 0.16 | 0.70 |
| ✓ | | | 0.10 | 0.64 |
| ✓ | ✓ | | 0.10 | 0.66 |
| ✓ | | ✓ | 0.10 | 0.47 |
| ✓ | ✓ | ✓ | 0.09 | 0.46 |
The results indicate that by applying the proposed ego-motion estimation method, ME-LSQ reduces the existing $t_{rel}$ value from 0.16 to 0.10, which translates to an error reduction of approximately 6 cm per meter traveled. This demonstrates that ME-LSQ is effective in improving translation estimation performance. Additionally, the application of RCS-weighted accumulated scans-to-submap matching significantly reduces the existing $r_{rel}$ value from 0.64 to 0.47, showing its effectiveness in enhancing rotation estimation performance. However, in the case of local RCS feature extraction, if accumulated scans-to-submap matching is not applied, the sparse radar points become even sparser, leading to a slight decline in rotation estimation performance. Conversely, when the proposed accumulated scans-to-submap matching is applied, it effectively extracts meaningful points from the dense radar point cloud, improving overall performance.

VII. CONCLUSIONS

This paper proposes a robust odometry system using sparse and noisy 4D imaging radar point clouds. We introduce three novel techniques by actively leveraging the unique characteristics of 4D imaging radar, namely Doppler and radar cross-section (RCS) information. First, we present the ME-LSQ method for ego-motion estimation utilizing Doppler information. This approach allows for rapid convergence and improved accuracy by using previous estimates as initial values for subsequent steps. Second, we identify significant points in the noisy and sparse radar point cloud through local RCS feature extraction without losing spatial information. Lastly, we propose an RCSW-ASSM (RCS-weighted accumulated scans-to-submap matching) method to develop a robust odometry system. Our approach demonstrated state-of-the-art performance in relative translation and strong performance in relative rotation on the VoD dataset, which provides odometry information for 4D imaging radar. For future work, we aim to model the spatial uncertainty of the radar to further enhance the robustness of the radar odometry system.

REFERENCES

[1] J. Zhang and S. Singh, "LOAM: Lidar odometry and mapping in real-time," in Robotics: Science and Systems, vol. 2, no. 9, Berkeley, CA, 2014, pp. 1-9.
[2] T. Shan and B. Englot, "LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 4758-4765.
[3] W. Xu, Y. Cai, D. He, J. Lin, and F. Zhang, "FAST-LIO2: Fast direct lidar-inertial odometry," IEEE Transactions on Robotics, vol. 38, no. 4, pp. 2053-2073, 2022.
[4] Y. Pan, P. Xiao, Y. He, Z. Shao, and Z. Li, "MULLS: Versatile LiDAR SLAM via multi-metric linear least square," in IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021.
[5] I. Vizzo, T. Guadagnino, B. Mersch, L. Wiesmann, J. Behley, and C. Stachniss, "KISS-ICP: In defense of point-to-point ICP - simple, accurate, and robust registration if done the right way," IEEE Robotics and Automation Letters, vol. 8, no. 2, pp. 1029-1036, 2023.
[6] S. Rusinkiewicz and M. Levoy, "Efficient variants of the ICP algorithm," in Proceedings Third International Conference on 3-D Digital Imaging and Modeling. IEEE, 2001, pp. 145-152.
[7] P. J. Besl and N. D. McKay, "Method for registration of 3-D shapes," in Sensor Fusion IV: Control Paradigms and Data Structures, vol. 1611. SPIE, 1992, pp. 586-606.
[8] K.-L. Low, "Linear least-squares optimization for point-to-plane ICP surface registration," Chapel Hill, University of North Carolina, vol. 4, no. 10, pp. 1-3, 2004.
[9] A. Segal, D. Haehnel, and S. Thrun, "Generalized-ICP," in Robotics: Science and Systems, vol. 2, no. 4, Seattle, WA, 2009, p. 435.
[10] P. Biber and W. Straßer, "The normal distributions transform: A new approach to laser scan matching," in Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), vol. 3. IEEE, 2003, pp. 2743-2748.
[11] W. Xu and F. Zhang, "FAST-LIO: A fast, robust lidar-inertial odometry package by tightly-coupled iterated Kalman filter," IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 3317-3324, 2021.
[12] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus, "LIO-SAM: Tightly-coupled lidar inertial odometry via smoothing and mapping," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020, pp. 5135-5142.
[13] D. Kellner, M. Barjenbruch, J. Klappstein, J. Dickmann, and K. Dietmayer, "Instantaneous ego-motion estimation using doppler radar," in 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013). IEEE, 2013, pp. 869-874.
[14] C. Doer and G. F. Trommer, "An EKF based approach to radar inertial odometry," in 2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). IEEE, 2020, pp. 152-159.
[15] A. Galeote-Luque, V. Kubelka, M. Magnusson, J.-R. Ruiz-Sarmiento, and J. Gonzalez-Jimenez, "Doppler-only single-scan 3D vehicle odometry," 2023. [Online]. Available: http://arxiv.org/abs/2310.04113
[16] D. Kellner, M. Barjenbruch, J. Klappstein, J. Dickmann, and K. Dietmayer, "Instantaneous ego-motion estimation using multiple doppler radars," in 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014, pp. 1592-1597.
[17] C. D. Monaco and S. N. Brennan, "RADARODO: Ego-motion estimation from doppler and spatial data in RADAR images," IEEE Transactions on Intelligent Vehicles, vol. 5, no. 3, pp. 475-484, 2020.
[18] Y. S. Park, Y.-S. Shin, and A. Kim, "PhaRaO: Direct radar odometry using phase correlation," in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 2617-2623.
[19] E. B. Quist, P. C. Niedfeldt, and R. W. Beard, "Radar odometry with recursive-RANSAC," IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 4, pp. 1618-1630, 2016.
[20] H. Lim, K. Han, G. Shin, G. Kim, S. Hong, and H. Myung, "ORORA: Outlier-robust radar odometry," arXiv preprint arXiv:2303.01876, 2023.
[21] Y. Zhuang, B. Wang, J. Huai, and M. Li, "4D iRIOM: 4D imaging radar inertial odometry and mapping," IEEE Robotics and Automation Letters, 2023.
[22] J. Zhang, H. Zhuge, Z. Wu, G. Peng, M. Wen, Y. Liu, and D. Wang, "4DRadarSLAM: A 4D imaging radar SLAM system for large-scale environments based on pose graph optimization," in 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023, pp. 8333-8340.
[23] S. Lu, G. Zhuo, L. Xiong, X. Zhu, L. Zheng, Z. He, M. Zhou, X. Lu, and J. Bai, "Efficient deep-learning 4D automotive radar odometry method," IEEE Transactions on Intelligent Vehicles, 2023.
[24] G. Zhuo, S. Lu, L. Xiong, H. Zhou, L. Zheng, and M. Zhou, "4DRVO-Net: Deep 4D radar-visual odometry using multi-modal and multi-scale adaptive fusion," IEEE Transactions on Intelligent Vehicles, 2023.
[25] A. Palffy, E. Pool, S. Baratam, J. F. Kooij, and D. M. Gavrila, "Multi-class road user detection with 3+1D radar in the View-of-Delft dataset," IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 4961-4968, 2022.
[26] K. Uchiyama, T. Motomura, and A. Kajiwara, "A study on self-vehicle location estimation employing 79 GHz UWB radar," in 2018 IEEE Sensors Applications Symposium (SAS). IEEE, 2018, pp. 1-5.
[27] X. Chen, J. L. Barron, R. E. Mercer, and P. Joe, "3D regularized velocity from 3D doppler radial velocity," in Proceedings 2001 International Conference on Image Processing, vol. 3. IEEE, 2001, pp. 664-667.
[28] A. Galeote-Luque, V. Kubelka, M. Magnusson, J.-R. Ruiz-Sarmiento, and J. Gonzalez-Jimenez, "Doppler-only single-scan 3D vehicle odometry," 2023. [Online]. Available: http://arxiv.org/abs/2310.04113
[29] M. Nissov, N. Khedekar, and K. Alexis, "Degradation resilient LiDAR-radar-inertial odometry."
[30] H. V. Do, Y. H. Kim, J. H. Lee, M. H. Lee, and J. W. Song, "DeRO: Dead reckoning based on radar odometry with accelerometers aided for robot localization."
[31] J. W. Tukey et al., Exploratory Data Analysis. Springer, 1977, vol. 2.
[32] G. Galati, G. Pavan, and C. Wasserzier, "Characterization of backscattering and multipath in a suburban area after the calibration of an X-band commercial radar," Sensors, vol. 20, no. 2, p. 463, 2020.
[33] K. Luan, C. Shi, N. Wang, Y. Cheng, H. Lu, and X. Chen, "Diffusion-based point cloud super-resolution for mmWave radar data," arXiv preprint arXiv:2404.06012, 2024.
[34] S. Geman and D. E. McClure, "Statistical methods for tomographic image reconstruction," Bulletin of the International Statistical Institute, 1987.
[35] F. Ding, Z. Pan, Y. Deng, J. Deng, and C. X. Lu, "Self-supervised scene flow estimation with 4-D automotive radar," IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 8233-8240, 2022.
[36] F. Ding, A. Palffy, D. M. Gavrila, and C. X. Lu, "Hidden gems: 4D radar scene flow learning using cross-modal supervision," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 9340-9349.
[37] M. Grupp, "evo: Python package for the evaluation of odometry and SLAM," https://github.com/MichaelGrupp/evo, 2017.

Soyeong Kim received B.S. and M.S. degrees in smart vehicle engineering from Konkuk University, Seoul, South Korea, in 2021 and 2022, respectively. She is currently working toward a Ph.D. with the Automotive Intelligence Laboratory at Hanyang University. Her main fields of interest are applications for state estimation, multiple object tracking, sensor fusion, and sensor processing for autonomous vehicles.

Jiwon Seok received a B.S. degree in mechanical engineering from Konkuk University, Seoul, South Korea, in 2020. He is currently working toward an integrated Master's and Ph.D. course with the Automotive Intelligence Laboratory at Hanyang University. His research interests include localization, mapping, and sensor fusion applications for autonomous vehicles.

Jaehwan Lee received a B.S. degree in smart vehicle engineering from Konkuk University, Seoul, South Korea, in 2023. He is currently pursuing an M.S. degree with the Automotive Intelligence Laboratory at Hanyang University. His research interests include object tracking, localization, and autonomous vehicles.

Kichun Jo (S'10 - M'14) received a B.S. degree in mechanical engineering and a Ph.D. degree in automotive engineering from Hanyang University, Seoul, South Korea, in 2008 and 2014, respectively. From 2014 to 2015, he was with the ACE Lab, Department of Automotive Engineering, Hanyang University, researching system design and implementation of autonomous cars. From 2015 to 2018, he was with Valeo Driving Assistance Research, Bobigny, France, working on highly automated driving. He is currently an Associate Professor at the Department of Automotive Engineering at Hanyang University, Seoul. His current research interests include localization and mapping, object tracking, information fusion, vehicle state estimation, behavior planning, and vehicle motion control for highly automated vehicles.

1. This work was supported by the Technology Innovation Program (Development on Automated Driving with Perceptual Prediction Based on TCar/Vehicle Parts to Intelligent Control/System Integration for Assessment) funded by the Ministry of Trade, Industry and Energy (MOTIE), South Korea, under Grant 20018101; by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean Government Ministry of Science and ICT (MSIT) (No. RS-2023-00235293, Development of autonomous driving big data processing, management, search, and sharing interface technology to provide autonomous driving data according to the purpose of usage); and by the National Research Foundation of Korea (NRF) grant funded by the Korean Government Ministry of Science and ICT (MSIT) under Grant No. RS-2023-00209252.

$^{1}$ Soyeong Kim, Jiwon Seok, Jaehwan Lee, and Kichun Jo are with the Department of Automotive Engineering, Hanyang University, 222, Wangsimni-ro, Seongdong-gu, Seoul, Republic of Korea. kichunjo@hanyang.ac.kr