
NDT-6D for color registration in agri-robotic applications

Himanshu Gupta | Achim J. Lilienthal | Henrik Andreasson |
Polina Kurtser

Centre for Applied Autonomous Sensor Systems, Institutionen för naturvetenskap & teknik, Örebro University, Örebro, Sweden
Perception for Intelligent Systems, Technical University of Munich, Munich, Germany
Department of Radiation Science, Radiation Physics, Umeå University, Umeå, Sweden

Correspondence

Himanshu Gupta, Centre for Applied Autonomous Sensor Systems, Institutionen för naturvetenskap & teknik, Örebro University, Fakultetsgatan 1, Örebro 70182, Sweden. Email: Himanshu.Gupta@oru.se
Funding information
Horizon 2020 Framework Programme

Abstract

Registration of point cloud data containing both depth and color information is critical for a variety of applications, including in-field robotic plant manipulation, crop growth modeling, and autonomous navigation. However, current state-of-the-art registration methods often fail in challenging agricultural field conditions due to factors such as occlusions, plant density, and variable illumination. To address these issues, we propose the NDT-6D registration method, which is a color-based variation of the Normal Distribution Transform (NDT) registration approach for point clouds. Our method computes correspondences between pointclouds using both geometric and color information and minimizes the distance between these correspondences using only the three-dimensional (3D) geometric dimensions. We evaluate the method using the GRAPES3D data set collected with a commercial-grade RGB-D sensor mounted on a mobile platform in a vineyard. Results show that registration methods that rely only on depth information fail to provide quality registration for the tested data set. The proposed color-based variation outperforms state-of-the-art methods with a root mean square error (RMSE) of for NDT-6D compared with 1.1-2.3 cm for other color-information-based methods and 1.2-13.7 for non-color-information-based methods. The proposed method is shown to be robust against noise using the TUM RGBD data set by artificially adding noise present in an outdoor scenario. The relative pose error (RPE) increased for our method compared to an increase of for the best-performing registration method. The obtained average accuracy suggests that the NDT-6D registration method can be used for in-field precision agriculture applications, for example, crop detection, size-based maturity estimation, and growth modeling.

KEYWORDS

agricultural robotics, color pointcloud, in-field sensing, machine perception, RGB-D registration, stereo IR, vineyard

1 | INTRODUCTION

Automation in the agricultural domain is a fast-growing application of outdoor robotics, mostly due to the lack of human labor and the resulting increase in the cost of manual field operations such as harvesting, planting, pruning, and trimming (Oliveira et al., 2021). With recent advancements in the fields of visual detection, three-dimensional (3D) reconstruction, and positioning using analytical and artificial intelligence-based methods, these labor-intensive tasks are being automated using robots (Bac et al., 2014; Bakker et al., 2006; Bawden et al., 2017). These systems have the potential to reduce costs and increase field productivity. They employ machine vision algorithms (Kamilaris & Prenafeta-Boldú, 2018; Tian et al., 2020) for the detection and positioning of target crops, relying mainly on color images.
For the calculation of the target crops' spatial position or morphological aspects, depth information is often required (Arad et al., 2020; Kurtser, Ringdahl, Rotstein, Berenstein, et al., 2020; Vit & Shani, 2018). For this purpose, the RGB-D camera is well suited, as these sensors provide colored 2D images and 3D point clouds (Kurtser, Ringdahl, Rotstein, Berenstein, et al., 2020; Ringdahl et al., 2019) enclosed in a single rigid packaging capable of sustaining the harsh environmental conditions often encountered in these applications. The colored images can be used for color-based detection, plant growth monitoring, and ripeness estimation. The 3D point clouds can be used to estimate the physical crop size, shape, and target localization. Commercial-grade RGB-D sensors operating in field conditions have only recently become available in the market (Ringdahl et al., 2019; Vit & Shani, 2018). Until recently, point clouds were employed almost exclusively for the navigation of the robot in the field using 2D and 3D LiDARs, an application often not requiring color data (Biber et al., 2012; Malavazi et al., 2018) or high pointcloud density.
With the penetration of the RGB-D sensors into the agricultural robotics domain, algorithms for in-field extraction of crop size, shape, ripeness, and position were developed. These algorithms rely mainly on previous work in indoor conditions where detailed 3D plant models can be extracted using hand-held 3D scanners (Schunck et al., 2021), acquiring data from multiple viewpoints. Despite previous work showing that employing multiple viewpoints can significantly improve precision (i.e., Harel et al., 2016; Kurtser & Edan, 2018b), most outdoor algorithms rely on pointclouds acquired from a single location. This can be attributed to a working assumption often voiced in the field that state-of-the-art registration algorithms generally fail to provide accurate registration results for the noisy outdoor sensory data acquired from RGB-D cameras and the dense and repetitive soft dynamic foliage present in the agricultural domain. While it was claimed in our previous work (Kurtser, Ringdahl, Rotstein, & Andreasson, 2020; Kurtser, Ringdahl, Rotstein, Berenstein, et al., 2020) that single frame detection can be sufficient for some applications, it is apparent that higher precision can be obtained from the registration of data from several viewpoints before analysis. For example, algorithms relying on a single frame RGB-D are often more prone to additional error due to a significant number of overflowing points near the boundary of the objects. This problem can be solved by combining consecutive point clouds using registration algorithms and 3D reconstruction of the registered point cloud. Similarly, single frame RGB-D only provides one object surface, the one facing the camera, making correct estimation of volume and location biased.
随着 RGB-D 传感器渗透到农业机器人领域,开发了用于田间作物尺寸、形状、成熟度和位置提取的算法。这些算法主要依赖于之前在室内条件下的工作,在那里可以使用手持 3D 扫描仪从多个视角获取详细的 3D 植物模型(Schunck 等人,2021 年)。尽管之前的工作表明,采用多个视点可以显著提高精度(即 Harel 等人,2016 年;Kurtser & Edan,2018b),但大多数室外算法依赖于从单个位置获取的点云。这可以归因于该领域普遍存在的一个工作假设,即最先进的配准算法通常无法为 RGB-D 摄像机获取的噪声室外传感数据以及农业领域存在的密集和重复的软动态树叶提供准确的配准结果。虽然我们之前的工作(Kurtser、Ringdahl、Rotstein 和 Andreasson,2020 年;Kurtser、Ringdahl、Rotstein、Berenstein 等人,2020 年)声称单帧检测可能足够满足某些应用,但很明显,在分析之前对来自多个视点的数据进行配准可以获得更高的精度。例如,依赖单个帧 RGB-D 的算法通常更容易受到对象边界附近大量溢出点导致的额外误差。通过使用配准算法和 3D 重建注册点云可以解决这个问题。同样,单个帧 RGB-D 只提供面向相机的一个物体表面,这会导致体积和位置估计存在偏差。
Beyond very close range applications such as plant morphological modeling and localization, registration of multiple pointclouds originating from consecutive frames acquired in field conditions from commercial-grade RGB-D cameras can also potentially replace or supplement LiDARs in close-range navigation. Enriching the maps that are generated for navigation purposes from aggregated consecutive LiDAR pointclouds with dense close-range information can enable a variety of applications, such as field monitoring and the acquisition of measures such as yield.
Since pointcloud registration in field conditions is a well-researched topic in many domains, in this paper we aim to investigate the reasons for the failure of state-of-the-art registration algorithms given the field conditions in which agricultural robots are to operate. We do this by comparing a range of commonly used registration algorithms on a data set acquired in commercial vineyard conditions (the GRAPES3D data set [Kurtser, Ringdahl, Rotstein, Berenstein, et al., 2020]). Once the weaknesses are identified, we propose our registration method, which is shown to be more robust in these conditions. We show our algorithm's robustness on a benchmark data set for RGB-D registration and SLAM methods, the TUM RGBD data set (Sturm et al., 2012), using the evaluation metrics of that data set.

1.1 | Contribution

Given the outlined need for RGB-D data registration in this specific setting, our contribution is as follows:
  1. We introduce a novel registration method (NDT-6D) that successfully registers the collected data and is shown to be more robust to sensory noise than state-of-the-art registration methods, together with a supplementary code release.
  2. We present evaluation results for the current state-of-the-art registration methods on prototypical agri-robotics RGB-D data collected from a mobile robot in a vineyard setting. We compare these results to the evaluation of the same algorithms on a typical indoor benchmark data set.
  3. We evaluate in detail the contribution of color cues for scan registration in the agricultural setting.
  4. We provide an evaluation methodology that focuses on measures specifically relevant to agri-robotics applications.
Code available here (last accessed Oct 2021): https://github.com/hgupta01/ndt-6d.git
The rest of the paper is structured as follows. First, we provide an overview of the use of RGB-D data in the agri-robotics domain and the challenges in data registration and multi-view analysis. Next, we provide an overview of the current methods of point cloud registration to which we compare our work, as well as some standard notations. In Section 4, we first introduce the empirical data used to evaluate the various registration algorithms, followed by a description of the suggested NDT-6D method. Finally, we present and discuss the detailed results obtained from applying the registration algorithms to the mentioned data sets in the results section, followed by a short conclusion.

2 | LITERATURE OVERVIEW

2.1 | Visual sensors in agricultural robotics

The most common sensors employed in agricultural robotics operating in field conditions are imaging cameras (Bac et al., 2014; Kamilaris & Prenafeta-Boldú, 2018). Factors such as robustness, low cost, low weight and size, and the fact that humans rely greatly on vision to perform manual crop monitoring and manipulations all contribute to the widespread use of the RGB camera in crop monitoring applications. Detection of the crop, as well as of diseases and pests in field conditions, is most often achieved using a color camera placed facing the foliage (Al-Hiary et al., 2011; Bac et al., 2014; Kamilaris & Prenafeta-Boldú, 2018; Singh & Misra, 2017). The algorithms developed are aimed at detecting abnormalities and foreign objects from imagery data. The main obstacles affecting detection performance directly on foliage are most often high occlusion rates and variable lighting conditions. Some of the solutions proposed are multi- or hyperspectral cameras (Dale et al., 2013), thermal imaging (Vadivambal & Jayas, 2011), and light-resilient adaptive algorithms (Arad et al., 2019; Zemmour et al., 2017).
Despite the clear advantages of relying on imagery data for most operations of crop monitoring, not all field operations can rely solely on color data or spectral data. Specifically, in agricultural robotics, the physical dimension and location of the detected crop can be crucial for continuous operation. The somewhat popular examples in agrirobotics include operations requiring physical manipulation of the plant and therefore requiring localization of the target in world coordinates for actions such as harvesting (Arad et al., 2020; Bac et al., 2014), weeding (Bakker et al., 2006; Bawden et al., 2017) and pruning (Botterill et al., 2017). Up till recently, most commercially available range measuring sensors, combined with RGB cameras failed to provide the necessary sensory data quality to be implemented in outdoor conditions, and therefore technical solutions such as visual servoing (e.g., for harvesting - [Arad et al., 2020; Barth et al., 2016]) or assumption of constant distance to target (e.g., for top-down weeding [Tillett et al., 2008]) are often employed. With the recent developments in commercial-grade RGB-D sensors the acquisition of acceptable quality colored point clouds is now possible in outdoor conditions as well (Ringdahl et al., 2019; Vit & Shani, 2018).

These capabilities open the door for close-sensing applications for monitoring the crop's physical size and location. Applications such as growth monitoring, maturity estimation based on physical size, and phenotypic feature extraction have so far been tested mostly in the laboratory and in postharvest controlled photo chamber conditions (Hacking et al., 2019; Kirk et al., 2020; Nandi et al., 2016). The availability of such sensors is now enabling in-field size-based phenotype acquisition (Kurtser, Ringdahl, Rotstein, Berenstein, et al., 2020; Milella et al., 2019; Vit & Shani, 2018). All of these operations require depth sensors.

2.2 | Mapping and data fusion in orchard and vineyard settings

Several recent projects in autonomous monitoring of vineyards and orchards have focused on the need to fuse and aggregate information collected from ground mobile robots in the form of a semantically enriched map. The VineScout (Fernández-Novales et al., 2021) autonomous ground vehicle, equipped with an IR sensor, was used to monitor grapevine water status; the information is aggregated into maps of the entire vineyard. More classic simultaneous localization and mapping (SLAM) algorithms were tested in vineyard conditions by the Bacchus project (Papadimitriou et al., 2022) with the aim of generating navigation maps. Wang, Tang, and Whitty (Wang et al., 2020) generated maps of flower density in apple orchards using a ground robot equipped with RGB and RGBD sensors. Despite the aggregation of the semantic data in the form of a map using geolocation extracted from the GPS unit, the authors do not register the pointclouds from the RGBD camera but perform single-frame detection.

2.3 | Multi-view and point cloud registration in the agricultural automation domain

The use of multiple-viewpoint planning for an eye-in-hand robotic configuration or drone field monitoring is a widely discussed issue in agri-robotic vision applications (Barth et al., 2016; Bulanon et al., 2009; Hemming et al., 2014; Kurtser & Edan, 2018a, 2018b; Zaenker et al., 2021, 2020). The discussion often focuses on target visibility, since high occlusion levels require multiple viewpoints to overcome the problem. Sensor viewpoint planning methods often focus on the need to plan the sensing strategy under time constraints and expected information content. These methods often do not register the point clouds but rather plan the next optimal viewpoint. Attempts to register RGB-D point clouds in agricultural settings, acquired from on-ground robots, often focus on grasping pose calculation for fruit harvesting (Guo et al., 2020; Lehnert et al., 2016) or on growth modeling (Alenya et al., 2011; Chebrolu et al., 2020).
To the best of the authors' knowledge, these applications have been tested so far only in indoor laboratory conditions and do not deal with issues of data registration under challenging illumination, occlusions, and plant density. Point cloud registration in field conditions has so far been implemented exclusively in navigation and mapping applications of mobile agri-robots, acquiring 3D point clouds using laser scanners and LiDARs, which are more resilient to outdoor illumination conditions (Gao et al., 2018; Shalal et al., 2013). Therefore, registration methods applied so far in field conditions have mostly overlooked the possible added value of color information for registration purposes.
An exception to this is the work of Dong, Roy, and Isler (Dong et al., 2020), who performed tree row mapping using registration of pointclouds acquired from an RGB-D camera. To register the pointcloud, the authors proposed a tailor-made algorithm that relies on domain knowledge in the form of semantic constraints, such as the presence of tree trunks and their expected orientation.
In our previous work, we have shown how the acquisition of colored point clouds can be used for both detection (Kurtser, Ringdahl, Rotstein, & Andreasson, 2020a) and volume estimation (Kurtser, Ringdahl, Rotstein, Berenstein, et al., 2020) of grapes in vineyard conditions with the goal of yield prediction. In both previous applications, we employed single-frame non-registered point clouds under the assumption that state-of-the-art registration algorithms generally fail to provide accurate registration results, both for the noisy outdoor sensory data acquired from RGB-D cameras and for the dense, repetitive, feature-lacking soft and dynamic foliage present in the agricultural domain.
In this paper, we aim to challenge this assumption through the evaluation of several state-of-the-art registration algorithms and propose our own registration method.

3 | POINT SET REGISTRATION

In this section, we review the state-of-the-art registration methods that are used in this work. We start by defining the registration problem mathematically and discuss the registration methods briefly.
Registration of two point clouds, a source $P_s$ and a target $P_t$, means finding the transformation matrix $T$ that aligns them. It is an iterative optimization problem in which a registration loss is minimized. The registration problem can be mathematically expressed as in the following equation:

$$T^{*}=\underset{T}{\arg\min }\; f\left(T\left(P_{s}\right), P_{t}\right), \qquad (1)$$

where $T$ is the rigid transformation matrix $[R \mid t]$, represented using the rotation matrix $R$ and translation vector $t$, and $f$ is the registration loss function, which depends on the registration algorithm used.
In this paper, we build upon the well-known and most widely used registration algorithms-iterative closest point (ICP) registration (Korn et al., 2014) and Normal Distribution Transform (NDT) registration (Stoyanov et al., 2012).

Since Besl and McKay (1992) first used the term ICP, several variations have been proposed. However, according to the review performed by Pomerleau et al. (2015), the main variations in ICP algorithms include variations in: (1) transformation functions; (2) data filters; (3) distance functions.
In the described above application in agri-robotics, the scans are not expected to scale significantly. As a result, the ICP variations presented focus only on rigid transformation functions that include translation and rotation changes only. Data filters in the case of point clouds are used to reduce noise by doing feature enhancement (e.g., calculating point normal, extracting corner or surface points) and feature reduction (e.g., point density reduction, ground removal). Besides the basic point cloud data filters, image-based data filtering methods are also used in this work, described in Section 4. In this work, we used the Euclidean distance function with point-to-point and point-to-plane distance for ICP point-2-point and ICP point-2plane registration respectively.
在上述农用机器人应用中,扫描不需要显著扩展。因此,所呈现的 ICP 变体仅关注包括平移和旋转变化的刚性变换函数。在点云的情况下,使用数据过滤器来减少噪声,通过执行特征增强(如计算点法线、提取角点或表面点)和特征减少(如点密度减少、地面去除)。除了基本的点云数据过滤器外,本文还使用了基于图像的数据过滤方法,在第 4 节有所描述。在本工作中,我们分别使用点对点和点对平面距离的欧几里得距离函数进行 ICP 点对点和 ICP 点对平面配准。
Published variations of NDT algorithms are more scarce and often conceptually do not vary significantly from the below method. Nevertheless, some variations are available (Das & Waslander, 2014; Magnusson et al., 2009; Stoyanov et al., 2012; Valencia et al., 2014). Therefore, we chose the most common ones that rely on point-todistribution and distribution-to-distribution distances.
发表的 NDT 算法变体较为稀缺,通常概念上与下述方法差异不大。尽管如此,仍有一些变体可供选择(Das & Waslander, 2014; Magnusson et al., 2009; Stoyanov et al., 2012; Valencia et al., 2014)。因此,我们选择了最常见的依赖于点到分布和分布到分布距离的变体。
Finally, some recent efforts in Deep Learning based registration methods have gained popularity by training networks for feature extraction and registration (Villena-Martinez et al., 2020). These methods appear promising in data-abundant applications but can be expected to require large amounts of data, a common bottleneck in the agricultural robotics domain (Kamilaris & Prenafeta-Boldú, 2018; Kurtser, Ringdahl, Rotstein, & Andreasson, 2020). The designed networks often rely on the same conceptual approach as ICP and NDT of searching key points using images and could be viewed as an extension to the SIFT-ICP method we evaluate.
最后,一些基于深度学习的配准方法获得了广泛的应用,这些方法通过训练网络进行特征提取和配准(Villena-Martinez 等人,2020)。这些方法在数据丰富的应用中显得很有前景,但在农业机器人领域,由于数据瓶颈的限制,可能需要大量的数据(Kamilaris & Prenafeta-Boldú, 2018; Kurtser, Ringdahl, Rotstein, & Andreasson, 2020)。所设计的网络通常采用与 ICP 和 NDT 相同的概念方法,使用图像搜索关键点,可视为我们评估的 SIFT-ICP 方法的扩展。

3.1 | ICP registration

The ICP registration loss function can be defined as the sum of the squared distance between the entities in the source cloud to the corresponding entity in the target point cloud . Here, the entity could be a point, a plane, or a shape, and the corresponding entity is usually the nearest neighbor of the transformed entity or point in the target point cloud. In work by Tavares et al. (2020), a detailed description of the ICP registration loss functions is presented. The generalized ICP loss function as per Tavares et al. (2020) can be defined as the sum of the distance between the matching features in point clouds and can be written as in Equation 2.
点到点的 ICP 配准损失函数可以定义为源点云中实体到对应目标点云中实体的平方距离之和。这里实体可以是点、平面或形状,对应实体通常是转换后的实体或点在目标点云中的最近邻点。Tavares 等人(2020)在他们的工作中详细描述了 ICP 配准损失函数。Tavares 等人(2020)提出的广义 ICP 损失函数可以定义为两个点云中匹配特征之间距离的和,如等式 2 所示。
where is the distance function, is the optional weight for the entity pair and defines the corresponding entity of source point
其中 是距离函数, 是实体对的可选权重, 定义了源点的相应实体

cloud in target point cloud . When the entities are points, the loss function is the sum of Euclidean distance between the corresponding points (Besl & McKay, 1992), and correspondence is the nearest neighbor based on Euclidean distance. In Chen and Medioni (1992), point-to-plane correspondence is established, where the loss function is defined such that the distance between the point in the source point cloud is decreased along the normal of the corresponding plane of points in the target point cloud.
目标点云中的云 。当实体是点时,损失函数是相应点之间欧几里德距离的总和(Besl & McKay, 1992),对应关系是基于欧几里德距离的最近邻。在 Chen 和 Medioni (1992)中,建立了点到平面的对应关系,损失函数的定义使得源点云中的点沿目标点云中相应平面的法线方向减小距离。
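As an illustration of the two ICP variants just described, the following minimal sketch runs point-to-point and point-to-plane ICP with the Open3D library. The file names, correspondence threshold, and normal-estimation radius are placeholder assumptions, not settings taken from this paper.

```python
import numpy as np
import open3d as o3d

# Hypothetical input clouds; in this paper they come from RGB-D frames.
source = o3d.io.read_point_cloud("source.pcd")
target = o3d.io.read_point_cloud("target.pcd")

# Point-to-plane ICP needs normals on the target cloud.
target.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

threshold = 0.05   # assumed maximum correspondence distance (m)
init = np.eye(4)   # no initial guess, as in the evaluation below

# ICP point-2-point: minimizes the Euclidean distance between matched points.
p2p = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# ICP point-2-plane: minimizes the distance along the target plane normals.
p2l = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

print("point-to-point:\n", p2p.transformation)
print("point-to-plane:\n", p2l.transformation)
```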

3.2 | NDT registration

NDT registration is a point set registration algorithm that uses NDT maps. The NDT maps are constructed by dividing the point cloud into grids called NDT cells. For each NDT cell, a normal distribution $\mathcal{N}(\mu, \Sigma)$ is calculated using the points that fall in the cell. There are two types of NDT registration algorithms, point-to-distribution (P2D) and distribution-to-distribution (D2D). In NDT registration, the cost function is minimized iteratively with respect to the rigid transformation matrix $T$. The NDT P2D registration cost function between a point cloud $P_s$ and an NDT map $M_t$ (where $M_t$ is the NDT map of point cloud $P_t$) is defined as the negative likelihood of the points in $P_s$ belonging to the NDT cells in map $M_t$. The cost function for NDT P2D registration is given in Equation (3):

$$f_{P2D}(T)=-\sum_{i} d_{1} \exp \left(-\frac{d_{2}}{2}\left(T\left(x_{i}\right)-\mu_{k}\right)^{T} \Sigma_{k}^{-1}\left(T\left(x_{i}\right)-\mu_{k}\right)\right), \qquad (3)$$

where $d_1$ and $d_2$ are positive regularization factors mentioned by Magnusson et al. (2007), $i$ iterates over the points $x_i$ in point cloud $P_s$, and $\mu_k$, $\Sigma_k$ are the parameters of the NDT cells of map $M_t$. To make the cost calculation computationally less expensive, the parameters ($\mu_k$ and $\Sigma_k$) of the NDT cell closest to the transformed point are used.
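The construction of an NDT map described above can be sketched in a few lines of NumPy; the cell size and the minimum number of points per cell below are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

def build_ndt_map(points, cell_size=0.1, min_points=5):
    """Divide a point cloud into cubic NDT cells and fit a normal distribution
    (mean, covariance) to the points falling into each cell."""
    cells = {}
    keys = np.floor(points / cell_size).astype(int)
    for key, p in zip(map(tuple, keys), points):
        cells.setdefault(key, []).append(p)

    ndt_map = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < min_points:   # too few points for a stable covariance
            continue
        ndt_map[key] = (pts.mean(axis=0), np.cov(pts.T))   # (mu, Sigma)
    return ndt_map

# Toy usage with random points standing in for a real scan.
ndt_map = build_ndt_map(np.random.rand(2000, 3), cell_size=0.1)
print(len(ndt_map), "NDT cells")
```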
The NDT D2D registration cost function is defined between two NDT maps (a source map and a target map) and represents the dissimilarity between the maps. There are two types of NDT D2D registration cost functions: the first, Equation (4), is defined as the sum of the distances between the NDT cells of the source and target maps (Andreasson & Stoyanov, 2012); the second, Equation (5), is based on fuzzy logic (Liao et al., 2022).

3.3 | TEASER++

Both ICP and NDT-based registration algorithms are powerful registration tools that have been employed actively in the current literature, but in recent times more robust and fast registration algorithms have been introduced. A representative example of this is the TEASER++ algorithm (Yang et al., 2020). This algorithm is specifically designed to provide robust pointcloud registration in the presence of large amounts of outlier correspondences, a condition expected in the outdoor agricultural data set. The registration algorithm utilizes the correspondences (Fast Point Feature Histograms (FPFH) point features used for the color point cloud in the paper and also used in this work) between points and uses a graph-based method of finding the maximum clique to reject most of the outliers. In addition, the registration cost function is decoupled for translation, rotation, and scale estimation and based on Truncated Least Squares (TLS) cost, which is robust to a large fraction of outlier correspondences.
The algorithm is also supplemented with easily implementable code, which makes it a great candidate for comparison. Given the complexity of the algorithm and space considerations in this article, we refer the interested reader to the original paper (Yang et al., 2020).

3.4 | Introduction of color information

The ICP and NDT methods described above do not use color information, and the cost function is solely based on the geometric information of the points. With the introduction of RGB-D data, the cost functions can be adjusted either by using the color information of every single point or by using image features to find the correspondences. Korn et al. (2014) find the correspondences between the point clouds using colored points and register them using ICP registration. In Huhle et al. (2008), colored NDT cells are used for registration; these are defined as Gaussian mixture models (GMM) in color space with corresponding weighted spatial means and covariances. The color NDT registration is derived from the NDT P2D registration cost function, and the cost is calculated as the weighted negative likelihood of the spatial point in the GMM of the NDT cells, where the weights are the likelihoods in color space. Andreasson and Stoyanov (2012) used SURF image features to find the correspondences between points in two RGB-D frames, and registration is done using the NDT D2D registration method. Our method is derived from the work of Korn et al. (2014) and Stoyanov et al. (2012) by introducing a novel approach that utilizes the color and geometric information for finding the correspondences between the colored NDT cells. The NDT D2D registration cost function is used for minimizing the distance between correspondences.
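As a readily available illustration of registration driven by per-point color, the sketch below uses the colored ICP implementation in recent versions of Open3D, whose objective combines a point-to-plane geometric term with a photometric term. This is an off-the-shelf stand-in for the idea of color-assisted registration, not the ICP-6D or NDT-6D implementations evaluated in this paper, and the voxel size and thresholds are assumed values.

```python
import numpy as np
import open3d as o3d

# Hypothetical colored point cloud pair from two RGB-D frames.
source = o3d.io.read_point_cloud("source_rgbd.ply")
target = o3d.io.read_point_cloud("target_rgbd.ply")

voxel = 0.02   # assumed voxel size (m)
src = source.voxel_down_sample(voxel)
tgt = target.voxel_down_sample(voxel)
for pc in (src, tgt):
    pc.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))

# Colored ICP: per-point colors enter the photometric part of the objective.
result = o3d.pipelines.registration.registration_colored_icp(
    src, tgt, voxel * 1.5, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationForColoredICP())
print(result.transformation)
```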
A schematic representation of the inclusion of color information in point cloud registration is presented in Figure 1. The pipeline from RGB-D images to the registered point cloud includes getting the point cloud from the RGB-D images, filtering the point cloud, getting the correspondences based on either distance or image features, and then performing the registration.

FIGURE 1 Flowchart for the scene registration using RGB-D images. Registration using 3D points and/or color information, and registration using image features and 3D points.

4 | METHODS AND MATERIALS

4.1 | Evaluation data sets

To evaluate the registration methods outlined above we use 2 data sets-GRAPES3D data set (Kurtser & Edan, 2018b), and TUM RGBD data set (Sturm et al., 2012). The data sets were chosen given the following criteria. (1) The data set must be acquired by a commercial grade RGB-D camera; (2) It must include a view towards the target with significant variability in distance to the targets and other objects in the scene; (3) Consecutive frames must be acquired with significant overlap (i.e., acquisition frequency must be reasonable to the speed of motion of the camera); (4) The pointcloud and color images should be acquired in a feature-rich environment. The criteria were chosen to adhere to the possible application of the proposed algorithms - pointcloud registration for better crop monitoring in orchards, vineyards, and greenhouse conditions.
The proposed data sets were both collected using RGB-D cameras to acquire colored pointclouds. The GRAPES3D data set represents the environmental vineyard and orchard conditions in which the algorithms are intended to be used. The TUM data set provides valuable benchmark data for deeper insights into algorithm performance in noise-free indoor conditions. The high-quality data of the TUM data set additionally make it possible to compare registration algorithms subject to artificial noise, to gain insights into the stability and robustness of the registration algorithms.

Unfortunately, the number of RGB-D data sets publicly available for benchmarking is limited at this point, and the available sets do not adhere to the criteria mentioned above. Well-established benchmarks such as SugarBeats 2016 (Chebrolu et al., 2017) and Rosario (Pire et al., 2019) were acquired for the use case of aerial crop monitoring in open fields. While these data sets include a detailed ground truth using GPS-RTK, the acquisition protocol differs significantly. The pointclouds are typically acquired from a top-down viewpoint, which occludes illumination disturbances and provides a rather constant distance to the target in field conditions. Additionally, the acquisition is often performed at a low frequency with limited overlap between frames. While the TUM data set was not acquired in agricultural conditions, the acquisition protocol used adheres to the criteria mentioned above and provides additional insights. In Appendix B we provide additional results of applying the algorithms to the SugarBeats 2016 data set (Chebrolu et al., 2017) and further explain the limitations in translating the suggested methods to aerial crop monitoring applications.

4.1.1 | GRAPES3D

FIGURE 2 Outlier points example from the GRAPES3D data set and pointcloud example for TUM RGBD. (a) Overflowing points at the edge of an object, (b) wavy surface points for a flat surface, and (c) noise-free pointcloud example from the TUM data set.

The RGB-D point clouds were collected using an Intel Realsense D435 camera mounted on a Greenhouse Spraying Robot (GSR) platform in two different conditions, a controlled outdoor setup with potted grape plants and a commercial vineyard setup. The Realsense D435 has a field of view of and an active stereo depth resolution of . The data were collected by teleoperating the GSR robot in a straight line with the camera mounted in two different configurations: (1) facing the growing row at ; and (2) facing the growing row at a horizontal angle with respect to the moving direction. The data set contains bag files with color images, depth images, and camera info. The color and depth images are extracted from the bag files and aligned using the convert tool of the librealsense library. These aligned RGB-D images are converted to a 3D colored point cloud using the following equation:

$$z=\frac{D(u, v)}{s}, \qquad x=\frac{\left(u-c_{x}\right) z}{f_{x}}, \qquad y=\frac{\left(v-c_{y}\right) z}{f_{y}}, \qquad (6)$$

where $(x, y, z)$ is the 3D point in the camera frame corresponding to the pixel coordinates $(u, v)$, $D(u, v)$ is the depth value at the pixel coordinates in the depth image, $s$ is the scale factor, $c_x$ and $c_y$ are the pixel center of the camera sensor, and $f_x$ and $f_y$ are the focal lengths of the camera.
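A minimal back-projection routine implementing Equation (6) is sketched below; the intrinsics and depth scale in the toy example are placeholders, not the calibrated D435 parameters.

```python
import numpy as np

def rgbd_to_pointcloud(depth, color, fx, fy, cx, cy, scale):
    """Back-project an aligned depth/color image pair into a colored 3D point
    cloud following Equation (6): z = D(u, v)/s, x = (u - cx) z / fx,
    y = (v - cy) z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) / scale
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    valid = z > 0                     # drop pixels with no depth reading
    points = np.stack((x[valid], y[valid], z[valid]), axis=-1)
    colors = color[valid] / 255.0     # per-point RGB values in [0, 1]
    return points, colors

# Toy example with placeholder intrinsics and a constant 1 m depth image.
depth = np.full((480, 640), 1000, dtype=np.uint16)
color = np.zeros((480, 640, 3), dtype=np.uint8)
pts, cols = rgbd_to_pointcloud(depth, color,
                               fx=600.0, fy=600.0, cx=320.0, cy=240.0, scale=1000.0)
print(pts.shape, cols.shape)
```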

4.1.2 | TUM data sets

The sequences of the TUM RGBD data set used in this work were collected using a Microsoft Kinect sensor in indoor scenarios like offices and rooms. A high-accuracy motion-capture system with eight high-speed tracking cameras was used to collect the ground-truth trajectory. The data set includes scans with significantly less noise than expected in scans collected in outdoor agricultural conditions, as seen in Figure 2. As a result, the evaluation using the TUM data set aims to provide insights into our method's robustness to noise. Specifically, we evaluate the following scenarios-(1) noisy blurred images and (2) sparse pointclouds. We used "freiburg1_desk," "freiburg1_room," and "freiburg1_xyz" data sequences as these are long sequences with features present in each scan for registration methods and SIFT feature matching.
Additionally, due to the availability of ground-truth information in the TUM data sets, we are able to provide registration error measures compared to ground truth as described in Section 4.4.

4.2 | Preprocessing

In the GRAPES3D data sets, the point cloud obtained from the RGB-D image contains many outlier points which need to be removed. The outlier points in a point cloud are points that do not belong to the surface of any object and occur due to noise in the sensor or ambient noise/parameters, as seen in Figure 2. Some common reasons for noise/outlier points in a point cloud generated using an RGB-D camera are different viewing angles, light intensities, different reflection properties of the objects, and vibration or jerk in the camera position. Since the RGB-D camera is calibrated for near objects, we observed a reduction in depth accuracy with increased object distance. Therefore the point cloud must first be filtered based on the distance from the camera origin. Other outlier points, like overflowing points near the edges, wavy surfaces, or points due to sensor noise, can be removed (not completely) using analytical methods like radius outlier removal or statistical outlier rejection. In this work, the radius outlier removal method is used to filter the outliers. According to this method, the points which satisfy the condition in Equation (7) are filtered out of the point cloud,

$$N_{r}(p)<n_{\min }, \qquad (7)$$

where $N_r(p)$ is the neighborhood function that returns the number of points within the radius $r$ of point $p$, and $n_{\min}$ is the threshold for the minimum number of points. The different stages of the filtered point cloud are shown in Figure 3.
FIGURE 3 Filtered point cloud at different filtration stages. From left to right (i) RGB-D image, (ii) unfiltered point cloud, (iii) filtered point cloud based on distance filter, and (iv) filtered point cloud based on radius outlier filter and voxel grid downsampling.
From the aligned RGB-D images, the point cloud is calculated using Equation (6) with the scale factor and the intrinsic camera parameters. The point clouds are filtered using the same criteria for each registration algorithm: (1) distance-based filter: points at a distance of more than are rejected; (2) radius-based filter: a point is rejected if the number of neighboring points within a radius of is less than 20; (3) voxel grid filter: if downsampling is required (in the case of NDT P2D registration), a grid size of is used. The same preprocessed clouds are used for all compared methods.
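The three filtering steps can be reproduced with Open3D roughly as follows. The maximum range, radius, and voxel size are placeholders where the exact thresholds are not restated here; the minimum of 20 neighbors follows the text.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("frame.ply")   # hypothetical cloud from one RGB-D frame

# (1) Distance-based filter: keep points closer than max_range to the camera origin.
max_range = 2.0   # assumed range (m)
dists = np.linalg.norm(np.asarray(pcd.points), axis=1)
pcd = pcd.select_by_index(np.where(dists < max_range)[0].tolist())

# (2) Radius outlier filter: reject points with fewer than 20 neighbors in radius r.
pcd, _ = pcd.remove_radius_outlier(nb_points=20, radius=0.02)

# (3) Voxel grid downsampling (used before NDT P2D registration).
pcd_down = pcd.voxel_down_sample(voxel_size=0.01)
print(len(pcd.points), "->", len(pcd_down.points))
```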
To evaluate the methods under different colorspaces, three versions of the data sets were generated in RGB, HSV, and Lab colorspaces.
For the TUM data set, Equation (6) was used to convert the RGB-D scans into pointclouds; the camera parameters (focal length and image center) and the scale factor used for the conversion were taken from Sturm et al. (2012). For this data set, no outlier-removal preprocessing was done because the pointclouds are inherently less noisy and more feature-rich, as seen in Figure 2. The only pre-processing was to artificially introduce noise into the pointclouds to evaluate the robustness of the algorithms. These degradation procedures included generating data sets with (1) downsampled pointclouds with a voxel size of , and (2) a Gaussian blur kernel with window size applied to the color images.
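A sketch of the two degradation procedures, assuming OpenCV for the image blur and Open3D for the downsampling; the kernel and voxel sizes shown are placeholders rather than the exact settings.

```python
import cv2
import open3d as o3d

# (1) Gaussian blur applied to a color frame (hypothetical TUM file name).
color = cv2.imread("rgb/frame.png")
blurred = cv2.GaussianBlur(color, (9, 9), 0)

# (2) Sparsify the point cloud built via Equation (6) with a voxel grid.
pcd = o3d.io.read_point_cloud("frame.ply")
sparse = pcd.voxel_down_sample(voxel_size=0.05)
print(len(pcd.points), "->", len(sparse.points))
```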

4.3 | Proposed registration method

The new method (NDT-6D) that is proposed in this work is based on Stoyanov et al. (2012) and Korn et al. (2014). The method uses NDT D2D registration but introduces color by computing each NDT cell mean from both the point and color information (Equation [8]); for the covariance calculation in Equation (9), only the geometric information is used. Colors are represented in the Lab color space, as in the work of Korn et al. (2014):

$$\mu_{i}=\frac{1}{\left|C_{i}\right|} \sum_{p \in C_{i}}\left[x_{p}, y_{p}, z_{p}, L_{p}, a_{p}, b_{p}\right]^{T}, \qquad (8)$$

$$\Sigma_{i}=\frac{1}{\left|C_{i}\right|-1} \sum_{p \in C_{i}}\left(\mathbf{x}_{p}-\mu_{i}^{xyz}\right)\left(\mathbf{x}_{p}-\mu_{i}^{xyz}\right)^{T}, \qquad (9)$$

where $C_i$ is the set of points falling in cell $i$, $\mathbf{x}_p$ are the 3D coordinates of point $p$, and $\mu_i^{xyz}$ is the spatial part of the 6D cell mean.
The cell means $\mu_i$ are used only to find the cell correspondences between the NDT maps using k-nearest neighbor methods, and the registration is done using only the geometric information of the cells ($\mu_i^{xyz}$ and $\Sigma_i$). For optimization of the NDT D2D registration cost function, the auto-differentiation library Ceres (Agarwal et al., 2022) was used; hence, the derivatives of the NDT-6D cost function were not calculated manually for the optimization.
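A minimal sketch of the NDT-6D cell construction and correspondence step described above, assuming unit weighting between the spatial and Lab components and a k-d tree for the nearest-neighbor search; the subsequent minimization of the D2D cost with Ceres is not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def ndt6d_cells(points, colors_lab, cell_size=0.1, min_points=5):
    """Fit NDT cells with a 6D mean (x, y, z, L, a, b), Equation (8), and a
    3D spatial covariance, Equation (9)."""
    cells = {}
    keys = np.floor(points / cell_size).astype(int)
    for key, p, c in zip(map(tuple, keys), points, colors_lab):
        cells.setdefault(key, []).append(np.hstack((p, c)))
    means6d, covs3d = [], []
    for samples in cells.values():
        samples = np.asarray(samples)
        if len(samples) < min_points:
            continue
        means6d.append(samples.mean(axis=0))       # 6D mean: geometry + color
        covs3d.append(np.cov(samples[:, :3].T))    # covariance of xyz only
    return np.asarray(means6d), np.asarray(covs3d)

def cell_correspondences(src_means6d, tgt_means6d):
    """k-nearest-neighbor matching (k = 1) of source cells to target cells on
    the 6D means; only the 3D part is later used in the D2D cost."""
    _, idx = cKDTree(tgt_means6d).query(src_means6d, k=1)
    return idx   # idx[i] = index of the target cell matched to source cell i

# Toy usage with random data standing in for two filtered, colored scans.
pts = np.random.rand(2000, 3)
lab = np.random.rand(2000, 3)
mu_s, cov_s = ndt6d_cells(pts, lab)
mu_t, cov_t = ndt6d_cells(pts + 0.01, lab)
print(cell_correspondences(mu_s, mu_t)[:10])
```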
Our method's novelty lies in how we incorporate color information in NDT registration. While NDT D2D employs the same registration cost function, it neglects the color information in the point cloud and relies solely on its shape information. In contrast, our approach utilizes the color information of the NDT cells to establish correspondences between the source and target point clouds, similar to Korn et al. (2014), without increasing the algorithm's complexity. This differs from the previous method of incorporating color information in NDT registration, which involved calculating color GMMs for the NDT cells and was more complex. By building on the works on NDT D2D registration and ICP-6D, we propose a new NDT-based registration method that effectively integrates color information.

4.4 | Evaluation method and measures

In this work, state-of-the-art registration methods are tested and evaluated on a complex agricultural data set, GRAPES3D, and a nonagriculture indoor data set, TUM RGBD data set. The point clouds are pre-processed as described in the previous sub-section. The registration methods evaluated in this work are divided into three main groups. The first group contains the registration methods that only used geometric information for registration. The second group of the registration method uses color point information for registration, and the third group uses image features.
  1. 3D/geometric point info (Group 1)-ICP point-to-point, ICP point-to-plane, NDT point-2-distribution, NDT distribution-to-distribution, and TEASER++.
  2. Color + 3D/geometric point info (Group 2)-Generalized ICP-6D, NDT-6D (ours), and color NDT registration.
  3. Image features + 3D/geometric point info (Group 3)-SIFT correspondence + ICP point-2-point, and SIFT correspondence + NDT distribution-2-distribution.

FIGURE 4 Example of extracted labeled data for registration evaluation. (a) Grape cluster and (b) supporting beam.
The registration methods in Groups 1, 2, and 3 were tested on the GRAPES3D data set to evaluate the performance of the different registration methods in an agri-robotics scenario. In addition, the best-performing registration methods are further evaluated for robustness against noise using the TUM RGBD data set.
We performed pairwise registration of every pair of consecutive scans for all registration methods. In the case of all ICP-based registration methods, we downsampled the pointclouds with a voxel grid of , and the grid size used to create the NDT maps for all evaluations was . For the TEASER++ algorithm, the noise parameter used was , which provides a balance between robustness and fast iteration.
Due to the lack of ground truth information in the GRAPES3D data set, we chose to evaluate the registration performance on this data set by calculating the root mean square error (RMSE) between identifiable and static objects like grape clusters, pots, or beams (Figure 4). This method allows greater insight into the registration quality for small objects of interest, a feature required for acquiring physical crop measures, such as grape cluster volume.
The object of interest is marked by drawing a bounding box in the color image, followed by extracting the corresponding point cloud using color-based segmentation. To minimize the errors and bias of a manual label, we use the practice outlined by Kurtser and Edan (2018b) for the extraction of a grape cluster from a manually marked region of interest. The steps include (1) conversion of the image into the NDI color space; (2) clustering using K-means in the NDI color space; and (3) clustering using K-means in 3D space (i.e., using the points) for the NDI clusters. We used grape clusters for the evaluation by marking one prominent cluster in every fifth scan and extracting its pointcloud. The total number of grape clusters extracted from the whole data set was 194.
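The labeling steps can be sketched as follows, assuming the common green-red normalized difference index as the NDI definition and scikit-learn's K-means; the heuristic used to pick the grape cluster is an illustrative assumption rather than the exact labeling tool used in the evaluation.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_grape_cluster(roi_rgb, roi_xyz, k_color=2, k_space=2):
    """roi_rgb: (H, W, 3) color crop; roi_xyz: (H, W, 3) matching 3D points."""
    r = roi_rgb[..., 0].astype(np.float64)
    g = roi_rgb[..., 1].astype(np.float64)
    ndi = (g - r) / (g + r + 1e-6)   # assumed NDI definition (green-red index)

    # (2) K-means on the per-pixel NDI values.
    color_labels = KMeans(n_clusters=k_color, n_init=10).fit_predict(
        ndi.reshape(-1, 1)).reshape(ndi.shape)
    # Assumption: the grape cluster is the least "green" NDI cluster.
    grape_label = np.argmin([ndi[color_labels == k].mean() for k in range(k_color)])
    pts = roi_xyz[color_labels == grape_label]

    # (3) K-means in 3D space; keep the largest spatial cluster.
    space_labels = KMeans(n_clusters=k_space, n_init=10).fit_predict(pts)
    return pts[space_labels == np.bincount(space_labels).argmax()]
```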

Given the source point cloud transformed into the reference frame of the target point cloud, the following measures are extracted (a computation sketch is given after the list):
  1. Root mean square error (RMSE) between the transformed labeled source points and the labeled target points.
  2. Euclidean distance between the centers of mass of the transformed labeled source points and the labeled target points.
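A sketch of the two measures; because the two labeled clouds are not in one-to-one correspondence, a nearest-neighbor pairing is assumed here for the RMSE.

```python
import numpy as np
from scipy.spatial import cKDTree

def registration_errors(labeled_src_in_tgt, labeled_tgt):
    """Nearest-neighbor RMSE between the transformed labeled source points and
    the labeled target points, plus the distance between their centers of mass."""
    d, _ = cKDTree(labeled_tgt).query(labeled_src_in_tgt, k=1)
    rmse = np.sqrt(np.mean(d ** 2))
    com = np.linalg.norm(labeled_src_in_tgt.mean(axis=0) - labeled_tgt.mean(axis=0))
    return rmse, com

# Toy example: the same labeled cluster shifted by 1 cm along x.
tgt = np.random.rand(500, 3)
print(registration_errors(tgt + np.array([0.01, 0.0, 0.0]), tgt))
```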
In the case of the TUM RGBD data set, the ground truth trajectory is given and used to evaluate different registration methods. We have reported the RMSE of relative pose error (RPE) using the tool provided with the data set.

5 | RESULTS AND DISCUSSION

5.1 | Results on GRAPES3D data set

The registration is performed in a pairwise manner, and the registration results are compared using evaluation measures described in the previous section. The registration results are given in Table 1.
As can be interpreted from the measures collected in Table 1, registration of the colored point cloud (RGB-D data) is better when either the color information or image features are used to find the point correspondences between the two scans. The differences are statistically significant, as shown in the three-way ANOVA followed by a post hoc analysis in Appendix A.

5.1.1 | Impact of color information

TABLE 1 Summary of registration results for algorithms not relying on color information, point-by-point color information, or color information from extracted image features.

Note: Measures collected for each registered pair of point clouds include root mean square error (RMSE) and distance between the centers of mass (COM). The table presents the average value over all frame pairs in four scenarios from the GRAPES3D data set-controlled outdoor environment versus commercial vineyard conditions, and the camera facing straight into the growing row at and . Best, green; ours, orange.

FIGURE 5 Pointcloud representation of (a) RGB, (b) Lab, and (c) HSV color spaces, showing the similarity between the RGB and Lab color spaces and their dissimilarity from the HSV color space.

Of all the registration methods evaluated, ICP point-2-point registration performed the worst. The ICP point-2-plane registration (RMSE ) also performed poorly, as no initial guess was provided for registration. The state-of-the-art pointcloud registration algorithms, like NDT point-2-distribution (RMSE ) and NDT distribution-2-distribution (RMSE ), performed fairly well for registration not based on color information. The new and robust registration algorithm TEASER++ also did not perform well, with an RMSE close to for all sequences. The reason for the poor performance of the above-mentioned registration methods on this data set is the scene's complexity, with very few defined features and an abundance of noisy points in the point cloud. This conclusion is also supported by the fact that the simplified scene in the controlled outdoor data set yields a slightly lower average error (RMSE ) compared with the commercial vineyard setting.

The impact of scene complexity and noisy points on the registration can be mitigated by using the color information either on a point-by-point basis or as image features for searching corresponding points in the two scans making the registration more robust. In the case of using point color information, the ICP-6D and NDT-6D have performed similarly ( ) with better performance on the commercial vineyard data set, while Color NDT P2D registration was worst in the group.

5.1.2 | Impact of color-space

FIGURE 6 Point cloud of grape clusters for an individual frame and for the registered frames.
For NDT-6D, three color spaces (RGB, Lab, and HSV) were evaluated. Of these, RGB and Lab performed better than HSV. The better performance of the RGB and Lab color spaces may be explained by the leaves all being represented by similar values in these spaces. In contrast, in the HSV color space, there are significant distinctions between objects of the same class, as seen in Figure 5. As a result, a lot of additional noise is added to the registration procedure, leading to less accurate registration results.
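To make the color-space comparison concrete, the snippet below (a minimal sketch assuming per-point colors stored as floats in [0, 1]; the exact conversion routines used in the experiments are not restated here) converts point colors from RGB to HSV with the Python standard library and to Lab with scikit-image before they are attached to the 6D representation.

```python
import numpy as np
import colorsys
from skimage.color import rgb2lab  # assumed available; any RGB-to-Lab routine works

def to_hsv(rgb):
    """Per-point RGB (N, 3) in [0, 1] -> HSV (N, 3) in [0, 1]."""
    return np.array([colorsys.rgb_to_hsv(*c) for c in rgb])

def to_lab(rgb):
    """Per-point RGB (N, 3) in [0, 1] -> CIE Lab (L in [0, 100], a/b roughly in [-128, 127])."""
    return rgb2lab(rgb.reshape(1, -1, 3)).reshape(-1, 3)
```

Because hue varies strongly across leaves of the same plant, HSV spreads points of one class over the color dimensions, which is consistent with the behavior reported above.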

5.1.3 | Registration using image features

For image-feature-based registration, the NDT D2D and ICP point-2-point registration methods were evaluated with predefined point correspondences obtained from the best-matched SIFT features (feature distance < 0.5). Between these two methods, SIFT+ICP point-2-point performed better when the viewing angle was , and SIFT+NDT D2D performed better when the viewing angle was . For the SIFT-based registration methods, the RMSE was . The SIFT+NDT registration performed better for a viewing angle of because a more detailed NDT map was created due to the direct viewing of the plants.
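The following sketch outlines how such predefined correspondences could be produced (an illustrative pipeline under our own assumptions, not the authors' exact code; the function name, the descriptor normalization, and the back-projection helper are ours): SIFT keypoints are matched between two frames, weak matches are discarded with a descriptor-distance threshold, and the surviving pixels are back-projected to 3D with the depth image and camera intrinsics, yielding seed point pairs for NDT D2D or ICP.

```python
import cv2
import numpy as np

def sift_seed_correspondences(img1, img2, depth1, depth2, K, max_dist=0.5):
    """Match SIFT features between two color frames (BGR, as loaded by OpenCV)
    and back-project the matched pixels to 3D. K is the 3x3 intrinsic matrix,
    depths are in meters, and max_dist follows the threshold quoted in the text
    (applied here to L2-normalized descriptors)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)
    des1 = (des1 / np.linalg.norm(des1, axis=1, keepdims=True)).astype(np.float32)
    des2 = (des2 / np.linalg.norm(des2, axis=1, keepdims=True)).astype(np.float32)
    matches = cv2.BFMatcher(cv2.NORM_L2).match(des1, des2)
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    def backproject(kp, depth):
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        z = depth[v, u]
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

    pairs = []
    for m in matches:
        if m.distance < max_dist:
            p = backproject(kp1[m.queryIdx], depth1)
            q = backproject(kp2[m.trainIdx], depth2)
            if p[2] > 0 and q[2] > 0:  # keep only pixels with valid depth
                pairs.append((p, q))
    return np.array(pairs)
```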

5.1.4 | Overall performance

In general, the best-performing algorithms yielded a registration error (RMSE and COM distance) of , with NDT-6D giving the second-best results, with a difference of in RMSE compared to the best results. These differences are found to be statistically insignificant (p-val , see Appendix A). The RMSE originates not only from the registration error but, among other factors, from the manual labeling mismatch (the shape of the labeled area) and from the variation in the sensory data acquired for the same object from different directions. Therefore, with increased accuracy in the acquisition process, including the acquisition of the ground-truth information, the registration error could be even lower.
Nevertheless, this registered point cloud can be used in various outdoor in-field agricultural robotics applications with the reported accuracy. For example, previous research on in-field single-frame grape cluster size estimation, using the same data sets evaluated in this work (Kurtser, Ringdahl, Rotstein, Berenstein, et al., 2020), reported a fitting error of when estimating grape cluster size. The fitting was performed based on a single frame with a low average point cloud density of for the test case shown in Figure 6. With the implementation of a registration procedure, the cloud density is expected to increase (by a factor of two for the test case), which will increase the fitting accuracy for in-field phenotyping procedures. The registered point cloud cluster is downsampled to using a voxel filter of . The volume used to calculate the point density is the maximum cluster volume out of all considered frames.
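The density bookkeeping described above can be reproduced with a simple voxel filter and a reference volume, as in the sketch below (the function names are ours, and the voxel size and cluster volume are placeholders for the values not restated here).

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()          # guard against ndim quirks across numpy versions
    counts = np.bincount(inverse)
    out = np.zeros((counts.size, 3))
    for d in range(3):
        out[:, d] = np.bincount(inverse, weights=points[:, d]) / counts
    return out

def point_density(points, volume_m3):
    """Points per cubic meter inside a given reference volume."""
    return len(points) / volume_m3
```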

5.2 | Results on TUM RGBD data set

From the GRAPES3D data sets, we established that using color information, in the form of per-point color or image features, significantly increases the registration accuracy. Therefore, for the TUM data set, we evaluate only the color-based methods that generated the best results. We evaluate the algorithms on the data set with added noise similar to that experienced in the GRAPES3D data sets, including (1) Gaussian blurring with a kernel size of ; and (2) pointcloud downsampling (DS) with a voxel size of . Table 2 summarizes the RMSE of the relative pose error, that is, the translation error eT (m) and the rotation error eR ( ), on the three data set sequences.
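The blur perturbation can be simulated as sketched below (the kernel size is an assumed placeholder for the unspecified value); the DS condition reuses a voxel filter such as the one sketched in Section 5.1.4.

```python
import cv2

def add_blur(rgb, kernel_size=5):
    """Gaussian blur on the color image, approximating the 'Blur' condition.
    kernel_size is an assumed placeholder, not the value used in the experiments."""
    return cv2.GaussianBlur(rgb, (kernel_size, kernel_size), 0)
```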
TABLE 2 Summary of registration results on the TUM RGBD data set.
Noise  Method    Metric  Room   Desk   XYZ
WO     SIFT+ICP  eT      0.033  0.033  0.018
                 eR      1.235  1.385  0.018
       SIFT+NDT  eT      0.011  0.014  0.007
                 eR      0.756  1.026  0.571
       ICP6D     eT      0.026  0.027  0.012
                 eR      1.23   1.565  0.828
       NDT6D     eT      0.014  0.018  0.01
                 eR      0.869  1.298  0.686
Blur   SIFT+ICP  eT      0.041  0.059  0.039
                 eR      10.06  9.307  1.733
       SIFT+NDT  eT      0.022  0.021  0.012
                 eR      1.251  1.269  0.76
       ICP6D     eT      0.026  0.026  0.011
                 eR      1.198  1.459  0.78
       NDT6D     eT      0.016  0.02   0.01
                 eR      0.932  1.325  0.675
DS     SIFT+ICP  eT      0.043  0.058  0.04
                 eR      6.55   3.741  1.91
       SIFT+NDT  eT      0.025  0.026  0.015
                 eR      1.403  1.392  0.943
       ICP6D     eT      0.019  0.019  0.007
                 eR      0.971  1.242  0.654
       NDT6D     eT      0.016  0.02   0.009
                 eR      0.935  1.328  0.677
Note: Root mean square error (RMSE) of the relative pose error (eT [m] and eR [ ]) for the point cloud without noise (WO), the point cloud with blur (Blur), and the downsampled point cloud (DS). Best, green; ours, orange.
We can see from Table 2 that the NDT-based methods performed better than the ICP-based methods for each sequence, with or without additive noise ( ). We can also see that in the scenario with no added noise, NDT6D shows comparable but slightly worse results than SIFT+NDT. These results are statistically significant ( ), as shown in Appendix A. With the addition of noise, NDT6D remained relatively robust, increasing the RMSE by compared with the case without added noise. In comparison, the increase in RMSE for the best-performing registration method (SIFT+NDT) is more than for added noise. Our proposed method, NDT-6D, outperforms the SIFT+NDT method for blur noise, especially in the room scenario. In the case of downsampling noise, the accuracy of SIFT+NDT was significantly degraded, with NDT6D and ICP6D presenting the best results.
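For reference, the eT and eR entries in Table 2 are the translation and rotation magnitudes of the relative pose error between consecutive frames; a minimal sketch of that computation, assuming 4x4 homogeneous ground-truth and estimated poses, is given below (the function name is ours).

```python
import numpy as np

def relative_pose_error(T_gt_i, T_gt_j, T_est_i, T_est_j):
    """Relative pose error between frames i and j.
    Returns (eT in the translation unit of the poses, eR in degrees)."""
    rel_gt = np.linalg.inv(T_gt_i) @ T_gt_j      # ground-truth relative motion
    rel_est = np.linalg.inv(T_est_i) @ T_est_j   # estimated relative motion
    err = np.linalg.inv(rel_gt) @ rel_est        # residual transform
    eT = np.linalg.norm(err[:3, 3])
    cos_angle = np.clip((np.trace(err[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    eR = np.degrees(np.arccos(cos_angle))
    return eT, eR
```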

6 | CONCLUSIONS

In conclusion, point cloud registration is a critical component for autonomous agricultural robotics in field conditions, enabling in-field crop manipulation and phenotyping activities. This paper has highlighted how the state-of-the-art methods for registering uncolored point clouds collected under the above-mentioned conditions are limited, and the importance of colored point clouds or complementary registered image data has been emphasized.
Most current methods utilizing color information rely on image features. This paper has proposed an alternative approach, NDT-6D, which directly integrates color into the distance metrics between NDT cells. The proposed method demonstrated registration accuracy comparable to current state-of-the-art color-based methods on highly accurate pointclouds, while maintaining higher consistency in accuracy on noisier point clouds. Additionally, it has shown the capability to register point clouds under challenging conditions where image feature extraction is difficult.
These findings suggest that implementing the proposed algorithms in current commercial-grade RGB-D cameras could potentially improve the accuracy of in-field agricultural operations relying on point cloud data, such as plant manipulation and crop and plant size estimation for growth modeling. However, state-of-the-art algorithms have limitations under conditions where the scene lacks features, as demonstrated in the arable crops use case.
The successful use of RGB-D sensors for registration implies that, in certain conditions, the technology could replace LiDARs for autonomous navigation. Nevertheless, further testing is necessary to validate this proposition.

ACKNOWLEDGMENTS

This work has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 858101.

DATA AVAILABILITY STATEMENT

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

ORCID

Himanshu Gupta http://orcid.org/0000-0001-9364-7994
Achim J. Lilienthal http://orcid.org/0000-0003-0217-9326

REFERENCES

Agarwal, S., Mierle, K. & Team, T.C.S. (2022) Ceres Solver. Available at: https://github.com/ceres-solver/ceres-solver
Alenya, G., Dellen, B. & Torras, C. (2011) 3D modelling of leaves from color and ToF data for robotized plant measuring. In: 2011 IEEE International Conference on Robotics and Automation. pp. 3408-3414
Al-Hiary, H., Bani-Ahmad, S., Reyalat, M., Braik, M. & Alrahamneh, Z. (2011) Fast and accurate detection and classification of plant diseases. International Journal of Computer Applications, 17(1), 31-38.
Andreasson, H. & Stoyanov, T. (2012) Real time registration of RGB-D data using local visual features and 3D-NDT registration. In: SPME Workshop at International Conference on Robotics and Automation (ICRA). Vol. 1.
Arad, B., Balendonck, J., Barth, R., Ben-Shahar, O., Edan, Y. & Hellström, T. et al. (2020) Development of a sweet pepper harvesting robot. Journal of Field Robotics, 37(6), 1027-1039.
Arad, B., Kurtser, P., Barnea, E., Harel, B., Edan, Y. & Ben-Shahar, O. (2019) Controlled lighting and illumination-independent target detection for real-time cost-efficient applications. The case study of sweet pepper robotic harvesting. Sensors, 19(6), 1390.
Bac, C.W., van Henten, E.J., Hemming, J. & Edan, Y. (2014) Harvesting robots for high-value crops: State-of-the-art review and challenges ahead. Journal of Field Robotics, 31(6), 888-911.
Bakker, T., van Asselt, K., Bontsema, J., Mü Iler, J. & van Straten, G. (2006) An autonomous weeding robot for organic farming. In: Field and Service Robotics 25. pp. 579-590.
Barth, R., Hemming, J. & van Henten, E.J. (2016) Design of an eye-in-hand sensing and servo control framework for harvesting robotics in dense vegetation. Biosystems Engineering, 146, 71-84.
Bawden, O., Kulk, J., Russell, R., McCool, C., English, A., Dayoub, F., Lehnert, C. & Perez, T. (2017) Robot for weed species plant-specific management. Journal of Field Robotics, 34(6), 1179-1199.
Besl, P.J. & McKay, N.D. (1992) Method for registration of 3-D shapes. In: Sensor fusion IV: control paradigms and data structures. Vol. 1611, pp. 586-606.
Biber, P., Weiss, U., Dorna, M. & Albert, A. (2012) Navigation system of the autonomous agricultural robot bonirob. In: Workshop on Agricultural Robotics: Enabling Safe, Efficient, and Affordable Robots for Food Production (Collocated with IROS 2012), Vilamoura, Portugal.
Botterill, T., Paulin, S., Green, R., Williams, S., Lin, J., Saxton, V., Mills, S., Chen, X. & Corbett-Davies, S. (2017) A robot system for pruning grape vines. Journal of Field Robotics, 34(6), 1100-1122.
Bulanon, D., Burks, T. & Alchanatis, V. (2009) Fruit visibility analysis for robotic citrus harvesting. Transactions of the ASABE, 52(1), 277-283.
Chebrolu, N., Läbe, T. & Stachniss, C. (2020) Spatio-temporal non-rigid registration of 3d point clouds of plants. In: 2020 IEEE International Conference on Robotics and Automation (ICRA). pp. 3112-3118.
Chebrolu, N., Lottes, P., Schaefer, A., Winterhalter, W., Burgard, W. & Stachniss, C. (2017) Agricultural robot dataset for plant classification, localization and mapping on sugar beet fields. The International Journal of Robotics Research, 36(10), 1045-1052.
Chen, Y. & Medioni, G. (1992) Object modelling by registration of multiple range images. Image and Vision Computing, 10(3), 145-155.
Dale, L.M., Thewis, A., Boudry, C., Rotar, I., Dardenne, P., Baeten, V. & Pierna, J.A.F. (2013) Hyperspectral imaging applications in agriculture and agro-food product quality and safety control: a review. Applied Spectroscopy Reviews, 48(2), 142-159.
Das, A. & Waslander, S.L. (2014) Scan registration using segmented region growing NDT. The International Journal of Robotics Research, 33(13),
Dong, W., Roy, P. & Isler, V. (2020) Semantic mapping for orchard environments by merging two-sides reconstructions of tree rows. Journal of Field Robotics, 37(1), 97-121.
Fernández-Novales, J., Saiz-Rubio, V., Barrio, I., Rovira-Más, F., CuencaCuenca, A., SantosAlves, F., Valente, J., Tardaguila, J. & Diago, M.P. (2021) Monitoring and mapping vineyard water status using noninvasive technologies by a ground robot. Remote Sensing, 13(14), 2830.
Gao, X., Li, J., Fan, L., Zhou, Q., Yin, K., Wang, J., Song, C., Huang, L. & Wang, Z. (2018) Review of wheeled mobile robots' navigation problems and application prospects in agriculture. IEEE Access, 6, 49248-49268.
Guo, N., Zhang, B., Zhou, J., Zhan, K. & Lai, S. (2020) Pose estimation and adaptable grasp configuration with point cloud registration and geometry understanding for fruit grasp planning. Computers and Electronics in Agriculture, 179, 105818.
Hacking, C., Poona, N., Manzan, N. & Poblete-Echeverría, C. (2019) Investigating 2-D and 3-D proximal remote sensing techniques for vineyard yield estimation. Sensors, 19(17), 3652. Available from: https://www.mdpi.com/1424-8220/19/17/3652; https://doi.org/
Harel, B., Kurtser, P., Van Herck, L., Parmet, Y. & Edan, Y. (2016) Sweet pepper maturity evaluation via multiple viewpoints color analyses. In: CIGR-AgEng Conference, Aarhus, Denmark, 26-29 June, 2016. pp. 1-7.
Hemming, J., Ruizendaal, J., Hofstee, J.W. & Van Henten, E.J. (2014) Fruit detectability analysis for different camera positions in sweet-pepper. Sensors, 14(4), 6032-6044
Huhle, B., Magnusson, M., Straßer, W. & Lilienthal, A.J. (2008) Registration of colored 3D point clouds with a kernel-based extension to the normal distributions transform. In: 2008 IEEE International Conference on Robotics and Automation. pp. 4025-4030.
Kamilaris, A. & Prenafeta-Boldú, F.X. (2018) Deep learning in agriculture: a survey. Computers and electronics in agriculture, 147, 70-90.
Kirk, R., Mangan, M. & Cielniak, G. (2020) Feasibility study of in-field phenotypic trait extraction for robotic soft-fruit operations. UKRAS20 Conference: "Robots into the real world" Proceedings (pp. 21-23). https://doi.org/10.31256/Uk4Td6|
Korn, M., Holzkothen, M. & Pauli, J. (2014) Color supported generalized-ICP. In: 2014 International Conference on Computer Vision Theory and Applications (VISAPP). Vol. 3, pp. 292-599.
Kurtser, P. & Edan, Y. (2018a) Statistical models for fruit detectability: spatial and temporal analyses of sweet peppers. Biosystems Engineering, 171, 272-289.
Kurtser, P. & Edan, Y. (2018b) The use of dynamic sensing strategies to improve detection for a pepper harvesting robot. In: 2018 IEEE/RS International Conference on Intelligent Robots and Systems (IROS). pp. .
Kurtser, P., Ringdahl, O., Rotstein, N. & Andreasson, H. (2020) Pointnet and geometric reasoning for detection of grape vines from single frame rgb-d data in outdoor conditions. In: 3rd Northern Lights Deep Learning Workshop, Tromsö, Norway 20-21 January, 2019. Vol. 1, pp. .
Kurtser, P., Ringdahl, O., Rotstein, N., Berenstein, R. & Edan, Y. (2020) In-field grape cluster size assessment for vine yield estimation using a mobile robot and a consumer level RGB-D camera. IEEE Robotics and Automation Letters, 5(2), 2031-2038.
Lehnert, C., Sa, I., McCool, C., Upcroft, B. & Perez, T. (2016) Sweet pepper pose detection and grasping for automated crop harvesting. In: 2016 IEEE International Conference on Robotics and Automation (ICRA). pp. .
Liao, Q., Sun, D. & Andreasson, H. (2022) FuzzyPSReg: strategies of fuzzy cluster-based point set registration. IEEE Transactions on Robotics, 38(4), 2632-2651. https://doi.org/10.1109/TRO.2021.3123898
Magnusson, M., Lilienthal, A. & Duckett, T. (2007) Scan registration for autonomous mining vehicles using 3D-NDT. Journal of Field Robotics, 24(10), 803-827.
Magnusson, M., Nuchter, A., Lorken, C., Lilienthal, A.J. & Hertzberg, J. (2009) Evaluation of 3D registration reliability and speed-a comparison of ICP and NDT. In: 2009 IEEE International Conference on Robotics and Automation. pp. 3907-3912.
Malavazi, F.B., Guyonneau, R., Fasquel, J.-B., Lagrange, S. & Mercier, F. (2018) Lidar-only based navigation algorithm for an autonomous agricultural robot. Computers and Electronics in Agriculture, 154 .
Milella, A., Marani, R., Petitti, A. & Reina, G. (2019) In-field high throughput grapevine phenotyping with a consumer-grade depth camera. Computers and Electronics in Agriculture, 156, 293-306.
Nandi, C.S., Tudu, B. & Koley, C. (2016) A machine vision technique for grading of harvested mangoes based on maturity and quality. IEEE Sensors Journal, 16(16), 6387-6396.
Oliveira, L.F., Moreira, A.P. & Silva, M.F. (2021) Advances in agriculture robotics: a state-of-the-art review and challenges ahead. Robotics, .
Papadimitriou, A., Kleitsiotis, I., Kostavelis, I., Mariolis, I., Giakoumis, D., Likothanassis, S. & Tzovaras, D. (2022) Loop closure detection and slam in vineyards with deep semantic cues. In: 2022 International Conference on Robotics and Automation (ICRA). pp. 2251-2258.
Pire, T., Mujica, M., Civera, J. & Kofman, E. (2019) The Rosario dataset: multisensor data for localization and mapping in agricultural environments. The International Journal of Robotics Research, 38(6), 633-641.
Pomerleau, F., Colas, F. & Siegwart, R. (2015) A review of point cloud registration algorithms for mobile robotics. Foundations and Trends in Robotics, 4(1), 1-104.
Ringdahl, O., Kurtser, P. & Edan, Y. (2019) Performance of rgb-d camera for different object types in greenhouse conditions. In: 2019 European Conference on Mobile Robots (ECMR). pp. 1-6.
Schunck, D., Magistri, F., Rosu, R.A., Cornelißen, A., Chebrolu, N. & Paulus, S. et al. (2021) Pheno4d: a spatio-temporal dataset of maize and tomato plant point clouds for phenotyping and advanced plant analysis. Plos One, 16(8), e0256340.
Shalal, N., Low, T., McCarthy, C. & Hancock, N. (2013) A review of autonomous navigation systems in agricultural environments. In SEAg 2013: Innovative agricultural technologies for a sustainable future. University of Southern Queensland.
Singh, V. & Misra, A.K. (2017) Detection of plant leaf diseases using image segmentation and soft computing techniques. Information Processing in Agriculture, 4(1), 41-49.
Stoyanov, T., Magnusson, M., Andreasson, H. & Lilienthal, A.J. (2012) Fast and accurate scan registration through minimization of the distance between compact 3D NDT representations. The International Journal of Robotics Research, 31(12), 1377-1393.
Sturm, J., Engelhard, N., Endres, F., Burgard, W. & Cremers, D. (2012) A benchmark for the evaluation of rgb-d slam systems. In: Proceedings of the International Conference on Intelligent Robot Systems (IROS).
Tavares, A.C., Lawin, F.J. & Forssén, P.-E. (2020) Assessing losses for point set registration. IEEE Robotics and Automation Letters, 5(2), 3360-3367.
Tian, H., Wang, T., Liu, Y., Qiao, X. & Li, Y. (2020) Computer vision technology in agricultural automation-a review. Information Processing in Agriculture, 7(1), 1-19.
Tillett, N., Hague, T., Grundy, A. & Dedousis, A. (2008) Mechanical withinrow weed control for transplanted crops using computer vision. Biosystems Engineering, 99(2), 171-178.
Vadivambal, R. & Jayas, D.S. (2011) Applications of thermal imaging in agriculture and food industry-a review. Food and Bioprocess Technology, 4(2), 186-199.
Valencia, R., Saarinen, J., Andreasson, H., Vallvé, J., Andrade-Cetto, J. & Lilienthal, A.J. (2014) Localization in highly dynamic environments using dual-timescale ndt-mcI. In: 2014 IEEE International Conference on Robotics and Automation (ICRA). pp. 3956-3962.
Villena-Martinez, V., Oprea, S., Saval-Calvo, M., Azorin-Lopez, J., FusterGuillo, A. & Fisher, R.B. (2020) When deep learning meets data alignment: a review on deep registration networks (drns). Applied Sciences, 10(21), 7524.
Vit, A. & Shani, G. (2018) Comparing RGB-D sensors for close range outdoor agricultural phenotyping. Sensors, 18(12), 4413.
Wang, X.A., Tang, J. & Whitty, M. (2020) Side-view apple flower mapping using edge-based fully convolutional networks for variable rate chemical thinning. Computers and Electronics in Agriculture, 178, 105673.

Yang, H., Shi, J. & Carlone, L. (2020) TEASER: fast and certifiable point cloud registration. IEEE Transactions on Robotics.
Zaenker, T., Lehnert, C., McCool, C. & Bennewitz, M. (2021) Combining local and global viewpoint planning for fruit coverage. arXiv preprint arXiv:2108.08114.
Zaenker, T., Smitt, C., McCool, C. & Bennewitz, M. (2020) Viewpoint planning for fruit size and position estimation. arXiv preprint arXiv:2011.00275.
Zemmour, E., Kurtser, P. & Edan, Y. (2017) Dynamic thresholding algorithm for robotic apple detection. In: 2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC). pp. 240-246.
How to cite this article: Gupta, H., Lilienthal, A.J., Andreasson, H. & Kurtser, P. (2023) NDT-6D for color registration in agri-robotic applications. Journal of Field Robotics, 40, 1603-1619. https://doi.org/10.1002/rob.22194

APPENDIX A: STATISTICAL ANALYSIS OF EXPERIMENTAL RESULTS

A.1 | GRAPES3D data set

The absolute registration errors produced for the GRAPES3D data set are presented in Figure A1. The figure presents the originally acquired errors and a log2 transformation of the errors, which normalizes their distribution.
Log-transformed errors were tested with a three-way ANOVA (Table A1) accounting for registration algorithm (NDT D2D/NDT-6D in Lab color space/SIFT+NDT D2D), environment (controlled outdoor environment/commercial vineyard), and viewing angle . The residuals were found to adhere to the normal distribution assumption (Kolmogorov-Smirnov test ). A posthoc analysis of the statistically significant differences between algorithms (Table A2) shows that the errors produced by the colorless D2D method are significantly higher than those of the NDT-6D and SIFT+NDT methods.
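A minimal sketch of this analysis, assuming the per-pair errors are stored in a pandas DataFrame with hypothetical columns err, algorithm, environment, and angle, could look as follows (the two-way interaction structure mirrors Table A1).

```python
import numpy as np
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

def three_way_anova(df):
    """Three-way ANOVA on log2-transformed registration errors."""
    df = df.assign(log_err=np.log2(df["err"]))
    model = ols("log_err ~ C(algorithm) + C(angle) + C(environment)"
                " + C(algorithm):C(angle) + C(algorithm):C(environment)"
                " + C(angle):C(environment)", data=df).fit()
    table = anova_lm(model, typ=2)                         # Type-II sums of squares
    ks = stats.kstest(stats.zscore(model.resid), "norm")   # residual normality check
    return table, ks
```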

A.2 | TUM data set

The absolute registration translation errors, compared to the IMU, produced for the TUM data set are presented in Figure A2. The figure presents the originally acquired errors and the Box-Cox-transformed errors with parameter , which normalizes their distribution.
The transformed errors were tested with a three-way ANOVA (Table A3) accounting for registration algorithm (NDT-6D in /SIFT+NDT D2D/ICP6D), environment (room/desk/xyz), and noise (WO/Blur/DS). The residuals were found to adhere to the normal distribution assumption (Kolmogorov-Smirnov test for 500 randomly selected values). A posthoc analysis was performed of the statistically significant differences between every pair of algorithms .
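The Box-Cox normalization mentioned above can be sketched as follows (the transformation parameter is left as an argument because its value is not restated here; the function name is ours).

```python
from scipy import stats

def boxcox_errors(errors, lmbda=None):
    """Box-Cox transform of positive error values.
    If lmbda is None, scipy estimates it by maximum likelihood; pass the
    paper's parameter explicitly to reproduce the transformation of Figure A2."""
    if lmbda is None:
        transformed, lmbda = stats.boxcox(errors)
    else:
        transformed = stats.boxcox(errors, lmbda=lmbda)
    return transformed, lmbda
```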

FIGURE A1 Distribution of registration errors for the GRAPES3D data set before (top) and after (bottom) a log transformation, as a function of the registration algorithm. Results show that a log transformation is required to fit a normal distribution to the errors. Results also show higher error rates for the NDT D2D method compared to the alternatives.
TABLE A1 Results of a three-way ANOVA test performed on log-transformed registration errors for the GRAPES3D data set as a function of registration algorithm (NDT D2D/NDT-6D in Lab color space/SIFT+NDT D2D), environment (controlled outdoor environment/commercial vineyard), and viewing angle . Results show statistically significant differences between algorithms, angles, and environments.
                       sum_sq  df     F      PR(>F)
Algorithm              12.02   2.0    14.80
Angle                  13.58   1.0    33.45
Environment            2.42    1.0    5.97
Algorithm:Angle        0.11    2.0    0.13
Algorithm:Environment  0.14    2.0    0.17
Angle:Environment      0.41    1.0    1.01
Residual               162.77  401.0  -      -
TABLE A2 Results of a posthoc test performed on log-transformed registration errors for the GRAPES3D data set as a function of registration algorithm (NDT D2D/NDT-6D in Lab color space/SIFT+NDT D2D). Results show statistically significant differences between NDT D2D and the color-based algorithms.
            NDT D2D  NDT-6D Lab  SIFT+NDT D2D
NDT D2D     -        0.000016    Outliers
NDT-6D Lab  -        -           0.47

APPENDIX B: ANALYSIS OF SUGARBEETS2016 DATABASE

The SugarBeets2016 data set is a benchmark data set acquired using a mobile robot equipped with an RGB-D camera in sugar beet fields, and it presents a use case for crop monitoring in arable crops. The data were acquired using a Kinect sensor mounted on a mobile platform, with the RGB-D camera facing the ground and at a distance of from the ground. The data were collected at a frequency of , during which the robot moved forward . The pointcloud resolution is relatively small, and the clouds are reasonably flat (Figure B2) compared with the pointclouds in the GRAPES3D and TUM RGBD data sets, owing to the restricted viewing area of the camera, which poses challenges to various registration algorithms (Figures B1 and B2).
In this section, we present the evaluation of the outlined algorithms for registering the collected pointclouds. The outlined registration algorithms include no-color (ICP, NDT D2D), color-based (ICP-6D, our NDT-6D), and feature-based (SIFT+NDT D2D) methods. Unfortunately, all of the above algorithms failed to register consecutive scans, as the pose estimation from scan registration resulted in an identity transformation matrix or only a minor deviation from it.
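A simple way to flag this failure mode is to test whether the estimated transform is numerically the identity; the sketch below (the thresholds are illustrative assumptions and the check was not part of the original evaluation) shows the kind of test implied here.

```python
import numpy as np

def is_near_identity(T, max_trans=1e-3, max_rot_deg=0.1):
    """Return True if a 4x4 rigid transform is effectively the identity,
    i.e., the registration produced no meaningful motion estimate."""
    trans = np.linalg.norm(T[:3, 3])
    cos_angle = np.clip((np.trace(T[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_deg = np.degrees(np.arccos(cos_angle))
    return trans < max_trans and rot_deg < max_rot_deg
```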
Below, we outline the reasons for the failure of the various tested registration algorithms and the resulting limitations.
  • No-color: Geometric-based registration algorithms such as NDT D2D and ICP Pt2Pl rely on matching 3D shapes or minimizing point-to-plane errors. However, in the SugarBeets2016 data set, the top-down viewpoint facing the ground leads to flat point clouds with few distinct 3D features. The high speed and low frame acquisition rate also result in limited overlap between consecutive point clouds, making registration difficult. Additionally, the choice of a Kinect sensor designed for indoor applications results in ambient light penetrating between the wheels, reducing the point cloud density in certain areas and further decreasing the available 3D features for registration. The output of these algorithms is usually an identity matrix.

FIGURE A2 Distribution of registration errors for the TUM data set before (top) and after (bottom) a Box-Cox transformation with , as a function of the registration algorithm. Results show that a Box-Cox transformation is required to fit a normal distribution to the errors. Results also show fewer outlier values and lower variance in the error rate for NDT-6D.
TABLE A3 Results of a three-way ANOVA test performed on transformed registration translation errors for the TUM data set as a function of registration algorithm (NDT-6D in /SIFT+NDT D2D/ICP6D), environment (room/desk/xyz), and noise (WO/Blur/DS). Results show statistically significant differences between algorithms.
                       sum_sq   df   F        PR(>F)
Algorithm              227.67   2.0  79.79
Environment            3821.73  1.0  2678.85
Noise                  201.12   2.0  70.48
Algorithm:Environment  377.23   2.0  132.21
Environment:Noise      21.94    2.0  7.69
Algorithm:Noise        913.55   4.0  160.08
Residual
FIGURE B1 Images from the SugarBeets2016 data set.
  • Color-based registration: In color-based registration methods such as ICP-6D and NDT-6D, color information is used to find point correspondences between point clouds. However, the SugarBeets data set has significant areas of homogeneous color (the ground), which limits the number of distinct color features. Some frames may not contain any objects besides the soil and imaging artifacts, leading to errors such as points appearing on the robot or variations in color due to ambient light. The lack of variability in the color information, combined with the low overlap between consecutive point clouds and the limited 3D features, results in minor deviations from the identity matrix and high error rates compared to the ground truth.
FIGURE B2 Visualization of a single pointcloud from the SugarBeets2016 data set and of the pointcloud overlap for consecutive pointclouds obtained by manually registering them.
  • Image feature-based algorithms: Compared with the color-based algorithms and their limitations mentioned above, the feature-based registration algorithm (SIFT+NDT D2D) shows some potential to register the SugarBeets pointclouds. In some consecutive frames, correct SIFT feature matches can be observed (Figure B3), but the number of valid points extracted from the SIFT matches and the filtered point clouds is significantly lower than in the GRAPES3D and TUM RGBD data sets. As a result, the rate of erroneous outlier points is also higher, making their adverse effect on the registration more notable. Although SIFT+NDT D2D resulted in no valid registration or a highly erroneous registration matrix in most cases, it did work for a few scan pairs with correct SIFT feature matches and valid points in the point cloud. An example of SIFT+NDT D2D registration is shown in Figure B3.
FIGURE B3 Visualization of the SIFT+NDT D2D registration for the SugarBeets2016 data set, showing the matched SIFT features (top), the source and target pointclouds (middle), and the registered pointclouds. The registration failed even though correctly matched SIFT feature points are present, because the noise in the pointcloud results in an incorrect selection of matched pairs.
The aforementioned attributes of SugarBeets2016 arise mostly from the fact that the data set was not originally intended for use as a SLAM benchmark, which limits the applicability of current registration algorithms. Some of these attributes, such as the errors caused by ambient light due to the use of a Kinect sensor, have been addressed in more recent RGB-D sensors like the Intel RealSense D435, which was used to collect the GRAPES3D data set analyzed in this study. However, other attributes, such as poor image and pointcloud features, will persist in applications that follow the outlined protocol. Such applications require extensive development of registration algorithms to perform well under these conditions.

  1. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.
    © 2023 The Authors. Journal of Field Robotics published by Wiley Periodicals LLC.
  2. https://github.com/IntelRealSense/librealsense, last accessed October 2022.