
1 Introduction

Point cloud registration is the task of aligning two or more point clouds by estimating the relative transformation between them, and it has been an essential part of many computer vision algorithms such as 3D object matching [8], localization and mapping [30], dense 3D reconstruction of a scene [29], and object pose estimation [31].

Recently, point set registration methods [38] have been gaining importance due to growing commercial interest in virtual and mixed reality [25], commercial robotics, and autonomous driving applications [17, 23]. In most of these applications, massive amounts of 3D point cloud data (PCD) are captured directly from various active sensors (i.e., LiDAR and depth cameras), but at different times and under different poses or local coordinate systems. The task of point cloud registration is then to recover a common coordinate system, which is done by estimating some type of geometric similarity in the point data that can be recovered through optimization over a set of spatial transformations.

One of the oldest and most widely used registration algorithms, Iterative Closest Point (ICP) [1, 3], is based on an iterative matching process in which point proximity establishes candidate point pair sets. Given a set of point pairs, the rigid transformation that minimizes the sum of squared point pair distances can be calculated efficiently in closed form. ICP and its dozens of variants [34] often fail to produce correct results in many common but challenging scenarios, where noise, uneven point density, occlusions, or large pose displacements can leave a large proportion of points without valid matches.

Compared to traditional ICP-based approaches, much research has been done on the use of statistical models for registration, which in principle can provide better estimates for outlier rejection, convergence, and geometric matching [15, 27, 39]. In particular, many statistical methods have been designed around the Expectation Maximization (EM) algorithm [7] as it has been shown that EM generalizes the ICP algorithm under a few basic assumptions [16, 35]. Many statistical registration techniques have explicitly utilized this paradigm to deliver better robustness and accuracy [6, 13, 16, 18], but these algorithms tend to be much slower than ICP and often offer only marginal improvement in all but a few specific circumstances. As a result, ICP-based methods are still heavily used in practice for many real-world applications.

Our proposed method falls into the category of GMM-based statistical registration algorithms. We tackle the typical shortcomings of these methods, slow speeds and lack of generality, by adopting an efficient hierarchical construction for the creation of an adaptive multi-scale point matching process. Efficiency: The search over multiple scales as a recursive tree-based search produces a highly performant logarithmic-time algorithm that quickly and adaptively finds the most appropriate level of geometric detail with which to match points. Generality: By using a data-driven point matching procedure over multiple scales, our proposed algorithm can automatically adapt to many different types of scenes, particularly with real-world data where widely varying sampling sparsity and scene complexity are common. Finally, we introduce a novel Mahalanobis distance approximation resembling ICP’s point-to-plane distance minimization metric, which more faithfully approximates the true MLE solution under general anisotropic covariances than previous methods.

Table 1. A Comparison of Registration Methods. Multiply Linked: many-to-one or many-to-many correspondences; Anisotropic: general shape alignment using unrestricted covariance structures; Multi-Scale: registration at multiple levels of granularity; Data Transform: underlying data structure or transform; Association Complexity: complexity of the data association problem over all N points (E Step in the case of EM-based methods); Optimization Complexity: size of the optimization problem (M Step in the case of EM-based methods). We assume both point clouds have N points, V voxels/grid points, and J mixture components.

2 Related Work

Our method builds on previous work in GMM-based methods for registration such as GMM-Reg [19, 21], JRMPC [13], and MLMD [9], while also leveraging recent results using hierarchical GMMs for point cloud modeling [10]. By adopting a GMM-based paradigm, we gain robustness in situations of large pose displacement, optimal solutions in the form of maximum likelihood estimates, and an ability to more easily leverage point-level parallelism on GPUs. By augmenting the GMM into a hierarchy, we can efficiently compress empty space, achieve logarithmic-time matching, and perform robust multi-scale data analysis.

The earliest statistical methods placed an isotropic covariance around every point in the first set of points and then registered the second set of points to it under an MLE framework (MPM [6], EM-ICP [16], CPD [27, 28]). More modern statistical approaches utilize a generative model framework, where a GMM is usually constructed from the points explicitly and registration is solved in an MLE sense using an EM or ECM [26] algorithm (REM-Seg [11], ECMPR [18], JRMPC [13], MLMD [9]), though some utilize a max correlation or \(L_2\) distance approach (Kernel Correlation [40], GMM-Reg [19, 21], SVR [2], NDT-D2D [36]). Since a statistical framework for point cloud registration tends to be more heavyweight than ICP, techniques such as decimation (EM-ICP [16]), voxelization (NDT methods [36, 37]), or Support Vector Machines (SVR [2]) have been used to create smaller or more efficient models, while others have relied on computational tricks such as the Fast Gauss Transform (CPD [27], ECMPR [18]), or have devised ways to exploit point-level parallelism and GPU-computation for increased computational tractability and speed (MLMD [9], parallelized EM-ICP [39]).

In contrast to these statistical model-based approaches, modern robust variants of point-to-plane ICP (e.g. Trimmed ICP [5], Fractional ICP [32]) are often much faster and sometimes perform nearly as well, especially under real-world conditions [33]. See Table 1 for a detailed comparison of key registration algorithms utilizing the ICP and GMM paradigms. Our proposed method offers favorable complexity over both classes of algorithms due to its novel use of a GMM-Tree structure, without needing to resort to discretization strategies like the NDT-based methods.

Fig. 1. Multi-Scale Representation using a Hierarchy of Gaussian Mixtures: The top row shows identical geometries (black lines) and associated points (blue circles), which are represented by different levels of Gaussian models (green contour at 1 \(\sigma \)). (a) (Top) Ideal normals (red arrows) on the surfaces. (b) Too coarse (only two Gaussians at Level 2): poor segmentation leads to incorrect normals, which will degrade accuracy when registering points to the model. (c) Too fine (using the finest level of Gaussian models): over-segmentation leads to erroneous normals as sample noise overtakes real facet geometry. (d) Adaptive multi-scale (mixture of level 3 and level 4 models): point-to-model association can be much more robust when fidelity adaptively changes according to the data distribution, so that facets can be well modeled given differing spatial frequencies and sampling densities.

3 Registration as Expectation Maximization

The Expectation Maximization (EM) algorithm forms the theoretical foundation for most modern statistical approaches to registration and also generalizes ICP under certain basic assumptions. EM is commonly employed for MLE optimization when directly maximizing the data likelihood over the sought-after variable is intractable, but maximizing the expected joint data likelihood conditioned on a set of latent variables is tractable. In the registration case, the sought-after variable is the transformation T between point clouds and the latent variables are the point-model associations.

The problem is set up as follows: Given point clouds \({\mathcal {Z}}_1\) and \({\mathcal {Z}}_2\), we would like to maximize the data probability of \({\mathcal {Z}}_2\) under a set of transformations T with respect to a probability model \(\varvec{\varTheta }_{{\mathcal {Z}}_1}\) derived from the first point cloud \({\mathcal {Z}}_{1}\).

$$\begin{aligned} {\hat{T}} = \mathop {\mathrm{argmax}}\limits _{T} p(T({\mathcal {Z}}_2) | {\hat{\varvec{\varTheta }}}_{{\mathcal {Z}}_1}) \end{aligned}$$
(1)

That is, the most likely estimate of the transformation \(\hat{T}\) is the estimate that maximizes the probability that the samples of the transformed point cloud \(T({\mathcal {Z}}_2)\) came from some probabilistic representation of spatial likelihood (parameterized by \(\hat{\varvec{\varTheta }}\)) derived from the spatial distribution of the first point cloud \({\mathcal {Z}}_1\). The most common form for parametrizing this probability distribution is through a Gaussian Mixture Model (GMM), whose data probability is defined as a convex combination of J Gaussians weighted by the J-component vector \(\mathbf {\pi }\),

$$\begin{aligned} p(z | \varvec{\varTheta }_{{\mathcal {Z}}_1}) = \sum _{j=1}^J \pi _j \mathcal {N}(z|\varvec{\varTheta }_{j}) \end{aligned}$$
(2)

The derivation of the probability model \(\varvec{\varTheta }_{{\mathcal {Z}}_1}\) may be as simple as statically setting an isotropic covariance around each point in \({\mathcal {Z}}_1\) (e.g. EM-ICP [16]), or as complicated as framing the search for \(\varvec{\varTheta }_{{\mathcal {Z}}_1}\) as a completely separate optimization problem (e.g. SVR [2], MLMD [9]). Regardless of how the model is constructed, however, EM provides an iterative procedure to solve for T through the introduction of a set of latent correspondence variables \(\mathcal {C}= \{c_{ij}\}\) that dictate how points \(\mathbf {z}_{i}\in {\mathcal {Z}}_2\) probabilistically associate to the J subcomponents \(\varvec{\varTheta }_{j}\) of the model \(\varvec{\varTheta }_{{\mathcal {Z}}_1}\). Intuitively, we can view EM as a statistical generalization of ICP: The E Step estimates data associations, replacing ICP’s matching step, while the M Step maximizes the expected likelihood conditioned on these data associations, replacing ICP’s distance minimization step over matched pairs.
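To make the E/M alternation concrete, the following minimal Python sketch shows the generic loop structure described above. It is illustrative only: the `model.expectations` and `model.maximize` interface is a hypothetical stand-in for whatever E Step and M Step routines a particular method supplies.

```python
import numpy as np

def em_register(points, model, T_init, n_iters=20):
    """Generic EM registration loop (illustrative sketch only).

    points : (N, 3) array, the second point cloud Z2
    model  : object built from Z1 exposing `expectations` (E step) and
             `maximize` (M step) -- a hypothetical interface for illustration
    T_init : initial 4x4 rigid transform guess
    """
    T = T_init
    for _ in range(n_iters):
        moved = points @ T[:3, :3].T + T[:3, 3]   # apply current guess to Z2
        gamma = model.expectations(moved)         # E step: soft associations (Eq. 3)
        T = model.maximize(points, gamma)         # M step: MLE transform given gamma (Eq. 6)
    return T
```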

In the E Step, we use Bayes’ rule to calculate expectations over the correspondences. For a particular point \(\mathbf {z}_{i}\), its expected correspondence to \(\varvec{\varTheta }_{j}\) (\(E[c_{ij}]\)) can be calculated as follows,

$$\begin{aligned} E[c_{ij} = 1] = \frac{\pi _j\mathcal {N}(\mathbf {z}_{i}|\varvec{\varTheta }_{j})}{\sum _{k=1}^{J} \pi _k \mathcal {N}(\mathbf {z}_{i}|\varvec{\varTheta }_{k})} \end{aligned}$$
(3)

Generally speaking, larger model sizes (larger J) produce more accurate registration results since larger models have more representational fidelity. However, large models produce very slow registration algorithms: Given N points in \({\mathcal {Z}}_2\), Eq. 3 must be calculated \(N \times J\) times for each subsequent M Step. For methods that utilize models of size \(J \approx O(N)\) (e.g. EM-ICP [16], CPD [27], GMMReg [21]), this causes a data association complexity of \(O(N^2)\) and thus these algorithms have problems scaling beyond small point cloud sizes.
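For reference, a straightforward NumPy sketch of the dense E Step in Eq. 3 makes the \(N \times J\) cost explicit; this is an illustrative implementation, not code from any of the cited systems.

```python
import numpy as np

def naive_e_step(points, weights, means, covs):
    """Dense responsibilities gamma_ij (Eq. 3) for N points against J Gaussians.

    Cost is O(N*J): every point is evaluated against every component, which is
    why flat models with J ~ O(N) scale quadratically with point cloud size.
    """
    N, J = points.shape[0], means.shape[0]
    log_resp = np.empty((N, J))
    for j in range(J):
        diff = points - means[j]                          # (N, 3) residuals
        cov_inv = np.linalg.inv(covs[j])
        maha = np.einsum('ni,ij,nj->n', diff, cov_inv, diff)
        _, logdet = np.linalg.slogdet(covs[j])
        log_resp[:, j] = np.log(weights[j]) - 0.5 * (maha + logdet + 3 * np.log(2 * np.pi))
    log_resp -= log_resp.max(axis=1, keepdims=True)       # stabilize before normalizing
    resp = np.exp(log_resp)
    return resp / resp.sum(axis=1, keepdims=True)
```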

To combat this scaling problem, our approach builds from recent advances in fast statistical point cloud modeling via hierarchical generative models [10]. In this approach, point cloud data is modeled via a GMM-Tree, which is built in a top-down recursive fashion from small-sized Gaussian Mixtures. This GPU-based approach can produce high-fidelity GMM-Trees in real time, but given that these models were originally designed to optimize reconstructive fidelity and for dynamic occupancy map generation, it is not obvious how to adapt them for use in a registration setting. That is, we must derive a way to associate new data to the model and then use the associations to drive an optimization over T. As such, we can use their model construction algorithm to construct \(\varvec{\varTheta }_{{\mathcal {Z}}_1}\) from \({\mathcal {Z}}_1\) (see [10] for details), but we must derive a separate and new EM algorithm to use these GMM-Tree models for registration.

4 Hierarchical Gaussian Mixture Mahalanobis Estimation

In this section, we present our proposed approach for hierarchical GMM-based registration under a new EM framework. In Sect. 4.1 we discuss our new E Step for probabilistic data association, which utilizes the GMM-Tree representation for point clouds, and in Sect. 4.2 we introduce a new optimization criterion that approximates the MLE solution for T over rigid transformations.

4.1 E Step: Adaptive Tree Search

Our proposed E Step uses a recursive search procedure to perform probabilistic data association in logarithmic time. We also introduce an early stopping heuristic in order to select the most appropriate scale at which to associate data to the hierarchical model.

The GMM-Tree representation from [10] forms a top-down hierarchy of 8-component GMM nodes, with each individual Gaussian component in a node having its own 8-component GMM child. Thus, a particular node in the GMM-Tree functions in two ways: first, as a probabilistic partition of the data and second, as a statistical description of the data within a partition. We exploit both of these properties in our proposed E Step by using the partitioning information to produce an efficient search algorithm and by using the local data distributions as a scale selection heuristic.

Algorithm 1. Adaptive tree-based E Step (pseudocode figure).

Logarithmic Search. Each level in the GMM-Tree forms a statistical segmentation at a finer level of granularity and detail. Crucially, the expectation of a point \(\mathbf {z}_{i}\) with respect to a particular Gaussian component \(\varvec{\varTheta }_{j}\) is exactly the sum of the expectations of that point with respect to its child GMM. Thus, if we query a parent node's point-model expectation and it falls under a threshold, we can prune away all of its children's expectations, thus avoiding the calculation of all \(N\times J\) probabilistic associations. Refer to Algorithm 1 for details. In our implementation, we only traverse down the maximum likelihood path at each step. By utilizing the hierarchy in this way, we can recursively search through the tree in logarithmic time (\(O(\log {J})\)) to calculate a point's expectation. This is in contrast to previous registration algorithms using traditional GMMs, where a linear search must be performed over all mixture components (O(J)) in order to match data to the model.
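The Python sketch below illustrates this maximum-likelihood-path traversal. It is not the authors' Algorithm 1: the `node` layout (weights, means, covs, children, ids) is a hypothetical GMM-Tree node structure chosen for illustration, and `complexity`/`LAMBDA_C` refer to the scale-selection heuristic of Sect. 4.1 (sketched further below).

```python
import numpy as np

LAMBDA_C = 0.01  # adaptive complexity threshold from Sect. 4.1

def log_gauss(x, mean, cov):
    """Log-density of a 3D Gaussian."""
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    maha = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (maha + logdet + 3.0 * np.log(2.0 * np.pi))

def tree_associate(point, node):
    """Recursive maximum-likelihood-path association (sketch of the idea only).

    `node` is assumed to expose the 8 Gaussians of one GMM-Tree node
    (node.weights, node.means, node.covs), child sub-nodes node.children[k]
    (None at a leaf), and global component indices node.ids[k]. Only the most
    likely child is descended at each level, giving O(log J) work per point.
    """
    log_p = np.array([np.log(node.weights[k]) +
                      log_gauss(point, node.means[k], node.covs[k])
                      for k in range(8)])
    best = int(np.argmax(log_p))
    # stop early at a leaf, or if the winning cluster is already planar enough
    if node.children[best] is None or complexity(node.covs[best]) < LAMBDA_C:
        gamma = np.exp(log_p - np.logaddexp.reduce(log_p))  # responsibilities among siblings
        return node.ids[best], gamma[best]
    return tree_associate(point, node.children[best])
```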

Multiscale Adaptivity. Real-world point clouds often exhibit large spatial discrepancies in sampling sparsity and geometric complexity, and so different parts of the scene may benefit from being represented at different scales when performing point-scene association. Refer to Fig. 1 for an overview of this concept. Under a single scale, the point cloud modeling and matching process might succumb to noise or sampling inadequacies if the given modeling fidelity is not appropriate to the local data distribution.

To take advantage of the GMM-Tree multiscale representation and prevent overfitting, we check the geometric complexity of the current mixture component and stop the recursion early when it falls below a threshold. This complexity check acts as a heuristic for proper scale selection. We implement our complexity function (Complexity(\(\cdot \)) in Algorithm 1, L10) as \(\frac{\lambda _3}{\lambda _1 + \lambda _2 + \lambda _3}\) for each covariance, where \(\lambda _1 \ge \lambda _2 \ge \lambda _3\) are its associated eigenvalues. We set the adaptive threshold to \(\lambda _C = 0.01\) for all experiments. This means we terminate the search at a particular scale if the cluster currently associated to the point becomes too planar: when 1% or less of its variance occurs along its normal direction. Experimentally, we have found that recursing further tends to chase noise.
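As a concrete illustration, the complexity heuristic can be written in a few lines of NumPy; this is a minimal sketch of the formula above, not the authors' GPU implementation.

```python
import numpy as np

def complexity(cov):
    """Planarity heuristic from Sect. 4.1: fraction of variance along the normal.

    Returns lambda_3 / (lambda_1 + lambda_2 + lambda_3) with eigenvalues sorted
    in descending order; values at or below ~0.01 indicate a near-planar cluster.
    """
    eigvals = np.linalg.eigvalsh(cov)      # ascending order for symmetric matrices
    lam3, lam2, lam1 = eigvals             # smallest first
    return lam3 / (lam1 + lam2 + lam3)

# example: a thin, plate-like covariance is flagged as planar
cov = np.diag([1.0, 0.8, 0.005])
print(complexity(cov) < 0.01)              # True -> stop recursing at this scale
```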

Fig. 2. Scale Selection using a GMM-Tree. To show qualitatively how scale selection works, we first build a model over a crop (couch, plant, and floor) of the Stanford Scene Lounge dataset [42]. We then associate random colors to each mixture component and color each point according to its data-model expectation. (a) shows this coloring given a static recursion level of 2 in the GMM-Tree, while (c) shows this coloring for a static recursion level of 3. We contrast this with (b), which shows our adaptively scale-selected model containing components at varying levels of recursion depending on the local properties of the mixture components. The scale selection process provides our Mahalanobis estimator (Sect. 4.2) with robust component normals, preventing the use of over-fitted or under-fitted mixture components and resulting in a more accurate registration result.

Figure 2 shows a graphical depiction of what our adaptive threshold looks like in practice. The Gaussian mixture components break down the point cloud data at a static tree level of 2 (\(J=64\)) and 3 (\(J=512\)), as compared to an adaptive model that is split into different recursion levels according to a complexity threshold \(\lambda _C=0.01\). The points are color-coded according to their expected cluster ownership. Note that the adaptive model has components from both levels of the GMM hierarchy according to how smooth or complex the facet geometry is. The ability to adapt to changing levels of complexity allows our M Step to always use a robustly modeled piece of geometry (cf. Fig. 1).

4.2 M Step: Mahalanobis Estimation

In this section, we will derive a new M Step for finding the optimal transformation T between a point set \({\mathcal {Z}}_2\) and an arbitrary GMM \(\hat{\varvec{\varTheta }}_{{\mathcal {Z}}_1}\) representing point set \({\mathcal {Z}}_1\).

First, given N points \(\mathbf {z}_{i}\) and J clusters \(\varvec{\varTheta }_{j}\in \hat{\varvec{\varTheta }}_{{\mathcal {Z}}_1}\), we introduce an \(N \times J\) set of point-cluster correspondences \(\mathcal {C}=\{c_{ij}\}\), so that the full joint probability becomes

$$\begin{aligned} \ln p(T({\mathcal {Z}}),\mathcal {C}|\varvec{\varTheta }) = \sum _{i=1}^N \sum _{j=1}^J c_{ij}\{ \ln \pi _j + \ln \mathcal {N}(T(\mathbf {z}_{i})|\varvec{\varTheta }_{j})\} \end{aligned}$$
(4)

We iterate between E and M Steps. On the E Step, we calculate \(\gamma _{ij} \overset{\mathrm {def}}{=} E[c_{ij}]\) under the current posterior. On the M Step, we maximize the expected data log-likelihood with respect to T while keeping all \(\gamma _{ij}\) fixed,

$$\begin{aligned} \hat{T}&=\mathop {\mathrm{argmax}}\limits _{T} E_{p(\mathcal {C}|T({\mathcal {Z}}),\varvec{\varTheta })} [ \ln p(T({\mathcal {Z}}), \mathcal {C}| \varvec{\varTheta }) ]\end{aligned}$$
(5)
$$\begin{aligned}&= {{\mathrm{arg\,min}}}_{T} \sum _{ij}{\gamma _{ij}}(T(\mathbf {z}_{i}) - \varvec{\mu }_{j})^T\varvec{\varSigma }_{j}^{-1}(T(\mathbf {z}_{i}) - \varvec{\mu }_{j}) \end{aligned}$$
(6)

From this construction, we see that the most likely transformation T between point sets is the one that minimizes the weighted sum of squared Mahalanobis distances between points of \({\mathcal {Z}}_2\) and individual clusters of \(\varvec{\varTheta }_{{\mathcal {Z}}_1}\), with weights determined by calculating expected correspondences given the current best guess for \(\hat{T}\).

As shown mathematically in previous work [9, 13, 16, 18], if we restrict T solely to the set of all rigid transformations (\(T \in SE(3)\)) we can further reduce the double sum over both points and clusters into a single sum over clusters. This leaves us with a simplified MLE optimization criterion over weighted moments,

$$\begin{aligned} \hat{T} = {{\mathrm{arg\,min}}}_{T} \sum _j M_j^0 \left( T\left( \frac{M_j^1}{M_j^0}\right) - \varvec{\mu }_{j}\right) ^T\varvec{\varSigma }_{j}^{-1}\left( T\left( \frac{M_j^1}{M_j^0}\right) - \varvec{\mu }_{j}\right) \end{aligned}$$
(7)

where \(M_j^0 = \sum _i {\gamma _{ij}}\) and \(M_j^1 = \sum _i {\gamma _{ij}}\mathbf {z}_{i}\).
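A minimal NumPy sketch of these moment computations (illustrative only) is given below; once the moments are accumulated, the size of the M Step optimization depends only on J, not on N.

```python
import numpy as np

def weighted_moments(points, gamma):
    """Weighted moments feeding Eq. 7 (minimal sketch).

    points : (N, 3) array of Z2
    gamma  : (N, J) soft correspondences from the E step
    """
    M0 = gamma.sum(axis=0)        # M_j^0 = sum_i gamma_ij, shape (J,)
    M1 = gamma.T @ points         # M_j^1 = sum_i gamma_ij * z_i, shape (J, 3)
    return M0, M1
```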

One can interpret the Mahalanobis distance as a generalization of point-to-point distance where the coordinate system has undergone some affine transformation. In the case of GMM-based registration, each affine transformation is determined by the covariance, or shape, of the cluster to which points are being registered. For example, clusters that are mostly planar in shape (two similar eigenvalues and one near zero) will tend to aggressively pull points toward it along its normal direction while permitting free movement in the plane. This observation should match one’s intuition: given that we have chosen a probabilistic model that accurately estimates local geometry, an MLE framework will utilize this information to pull like geometry together as a type of probabilistic shape matching. By using fully anisotropic covariances, arbitrarily oriented point-to-geometry relations can be modeled. Previous algorithms in the literature, however, have yet to fully leverage this general MLE construction. Simplifications are made either by (1) placing a priori restrictions on the complexity of the Gaussian covariance structure (e.g. isotropic only [13] or a single global bandwidth term [16]), or by (2) using approximations to the MLE criterion that remove or degrade this information [9]. The reasons behind both model simplification and MLE approximation are the same: Eq. 7 has no closed form solution. However, we will show how simply reinterpreting the Mahalanobis distance calculation can lead to a highly accurate and novel method for registration.

We first rewrite the inner Mahalanobis distance inside the MLE criterion of Eq. 7 by decomposing each covariance \(\varvec{\varSigma }_{j}\) into its associated eigenvalues \(\lambda \) and eigenvectors \(\varvec{n}\), thereby producing the following equivalence,

$$\begin{aligned} \left| \left| T\left( \frac{M_j^1}{M_j^0}\right) - \varvec{\mu }_{j}\right| \right| ^2_{\varvec{\varSigma }_{j}} = \sum _{l=1}^3 \frac{1}{\lambda _l} \left( \varvec{n}_l^T\left( T\left( \frac{M_j^1}{M_j^0}\right) - \varvec{\mu }_{j}\right) \right) ^2 \end{aligned}$$
(8)

Thus, we can reinterpret each cluster’s Mahalanobis distance term inside the MLE criterion as a weighted sum of three separate point-to-plane distances. The weights are inversely determined by the eigenvalues, with their associated eigenvectors constituting each plane’s normal vector. Going back to the example of a nearly planar Gaussian, its covariance will have two large eigenvalues and one near-zero eigenvalue, with the property that the eigenvectors associated with the larger eigenvalues will lie in the plane and the eigenvector associated with the smallest eigenvalue will point in the direction of its normal vector. Since the weights are inversely related to the eigenvalues, we can easily see that the MLE criterion will mostly disregard any point-to-\(\varvec{\mu }_{j}\) distance inside its plane (that is, along the two dominant PCA axes) and instead disproportionately focus on minimizing out-of-plane distances by pulling nearby points along the normal to the plane.
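The equivalence in Eq. 8 is easy to verify numerically. The short NumPy check below uses an arbitrary random covariance and residual; all values are hypothetical and chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
cov = A @ A.T + 1e-3 * np.eye(3)             # random symmetric positive-definite covariance
d = rng.normal(size=3)                        # residual T(M1/M0) - mu

maha = d @ np.linalg.solve(cov, d)            # left-hand side of Eq. 8

lam, n = np.linalg.eigh(cov)                  # eigenvalues (ascending) and eigenvectors
pt2plane = sum((n[:, l] @ d) ** 2 / lam[l] for l in range(3))  # right-hand side of Eq. 8

print(np.isclose(maha, pt2plane))             # True: Mahalanobis = weighted point-to-plane sum
```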

We can see that by plugging in this equivalence back into Eq. 7, we arrive at the following MLE criterion,

$$\begin{aligned} \hat{T} = {{\mathrm{arg\,min}}}_{T} \sum _{j=1}^J \sum _{l=1}^3 \frac{M_j^0}{\lambda _{j_l}} \left( \varvec{n}_{j_l}^T\left( T\left( \frac{M_j^1}{M_j^0}\right) - \varvec{\mu }_{j}\right) \right) ^2 \end{aligned}$$
(9)

where \(\varvec{n}_{j_l}\), \(l=1,\ldots ,3\), denote the three eigenvectors of the jth (anisotropic) Gaussian covariance and \(\lambda _{j_l}\) the associated eigenvalues.

We have transformed the optimization from the minimization of a weighted sum of J squared Mahalanobis distances to an equivalent minimization of a weighted sum of 3J squared point-to-plane distances. In doing so, we arrive at a form that can be leveraged by any number of minimization techniques previously developed for point-to-plane ICP [3]. Note that unlike traditional point-to-plane methods, which usually involve the computationally difficult task of finding planar approximations over local neighborhoods at every point and sometimes also for multiple scales [22, 41], the normals in Eq. 9 are found through a very small number of \(3 \times 3\) eigendecompositions (typically \(J \le 1000\) for even complex geometric models) over the model covariances, with appropriate scales chosen through our proposed recursive search over the covariances in the GMM-Tree (Sect. 4.1).

We solve Eq. 9 using the linear least squares technique described by Low [24] for point-to-plane ICP optimization, which we adapt into a weighted form. The only approximation required is a linearization of R using the small-angle assumption.
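A minimal sketch of this weighted, linearized solve is given below. It follows the spirit of Low's point-to-plane linearization with per-plane weights \(M_j^0/\lambda_{j_l}\), but it is an illustrative reconstruction under the small-angle assumption, not the authors' implementation; all function and variable names are ours.

```python
import numpy as np

def weighted_point_to_plane(q, mu, normals, lams, M0):
    """One weighted, linearized M-Step solve for Eq. 9 (sketch only).

    q       : (J, 3) weighted centroids M_j^1 / M_j^0 of Z2
    mu      : (J, 3) model means
    normals : (J, 3, 3) eigenvectors of each model covariance (columns)
    lams    : (J, 3) matching eigenvalues
    M0      : (J,)   zeroth moments (total soft weight per cluster)
    Returns a 4x4 rigid transform using the small-angle approximation R ~ I + [w]_x.
    """
    rows, rhs, wts = [], [], []
    for j in range(q.shape[0]):
        for l in range(3):
            n = normals[j][:, l]
            rows.append(np.concatenate([np.cross(q[j], n), n]))  # d/d[w, t] of n.(R q + t - mu)
            rhs.append(-n @ (q[j] - mu[j]))
            wts.append(M0[j] / lams[j, l])
    A, b, w = np.array(rows), np.array(rhs), np.sqrt(np.array(wts))
    x, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)   # x = [w_x, w_y, w_z, t_x, t_y, t_z]
    wx, wy, wz, tx, ty, tz = x
    R = np.array([[1.0, -wz,  wy],
                  [ wz, 1.0, -wx],
                  [-wy,  wx, 1.0]])                               # linearized rotation
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, (tx, ty, tz)
    return T
```

Because of the small-angle linearization, the returned rotation should be re-orthonormalized (or the solve simply repeated within the EM loop), since the approximation is only accurate for modest per-iteration rotations.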

5 Speed vs. Accuracy

For every registration algorithm, there is an inherent trade-off between accuracy and speed. To explore how different registration algorithms perform under various accuracy/speed trade-offs, we have designed a synthetic experiment using the Stanford Bunny. We take 100 random 6DoF transformations of the bunny and then run each algorithm over the same group of random point subsets of increasing cardinality. Our method of obtaining a random transformation is to sample each axis of rotation uniformly from [\(-15, 15\)] degrees and each translation uniformly from [\(-0.05, 0.05\)] (roughly half the extent of the bunny). We can then plot speed vs accuracy as a scatter plot in order to see how changing the point cloud size (a proxy for model complexity) affects the speed vs accuracy tradeoff.
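For concreteness, the random perturbations described above can be generated as follows; this is a sketch of the experimental setup (using SciPy's rotation utilities), not the original benchmarking code.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def random_perturbation(rng, max_deg=15.0, max_t=0.05):
    """Sample a random rigid transform with each Euler axis uniform in
    [-15, 15] degrees and each translation axis uniform in [-0.05, 0.05],
    as described in the text (illustrative sketch)."""
    T = np.eye(4)
    angles = rng.uniform(-max_deg, max_deg, size=3)
    T[:3, :3] = Rotation.from_euler('xyz', angles, degrees=True).as_matrix()
    T[:3, 3] = rng.uniform(-max_t, max_t, size=3)
    return T

rng = np.random.default_rng(42)
transforms = [random_perturbation(rng) for _ in range(100)]   # 100 random 6DoF poses
```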

The algorithms and code used in the following experiments were either provided directly by the authors (JRMPC, ECMPR, NDT-D2D, NDT-P2D, SVR, GMMReg), taken from popular open source libraries (libpointmatcher for TrICP-pt2pt, TrICP-pt2pl, FICP), or are open source re-implementations of the original algorithms with various performance optimizations (EM-ICP-GPU, SoftAssign-GPU, ICP-OpenMP, CPD-C++). Links to the sources can be found on our project page. Parameters for all algorithms were set according to the recommendations of the authors and/or the software. All our experiments were run on an Intel Core i7-5920K CPU and an NVIDIA Titan X GPU.

In order to test how each design decision affects the performance of the proposed algorithm, we test against three variants:

Adaptive Ln: The full algorithm proposed in this paper, Hierarchical Gaussian Mixture Registration (HGMR): adaptive multi-scale data association using a GMM-Tree constructed up to a maximum recursion level of n.

GMM-Tree Ln: Here we use the same GMM-Tree representation for logarithmic-time data association, but without multi-scale adaptivity (\(\lambda _C = 0\)). The tree is constructed up to a maximum recursion level of n. By comparing GMM-Tree to Adaptive, we can see the benefits of stopping our recursive search according to data complexity.

GMM J=n: This variant forgoes a GMM-Tree representation and uses a simple, fixed-complexity, single-level GMM with n mixture components. As in other fixed-complexity GMM-based registration approaches (e.g. [9, 13, 16, 21]), neither recursive data association nor adaptive complexity can be used. However, it is still GPU-optimized and uses the new MLE optimization. Comparing this approach to the tree-based representations (GMM-Tree and Adaptive) shows how the tree-based data representation affects registration performance.

Fig. 3. Each data point represents a particular algorithm's average speed and accuracy when registering together randomly transformed Stanford Bunnies. We produce multiple points for each algorithm at different speed/accuracy levels by applying the methods to point clouds of different sizes. The lower left corner shows the fastest and most accurate algorithms for a particular model size. Our proposed algorithms (black, cyan, and red) tend to dominate the bottom left corner, though robust point-to-plane ICP methods sometimes produce more accurate results, albeit at much slower speeds (e.g. Trimmed ICP).

Figure 3(a) shows each algorithm’s speed vs accuracy trade-off by plotting registration error vs time elapsed. The lower left corner is best (both fast and accurate). One can quickly see how different classes of algorithms clearly dominate each other on the speed/accuracy continuum. For additional clarity, Fig. 3(b) explicitly plots the time scaling of each registration method as a function of point cloud size. For both timing and accuracy, one can see that, roughly speaking, our adaptive tree formulation performs the best, followed by our non-adaptive tree formulation, followed by our non-adaptive non-tree formulation, then ICP-based variants, and then finally previous GMM-based variants (black > cyan > red > blue > green).

It should be noted that even though our proposed algorithms (black, cyan, and red) tend to dominate the lower left corner of Fig. 3(a), certain robust point-to-plane ICP methods sometimes produce more accurate results, albeit at much slower speeds. See, for example, in Fig. 3 that some point-to-plane ICP results had less than \(10^{-2}\) degrees of angular error and roughly 1 s convergence time. We estimate that this timing gap might be decreased given a good GPU-optimized robust planar ICP implementation, though it is unclear whether the neighborhood-based planar approximation scheme used by these algorithms could benefit from GPU parallelization as much as our proposed Expectation Maximization approach, which is designed to be almost completely data-parallel at the point level. However, if computation time is not a constraint for a given application (e.g. offline approaches), we would recommend trying both types of algorithms (our model-based approach and a robust planar ICP-based approach) to see which provides the best accuracy.

For completeness, we repeated the test with two frames of real-world LiDAR data, randomly transformed and subsampled to varying degrees as before in order to obtain our set of speed/accuracy pairs. The results are shown in Fig. 4. As in Fig. 3(a), the bottom left corner is most desirable (both fast and accurate); our methods are shown in red, teal, and black. Given that the bunny and LiDAR scans have very different sampling properties, the similar outcome across all three tests shows that the relative performance of the proposed approach is not dependent on evenly sampled point clouds.

Fig. 4. Speed vs accuracy tests for two types of real-world LiDAR frames with sampling properties very different from those of the Stanford Bunny. In general, the results are similar to those in Fig. 3.

Table 2. Comparison of Registration Methods for the Lounge and LiDAR Datasets. Timing results for both datasets include the time to build the GMM-Tree. Errors are frame-to-frame averages. Speed is reported as the average number of frames per second at which the data could be processed (note that the sensor outputs data frames at 30 Hz for the Lounge data and roughly 10 Hz for the LiDAR data).
Fig. 5. Frame-to-Frame Registration with Outdoor LiDAR Dataset: Ground truth path shown in red, calculated path shown in blue. Each frame of LiDAR data represents a single sweep. We register successive frames together and concatenate the transformations in order to plot the results in a single coordinate system. Note that drift is expected over such long distances as we perform no loop closures. The first two examples in the top row are from GMM-based methods, the next three results are from modern ICP variants, and the last three results show our proposed adaptive GMM-Tree methods at three different maximum recursion levels. For our methods, the timing results include the time to build the GMM model. GMM-based methods generally perform slowly. ICP-based methods fared better in our testing, though our proposed methods show an order-of-magnitude improvement in speed while beating or competing with the other state-of-the-art methods in accuracy.

6 Evaluation on Real-World Data

Lounge Dataset. In this test, we calculate the frame-to-frame accuracy on the Stanford Lounge dataset, which consists of range data produced by moving a handheld Kinect around an indoor environment [42]. We register together every 5th frame for the first 400 frames, each downsampled to 5000 points. To measure the resulting error, we calculate the average Euler angle deviation from the ground truth. Refer to Table 2(a) for error and timing. We chose to focus on rotation error since this was where the largest discrepancies were found among algorithms. The best-performing algorithm we tested against, Trimmed ICP with point-to-plane distance error minimization, had an average Euler angle error of 0.54 degrees and took on average 119 ms to converge. Our best algorithm, the adaptive algorithm with a max depth of 3, had an average Euler angle error of 0.46 degrees and took on average less than half the time (50.5 ms) to converge. The accuracy of our proposed methods is comparable with the best ICP variants, but at roughly twice the speed.
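For reproducibility, one plausible reading of this rotation-error metric is the mean absolute Euler-angle difference between the estimated and ground-truth rotations, as in the following sketch (an assumption on our part, not the authors' exact evaluation code).

```python
import numpy as np
from scipy.spatial.transform import Rotation

def euler_error_deg(R_est, R_gt):
    """Average absolute Euler-angle deviation (degrees) between an estimated
    and a ground-truth 3x3 rotation matrix."""
    delta = Rotation.from_matrix(R_est.T @ R_gt)   # relative rotation
    return np.mean(np.abs(delta.as_euler('xyz', degrees=True)))
```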

Velodyne LiDAR Dataset. We performed frame-to-frame registration on an outdoor LiDAR dataset using a Velodyne (VLP-16) LiDAR and overlaid the results in a common global frame. See Fig. 5 for a qualitative depiction of the result. Table 2(b) summarizes the quantitative results from Fig. 5 in a more readable table format. In Fig. 5, the ground truth path is shown in red, and the calculated path is shown in blue. Since there are no loop closures, the error is expected to compound and cause drift over time. However, despite the compounding error, the bottom right three diagrams of Fig. 5 (and correspondingly, the bottom three line items of Table 2(b)) show that the proposed methods can be used over fairly long distances (city blocks) without the need for any odometry (e.g. INS or GPS) or loop closures. Given that this sensor outputs sweeps at roughly 10 Hz, our methods achieve faster than real-time speeds (17–39 Hz), while the state-of-the-art ICP methods are an order of magnitude slower (\(\approx \)1 fps). Also, note that our times include the time to build the model (the GMM-Tree), which could be utilized for other concurrent applications besides registration.

7 Conclusion

We propose a registration algorithm that uses a Hierarchical Gaussian Mixture to efficiently perform point-to-model association. Performing data association as a recursive tree search yields orders-of-magnitude speed-ups relative to traditional GMM-based approaches that perform these associations linearly. In addition, we leverage the model's multi-scale anisotropic representation using a new approximation scheme that reduces the MLE optimization criterion to a weighted point-to-plane measure. We test our proposed methods against the state of the art and find that our approach is often an order of magnitude faster while achieving similar or greater accuracy.