Three-Filters-to-Normal: An Accurate and Ultrafast Surface Normal Estimator

Over the past decade, significant efforts have been made to improve the trade-off between speed and accuracy of surface normal estimators (SNEs). This paper introduces an accurate and ultrafast SNE for structured range data. The proposed approach computes surface normals by simply performing three filtering operations, namely, two image gradient filters (in the horizontal and vertical directions, respectively) and a mean/median filter, on an inverse depth image or a disparity image. Despite its simplicity, to the best of our knowledge no similar method exists in the literature. In our experiments, we created three large-scale synthetic datasets (easy, medium and hard) using 24 3-dimensional (3D) mesh models. Each mesh model is used to generate 1800--2500 pairs of 480x640 pixel depth images and the corresponding surface normal ground truth from different views. The average angular errors with respect to the easy, medium and hard datasets are 1.6 degrees, 5.6 degrees and 15.3 degrees, respectively. Our C++ and CUDA implementations achieve processing speeds of over 260 Hz and 21 kHz, respectively. Our proposed SNE achieves a better overall performance than all other existing computer vision-based SNEs. Our datasets and source code are publicly available at: sites.google.com/view/3f2n.


I. INTRODUCTION
Real-time 3-dimensional (3D) object recognition is a very challenging computer vision task [3]. The surface normal is an informative and important feature descriptor used in 3D object recognition [4]. Over the past decade, there has not been much research on surface normal estimation, as it is typically considered an auxiliary functionality for other computer vision applications. However, such applications are generally required to run online, and thus, the estimation of surface normals must be carried out extremely fast [4].
Surface normals can be estimated from either a 3D point cloud or a depth/disparity image (see Figure 1). The former, such as a LiDAR point cloud, is generally unstructured. Estimating surface normals from unstructured range data usually requires the generation of an undirected graph, e.g., a k-nearest neighbor graph or a Delaunay tessellation graph. However, the generation of such graphs is very computationally intensive.
Therefore, in recent years, many researchers have focused on surface normal estimation from structured range data, i.e., depth/disparity images.
The existing surface normal estimators (SNEs) can be classified as either computer vision-based [3]-[6] or machine learning-based [7]-[13]. The former typically compute surface normals by fitting planar or curved surfaces to locally selected 3D point sets, using statistical analysis or optimization techniques, e.g., singular value decomposition (SVD) or principal component analysis (PCA) [4]. On the other hand, the latter generally utilize data-driven classification/regression models, e.g., convolutional neural networks (CNNs), to infer surface normal information from RGB or depth images [12].
In recent years, with rapid advances in machine/deep learning, many researchers have resorted to deep convolutional neural networks (DCNNs) for surface normal estimation. For example, Xu et al. [7] utilized a so-called prediction-and-distillation network (PAD-Net) to simultaneously solve two continuous regression tasks (monocular depth prediction and surface normal inference) and two discrete classification tasks (scene parsing and contour detection). Similarly, Li et al. [13] designed a DCNN model to learn the mapping from multi-scale image patches to surface normals and monocular depth. Such inferences were then refined using conditional random fields (CRFs) [14]. Furthermore, Bansal et al. [10] built a skip-network model based on a pre-trained Oxford VGG-16 CNN [15] for 2.5D surface normal prediction and 3D object recognition in 2D images. Recently, Huang et al. [16] formulated the problem of densely estimating local 3D canonical frames from a single RGB image as a joint estimation of surface normals, canonical tangent directions and projected tangent directions, which was then addressed by a DCNN.
The existing data-driven SNEs are generally trained using supervised learning techniques. Hence, they require a large amount of labeled training data to find the best CNN parameters [13]. Additionally, such CNNs were not specifically designed for surface normal estimation, because surface normal estimation was only used as an auxiliary functionality for other computer vision applications, e.g., scene parsing [7], 3D object detection [9], depth perception [13], etc. Furthermore, many robotics and computer vision applications, e.g., autonomous driving, require very fast surface normal estimation (in milliseconds). Unfortunately, the existing machine/deep learning-based SNEs are not that fast. Moreover, the accuracy achieved by data-driven SNEs is still far from satisfactory (the average proportion of good pixels, detailed in Section IV, is usually lower than 80%) [10], [13]. Most importantly, it is arguably more reasonable to estimate surface normals from point clouds or disparity/depth images than from RGB images. Hence, there is a strong motivation to develop a lightweight SNE for structured range data with high accuracy and speed.
The main novel contributions of this work are as follows:
a) A novel, accurate and ultrafast SNE is proposed. We implement our SNE in Matlab, C, C++ and CUDA. The source code will be publicly available at IEEE Xplore for research purposes. Compared with other computer vision-based SNEs, the proposed SNE greatly improves the trade-off between speed and accuracy.
b) Three datasets (easy, medium and hard) are created using 24 3D mesh models. Each mesh model is used to generate 1800-2500 depth images from different views. The corresponding surface normal ground truth is also provided, as the 3D mesh object models (rather than the objects themselves) are available for surface normal ground-truth generation.
The rest of this paper continues in the following manner: Section II reviews the state-of-the-art computer vision-based SNEs; Section III introduces our proposed SNE; the experimental results and the performance evaluation are provided in Section IV; in Section V, we discuss the applications of our SNE; finally, Section VI summarizes the paper and provides recommendations for future work.

II. RELATED WORK
This section provides an overview of computer vision-based SNEs.
1) PlaneSVD SNE [17]: The simplest way to estimate the surface normal of an observed 3D point $\mathbf{p}_i = [x, y, z]^\top$ in the camera coordinate system (CCS) is to fit a local plane

$$n_x x + n_y y + n_z z + b = 0 \tag{1}$$

to the points in $Q_i^+ = [\mathbf{p}_i, Q_i]$, where $Q_i = [\mathbf{q}_{i1}, \dots, \mathbf{q}_{ik}]$ ($\mathbf{q}_{ij} \neq \mathbf{p}_i$) is a set of $k$ neighboring points of $\mathbf{p}_i$. The surface normal $\mathbf{n}_i = [n_x, n_y, n_z]^\top$ can be estimated by solving

$$\min_{\mathbf{b}_i} \big\| [\,Q_i^{+\top}\;\; \mathbf{1}_{k+1}\,]\,\mathbf{b}_i \big\|_2^2, \tag{2}$$

where $\mathbf{b}_i = [\mathbf{n}_i^\top, b]^\top$ and $\mathbf{1}_m$ is an $m$-entry vector of ones. (2) can be solved by factorizing $[\,Q_i^{+\top}\;\; \mathbf{1}_{k+1}\,]$ into $\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$ using SVD. $\hat{\mathbf{b}}_i$ (the optimum $\mathbf{b}_i$) is the column vector of $\mathbf{V}$ corresponding to the smallest singular value in $\boldsymbol{\Sigma}$ [4].
2) PlanePCA SNE [18]: $\mathbf{n}_i$ can also be estimated by removing the empirical mean $\bar{\mathbf{q}}_i = \frac{1}{k+1}\big(\mathbf{p}_i + \sum_{j=1}^{k}\mathbf{q}_{ij}\big)$ from $Q_i^+$ and rearranging (2) as follows:

$$\min_{\mathbf{n}_i} \big\| \tilde{Q}_i^{+\top} \mathbf{n}_i \big\|_2^2, \tag{3}$$

where $\tilde{Q}_i^+ = [\mathbf{p}_i - \bar{\mathbf{q}}_i, \mathbf{q}_{i1} - \bar{\mathbf{q}}_i, \dots, \mathbf{q}_{ik} - \bar{\mathbf{q}}_i]$. Minimizing (3) is equivalent to performing PCA on $Q_i^+$ and selecting the principal component with the smallest covariance [4].
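For illustration, the following is a minimal C++/Eigen sketch of this family of plane-fitting SNEs (PlanePCA-style); the function and variable names are ours and the code is not taken from the implementations of [17], [18]:

```cpp
// Minimal PlanePCA-style sketch: fit a local plane to p_i and its k neighbors
// (stored as the rows of `neighborhood`) and return the plane normal.
#include <Eigen/Dense>

Eigen::Vector3d estimateNormalPCA(const Eigen::MatrixX3d& neighborhood)
{
    // Remove the empirical mean (centroid) from every point.
    Eigen::RowVector3d centroid = neighborhood.colwise().mean();
    Eigen::MatrixX3d centered = neighborhood.rowwise() - centroid;

    // The right singular vector associated with the smallest singular value
    // spans the direction of least variance, i.e., the plane normal.
    Eigen::JacobiSVD<Eigen::MatrixX3d> svd(centered, Eigen::ComputeFullV);
    Eigen::Vector3d normal = svd.matrixV().col(2);

    // Orient the normal towards the camera (negative z half-space).
    if (normal.z() > 0.0) normal = -normal;
    return normal.normalized();
}
```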
3) VectorSVD SNE [4]: A straightforward alternative to fitting (1) to $Q_i^+$ is to minimize the sum of the inner products between $\mathbf{r}_{ij} = \mathbf{q}_{ij} - \mathbf{p}_i$ and $\mathbf{n}_i$, namely,

$$\min_{\mathbf{n}_i} \big\| [\mathbf{r}_{i1}, \dots, \mathbf{r}_{ik}]^\top \mathbf{n}_i \big\|_2^2. \tag{4}$$

This minimization is also carried out using SVD.

4) AreaWeighted SNE [4]: A triangle can be formed by a given pair of $\mathbf{r}_{ij}$ and $\mathbf{r}_{i(j+1)}$, as defined above. A general expression of averaging-based SNEs is as follows [4]:

$$\mathbf{n}_i = \frac{1}{k}\sum_{j=1}^{k} w_j\, \frac{\mathbf{r}_{ij} \times \mathbf{r}_{i(j+1)}}{\|\mathbf{r}_{ij} \times \mathbf{r}_{i(j+1)}\|_2}, \tag{5}$$

where $w_j$ is a weight and $\mathbf{r}_{i(k+1)} = \mathbf{r}_{i1}$. In AreaWeighted SNE, the surface normal of each triangle is weighted by the magnitude of its area:

$$w_j = \tfrac{1}{2}\,\|\mathbf{r}_{ij} \times \mathbf{r}_{i(j+1)}\|_2. \tag{6}$$

5) AngleWeighted SNE [4]: The weight $w_j$ of each triangle relates to the angle between $\mathbf{r}_{ij}$ and $\mathbf{r}_{i(j+1)}$:

$$w_j = \arccos\!\left(\frac{\mathbf{r}_{ij} \cdot \mathbf{r}_{i(j+1)}}{\|\mathbf{r}_{ij}\|_2\,\|\mathbf{r}_{i(j+1)}\|_2}\right), \tag{7}$$

where $\cdot$ is the dot product operator.

6) FALS SNE [5]: The relationship between the Cartesian coordinate system and the spherical coordinate system (SCS) is as follows [5]:

$$\mathbf{p}_i = r_i\,\mathbf{m}_i(\theta_i, \varphi_i), \tag{8}$$

where $r_i$ is the range and $\mathbf{m}_i$ is the unit viewing direction determined by the spherical angles $\theta_i$ and $\varphi_i$ (and hence by the pixel position only). Since all points in $Q_i^+$ are in a small neighborhood [5], their $r_i$ are considered to be identical in FALS SNE. Combining (2) and (8) then yields a linear least-squares problem in $\mathbf{n}_i$ whose coefficient matrix depends only on the viewing directions of the points in $Q_i^+$; it can therefore be precomputed (and pre-inverted) for every pixel, which makes FALS very fast [5].

7) SRI SNE [6]: Similar to FALS SNE, SRI SNE first transforms the range data from the Cartesian coordinate system to the SCS. $\mathbf{n}_i$ is then obtained by computing the partial derivatives of the local tangential surface $s(\theta, \varphi)$: the gradient $\nabla s(\theta_i, \varphi_i)$ is rotated back into the CCS by an SO(3) matrix $\mathbf{R}_i$ defined with respect to $\theta_i$ and $\varphi_i$ and expressed in terms of the unit axis vectors $\mathbf{e}_x$, $\mathbf{e}_y$ and $\mathbf{e}_z$. $\nabla s(\theta_i, \varphi_i)$ can be obtained by applying standard image convolutional kernels.
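The averaging-based SNEs in (5)-(7) can be summarized by the following minimal C++/Eigen sketch (illustrative only; the vectors $\mathbf{r}_{ij}$ are assumed to be given in counter-clockwise order):

```cpp
// Minimal sketch of an averaging-based SNE: each consecutive pair of vectors
// r_ij, r_i(j+1) forms a triangle whose (normalized) normal is weighted either
// by the triangle area (AreaWeighted) or by the enclosed angle (AngleWeighted).
#include <Eigen/Dense>
#include <cmath>
#include <vector>

Eigen::Vector3d averagingSNE(const std::vector<Eigen::Vector3d>& r, bool angleWeighted)
{
    Eigen::Vector3d n = Eigen::Vector3d::Zero();
    const std::size_t k = r.size();
    for (std::size_t j = 0; j < k; ++j) {
        const Eigen::Vector3d& a = r[j];
        const Eigen::Vector3d& b = r[(j + 1) % k];          // r_i(k+1) = r_i1
        const Eigen::Vector3d cross = a.cross(b);           // unnormalized triangle normal
        if (cross.norm() < 1e-12) continue;                 // skip degenerate triangles
        const double w = angleWeighted
            ? std::acos(a.dot(b) / (a.norm() * b.norm()))   // (7): enclosed angle
            : 0.5 * cross.norm();                           // (6): triangle area
        n += w * cross.normalized();
    }
    return n.normalized();
}
```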
8) LINE-MOD SNE [3]: First, the optimal gradient $\nabla z = [\partial z/\partial u, \partial z/\partial v]^\top$ of the depth map is computed. Then, a 3D plane is formed by three points $\mathbf{p}_0$, $\mathbf{p}_1$ and $\mathbf{p}_2$:

$$\mathbf{p}_0 = z\,\mathbf{t}(\hat{\mathbf{p}}_i), \qquad \mathbf{p}_1 = \Big(z + \frac{\partial z}{\partial u}\Big)\,\mathbf{t}\big(\hat{\mathbf{p}}_i + [1, 0]^\top\big), \qquad \mathbf{p}_2 = \Big(z + \frac{\partial z}{\partial v}\Big)\,\mathbf{t}\big(\hat{\mathbf{p}}_i + [0, 1]^\top\big),$$

where $\mathbf{t}(\hat{\mathbf{p}}_i)$ is the vector along the line of sight that goes through an image pixel $\hat{\mathbf{p}}_i = [u_i, v_i]^\top$ and is computed using the camera intrinsic parameters. The surface normal $\mathbf{n}_i$ can then be computed using:

$$\mathbf{n}_i = \frac{(\mathbf{p}_1 - \mathbf{p}_0) \times (\mathbf{p}_2 - \mathbf{p}_0)}{\|(\mathbf{p}_1 - \mathbf{p}_0) \times (\mathbf{p}_2 - \mathbf{p}_0)\|_2}.$$
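A minimal C++/Eigen sketch of this procedure (illustrative only; unit pixel steps and our own helper names are assumed) could look as follows:

```cpp
// Minimal LINE-MOD-style sketch: back-project three neighboring pixels along
// their lines of sight using the depth gradient, then take the cross product.
#include <Eigen/Dense>

// Unit-depth back-projection (line-of-sight vector) of pixel (u, v).
Eigen::Vector3d lineOfSight(double u, double v,
                            double fx, double fy, double u0, double v0)
{
    return Eigen::Vector3d((u - u0) / fx, (v - v0) / fy, 1.0);
}

Eigen::Vector3d lineModNormal(double u, double v, double z,
                              double dz_du, double dz_dv,
                              double fx, double fy, double u0, double v0)
{
    const Eigen::Vector3d p0 = z * lineOfSight(u, v, fx, fy, u0, v0);
    const Eigen::Vector3d p1 = (z + dz_du) * lineOfSight(u + 1.0, v, fx, fy, u0, v0);
    const Eigen::Vector3d p2 = (z + dz_dv) * lineOfSight(u, v + 1.0, fx, fy, u0, v0);
    return ((p1 - p0).cross(p2 - p0)).normalized();
}
```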

III. 3F2N SNE
In this paper, we propose a novel, highly accurate and ultrafast SNE, which is simple to understand and use. Our SNE can compute surface normals from structured range data using three filters, namely, a horizontal image gradient filter, a vertical image gradient filter and a mean/median filter. Hence, we call it the three-filters-to-normal (3F2N) SNE.
A 3D point $\mathbf{p}_i = [x, y, z]^\top$ in the CCS can be transformed into a pixel $\hat{\mathbf{p}}_i = [u, v]^\top$ using [19]:

$$z\begin{bmatrix}\hat{\mathbf{p}}_i \\ 1\end{bmatrix} = \mathbf{K}\mathbf{p}_i = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix}, \tag{13}$$

where $\mathbf{K}$ is the camera intrinsic matrix, $[u_0, v_0]^\top$ is the image principal point, and $f_x$ and $f_y$ are the camera focal lengths (in pixels) in the x and y directions, respectively. Combining (1) and (13) results in:

$$\frac{1}{z} = -\frac{1}{b}\Big( n_x\frac{u - u_0}{f_x} + n_y\frac{v - v_0}{f_y} + n_z \Big). \tag{14}$$

Differentiating (14) with respect to u and v leads to:

$$\frac{\partial (1/z)}{\partial u} = -\frac{n_x}{b f_x}, \qquad \frac{\partial (1/z)}{\partial v} = -\frac{n_y}{b f_y}, \tag{15}$$

which can be approximated by respectively performing horizontal and vertical image gradient filters, e.g., Sobel, Scharr and Prewitt, on the inverse depth image (an image storing the values of 1/z). Rearranging (15) results in the following expressions of $n_x$ and $n_y$:

$$n_x = -b f_x \frac{\partial (1/z)}{\partial u}, \qquad n_y = -b f_y \frac{\partial (1/z)}{\partial v}. \tag{16}$$

Given an arbitrary $\mathbf{q}_{ij} \in Q_i$, we can compute the corresponding $n_{z_j}$ by plugging (16) into (1) (applied to both $\mathbf{p}_i$ and $\mathbf{q}_{ij}$):

$$n_{z_j} = -\frac{n_x \Delta x_{ij} + n_y \Delta y_{ij}}{\Delta z_{ij}}, \tag{17}$$

where $\mathbf{r}_{ij} = \mathbf{q}_{ij} - \mathbf{p}_i = [\Delta x_{ij}, \Delta y_{ij}, \Delta z_{ij}]^\top$. In this paper, $k = 8$ and $Q_i$ is an 8-connected neighborhood. Since (16) and (17) have a common factor of $-b$, which does not affect the direction of $\mathbf{n}_i$, they can be simplified as:

$$n_x = f_x \frac{\partial (1/z)}{\partial u}, \qquad n_y = f_y \frac{\partial (1/z)}{\partial v}, \qquad n_z = \Phi\Big\{ -\frac{n_x \Delta x_{ij} + n_y \Delta y_{ij}}{\Delta z_{ij}} \Big\}_{j=1,\dots,k}, \tag{18}$$

where $\Phi\{\cdot\}$ is a mean or median operator used to estimate $n_z$. Please note: if the depth value of $\mathbf{p}_i$ is identical to those of all its neighboring points $\mathbf{q}_{ij} \in Q_i$, we consider the direction of its surface normal to be perpendicular to the image plane and simply set $\mathbf{n}_i$ to $[0, 0, -1]^\top$. The performances of estimating $\mathbf{n}_i$ using the mean filter and using the median filter are compared in Section IV. Specifically, for a stereo camera, $f_x = f_y = f$, and the relationship between the depth $z$ and the disparity $d$ is as follows:

$$z = \frac{f\,t_c}{d}, \tag{19}$$

where $t_c$ is the stereo rig baseline. Therefore,

$$\frac{\partial (1/z)}{\partial u} = \frac{1}{f\,t_c}\frac{\partial d}{\partial u}, \qquad \frac{\partial (1/z)}{\partial v} = \frac{1}{f\,t_c}\frac{\partial d}{\partial v}. \tag{20}$$

Plugging (19) and (20) into (18) results in:

$$n_x = \frac{1}{t_c}\frac{\partial d}{\partial u}, \qquad n_y = \frac{1}{t_c}\frac{\partial d}{\partial v}, \qquad n_z = \Phi\Big\{ -\frac{n_x \Delta x_{ij} + n_y \Delta y_{ij}}{\Delta z_{ij}} \Big\}_{j=1,\dots,k}. \tag{21}$$

Therefore, our SNE can also estimate surface normals from a disparity image using the same three filters.
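To make the pipeline concrete, the following is a minimal, single-threaded C++ sketch of (18) (illustrative only, not our optimized released implementation); it uses the basic [-1, 0, 1] gradient kernel discussed in Section IV and the median operator for $\Phi$, and skips border pixels:

```cpp
// Minimal 3F2N sketch on a dense depth image stored row-major in `depth`
// (width*height values, in meters), with pinhole intrinsics fx, fy, u0, v0.
#include <algorithm>
#include <cmath>
#include <vector>

struct Normal { float x, y, z; };

std::vector<Normal> threeFiltersToNormal(const std::vector<float>& depth,
                                         int width, int height,
                                         float fx, float fy, float u0, float v0)
{
    // Inverse depth image (1/z).
    std::vector<float> invZ(depth.size());
    for (std::size_t i = 0; i < depth.size(); ++i) invZ[i] = 1.0f / depth[i];

    std::vector<Normal> normals(depth.size(), Normal{0.f, 0.f, -1.f});
    auto at = [&](int u, int v) { return v * width + u; };

    for (int v = 1; v < height - 1; ++v) {
        for (int u = 1; u < width - 1; ++u) {
            // Filters 1 and 2: horizontal/vertical gradients of 1/z (BG kernel).
            const float gu = invZ[at(u + 1, v)] - invZ[at(u - 1, v)];
            const float gv = invZ[at(u, v + 1)] - invZ[at(u, v - 1)];
            float nx = fx * gu;
            float ny = fy * gv;

            // Back-project the center pixel p_i.
            const float z = depth[at(u, v)];
            const float x = (u - u0) * z / fx, y = (v - v0) * z / fy;

            // Filter 3: median of the n_z candidates over the 8-connected neighborhood.
            std::vector<float> nz_candidates;
            for (int dv = -1; dv <= 1; ++dv)
                for (int du = -1; du <= 1; ++du) {
                    if (du == 0 && dv == 0) continue;
                    const float zq = depth[at(u + du, v + dv)];
                    const float dz = zq - z;
                    if (std::fabs(dz) < 1e-7f) continue;        // neighbor at the same depth
                    const float dx = (u + du - u0) * zq / fx - x;
                    const float dy = (v + dv - v0) * zq / fy - y;
                    nz_candidates.push_back(-(nx * dx + ny * dy) / dz);
                }
            float nz = -1.f;                                    // default: facing the camera
            if (!nz_candidates.empty()) {
                std::nth_element(nz_candidates.begin(),
                                 nz_candidates.begin() + nz_candidates.size() / 2,
                                 nz_candidates.end());
                nz = nz_candidates[nz_candidates.size() / 2];
            }
            if (nz > 0.f) { nx = -nx; ny = -ny; nz = -nz; }     // resolve the sign left by -b
            const float norm = std::sqrt(nx * nx + ny * ny + nz * nz);
            if (norm > 0.f) normals[at(u, v)] = Normal{nx / norm, ny / norm, nz / norm};
        }
    }
    return normals;
}
```

The disparity-image variant in (21) only changes the gradient inputs and the constant factors; the rest of the per-pixel arithmetic is identical.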

IV. EXPERIMENTAL RESULTS

A. Datasets and Evaluation
In our experiments, we used 24 3D mesh models from Free3D (free3d.com) to create three datasets (eight models in each dataset).
According to their different difficulty levels, we name our datasets "easy", "medium" and "hard", respectively. Each 3D mesh model is first fixed at a certain position. A virtual range sensor with pre-set intrinsic parameters is then used to capture depth images from 1800-2500 different viewpoints. At each viewpoint, a 480 × 640 pixel depth image is generated by rendering the 3D mesh model using the OpenGL Shading Language (GLSL). However, since the OpenGL rendering process applies linear interpolation by default, rendering surface normal images directly is infeasible. Hence, the surface normal of each triangle, constructed from three mesh vertices, is considered to be the ground-truth surface normal of any 3D point residing on this triangle. Our datasets are publicly available at: sites.google.com/view/3f2n. In addition to our datasets, we also utilize the DIODE dataset [20] to evaluate SNE performance.
Furthermore, we utilize two metrics, a) the average angular error (AAE) $e_A$ and b) the proportion of good pixels (PGP) $e_P$ [6], to quantify the SNE accuracy:

$$e_A = \frac{1}{m}\sum_{k=1}^{m} \cos^{-1}\!\frac{\mathbf{n}_k \cdot \hat{\mathbf{n}}_k}{\|\mathbf{n}_k\|_2\,\|\hat{\mathbf{n}}_k\|_2}, \qquad e_P = \frac{1}{m}\sum_{k=1}^{m} \delta\!\left(\cos^{-1}\!\frac{\mathbf{n}_k \cdot \hat{\mathbf{n}}_k}{\|\mathbf{n}_k\|_2\,\|\hat{\mathbf{n}}_k\|_2} < \varphi\right),$$

where $\delta(\cdot)$ equals 1 if its argument holds and 0 otherwise, $m$ is the number of 3D points used for evaluation, $\varphi$ is the angular error tolerance, and $\mathbf{n}_k$ and $\hat{\mathbf{n}}_k$ are the estimated and ground-truth surface normals, respectively. In addition to accuracy, we also record the SNE processing time $t$ (ms) and introduce a new metric, $\pi$, which combines $e_A$ and $t$ to quantify the trade-off between the speed and accuracy of a given SNE. A fast and precise SNE achieves a low $\pi$ score.
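For reference, a minimal C++ sketch of the two accuracy metrics (illustrative only, not our evaluation code) is given below:

```cpp
// Minimal sketch of the accuracy metrics: the angular error between each
// estimated and ground-truth normal is averaged (AAE) and, separately,
// compared against the tolerance phi and averaged (PGP).
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
#include <vector>

void evaluateAccuracy(const std::vector<Eigen::Vector3d>& estimated,
                      const std::vector<Eigen::Vector3d>& groundTruth,
                      double phi,        // angular error tolerance (radians)
                      double& aae, double& pgp)
{
    double errorSum = 0.0;
    std::size_t good = 0;
    for (std::size_t k = 0; k < estimated.size(); ++k) {
        double c = estimated[k].normalized().dot(groundTruth[k].normalized());
        c = std::max(-1.0, std::min(1.0, c));      // clamp against rounding errors
        const double error = std::acos(c);
        errorSum += error;
        if (error < phi) ++good;
    }
    aae = errorSum / estimated.size();
    pgp = static_cast<double>(good) / estimated.size();
}
```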

B. Filter Settings and Implementation Details
As discussed in Section III, $n_x$ and $n_y$ can be estimated by convolving an inverse depth image or a disparity map with image convolutional kernels, e.g., Sobel, Scharr, Prewitt, etc. Hence, in our experiments, we first compare the accuracy of the surface normals estimated using the aforementioned convolutional kernels. Then, a brute-force search is utilized to find the best parameters for a 3 × 3 kernel. Our experiments illustrate that the basic gradient (BG) kernel, i.e., [-1, 0, 1], achieves the best overall performance.
We implement the proposed SNE in Matlab, C and C++ on a CPU and in CUDA on a GPU. The source code is publicly available at: sites.google.com/view/3f2n. Similar to the FALS, SRI and LINE-MOD SNE implementations provided in the opencv_contrib repository, we use the advanced vector extensions 2 (AVX2) and streaming SIMD (single instruction, multiple data) extensions (SSE) instruction sets to optimize our C++ implementation. Since our approach estimates surface normals from an 8-connected neighborhood, we also use memory alignment strategies to speed up our SNE. In the GPU implementation, we first create a texture object in the GPU texture memory and then bind this object to the address of the input depth/disparity image, which greatly reduces the number of requests to the GPU global memory.

C. Performance Evaluation
We first compare the performances of the proposed SNE with respect to different image gradient filters (BG, Sobel, Scharr and Prewitt) and the mean/median filter. The $e_A$ scores with respect to the easy, medium and hard datasets are illustrated in Figure 2. The runtime of our implementations on an Intel Core i7-8700K CPU (using a single thread) and three state-of-the-art GPUs (Jetson TX2, GTX 1080 Ti and RTX 2080 Ti) is given in Tables I and II, respectively. We can see that BG outperforms Sobel, Scharr and Prewitt in terms of $e_A$ on all datasets. Also, using the median filter achieves better surface normal accuracy than using the mean filter, because an $n_z$ candidate in (17) can differ significantly from the ground-truth value, introducing significant noise into the mean filter. The $e_A$ scores achieved using BG-Median SNE are approximately 1.0°, 0.8° and 0.1° (with respect to the easy, medium and hard datasets, respectively) lower than those obtained using BG-Mean SNE. Furthermore, Figure 3 illustrates the values of $e_A$ with respect to different filter sizes, where readers can see that $e_A$ decreases gradually as the filter size increases. However, the median filter is much more computationally intensive and time-consuming than the mean filter, because it needs to sort eight $n_z$ candidates and find the median value.
From Tables I and II, we can observe that both BG-Mean SNE and BG-Median SNE perform much faster than real time across different computing platforms. The processing speed of BG-Mean SNE is over 1 kHz and 21 kHz on the Jetson TX2 GPU and the RTX 2080 Ti GPU, respectively. Furthermore, BG-Mean SNE performs around 1.4 to 2.1 times faster than BG-Median SNE. Thus, BG-Median SNE achieves the best surface normal accuracy, while BG-Mean SNE achieves the best processing speed. Moreover, we compare our SNE with all the other computer vision-based SNEs mentioned in Section II. Some examples of the experimental results are shown in Figure 4, where it can be seen that the bad estimates mainly reside on the object edges. Additionally, Figure 5 shows comparisons of $e_A$ on the easy, medium and hard datasets, where we can see that BG-Median SNE achieves the best $e_A$ score on the easy dataset, while AngleWeighted SNE achieves the best $e_A$ scores on the medium and hard datasets. Meanwhile, the $e_A$ scores achieved by BG-Median SNE and AngleWeighted SNE are very similar. The runtime (C++ implementations using a single thread) and $\pi$ scores achieved by the aforementioned SNEs are given in Table III, where we can observe that the averaging-based SNEs are the most time-consuming, while BG-Mean SNE achieves the fastest processing speed. Furthermore, BG-Mean, FALS and BG-Median SNEs occupy the first three places, respectively, in terms of $\pi$ score. Moreover, Table IV compares their PGP scores with respect to different $\varphi$ on the easy, medium and hard datasets, where we can see that AngleWeighted SNE achieves the best $e_P$ scores, except for $\varphi$ = 10° (hard dataset). However, according to Table III, AngleWeighted SNE is extremely time-consuming and achieves a very poor $\pi$ score. On the other hand, BG-Median SNE and AngleWeighted SNE achieve similar $e_P$ scores, but the former performs about 100 times faster than the latter.
In addition to our created datasets, we also use the DIODE dataset [20] to compare the performances of the above-mentioned SNEs. Examples of our experimental results are shown in Figure 6. The runtime and average angular errors obtained by the different SNEs are given in Table V, where it can be seen that BG-Mean SNE is the fastest among all SNEs, while BG-Median SNE achieves the lowest average angular errors. Therefore, 3F2N SNE outperforms all other state-of-the-art computer vision-based SNEs in terms of both accuracy and speed. Researchers can use either BG-Mean SNE or BG-Median SNE in their work, according to their demand for speed or accuracy.

V. DISCUSSION
An SNE can be applied in a variety of computer vision and robotics tasks. In this section, we first use the ICL-NUIM RGB-D dataset [21] to show an example of 3D geometry reconstruction benefiting from 3F2N SNE. Then, we discuss the possibility of using 3F2N SNE to improve the performance of state-of-the-art CNNs.
In our experiments, we first utilize an off-the-shelf registration algorithm provided by the Point Cloud Library (PCL) to match the 3D point cloud generated from each depth image with a global 3D geometry model. The sensor poses and motion trajectory can then be obtained. Meanwhile, we integrate the surface normal information into the point cloud registration process and acquire another collection of sensor poses and motion trajectories. Then, we utilize ElasticFusion [22], a real-time dense visual simultaneous localization and mapping (SLAM) system, to reconstruct the 3D scenery using the input RGB-D data and the two collections of sensor poses and motion trajectories. Two reconstructed 3D scenes are illustrated in Figure 7, where it is obvious that the proposed SNE can improve the 3D geometry reconstruction accuracy. According to the quantitative analysis of our experimental results, the 3D reconstruction accuracy can be improved by approximately 19% when using the surface normal information obtained by 3F2N SNE. Furthermore, we perform 3F2N SNE on the disparity images provided in the SYNTHIA-SF dataset [23]. Examples of the experimental results are shown in Figure 8. It can be seen that the 3D points on each planar (or near-planar) surface, such as a road or a building facade, possess similar surface normals. Therefore, we believe that our proposed SNE can be utilized to extract informative features for CNNs in various autonomous driving perception tasks, such as semantic image segmentation and freespace detection, without affecting their training/prediction speed.
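As an illustration of how the estimated normals can be injected into the registration step mentioned above, the following is a minimal PCL-based sketch using point-to-plane ICP; this is an illustrative assumption about the registration back-end rather than the exact off-the-shelf algorithm used in our experiments:

```cpp
// Minimal sketch: register a source cloud against a target cloud with
// point-to-plane ICP, where both clouds carry the normals estimated by 3F2N
// packed into pcl::PointNormal (x, y, z, normal_x, normal_y, normal_z).
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

Eigen::Matrix4f registerWithNormals(
    const pcl::PointCloud<pcl::PointNormal>::Ptr& source,
    const pcl::PointCloud<pcl::PointNormal>::Ptr& target)
{
    pcl::IterativeClosestPointWithNormals<pcl::PointNormal, pcl::PointNormal> icp;
    icp.setInputSource(source);
    icp.setInputTarget(target);

    pcl::PointCloud<pcl::PointNormal> aligned;
    icp.align(aligned);                    // minimizes a point-to-plane error metric
    return icp.getFinalTransformation();   // estimated sensor pose increment
}
```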

VI. CONCLUSION AND FUTURE WORK
In this paper, we presented a precise and ultrafast SNE named 3F2N for structured range data. Our proposed SNE can compute surface normals from an inverse depth image or a disparity image using three filters, namely, a horizontal image gradient filter, a vertical image gradient filter and a mean/median filter. To evaluate the performance of our proposed SNE, we created three datasets (containing about 60k pairs of depth images and the corresponding surface normal ground truth) using 24 3D mesh models. Our datasets are publicly available at https://sites.google.com/view/3f2n for research purposes. According to our experimental results, BG outperforms other image gradient filters, e.g., Sobel, Scharr and Prewitt, in terms of both precision and speed. BG-Median SNE achieves the best surface normal precision (1.6°, 5.6° and 15.3° on the easy, medium and hard datasets, respectively), while BG-Mean SNE is most effective at minimizing the trade-off between speed and accuracy. Furthermore, our proposed 3F2N SNE achieves better overall performance than all other computer vision-based SNEs. We believe that our SNE can be easily applied in various computer vision and robotics tasks, e.g., autonomous driving.
As future work, we plan to use the proposed method to help learn depth prediction from monocular images, as many existing methods have already exploited the constraints between depth and surface normals in monocular depth prediction.

Fig. 1. Surface normal estimation from depth/disparity images: (a) and (b) show three examples of RGB and depth images from the Augmented ICL-NUIM dataset [1], respectively; (d) and (e) show three examples of RGB and disparity images from the Tsukuba stereo dataset [2], respectively; (c) and (f) show the surface normals estimated from (b) and (e), respectively, using the proposed SNE.

TABLE I: THE RUNTIME (MS) OF THE CPU IMPLEMENTATIONS (USING A SINGLE THREAD) WITH RESPECT TO DIFFERENT IMAGE GRADIENT FILTERS AND MEAN/MEDIAN FILTERS.

TABLE III: THE COMPARISONS OF RUNTIME (MS) AND π SCORES AMONG DIFFERENT COMPUTER VISION-BASED SNES.

TABLE IV: e_P COMPARISON AMONG DIFFERENT COMPUTER VISION-BASED SNES WITH RESPECT TO DIFFERENT ϕ ON THE EASY, MEDIUM AND HARD DATASETS.

TABLE V: THE RUNTIME (MS) AND e_A COMPARISONS AMONG DIFFERENT COMPUTER VISION-BASED SNES ON THE DIODE DATASET.