Para-Lane
Multi-Lane Dataset Registering Parallel Scans for Benchmarking Novel View Synthesis
International Conference on 3D Vision (3DV) 2025

1Autonomous Driving Lab, CaiNiao Inc., Alibaba Group, Hangzhou     2Baidu Research, Sunnyvale

Abstract


To evaluate end-to-end autonomous driving systems, a simulation environment based on Novel View Synthesis (NVS) techniques is essential. Such an environment synthesizes photo-realistic images and point clouds from previously recorded sequences under new vehicle poses, particularly in cross-lane scenarios, which makes a multi-lane dataset and benchmark necessary. While recent synthetic-scene NVS datasets have been prepared for cross-lane benchmarking, they still lack the realism of captured images and point clouds. To further assess the performance of existing methods based on NeRF and 3DGS, we present the first multi-lane dataset registering parallel scans for novel driving view synthesis, derived from real-world scans. It comprises 25 groups of associated sequences, including 16,000 front-view images, 64,000 surround-view images, and 16,000 LiDAR frames, all labeled to differentiate moving objects from static elements. Using this dataset, we evaluate the performance of existing approaches in testing scenarios at different lanes and distances. Additionally, we provide a solution for solving and assessing the quality of multi-sensor poses for multi-modal data alignment, which is required to curate such a dataset from the real world.

Sensor Setup and Scenes



We implemented an autonomous system equipped with one front-view camera, four surround-view fisheye cameras, and three 32-channel 3D laser scanners to scan and collect real-world scene data. All sensor frame timestamps are synchronized at the hardware level, and the sampling points from the three laser scanners are combined into a single LiDAR frame after motion compensation. In addition, the vehicle carries extra sensors, such as an Inertial Navigation System (INS), which provide a high-quality initial trajectory before the data alignment process.
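To make the merging step concrete, below is a minimal motion-compensation sketch in Python. This is not the authors' pipeline: the function names, the pose-interpolation callable, and the data layout are all hypothetical. The idea is to lift each point into world coordinates at its own capture time using the INS trajectory, then express it in the ego frame at the merged frame's reference timestamp.

import numpy as np

def merge_lidar_frame(scans, ego_pose_at, frame_time):
    """Motion-compensate and merge points from several LiDAR scanners.

    scans:       list of (points, stamps) per scanner, where points is (N, 3)
                 already in the ego frame via extrinsic calibration and
                 stamps is (N,) per-point capture times.
    ego_pose_at: callable t -> 4x4 ego-to-world pose, interpolated from the
                 INS trajectory (hypothetical interface).
    frame_time:  reference timestamp of the merged output frame.
    """
    world_to_ref = np.linalg.inv(ego_pose_at(frame_time))
    merged = []
    for points, stamps in scans:
        for p, t in zip(points, stamps):
            # Lift the point to world coordinates at its own capture time,
            # then bring it into the ego frame at the reference time.
            p_world = ego_pose_at(t) @ np.append(p, 1.0)
            merged.append((world_to_ref @ p_world)[:3])
    return np.asarray(merged)

A production pipeline would vectorize the per-point transforms, but the per-point loop keeps the compensation logic explicit.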

Dataset Visualization


Parallel scans captured from the left, middle, and right lanes.


Evaluation Metrics



We benchmarked a range of 3DGS-based methods on five different tracks: (1) Single-lane regression, (2) Adjacent-lane prediction, (3) Second-adjacent-lane prediction, (4) Adjacent-lane prediction (trained from two lanes), and (5) Sandwich-lane prediction (trained from the two side lanes). The image below shows the composition of the training sets (colored in blue) and testing sets (colored in red) for each track. For each track, we uniformly sample 200 frames from the training sequences for model learning and 25 frames from the test sequences as ground truth.
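As a concrete illustration of the split, a uniform sampling of frame indices can be reproduced as below; this is a sketch only, and the sequence lengths are illustrative rather than taken from the dataset.

import numpy as np

def uniform_sample(num_frames: int, k: int) -> np.ndarray:
    """Return k frame indices spread evenly over a sequence."""
    return np.round(np.linspace(0, num_frames - 1, k)).astype(int)

# Illustrative sequence lengths (not from the paper):
train_ids = uniform_sample(1200, 200)  # frames used for model learning
test_ids = uniform_sample(400, 25)     # held-out ground-truth views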

Benchmark Result



We observed the same trend for all methods: performance gradually decreases in the order Single > Sandwich > Two-for-One > Adjacent > Second-Adjacent. When the training and testing views lie on the same trajectory, all methods achieve their best NVS results. However, when the testing viewpoint undergoes lateral shifts, the results degrade to varying degrees.
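The per-track comparison relies on standard image-fidelity metrics for NVS; assuming the usual PSNR/SSIM/LPIPS suite (the exact metric set is detailed in the paper), a minimal PSNR reference for images in [0, 1] looks like this:

import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two images in [0, max_val]."""
    mse = float(np.mean((pred - gt) ** 2))
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

Higher PSNR means the rendering is closer to ground truth; lateral viewpoint shifts (the Adjacent and Second-Adjacent tracks) are what drive this score down.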

Dataset Download


If you'd like to access the dataset, please complete this form.
The dataset is currently undergoing final anonymization work and is not yet fully ready. It will be made publicly available as soon as this is complete, expected around March 15th.
Contact email: paralane_requests@outlook.com

Citation


@misc{ni2025paralanemultilanedatasetregistering,
    title={Para-Lane: Multi-Lane Dataset Registering Parallel Scans for Benchmarking Novel View Synthesis},
    author={Ziqian Ni and Sicong Du and Zhenghua Hou and Chenming Wu and Sheng Yang},
    year={2025},
    eprint={2502.15635},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2502.15635},
}

Thanks to Jiulong Xu for his assistance with data preparation.

The website template is based on XLD, which in turn adapted the Zip-NeRF template, itself borrowing from Michaël Gharbi and Ref-NeRF.