Bird's Eye View (hereinafter referred to as BEV) can be understood as a top-down, "God's-eye" perspective that surveys the whole scene from above. Data collected by the multiple sensors on the vehicle body is fed into a unified model for joint reasoning. The resulting bird's-eye view expresses all sensor data from the same perspective, which effectively avoids the accumulation of errors and addresses the multi-sensor data fusion and judgment problem in autonomous driving.
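The core of the BEV representation is projecting sensor data into one shared top-down frame. As a minimal illustration (not Nexdata's implementation, and with illustrative grid extents), lidar points in the ego frame can be rasterized onto a top-down occupancy grid like this:

```python
import numpy as np

def points_to_bev_grid(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), cell=0.5):
    """Project ego-frame lidar points (N, 3) onto a top-down occupancy grid.

    Cells covered by at least one point are marked occupied. The 100 m x 100 m
    extent and 0.5 m resolution are illustrative defaults, not values from any
    specific pipeline.
    """
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((h, w), dtype=np.uint8)
    # discretize x/y coordinates into cell indices; z is dropped (top-down view)
    xs = ((points[:, 0] - x_range[0]) / cell).astype(int)
    ys = ((points[:, 1] - y_range[0]) / cell).astype(int)
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    grid[ys[inside], xs[inside]] = 1
    return grid
```

Because every sensor's points are rasterized into the same grid, data from different sensors can be overlaid cell-for-cell without re-projection.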
In BEV space, because all data shares the same coordinate system, time-series fusion can be performed to form a 4D space. However, given the enormous number of points involved, conventional 3D annotation techniques clearly cannot meet this need, so 4D annotation technology for BEV has begun to attract attention and see adoption in the industry.
4D-BEV annotation technology extends data annotation into a fourth dimension: the time series. Working on a bird's-eye view, annotators label objects such as vehicles, pedestrians, and traffic signs, recording their position, size, and other attributes. The timeline annotation also records when each object enters and exits the scene, helping the algorithm track object trajectories more accurately and thereby improving the safety and decision support of autonomous driving.
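A 4D annotation therefore bundles a per-frame 3D box with track-level timing. A minimal sketch of such a record (the field names are illustrative, not a published Nexdata schema):

```python
from dataclasses import dataclass, field

@dataclass
class TrackedBox:
    """One annotated object tracked across a 4D (3D + time) clip."""
    track_id: int
    category: str        # e.g. "vehicle", "pedestrian", "traffic_sign"
    enter_ts: float      # timestamp at which the object first appears
    exit_ts: float       # timestamp at which it leaves the scene
    # per-frame pose: timestamp -> (x, y, z, length, width, height, yaw)
    frames: dict = field(default_factory=dict)

    def visible_at(self, ts: float) -> bool:
        """An object is queryable only inside its recorded lifetime."""
        return self.enter_ts <= ts <= self.exit_ts
```

The enter/exit timestamps are what let a downstream tracker interpolate a trajectory instead of treating each frame's box independently.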
To help customers build large volumes of high-quality 4D-BEV ground-truth data more quickly and at lower cost for perception training and evaluation, Nexdata has launched a 4D-BEV annotation solution.
Nexdata's 4D annotation tool annotates both 3D space and the time-series dimension, adopts a variety of sensor-fusion methods, and supports multiple data types, including lidar, millimeter-wave radar, cameras, and camera position maps. It also supports data alignment and fusion, and the platform's built-in pre-recognition annotation technology quickly improves annotation efficiency and accuracy.
Annotation tool highlights
Supports billions of points with smooth processing of large-scale data
The 4D point cloud annotation template is rendered with Potree, a WebGL-based point cloud visualization framework that can interactively display large-scale point clouds in the browser. It controls the resolution of the octree with parameters such as camera position, distance from the point cloud to the camera, and point density, implementing Level of Detail (LOD) rendering and fast spatial queries on the point cloud data, so loading large-scale point clouds stays smooth.
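The LOD idea can be sketched in a few lines: an octree node is refined (its children loaded) only while its projected size on screen is large enough to matter. This is a simplified stand-in for the Potree-style heuristic, with illustrative constants:

```python
import math

def should_refine(node_center, node_size, camera_pos,
                  screen_height_px=1080, fov_deg=60.0, min_px=100.0):
    """Decide whether an octree node needs its children loaded.

    A node far from the camera projects to few pixels, so its coarse
    points suffice; a nearby node is refined. All constants here are
    illustrative defaults, not Potree's actual parameters.
    """
    dist = math.dist(node_center, camera_pos)
    if dist < 1e-6:
        return True  # camera is inside the node: always refine
    # approximate projected height of the node in pixels
    slope = math.tan(math.radians(fov_deg) / 2.0)
    projected_px = (node_size / dist) * (screen_height_px / (2.0 * slope))
    return projected_px > min_px
```

Because only the visible, nearby parts of the octree are loaded at full resolution, the total number of points rendered per frame stays bounded regardless of the dataset's size.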
Per-frame mapping parameters obtained from the data set to avoid parameter deviation
In 4D raw data, each clip has only a single aggregated point cloud but corresponds to many camera frames, and therefore to many sets of mapping parameters. The Nexdata 4D-BEV tool obtains the mapping parameters from the data set in the background, avoiding the parameter deviation that multi-frame images would otherwise introduce. In this way, each image frame gets its own mapping parameters, which greatly helps the mapping accuracy of 4D fusion tasks.
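Concretely, mapping a point from the aggregated cloud into one image frame uses that frame's own extrinsics and intrinsics. A minimal pinhole-projection sketch (matrix names are illustrative assumptions, not the tool's internal API):

```python
import numpy as np

def project_to_frame(pt_world, T_world_to_cam, K):
    """Project one world-frame 3D point into one camera frame.

    Each image frame carries its own 4x4 extrinsic matrix T_world_to_cam
    and 3x3 intrinsics K, so the same aggregated point cloud maps
    correctly into every frame of the clip.
    """
    p = T_world_to_cam @ np.append(pt_world, 1.0)  # world -> camera frame
    if p[2] <= 0:
        return None                                 # behind the camera
    uv = K @ (p[:3] / p[2])                         # perspective divide
    return uv[0], uv[1]
```

Reusing a single set of parameters for every frame would shift the projected pixels whenever the vehicle moved between frames; looking up `T_world_to_cam` per frame is what keeps the overlay aligned.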
Personalized color settings to accurately identify point cloud targets
The Nexdata 4D-BEV tool supports adjustable color values. For different point cloud data, annotators can adjust the point cloud coloring according to their own color sensitivity, making target categories in the point cloud easier to distinguish and significantly improving annotation efficiency.
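One common way such coloring works is to map a point attribute, for example height, through an adjustable gradient. A minimal sketch, assuming a simple blue-to-red ramp with a user-tunable gamma knob (the specific controls in the Nexdata tool may differ):

```python
import numpy as np

def colorize_by_height(points, z_min=-2.0, z_max=4.0, gamma=1.0):
    """Map each point's height to an adjustable blue -> red gradient.

    z_min/z_max and the gamma knob stand in for per-annotator color
    controls; the values here are illustrative.
    """
    # normalize height to [0, 1], then apply the user's gamma curve
    t = np.clip((points[:, 2] - z_min) / (z_max - z_min), 0.0, 1.0) ** gamma
    colors = np.zeros((len(points), 3))
    colors[:, 0] = t          # red grows with height
    colors[:, 2] = 1.0 - t    # blue fades with height
    return colors
```

Lowering gamma stretches contrast near the ground plane, which is where lane lines and curbs tend to sit, so annotators can tune the ramp to the scene they are labeling.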
Built-in preloading function effectively improves labeling efficiency
When a clip corresponds to many images, the preloading function lets annotators set the number of frames to preload, so loading and annotation proceed in parallel. There is no need to wait for all data to load, which improves annotation efficiency.
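The pattern behind this is a sliding prefetch window: while frame N is on screen, the next few frames load in background threads. A minimal sketch, where `load_frame` and the window size are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def make_prefetcher(load_frame, num_frames, window=4):
    """Return a frame getter that keeps the next `window` frames loading
    in the background while the current one is being annotated.

    `load_frame` is any callable taking a frame index (e.g. reading an
    image from disk); `window` is an illustrative default.
    """
    pool = ThreadPoolExecutor(max_workers=window)
    futures = {}

    def get(idx):
        # schedule idx plus the frames just ahead of it, once each
        for i in range(idx, min(idx + window, num_frames)):
            if i not in futures:
                futures[i] = pool.submit(load_frame, i)
        # block only on the frame actually requested
        return futures[idx].result()

    return get
```

By the time the annotator advances, the next frame's future has usually already resolved, so stepping through the clip feels instantaneous.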
Mature and efficient pre-identification annotation processing capabilities
Nexdata's 4D annotation template has pre-recognition capability and automatically identifies annotation targets; annotators only need to make fine adjustments to complete an annotation task quickly, greatly improving annotation efficiency.
4D lane marking
The customer needed to use continuous-frame lidar point cloud data with the corresponding lidar global pose information, together with the corresponding calibrated image data, as the raw data source, and to perform stacked-frame annotation over the continuous frames of each clip within a given time span. After frame stacking, the main lane-line categories to annotate are solid lines, dashed lines, double solid lines, double dashed lines, diversion lines, and so on; the 2D lane lines mapped into each camera image are then adjusted to fit precisely.
4D segmentation annotation
The customer's requirement was to use the data and pose parameters of a given sequence of frames, reconstruct the sequence according to those pose parameters, and annotate static semantic segmentation on the stacked frames. The main categories are green vegetation, drivable areas, unknown obstacles, and so on.
With years of data processing experience and one-stop data solutions, Nexdata has established in-depth cooperation with hundreds of autonomous driving companies worldwide, covering OEMs, emerging carmakers, leading technology companies, mainstream algorithm companies, and the world's top Tier 1 suppliers. Going forward, Nexdata will continue to increase investment in research and development, keep improving its AI infrastructure, and help users train and deploy artificial intelligence applications more conveniently.
Contact Nexdata to get a free trial of our 4D data annotation tool.
As an artificial intelligence data service company, Nexdata has accumulated 200,000 hours of speech datasets, 800 TB of computer vision datasets, 2 billion text datasets, and more. The data quality has been validated by the world's leading AI companies and has helped customers improve the performance of their AI models. We have also compiled a series of popular ready-made product datasets covering scenarios such as conversational AI, autonomous vehicles, smart home, and new retail.