
today in Drive Labs we’re talking about our LidarNet deep neural network
an end-to-end DNN that uses only lidar data
to semantically understand the entire scene around the car
and compute 3D bounding boxes around objects in the scene
lidar can help a self-driving car construct
a detailed 3D picture of what’s around it
in this clip, we see the first stage of
our multi-view LidarNet DNN
the top panel shows the input lidar scan data
while the middle panel shows this lidar data
segmented into dynamic object classes
such as cars, pedestrians, and bicyclists
as well as static elements
such as drivable space, sidewalks, buildings, trees, and poles
the segmentation output is then projected
into a top-down or "bird's eye view" as shown in the bottom panel
this view contains both the semantics and
height information for each processed lidar data point
with each point colorized based on its semantic class
the bird’s eye view representation is then
fed into the second stage of multi-view LidarNet
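
To make the projection step concrete, here is a minimal Python sketch of how semantically labeled lidar points could be binned into a bird's eye view grid that keeps per-cell height and class information. The grid extent, cell size, and function name are illustrative assumptions, not the actual LidarNet implementation:

```python
import numpy as np

GRID_RANGE = 50.0  # assumed extent in meters around the ego car
CELL_SIZE = 0.25   # assumed BEV cell size in meters
GRID_DIM = int(2 * GRID_RANGE / CELL_SIZE)

def lidar_to_bev(points, labels):
    """points: (N, 3) array of x, y, z in meters; labels: (N,) semantic class ids."""
    bev_height = np.full((GRID_DIM, GRID_DIM), -np.inf, dtype=np.float32)
    bev_class = np.zeros((GRID_DIM, GRID_DIM), dtype=np.int32)

    # Map x/y coordinates to grid indices and drop points outside the grid.
    ix = ((points[:, 0] + GRID_RANGE) / CELL_SIZE).astype(int)
    iy = ((points[:, 1] + GRID_RANGE) / CELL_SIZE).astype(int)
    keep = (ix >= 0) & (ix < GRID_DIM) & (iy >= 0) & (iy < GRID_DIM)

    for x, y, z, c in zip(ix[keep], iy[keep], points[keep, 2], labels[keep]):
        # Keep the highest point per cell along with its semantic class,
        # so the BEV carries both height and semantics per cell.
        if z > bev_height[y, x]:
            bev_height[y, x] = z
            bev_class[y, x] = c
    return bev_height, bev_class
```

Colorizing each cell by `bev_class` would yield a view like the bottom panel described above.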

the second stage operates in top-down or bird's eye view
it's trained to predict 2D bounding boxes
for dynamic objects identified by the DNN’s first stage
here we see the vehicle bounding box proposals visualized in white
these raw detections will be post-processed to compute the final bounding box prediction
as well as to compute different object instances
as shown in this video
where we see the output of multi-view LidarNet’s second stage
post-processed by our lidar object tracker
which tracks different object instances across
data frames and uses the 2D bounding boxes and lidar point
geometry to compute 3D bounding boxes for each object instance
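
As a rough illustration of how a 2D bird's eye view box could be lifted to 3D from the lidar point geometry, the following sketch assumes axis-aligned boxes and simple min/max height extraction; the box format and helper name are hypothetical:

```python
import numpy as np

def lift_box_to_3d(box2d, points):
    """box2d: (x_min, y_min, x_max, y_max) in BEV meters.
    points: (N, 3) lidar points. Returns (x_min, y_min, z_min,
    x_max, y_max, z_max), or None if no points fall inside."""
    x_min, y_min, x_max, y_max = box2d
    inside = ((points[:, 0] >= x_min) & (points[:, 0] <= x_max) &
              (points[:, 1] >= y_min) & (points[:, 1] <= y_max))
    if not inside.any():
        return None  # proposal has no lidar support
    z = points[inside, 2]
    # The vertical extent comes from the point geometry inside the box.
    return (x_min, y_min, float(z.min()), x_max, y_max, float(z.max()))
```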
different object classes are denoted by different
bounding box shapes: rectangles for cars and cylinders for pedestrians
while different object instances are shown in different colors
the ego car is shown in yellow at the center
drivable space is shown by the concentric green lines
cyan denotes all points that do not belong to drivable space
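
For intuition only, a cross-frame tracker could associate object instances by nearest centroid, as in this simplified sketch; the actual lidar object tracker is more sophisticated, and the distance gate and function names here are assumptions:

```python
import numpy as np

def associate(prev_tracks, detections, max_dist=2.0):
    """prev_tracks: dict of track_id -> (x, y) centroid from the last frame.
    detections: list of (x, y) centroids in the current frame.
    Returns dict of track_id -> detection index; unmatched detections
    would start new tracks. max_dist (meters) is an assumed gate."""
    matches = {}
    used = set()
    for tid, prev in prev_tracks.items():
        best, best_d = None, max_dist
        for i, det in enumerate(detections):
            if i in used:
                continue
            # Greedy nearest-centroid association within the gate.
            d = float(np.hypot(det[0] - prev[0], det[1] - prev[1]))
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches
```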
to complement the DNN output
our lidar processing software also computes low-level geometric fences
around unusually shaped physical boundaries where the car cannot drive, as shown by the magenta lines
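
One simple way to derive such a fence, sketched below under assumptions of our own (polar binning around the ego car and hypothetical names, not the actual implementation), is to keep the nearest non-drivable point in each direction and connect those points into a boundary polyline:

```python
import numpy as np

def geometric_fence(obstacle_points, num_bins=360):
    """obstacle_points: (N, 2) x/y of points outside drivable space.
    Returns an (M, 2) polyline: the nearest obstacle per angular bin."""
    angles = np.arctan2(obstacle_points[:, 1], obstacle_points[:, 0])
    dists = np.linalg.norm(obstacle_points, axis=1)
    bins = ((angles + np.pi) / (2 * np.pi) * num_bins).astype(int) % num_bins

    fence = []
    for b in range(num_bins):
        in_bin = np.flatnonzero(bins == b)
        if in_bin.size:
            # The closest non-drivable point in this direction bounds travel.
            fence.append(obstacle_points[in_bin[np.argmin(dists[in_bin])]])
    return np.array(fence)
```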
LidarNet, along with the rest of our lidar
processing software stack, is designed for level 4 and level 5 autonomous
driving, and the 3D information it provides can be
combined with camera and radar perception to build an even more robust autonomous system
