
>>Juan: Hello.
And welcome to my presentation.
My name is Juan Espineira.
I work for WMG in the UK.
I'm part of their
Facilities team.
Today, I'm going to be talking
about our main facility,
the 3xD Simulator,
and the system
I created for it
using Unreal Engine.
I'll start by talking
about my company, WMG.
We are part of the
University of Warwick.
Our main purpose is to link
primary research with industry.
We do this with
research partnerships.
We do projects together,
PhDs, et cetera.
We cover areas like
manufacturing, materials,
batteries, electronics,
sensors, and so on.
The group I'm part of is the
Intelligent Vehicles group.
We do everything
related to automotive.
We have over 70
people in our group

working on testing
and development
of thermal vehicles,
sensors, human factors,
and communication.
So let's start talking about
testing autonomous vehicles.
There have been estimates
of a few billion miles
that we need to drive
autonomous vehicles,
just to prove that they
are safer than humans.
Now, that's a lot of miles.
So how are we going to do it?
There are a few
different approaches.
One of them is
testing in simulation.
That's generally what
we use for software.
There is also testing
in the real world
in controlled environments.
These are purposely
built for testing.
And there are also public
environments where the government
allows companies to
test on public grounds
under certain
safety regulations.
Ideally, you always want
to test in the real world.
Now, the problem is testing in
the real world is dangerous.
But you want to test
your whole car somehow.

So what if we could
bring a whole vehicle
and plug it into a virtual world
and be able to do the testing?
That way, we can test
in a safe environment
in a repeatable manner.
So let me introduce you
to the 3xD Simulator.
This facility was built
four to five years ago.
And as you can see,
the main purpose
was we can drive any vehicle
inside connected to the system
and then do testing
and development.
So let's say you wanted to try
an autonomous emergency system.
So you could put
a person inside.
And that person could
drive, and the car
would think it's in the real world.

And as the accident in your
scenario happens, it will brake.
And you could test the
reaction of the person,
the reaction of the
system, and so on.
The whole thing is built
into a [INAUDIBLE] cage
because we do a lot
of comms projects.
So that's useful for
isolating the signals.
The visuals are
provided by a 360 screen
which uses eight projectors.
And we also have
some radar absorbing material
on the walls for when
we use our radar kits.
Now the system that originally
came with the Simulator
was custom made by a company.
And that company was very
good at doing the interface
for the hardware of the car.
So it could easily interface
from the simulation
to the car component.
However, there were
two major problems.
One of them is the
graphics were not great.
And the other problem is that
it was a commercial software.

So even though it was custom
made, it was protected.
So we couldn't go and modify
it and insert our own things.
And flexibility is very
important for research.
So why are we moving
to Unreal Engine then?
First of all, things
like graphic fidelity
is very useful, especially
for human factor trials
and also for showing off to people.
But the most important
part, I will say,
is the flexibility that
the Engine provides.
When we work in research,
we very often
have to provide new solutions
or adapt previous solutions,
or so on.
And that is something
that we couldn't
do on the previous system.
Or, that is we couldn't do
it for low cost or for free.
So for a project, we might need
to implement a specific sensor
model.
Or we might need to implement
a specific noise model.
Or we might need to
use specific 3D models.

Or we might need to create an
environment in a certain way.
We might need things
in that environment
to interact with each
other in a specific way.
And Unreal allows you
to do all of that.
And it also allows you to
implement things through C++
that are not in the Engine.
So, for example,
I have implemented
some kind of libraries to
communicate with a car.
And I've also implemented
some other code
in the Engine that has
proved very useful.
Also Blueprinting is
a very useful tool.
Because a lot of times,
we have researchers
that are very good at a
specific field, like sensors
or electronics or mechanics.
But they have a very basic
understanding of coding.
So if we use, let's say
an Open Source Simulator,
and they have to go
into the source code
and modify a sensor model or other
noise models there, that
is actually quite challenging.

Blueprinting does provide a
more simplified visual aid
of how these things work.
So the system I created using
Unreal has a few components.
And I will start
describing them now.
At the base of it all
is the Vehicle Template
that you can get on the editor.
There, you have your vehicle
with your built-in vehicle
dynamics that you might need
to modify to suit your needs.
You have an environment
with your roads, and so on.
So you can start by changing
environments and creating
your own roads and world
you want to navigate in.
And you do that the
same way you will do it
for any level of a
game you're designing.
This city is from
a bigger project.
It does happen to be a 3D
model from a lidar scan.
However, it doesn't need
to be that accurate.
You could also use assets
from the Marketplace.
This one, for example, has
been done purely with assets
from the Marketplace.

Those belong to a demo
that I will show you later.
But this is an environment
that we created in a few hours.
So next step, how
do we communicate
from an external application
to Unreal and from Unreal
to an external application?
The answer is this plugin.
So this plugin is
free in the Marketplace
and supports both TCP and UDP.
So what does it allow you to do?
Well, it gives you a
bunch of Blueprints
that allow you to
connect to a server
and also allow you to send
blocks of data to that server.
It also allows you to get a
function called when you receive
data from a server.
And you can also link
the data from the server
to the function.
So you can do
specific things when
you receive specific data.
Now, this next part is
a little bit heavy.
So at this point, we've
made a car game, or
something like that.

So how do we project
it into a 360 screen?
Well, the firm has projectors.
So generally what
you have to do is put
cameras in your real
environment to look
at where that screen
should be and then
take it to a projector.
However, it's not that simple.
Because that means you
need eight video ports
and also you need to run
eight instances of Unreal.
And you cannot do
it on one computer.
So you may need more
computers to do that.
So you need to find
a way in which you
can do that across multiple
computers, and also in such a way
that the computers
are synchronized.
That way, what you
see on each frame
is always
synchronized, so that it
looks like it's all happening
at once and not skipping around.
And that is what
nDisplay allows you to do.
nDisplay is a free
plugin that you can get.
And it allows you to
run multiple instances.
And it allows you to
synchronize all of them.
Generally, the way it works
is one of the instances

will be the master.
And this is where
you should probably
run all your important things.
And the other instances,
the secondary instances,
are going to be
for visualization.
These are the ones that
are going to go and command
the projectors.
So what we actually have
is that each projector
is commanded by one computer.
And that saves us a
lot of performance.
One thing I'm going to
gloss over is image warping and blending.
So we're projecting a flat
image into a curved screen.
So it's going to be distorted.
So you need to correct that.
We use a software
called [INAUDIBLE] which
does the correction itself.
However, you can also use
NVIDIA's API to do this,
if you know the math.
And I did a similar solution
to that with NVIDIA's API.
And it worked quite well,
even though I sort of
don't know the math that well.
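The warp math being glossed over can be sketched for the simple case. Assuming a pinhole render and a cylindrical screen centred on the viewer (an assumption for illustration; the real system used dedicated warp software), each screen azimuth theta samples the flat image at u = f * tan(theta):

```python
import math

def cylinder_to_flat_u(theta, f):
    """For a pinhole render with focal length f (in pixels), the source
    column for screen azimuth theta (radians) is f * tan(theta),
    measured from the image centre. This is the core of the warp."""
    return f * math.tan(theta)

def build_warp_lut(width, fov_deg, f):
    """Per-output-column lookup table of source x-coordinates for a
    cylindrical screen spanning +/- fov_deg / 2."""
    half = math.radians(fov_deg) / 2
    lut = []
    for i in range(width):
        theta = -half + (2 * half) * i / (width - 1)  # uniform in angle
        lut.append(width / 2 + cylinder_to_flat_u(theta, f))
    return lut
```

Resampling the flat render through such a lookup table (plus edge blending where projectors overlap) is the correction step the commercial software performs.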

So at this point, the simulation
is running the vehicle dynamics
in an environment
and is communicating
to an external
application or multiple.
And we also have a
projector in the screen.
So the last thing we have to
do is communicate with the car
somehow.
So how do we do this?
Well, modern cars communicate
to their components
through something
called the CANbus which
is a network in which they
can send identifying messages
to each other to transmit data.
So what you have to do
is tap into that network.
And you can do two things.
You can listen to messages.
So you can get data.
For example, what's the position
of the throttle or the angle
of the steering wheel?
Or are the light beams on?
Things like that.
You can also write to it.
So you can tell, for example,
the dials what the speed should
be, or the RPM dial what the
RPM of the Engine should be,
and so on.
To do this,
you need specific hardware.
I am showing you one below.
And if you can see it,
that is the connector
that you use for the CANbus.
The other end is generally
connected to a USB, and they
provide software to use it.
However, you can use MATLAB
for commanding this and using
your own algorithms, and so on.
If we put the
components together,
we get something like this
as a simplified connection diagram.
The Unreal environment
houses the simulation
with the physics, and so
on, and then communicates
to an Echo server.
And then the Echo
server reflects back
to all the clients.
The clients can also
send data to the server
that they generally
want Unreal to read.
And the Echo server
will send it to Unreal.
There's a more accurate
representation of the system.
You can see only the
Master Node of nDisplay

is actually connected
to the server.
That's because,
as I said before,
the secondary nodes are
only used for visualization.
So all the important things are
being run on the Master Node.
So what kind of data are
we extracting from Unreal
and sending to clients?
Well, things like speed and
RPM that can be fed to the car.
Things like xy
position that can be
fed to GPS for GPS
simulation or for navigation.
Position of entities
around the map
can be sent to something
like Sumo Traffic.
And other things include
things like sensor data.
So Radar and Lidar data
can be accepted and sent
to something like MATLAB,
where we could visualize it,
we can feed it
to a signal emulator,
or we can send it
to test the control
unit that requires that.
Then, we could also
feedback the control signal
from that control unit
to the simulation.
So the control unit
is actually driving

the car inside the simulation.
So the system can now pretty
much work as a driving simulator
where you can drive a virtual
car or your real car
and have the Unreal environment
projected around you.
So there's one
last thing we have
to do if we want to test
autonomous vehicles.
Because autonomous
vehicles not only
see the world with their
eyes, in their cameras,
but they also have other
things like radar, lidar,
and other sensors.
So vehicles see the world with
cameras, radar, lidar, GPS,
ultrasonic sensors, and so on.
So we have to convince all of
them, or at least the main ones
we are testing, that they are
in the real world.
There are generally
two ways in which
you can convince a sensor that
it's seeing something real.
You can feed data in
front of it or behind it.
The in-front example would
be you can have a camera.

And then you can put a
monitor in front of it
that has high dynamic range.
So it is bright
enough to feel real.
And then you show
what is supposed to be
seen from your vehicle.
Going behind the sensor means
that you actually just cut
the camera and then you
feed the video signal
straight to the cable
to your control unit.
There are advantages
to both approaches.
Obviously, if you
want to drive a car,
you may want to do it
in front of the sensor.
However, that is
not always possible.
Now, some sensors in Unreal
are generally easy to do.
Cameras are very easy.
You just need to render
a frame and so on.
GPS is also relatively
easy because it just
translates x and y.
But for things like radar and lidar,
there is no built in solution.
So we had to build our own.
So I created both a radar
and lidar sensor model
inside Unreal.

And I did that purely
using Blueprints.
They both work in a similar
way at a basic level.
They use line traces,
or raycasts,
which are an engine query
between two points.
And if there is a
collision, it gives you
a bunch of information
that you can see on screen.
So for example, if you wanted to
make a lidar, you could raycast.
And then, when you get
your xyz points,
all you have to do is
pack those into bytes
and then send that
through the server
to something like MATLAB.
So you could use a
signed integer for each
coordinate, so each
point will be 12 bytes.
And then start the next
point, then the next point,
and the next point, as well.
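That packing step can be sketched with Python's struct module (assuming 4-byte signed integers and that both ends agree on little-endian byte order):

```python
import struct

def pack_points(points):
    """Pack (x, y, z) lidar points as 4-byte signed integers each,
    12 bytes per point, ready to push through the server."""
    buf = bytearray()
    for x, y, z in points:
        buf += struct.pack("<iii", x, y, z)  # assumed little-endian
    return bytes(buf)
```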
Radar is a bit different.
Automotive radars don't
tend to give you points.
They tend to give you
objects classified already.
So you could build, with Blueprints,
a classification system.

Or you could directly
hit the actor.
This is generally the base
for all the sensor models.
After that, you can add
your physics, if you want.
That can be as simple
or complex as you want.
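The raycast-sweep idea behind the lidar model can also be sketched outside Unreal. This Python sketch substitutes a simple 2D ray-versus-circle test for the engine's line trace; it illustrates the technique, not the Blueprint model itself.

```python
import math

def line_trace(origin, direction, obstacles, max_range):
    """Minimal stand-in for an engine line trace: a 2D ray against
    circle obstacles (cx, cy, r). Returns the nearest hit point or None."""
    best = None
    ox, oy = origin
    dx, dy = direction            # assumed to be a unit vector
    for cx, cy, r in obstacles:
        # Solve |o + t*d - c|^2 = r^2 for t >= 0 (a == 1 for unit d)
        fx, fy = ox - cx, oy - cy
        b = 2 * (fx * dx + fy * dy)
        c = fx * fx + fy * fy - r * r
        disc = b * b - 4 * c
        if disc < 0:
            continue               # ray misses this obstacle
        t = (-b - math.sqrt(disc)) / 2
        if 0 <= t <= max_range and (best is None or t < best[0]):
            best = (t, (ox + t * dx, oy + t * dy))
    return best[1] if best else None

def lidar_scan(origin, obstacles, n_rays=360, max_range=100.0):
    """Sweep rays around the sensor and collect hit points,
    like the Blueprint lidar model does with line traces."""
    points = []
    for i in range(n_rays):
        theta = 2 * math.pi * i / n_rays
        hit = line_trace(origin, (math.cos(theta), math.sin(theta)),
                         obstacles, max_range)
        if hit:
            points.append(hit)
    return points
```

A radar model follows the same sweep but would keep the hit object's identity (the actor) rather than the raw point, matching the classified-object output described above.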
Another thing you can
access with line traces
is the physics
material of the actor.
So you can store values there.
And you could use equations
to get to other values.
So you could do certain
things like lidar reflectivity
if you wanted to do
intensity calculations.
So I was mentioning that
you can pack the lidar
points into 12 bytes.
So once you send that to
MATLAB, what you will have to do
is tell MATLAB, every 12
bytes, that's your point.
Then, the first four
bytes are your x
and the next four
bytes are your y.
And the next four
bytes are your z.
I want you to graph
all those points.
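The receiver-side rule described above, every 12 bytes is a point, four bytes each for x, y, and z, is done in MATLAB in the talk; sketched here in Python for illustration:

```python
import struct

def unpack_points(buf):
    """Split a byte stream into points: every 12 bytes is one point,
    the first 4 bytes x, the next 4 y, the last 4 z
    (signed, assumed little-endian)."""
    assert len(buf) % 12 == 0, "stream must contain whole points"
    return list(struct.iter_unpack("<iii", buf))
```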

And that is what you're
seeing on this video.
This is a demo from an
environment that's actually
free on the Marketplace,
with a few added bonuses
that we purchased.
And these two things are
happening at the same time.
Obviously, I cannot show you
two screens at the same time.
But it is happening.
So there is no delay.
And the information has not
been prerecorded or anything.
So as you drive
through the scene,
the lidar data is being sent.
And it's being visualized in
MATLAB, almost side by side.
So you could be having a
control unit drive around
with this data.
Similarly, we also
have the radar.
In this case, the radar is
showing some radar detections
that it can see.
And it is also happening
at the same time
as the drive is happening.
And it's happening at the same
time as the lidar is running.
And it's also happening
at the same time
as the 360 visuals are running.

So the control unit
could be in the car.
And it could be driving
you while you're in it.
So lastly, I wanted to
showcase a few projects
that we are doing.
However, I only
managed to get footage
inside the simulator
for this one.
So for this project,
the company was looking
to do a Human Factors Study.
And they wanted a
driving simulator.
And they wanted people
to drive on the motorway
on different rain conditions.
So this is a very short
30 to 40 second demo
that I put together just to show
them what we were capable of.
And this is basically what it
looks like on a flat screen.

So it's very short.
It'll run along the motorway.
You see a slick road.
And you see a few signs
showing your speed limits.
And then I believe you
get into a change of speed
and people start accelerating.
That's mostly what the
client wanted to see.
And this is what it looks
like from inside the car.
So it is very difficult to
capture what it actually
feels like, although I feel like
this video may do a good job.
You can also see that your
side mirror and back mirrors
do point at the right
places, so you can actually
use them when you're driving.
And that pretty much
brings us to the end.
So I just wanted to
do a quick summary
of how I built the system.

So I started with
a vehicle template
that you have in Unreal.
Then I created a road network
and an environment around it.
I used nDisplay to
run the cluster nodes
and be able to project
it on the 360 screen.
I connected the Master Node
to an Echo server to flow data
in and out of Unreal.
I also then used CANbus hardware
to communicate with the car.
And I used some Blueprints
to create some sensor models,
so we can convince
the systems of the car
that they are actually in the
real world somewhere else.
As for future plans, we are
part of a big project called
Midlands Future Mobility, in
which we are setting up a test
bed for autonomous vehicles.
And we are going
to use the simulator
to run virtual trials.
So we have about
80 miles of routes
that we have currently
lidar scanned

and are converting into
3D models to use in Unreal.
We're also planning to
implement real world hardware
to run vehicle dynamics, things
like Car Maker for projects
that require that.
And we also have a few
projects down the line
that are in the planning
stage at the moment,
but mostly have to do
with human factors.
And that's what I wanted
to talk about today.
So I'll leave the previous
slides for the questions.
