
♪ (music) ♪
(clapping)
Hello.
My name is Alex,
and I'm here to tell you
about Eager Execution,
which I think you've heard
in the last two talks now.
But I'm here to tell you
what it's actually about.
This new, imperative,
object-oriented,
pythonic way of using TensorFlow,
that we're introducing to you today
as part of TensorFlow core.
So, you know because you're here,
or because you're watching the livestream,
I hope, that TensorFlow has been
this graph execution engine
for machine learning,
that lets you run graphs
at like high scale
and all sorts of other nice things.
But has it?
And why did we choose to go
with graphs in the first place?
Since now I'm going to tell you
about Eager Execution,
where we move beyond
what we can achieve with graphs,
I think it's a good idea to recap
why we bothered.
And a really good reason
why you want

to have your computation represented
as a platform-independent graph
is that once you have that,
it's very easy to differentiate the graph.
And I went to grad school
before auto diff was standard
in machine learning toolkits
and I do not wish that on anyone.
Like, it's... life is much
better now, trust me.
Also, if you have a platform-independent,
abstract representation
of your computation,
you can just go and deploy it
to pretty much anything you want.
You can run it on the TPU.
You can run it on the GPU.
You can put it on a phone;
you can put it on a Raspberry Pi.
There are like all sorts of cool
deployment scenarios
you're going to hear about today.
And it's really valuable to have this kind
of platform-independent view.
Also, compilers work with data flow
graphs internally,
and they know how to do
all sorts of nice optimizations
that rely on having a global view
of your computation,
like constant folding, common
subexpression elimination,
like data layout and things like that.
And a lot of these optimizations
are really deep learning specific.
We can choose how to properly
lay out your channels and your height,
and width and stuff.

So, your convolutions are faster.
And finally, a key reason that's very
important to us at Google,
and I hope important to you as well
is that once you have
a platform-independent
representation of your computation,
you can just deploy it, and distribute it
across hundreds of machines
or a TPU-pod, like they showed earlier.
And this is a very seamless process.
So, since graphs are so good,
what made us think that now it's a good
idea to move beyond them
and let you do eager execution?
Good place to start
is that you don't actually
have to give up
automatic differentiation.
I'm sure like you're familiar with like
other frameworks, like Python's
Autograph, that let...
sorry, Autograd,
that let you differentiate
dynamic code.
And you don't need to have an a priori
representation of your computation to differentiate it.
You can just build up a trace as you go,
and then walk back the trace
to compute gradients.
Also, if you don't have to stop
to build a platform...
like this computational graph,
you can iterate a lot more quickly.
You can play with your model
as you build it.
You can inspect it.

You can poke and prod at it.
And this can let you
just be more productive
when you're like making all these changes.
Also, you can run your model
through debuggers and profilers
and add all sorts of analysis tools
to them, to just really understand
how they are doing what they are doing.
And finally, if we don't force you
to represent your computation
in a separate way than the host
programming language you are using,
you can just use all
the machinery of your host programming
language to do control flow and data flow,
and complicated data structures,
which for some models is key to being
able to make your model work at all.
So, I hope you're now wondering,
"How do I get to use this?"
And the way you use this is super easy.
You import TensorFlow
and you call
tf.enable_eager_execution
And once you do that,
what happens is anytime you run
a TensorFlow operation,
like in this case a .matmul,
instead of TensorFlow building a graph
that later when executed is going
to run that matrix multiplication,

we just immediately run
that matrix multiplication for you
and give you the result.
And you can print it,
you can slice it, you can dice it,
you can do whatever you want with it.
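(As a rough sketch of what that looks like; the input values here are made up for illustration, not taken from the talk:)

```python
import tensorflow as tf

tf.enable_eager_execution()  # call this once, right after importing TensorFlow

x = [[2.0, 1.0],
     [1.0, 0.0]]
m = tf.matmul(x, x)   # the multiplication runs immediately, no session needed

print(m)         # a concrete tensor: [[5. 2.] [2. 1.]]
print(m[0, 0])   # slice it, dice it, use the values right away
```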
And because things
are happening immediately,
you can have highly dynamic control flow
that depends on the actual values
of the computation you're executing.
And here is just a simple
Wolfe conditions line search
example that I wrote.
And what it does doesn't matter; what matters
is that it has while loops
that depend on complicated values
that are computed based
on the computation.
And this runs just fine
on whatever device you have.
And together with this
enable_eager_execution thing,
we're also bringing you
a few symbols in TensorFlow
that make it easier for you
to write code that's going to work,
both when building graphs
and when we're executing eagerly.
And we're also bringing you
a new way of doing gradients,
because I'm sure you're familiar now
with how we do gradients
in normal TensorFlow.
Where you just create your variable,
you create your loss function,
and I hope you can think of a better
loss function than this one.

And then you call tf.gradients
to differentiate it.
But when you have eager execution, we try
to be as efficient as we possibly can
And if you're going to...
one thing to think about is
that if you're going to differentiate
a computation,
you need to keep track
in memory of information
about what happened so far,
like your activations
and things like that.
But I don't want you to pay
for the cost of this tracking
when you're not computing gradients
because performance is really
like the whole reason why we're doing this
is because we want to use these big,
nice pieces of hardware
that train models super fast.
So, when eager execution is enabled
and you want to compute gradients,
you use this little context manager
to keep a tape active.
The tape just records
all the operations you execute,
so we can play it back
when you compute the gradients.
Otherwise, the API is the same.
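(A minimal sketch of that tape-based API, with an arbitrary toy variable and loss; the numbers are invented:)

```python
import tensorflow as tf

tf.enable_eager_execution()
tfe = tf.contrib.eager

w = tfe.Variable(3.0)

# Only operations run inside the tape context are recorded,
# so you don't pay the tracking cost when you're not differentiating.
with tf.GradientTape() as tape:
    loss = w * w

grad = tape.gradient(loss, w)   # plays the tape back; here grad == 6.0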
Also, writing training loops in eager,
as Derrick pointed out,
is much...
is very easy and straightforward.
You can just use a Python for loop
to iterate over your data sets
and data sets work in eager just fine.
And they work with the same high
performance you get

in the graph execution engine.
Then you can just do your predictions,
compute your gradients,
apply your gradients
and do all the things
you're used to doing.
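(A minimal sketch of such a training loop; the toy data, single-weight model, and learning rate are invented for illustration:)

```python
import tensorflow as tf

tf.enable_eager_execution()
tfe = tf.contrib.eager

# Toy data for fitting y = 3x with one scalar weight.
dataset = tf.data.Dataset.from_tensor_slices(
    ([1.0, 2.0, 3.0, 4.0], [3.0, 6.0, 9.0, 12.0])).batch(2)

w = tfe.Variable(0.0)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

for epoch in range(10):
    # A plain Python for loop over the dataset; on recent versions
    # you can also iterate the dataset object directly in eager mode.
    for x, y in tfe.Iterator(dataset):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.square(w * x - y))
        grads = tape.gradient(loss, [w])
        optimizer.apply_gradients(zip(grads, [w]))
```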
But really the interesting thing
about eager execution
is not when you're just writing
code that's finished,
that is done,
that we already know works,
but when you're still developing,
when you want to do things like debug.
So...
when eager execution is enabled,
you can just take any model code--
and I used my simple,
silly gradient example here--
add a line to drop into the Python
debugger anywhere you want.
And once you're in the Python debugger,
you have the full power
of debugging available.
You can print the value of any tensor,
you can change the value of any tensor.
You can run any operation
you want on any tensor.
And this will hopefully empower you
to really understand
what's going on in your models
and really be able to fix
any problems you have.
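(A hypothetical sketch of that workflow; the tiny model and the shapes are made up:)

```python
import pdb
import tensorflow as tf

tf.enable_eager_execution()

def loss_fn(w, x, y):
    pred = tf.matmul(x, w)
    pdb.set_trace()   # drop into the Python debugger: inspect pred, change it, run ops on it
    return tf.reduce_mean(tf.square(pred - y))

x = tf.ones([4, 2])
y = tf.zeros([4, 1])
w = tf.zeros([2, 1])
loss = loss_fn(w, x, y)   # execution pauses at the breakpoint above
```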
You can also take eager execution
code and profile it,

using whatever profiling tool you are most
familiar and comfortable with.
So, here I have a little [inaudible] model
that just does a .matmul
and a bias_add.
And let's pretend I don't know
which of these operations
is going to be the bottleneck.
Which one is slower?
And I'm sure you all know the answer
that the matmul is a lot more expensive.
But here you can just run your code
through your Python profiler
like you would do with any
other programming job,
and find out that the matmul
is like 15 times more expensive
for my batch size here
than my bias addition.
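(For example, a sketch using Python's built-in cProfile; the shapes and iteration count here are invented, not the talk's:)

```python
import cProfile
import tensorflow as tf

tf.enable_eager_execution()

x = tf.random_normal([1000, 1000])
w = tf.random_normal([1000, 1000])
b = tf.random_normal([1000])

def model(inputs):
    return tf.nn.bias_add(tf.matmul(inputs, w), b)

# To a standard Python profiler, eager ops are just ordinary function calls.
cProfile.run("for _ in range(100): model(x)", sort="cumtime")
```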
And also,
by the way, those examples are run
on the Google Colaboratory thing,
which is this...
completely public, shared,
GPU-capable interface
for Jupyter notebooks
that are like hosted on Google Cloud.
It's pretty cool and I think
we have a demo on eager
that's hosted on that.
And you can play with it later,
or if you're on the livestream,
you can play with it now
if you can find the link.
But together with eager,
we're bringing you a lot of new APIs
that make it easier for you
to build TensorFlow graphs
and to execute models.
And these APIs are compatible

with both eager execution
and graph building.
So one that's been 
a recurring low priority feature request
is how to customize gradients 
in TensorFlow.
And I'm sure you are familiar with a few
of the tricks that people have,
like stop gradients and functions
and things like that.
But we're introducing a new API that works
in both eager and graph execution
And what I like about this example is that
it's a thing that's been asked by many,
many people, how to do it.
If I want to run my forward pass
and then in the backward pass,
take the gradient of a particular tensor
and clip it, clip its norm to keep it
small to prevent it from exploding.
And it just takes six lines of code
to make a version of tf.identity
that in the backward pass
clips its gradient,
and I think this is really cool.
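(Roughly, using tf.custom_gradient; the exact code on the talk's slide may differ, and the clip norm here is an arbitrary example value:)

```python
import tensorflow as tf

def clip_gradient_by_norm(x, norm):
    @tf.custom_gradient
    def wrapped(x):
        def grad(dy):
            # Backward pass: clip the incoming gradient's norm.
            return tf.clip_by_norm(dy, norm)
        # Forward pass: behave exactly like tf.identity.
        return tf.identity(x), grad
    return wrapped(x)
```

Using it is just a call like clip_gradient_by_norm(y, 1.0) wherever you want the gradient kept small on the way back.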
And I look forward to seeing
what you guys can do with this
when you're doing
more than six lines of code
and solving all sorts of new,
interesting research problems.
A big, big change when moving
from graph programming to eager
that I really want you to stop
and think about
is that we're trying
to make everything as pythonic
and object-oriented as possible.

So, variables in TensorFlow are...
usually are a complicated thing 
to think about.
But when eager execution is enabled,
it's much simpler.
A TensorFlow variable
is just a Python object.
You create one, you have it.
You can write, you can change its value,
you can read its value.
When the last reference to it goes away,
you get your memory back,
even if it's your GPU memory.
So, if you want to share variables,
you just reuse those objects.
You don't worry about variable scopes
or any other complicated structure.
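(A small sketch of that object-like behavior, using tfe.Variable from tf.contrib.eager:)

```python
import tensorflow as tf

tf.enable_eager_execution()
tfe = tf.contrib.eager

v = tfe.Variable(1.0)    # creating the Python object is all there is to it
v.assign_add(2.0)        # change its value
print(v.read_value())    # read it back: 3.0

shared = v               # sharing is just reusing the object, no variable scopes
# When the last reference to the variable goes away,
# its memory (even GPU memory) is released.
```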
And because we have this
object-oriented approach to variables,
we can look at some
of the APIs in TensorFlow
and like rethink them in a way
that's a little more object-oriented
and easier to use.
And a very...
one that really stood out to us
as needing an overhaul
was the metrics API.
So, we're introducing
this new tfe.metrics package,
where each metric has two methods,
one that updates the value
and one that gives you the result.
And hopefully, this is an API that everyone
is going to find familiar to use

and please don't try to compare
this with the other metrics API.
(laughs)
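(A minimal sketch of that two-method pattern, using tfe.metrics.Mean as the example metric:)

```python
import tensorflow as tf

tf.enable_eager_execution()
tfe = tf.contrib.eager

mean_loss = tfe.metrics.Mean()

mean_loss(0.5)    # calling the metric updates its internal state
mean_loss(1.5)

print(mean_loss.result())   # the aggregated value so far: 1.0
```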
We're also giving you a way to do
object-oriented saving
of TensorFlow models.
If you've tried looking
at TensorFlow checkpoints now,
you know that they depend
on variable names.
And variable names depend not just
on the name you gave to a variable,
but on all other variables
which are present in your graph.
This can make it a little hard for you
to save and load subsets of your model
and really control
what's in your checkpoint.
So we're introducing
a completely object-oriented
Python object-based saving API,
where you...
it's like Python pickle:
any variable that's reachable
from your model gets saved
when your model gets saved.
You can save any subset of your model.
You can load any subset of your model.
You can even use this tfe.Checkpoint
object to group things you want to save
that are more than just a model.
And here we have an optimizer
and a global_step
but really you can put
whatever you want in there.
And the idea is that this object graph
that eventually goes down to variables
is something you can save and load.
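(Concretely, a sketch along those lines; the model, optimizer, and path are placeholders, not the talk's example:)

```python
import tensorflow as tf

tf.enable_eager_execution()
tfe = tf.contrib.eager

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
model(tf.zeros([1, 5]))   # run it once so the layer variables exist
optimizer = tf.train.AdamOptimizer()
global_step = tf.train.get_or_create_global_step()

# Everything reachable from this object, down to the variables, gets saved.
checkpoint = tfe.Checkpoint(model=model,
                            optimizer=optimizer,
                            global_step=global_step)

path = checkpoint.save("/tmp/ckpt")   # save...
checkpoint.restore(path)              # ...and load it back the same way
```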
So you can have your [GAN]

and save and load your discriminators
and your generators
separate from each other.
Then you can take your discriminator
and load it back up
as like another new network that you
can use in another part of a model.
And this should give you a lot more
control to get a lot more out
of TensorFlow checkpointing.
But the real question that everybody
asks me when I tell them
that I work on eager execution
is, Is it fast?
Because graphs...
have this high performance promise.
So, how fast can it make this thing
that runs Python code all the time?
And the answer is
that we can make it fast enough.
For models that are highly
computationally intensive,
you pretty much don't see
any Python overhead,
and we're as fast as graph TensorFlow.
For...
sometimes it's slightly faster,
for reasons that I don't fully understand.
Even for highly dynamic models,
you have comparable performance
with anything else you can find.
And please don't get attached
to these numbers.
We have many more benchmarks
in our codebase.
And we're optimizing eager
performance very aggressively,

but I hope that the message you'll get
out of this is that if your model
can keep a GPU busy,
if you're doing large convolutions,
large matrix multiplications,
there is almost no cost in experimenting
and doing your research and model building
with eager execution turned on.
But when you're doing smaller things,
there are some overheads
and I want to go over them.
But again don't get attached to them
because we're being very aggressive
about optimizing this.
If you just run a no op in TensorFlow,
like an identity,
it takes almost a microsecond
to execute it.
If you run that with eager
execution turned on,
there's an extra microsecond of overhead.
If you're tracing gradients,
there are another 3 microseconds
of overhead that you get.
But if you're just enqueuing
something on the GPU stream,
that alone takes like
single-digit microseconds.
So, if you can execute enough
computation to keep a GPU busy,
you're unlikely to see anything bad
from using eager execution,
and again these numbers
are improving very quickly.
Please don't get too attached to them.

But there is this large ecosystem
of TensorFlow code libraries,
models, frameworks, checkpoints
that I don't think anyone
wants to give up.
And I don't want you to have to give it up
if you want to use eager execution.
So, we're also thinking really hard
about how you can interoperate
between eager and graph.
One way is to like call
into graphs from eager code.
And you can do that with tfe.make_template,
which has this create-graph-function
argument that,
when you pass it,
will build a graph
for that little Python function
that you wrote.
And then you can use it to manipulate
and call the graph from eager execution.
We also have the reverse,
which is how to call
into eager from a graph.
Let's say you have a big graph
that you understand everything in it,
but there's a little chunk
of your computation
that you really don't know how to express.
And either you don't know how,
or you don't want to bother expressing it
using, like, TensorFlow graphs.
So you can wrap it in a tfe.py_func
and what you get in there
when the Python function is executing
are eager tensors that you can
run any TensorFlow op on,

including convolutions and other things
that are not available in NumPy.
But you can also look at the values
and inspect and use dynamic
control flow in there.
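(A rough sketch of the idea, assuming tfe.py_func follows the same (func, inp, Tout) signature as tf.py_func; the little function and shapes are made up:)

```python
import tensorflow as tf

tfe = tf.contrib.eager

def dynamic_piece(x):
    # Inside the wrapped function, x is an eager tensor even though the
    # surrounding code is building a graph: we can look at its value
    # and use plain Python control flow on it.
    if tf.reduce_sum(x) > 0:
        return tf.nn.relu(x)
    return tf.identity(x)

# Graph-building code: the eager snippet runs imperatively when the graph executes.
inputs = tf.placeholder(tf.float32, shape=[None, 4])
outputs = tfe.py_func(dynamic_piece, [inputs], tf.float32)
```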
So, I hope with these two things together,
you can really reuse eager
and graph code across.
But really the easiest way
to get eager and graph compatibility
is to just write model code
that's going to work in both ways.
And if you think about it,
once your model is fully written,
debugged and [tested],
there's not much there that tells you
whether you need to build a graph
or to execute eagerly.
So write, iterate, debug in eager
and then import that same code
into a graph,
put it in Estimator,
deploy it on a TPU pod,
deploy it on a GPU,
distribute it and do whatever you want.
And like this is what we've done
in our example models
and there's going to be a link
in the end of the presentation,
so you don't need to like worry
about writing this down.
So, here is some practical advice for you.
Write code that's going to work well
when executing eagerly
and when building graphs.
And to do that, use
the Keras layers. They're great.

They're object-oriented;
they're pythonic.
They're easy to understand,
manipulate and play around with.
Use the Keras model
to stitch those layers together,
that will guide you in saving and loading
and training and all sorts of things
automatically if you want,
but you're not forced to use those.
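(A small sketch of model code written that way; the layer sizes are arbitrary:)

```python
import tensorflow as tf

class TwoLayerModel(tf.keras.Model):
    """Built only from Keras layers, so the same code runs eagerly or in a graph."""

    def __init__(self):
        super(TwoLayerModel, self).__init__()
        self.hidden = tf.keras.layers.Dense(128, activation=tf.nn.relu)
        self.out = tf.keras.layers.Dense(10)

    def call(self, inputs):
        return self.out(self.hidden(inputs))
```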
Use tf.contrib.summary
instead of tf.summary.
They will move to the TensorBoard
open source package very soon.
So, if you're watching this on video,
it probably already happened.
Use the tfe.metrics
instead of the tf.metrics,
because these are object-oriented,
friendlier to use,
and friendlier in eager.
And use the object-based saving,
which is a much nicer
user experience anyway.
So, I hope you're going to want
to do this all the time.
If you do all of this,
it's highly likely your code
is going to work super well
in both eager execution
and graph building.
So now, I'd like to take some time
to tell you why you should
enable eager execution.
You know like a real good...
important reason for us that led us
to build this in the first place
is that if you're new to machine learning,
or you're new to TensorFlow
and you want to learn,

being able to play with these objects
and manipulate them directly
is just a much nicer experience
than having to build a graph
and interact with it later in a session.
It's a lot more intuitive,
lets you understand
what's going on much better.
So as I've shown you, just by all means
go straight into eager execution,
play around with it
and figure out how to get graphs later.
Also, if you're a researcher
and you're quickly iterating over models:
you're changing their internal
properties and...
you're comparing them, and you're
trying to do non-trivial models
that we in the TensorFlow team
were not thinking about
when we designed TensorFlow.
Eager execution will make it much easier
for you to understand what's going on,
to debug what's going on,
to be productive in advance.
So, if you're a researcher
this is for you.
Also, if your model's not working
and you want to understand why,
being able to enable eager execution
and then [step through] it in a debugger,
change some values, play around with it
and understand it, is priceless,
and that has saved me a lot of time.

Similarly, if you want to profile
a model using like the full power
of whatever tool you like
to use to profile Python,
eager execution is your friend.
Also, there are some models 
like recursive RNNs
that are just much easier to express
if you don't need to put
your entire computation
in a static data flow graph.
If you're working on one of those models,
eager execution is also a choice for you.
But really the reason
I think you should enable this
is that it's fun.
It's a very nice and intuitive way
of interacting with TensorFlow,
and I hope you're going to have
a lot of fun experimenting with it.
So now, I would like
to point to a few things.
Some of my colleagues,
sitting over there now,
they're going to be in the demo
room during the break
with laptops, with [Colabs]
that are like Jupyter notebooks
to let you type and try
eager mode there.
Please go give it a try.
Or if you're watching this
on the livestream,
type that short link--
hopefully it will stay on the screen
long enough for you to type it--
and play with it right now.

It's really nice.
We have a Getting Started
Guide on TensorFlow
that should be live now.
programmers_guide/eager
that tells you what you need
to know about eager execution
and what you need to know
about starting to use TensorFlow
using eager execution.
We also have a ton of example models,
like from RNNs to [inaudible]
to all sorts of things,
that are available behind that link.
And I encourage you
to look at them and see
how easy it is to write the models
and how easy it is to also reuse
the same code in graphs
for deployment.
We have graph deployment
for all models,
except for the highly dynamic ones
that are just really hard
to write in graph form.
And give it a try.
If you give it a try,
let us know how it went.
We're super excited
to share this with you.
I hope you're going to have
a great time playing with this.
And, yes.
Thank you.
(clapping)
♪ (music) ♪
