
♪ (intro music) ♪
It's really about what machine learning
is capable of,
and how we can extend human capabilities.
And we want to think more than just
about discovering new approaches
and new ways of using technology;
we want to see how it's being used
and how it impacts
the human creative process.
So imagine, you need to find
or compose a drum pattern,
and you have some idea of a drum beat
that you would like to compose,
and all you need to do now
is go to a website
where there's a pre-trained model
of drum patterns
sitting online --
you just need a web browser.
You give it some human input and you can
generate a space of expressive variations.
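
As a rough sketch of that browser workflow -- assuming the @magenta/music
library and its published 2-bar drums MusicVAE checkpoint, which may not be
the exact model behind this demo:

```typescript
import * as mm from '@magenta/music';

// Published Magenta.js checkpoint for a small 2-bar drums MusicVAE model.
const DRUMS_CHECKPOINT =
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/drums_2bar_lokl_small';

async function generateDrumVariations() {
  const model = new mm.MusicVAE(DRUMS_CHECKPOINT);
  await model.initialize();

  // Sample four new drum patterns from the pre-trained model.
  const patterns = await model.sample(4);

  // Play the first pattern back in the browser.
  const player = new mm.Player();
  player.start(patterns[0]);
}

generateDrumVariations();
```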

You can tune and control
the type of outputs
that you're getting
from this generative model.
And if you don't like it,
you can keep going,
exploring this generative space.
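
In a sketch like the one above, that tuning knob is largely the sampling
temperature -- a hypothetical continuation, inside the same async function:

```typescript
// Lower temperature stays close to patterns the model is confident in;
// higher temperature wanders further out into the generative space.
const conservative = await model.sample(4, 0.5);
const adventurous = await model.sample(4, 1.3);
```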
So this is the type of work
that Project Magenta focuses on.
To give you a bird's eye view
of what Project Magenta is about,
it's basically a group of researchers,
developers, and creative technologists
that engage in generative models research.
So you'll see this work published
in machine learning conferences,
you'll see the workshops,
you'll see a lot of research contributions
from Magenta.
You'll also see the code,
after it's been published,
put into an open source repository
on GitHub, in the Magenta repo.
And then from there we'll see ways
of thinking and designing creative tools
that can enhance and extend the human
expressive creative process.

And eventually it ends up in the hands
of artists and musicians,
inventing new ways we can create
and inventing new types of artists.
So, I'm going to give three brief
overviews of the highlights
of some of our recent work.
So this is PerformanceRNN.
How many people have seen this?
This is one of the demos earlier today.
A lot of people have seen
and heard of this kind of work,
and this is what people typically think
of when they're thinking
of a generative model, they're thinking,
"How can we build a computer
that has the kind of intuition
to know the qualities
of things like melody and harmony,
but also expressive timing and dynamics?"
And it's even more interesting now
to be able to explore this for yourself
in the browser, enabled by TensorFlow.js.
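
To make "expressive timing and dynamics" concrete: PerformanceRNN models a
performance as a stream of discrete events -- note-ons, note-offs, fine
time shifts, and velocity changes -- and the RNN predicts the next event.
A sketch of that event vocabulary, with sizes following the published
representation (the decoding helper itself is illustrative):

```typescript
// One integer per model step decodes to a MIDI-like performance event:
// 128 note-ons, 128 note-offs, 100 time shifts (10 ms each, up to 1 s),
// and 32 velocity bins.
type PerformanceEvent =
  | { kind: 'noteOn'; pitch: number }
  | { kind: 'noteOff'; pitch: number }
  | { kind: 'timeShift'; ms: number }
  | { kind: 'velocity'; bin: number };

function decodeEvent(index: number): PerformanceEvent {
  if (index < 128) return { kind: 'noteOn', pitch: index };
  if (index < 256) return { kind: 'noteOff', pitch: index - 128 };
  if (index < 356) return { kind: 'timeShift', ms: (index - 256 + 1) * 10 };
  return { kind: 'velocity', bin: index - 356 };
}
```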
So, this is a demo we have running online.
We have the ability to tune and control
some of the output that we're getting.
So in a second, I'm going to show you
this video of what that looks like,
you would have seen
it out on the demo floor

but we will show you
and all of you watching online,
and we were also able to bring it
even more alive by connecting
a baby grand piano Disklavier
that is also a MIDI controller,
and we have the ability to perform
alongside the generative model
reading in the inputs
from the human playing the piano.
So, let's take a look.
♪ (piano) ♪
So this is trained on classical music
data from actual live performers.
This is from a data set that we got
from a piano competition.
♪ (piano) ♪
I don't know if you noticed,
this is Nikhil from earlier today.
He's actually quite a talented young man.
He helped build out the browser version
of PerformanceRNN.
♪ (piano) ♪
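
Wiring a MIDI instrument like the Disklavier into a browser demo can be done
with the standard Web MIDI API -- a minimal sketch, with a hypothetical
note handler:

```typescript
// Listen for note-on messages from any connected MIDI input device
// (for example, a Disklavier acting as a MIDI controller).
async function listenForMidi(
  onNoteOn: (pitch: number, velocity: number) => void,
) {
  const access = await navigator.requestMIDIAccess();
  for (const input of access.inputs.values()) {
    input.onmidimessage = (msg: MIDIMessageEvent) => {
      const [status, pitch, velocity] = msg.data!;
      // 0x90 = note-on; velocity 0 conventionally means note-off.
      if ((status & 0xf0) === 0x90 && velocity > 0) {
        onNoteOn(pitch, velocity);
      }
    };
  }
}

// Hypothetical usage: feed the performer's notes to a generative model.
listenForMidi((pitch, velocity) => console.log('note on', pitch, velocity));
```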

And so we're thinking of ways
that we take bodies of work,
we train a model off of the data,
then we create these open source tools
that enable new forms of interaction
of creativity and of expression.
And all of these points of engagement
are enabled by TensorFlow.
The next tool I want to talk about
that we've been working on
is variational autoencoders.
How many people are familiar
with latent space interpolation?
Okay, quite a few of you.
And if you're not, it's quite simple --
you take human inputs
and you train them
through a neural network,
compressing them down to an embedding space.
So you compress it down
to some dimensionality
and then you reconstruct it.
So you're comparing the reconstruction
with the original and trying to train,
build a space around that,
and what that does is create
the ability to interpolate
from one point to another
touching on the intermediate points
where a human may have not given input.
So the machine learning model
may have never seen an example
that it's able to generate,
because it's building an intuition
off of these examples.
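
As a toy picture of that compress-and-reconstruct loop, here is a minimal
plain autoencoder in TensorFlow.js; a true variational autoencoder also
samples the latent code and adds a KL-divergence term to the loss, omitted
here for brevity:

```typescript
import * as tf from '@tensorflow/tfjs';

// Compress 64-dimensional inputs to a 4-dimensional embedding, then
// reconstruct; training compares the reconstruction to the original.
const autoencoder = tf.sequential({
  layers: [
    tf.layers.dense({ inputShape: [64], units: 16, activation: 'relu' }),
    tf.layers.dense({ units: 4, activation: 'relu' }), // the embedding
    tf.layers.dense({ units: 16, activation: 'relu' }),
    tf.layers.dense({ units: 64, activation: 'sigmoid' }), // reconstruction
  ],
});
autoencoder.compile({ optimizer: 'adam', loss: 'meanSquaredError' });

(async () => {
  // Random stand-in data: the training target is the input itself.
  const x = tf.randomUniform([256, 64]);
  await autoencoder.fit(x, x, { epochs: 5 });
})();
```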
So, you can imagine if you're an animator,
there's so many ways
of going from cat to pig.
How would you animate that?
There's an intuition
that the artist would have
in creating that sort of morphing
from one to the other.
So we're able to have the machine learning
model now also do this.
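
The morphing itself is then just arithmetic in the embedding space: encode
the two endpoints, linearly interpolate the latent vectors, and decode each
intermediate code. A sketch, assuming trained `encoder` and `decoder`
halves of a model like the one above:

```typescript
import * as tf from '@tensorflow/tfjs';

// Walk from one point to another in latent space, decoding in-between
// codes the model may never have directly seen during training.
function interpolate(
  encoder: tf.LayersModel,
  decoder: tf.LayersModel,
  a: tf.Tensor,
  b: tf.Tensor,
  steps: number,
): tf.Tensor[] {
  const za = encoder.predict(a) as tf.Tensor;
  const zb = encoder.predict(b) as tf.Tensor;
  const frames: tf.Tensor[] = [];
  for (let i = 0; i < steps; i++) {
    const t = i / (steps - 1);
    const z = za.mul(1 - t).add(zb.mul(t)); // z = (1 - t) * za + t * zb
    frames.push(decoder.predict(z) as tf.Tensor);
  }
  return frames;
}
```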
We can also do this with sound, right?
This technology actually carries
over to multiple domains.
So, this is NSynth,
and we've released this,
I think some time last year.
And what it does is
it takes that same idea
of moving from one input to another.
So, let's take a look.
You'll get a sense of it.
Piccolo to electric guitar.
(electric guitar sound to piccolo)
(piccolo sound to electric guitar)
(piccolo and electric guitar
sound together)
So, rather than recomposing
or fading from one sound to the other,
what we're actually able to do
is find these intermediary,
recomposed sound samples
and produce them.
So, you know,
there are a lot of components to that.
There's a WaveNet decoder,
but really it's the same technology
underlying the encoder-decoder
variational autoencoder.
But when we think about the types
of tools that musicians use,
we think less about training
machine learning models.
We see drum pedals, right? --
I mean, not drum pedals.
Guitar pedals, these knobs
and these pedals
that are used to tune and refine sound
to cultivate the kind of art
and flavor a musician is looking for.
We don't think so much
about setting parameter flags
or trying to write lines of Python code
to create this sort of art in general.
So what we've done --
not only are we interested
in finding and discovering new things.
We're also interested in how those things
get used in general--
used by practitioners,
used by specialists.

And so we've created hardware:
we've taken the machine learning model
and put it into a box
where a musician can just plug in
and explore this latent space
in performance.
So take a look at how musicians feel,
what they think in this process.
♪ (music) ♪
(woman) I just feel like
we're turning a corner
of what could be new possibility.
It could generate a sound
that might inspire us.
(man) The fun part is even though
you think you know what you're doing,
there's some weird interaction happening
that can give you something
totally unexpected.
I mean, it's great research,
and it's really fun,
and it's amazing to discover new things,
but it's even more amazing to see
how it gets used and what people
think to create alongside it.
And so, what's even better
is that NSynth Super has just been released,
in collaboration
with Creative Lab London.

It's an open source hardware project.
All the information
and the specs are on GitHub.
We talk about everything
from potentiometers,
to the touch panel, to the code
and what hardware it's running on.
And this is all available
to everyone here today.
You just go online
and you can check it out yourself.
Now, music is more than just sound, right?
It's actually a sequence
of things that goes on.
So when we think about this idea
of what it means
to have a generative music space,
we think also about melodies,
and so just like we have cat to pig,
what is it like to go
from one melody to the next?
And moreover, once we have
that technology, how does it --
what does it look like
to create with that?
You have this expressive space
of variations--
how do we design an expressive tool
that takes advantage of that?
And what will we get out of it?
So this is another tool
that was developed
by another team at Google,
to make use of melodies in a latent space,
so how interpolation works,

and then building a song
or some sort of composition with it.
So let's take a listen.
Say you have two melodies....
♪ ("Twinkle Twinkle Little Star") ♪
And in the middle....
♪ (piano playing variation) ♪
You can extend it....
♪ (piano playing variation) ♪
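
In Magenta's released tooling, this kind of melody interpolation is exposed
directly; a hedged sketch with @magenta/music's MusicVAE, using the published
2-bar melody checkpoint (the two input NoteSequences are assumed given):

```typescript
import * as mm from '@magenta/music';

const MELODY_CHECKPOINT =
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small';

// Given two 2-bar melodies, ask the model for the melodies between them:
// 5 steps returns the two originals plus three intermediates.
async function interpolateMelodies(
  melodyA: mm.INoteSequence,
  melodyB: mm.INoteSequence,
): Promise<mm.INoteSequence[]> {
  const mvae = new mm.MusicVAE(MELODY_CHECKPOINT);
  await mvae.initialize();
  return mvae.interpolate([melodyA, melodyB], 5);
}
```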
And we really are just scratching
the surface of what's possible.
How do we continue
to have the machine learn
and have a better intuition
for what melodies are about?
So again, to bring it back full circle,
using different compositions
and musical works, we're able to train
a variational autoencoder
to create an embedding space
that builds tools enabling
open source communities
to design creative artists' tools
to look at new ways of pushing
the expressive boundaries
that we currently have.
This is, again, just released!
It's on our blog.
All the code is open source
and made available to you,
also enabled by TensorFlow --
in addition to all these other things,
including Nikhil here,
enabled by this type of work
and creativity and expressivity.
And so, in wrapping up
I want to take us back
to this demo that we saw.
Now the most interesting and maybe
the coolest thing about this demo,
was that we didn't even know
that it was being built
until it was tweeted by Tero,
a developer from Finland.
And the fact of the matter is that
we're just barely scratching the surface.
There's so much to do,
so much to engage in,
and so much to discover.

And we want to see so much more of this.
We want to see more developers,
more people sharing things
and more people getting engaged.
Not just developers,
but artists and creatives as well.
We want to explore and invent
and imagine what we can do
with machine learning together
as an expressive tool.
And so, go to our website,
g.co/magenta.
There you'll find our publications
and these demos,
you can experience it yourself, and more.
And you can also join
our discussion group.
So here's g.co/magenta.
Join our discussion group,
become part of the community,
and share the things that you're building,
so we can do this together.
Thank you so much.
(applause)
So that's it for the talks today.
We had an amazing, amazing show,
an amazing spread of speakers and topics.
Now, let's take a look at
a highlight review of the day.

♪ (music) ♪
Earlier this year we hit the milestone
of 11 million downloads.
We're really excited to see how many users
are using this and how much impact
it's having in the world.
We're very excited today
to announce that deeplearn.js
is joining the TensorFlow family.
♪ (music) ♪
(man) Swift for TensorFlow
is also an early-stage project.
And so we'd really love for you
to get interested
and help us to build this future.
♪ (music) ♪

(man) I told you at the beginning
that our mission for TF data
was to make a live [inaudible] processing
that is fast, flexible and easy to use.
♪ (music) ♪
(woman) I'm very excited to say
that we have been working
with other teams in Google
to bring TensorFlow Lite to Google Labs.
♪ (music) ♪
(man) In general the Google
Brain Team's mission
is to make machines intelligent,
and then use that ability
to improve people's lives.
I think these are good examples of where
there's real opportunity for this.
♪ (music ends) ♪
(applause)
♪ (music) ♪
