JESSE ENGEL: One of the
goals of the Magenta project
is to engage the
artist community
and get people making art
with machine learning.
So to help with that, we've
created an Ableton Live
set for synthesizing
audio, which we control
with a Max/MSP patch.
And we've even created a
nice interface here for iPad
so that we get
real-time feedback
during performance and
an interactive control surface.
So a cool thing we can
do with this set up
is we can create continuously
evolving drumbeats.
Because I can input a very
basic pattern into the LSTM.
Here, I'm going to play four
beats of a pattern here.
[DRUMMING]
JESSE ENGEL: And you'll
hear that the LSTM is now
doing next-step prediction.
It's predicting what the
most likely sound is.
And it sounds very much
the same as what I input,
because drummers often
keep the steady beat, maybe
throwing a fill in
every once in a while.
But what we can do
with our AI drummer
is we can add a little
bit of temperature
to introduce more randomness
into the sampling process.
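[The temperature control described here can be sketched as temperature-scaled sampling from a model's output logits. This is a minimal, generic illustration, not Magenta's actual code; the function name and logit values are assumptions.]

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample an index from logits after temperature scaling.

    temperature < 1.0 sharpens the distribution (steadier, more
    repetitive output); temperature > 1.0 flattens it, adding the
    randomness described above.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one sample from the resulting categorical distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```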
And then when we hit
the mutate button,
it's going to create a new
sequence of drum patterns
by feeding the old
one in and generating
new samples afterwards.
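[The mutate step described here — feeding the old pattern back in and sampling new steps — amounts to autoregressive continuation. A minimal sketch, assuming a stand-in `model_probs` callable in place of the trained network; none of these names come from Magenta's code.]

```python
import random

def mutate(pattern, model_probs, n_new, temperature=1.2, rng=random):
    """Continue a drum pattern by feeding the old one into a
    next-step model and sampling new steps.

    `model_probs(history)` stands in for the trained model: given the
    sequence so far, it returns one probability per drum-hit token.
    Temperature is applied by re-weighting those probabilities.
    """
    history = list(pattern)
    for _ in range(n_new):
        probs = model_probs(history)
        # Apply temperature: p ** (1/T), renormalized.
        scaled = [p ** (1.0 / temperature) for p in probs]
        total = sum(scaled)
        probs = [s / total for s in scaled]
        # Sample the next token and append it to the running history.
        r = rng.random()
        nxt = len(probs) - 1
        cum = 0.0
        for i, p in enumerate(probs):
            cum += p
            if r < cum:
                nxt = i
                break
        history.append(nxt)
    return history[len(pattern):]
```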
So here we go.
All right.
So you can hear that it
adds a little variety.
So this way, we can have a
continuously evolving drumbeat
over which we can play
melodies and chords.
And it creates a truly
interactive musical experience.
SAGEEV OORE: So
one of the fun ways
of playing with the system is
using the call and response.
I'll play a call
using an organ sound.
The system will respond
with a digital piano sound.
I'll start by turning
the metronome on.
[KEYBOARD]
And another variation
on that is that I
use the bass sound and loop
whatever response it gives.
[KEYBOARD]
And then I can play
over top of that.
[KEYBOARD]
So what's interesting
is the feedback loop
that happens between the
system and the performer.
So I make a choice and that
affects what it'll play back.
And then what it
plays back in turn
affects how I'll
continue playing with it.
And it's fun to
experiment with that.
DOUG ECK: What we
see from Magenta
is a future where artists,
musicians for example,
can use machine learning
as a genuine creative tool.
ADAM ROBERTS: So now that we
have this instrument out there
on our GitHub, we're really
excited about having people
download it, play with it,
and share the music back
with the community,
so we can hear what great
stuff they come up with.
CURTIS HAWTHORNE: The code
base that we've developed
makes it easy for researchers
and creative coders
to take MIDI files
and music scores,
extract their information,
and make it available for
training models in TensorFlow.
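[The extraction step described here can be illustrated by quantizing timed note events onto a step grid, roughly what any pipeline must do before an RNN can train on MIDI. A sketch only: the `(pitch, start_sec, end_sec)` tuple format and function name are assumptions, not Magenta's actual data structures.]

```python
def quantize_notes(notes, steps_per_quarter=4, qpm=120.0):
    """Quantize (pitch, start_sec, end_sec) note events onto a
    fixed step grid.

    Returns a list of (pitch, start_step, end_step) tuples, a
    discrete representation suitable for sequence-model training.
    """
    seconds_per_step = 60.0 / qpm / steps_per_quarter
    quantized = []
    for pitch, start, end in notes:
        start_step = int(round(start / seconds_per_step))
        # Ensure every note occupies at least one step.
        end_step = max(start_step + 1, int(round(end / seconds_per_step)))
        quantized.append((pitch, start_step, end_step))
    return quantized
```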
You can then take those
models that you've trained
and use our generation API to
connect them to music
production software,
like Ableton or Pro Tools,
and also interact
with them in real time.
[MUSIC PLAYING]
