So I hope you had a nice lunch.
And we're really excited to
start again the conversation
that we left with
Mario earlier today.
So this session is called
Programming the Physical World.
So I just wanted to
briefly introduce the topic.
Basically, this is a quote
from an interview Professor
Neil Gershenfeld gave to CNN
a couple of years ago.
For those who don't
know, Gershenfeld
is the founder of the Center
for Bits and Atoms at MIT.
And he's been at the forefront
of the discourse on fab labs
and the way they can empower people
through tools of
digital fabrication.
And basically, he was saying
quote, "What's emerging now
is the science of
digital fabrication
that lets you turn data
into things so we can
program the physical world."
So beyond just
digital fabrication,
I was really interested
in that quote
because there's this
idea that we are almost
in a sort of a science
fiction terminology
where we are
entering territories
of The Matrix or
X-Men and so on where
we're sort of imagining that,
with our data, our thoughts,
our emotions that can be tracked
and transformed into numbers,
we can then feed that back
into the tangible artifacts
around us.
So the topics we're
going to see today
really try to give
a very brief overview
of those possibilities
through speculative design,
imagining what is really
possible in the near future
and, actually,
already happening.
And also, beyond
that, the math that
is at the root of that sort
of visualization and the way
we can represent
geometry that can then
be touched or used in space.
So yes, that's about what I
wanted to say on the topic.
I'm just going to first
introduce the first talk
by Jessica Rosenkrantz.
Her talk is
called Growing Objects.
She is going to mainly address
generative techniques, 3D
printing and interactivity
also through the company
that she co-founded in
2007, Nervous System.
And I think she's
also going to mention
some of her inspirations in
biology and natural pattern
formation.
So Jessica is really at the
intersection of art, science,
and nature in that way.
She likes to play with
natural processes that
generate pattern and form.
I really like a quote from
an interview with her business
partner at Nervous System, where
they said that they got
together because they shared
an interest in creating tools
to generate architecture
but, quote, soon
"realized that we never actually
see these buildings being made
so we decided to make products
that people could have,"
referring to the jewelry
that they started off
with at Nervous System.
But now they're also leading
into geometry and architecture
that you can wear with
a 4D printed dress.
So I'm just going to let her
talk and have her first talk.
Thank you.
[applause]
The secret button.
Well this is an interesting
experience for me.
I've told a number
of random people
I've met today that
actually the last time I
think I was at the GSD was
the day that I dropped out.
I'm not an academic
and I'm not an architect.
In fact, I am a commercial
product designer
so I'm sort of
perhaps coming at this
from a different
perspective, that I'm
going to tell you about today.
I think this probably
advances my slides.
Yes.
OK.
So that's me and
my partner Jesse.
And we founded our company,
Nervous System, in 2007
when we were both
still students.
The idea behind the company
was to create a place
where we could do
cross-disciplinary experiments,
sort of be a creative
outlet for things
that didn't seem to fit into
the structure of our educations.
His background is in
math and computer science
and I have a background in
biology and architecture,
and we wanted to
create a place where
we could combine all those
things together and see
what comes out.
Our projects really focus on
three main areas of interest.
We're interested in
science and nature.
So understanding the
processes by which
the shapes and patterns
we see in nature form.
That interest embraces
diverse phenomena
like biological, chemical,
physical, geological, and even
social processes.
We're also fascinated
by digital fabrication.
So how can new
computer-controlled
manufacturing techniques make
possible new types of design?
How can they
influence the way we
construct the world around us
and even express ourselves?
And the last thing we're
really interested in
is this idea called co-creation.
So combining together our
interest in algorithms
and our interest
in fabrication, how
can we create new
types of design tools
that are more
powerful, accessible,
and even democratic?
OK.
Now we have the quote because
all architecture talks
need to have a quote in them.
"We might call the
form of an organism
an event in space-time,
and not merely
a configuration in space."
I really like this quote by
D'Arcy Wentworth Thompson,
which was probably written
about 100 years ago, because it
identifies the way in which
all the forms we see in nature
are the result of complex
dynamic processes.
These processes grow and
adapt to different conditions,
and the resulting
forms that we see
are expressions of the
processes and the conditions
that generated them.
I see this idea
of form as process
as an interesting
counterpoint to how
humans have traditionally
constructed objects.
Humans tend to take
a-- or all of us
in this room probably--
tend to take a very
top down approach to design.
We design objects by
directly specifying, maybe,
precise measurements and shapes.
We, essentially,
almost determine
the final shape
of whatever we're
designing at the very
outset of a project.
Computers are often advertised
as offering new ways of making.
But in fact, most
design software
just merely tries to
mimic or reproduce
methods by which we've made
things before computers.
We have software that gives
the experience of drafting,
sculpting, model making, and
even things like shipbuilding
and they translate how
we've worked traditionally
with materials like
paper, wood, clay,
and metal into some sort
of strange digital version
of those experiences.
At Nervous System we're
interested in trying things
in a different direction.
We take inspiration from
nature's process-based designs
and focus on developing
interactive processes
that we can engage with.
So instead of creating
static designs
we create dynamic systems.
And instead of
drawing structures
we're interested
in growing them.
We also, as I mentioned,
have a strong interest
in digital fabrication,
computer-controlled
manufacturing techniques
like laser cutting,
and 3D printing, and waterjet,
and all these things.
They open up new
possibilities for making,
which we heard about
in the earlier session.
They enable us to make
incredibly complex objects,
to make one-of-a-kind
objects easily and more
cheaply than you ever could
before with mass manufacturing.
And in fact, they lower
the barrier to creation.
3D printers, for instance, are
becoming cheaper and cheaper
and they're sort
of proliferating
into every aspect of our lives,
in our offices, libraries,
and even our own homes.
However, everything I just told
you is a little bit of a lie.
That's the dream.
The reality is that
there's a key part
of the problem missing.
Software.
How do you design
the stuff you want
to make with these machines?
The machines are very powerful.
The complexity is free
and variation is free,
I told you all these things, and
it lowers barriers to creation.
But design software is hard
to use and very expensive;
it isn't designed
with complexity in mind,
it's hard to generate
complex structures,
and in fact, it's also hard
to make variations on things.
So this idea of
variation is not embedded
in how we create things.
So part of what we're
doing at Nervous System
is trying to think
about how can we
create new types of design
experiences and tools
to leverage what computers
are good at to expose
the possibilities of this
type of manufacturing
to not just designers
but to anybody so we can
lower the barrier to creation.
A lot of projects that we do
start with a natural phenomenon.
Our Hyphae project
from 2011 was inspired
by how veins form in leaves.
We ultimately
translated that into
an algorithmic
mathematical system
that we could use to explore
how these forms emerge, both
exploring the natural
pattern space of what
is possible in nature
but then also starting
to create different types of
things that aren't represented
in nature, thinking about it as
a broader type of design tool.
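For a flavor of how that kind of venation growth can be coded, here is a minimal space-colonization sketch in the spirit of the published leaf-venation literature; it is my own toy illustration, not Nervous System's actual Hyphae code, and the parameter values are arbitrary. Scattered "auxin" sources attract the nearest vein node, attracted nodes step toward their sources, and sources are removed once the network grows close enough.

```python
import math

def space_colonization(sources, root=(0.0, 0.0), step=0.02,
                       influence=0.15, kill=0.03, iters=200):
    """Grow a vein network toward scattered 'auxin' sources.

    Each iteration: every source within `influence` of the network
    attracts its nearest vein node; each attracted node extends one
    `step` toward the average direction of its sources; any source
    closer than `kill` to the network is considered reached and removed.
    """
    nodes = [root]
    edges = []          # (parent_index, child_index) pairs: a tree
    sources = list(sources)
    for _ in range(iters):
        if not sources:
            break
        pull = {}       # node index -> summed unit directions
        for s in sources:
            i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], s))
            if math.dist(nodes[i], s) < influence:
                dx, dy = s[0] - nodes[i][0], s[1] - nodes[i][1]
                d = math.hypot(dx, dy)
                px, py = pull.get(i, (0.0, 0.0))
                pull[i] = (px + dx / d, py + dy / d)
        for i, (px, py) in pull.items():
            d = math.hypot(px, py)
            if d == 0.0:
                continue
            edges.append((i, len(nodes)))
            nodes.append((nodes[i][0] + step * px / d,
                          nodes[i][1] + step * py / d))
        sources = [s for s in sources
                   if all(math.dist(n, s) > kill for n in nodes)]
    return nodes, edges
```

Growing "towards" targets in 3D is the same loop with three-component points.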
So leaves are flat
and the veins you see
in them are flat structures.
But what if we
apply the same logic
of that algorithm of growing
towards [inaudible] to 3D?
What do we get then?
We get, apparently, this
uninterpretable bushy,
branchy thing.
But what if we start to impose
some sort of order on that?
We can control parameters
of how it grows.
We can ask it to be more
dichotomously branching
in one area and then affect it
in ways that create a denser
reticulated structure.
Ultimately, we translate all
the things that we design,
all of our research and weird
ideas, into everyday products.
So our Hyphae
system, all of that,
ended up going into our
lamp collection from 2011.
Every lamp is one of a kind.
The customer can select
exactly which one they want.
And one of the
things we do is marry
the infinite possibilities
of algorithmic systems
to the infinite materialization
possibilities of 3D printing.
Why use 3D printers to make
something over and over again?
Why not open that
up to variation?
And different
shapes, cool lamps.
So another project--
this one actually
started when I was a student
here-- is called Cell Cycle.
And it was inspired
by Radiolarians
because all architects love
Radiolarians, apparently.
But in fact, it
was really looking
at can we play in this sort
of space of cellular forms?
You can make forms
like these in,
let's say, Maya or
any other software
but with great
difficulty, essentially,
a lot of very careful modeling,
very careful scripting.
But how can we explore the
incredible expressiveness
of this?
And then how also can we
take advantage of this,
maybe, to make 3D
printed products that
use very little material,
that are very efficient?
So in 20-- no, I
think, actually,
in 2009 we published
our first 3D modeling
tool for the internet.
This runs completely
in your browser
and it's sort of a web
application for modeling
complex cellular forms.
Oh, actually, can we
turn the sound off?
I have all sorts of sounds
that I don't want to hear.
So you can essentially
see what it does.
So the idea is, how
can we create really
lightweight, real time
interactive design
tools for the web that let
you do the things that we want
to do with 3D printing,
make complex forms,
make unique things that fit you
directly that are your style?
And then how can
we do fun things,
like since you're
live modeling let's
live calculate the
volume of your design
and give you live price updates?
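Live pricing from a live model mostly reduces to measuring mesh volume. Below is a minimal sketch, assuming a watertight triangle mesh with consistent winding; the per-cm³ rate and setup fee are invented numbers for illustration, not Nervous System's pricing. The trick is the divergence theorem: sum the signed volumes of tetrahedra formed by the origin and each triangle.

```python
def mesh_volume(vertices, triangles):
    """Volume of a closed triangle mesh via the divergence theorem:
    sum the signed volumes of tetrahedra (origin, v0, v1, v2)."""
    vol = 0.0
    for a, b, c in triangles:
        (x0, y0, z0) = vertices[a]
        (x1, y1, z1) = vertices[b]
        (x2, y2, z2) = vertices[c]
        # v0 . (v1 x v2) / 6
        vol += (x0 * (y1 * z2 - z1 * y2)
                - y0 * (x1 * z2 - z1 * x2)
                + z0 * (x1 * y2 - y1 * x2)) / 6.0
    return abs(vol)

def live_price(vertices, triangles, per_cm3=1.40, setup=5.00):
    """Hypothetical pricing: setup fee plus a material rate per cm^3."""
    return setup + per_cm3 * mesh_volume(vertices, triangles)
```

Recomputing this on every edit is cheap enough to run on each mesh update in the browser.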
Let's make it so you
can save your design
and share it, send
it to other people.
You can tweet it.
They can start
editing it, and you
have this branching
proliferation of designs.
You can subdivide
things and sculpt them,
you know, all that good
stuff that we like.
So yes, we develop
a lot of things
like this, which are, I
guess, at the cutting edge
of web technology but
turning it into, essentially,
tools for product design.
And people have
made a lot of things
with it, mostly jewelry.
We tend to work at
the scale of jewelry
because it is an
affordable scale for both
us to work at and
for other people
to purchase our products.
So we're really interested
in the diffusion
of all of these things
into everyday life.
And this has been a
good vehicle for us.
People have made
a lot of things,
like their own wedding
bands and other pieces
that they give to people, which
are very meaningful to them.
And that is interesting.
So I'm going to
talk about a project
that we're currently working
on, called Floraform.
And this project deals somewhat
with differential growth
and the development of form
in biological processes.
So there are numerous
questions in that.
But for instance, how do
organisms go from a single cell
to a complex differentiated
structure like a human being?
If a single cell were to
divide and grow uniformly
it would essentially
result in a formless blob.
However, through carefully
coordinated subdivision
and differentiation,
biological systems
produce structures with
specific reproducible forms
and functions.
Essentially, you have a
structure where some areas grow
more than others, and you end
up with a shape, let's say.
And there's all sorts
of interesting questions
underlying this.
How do underlying
cellular processes
produce macroscopic shapes?
And how do those
macroscopic shapes then
influence the underlying
cellular growth processes?
How do cellular growth
processes interact
with the mechanics of
biological materials?
And the mechanics of
biological materials
is the part that we're
dealing with more
specifically in this project.
Maybe a very naive look
at differential growth,
something that we
see every day, is
a plant growing towards light.
Phototropism, how
does that work?
Essentially, if we dumb
it down to a two-dimensional
diagram, you have differential
elongation of cells
producing a
three-dimensional curvature.
If you think about
that as a surface,
maybe, you have two
layers of surface,
one is growing faster
than the other,
and that produces a defined
curvature that could explain,
let's say, the
blooming of flowers
due to temperature differential.
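That bilayer picture has a classic back-of-the-envelope form (the bimetallic-strip scaling, stated here without its elastic prefactor, as a plausibility check rather than anything from the talk): if the outer layer grows by strain $\varepsilon_o$ and the inner by $\varepsilon_i$ across total thickness $h$, the sheet bends to a curvature of roughly

```latex
\kappa \;\approx\; \frac{\varepsilon_o - \varepsilon_i}{h}
```

so even a small growth mismatch across a thin petal is enough to produce visible curling.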
We were really interested in
these types of ruffled forms
that you see on the edges
of leaves and flowers,
in particular, that they have
a very complex curvature that
seems, maybe at first glance,
to be difficult to describe.
If you were trying to
turn that into a 3D model
it'd be pretty hard.
And if you were describing it
in terms of surface-based growth,
like grow a little bit
on one side and then
on the other side,
it would be really
hard to produce this geometry.
However, a professor at
Harvard proposed a very simple
explanation.
His name is Mahadevan.
He's at the Wyss Institute.
And his explanation
is: here you have
a surface that's growing
differentially from the edge.
What if you just grow
more at the edge?
That actually, we
think, can produce
these types of structures.
And in fact, the
more I got interested
in this, the more I started
seeing them everywhere:
the arms of jellyfish,
lettuce, sea slugs, rhizomes,
leather curls, et cetera.
All sorts of things
display this growth model.
So we built a
mathematical model of it.
I don't have time to
go into all of this
but if you're interested in
the math, ask me about it later
and I'll gladly go
on in great detail.
But essentially, we're
dumbing everything down,
all the complexity
in a leaf or a petal
into a model of linearly
elastic materials.
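As a caricature of what a "linearly elastic" growth model can look like in code, here is a toy 2D spring-network relaxer; it is my own sketch, not the actual floraform solver, which also handles bending stiffness, collisions, and remeshing. Growth just means increasing the rest lengths of selected springs (say, the ones along the edge) and relaxing again.

```python
import math

def energy(pts, springs, rests):
    """Stretch energy: sum of (1/2)(|pi - pj| - rest)^2 over springs."""
    return sum(0.5 * (math.dist(pts[i], pts[j]) - r) ** 2
               for (i, j), r in zip(springs, rests))

def relax(points, springs, rests, iters=2000, dt=0.1):
    """Gradient descent on the stretch energy of a 2D spring network.
    springs: (i, j) index pairs; rests: target length per spring."""
    pts = [list(p) for p in points]
    for _ in range(iters):
        force = [[0.0, 0.0] for _ in pts]
        for (i, j), r in zip(springs, rests):
            dx = pts[j][0] - pts[i][0]
            dy = pts[j][1] - pts[i][1]
            d = math.hypot(dx, dy) or 1e-12
            f = (d - r) / d          # positive pulls i toward j
            force[i][0] += f * dx; force[i][1] += f * dy
            force[j][0] -= f * dx; force[j][1] -= f * dy
        for p, g in zip(pts, force):
            p[0] += dt * g[0]; p[1] += dt * g[1]
    return [tuple(p) for p in pts]
```

Alternating "grow rest lengths near the boundary" with "relax" is the basic loop behind edge-growth experiments like these.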
So we have a mathematical model.
And the first thing we do
is explore different base
scenarios.
Let's say, what happens
if you grow uniformly?
This is a hemisphere.
It's just growing
the same everywhere.
And we get this wrinkly
blob that I predicted.
This is, maybe,
undifferentiated growth,
nothing different is happening.
Now what if you grow from a
point and that point splits?
This is very similar
to what happens
in plants when you have
a single growing point,
the apical meristem.
What if, instead of letting
your point split, it lengthens?
It produces, apparently,
this sort of shape.
It's actually kind of similar
to a mutation that happens
in plants called fasciation.
Here we get to the
last one, growing more
at the edge than
everywhere else.
Ta-da.
It makes the cool
convoluted shapes.
So this is like at the very
early part of a project.
You have a basic model of
some physical phenomena.
This is a physics-based
model of an elastic system
that we can control and
manipulate in some ways.
But how do we go from there,
how do we work with it?
And this image, for me, sort
of represents our philosophy
of how we work with
dynamic systems.
So I describe our
practice almost
as a kind of digital
gardening but instead
of cultivating plants we're
cultivating algorithms.
And we ultimately want to
create or breed systems
that have their own
innate behaviors
but that we can also
sculpt and manipulate.
So as we're developing the
software we're constantly
encoding sets of influencers,
gradients, manipulators, et
cetera, that allow us to sculpt
or interact with the system
as it's growing.
And there's an interplay between
what the system innately does
and our manipulations.
And this back and forth sort
of continues to change as we go
and recode and reconstruct
the simulation.
So just to give you an
extremely brief idea of what
I mean by that, imagine you have
three identical surfaces that
are growing and they have
different bend strength,
they resist bending
a different amount.
What happens then?
Well apparently, it changes
the wavelength of the surface.
Or something even simpler.
Imagine that they're affected
by an environmental gradient.
So the one on the right,
in this bottom right area,
has more potential to
grow in that space,
let's say, due to nutrients
or lighting conditions,
whereas the one on the left has
a more uniform distribution.
How does that affect the growth?
And we can ultimately
change those through space,
through time to produce very
specific types of effects
that we're interested in.
So this is a surface
that's changing thickness
through time.
And I'm also letting
you see what's going on.
So we're doing a very complex
mathematical simulation
on a triangular mesh with
collision detection
and other things.
And that constantly has to
be, essentially, subdivided.
Edges get flipped to keep the
triangles nicely shaped, which
is how you do math on a surface.
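The usual test behind "edges get flipped to keep the triangles nicely shaped" is the Delaunay criterion; this 2D sketch is a standard formulation and an assumption on my part, not code from the talk: flip the shared edge when the two angles opposite it sum to more than π.

```python
import math

def angle_at(p, q, r):
    """Interior angle at vertex p in triangle (p, q, r)."""
    v1 = (q[0] - p[0], q[1] - p[1])
    v2 = (r[0] - p[0], r[1] - p[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))

def should_flip(a, b, c, d):
    """Edge (a, b) is shared by triangles (a, b, c) and (b, a, d).
    Flipping it to (c, d) improves triangle shape exactly when the
    angles opposite the edge sum past pi (the Delaunay condition)."""
    return angle_at(c, a, b) + angle_at(d, a, b) > math.pi
```

Sweeping this test over all interior edges, flipping until nothing changes, is the simplest way to keep a simulation mesh well-shaped.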
And at this point we're
very early in the project.
We've translated
some of the things
that we've been exploring
into 3D printed sculptures.
The one on the bottom
is one of my favorites
because we've essentially
sort of hooked up two
algorithmic systems together.
The one that I was talking
about that does leafing,
and this one that
sort of does leaves,
if we're dumbing it
down a lot, to see
what happens at the intersection
of those two things.
We're also experimenting with
using full color 3D printing
to make sculptures that
express the growth rates that
determined the
form through color.
So that gradient of color
expresses the geodesic distance
between growth zones.
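One standard way to compute such a geodesic-distance field (my assumption about the method, not a detail from the talk) is multi-source Dijkstra along the mesh edges, then mapping the normalized distance to a color ramp.

```python
import heapq

def geodesic_distance(n, edges, sources):
    """Approximate geodesic distance on a mesh: shortest paths along
    edges from a set of source vertices (multi-source Dijkstra).
    edges: (i, j, length) triples; returns one distance per vertex."""
    adj = [[] for _ in range(n)]
    for i, j, w in edges:
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = [float("inf")] * n
    heap = []
    for s in sources:
        dist[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue            # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist
```

Dividing each distance by the maximum gives a 0-to-1 value per vertex that can drive a color gradient.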
And also playing around
with 19th century
zoetropes to produce objects
that grow before your eyes
and express the algorithm
when you just look at them.
And then we're also working
on a jewelry collection.
As I hinted at, Nervous
System's sort of thing
is that we think that
really complex ideas
about the world around us,
about science, about technology
shouldn't just be limited
to academia and research.
Why not have them diffuse into
everyday affordable objects?
That seems like a good idea.
And I don't really have
anything to say about this
but this is an example
of one of the growth
processes we used to
create one of the pieces
in the collection.
So you can see the
degree to which
we can control the
system, producing
very specific types of results.
I have no idea how I'm doing on
time but I'm on my last project
so probably we're good.
So this is a project
that I've been working
on for the last year, called Kinematics.
And this project reflects
more of our focus
on digital fabrication
and co-creation.
It isn't really inspired or
based on nature in any way.
So 3D printing opens up a
lot of new possibilities
for creating different
types of materials.
I'm interested in
textiles specifically
because they're sort
of man-made materials.
They have a certain
raw fiber and then,
based on the arrangement of
that fiber through space,
we're creating different
types of behaviors.
And they're also sort of
historically computationally
mediated.
You have things like
the Jacquard loom,
invented in the 1800s,
which has always
sort of had computation embedded
in the process of producing
textiles.
3D printing lets us make
all sorts of new material
configurations through space.
So maybe we can
use that to create
more constructed
materials like textiles.
A project that I worked
on for Google in 2013
led to me creating some sort of
bracelet-like wearable objects
that have a kind of hybrid
behavior between hard and soft.
And we were interested in
exploring these things
past the scale of jewelry.
So we made bracelets,
and those are well
within the size of a 3D printer.
But what if we could expand past
the confines of a 3D printer
to make something
bigger, like how
would this hard-soft
material work if it
was a long, flowing dress?
How would that move and behave?
So we started
thinking about, well,
how would we make
a bigger thing?
Lots of digitally fabricated
projects that do this
follow the same idea.
I have a big thing I
designed on a computer.
I want to make it.
I'm going to chop it up into
thousands of tiny parts,
print them all separately, and
then, at great expense,
I'm going to organize them
by hand and assemble them.
And often, that sort
of assembly task
takes far more time
than the design task
did, which seems kind
of backwards if you
use computation to make your
problem much, much bigger.
So we were curious, maybe
we could print something
bigger but as one thing.
And maybe we could do that by
taking advantage of the fact
that our structures
are flexible.
So what if we have these
flexible structures,
we scrunch them up
before we print them,
we print them scrunched
up and then we unfold them
into their final configuration?
So that was sort of the
seed of this project.
And once we started
thinking about how you
could make big, flexible
structures all in one piece
we started thinking about
clothing in general.
And then we were thinking,
well, if you can make clothing
in 3D, that changes
the whole idea of how
people make clothes.
Normally, people
have a 2D pattern
that is mapped onto a 2D
piece of fabric that's cut out
and then sewn
together painstakingly
to reassemble a
three-dimensional shape.
But if we can make something
with a three-dimensional shape
at the beginning then we can
do the entire process in 3D.
We can capture your exact
body shape with 3D scanners,
we can let you design it, and
then we can just print it.
So we thought that this
was worth working on.
So we started working
on it but there are lots
of avenues of research to do.
There are problems to do with 3D
printing: material strength,
tolerances, error, print
orientation, machine size.
Problems to do with designing.
How do you design
these structures?
How do you make it so anybody
can design these structures?
They're very complex.
You'd be hard pressed
to 3D model any of these
by hand in any
traditional CAD package.
And then, how do we
capture your body data?
How can we make sure
that the garment
we make from that
data will fit the body?
And how do we actually
do the simulation
that I claimed we
could do, which
is really the heart
of the entire project?
And I don't have time to discuss
any of those in depth so I
won't.
But we did make
this web application,
similar to the Cell
Cycle one I showed you
except it's
designed so anybody
can create their own garments.
There are different
aspects to it.
You can create your
exact body shape,
then you can go in and sculpt
the form of the garments,
you can construct
the textile itself,
so the pattern of small
and large triangles
that define the
performance and behavior.
You can save designs, you can
share them with your friends,
you can all edit
them, the whole thing.
At the heart of it is
this remeshing algorithm
that's happening constantly.
So as you paint different
densities of triangles,
we're constantly recalculating
the mesh in real time
to show you the results
of your interaction.
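A one-dimensional caricature of that density-driven remeshing, as a toy sketch of my own (the real tool works on a triangle mesh, with edge splits, collapses, and flips): keep splitting segments until each one is shorter than the locally painted target length.

```python
import math

def refine(polyline, target):
    """Split segments of a polyline until every segment is shorter
    than target(midpoint), a user-painted local density function."""
    pts = list(polyline)
    changed = True
    while changed:
        changed = False
        out = [pts[0]]
        for a, b in zip(pts, pts[1:]):
            mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
            if math.dist(a, b) > target(mid):
                out.append(mid)       # split: insert the midpoint
                changed = True
            out.append(b)
        pts = out
    return pts
```

Painting denser triangles in a region corresponds to lowering the target length there, so refinement concentrates exactly where the user paints.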
And then you can go in and
specify these different module
styles.
So right now this
project is sort of
like we did the
most obvious thing,
like let's make the easiest 3D
modeling program for clothing.
What do you want to do?
You want to change the shape,
you want to change the pattern?
And then let's have
some different style
patterns you can put on.
So you can make a solid
dress or a perforated dress
or tetrahedral protrusion dress,
or really anything in between.
You can map any garment from
any body to any other body.
So we're partnered with a
company called Body Labs, which
has a machine-learning
parametric body
model that they've
constructed from thousands
and thousands of scans.
They have fixed our
data problem of how
to map from one body
to another because they
have this model that
we can work off of.
And then we get to the
heart of the matter, which
is the simulation.
So originally we
had a naive idea,
let's just crumple it
up and shove it in a box,
and if it fits in the
printer that's good enough.
It doesn't need to be efficient;
it just needs to print.
But when we got to the
point where we were actually
ready to make a
dress we were like,
that's not really good enough.
We actually want
to use simulation
to optimize, not just
to make it possible.
So instead of thinking
about crumpling it up
we actually just
thought about something
even more naive than what
we originally thought of,
which is, oh, we should fold it.
Like you're putting a
shirt in your dresser,
why don't we fold it.
So we created a rigid
body physics simulation
that we can use to
interactively fold
garments to reduce their size.
So this dress we
reduced in size by 85%,
making a very condensed volume
that it can be printed in.
And in addition to that, it's
not just about optimization
but about understanding
the behavior.
So if I'm creating a dress
made out of 5,000 interlocking
unique components
I should probably
see how that is
going to drape and move,
like how does what I'm designing
actually affect what it does?
So we created a tool that
allows us to understand
how the dress will
drape and move
and then we can use
that to feed back
into how we're designing it.
So we actually made the dress.
That's a somewhat
recent development.
It was printed all in
one piece, as I've been
telling you, which is exciting.
And then they dig it
out of this powder.
And the video is somewhat boring
but, yes, we'll just move on.
So ultimately, we ended
up with this garment.
One of the main things
that I was interested in
is making it actually
wearable because there
have been a lot of 3D
printed garments made
that advertise how
amazing 3D printing is.
But they're fragile
and uncomfortable
and they're more like
sculptures that you would wear.
So I was thinking, this
doesn't express the potential
of technology to me at
all, I want something
that anybody can design,
that anybody can wear,
that will fit any body type.
So that's what we
set about to do.
And I feel like we're at
least-- we're getting there.
I've worn it, I've sat
in it, I've danced in it.
It is pretty cool.
One thing we were really
interested to see
is how well the draping
of the garment in real life
would reflect the
simulation that we did
to predict how it would behave.
And they are quite similar
so that seems good, useful.
And here we have
the final-ish slide,
where you can see
the finished dress.
Let's see.
So this custom-fit dress is an
intricately patterned structure
of more than 2,200 unique
triangular panels connected
by more than 3,300 hinges.
They were all 3D printed
as a single piece of nylon.
While each component is rigid,
in aggregate they behave
as a continuous fabric,
allowing the dress
to flexibly conform and flow
in response to body movement.
Unlike traditional fabric
this textile is not uniform.
It varies in rigidity, drape,
flex, porosity, and pattern
through space.
The entire piece
is customizable,
from fit and style to
flexibility and pattern.
For us, this is kind of
just a very first baby step
towards a larger
project about how
can we leverage digital
fabrication and simulation
to create complex digital
materials and products.
And I guess, how can we
empower the design process
through using these
types of simulations?
And I do believe that I have
concluded my presentation.
[applause]
Like we did this
morning, we're going
to take the questions at
the end of all three talks.
So I'm going to just briefly
introduce our next talk.
Alma Steingart is
a junior fellow
in the Harvard Society
of Fellows, which
is sort of an elite society.
No pressure there.
Alma has been examining
mathematical abstraction
in mid-century America.
And more recently, she has been
placing the emergence
of a new mathematical epistemology
in the cultural and political
milieu of the Cold War.
And she is also investigating
the introduction
of computer graphics in
mathematical practice
in the '70s.
I think she's always mainly been
interested in how mathematics
and physics have
intersected in multiple ways,
and in general,
the new techniques
by which mathematicians
represent
abstract ideas in
multiple media, such as 3D
physical modeling,
early computer graphics,
and immersive virtual environments.
So she's really
showing us the sort
of backstage behind most of
our research elements.
So without further ado, Alma.
Thank you for joining us.
[applause]
Can we get the volume
back up? Because I'll
have a few animations
that need the volume.
Thank you.
In 2010, topologist
William Thurston
collaborated with Dai Fujiwara,
the creative director of
the Japanese fashion house
Issey Miyake, on their fall
collection entitled "8
Geometry Link Models
as Metaphor of the Universe."
The collection was presented
at the Paris fashion show
and was influenced by the time
Fujiwara and his team spent
at Cornell University attending
Thurston's seminar on topology
and geometric group theory.
From The New York
Times Style Section
to Art Magazine to the Notices
of the American Mathematical
Society, the show
garnered attention
from an eclectic array
of media outlets.
Needless to say,
Thurston was a newcomer
to the Paris show world.
But among mathematicians he
already had a celebrity status.
In 1983, Thurston
received the Fields Medal,
which is the most prestigious
prize in mathematics,
for his work on
low-dimensional topology.
His fame was mostly due to
his geometrization conjecture,
according to which,
a certain class
of three-dimensional manifolds
can be canonically decomposed
into parts, each of
which admits only one
of eight possible
geometric structures.
Now the exact meaning of the
theorem is really not important
but it does explain the
name of Miyake's collection.
Now such collaboration might
appear strange at first.
But considering Thurston's
approach to mathematics,
the bond between the
two men becomes clearer.
Despite working in the area
of three-dimensional topology,
which by definition defies
our perceptual capabilities--
and I'm going to say a
little bit about this soon--
Thurston has promoted
throughout his career
a tangible, hands-on,
intuitive approach
to mathematical research.
He used to build
models out of paper
and he collaborated with
other mathematicians
on illustrating his ideas.
In courses on
geometry and topology
Thurston used cardboard and glue
as well as fruit and vegetable
peelers in order
to teach students
about complex
geometrical concepts.
So in fact, it was
Thurston's realization
that both he and Fujiwara
required the students
to peel oranges in
order to investigate
how three-dimensional
forms are constructed
from two-dimensional
surfaces that
led to their collaboration.
In the 1980s, it
was this tendency
towards a hands-on, materially
mediated approach that
propelled Thurston, together
with several colleagues,
to promote the use
of computer graphics
in mathematics as a way of
creating a lively engagement
with topological theories.
Now topology can
loosely be defined
as the study of
qualities of space
that do not change under
continuous deformation.
Unlike in geometry,
topological surfaces
do not depend on a
metric, so one
can stretch and twist a surface
without really altering it.
This is why topology
is sometimes referred
to as rubber sheet geometry.
From a topological
perspective, for example,
a sphere, a cube,
and a cylinder are
one and the same thing
because one can smoothly
deform one into the other.
And just as an example, this
is not true about the torus
or what's known as
a doughnut, which
is not the same as a sphere.
And the basic idea is that the torus, the doughnut, has a hole in it, which is why it cannot be deformed into a sphere smoothly.
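One concrete way to see this distinction, as a small sketch of my own (not something shown in the talk): the Euler characteristic V - E + F of a closed polyhedral surface is a topological invariant, so any sphere-like mesh gives 2 while any torus-like mesh gives 0.

```python
def euler_characteristic(n_vertices, faces):
    """V - E + F for a closed polyhedral surface. Faces are tuples of
    vertex indices; each edge is counted once as an unordered pair."""
    edges = set()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            edges.add((min(a, b), max(a, b)))
    return n_vertices - len(edges) + len(faces)

# A cube, topologically a sphere: 8 vertices, 12 edges, 6 faces.
cube_faces = [
    (0, 1, 2, 3), (4, 5, 6, 7),  # bottom, top
    (0, 1, 5, 4), (1, 2, 6, 5),  # sides
    (2, 3, 7, 6), (3, 0, 4, 7),
]
print(euler_characteristic(8, cube_faces))  # 2

# A torus, built as a 4x4 grid of quads that wraps around both ways.
n = m = 4
torus_faces = [
    (i * m + j,
     i * m + (j + 1) % m,
     ((i + 1) % n) * m + (j + 1) % m,
     ((i + 1) % n) * m + j)
    for i in range(n) for j in range(m)
]
print(euler_characteristic(n * m, torus_faces))  # 0
```

No amount of smooth stretching changes these numbers, which is exactly why the doughnut can never become a sphere.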
Now the title of this panel,
Programming the Physical World,
therefore seems to conflict directly with the work of the mathematicians I describe here.
In appealing to computer graphics, the topologists' goal, after all, was to break free from the physical world.
It was the abstract mathematical world that they wished to program, not the physical one.
Yet what I want to suggest
is that by approaching
their investigation from a
phenomenological perspective,
paying close attention to
the strategies they employed,
the research speaks as much about the world we inhabit as about the one we can only dream of.
Moreover, in constructing models
of non-Euclidean and higher
dimensional geometry,
in seeking to extend viewers' perceptual capabilities and challenge their apprehension, and in
working with and developing
technological media, these
mathematicians effectively
configured the
world around them.
And just to note, this is not limited to mathematicians. Behind me, for example, is an artistic rendition. Homeomorphism is the name of the map by which topology transforms one space into another.
Mathematicians were
early to recognize
the potential use of computer graphics in scientific research.
And in doing so,
they followed up
on a longer
mathematical tradition
that incorporated illustration
and three-dimensional models
in both research and pedagogy.
At the end of the 19th
century, for example,
three-dimensional
plaster and string models
were common fixtures
in mathematical centers
around the world.
And actually, a
little bit later,
they became of interest to several artists as well.
Yet starting in the
1920s and in the 1930s,
this intuitive
approach to mathematics
fell out of fashion.
It was eclipsed instead by a
formal and abstract approach.
So it was only with
the introduction
of computer graphics that
mathematicians turned
once again to visualization.
Already in the late 1960s, when computer graphics was really still in its infancy, consisting only of vector graphic displays, Brown University mathematician Thomas Banchoff, on the top, collaborated with a computer graphics specialist, Charles Strauss, and began exploring how the new technology could be used to visualize four-dimensional objects such as, for example, the hypercube.
What Banchoff did was
to compute the shape
of the shadow of a
four-dimensional object
in three-dimensional space.
In the same way that we can watch the shadow of a rotating three-dimensional object change on screen, the computer display can serve as a three-dimensional screen for this four-dimensional rotating cube.
So that's like what
will be on the bottom.
And I just want to give you a little idea, if you haven't seen some of these films before, of what they look like.
So this was made, I
think, in '71 or '72.
This is not a hypercube.
It's a four-dimensional surface.
As computer graphics technology developed in the following two decades, other mathematicians joined Banchoff and began asking what could be gained by approaching mathematics using computer graphics.
Yet it was only in the 1990s
that these discrete efforts
coalesced around a
central organization.
The Geometry Center, an NSF-funded Science and Technology Center at the University of Minnesota, was founded following a proposal by 13 notable mathematicians and computer graphics specialists as the first center dedicated to the investigation of computer graphics in pure mathematics.
Thurston was among
its leading members
and his research
served as inspiration
to the first two mathematical
animations produced
at the center.
So today, I'm really going to talk only about two of the films produced at the center, the ones entitled Not Knot and The Shape of Space.
Outside In shares very
similar characteristics
to the other two.
All three films were pitched
at an elementary level.
Their producers hoped they would be accessible to the lay public and so they included a fair bit of introduction to the subject.
Yet despite this, it's really worth noting that Not Knot and Outside In reported on what at the time was actually quite an advanced area of research.
The results that they describe were relatively new and would have been of interest to any mathematician interested in a more intuitive understanding of the subject.
In appealing to
computer graphics,
the members of the
Geometry Center
hoped to use data-driven computation
to transform abstract
mathematical theories
into virtual phenomena.
In doing so, they embarked
upon a highly imaginative work,
integrating their
visual, haptic, tactile,
and kinesthetic perception.
So Not Knot, the first
film, introduced viewers
to hyperbolic geometry
and knot theory.
Hyperbolic geometry is
distinguished from Euclidean
geometry in that it breaks
with the parallel postulate.
And I will say that, actually,
in Jessica's presentation,
that's kind of an example of,
actually, hyperbolic geometry.
So given a line and a point, in hyperbolic geometry there are at least two distinct lines through the point that do not intersect the original line.
So where Euclidean geometry is said to have zero curvature, hyperbolic geometry has negative curvature.
And maybe a more common way to describe it: in Euclidean geometry, if we take the sum of the interior angles of a triangle we get 180 degrees; in hyperbolic geometry it will be less than 180.
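This angle deficit is easy to compute. A small sketch of my own, using the hyperbolic law of cosines, which recovers an angle from the three side lengths of a hyperbolic triangle:

```python
import math

def hyperbolic_angle(a, b, c):
    """Angle opposite side a in a hyperbolic triangle with side
    lengths a, b, c (constant curvature -1), via the hyperbolic law
    of cosines: cos A = (cosh b cosh c - cosh a) / (sinh b sinh c)."""
    return math.acos((math.cosh(b) * math.cosh(c) - math.cosh(a))
                     / (math.sinh(b) * math.sinh(c)))

# An equilateral hyperbolic triangle with side length 1.
A = hyperbolic_angle(1.0, 1.0, 1.0)
angle_sum = math.degrees(3 * A)
print(angle_sum < 180)            # True: strictly less than 180 degrees
print(round(math.pi - 3 * A, 3))  # the deficit in radians
```

By Gauss-Bonnet, that deficit (pi minus the angle sum in radians) equals the triangle's area, so the larger a hyperbolic triangle, the thinner its angles.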
And mathematicians had used three-dimensional models and illustrations to study non-Euclidean and higher-dimensional geometry for decades.
However these models were
always external to them.
They were able to
observe or touch
the model of a non-Euclidean
geometrical space
but they remain outside of it.
What the producer of
Not Knot set out to do
is to provide viewers with
an experiential introduction
to hyperbolic geometry.
The film entices viewers to imagine what it would feel like and be like to live in such a world.
What sort of qualitative
perceptual changes
would you experience?
Stated somewhat
differently, how would one
know if one was living inside
a three-dimensional hyperbolic
manifold?
Now manifolds are topologists' favorite objects of study. They are spaces that locally resemble Euclidean space,
although globally they can
have a completely different
geometry.
So earth can serve as a really
good, useful guide here.
In our daily life we perceive the surface of the earth as flat although we know that it's actually a sphere.
So if we look at earth from the
moon we see that it's a sphere.
But how can we
determine the shape
of a three-dimensional
manifold we inhabit?
The only way to see
it from the outside
would be to look at it from four-dimensional space.
And since this is not
an option, the film
asks the viewers how can we
recognize a hyperbolic manifold
from the inside?
So behind me you can
see, this is the moment
in the film when the sort
of viewpoint of the viewer
switches from an outsider
to an insider perspective.
And the narrator explains,
"we are escorting you
into Lobachevskian or
hyperbolic geometry."
Now, in the next clip that I'm going to play, I want you to listen to what the narrator is asking you to do.
[video playback]
-Let's fly around a
little in hyperbolic space
to get a better feel for it.
Notice how quickly apparent
size changes as we move.
This is one of the
biggest qualitative
differences between our everyday
space and hyperbolic space.
This is what it looks like to
live inside the space created
by order four axes along the
edges of the dodecahedron.
[end playback]
So it might be hard on first
watching this to pay attention
to all the various
features of the space
and how it could actually differ from our everyday experience.
And if you are feeling
somewhat distorted,
it's probably worth noting that the Grateful Dead used this video in the 1990s during their concerts.
The film does not
provide a model
or define what
hyperbolic space is
but rather, it transforms
it into a lived experience.
In doing so the film pushes back the horizon of what is mathematically perceptible and opens up the mathematical realm to visual exploration.
Now The Shape of Space was produced six years later.
It was based on a book
of the same name, written
by MacArthur Fellow and student
of Thurston, Jeffrey Weeks.
And like Not Knot, the aim is to give viewers intuition into topological spaces that differ from our own, specifically, spaces that are finite yet boundless.
So an example of such a space
is a two-torus or just a torus.
We know that the surface of the torus is finite but it has no boundary. If there were an ant living on the surface, the ant would never arrive at an obvious stopping point.
So the animation spends some
time explaining this idea
but the goal is not really
to understand the two-torus
but to understand the higher-dimensional analogue of the torus, what's called the three-torus.
The surface of the two-torus is two-dimensional, so we think of it as hollow; there's nothing inside, just a surface. The higher-dimensional analogue is a torus in four-dimensional space whose surface is three-dimensional.
And if you are
somewhat confused,
here's a demonstration
of what it might feel
like to live in such a space.
[video playback]
-Let's ride the spaceship
inside the three-torus.
Even though the three-torus is finite
we have the illusion of
flying in an infinite space.
There are two stars
in this universe
but we see each
one over and over.
[end playback]
So there are two noticeable
differences here that
are evident in this later film.
First, the viewer is now placed inside a spaceship. And it's through active navigation that one is supposed to get familiarized with the space. Second, the space is now populated with objects.
As the narrator explained,
while there are only
two stars in this universe
you will see infinitely many copies.
Now the reason, in principle,
is that the rays of light
would not behave as
we expect them to.
They will wrap around the space, reaching you from various directions.
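Those repeated images are easy to simulate. A minimal sketch of my own (not the Geometry Center's code): in a cubical three-torus of side L, a star at position p is seen along straight lines toward every translated copy p + (i, j, k) * L.

```python
import itertools
import math

def apparent_positions(star, L, max_dist, wraps=3):
    """All translated copies of one star, star + (i, j, k) * L, that
    lie within max_dist of an observer at the origin of a cubical
    three-torus of side L (searching `wraps` wraps per axis)."""
    copies = []
    rng = range(-wraps, wraps + 1)
    for i, j, k in itertools.product(rng, repeat=3):
        p = (star[0] + i * L, star[1] + j * L, star[2] + k * L)
        if math.dist((0.0, 0.0, 0.0), p) <= max_dist:
            copies.append(p)
    return copies

# One star in a torus of side 10, seen out to viewing distance 20:
copies = apparent_positions((2.0, 3.0, 1.0), L=10.0, max_dist=20.0)
print(len(copies))  # many apparent copies of the same single star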
And if you're really
curious, I should say,
all these films are
available on YouTube.
And in fact, on Jeffrey Weeks' website, the last time I checked, he also had games that will teach you how to have an intuitive understanding of living in a three-dimensional torus.
For example, you can play tic-tac-toe in it and you will always lose. I'll tell you that, but you should go and check it out.
But the point I really wish to draw attention to here is that by calling upon viewers' imaginative navigational skills and by populating this space with objects they can relate to, the film calls upon the viewers' perceptual capabilities even further than before.
And this approach is motivated
by Thurston, who noted
that, "Words are one thing. We can talk about geometric structures. There are many precise mathematical words that can be used. But they do not automatically convey a feeling for it."
What does Thurston mean
by having a feeling
for a geometric structure?
And how does it differ from our understanding of its precise mathematical formulation?
According to Thurston,
there is more than one way
of knowing geometry.
Formal abstract mathematics provides only one mode with which to apprehend hyperbolic space.
Imagination is another.
And computer graphics could help train topologists' imagination.
Of course, in aiming
to provide viewers
with such an experience,
Thurston and his colleagues
were restricted by the
technological medium.
The lighting in the
animation, the first animation
that I showed you,
was computed according
to a hyperbolic perspective.
But the screen mediated
all engagement.
But it is exactly this feature of the work that makes topologists' explorations extend beyond the world of mathematics.
In the act of trying to
imaginatively apprehend
non-Euclidean geometry, topologists were forced to inquire into the nature of human perception,
asking questions such
as, on which senses
does space perception build?
How can technological
and material exploration
augment one's
perceptual capabilities?
And what is the relation between
understanding and sensually
apprehending?
Such a connection between
topology and perception
was drawn before, albeit
in the reverse way.
Not how can theories
of perception
inform topological research
but whether topology
can serve as a model for
theories of perception.
In his notes that were
later published posthumously
in The Visible
and the Invisible,
Merleau-Ponty directly
drew such a connection.
He wrote: "Euclidean space is the model for perspectival being; topological space, on the contrary, is a milieu in which are circumscribed relations of proximity, of envelopment, et cetera."
Topology, Merleau-Ponty proposed, offered a model of human perception that broke away from Cartesian epistemology.
Instead of positing a knowing
subject in an objective world
exterior to that
subject, topology
reaffirmed the whole body
as the site of perception.
According to Merleau-Ponty, a faithful mode of embodied perception required a topological conception of space, a qualitative rather than a quantitative approach.
"I describe perception as a diacritical, relative, oppositional system-- the primordial space as topological (that is, cut out in a total voluminosity which surrounds me, in which I am, which is behind me as well as before me)."
Space did not extend in front but expanded around the subject: seeing and being seen, touching and being touched. Now such a statement, though perhaps not as poetically, could have been made by Thurston. In trying to experience non-Euclidean geometries, topologists had to imaginatively extend their perceptual capabilities.
What they affirm in the process
is that space perception is not
restricted to vision
but rather, requires
an embodied subject attuned to
tactile, kinesthetic, and motor
cues.
So I want to play
for you a final clip.
This one is not from
a computer animation
but from a lecture that Thurston
gave in 2010 at the Clay
Mathematics Institute.
So in front of a big
roomful of mathematicians,
Thurston explained the properties of a specific manifold.
[video playback]
-That doubles the world.
So we have a loop here, and
when you pass through the world,
suddenly, you're in
the United States
and the coffee is
brown-colored water.
You go through the loop again and we're back in Paris and the coffee means something.
[end playback]
So the topological
space Thurston described
is not an object of knowledge
which is exterior to him
but rather, one he inhabits.
His apprehension
is fully embodied.
Topologists' films built upon optical technology, but they demanded that viewers imagine themselves moving around such constructed spaces. They incited them to transform formal mathematical expressions into imagined kinesthetic experience. And to accomplish that, all they required were the techniques of the body.
Explorations of topological spaces, whether material or digital, were by their nature also inquiries into human perception: its extension, mediation, and limits.
So to conclude, maybe
such exploration
reveals that imaginative work
is itself a form of design.
Thank you.
[applause]
Well thank you so much.
And the coffee is not so
great in Paris anymore.
I don't know if it's just
a myth but I can confirm.
It's much better in Italy.
So our last speaker is Gentiane Venture.
Sorry, it's the English, French.
So she's a roboticist
now working in Tokyo.
She heads her own research
lab, GV Lab, at the-- sorry,
the-- it's a long, it's a
long-- sorry-- Tokyo University
of Agriculture and Technology.
I should be able
to remember that.
And she's an associate
professor there.
She's trained,
basically, as an engineer
and she also obtained her Ph.D.
on modeling and identification
of car dynamics from the
University of Nantes,
in France.
So she's researching mostly human-robot interaction in public and private spaces
and investigating how
robots interact with us.
And her talk is really
going to show us
how we can use human motion
data to create robots
that are going to be
more and more familiar
to us and with us.
So thank you.
[applause]
So thank you [inaudible]
for the introduction.
Thanks a lot for
inviting me here.
It was a long trip to come from Tokyo for this event but it was really worth it. I'm completely jet-lagged but I haven't fallen asleep so far,
so I'm just waiting so that we
can switch to the HDMI inputs.
OK.
So we're good.
I'm going to show you something much more practical and down-to-earth than the presentations that we have been hearing so far.
And I'm going to
start by showing you
a small experiment
that we do in our lab
and that has been the starting
point of a lot of new works.
Starting with how people perceive robots, how familiar we need to be with robots and robots need to be with us, and how we perceive all of these interactions.
So one day on our
campus, we asked
people that were walking
around if they wanted
to come to our lab
and to participate
in a small experience
that consisted
of filling a questionnaire
here about robots.
Nothing too complicated.
A lot of people said, no, sorry.
OK, good.
Finally, after a lot
of trials we managed
to gather a lot of people
because, of course,
a few people said, yes, sure, they were interested in filling in a questionnaire.
What's very exciting about
filling a questionnaire?
I don't know yet but, anyway,
they were OK to do that.
And then we just told them, well, for the purpose of our experimental analysis we will record some data while you are filling in this questionnaire.
OK.
And basically, we will
record your motion data.
Well basically, when you
write, when you move,
we will record how
your head is moving
and how your hands are moving.
OK.
So please sit down, make
yourself comfortable,
we will bring in
the questionnaires.
And that's what
happened, basically.
Basically, participants
were seated--
so we are in Japan so they
are seated on the floor.
So that's normal.
And while they were seated they were waiting together; we asked them to come in pairs.
And instead of having somebody
bringing the questionnaire,
we had this silly, small
humanoid robot bringing it in.
So the robot was bringing two envelopes with the questionnaires inside, and people had no instructions.
Our participants were asked to fill in the questionnaire but we didn't tell them how we would bring it in or what they were supposed to do with it.
So a lot of people understood because, actually, "questionnaire" was written there on the envelope. Well, it was written in Japanese, but it still said "questionnaire," in Japanese, on the envelope.
So most of them would take the envelope from the robot while it was delivering it to them.
A few people didn't even
bother taking the envelope.
They didn't understand what the point of the experiment was, or the point of all this.
And the robot was a bit playful. It was not behaving the same way with the participant on the right side as with the participant on the left side, so that we mixed up the experiences.
And in the envelope
there was actually
a questionnaire about robots.
And we were actually measuring
their head and hand motions
when they were interacting
with the robot.
So this little robot is called NAO. It's a French robot built by a company called Aldebaran, which is now a Japanese company.
And so basically, that's
how it was going on.
So what we see here is that the
whole experience is non-verbal.
The robot didn't say a word.
And participants were
allowed to talk together
but we didn't specify anything, so most of the time they were looking at each other, smiling at each other, acknowledging each other, but they didn't really talk together.
And what we're interested in is to see how this non-verbal interaction was correlated with the way they behaved and with their emotions.
Why we're interested in that
is because in the literature
we can find that 93%
of our communication,
in human-human
communication is non-verbal.
That's quite a lot, right?
So basically, without
even saying any words
we can communicate a
lot of information.
And this is not very recent work, so it seems to be quite valid even now.
So this non-verbal communication is basically motion data. It can be whole-body motion, it can be facial expressions, it can be a lot of things.
In our case, we are
only interested in what
we call kinematics and dynamics
in the robotics [inaudible]
engineering world, which
basically, for everybody, could
be body language.
The way I move my hand,
the way I move my body,
the way you can feel I can
be stressed speaking in front
of you or I'm
completely super happy,
and these kind of things.
So this data is actually super rich in information
because it gives us information
about what we are doing,
the action we are
performing right now,
about who we are because
everybody has a different body
language signature.
And it also gives you information about your emotions, what we are feeling at the moment we are performing, interacting, discussing, and these kinds of things, and about your health.
Health, today we
won't speak about it
but it's also something
really important.
So using this non-verbal
communication, this body
language, we were
interested in trying to find what makes a successful human-robot interaction. Or human-machine interaction more generally, in a broader sense, if you want to see it that way.
So how can we use the
motion information
to create a satisfying
human-robot interaction
in the case of this interaction,
or in any case in general?
Because as you may
imagine, in the future
we will probably be
surrounded by tons of robots,
if it's not already the case.
There are some
freaking scary robots.
The one on the right
side is a Japanese robot
that has been
actually in classrooms
for probably primary
school or super young kids.
We have the seal robot
that is used in Japan also
to heal elderly
people with dementia.
A lot of people might know
the Roomba robot, the vacuum
cleaner robot, the lawn mowing
robots, these kinds of things.
Baxter is also from Boston.
That is a robot that
is used in factories
but interacts with humans.
And of course, you see
two of these robots
from Aldebaran Robotics.
Pepper, which was supposed to be sold in Japan for a very cheap price but whose launch has been postponed again and again.
And the model that we were using for our experiments, which is a very common robot used in robotics research.
So all of these
robots are supposed
to interact with us somehow
someday in the near future
or already now, but it's not so easy to build up a real human-robot interaction.
I mean, even building a
human-human interaction is not
that easy.
Anytime you are interacting with
somebody and, in particular,
at the very first time when
you have never met this person,
it's always a mixture of
emotions, of sensations.
There is a lot of ambiguity
in what we are perceiving.
And with robots it's even worse, because there are a lot of things that we don't know, a lot of things that the robot doesn't know or is not programmed for, because so far, robots are programmed to interact with us.
So how this non-verbal interaction, this body language, can be used for human-robot interaction is one of the key questions right now in robotics research.
And basically, how the whole-body motion, so again, this body language, correlates with how familiar we perceive the robot to be. Because familiarity, perceiving the robot as being familiar, or the interaction as being familiar, is one of the keys to solving one of the problems of human-robot interaction.
Because if you think that
this robot is familiar,
if you feel familiar
with the interactions
then it's going to be much easier to interact with the robot.
So back again to
this experiment.
Basically, so we measured
the head motions,
we measured the hand motions,
we asked the participants
to answer the questionnaire
in which we were asking them
to rate the behavior
of the robot,
to rate their
perception of the robot,
and to tell us a little bit
more about their own history
of interaction with robots.
So most of our participants had never interacted with a robot before. It was the very first time they were actually seeing a real robot and they had
to interact with someone--
with this.
I say "someone."
I'm sorry.
And what we found with this very
specific robot, NAO, was that,
basically, the
behavior of the robot
was pretty much understandable.
The robot comes in, he bows, the Japanese way of saying hello politely. He hands over the envelope, he says goodbye. That was fairly easy for most of the participants to understand.
But none of the participants could understand the personality of the robot.
Well, the question is, does this robot actually have a personality? That's also a problem. But still, if you wanted to project a personality onto this robot, it was very difficult for them to do that.
So it was difficult for them
to see the robot-- I mean,
they were seeing it
as a social entity
but not as a full social entity the way humans are.
Then a lot of participants
understood the task
because actually, it was
not difficult, right?
It was just taking
this envelope.
But they were neglecting
social interactions.
So the robot was
bowing, the robot
was saying hello, goodbye,
these kinds of things but they
were just ignoring that.
OK give me this
envelope, goodbye.
Wow, cool.
And then, it was the very first encounter for a lot of our participants, but they rated the interaction as being very familiar to them, which is quite surprising, of course, to me, because having a robot deliver you an envelope is not something that could be very familiar to me. But still, it was.
So that was pure ratings
from the questionnaires.
Now we are interested in how
these human-robot interactions
and emotions relate together.
So what we found that
was quite interesting
is that the way people
were grasping the envelope,
so basically,
taking the envelope from the hand of the robot, was correlated with their feelings,
because we asked them
to rate their feelings
during the questionnaire.
And there was a high correlation between the way they performed the motion to grasp the envelope and what they were feeling.
And in particular, most
of the participants
that felt the robot that
was familiar to them
had the same type of motion
to grasp the envelope.
So this was a not completely unpredictable but quite surprising result: people who felt the same way had a similar manner of interacting with the robot.
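The kind of correlation described here can be sketched simply. The numbers below are hypothetical, not the lab's data: one scalar motion feature per participant (say, peak hand speed during the grasp) compared against the familiarity rating each participant reported, using a plain Pearson correlation.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant data: peak grasp speed (m/s) and
# familiarity rating on a 1-7 Likert scale.
peak_speed  = [0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.35, 0.58]
familiarity = [3, 5, 2, 6, 4, 5, 2, 6]
r = pearson(peak_speed, familiarity)
print(r > 0.8)  # True for this made-up data: similar grasp, similar feeling
```

A real analysis would of course use many motion features and proper statistics, but this is the shape of the "motion correlates with reported feeling" finding being described.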
The other thing that was also quite interesting is that when people saw the robot as sociable, they had more intense, faster, and larger motions when they were greeting the robot back.
So not everyone greeted the robot back, but the people who found that the robot was a social entity were greeting back quite a lot.
People that didn't find anything
were not even greeting back.
And some people didn't
even look at the robot.
So there is really something
quite interesting here.
So we have all this
information collected
on motions, on what
the participants felt,
and this correlation
together saying
that the emotions, the familiarity, and the way they interacted with the robot are highly correlated with the way their body reacted to the robot, so
basically, how they
physically interacted
within this interaction.
And it was very interesting
to use all this information
but we were very much limited
by the robot that we were using.
We are using a commercial robot
that has a very specific image.
I didn't play you the sound of the videos, but the noise that the robot NAO makes is extremely loud, and the gears and the motors are very noisy.
So it's very hard to imagine
that robot can be at this stage
an entity, a real social entity,
when you have something that
is [making mechanical noises]
all the time
during the interaction.
So we really think that it is important for us to design robots in terms of appearance, but not only the appearance, because we are dealing not with just an object but with a moving object.
So we also need to design
the motion of the robot,
the way the robot is
interacting with the humans
or with the environment to try
to solve the problem of really
seeing the robot
as a social entity
not just as a tool to convey or bring an envelope and to serve people.
So we are now working on
a very new project that
is called the metamorph robot.
So it's a robot that can change shape and appearance, which I cannot disclose to you right now, but it's a brand new and very exciting project,
where we are taking into account
how the design of the robot
and how the motion generation
of the robot will influence
the way people are
interacting with this robots.
And in particular, we are using a lot of humanoid-based systems.
But we also want to investigate
if humanoids, or at least
anthropomorphic robots, are
the best option for people
to understand what robots are
trying to do and to interact
with the robots.
So basically, I'm going
to conclude here my talk.
And I would like to thank
all the people in my group
that worked on this
project and that are
going to work in this project.
I would like to thank
you for your attention.
[applause]
So I would like to [inaudible].
Yes.
Thank you.
So wow first, right?
What a great diversity
of talks addressing-- I
want to actually try to find
some sort of common thread.
And for me, what struck me was this relationship with the term "perception."
And when you were showing the
slide with the Grateful Dead
it made me really think of
the doors of perception,
the [inaudible] kind
of experimentation.
And I feel like we have this relationship with the physical and material world we live in while we are dealing with things that are fairly abstracted away.
So in terms of how
would you-- and it's
an open question for
the three of you--
how do you understand,
in the end,
the way you grasp the
data that you're using
or the geometry or the
sort of range of motions,
in the case of the robots,
with the actual world that we
live in?
And how do you think
the material culture
that we have can
resolve this tension
between this abstraction
and this physicality?
I mean, I can have a go at it.
So I think that part of
what-- it's working, right?
Part of what's--
You have to speak in the--
OK, speak into it.
So I think part of what I find interesting, at least in the cases that I was talking about, is exactly this question of the limits of perception. The mathematicians whose work I look at always end up questioning perception, everyday perception, eventually.
So they're trying to both
extend it and mediate it
but that means paying
very close attention
to the way you interact
with every kind of material.
So the early mathematicians who looked at the question of how you're going to visualize a hypercube, the first thing they talk about is: if we look at a space and we look at certain objects, we understand the objects just by walking around them, touching them.
So it's this sort of understanding that they then wanted to bring into their computer visualizations. Again, they wanted to bring in this embodied way of thinking about space perception.
But I don't think there's a way
to necessarily-- it's always
trafficking between this kind
of abstract understanding
and this very
material-mediated engagement.
So in the case of the
robots, if I may speak,
the robot is basically stupid
and doesn't have any perception
by itself.
So all we can do is try to project our own perception onto the robot itself and try to mimic
or try to understand
what could be human
perception and try
to put it inside the robot.
The other way around,
from the user point of view,
the perception is
quite interesting
because it's a very
complex process.
As some of the talks this
morning were also mentioning,
there is an emotional
balance in the way
people perceive the robots and
a projection of what they are
expecting from the robot, which
basically, in most cases,
leads to a great disappointment.
Because we must say
that most of our robots
nowadays are just unable to
fulfill any of our expectations
unless you have
the vacuum cleaner
and you are very satisfied with
the way it's vacuum cleaning.
Other more complex
robots are just dumb
and you can project as
much as you want on them
but it's like
projecting anything
on your, maybe, goldfish.
And even probably,
my goldfish are
more intelligent than my robot.
I feel that the term
"disappointment" is
very interesting in technology.
And nowadays, especially
with the new emerging
technologies, there's a sense of
deception or disappointment,
which is interesting.
"Déception" is actually the
French word for "disappointment"
but it means some
sort of betrayal,
which I think, there's
like-- I don't know,
Jessica, if you have this
experience in your line of work
because what I really
liked about your talk
as well was that sense
that you wanted
to ground somehow
the investigation
in digital fabrication and make
it affordable and wearable.
So do you have any
conflict with the fact
of the relationship
between the expectation
and the results or the outcomes
or are you fairly satisfied?
Let's see.
I guess in general,
since we make
things that are intended
to be sold to people
and intended to be
sold in some volume,
I'm not just making five
limited edition things sold
for $20,000, I'm making
thousands and thousands
of things that I sell
between $5 and $100,
I focus very strongly
on making things
that won't disappoint people
and focus on making them
durable and wearable
and comfortable,
and all those things
so not only do I not
have business problems,
like everybody
wants to return
stuff, but also I
am sort of almost like
an ambassador for all
of these ideas and technologies
to everyday people who may not
have encountered them before.
And I sort of aim to make
those first experiences
not a terrible experience.
I don't know if I'm interpreting
this question too literally.
In terms of the design
process, before you actually
sell the product, is there
a gap between your first tests
and the expectations
you might have about the behavior
of the material or the fabrication
process itself?
Do you find that sometimes
you play around with the limits
or do you manage
to get around them?
I'm still not sure if
I'm totally understanding
the question but of
course, everything
goes wrong in your
first idea of how
you should build the system.
It never works.
And most of the projects
that I showed I've
worked on for several
years to get to the point
where I am now.
So I'm working on five or six
generative systems that may not
get to the point where they
actually do anything of value
for a long period of time.
These things are
fairly difficult.
There are all sorts
of limitations
to all the materials and
technologies that we have today
but I guess, part
of what I like to do
is figure out what
you can actually
do with them that is good
at the level of technology
we have now, seeing as I
don't develop 3D printers
and I'm not a
materials scientist.
Do you have a
reaction, comments,
questions to the talks that
we've heard in this session?
Yes.
Yeah.
Thank you all.
They're lovely talks.
I just wanted to ask
Jessica if you guys have
done any analytics
on the interaction
with that interface.
Do people build 100 dresses
and then decide not to buy one?
Because it's a really
interesting interactive tool
and I think all of the
work is really great.
I'm just curious if
you've done any analysis
on how the interaction
with it is unfolding,
both in terms of its
being passed around
and the individual user?
The short answer is no.
I haven't done any
actual analytics of that.
I guess when it comes down
to it, since our company just
has two designers/programmers/
engineers/artists, we spend
essentially all of our
time creating this stuff
and we're not that interested
in the analytics of it.
Although, I guess if I had a
business manager or something
like that they'd be
really interested in that.
But I sort of have a
sense of that though.
In general, we see thousands
and thousands and thousands
of designs created
for every, let's say,
10 or 20 designs ordered.
So I think a lot of
people, actually,
come to the site
with no intention
to purchase anything at all.
They just saw it tweeted
somewhere, they saw it
in a news story and they are
like, oh, I can play with this,
I can get an experience
of 3D modeling or creating
my own clothing design.
And then some small
percentage of people
actually might want
to purchase something.
Have you changed that
platform since you started?
Yes.
We're constantly rewriting
all of our code and apps.
There was no WebGL
in 2007, when we started.
Since the invention of that
suddenly everything that we do
got a lot better.
We used to work with
Java applets that
were embedded on
our site and then
we upgraded, essentially,
to full JavaScript code that
creates interactive
experiences with no plug-ins
and all of that stuff.
And that spec is constantly
improving so we're
just always, yes, rewriting.
All right.
My question for Jessica.
I don't think I need the mic.
Yeah, you need the microphone
for our video viewers.
Thank you.
So my question is for
Jessica, but maybe something
that others could
speak to as well,
is how are you
approaching or thinking
about external factors?
So you have the one
organism that grows,
are you thinking about
what if the light source
changes or there's the
unexpected interaction
between two organisms, and
how you would model that?
In general, yes.
The project that we're
working on right now,
the whatever it's
going to be called,
floraform maybe, that one
we're working a lot with things
like gradients of
environmental factors
so thinking about these
things like nastick movements
and tropisms in
plants, which are
responses to either directional
or non-directional stimuli.
So that's sort of
a big inspiration
behind this project.
So we are, of course,
dealing with that
but we don't deal with
everything in every project.
Different projects each
have the aspect that
I most want to explore.
I haven't really done
anything with the
different organisms that
we're growing interacting.
I guess in our leaf vein
system, the Hyphae system,
there's this thing where if you
have multiple things growing
toward the same hormone source they
might eventually merge and join
but we don't do
anything in particular
at that point of interaction.
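The growth-toward-hormone-sources behavior described here can be sketched in a few lines. This is a toy version in the spirit of the space-colonization family of venation algorithms, not Nervous System's actual Hyphae code: each remaining hormone source pulls its nearest vein node one step closer, and a source is consumed once any node gets within reach of it.

```python
import math

def grow_veins(seeds, sources, step=0.1, kill_dist=0.15, max_iters=200):
    """Toy venation growth: every remaining hormone source pulls its
    nearest vein node one step closer; a source is consumed (removed)
    once any node gets within kill_dist of it.  Branches appear when
    different sources pull different nodes, and veins from separate
    seeds can end up meeting near a shared source."""
    nodes = list(seeds)
    edges = []            # (parent_index, child_index) pairs
    sources = list(sources)
    for _ in range(max_iters):
        if not sources:
            break
        for sx, sy in sources:
            # nearest existing node to this source
            i = min(range(len(nodes)),
                    key=lambda k: (nodes[k][0] - sx) ** 2
                                + (nodes[k][1] - sy) ** 2)
            nx, ny = nodes[i]
            d = math.hypot(sx - nx, sy - ny)
            # extend a new node one step toward the source
            nodes.append((nx + step * (sx - nx) / d,
                          ny + step * (sy - ny) / d))
            edges.append((i, len(nodes) - 1))
        # consume sources that have been reached
        sources = [s for s in sources
                   if min(math.hypot(s[0] - x, s[1] - y)
                          for x, y in nodes) > kill_dist]
    return nodes, edges, sources
```

With two seeds and a shared source, both veins march toward it, which is where the merge-or-not decision mentioned above would come in. (Real space-colonization algorithms average the pull of all sources within an influence radius; this sketch keeps only the nearest-node rule.)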
So sort of no.
Yeah, thanks.
Great talks.
I have a question for Alma.
Your computer
animations that you've
shown all seem to have
the purpose of making
this abstract mathematical space
accessible to
sensory experience.
So I was wondering,
do you also have
examples where these
visualizations helped
mathematicians in
theory development?
No.
I mean, it's a great question.
I think you're right.
The first part of
these kinds of projects
is they call into
question, what does
it mean to understand space.
And then it plays
with the notion
of what does it mean to
understand the space abstractly
as opposed to
understanding, essentially.
But to answer your
question, yes, this
traffics in both directions.
With the earliest,
earliest, earliest film,
the Thomas Banchoff film, he
would create this animation,
this very short
animation and those
took hours, hours,
hours of labor
because there was a camera and
they had to-- it's like frame
by frame.
And then he was able to
see things in the film
that he didn't know before.
And then he went back and proved
it rigorously, mathematically.
The film was not a proof
because mathematicians would not
take that-- it would not count as
evidence for a mathematician--
but it was an inspiration and
a new tool for exploration.
And yes, it has been.
Hi.
Thanks a lot for your
contribution tonight.
I have a question for Gentiane.
You showed us all
this research where
you were trying to
answer or propose
what is the relation of
human beings with robots
and how perceived
sociability in those robots
affects the way we interact
with them and stuff.
But I'm not sure
if you conveyed,
in a way, your desire
for how you would want
that interaction to be.
You talked a lot about
analysis but I'm not
sure-- I was wondering if you
could give us a-- because you
seem to draw a line.
I acknowledge that there's
this barrier that we probably
won't ever surpass
but I'm not sure
if you would want that
to happen or not, or what
is your feeling about this?
So the way the experiment I
presented today was constructed
was that we gave total
freedom to the participants
in interacting with the robot.
So you can see it in a way:
you build an object
or something and, as was
also mentioned this morning,
users can do whatever
they want with it.
So you can create a cup and
they use it as, I don't know,
whatever else.
So basically, the
interaction with the robot
was built in that way.
So we didn't have
any expectations
about how people will
interact with the robot
and we didn't want to
have any expectations.
Of course, we had
secret expectations
that the people that would
interact with the robot
would understand that there
was an interaction that
was going on and
they were supposed
to do something with
the robot but we never
specified anything.
And we never specified anything
because we wanted people
to have total freedom in this
first encounter, for most
of them, with a robot,
with a living robot,
and interacting with this
robot without a specific task.
The thing that was
really interesting
is that the questionnaire
was designed by psychologists
to try to see how people
perceive the robot
and interact with the robot
but without the [inaudible]
that they would have
interacted with it.
So just like looking
at it without exactly
really interacting with
it, they could also
answer the questionnaire.
And also, in the time we
evaluated the questionnaire,
we had no expectations because
we were completely in the blue.
We didn't know how
people would react
to this specific robot,
the specific interaction
and everything.
So basically, we just put it
in the room with the robot
and let them do
whatever they wanted.
And that is what is really
interesting with robots
because in a lot of human-robot
interactions experiments,
the experiments are
designed in a way
that you lead the
participants to interact
in a specific way, the way
that you expect them to react
within the experiment.
And this is kind of biasing
the whole experiment, right?
It's like, OK, you give
me a bottle of water
you expect me to drink
not to throw it at you.
And the people could have kicked
the robot, touched the robot,
and done these kinds of things.
Well most of them didn't.
But still, yeah, it
was like totally free
in this interaction.
Rest assured that I understand
your neutral approach
to the scientific method of
doing this analysis. I think
I'm more along the lines
of asking, personally,
what would you want the future
of our relation with robots to be?
Would you rather have us
be more friendly to robots
or do you acknowledge
that there will always
be a barrier, and would you like
that barrier to be there?
No.
It's more of a
personal question.
I don't expect
anyone to-- It's just
like in human-human
interactions, right?
You might expect that somebody's
going to be nice with you
and then it's not the case.
Or you might expect that
everybody's going to like you
but it's not the case.
And it's going to be exactly
the same with robots.
What is interesting
for us is that-- It's
like when you're interacting
with people in a public space,
you want to have a standard
behavior with these people that
complies with social rules and
with global rules in general.
And then when you're
in the private space
you might want to have this
robot to interact differently
with you.
And what we are
interested in is trying
to create these public-space
behaviors for our robots
and more personal-space
behaviors.
And all based on these
non-verbal interactions
without having to
specify, please, do that,
do that, and these
kinds of things.
As a roboticist, I'm very
skeptical about my own field
actually.
There's been a lot
of research, there
has been a lot of progress
but we are far away
from science fiction movies
and these kind of things.
So far, I'd love to have
people interacting with robots
as they might interact with
other objects, like loving them
like you may love your iPad
or your computer or whatever
thing.
But I'm not so sure it's going
to happen very quickly because
of this aspect of motion
that is inherent to robots.
We want the robot to move and
that is a very big problem.
Other questions?
Yes.
Up there, up there.
So we'll go from
there and come back.
Hello.
I have a question for Alma.
I'm curious, you made
mention of the Goettingen
mathematical model
collection and said
that there was
some sort of return
to these intuitive models
in mathematical education
through the films.
And so, I'm wondering if you
see a return to the 19th century
ways of understanding?
Is that the aim?
Do you think that's coming next,
a kind of reemergence of that?
I'm a historian so I
don't like to predict.
It's not my job.
But I think that
it's definitely true
that with computer graphics
there was a sort of a return
to this sort of engagement and
understanding, but that's
only part of it.
So that's one way of
telling the story,
that technology came, computer
graphics came, and then people
just kind of picked it
up, and technology--
But I think that's
one level;
it's not the whole story.
Part of it was a reaction.
So like I said, this kind
of idea-- this approach,
which was very common at the
end of the 19th century
and, actually, really early
on in the 20th century,
was really eclipsed by
this abstract mathematics.
And you see that in education.
So the new math, for people
here in the United States--
the new math education,
actually, it
went all the way down to
elementary education,
this abstract approach
to teaching mathematics.
So part of it is really
a reaction to that.
It's really a
reaction, people
trying to go back and
saying, wait, this
is just one way that
you can understand
and study mathematics.
There is more than one way.
So geometry, that higher
dimensional geometry,
obviously, doesn't
lend itself as easily
to this visualized approach.
So part of the
work of Thurston--
That's why Thurston's work is
important: he worked
in two-dimensional topology
and three-dimensional topology,
which are just at
the limit of what
you can still perceive.
So this is why he
was very important.
So there are a lot of levels.
I think that computer
graphics-- it's true,
mathematicians today use
computer graphics a lot.
And to say that this
didn't have an effect,
or doesn't have an effect
right now, on research,
or on the way students are
being taught at any level,
would just not be true.
There was another
question, right?
There's two more questions.
I'd like to have a
question for Alma.
I think your focus from
reality of topological space
to hyper reality
of Euclidean space,
this is, I think,
quite about-- We
need a certain
degree of sensitivity
about selecting our variables
in space at any scale,
not just architectural
space; it may be urban space.
So how do you think-- Let's
assume that we have
really variable design tools.
But how would you recommend
choosing the variables?
I guess, I probably
shouldn't be the person
that you're asking
that question.
You should ask
somebody like designers
but you probably just
got the answer to that.
I mean, actually, I would be
interested to hear your answer.
But I would guess that it's
a lot of trial and error
part of it, but [inaudible].
You have to speak in the mic.
Sorry, sorry, sorry.
I was trying to get you
to answer the question.
I'd like to hear
from Jessica as well.
I can talk about how it's
been done in research but I
don't know-- the goals are
different, I would assume,
from the goals for designers,
so because of that I
think that maybe
Jessica would be able--
Yeah, please, if you like.
I actually somewhat
zoned out because I
thought the question was
going to be for Alma.
Was the question
something
about how we choose what sort
of variables we're exploring?
Yeah.
Space has a lot of
scales, a lot of objects,
a lot of things.
So once we try to understand
the art of some relationships,
I believe, we need to go deep
in terms of some spatial,
let's say, data, which
is similar in its nature.
For example, buildings are
different from trees, trees
are different from
people on the street.
So such categorization
of the variables
is essential-- how
would you recommend,
essentially, to think about it?
I don't know if I'm the
best person to answer
this question
either but, I guess,
I fall into that category
of digital designer
who doesn't specifically
work with scale that much.
In some ways I work
with scale a lot
because I'm materializing
specific objects
for specific uses of extremely
tight material tolerances
so I have to deal
with it on that level.
But at another level
I think about things
more almost in terms of
their dynamics and behavior
than in terms of
what size they're at.
So I might see similarities
between how city street
grids act as network structures
that have redundancies in them
and how that could be
similar to vein patterns
that you see in leaves,
those sorts of things.
And how could you
have, let's say,
a set of partial
differential equations
that could describe those
systems with similar dynamics
even though they have a
completely different way
in which they arise.
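One concrete example of "similar dynamics, different origins" is reaction-diffusion: the same pair of partial differential equations produces spot-like and vein-like patterns whether the grid stands for micrometers of tissue or blocks of a city. Here is a minimal Gray-Scott step, a standard textbook system used purely as an illustration of the idea, not as Nervous System's actual model:

```python
import numpy as np

def gray_scott(n=64, steps=200, Du=0.16, Dv=0.08, F=0.037, k=0.06, dt=1.0):
    """Explicit Gray-Scott reaction-diffusion on an n-by-n periodic
    grid.  u is the substrate, v the activator; the grid's physical
    scale never enters the equations."""
    u = np.ones((n, n))
    v = np.zeros((n, n))
    # perturb a small central square so a pattern can nucleate
    c = n // 2
    u[c - 4:c + 4, c - 4:c + 4] = 0.5
    v[c - 4:c + 4, c - 4:c + 4] = 0.25

    def lap(a):
        # 5-point Laplacian with wrap-around boundaries
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(steps):
        uvv = u * v * v
        u += dt * (Du * lap(u) - uvv + F * (1 - u))
        v += dt * (Dv * lap(v) + uvv - (F + k) * v)
    return u, v
```

Changing only F and k moves the system between spots, stripes, and branching fronts, which is the sense in which one set of equations can stand in for structures that arise in completely different ways.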
So in some ways I completely
don't think about scale.
I'm like, oh, this
giant leather coral
is the same as this extremely
small nano flower that
forms through some process.
So I don't really
have any advice
for you whatsoever
other than just
do whatever makes sense for you.
Hi.
Thank you, thank you.
All your talks were awesome.
And I wanted to ask a
question that probably it's
between Gentiane
and Jessica but I
think the question could
be answered by any of you
or just one of you.
So it seems like, when
we look at-- both of you
are really operating at
the forefront of design
technology in different ways.
And Gentiane, you
were saying that you
face a kind of
constant disappointment
with the interaction
with robots.
And I would say, Jessica,
having seen the dress in person
and here, you know, I think
it's a constant sense of delight
to see this kind of work
literally unfold before us.
So it seems that we
have the technology
but our reaction to it can
be dramatically different
depending on-- it could
be a matter of scale,
but not just spatial
scale, it might be a kind
of conceptual scale issue.
And so I'm wondering, for this
audience that is often involved
in design and computation
and social interactions,
where do you see some of the
key questions, the forefront
of research or exploration
that might really break some
of these constraints
either that produced
a constant sense
of disappointment
or really push,
Jessica, in your case,
to technologies beyond
even what you've already
been able to accomplish?
Thanks.
I wish I had the answer
because if I had it I
could go straight there and try
not to be so disappointed.
But anyway, I think in
robotics, and it's maybe
the same in other fields
where the science fiction has
been abundantly writing
and shouting about,
our expectations
are extremely high.
We're all expecting
robots that are like I,
Robot or even like R2-D2, or
these kinds of things, right?
But we're far away from that.
That's just the point.
We do have part of the
technology to do it
but we don't have all of it.
And robot interaction, as
I was mentioning today,
is also a lot like
human-human interaction.
And there are
a lot of things
that we don't know
in human-human interactions.
And that is also, maybe, one of
the drawbacks and the problems
we have now in robotics,
and that we are facing.
And that's why we're
trying to address right
now the human understanding,
from the psychological,
from the sociological
point of view,
and trying to reproduce,
to model, to gather
all this information, this
data, and to try to find a way
to have a global general
model at the social level,
but also at the individual
level, which might be
different, in the
case of Jessica,
for new fabrics and new
materials, if I'm not wrong.
I'll just say some stuff.
I'm, apparently, not really
good at answering questions.
One thing that I'm
really interested in
is that there are all these
different disciplines which
are exploring computation in
completely different ways.
So you have scientists, like
Mahadevan, who I mentioned,
at the Wyss Institute, who's
really interested in coming up
with theories that allow
him to model things
that he sees in nature,
like the ruffling of a leaf.
And he'll come up with
a mathematical model
but only explore it so
far to the point where
he can describe something
that's seen in nature
and publish like a paper
in Science about it,
or something like that.
And then you have people
in computer graphics
who are developing all
sorts of techniques
that design things
really fast and have
these other interesting
characteristics.
And then you have us
in architectural design
who are just using the
software that people
made for us most of the time.
Not very powerful.
And I'm interested in what
happens when we break down
all these walls between these
powerful computational models
that are being developed
and take those in our hands
as designers and use them
for not just modeling
the natural world, not just
making video games where you
can kill things in
highly realistic worlds
but you're thinking
about how we can
use them to design our world.
So that's one thing
I'm interested in.
I don't know if that
answers your question.
I'm really also interested
in advances in biology
and biological engineering.
So if you think about
all of us as machines,
essentially, we're living
machines that transmit
information in the form of DNA.
And we're just starting
to get to the point
where we can
understand any of that.
And it's so beyond
anything that we have data
for and can measure
because it's so
hard to measure living things.
But people are starting to think
about instead of just printing
materials, what if we
actually create real,
living biological systems.
And that seems extremely
fruitful for thinking
about very powerful ways to
really change everything.
And those are the random
things I'm going to say now.
There is a question
waiting for a long time.
I just want to make a
comment on your question.
Actually, maybe
in Gentiane's case,
the promise was made not
by her but lots of people
before her, 50 years ago, when
science fiction was written.
And maybe Jessica, you
made your own promise,
and you're not
disappointed because you're
answering your question.
And you are trying to create a
human intelligence or some kind
of, yeah, an intelligence.
So it's very complex.
So good luck.
Wait, there's a lot of questions
so let's take them in order.
Do you want-- So there's
one here and then one there.
Yeah, just a specific one.
When you said the
edge is growing,
creating that pretty
form, was there
any explanation on
whether it actually
helps with evolution or strength
or fitness of that plant,
or anything, or creation?
I think that's the kind of
thing that people say.
They're like, oh, things
in nature are optimal
but really all they need
to be is stable solutions
that can exist.
For instance, the sea
slug that I showed,
they actually are, basically,
one of my favorite animals.
They harvest
chloroplasts from algae,
they store them in the
ruffle on their back,
and they use it as
like a solar farm.
It's called kleptoplasty.
That's really neat.
So apparently, they
can use that ruffle
to store lots of
chloroplasts that they've
harvested from algae.
But a
flat leaf, I think,
could probably photosynthesize
better than a ruffled leaf.
Maybe there's some reason
but, I think, the reason
you see that shape everywhere
is because that shape is
very easy to make.
All I need is one rule,
grow more at the edge,
and it makes that shape.
So you see it popping
up all over the place.
And it's not a
function of biology,
it's a function of,
essentially, physics.
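The "grow more at the edge" rule has a clean back-of-the-envelope reading: if the edge grows faster than the flat circumference 2*pi*r can absorb, the surplus length has to buckle out of the plane into ruffles. This sketch uses the hyperbolic-plane growth law C(r) = 2*pi*sinh(r) as one illustrative choice of "grows more at the edge", not the exact growth function behind the work shown:

```python
import math

def edge_length_flat(r):
    # circumference a flat (Euclidean) disk can accommodate at radius r
    return 2 * math.pi * r

def edge_length_overgrown(r):
    # edge that grows exponentially with radius, as in the hyperbolic
    # plane: C(r) = 2*pi*sinh(r), and sinh(r) > r for every r > 0
    return 2 * math.pi * math.sinh(r)

# surplus edge length that cannot lie flat and must ruffle out of plane
surplus = [edge_length_overgrown(r) - edge_length_flat(r)
           for r in (0.5, 1.0, 2.0)]
```

The surplus grows with radius, which is why the ruffling gets wilder toward the rim of a leaf, a coral, or a printed form made with this rule.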
Sure.
Another question.
I guess it's actually a
comment/mash note, which
is, I should say,
I love how-- I feel
like the combination of
the work you talked about,
the sort of great examples of
using data and science to get
outside our puny
little minds, and it's
like there are these invisible
binoculars and new lenses
that the universe has out there.
And when you look at things,
like new ways of thinking
in math or through nature, it's
like you take these things out
of the invisible spaces
where they were and then you
can put them on and look
at things in a new way.
So I think that's really cool
about the work that you do.
Question, Felix?
And I have one more after.
I'm calling dibs.
Thank you for your real
great presentations.
I have a question for Jessica.
In your presentation
and your discourse
you seem to be very
much interested
in the idea of democratizing
your production.
And I was wondering if, apart
from delivering the means
for general public to
design with your interfaces,
if you have any idea or
interest in actually unveiling
a little bit more of what's
happening behind it to,
let's say, a more
technical audience?
That can be in the form
of patents or papers.
Some of your findings
are really cutting
edge, the folding systems and
how to make a 4D printing.
So that's kind of my question.
Have you thought about that?
Do you have any interest on
going in that direction or it's
more like generating the
platform for finite users?
We typically publish a lot
on our website in our blog,
in our portfolio, about how
we implemented these systems.
So we have a lot of details
about its rigid-body physics
simulation and how we had
to simplify it in order
to make it something
simulatable
and what went into that.
We don't go as far as open
sourcing all of our code.
So I think in some ways,
we describe very well
the methods that we're using.
And in the case where
we're referencing
a certain researcher's
work, we'll
call out a specific paper,
like Mahadevan's
paper from this year
that inspired
this, and this thing
by Eitan Grinspun,
whatever it is.
But as of yet, we haven't really
open sourced much of our work.
Part of it's because we're
essentially two people
and we make very specific
tools, which we then make money
off of so we can have a
place to live and eat,
those things, which is not to
say that I wouldn't eventually
open source some things.
We have some projects,
some libraries,
which are intended to
be used by anyone, which
we do publish on our site.
But in general, we try to be
very open about our methods
and discuss all of that so
people can make similar things
without making things
identical to ours
by using the exact
same algorithm.
Other comments?
I just had one last
comment before we break.
Continuing on Felix's
questions, and it might also
be a question that you might
all be interested to answer,
was this notion of
accessibility and the fact
that you mentioned,
for instance,
that one of the main
obstacles for accessing
digital fabrication is the
3D design software that
are seriously impairing
most of the people I know,
including myself, into
reaching to those technologies.
And you were also sort of
grounding your research
within the idea that it's
ultimately for people to use.
Or you were talking about
this intuitive understanding
of the knowledge that justifies
some of the approaches
that you were describing.
So I'm just wondering, do you
want to specifically set up
strategies for breaking that
wall of knowledge obstacle?
Yes.
That's sort of what I
was trying to get at.
So we have, let's
say, mass production,
it was this new
thing that happened.
And suddenly, though, we
had to invest a lot of money
in some sort of tool.
And then we're going to use
that tool a million times
in order to recoup that cost.
So maybe before, your shoes
were handmade by some guy
and they fit you
perfectly, they had
that function of being for you.
And maybe they also made
you happier because they're
exactly what you wanted.
Now you go to the
store and you buy
what is there because that's
what you can afford to buy.
That's what we had to make
because of the limitations
of mass production.
And it might not fit you and it
might not make you super happy
but those are the shoes
you are going to get.
So if part of the
promise of digital fabrication
is we can make anything, why not
make it exactly what we want,
so both in terms of function,
something that performs
to your specifications, and
in terms of what actually you
like, that you enjoy.
So then, yes, we're
trying to make
tools that let you do that.
But the problem is
how do you actually
let people engage in
the design process,
and how you get their input,
and how do you do that
without driving the prices up?
So sure, you can still go
to the handmade shoe guy,
and he'll make your shoes
for $900 but most of us
can't get $900 shoes.
Any thoughts on
accessibility or intuition?
In the case of
robots, accessibility
is still a big problem.
There was this
promise by SoftBank
that they would sell
a very cheap robot
for less than $2,000,
the one that I
showed on one of my slides.
But basically, they were
completely out of scope
when they were planning that
and they didn't even sell it.
And they didn't even plan
to sell it right now.
We definitely need, in our
field, a lot of accessibility
because even now the
robot is still something
from science fiction and
people are not used to it.
And that's why also
I was mentioning
this problem of expectations
that are too high.
If people were confronted with
real robots, as was exactly
the case when the personal
computer started to appear,
people would know their
limitations and just
accommodate to that.
So if we could have
super cheap robots
that people can
play around with,
that would be of great help.
And in terms of
behavior modeling
and these kind of
things, it's just
a software problem which can
be very easily turned around.
And that is not a
very big problem.
In terms of design, I
have great expectations
with the project
I was mentioning,
the metamorph robots and
these kinds of things.
But using new design
technologies we
can have personalized
robots that at least respond
in terms of appearance
and motion abilities
to a certain number of
personal preferences.
And I'm thinking, like Jessica,
that a few decades ago
everything was
custom made just for you,
and now, with mass production,
we have to adjust
to the product we
are buying instead of having
the product adjust to us.
And that would be great
with robots, in particular,
interacting with us, that
we could personalize them
as much as possible and have,
really, your own robot made
just for you.
Thank you so much,
all three of you.
Can we give them another
round of applause?
[applause]
So we're breaking for just five
minutes, just in time
for you to get some coffee.
And then we're getting to
our third panel of the day,
on the urban studies
and big data.
