"Good morning everyone". Thank you for
coming to out four seminar series on
"Circuits, TeraHertz (THz) and Beyond" on behalf of NYU WIRELESS and Electrical and
Computer Engineering Department, it's a
pleasure to welcome you to today's
seminar. "NYU WIRELESS" is
sponsored by a large number of
industrial affiliates supporting our
research in frontiers of wireless
technology. Today, I'm delighted to
introduce our seminar speaker Professor
Mark Rodwell. Professor Rodwell holds an
endowed chair at the University of
California, Santa Barbara. He is an inventor and a pioneer of
several high-frequency and ultra-high-frequency
devices and circuits. He
currently directs a new SRC/DARPA
center, the Center for Converged TeraHertz
Communications and Sensing. So we are excited to learn about
 his research on wireless above
100 gigahertz. Please join me to welcome
Professor Mark Rodwell.
Professor Rodwell: Thank you, it is an honor to be here.
Thank you so much for the invitation. NYU
has a fantastic reputation in the
wireless area. I am privileged to work in
the center with the NYU faculty leaders
and I'm here not only to give this
presentation but to see the excellent
resources and activities in the wireless
area. Okay, rather than putting down a list of about
a hundred co-authors, I put down none, so
please let me acknowledge the many
unacknowledged collaborators who
actually did most of the work that I
will present to you today. Okay, I'm Rodwell at UC Santa Barbara.
The interest here is wireless at carrier
frequencies above 100 gigahertz, and
I'll be discussing a little bit of services,
systems, integrated circuits and
transistors, pretty much soup-to-nuts in
the discussion. Why do this? Well, today,
wireless networks are seeing exploding
demand and are running out of spectral
capacity. The immediate response of the
industry to that is so-called 5G systems
which occupy carrier frequencies
around 30-40 gigahertz, around 60, and
around 70-90 gigahertz, and
these are bringing in increased
available carrier spectrum and also
moderate degrees of beamforming; more on
that later. So the next generation, which
this center is empowered to pursue, is to look at wireless
systems beyond that, and we are therefore
looking at carrier frequencies above 100
gigahertz, bringing us two opportunities.
One is simply much more available
spectrum; the other is that, because the
wavelengths are short, we can do massive
spatial multiplexing. There are
military applications also in imaging, in
sensing, in radar and in
communications. Now all of this talk will
be about the future. That's what we
should concentrate on, but since I'm an
old guy, one slide and one slide only
about the past and how we got here..
because I think it will be relevant a
little bit. From roughly the 1950s
through the 1990s, many groups worked on
GaAs Schottky diodes mounted
in microwave waveguides, and with these
they built frequency multipliers and
mixers for transmitters and receivers
operating in the few-hundred-gigahertz
range, up to about a terahertz today.
These were, and continue to be, widely
used in radio astronomy, in instruments,
and in spectroscopy: measuring and
identifying a gas remotely by its
emission or absorption spectra. I was in
the 80s in a research group, doing a PhD,
where most of the students were in a
community doing terahertz
spectroscopy using picosecond or
femtosecond pulse lasers. So you had some laser
that is enormous, putting out a
train of optical pulses that might be a
picosecond in duration or less, and
one way or another you turned that optical
pulse into an electrical pulse, both in
the transmitter and, for sampling, in the
receiver, and used time-domain
techniques to produce spectroscopy,
absorption spectra of some samples. That
community was therefore, in the 80s,
fond of saying "optics is fast,
electronics is slow." More on that in a moment.
I was rapidly bored with this and so,
within the same group, worked on
producing electrical picosecond pulses,
big ones, several volts in amplitude, using GaAs Schottky diodes in a
distributed circuit form, not in the frequency domain
but in the time domain. These were called
nonlinear transmission lines, and they
were in fact a very rapid commercial
success and were used for about a
25-year period in microwave
sampling oscilloscopes and network
analyzers, now falling into
disuse as other things have become
available. Right! Beyond that, once we were
done with that, I took this as a
declaration of war, as did a few of my
colleagues, collaborators, and competitors
across the world, who I again don't have
time to acknowledge. And so we said to
ourselves, this community: can we build
transistors that work at a terahertz or
beyond? Everyone was telling us
at the time that, if we did,
the transistor had to operate on
new physical principles; it could not work
by charge control of electrostatic
barriers, that would never work at these
frequencies; it had to be something like a
quantum-transition laser.
They also asked, foolishly, can circuit
concepts work at a terahertz? Well, that
one's easy and silly to answer, of course:
circuit theory is just Maxwell's
equations in the limit of things being
small in comparison with a wavelength, so
make it small and of course you can do
that.
Alright !! But the other was a more serious
question: Can we make transistors work at
a terahertz? There's no time for it, but
just as a list of things that we looked
at and sensibly abandoned along the way:
time-domain techniques in the terahertz
using electron devices turn out to be a
really bad idea; we've done it, and because it
doesn't work, we don't publish it. There
was a belief, post September 11th, that this
would be used for concealed-weapons
detection or remotely detecting poison
gas. Both of these are failed ideas, and
I'll be happy to discuss them in the questions
if you're interested. Ok, so what do we
think this is really for? Millimeter
waves will allow us high-capacity mobile
communications as we approach and go
above 100 gigahertz,
providing mobile communications to
people walking around on the ground, or in
cars or other vehicles, at very high
aggregate data rates, with the endpoint
being served by spatially multiplexed
hubs, and the backhaul, the connection
back to the internet, carried by some
combination of optical fiber where you
can run it and, where you can't run
optical fiber, millimeter-wave backhaul.
In addition, the other application in the
area of communications is to the office
and into the home: providing with millimeter
wave a very high capacity backhaul to the
house, or perhaps, a little less
probably, to individuals within the
office. And here it's simply an issue of
business: it provides a convergence of
the business models of the cellular and
the internet companies, so that you get
some competition in your market, some
choice of who's delivering you the
internet, and hopefully lower cost and
broader deployment. The second
application of high frequency systems is
imaging through fog, clouds, dust and
smoke, so that you can drive at breakneck
speed in heavy fog without killing
yourself or others, or to complement
things like autonomous vehicles.
So one thing we're very interested in is high-definition, video-resolution radar at frequencies in the
several hundred gigahertz range, where
the radar is good enough to give you a
TV-like picture, so that you can not
only say, hey, something's out
there, which a lower frequency system
will give you, but you can see what it is.
So that would complement radar at lower
frequencies, which gives you longer
range and object velocity resolution.
There's something beside the side of
the road; then you fire up your
millimeter wave radar and you get a TV-like
picture, and you can say, fine, it's a
light post, it's supposed to be by the
side of the road, or it's a child sitting
there. Okay, so that's the trade-off and
how these things complement each other. In addition,
those high-frequency imaging systems
will allow you not only traffic
coordination, but the
high-resolution imaging will allow you
to provide the vision systems that
image and control autonomous cars,
and yet work in conditions where LiDAR
doesn't work: it's raining, it's foggy,
it's cloudy. So in the interest of
time I won't repeat it, but exactly the same
considerations come in for
imaging in military and national
security applications: lower frequency
systems will provide that longer range
and tell you something's there; higher
frequency systems will have lower range
but will give you a picture of what
you're looking at, so you'll know what
you're dealing with and what to do about
it. Okay, so what are the benefits and
challenges of the high frequency systems?
Let's be realistic, not delusional.
There's enormous available bandwidth
between DC and, say, 300 gigahertz. That's
good, but under heavy rain or fog, or
even on a sunny day if the humidity is
high, the attenuation can get up into
the range of 10 or 20 dB per
kilometer. The attenuation is high,
so these are inherently short-range
systems, not long-range. The wavelength is
very short. What that means is that in an
aperture of a given area, you can form
beams of very narrow angular beam
width, and you can produce many of them.
So you can have massive numbers of
channels in a base station that is
communicating with many communication
partners; that is one technique. The second is, you can
have an array of transmitters, each
transmitting an individual
high-speed data stream, and an
array on the receiver; think of that
receiving array as like a TV camera. If that
array is big enough, from basic
diffraction limits, to have sufficient
angular resolution to tell one
transmitter from another, that is
sufficient for it to independently
recover the data streams from each
individual transmitter. So, within the
limits of diffraction, we can get
multiple parallel radio links on a
point-to-point link to increase
capacity, and the higher the frequency,
the shorter the wavelength, the more of
these parallel transmitters and
receivers we can pack in, and the larger
the capacity we can get. That's why
high frequencies become important.
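As a rough numerical sketch of that scaling (my own illustration, with assumed aperture sizes, not numbers from the talk): the diffraction-limited number of independent spatial channels between two square apertures is of order the product of the aperture areas divided by the square of wavelength times range.

```python
import math

def spatial_channels(aperture_side_m, wavelength_m, range_m):
    """Diffraction-limited spatial degrees of freedom between two square
    apertures: DoF ~ (A_t * A_r) / (wavelength * range)**2."""
    area = aperture_side_m ** 2
    return (area * area) / (wavelength_m * range_m) ** 2

c = 3e8  # speed of light, m/s
# Two assumed 1 m apertures over a 500 m link: channel count grows as f^2.
for f_ghz in (75, 140, 340):
    lam = c / (f_ghz * 1e9)
    print(f"{f_ghz} GHz: ~{spatial_channels(1.0, lam, 500.0):.1f} channels")
```

Doubling the carrier frequency quadruples the channel count for fixed apertures, which is the argument for short wavelengths.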
Now, some of the challenges: we have
terribly high attenuation, one, because of
the atmosphere, and two, just because of
elementary diffraction limits on radio
wave propagation. The shorter our
wavelength at a given range, the smaller
our received signal, which scales as λ^2/R^2 for fixed-gain antennas.
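That λ^2/R^2 statement is just the Friis transmission formula with fixed-gain (for example, isotropic) antennas; a quick sketch of the penalty, with my own illustrative frequencies and range:

```python
import math

def friis_fraction(wavelength_m, range_m, gain_tx=1.0, gain_rx=1.0):
    """Friis: Pr/Pt = Gt * Gr * (wavelength / (4*pi*R))**2.
    With fixed antenna gains, received power falls as wavelength^2 / R^2."""
    return gain_tx * gain_rx * (wavelength_m / (4 * math.pi * range_m)) ** 2

c = 3e8
# Moving from 75 GHz to 300 GHz (4x the frequency) at a fixed 200 m range:
lo = friis_fraction(c / 75e9, 200.0)
hi = friis_fraction(c / 300e9, 200.0)
print(f"extra loss at 300 GHz vs 75 GHz: {10 * math.log10(lo / hi):.1f} dB")
```

That 12 dB penalty is what the arrays must buy back with array gain on both ends of the link.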
So we need to recover our signal
strength, so we must use multiple
transmitters and multiple receivers,
with the phasing set correctly
so that the signals add up in phase; that is
known as a phased array. We must
have phased arrays to get even possibly
acceptable transmission range. The other
point to understand is that in a radio link
between two points, the useful radio
energy is carried in what's called the
first Fresnel zone, and the area of the
first Fresnel zone is approximately
equal to the distance from the
transmitter to the receiver
times the wavelength. So as you go up in
frequency, that Fresnel zone becomes
smaller and the beam is more easily blocked,
so you must have ways of rerouting the
signal information from one point to the
next; a network that reroutes your signal information
is called a mesh network. So in
talking about these technologies, we are
talking about systems with massive
communication capacity, where the
hardware is inherently, incredibly
unreliable, and you must have good
network management algorithms to deal with that.
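The blockage point can be made quantitative with the textbook first-Fresnel-zone radius, r = sqrt(λ·d1·d2/(d1+d2)); the frequencies and link length below are my own illustration:

```python
import math

def fresnel_radius_m(wavelength_m, d1_m, d2_m):
    """First Fresnel zone radius at a point d1 from the transmitter and
    d2 from the receiver: r = sqrt(lambda * d1 * d2 / (d1 + d2))."""
    return math.sqrt(wavelength_m * d1_m * d2_m / (d1_m + d2_m))

c = 3e8
# Clearance needed at the midpoint of a 200 m link:
for f_ghz in (2.4, 140, 340):
    r = fresnel_radius_m(c / (f_ghz * 1e9), 100.0, 100.0)
    print(f"{f_ghz:5.1f} GHz: keep ~{100 * r:.0f} cm clear at midpath")
```

At 2.4 GHz the zone is meters across; above 100 GHz it shrinks to tens of centimeters, so a single person or vehicle can block the link entirely, hence the mesh rerouting.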
Okay, so now let's talk briefly about the
applications, concentrating here on
frequencies between 140 and 340 gigahertz. All of
these I've already mentioned to you; I
will just now take you through them in a
little greater depth. So the first of
these is spatial multiplexing in, call it
a base station or a hub: the base station
for your cell phone, where each of these
endpoints is an individual person walking around
with a cell phone, for example. Okay,
so the idea of spatial multiplexing is
that you are carrying multiple independent
beams. It's neither frequency- nor time-division
multiplexed; it's spatially multiplexed.
Each beam is carrying different data,
each beam is independently aimed, each is
operating in the same spectral bandwidth, so
you are reusing the spectral capacity
you have available to you, and you can
have, at least on paper, as many beams as
you have elements in your array.
The systems theorists are working out now what in
fact is practical, probably of order 50%.
And the argument for doing this at high
frequencies is that the array becomes small:
if we go to 220 gigahertz and put
elements at the half-wavelength spacing
that you'd have for an array that is
looking over a hemisphere, then if we
have a thousand elements, which would be
a thousand beams, that array is still
only about three square inches. Okay, that's the
reason for going to higher frequencies:
the wavelength is short, so that we
can support a massive number of beams from
a small array, and the hardware is a
multi-beam phased array. The second application,
repeating in more detail what I've just
said to you, is a spatially multiplexed
point-to-point link. We're not the
first to have done this, but this is an
early experiment that Professor
Madhow and I did,
I think, gosh, 12 years ago, with a 4-channel
link at 60 gigahertz. The idea is
you have an array of transmitters, each
spitting out an independent data stream,
and then you have an array of receivers,
a phased array, and the width
of this array in wavelengths is
sufficient that it has enough angular
resolution to tell one
transmitter from another; if so, it's
simply a matter of signal processing to
sort out the individual data streams.
Given the angular resolution set by the
widths of these arrays compared to the
wavelength, in a square array the number of
channels goes in proportion to the
square of the ratio of the aperture area
to the product of wavelength and
distance. So the key is that by going to high
frequencies, meaning short wavelengths,
you can build an array like this of
reasonable size and support a massive
number of channels. Again, the third
application is high-frequency imaging,
and here what I would comment is that a lower
frequency radar might give you a
classic picture like this, of different
objects at different ranges as a
function of the angle or direction
you're looking. Well,
that's fine, that tells you something's
out there, but here you are in
your BMW 5 Series, it's heavy fog, and
you really want to drive 80 miles an
hour on the freeway, but what you can see
is the car about 10 feet away and not
much else. What you want is radar with
sufficient angular resolution that maybe
you have a heads-up display, and it just
fires up and gives you a picture like
the one your eye would see. Okay,
so that's just the most elementary point:
the angular resolution of an array is
the wavelength divided by the width of
the array, λ/D, and if we imagine something
that we can put in the nose of a car
behind the radiator grille, that's maybe
30 centimeters, aka 1 foot; if we pick 300-some
gigahertz, which is a wavelength of
about a millimeter, then we get roughly a
tenth of a degree angular resolution, so
we can get a nice high-resolution image
from this system, one that we could fit
behind the radiator grille of a car,
on a small plane, or, with some loss in
angular resolution, on a UAV. That's a big
military application: reasonably high
angular resolution on something that
weighs about a pound and is about that
big, so you've got about that much size
to build an array, ok! So here are the
target demonstration applications for
our center, designed to test the
feasibility of these concepts in an
extreme limit. The first
is to build a spatially multiplexed base
station at a hundred and forty gigahertz,
and the idea is you have a telephone
pole here, and on the top of the
telephone pole or the light pole is a
cellphone base station. The cellphone base
station might have four faces on it,
north, south, east, west, facing to talk to
users. We might want to talk to a total
of 1024 users, which would be
256 users per face. In the
end, my systems colleagues and experts are
telling me that it's not feasible to
support as many beams as the number of
elements; they're tending to tell us
right now that 50 percent is a good
number.
Nevertheless, because I haven't updated the
slide, let's use a hundred percent, and
you will forgive me for that. Okay! So
I've got an array here, and the array is
a linear phased array; each element
has about a 20 degree up/down beam width,
so that covers everybody that's walking
around; there's nobody in the sky, there's
nobody in a hole in the ground.
Okay, you just need to steer in the plane
people are going to be in, and then this array
has got about 250 elements laterally to
allow me to steer laterally and to pick
off you versus you versus you versus you.
Okay! So, excuse me, I need to
update this slide: if I'm doing
250 beams at 10 gigabits per second per
beam, at 200 meters range,
and it's raining at a rate of 2 inches,
50 millimeters, per hour, when you probably
should be indoors, but nevertheless, let's
do the link budget. The handset
in the phone is an 8 by 8 array,
meaning it's about a centimeter by a
centimeter; that'll fit. I'll show
you the link budget and it's feasible,
but the real questions are: what's the
required component dynamic range, and
what's the required complexity of the
back-end beamformer? Nevertheless,
if you just apply the Friis transmission
formula and elementary radio-link signal
processing formulas, what you come up
with, if you provide
industry-characteristic conservatism in the
design, a total of about 20 dB of safety
margins distributed over manufacturing,
aging, loss in your packaging, and design
variations, and if your receiver has a 3 dB
noise figure, is that you need to radiate about
40 milliwatts per element from the
transmitter. That can be done, and it
can be done easily; many of us have
built amplifiers at this frequency that
will do that, so
the link budget is feasible.
Okay! How about 75 gigahertz? A bit more
rational: if we move to a 75 gigahertz
carrier frequency and keep the same power
numbers, the range goes up from 200 to
300 meters, but because the wavelength is
longer, the array in one's handset goes
from being nine by nine millimeters to
sixteen by sixteen millimeters, and
you've got a fear that it won't fit. Okay!
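The handset-array dimensions quoted follow directly from half-wavelength element spacing; a quick check of the arithmetic, as my own sketch:

```python
def array_side_mm(n_per_side, freq_ghz, spacing_wavelengths=0.5):
    """Side length of a square array with n_per_side elements placed
    spacing_wavelengths (default lambda/2) apart."""
    wavelength_mm = 3e11 / (freq_ghz * 1e9)  # c = 3e11 mm/s
    return n_per_side * spacing_wavelengths * wavelength_mm

# An 8x8 handset array:
print(f"140 GHz: {array_side_mm(8, 140):.1f} mm per side")  # ~9 mm
print(f" 75 GHz: {array_side_mm(8, 75):.1f} mm per side")   # 16 mm
```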
So you need bigger arrays. Or if you say,
fine, I'll keep the array the same size
at 75 gigahertz, now it's less directive,
you lose signal-to-noise ratio because
of antenna gain, and you're back to about
the same range. So, interestingly enough,
from the propagation physics point of
view, 140 gigahertz and 75
gigahertz are about the same. 75 gigahertz will
happen first, because right now it's
easier to build power amplifiers and
low-noise amplifiers at 75 gigahertz
than at 140, but 140 will come. Key in my
presentation: the goal in this center
is not to do 5G, others are being
paid to do that; it's to look at 6G. So we
should not be looking at systems at 75
gigahertz, because there's already
extensive examination of those issues.
The two questions we need to answer in
the center are: can we really do massive
MIMO, can we do hundreds of beams instead
of 10 per aperture, and what's the
highest carrier frequency that
turns out to be useful for these various
applications? Is that clear? Now, those are
orthogonal: if we end up proving that we
can do a couple hundred beams and the
right answer is 80 gigahertz, we would still
have succeeded, clear? But the interesting
point here is that on link analysis it's kind
of a wash, until you look at the present
state of hardware, whether you're going to do
it at 75 or 140. Let's kick it up to 340
gigahertz for a second and now let's
look at that point-to-point backhaul
link. Okay, and I'm watching time, I've got to
speed up. We can use a linear array, or we
can use a square array. If I use the
linear array and I say that the array is
going to be about 1.6 meters long, I've
got a range of about 500 meters; with it,
it is readily feasible to do about 80
gigabits per second per element of this
array, which is 640 gigabits per second
for the overall link.
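The aggregate-rate arithmetic, as a sketch:

```python
def aggregate_gbps(n_elements, per_element_gbps, polarizations=1):
    """Total capacity of a spatially multiplexed link: one independent
    stream per element, optionally doubled with dual polarization."""
    return n_elements * per_element_gbps * polarizations

linear = aggregate_gbps(8, 80)    # 8-element linear array
square = aggregate_gbps(64, 80)   # 8x8 square array at the same spacing
print(linear, "Gb/s linear;", square, "Gb/s square (~5 Tb/s)")
```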
And that's with a linear array 1.6
meters long; that's easy to fit on a
telephone pole or light post. It's a bit
less physically convenient to have a
rectangular array that's 1.6 by 1.6
meters, but now, at the same element
spacing, I've got a square instead of a
line, I've got 64 elements instead of 8,
I've got eight times more capacity, and I
can now build a radio link at about five
terabits per second, at half a kilometre
range, and that's with a single polarization;
throw in two polarizations and you double it
again. Are these numbers
absurd, am I smoking crack? I don't know!
Let's do a link budget analysis again.
I'm going to give myself a total of 20
dB of safety margins, just like industry
would, for all of these things, and if my
receiver has a 4 dB noise figure, I need to
radiate 80 milliwatts per power amplifier
at 340 gigahertz. That's beyond the
state of the art, but not by much: my
group has done 20 milliwatts at 300
gigahertz, other groups have done 30, so
it's within a factor of 2 or 3 of what's being
done already. So I'm not smoking crack
from the hardware RF budget point of
view, but what about the back-end signal
processing and other issues? So: if we go
with a linear array, we need 80
milliwatts per element; if we go to a
rectangular array, not only do we get
larger capacity, but because the
aggregate arrays are larger and hence
more directive and hence more sensitive,
the required power per element is about
10 milliwatts per array element. We've
got a lot of transmitters; overall
we're transmitting 10 watts, but only 10
milliwatts per power amplifier. That's a 5 terabit
radio link over half a kilometer, okay,
and again, this is computed at half a
kilometer range when it's raining at 2
inches per hour, so we're not doing a
ludicrously optimistic analysis. Why do
it at 340 gigahertz? Why not do it at
140 gigahertz? The atmospheric
attenuation becomes less, and the
required power goes down to
about two milliwatts for the linear
array, and even less than a milliwatt per element for
the rectangular array. The only catch in
going to lower
frequencies is that the length of the
array goes from being about one and a
half meters to being two and a half
meters, so physically it's becoming
less convenient. So in the design of this
system it's a trade-off between
compactness and link budget. Okay, I
doubt you would want a 2.6 by
2.6 meter rectangular array on
the top of your telephone pole; it's
getting to be a monstrosity. Okay, now
let's look finally at the case of the
imaging radar. Okay, we want to drive at
seventy-seven miles an hour in heavy fog,
or we want to fly a military aircraft in
the smoke and fog associated with a
battlefield, and we don't want to crash
into things. The short wavelength will
give us TV, or maybe even
high-definition TV, like resolution. I'll
show you the link budget, which again
looks reasonable; the real
challenge is complexity. If we build a
classic phased array, an N by N
array with N squared elements, the number
of pixels in our TV-resolution image will be
the same, so if we want, say, a megapixel
display, we need to build an imager with
a million RF channels...
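The channel-count arithmetic, as a sketch: a classic N-by-N phased-array imager needs one RF chain per element, so the RF channel count matches the pixel count, while a hardware-efficient architecture that steers one axis by other means needs only one chain per row.

```python
import math

def rf_chains(pixels, one_chain_per_row=False):
    """RF chains for a square imager with `pixels` resolution cells:
    N*N for a classic phased array (N = sqrt(pixels)), or N if one
    axis is steered without per-element electronics."""
    n = round(math.sqrt(pixels))
    return n if one_chain_per_row else n * n

print(rf_chains(1_000_000))                          # classic: 1,000,000
print(rf_chains(1_000_000, one_chain_per_row=True))  # reduced: 1,000
```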
That's not affordable, so the real
challenge is hardware-efficient imaging,
so that the number of required RF
channels is vastly less than the number
of pixels. Now, I'm introducing no new
ideas here; I'm just going to show you
one of many; every one of these is an
established technique in the
millimeter-wave community. But one of
them is to use what's called
frequency-scanned imaging. In this case
what one does is one has a linear array;
the laser's dying, so allow me to just
jump and point, I'm not very dignified
anyway, so here we go.
Alright, there's a linear array, and that
linear array points up and down. Okay, we
steer in the horizontal direction by
shifting the transmission frequency by a
few percent and then bouncing the beam
off something whose reflection angle
depends on the frequency. I've drawn
this, to be conceptually simple, as a lens
and a diffraction grating; the reality is
you merge those two objects into a
common object, something like a Fresnel lens.
Okay, so you steer vertically with phased
array techniques, you steer laterally by
frequency scanning, and suddenly you
don't need N squared elements;
you need N elements for
an N by N array. Okay, now again I do the
link analysis. I use the radar range
equation, which is well known. Let's say we
want to see a soccer ball at 300 meters
range, roughly the size of a child, while
driving at freeway speeds in heavy fog;
we give ourselves about 10 dB margin and
we want 10 dB signal-to-noise ratio, with a
6 dB noise figure on the receiver; we
need about 40 milliwatts per element at 300
gigahertz. That's done; we can build this.
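A sketch of that analysis with the standard radar range equation; the power, gain, bandwidth, and radar-cross-section values below are my own illustrative assumptions, not the slide's exact budget:

```python
import math

def radar_snr_db(pt_w, gain_db, freq_ghz, rcs_m2, range_m,
                 noise_fig_db, bandwidth_hz, temp_k=290.0):
    """Single-look SNR from the radar range equation:
    Pr = Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4),
    noise = k * T * B * F."""
    lam = 3e8 / (freq_ghz * 1e9)
    g = 10 ** (gain_db / 10)
    pr = pt_w * g * g * lam ** 2 * rcs_m2 / ((4 * math.pi) ** 3 * range_m ** 4)
    noise = 1.38e-23 * temp_k * bandwidth_hz * 10 ** (noise_fig_db / 10)
    return 10 * math.log10(pr / noise)

# Assumed: 4 W total radiated (e.g., 100 elements at 40 mW), ~60 dB aperture
# gain (roughly a 30 cm aperture at 300 GHz), a soccer-ball-class target
# (rcs ~ 0.01 m^2) at 300 m, 6 dB noise figure, 1 GHz bandwidth.
print(f"SNR ~ {radar_snr_db(4.0, 60, 300, 0.01, 300, 6, 1e9):.0f} dB")
```

Roughly 20 dB comes out under these assumptions, enough for a 10 dB detection threshold plus a 10 dB margin.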
Okay, so that will give you not a
high-definition TV image; that'll give
you sort of VGA, for those that
remember VGA resolution, but that's
pretty good. Okay, so that's why we want
high frequencies. Okay, let's talk about
transistors; I'm showing you a
few transistors from my group. So my
assertion concerns the higher frequency
architectures: when talking to many of my
colleagues in the silicon IC
design community, they say, gosh, you're nuts,
you're going to use
these impractical compound
semiconductors that will never be used
in wireless links. You've got them in your
pocket: of all 50 billion transistors,
99.99999% are silicon, except
two or three, and those two
or three are the power transistors on the
transmitter, and they're gallium arsenide
HBTs. So my assertion is that, except for the
shortest range, what has been happening
in the past will continue to happen in
the future: we will have VLSI in every
aspect, including beam forming, of the
signal chain of the transceiver, but if
we want anything but the shortest
range, the shortest distance, we will have,
as has always been the case, compound
semiconductors in the power amplifiers
to get more output power at a decent
efficiency, and possibly as well compound
semiconductors in the receiver, to get
lower noise in the receiver for a lower
noise figure. And I'm saying that's
absolutely nothing new. Here I'm
showing you a slide that represents our
plans within our center; that's not
propaganda, it states my view of how
things are likely to evolve. Okay, the
transmitter will have the MIMO beamformer;
whoops, that was the one button
you shouldn't push, that's the off button,
and the laser's kind of there. The MIMO
beamformer is
probably done digitally; we're working with
baseband I and Q and doing IQ up-conversion
up to the RF carrier. If it's
at either 140 or 220 gigahertz, silicon
can do that directly, and if the range
is short, we go directly out onto
antennas; if we need a bit longer range,
we bring in either indium phosphide or
gallium nitride power amplifiers to give us
more output power and hence more range.
If we want to work at 340 gigahertz, or
indeed a higher frequency, that's
probably a higher frequency than we can
up- and down-convert in silicon with
decent performance, so we're probably
going to do the final stage of frequency
conversion, say from a couple hundred
gigahertz to 300 gigahertz, in the
compound semiconductors as well; or maybe
we will see if we can do that final
frequency conversion in silicon. The receive
chain is pretty much the same story; I
won't repeat it in the interest of time,
I think you get the picture. And here
I'm just repeating the
numbers that I've already shown you on a
few of the different systems: the
discrete device technologies for
the power amplifiers would be gallium nitride or
indium phosphide bipolar, and for the low
noise amps it'd be indium phosphide
field-effect transistors, known as
HEMTs, high electron mobility
transistors. Let me talk briefly about
millimeter-wave CMOS; I'm not
self-centered, I just could grab these
quickly. Okay, so a number of groups,
probably better than mine, but also mine,
have built amplifiers in CMOS at
frequencies between 100 and 200
gigahertz. If you're working in 45
nanometer SOI CMOS, which actually turns
out to be about the best you can get, you
can get about 6 dB gain per stage at 140
gigahertz, so you can do amplifiers in
CMOS to 140, 150, 160; as you get towards
200 gigahertz, it gets harder and
harder and harder, ok.
Got it, so we'll just use a better
scaling generation? No, you won't; Moore's
law has been over for a while.
That's nice jargon; what do we mean? The
gate dielectric cannot be further
thinned without tunneling leakage
becoming excessive, and that's a
well-known limit to the scaling of VLSI.
What that means in the context of RF is
that if you can't thin the gate dielectric, you
cannot increase the on-current and you
can't increase the transconductance
of the transistor per unit gate width, so
gm saturates. Then you say, fine, I'll
just reduce the gate length; and 15-20
years ago, reducing the gate length would
have reduced the gate-to-channel
capacitance and the capacitance would
have gotten less, but today somewhere around
70 percent of the capacitance isn't
between the gate and the channel; it's
just between the metal electrodes, the
gate, source and drain: the inter-electrode
capacitance. So shortening the
gate length doesn't significantly reduce
the capacitance, and you can't increase
the transconductance; you're done. The
ratio of transconductance to capacitance,
which is the current-gain cutoff
frequency, isn't going to increase, and at a
transistor level it's about 350 to 400
gigahertz; when you wire up the multiple
transistor fingers that you need in an
RF circuit to impedance-match to 50 ohms,
all of that wiring makes it even worse,
and you're in the range of about 250 GHz.
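That argument can be sketched as f_T = g_m / (2π C_total): if g_m per unit gate width is pinned by the gate dielectric and roughly 70% of the capacitance is inter-electrode parasitics, shortening the gate barely moves f_T. The transconductance and capacitance values below are illustrative round numbers, not measured device data:

```python
import math

def ft_ghz(gm_ms_per_um, c_gate_ff_per_um, c_parasitic_ff_per_um):
    """Current-gain cutoff frequency, f_T = gm / (2*pi*(Cgate + Cpar)),
    with everything normalized per micron of gate width."""
    gm = gm_ms_per_um * 1e-3                                       # S/um
    c_total = (c_gate_ff_per_um + c_parasitic_ff_per_um) * 1e-15   # F/um
    return gm / (2 * math.pi * c_total) / 1e9

# Assumed: gm ~ 1.5 mS/um, 0.2 fF/um gate-channel C, 0.45 fF/um parasitics.
baseline = ft_ghz(1.5, 0.2, 0.45)
halved_gate = ft_ghz(1.5, 0.1, 0.45)  # halve gate length: only Cgate shrinks
print(f"f_T: {baseline:.0f} GHz -> {halved_gate:.0f} GHz after gate scaling")
```

Halving the gate length buys under 20 percent in f_T once the parasitics dominate, which is why the transistor stalls in the 350-400 GHz range.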
Okay. It gets worse: the wiring
resistances and the metal stack reduce
the power gain of the transistors. What
about FinFETs?
Well, maybe that's better, but the way
that FinFETs are made today, the way
the source and drain electrodes
are formed, makes these inter-electrode
capacitances even worse, so it
seems unlikely that the MOSFETs designed
for VLSI will support circuits much
above a couple hundred gigahertz. Okay, as
I've stated, nothing new here: a lot of
Wi-Fi base stations and most cell phones,
I think all cell phones, use compound
semiconductor power amplifiers today, in
the few gigahertz range. Millimeter
wavelengths need low noise on the
receiver, they need high output power on
the transmitter, and they need high power-added
efficiency to go with that high
power. Three chips, either from myself
and my students or from collaborators:
here's a half watt at 90 gigahertz, two
tenths of a watt at 200 gigahertz, and a
couple of milliwatts at nearly 600
gigahertz, in this case in indium
phosphide technology. Okay,
the other technology that's coming on
hard; these fellows are frenemies,
they are my competitors at an individual
group level, but they are my esteemed
center members as we try to execute the
center vision, is gallium nitride, which
is the premier power technology,
currently, at low millimeter-wave
frequencies, producing an astonishing eight
watts per millimeter of transistor
periphery at 94 gigahertz, and they are
pushing hard to get reasonable fractions
of that at 140 and 220 gigahertz. So that
brings the potential for whacking out a
couple of watts or more at sort of thirty
percent efficiency, even at a couple
hundred gigahertz. Meanwhile, I will
persist in my work on indium phosphide.
So here is indium phosphide at the 130
nanometer node, where my collaborators
and friends at Teledyne have produced
transistors with power-gain cutoff
frequencies of about 1.1 terahertz and
current-gain cutoff frequencies of about
500 gigahertz.
That's Teledyne; my students a little
later produced transistors with almost,
but not quite, the same bandwidth. Okay.
So here are a few integrated circuit examples, a couple of which I've already shown you. Here from my group is a two-tenths-of-a-watt power amplifier at 200 gigahertz; here's a 15 milliwatt medium power amplifier at 300 gigahertz; and I wasn't involved in this except in planning the program, but it is the outcome of the DARPA terahertz program: this is an integrated 600 gigahertz transceiver with a phase-locked loop for the local oscillator, frequency up-conversion and gain, and a medium power amplifier at what was designed to be 600 gigahertz and came out just a little bit low, I think, but the output power is about 1 milliwatt from it, okay..
Watching time, trying to move rapidly... I wanted to give you the whole picture of terahertz technology; my presentation may be over-ambitious because we're covering every subject subarea, but it's fun, let's see if I can get through it in time. Let's understand how to build high frequency transistors. This was a question we asked ourselves... myself, my collaborators, my competitors, a small community of people who were saying baloney... don't tell us transistors will never work at a terahertz, that man will never live at that speed, that you've got to use quantum transitions or give up... you know, wait, let's look at it: what sets the frequency limits of transistors... something strange has happened to the software here, okay..
And what are the limits? If we have a depletion region, we have a capacitance, epsilon A over D... PowerPoint is playing tricks and I've lost the figure here, but in addition to a depletion capacitance, we have a... sorry, I'm just losing it here because the slide content is lost, it's strange... let me just try to remember what's on the slide; you can see there's half of the image I want, and spontaneously something that doesn't belong here at all has appeared there... okay. We have a depletion layer that gives us a capacitance, epsilon A over D. We have a carrier transit time, which is the image that's missing, which is moving electrons across it; let's forget fine details... we've got a transit delay given by distance divided by velocity. So low capacitance means thick; low transit time means thin. Sounds like a contradiction. What people don't know, what is a little bit less familiar, is that space-charge limits in that depletion region will also limit the amount of current you can run through it, and the maximum current goes as the inverse square of the thickness. We have resistance; everybody knows there's resistance from the bulk resistivity of the semiconductor. Fewer people know that we have contact resistance between the semiconductor and the metal, and fewer people still know this very important point: that contact resistance, not the bulk resistivity, is dominant in high frequency and nanometer devices, okay. In addition, in the transistor we have fringing capacitances between the electrodes. Surprisingly, you might not think of this, but even in terms of transistor scaling, particularly for bipolar transistors, we've got to calculate the heat rise in the junction, and the variation of the thermal resistance with geometry is also one of our scaling laws... okay.. all right !
Finally, and I won't touch on it except to simply acknowledge its existence in any nanometer terahertz device: another scaling limit is that we are trying to push so much current through the transistor that we're running out of available quantum mechanical states to put the electrons in. This is the density-of-states limit, and we see those limits in all of our transistors... okay. That's too hard for this audience... let's make it easy. Here's a PIN photodiode: I've got a diode of a certain bandwidth, and I want to make the diode twice as fast, okay. So I thin the depletion region by a factor of two to reduce the transit time by a factor of two. That's good, okay, but because the capacitance is the area divided by the thickness, I just made the diode capacitance twice as big. I didn't want it twice as big, I wanted it half as big... so I've got to make the area four times smaller.
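The arithmetic of this game can be sketched as follows. The material constants, load, and starting dimensions below are illustrative assumptions (a roughly InGaAs-like permittivity and saturation velocity), chosen only to show the scaling, not to model any particular diode:

```python
# Sketch of the PIN photodiode scaling game: halve the thickness,
# quarter the area, quarter the contact resistivity -> half the delay.
EPS = 1.2e-10       # assumed permittivity, F/m (about 13.6 * eps0)
V_SAT = 3e5         # assumed saturation velocity, m/s
R_LOAD = 50.0       # external load the diode must drive, ohms

def diode_delay_s(thickness_m, area_m2, rho_contact_ohm_m2):
    """Transit time plus RC charging time of a PIN photodiode."""
    tau_transit = thickness_m / V_SAT                # d / v
    c = EPS * area_m2 / thickness_m                  # C = eps * A / d
    r = rho_contact_ohm_m2 / area_m2 + R_LOAD        # contact R + load
    return tau_transit + r * c

before = diode_delay_s(2e-7, 1.6e-11, 1e-11)
after  = diode_delay_s(1e-7, 4e-12, 2.5e-12)   # d/2, A/4, rho_c/4
# 'after' comes out very close to half of 'before'
```

The thinner depletion layer alone would have doubled the capacitance; quartering the area and the contact resistivity is what brings the total delay down to half.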
I've also got a resistance: I've got a resistance from my top contact, divided by the area. The area's gone down by a factor of four, so the contact resistance per unit area has got to go down by a factor of four. So: two-to-one thinner, four-to-one smaller area, four-to-one less resistance per unit area in my contacts. And I'm running out of time, so I'm going to skip the issues of the bottom contact, except to say that my access resistance is inversely proportional to the length of the stripe, so I have two choices in reducing that area by a factor of four: I can reduce the width, or I can reduce the length. What we just learned is I can't reduce the length, so the width has got to go down... by a factor of four, there we go. To double the bandwidth, I've got to reduce the thickness by a factor of two, I've got to make my contacts four times better, I've got to reduce my width, my lithographic line width in my lithography system and my processing, by a factor of four, I've got to keep the length the same, and, I skipped it in the interest of time, so just give me a mulligan on that, I've got to quadruple the current density, okay. I don't have time to do this for all devices, but I'd just like to lead you through the style of that game. So, I've got about ten minutes
left, so here is the bipolar transistor, and I have expressions for the base transit time, the collector transit time, the collector capacitance, the maximum current, the temperature rise, and the resistances. If I want to double the speed of the transistor, I need those transit delays to go down by a factor of two; I need the capacitances to go down by a factor of two; I need the resistances to stay the same; I need the current to stay the same; I need the temperature rise to stay the same. In the interest of time I won't do it all... I will just refer you to the simpler game I played here; it's the same game, just slightly more complicated, fair enough. I look at those terms like that, and I'll just lead you through a little bit of it, okay: base transit time, the upper line's got to go down by a factor of two, so the base has got to get square root of two thinner; the collector transit time's got to go down by a factor of two, so the collector needs to be two times thinner; the capacitance needs to go down by a factor of two, but the thickness went down by a factor of two, therefore the area must go down by a factor of four, okay... I'm going to stop now in the interest of time, but can you see how easy this game is to play, okay?
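A minimal sketch of the same game, generalized to an arbitrary bandwidth factor k. The exponents simply encode the rules just stated (diffusive base transit scaling as thickness squared, collector transit scaling as thickness, C = eps*A/T, space-charge-limited current density scaling as 1/T^2); they are a summary of the talk's argument, not a device model:

```python
def hbt_scaling(k=2.0):
    """Scaling factors needed to multiply an HBT's bandwidth by k (a sketch)."""
    return {
        "base_thickness":      k ** -0.5,  # base transit ~ T_b^2 -> 1/sqrt(k)
        "collector_thickness": 1 / k,      # collector transit ~ T_c -> 1/k
        "junction_area":       1 / k**2,   # keeps C = eps*A/T_c falling as 1/k
        "emitter_width":       1 / k**2,   # length is fixed, so width takes it all
        "current_density":     k ** 2,     # J_max ~ 1/T_c^2 (space-charge limit)
        "contact_resistivity": 1 / k**2,   # keeps R = rho_c / A constant
    }

factors = hbt_scaling(2)   # the "double the bandwidth" generation
```

For k = 2 this reproduces the summary that follows: features four times narrower, current density four times higher, layers twice as thin, contact resistivity four times lower.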
It's pretty fun... okay. So I skipped those steps, but there we have it: if I want to double the transistor bandwidth, I need to reduce the lithographic feature size by a factor of four, I need to increase the current densities by a factor of four, I need to reduce the layer thicknesses by a factor of two, and I need a factor of four reduction in the resistance per unit area of my contacts, okay. Narrow junctions, thin layers, high current density, ultra low resistance contacts... that's the game. Those are not the Dennard scaling laws for FETs; they're my scaling laws for high frequency bipolar transistors. So pause for a second: in the next scaling generation we are trying to push a tenth of an amp per square micron through the transistor. Did you get that number? Okay... one tenth of the current of your hairdryer, per square micron, okay. That's where this is going... so
here let me now continue. Here from my group is a cross-sectional image of a transistor at the 130 nanometer node with about a 1-point-something-ish terahertz power gain cutoff frequency, and I'm showing you here the emitter electrode, the emitter semiconductor, and the base semiconductor is here. Here is the base contact... the base is about 17 nanometers thick; here is the base ohmic contact, which has sunk through chemical reaction about two and a half nanometers into the transistor. Here are the measurements of a transistor with about a 200 nanometer emitter feature size in this technology, with about a 1.07-something terahertz power gain cutoff frequency. When we go to measure transistors that are more highly scaled, what happens is that the measurement methodology fails, and that's an issue of microwave precision in the measurements, which is very difficult; this probably has a higher bandwidth, but we can't tell. Issues of on-wafer calibration precision are a major issue in high frequency transistor work, okay. So the chief challenges in going forwards are particularly the resistivity of the base ohmic contact: to build a transistor with a three terahertz power gain cutoff frequency, we need less than 10 to the minus 8 ohm centimeter squared contact resistance... oh goodness, that's funny... okay... whoops... let's go back, okay. We have got resistances sufficiently low for even the three terahertz power gain cutoff frequency on ohmic contact test structures; the challenge we're facing is getting similar performance in a contact actually in a transistor, okay, and there are a lot of issues of just things like metal resistance along the transistor itself. In another venue I'd talk extensively about what we're doing on the transistor development, but there's no time, okay. Already pushing a little bit: here are ICs, some done by students who passed between my group and Teledyne and took long-term jobs at Teledyne, others done at Teledyne with nothing to do with me... 600 gigahertz oscillators,
340 gigahertz dynamic frequency dividers; there's a 570 gigahertz dynamic frequency divider by the same people; fundamental phase-locked loops at 300 gigahertz; amplifiers at 200, sorry, at 600 gigahertz with 20 dB gain; digital circuits at 200 gigahertz; power amplifiers at 200 gigahertz; integrated transceivers at 600 gigahertz, okay... I comment in passing that there's nothing special about indium phosphide. It will take a higher degree of scaling, but my analysis, and I think they would agree, says that the silicon germanium community ought to be able to produce a transistor with a two terahertz power gain cutoff frequency as well; it'll just have to be much more extremely scaled, so expect that to happen. Field effect transistors... why are we interested in those? They are the premier low noise device, and the higher the current gain cutoff frequency relative to the signal frequency, the lower the noise figure, and if I reduce my noise figure by 3 dB, I'm twice as sensitive and I only have to radiate half the power in my transmitter, so I save masses of DC power. So reducing noise figure is in fact very important, and the goal therefore is to increase the current gain cutoff frequency. Now, the world's best HEMTs are from Northrop Grumman, Xiaobing Mei in Bill Deal's group, and they have built indium phosphide HEMTs, which are a form of FET, with one and a half terahertz power gain cutoff frequency, and three years ago they built an amplifier at 1.0 terahertz, and that is the world record. In defense of the Teledyne team, which is part of my team, my collaboration with them: the IC is only up to 650 gigahertz, but the integration complexity, functionally, is vastly larger. That's the trade-off of the two technologies. In
scaling transistors, and I'm worried about time: what are the key scaling laws and the scaling limits? Again, playing the same game that I played but skipping the steps, these are the things that we need to do to double the bandwidth of a field effect transistor. Everybody knows we need to reduce the gate length, but it's not enough: we need to reduce the thickness of the gate dielectric by a factor of two, and, no surprise from what I said earlier, the source and drain contacts need to become correspondingly less resistive as well. There's a lot else that I'll skip, but that's clear, okay. Now... these scaling laws are broken; we can't do it anymore, again because of tunneling leakage, true of these as well as of VLSI: if we thin the barrier further, we get too much leakage through the barrier, so we can't, and if we can't thin the barrier, we can't increase the transconductance of the transistor. And it's exactly the same statement I made earlier: you can reduce the gate length to reduce your capacitance, but the fringing capacitances dominate, so again the bandwidth of the device will not go up. So we need to do something to increase the capacitance per unit area between the gate and the channel. Now, the one good thing from the community's work on III-V MOS, that was otherwise a waste, is that the community now knows how to put decent high-K gate dielectrics down not only on silicon but on the compound semiconductors, and the work of Susanne Stemmer, Professor Stemmer at UCSB, my colleague, has I think the world's best such interfaces, and we can use those. We are working on building HEMTs like those Northrop Grumman world-record ones, but with a high-K gate dielectric; that increases the gate capacitance per unit area between the gate and the channel and allows us to remove that scaling barrier and continue to progress. We're not there yet, but I'm showing you electron micrographs of the transistors and preliminary data, and we think we might be able to get the current gain cutoff frequency up to about one and a half terahertz and gain further improvements in noise, okay. I have five
minutes left... that's not enough, and I love circuit design, so I'm just going to have to flash images at you without discussion. In integrated circuits, the challenges are these: we're working at a significant fraction of the transistor cutoff frequencies, so we don't have much gain to throw away, so we have to match perfectly; the wavelengths are getting so small that the transistor footprints are a decent fraction of a wavelength, so just wiring transistor fingers up in parallel adds distributed inductance and capacitance at levels that reduce the transistor gain; and the transmission line losses are high, which kills gain, which kills corporate power combining in power amplifiers, and kills the Q of filters in oscillators, killing their phase noise, and kills the sharpness of filters. For wiring environments we use microstrip transmission lines within the upper wiring of the IC stack. I'm watching time, so let me just show you results very quickly. We did a lot of work on digital circuits, optimizing both the circuit design and the transistor design for that, and learned that we could run digital circuits at nearly the speed of millimeter wave circuits. This result is almost a decade old... this is a master-slave flip-flop, not an inverter, a master-slave flip-flop, and it is clocking at 200 gigahertz. A master-slave flip-flop, that's 5 picoseconds delay overall, two and a half for the master, two and a half for the slave, so it's a two-and-a-half picosecond gate delay at a fan-in and a fan-out of two, and that's not in the leading generation of indium phosphide; we probably could have done 320 gigahertz. So we can do digital circuits of real complexity at very high speeds. The wiring, even within the gates, is in the form of 50 ohm terminated transmission lines; transmission lines are used not only between gates but within the circuit structure of the gates themselves, to get this degree of speed, okay..
There is that power amplifier that's kicking out two tenths of a watt. I'm running out of time, so forgive me, I'm just going to flash pretty pictures at you right now; there's lots we could talk about on circuit design, but I'm afraid we'll have to not do that. This amplifier has 22 dB gain, and it was actually designed for half a watt, but without better heat sinking, thermal limits held us to 180 milliwatts rather than the half-watt design target. Here is a dynamic frequency divider as part of a phase-locked loop; the dynamic divider differs from classic designs in this circuit class by using transmission line resonant loads rather than resistive loads, okay. Here is a nearly 500 gigahertz oscillator, the culmination of a series of designs; as the months passed, the designer progressed from being a grad student, to a postdoc in my group, to moving full time to Teledyne... there are later oscillators here designed to, and demonstrated at, a little bit above 600 gigahertz. Here is a recent result using transistor series-combining, or stacking, techniques pioneered by Jim Buckwalter, but doing it at 340 gigahertz and showing that those techniques can be applied even at these extremely high frequencies. Okay, so I got by by tearing through that very fast... that gives me a minute and a half to talk about
systems and packages. In doing massive beamforming, the real questions are: What is the computational complexity of the back end? What is the linear dynamic range of the channels? How many bits in the A-to-D converters? How many watts and how many square centimeters are we going to burn in the backend signal processing? So in this large team, working with excellent signal processing people, we're answering these questions one by one. Surprisingly, we've concluded it is going to be no problem at all, in terms of linearity and dynamic range, to support the massive MIMO; surprisingly, we've learned that the phase noise requirements of the massive array are also probably not a problem if you have a reasonable design. Working with Christoph Studer, and I'm losing names of a team I'm just getting familiar with, the team is working on examining the power consumption required to do all-digital beamforming at hundreds of beams, at data rates in the range of one to ten gigabits per second.
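As one hedged illustration of the kind of back-end accounting just described, here is a Walden-style figure-of-merit estimate of the array's ADC power. The figure of merit, resolution, sample rate, and element count below are assumptions for illustration, not numbers from the talk:

```python
def array_adc_power_w(n_elements, bits, fs_hz,
                      fom_j_per_step=50e-15, adcs_per_element=2):
    """Estimate total ADC power for an all-digital beamforming array.
    Walden-style model: P = FOM * 2^bits * fs per converter,
    with one I and one Q converter assumed per element."""
    per_adc = fom_j_per_step * (2 ** bits) * fs_hz
    return n_elements * adcs_per_element * per_adc

# Assumed: 256 elements, 6-bit ADCs, 2 GS/s, 50 fJ per conversion-step.
p_total = array_adc_power_w(256, 6, 2e9)   # a few watts for the whole array
```

Each extra bit doubles the estimate, which is why "how many bits in the A-to-D converters" is one of the questions that drives the power budget.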
We believe now that it can be done all-digitally, rather than through the other architectures that I'm showing you.
I am already over time, so I'm going to do this extemporaneously and more simply. This is an area where we're having trouble gaining support to attack the problem, but it is in fact one of the most pressing problems for millimeter wave system designers: how do you package this stuff? In many of these arrays the element spacing has got to be half a wavelength, and half a wavelength is not very big... how do you fit all of the electronics you need into this very small space? If you decide not to fit in that space, and instead put the chips somewhere else and route signal leads, you're dead, because of the enormous losses in those signal leads, so that's not going to work. One of the things to understand, if you've been listening to the presentation, is that different arrays have different requirements. If I've got a fighter plane and I want an imaging radar, I need a two dimensional picture and I've got to be able to steer the beam in both horizontal and vertical planes, and I need to steer it over a hemisphere, which forces me to close element spacings. However, in the systems that we are designing, often we are steering only in a horizontal plane, which allows us to use a linear array rather than a rectangular array, or we are steering in both planes but only over a small angle, which means that the element spacing can be larger. Since I am over time, I will not continue further with this point, except to say: in a linear array we can make everything fit, because the electronics can be placed on either side of it, and if we're steering over a small angle, then the element spacing can be several wavelengths, which leaves space to fit the electronics within there. So this is one of the additional challenges that we're facing, and in the interest of not being ill-mannered by going over time, I will simply scroll through some of our visions there... okay.
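The element-pitch arithmetic behind this packaging argument can be sketched as follows; the frequency and scan angles are illustrative assumptions:

```python
import math

C0 = 2.998e8  # speed of light, m/s

def max_pitch_m(freq_hz, scan_deg=90.0):
    """Largest grating-lobe-free element pitch: lambda / (1 + sin(theta_max)).
    A full hemispheric scan (theta_max = 90 deg) gives the familiar lambda/2."""
    lam = C0 / freq_hz
    return lam / (1 + math.sin(math.radians(scan_deg)))

pitch_hemisphere = max_pitch_m(140e9)     # about 1.07 mm: very little room
pitch_narrow = max_pitch_m(140e9, 10.0)   # about 1.8 mm for a +/-10 deg scan
```

Steering over only a small angle relaxes the pitch by nearly a factor of two in this example, and a fixed or nearly fixed beam relaxes it to several wavelengths, which is the extra room for electronics the talk mentions.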
Wireless above 100 gigahertz: massive capacities are available, because we've got a lot of bandwidth, but also because the wavelength is short, and so we can do massive reuse of the spectrum, either in a base station or in a point-to-point link. Don't be deluded: unless you are above the atmosphere, you are not going to go kilometers with this. This is few-hundred-meters-range stuff, with high atmospheric losses and easily blocked beams. The signal chain will be silicon entirely, everywhere except in the power amps and the low noise amps, and the low noise amps and the power amps will be compound semiconductors, to get larger range or to allow us to reach up to about 300 gigahertz. The challenges are in massive spatial multiplexing, in the computational complexity of the backend processor, and, we thought six months ago, in dynamic range in the signal chain, but now we don't think so. Another challenge is in packaging: fitting all of these signal chains into the available, very small area. Sorry I ran a little over; thank you so much for your time.
Host: Thank you, Professor Rodwell, for the comprehensive talk... Professor: Soup to nuts!!
Host: A long range, from systems to devices to circuits... so now we have time for a few questions. Audience Question: Given the current state of play in the technology, and where you think this is going, how deep do you think we could image things underground, and what do you think we'll be able to do in, say, the next three to five years? Professor Rodwell: So you're asking, given current technologies, if we were designing a ground-penetrating radar, how well could we do. I know very little about this, other than what I've read in sort of popular magazines, sort of Discover Magazine as opposed to the real literature, but my impression is that in the whole ground imaging business you're dealing with two profound problems: one is attenuation, the other is backscatter, right. And so my two impressions are that you would probably not want to be at these high frequencies because of that. My impression is that, unless you're trying to do ultra high resolution imaging a few inches deep, you'll probably be better off using one gigahertz or even a few hundred megahertz, to get less severe attenuation in the ground and to get
less severe backscatter. Now, for the backscatter there are frequency-swept techniques that, when converted into the time domain, give you depth resolution and allow you to separate the backscatter from the reflection of the target in question. But I was talking to Ted, who brought up medical imaging: in the earlier work in the photonics community with terahertz spectroscopy, there was work on imaging through chunks of, well, go to the supermarket, buy a piece of chicken breast, and image through that, and there were demonstrations of imaging at maybe a couple of inches of depth through chicken, resolving objects of order one millimeter resolution, using several hundred gigahertz. So again: for a few inches, looking for millimeter objects, I'd go to several hundred gigahertz; if you want to go deep, I'd drop the frequency. Audience: Yeah, thank you.
Professor Rodwell: You don't mean snow in the air, you mean snow on the ground, okay... so, in the case of atmospheric propagation, I'm not an expert, but having to do these estimates I've had to search the literature. I haven't done that for snow, but I think it's virtually the previous question made even worse, because snow, last time I checked, is made of water, and water has these very strong dipole moments between the hydrogen and the oxygen, which produce all of these absorption lines even at the concentrations associated with water vapour in the air. So imagine that sitting on the ground, with the eight, or twelve I think, orders of magnitude greater density associated with a solid: I would imagine that the millimeter wave absorption above roughly 20 or 30 gigahertz would be catastrophic. So again, if I'm trying to image through solid snow, I would probably be down in the high radio frequencies, a few hundred megahertz, but again, I'm ignorant. Audience Question: Do you envision
that the base widening is going to be a limiting factor for the scaling of the indium phosphide HBT? Base widening, because you showed that you have a very thin collector structure, right, in the transistor... so do you envision that the base widening... Professor: Base push... do you mean base push-out? Audience: Base push-out, yes. Professor: I was already panicking due to time... okay, so thank you for raising that question. That was the slide that got killed by this graphics failure, okay: the base widening actually relates to this effect, which is field collapse in the depletion region, and that maximum current per unit area goes as the inverse square of the thickness of the region. So it's built into the scaling laws of the transistor: each time I double, sorry, each time I quadruple the current density, I have to cut the depletion region thickness down by a factor of two. So that's built into the scaling laws... it's not only built into the scaling laws, it's fundamental to the scaling laws. So that was a good question... yeah, okay!!
