I should point out that I don't actually super hate lidar as much as it may sound. SpaceX Dragon uses lidar to navigate to the space station and dock. Not only that, SpaceX developed its own lidar from scratch to do that, and I spearheaded that effort personally, because in that scenario lidar makes sense, and in cars it's friggin' stupid.

So no, Elon didn't actually change his mind, but yes, SpaceX does actually use lidar. Stick around to the end of the video to hear Elon's true opinion on lidar. It's good.

Even experts don't seem to agree on the etymology of the name lidar, which is either a mash-up of "light" and "radar," an acronym for "light detection and ranging," or, in some references, "laser detection and ranging." If you're totally
new to the world and research of autonomous vehicles, these are the main technology systems the companies in the full self-driving race have chosen to use. There are now many lidar startups and a few main players, and some companies, such as GM and Waymo, are designing their own lidar systems in-house to reduce the astronomical cost of the system. Before Waymo created its own system, bringing the cost of a unit down to $7,500 (which is still high), lidar units cost $75,000 each. Here are the choices the main players have made so far: in the lidar corner, we have Waymo, GM, Ford, Uber, Lyft, Toyota, BMW, and Volkswagen, and in the radar corner, we have
Tesla.

So what is lidar? It measures the distance to a target by illuminating that target with laser light and measuring the reflected light with a sensor. Differences in laser return times and wavelengths can then be used to make a digital 3D representation of the target and the surrounding area. This is accomplished by a sensor placed on the vehicle, which, for the love of aesthetics alone, let's hope isn't our best option. Tesla's radar
system works similarly, but it uses radio waves instead of lasers, and it's heavily focused on pairing the data gathered from those radio waves with computer vision and a neural network (think of a human brain) to learn what is being detected, not just that something is being detected. The major difference comes down to vision: lidar can detect an obstacle in the road, such as a plastic bag, but won't be able to determine what it is, and thus may cause the car to slam on the brakes, which, if it were to happen on the highway at high speed, could be fatal.
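The ranging principle described earlier, where lidar infers distance from laser return times, can be sketched in a few lines of Python. The timing value below is illustrative, not taken from any real sensor:

```python
# Sketch of lidar time-of-flight ranging: the pulse travels to the
# target and back, so one-way distance is half of
# (speed of light * round-trip time).

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_return_time(round_trip_s: float) -> float:
    """One-way distance to the target from a laser pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A pulse returning after 200 nanoseconds means a target about 30 m away.
print(round(distance_from_return_time(200e-9), 2))  # 29.98
```

A real scanner fires millions of such pulses per second across many angles, which is how the point-cloud "3D map" mentioned above gets built.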
Tesla's software, however, uses its neural net to learn from vision imaging that the obstacle is a plastic bag, allowing the car to continue on without hitting the brakes. Lidar does have viable use cases: in a static environment without edge cases or unexpected scenarios, it does a great job of creating high-resolution 3D maps, and it's already being implemented in places like Phoenix, Arizona.
Waymo is doing some pretty cool stuff with lidar and ride-hailing, but it's limited to very, very specific use cases and locations. However, to obtain Level 5 autonomy, where no driver activity is needed and the car can travel anywhere in the world, in any weather, at any time of day, it's critical that visual recognition is part of the equation for all of the edge-case scenarios. Before I cover the critical
aspects of this conversation, please take a second to like this video and subscribe to the channel if you haven't already. I will be creating a Facebook group or a Discord chat for our community; please let me know in the comments below which you'd prefer. So, back to the discussion: it boils down to adaptability and data.
Lidar is an inferior option when it comes to adapting to unexpected events, whereas Tesla's system will function as close to a human as possible, with vision and a brain (the neural network) to make decisions in any scenario. That neural network is only as good as the data it can learn from.
That's why Tesla's fleet of approximately 650,000 cars with Hardware 2 and above, each driving about 1,000 miles per month for a total of 650 million miles monthly, is so incredibly important. For context, Waymo's system gathers about 1 million miles of data each month. To date, Tesla has over 1 billion miles driven, compared to Waymo, which has surpassed just 10 million miles. You get the idea.
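The fleet arithmetic above is easy to sanity-check. These figures are the ones cited in this video, not independently verified:

```python
# Back-of-the-envelope check of the fleet data comparison.
tesla_fleet_size = 650_000          # cars with Hardware 2 and above
miles_per_car_per_month = 1_000
waymo_monthly_miles = 1_000_000     # rough monthly figure cited above

tesla_monthly_miles = tesla_fleet_size * miles_per_car_per_month
print(tesla_monthly_miles)                          # 650000000
print(tesla_monthly_miles // waymo_monthly_miles)   # 650
```

So by these numbers, Tesla's fleet gathers roughly 650 times more real-world mileage per month than Waymo's.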
More real-world data equals a better-trained neural net, bringing society that much closer to fully autonomous cars. To be fair, Waymo does have a simulation it uses to improve, but the question is: what's more valuable, real-world data or simulation? I'll let you all debate that in the comments.

A paper was recently published that agrees with Elon, titled "Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving." The paper discusses how cameras can be used to generate a 3D map nearly as accurate as lidar's. The point of the Cornell paper is that instead of a $7,500-or-more bulky lidar system, you could have cameras, at a fraction of the cost, provide the same benefit. Perhaps Elon
has a point here:

Lidar is a fool's errand, and anyone relying on lidar is doomed. Doomed. Expensive sensors that are unnecessary. It's like having a whole bunch of expensive appendices. Like, one appendix is bad, well, now you want to put a whole bunch of them? That's ridiculous.
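As a rough illustration of how cameras can yield depth at all, here's the classic stereo relation: depth equals focal length times baseline divided by disparity. The numbers below are made up for illustration, and the pseudo-lidar paper itself relies on learned depth estimation rather than this formula alone:

```python
# Illustrative stereo-depth relation behind camera-based 3D sensing.
# Two cameras a fixed baseline apart see the same point shifted by a
# "disparity" in pixels; nearer points shift more, farther points less.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth (metres) of a point seen by a stereo camera pair."""
    return focal_px * baseline_m / disparity_px

# A 20-pixel disparity, 720 px focal length, 0.54 m baseline:
print(round(depth_from_disparity(720.0, 0.54, 20.0), 2))  # 19.44
```

Estimate a depth like this for every pixel and you get a dense 3D point cloud from cameras alone, which is exactly the "pseudo-lidar" idea.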
You'll see, there's far too much money at stake in this game. Someone will figure out the solution, and of course, there can be more than one solution for a wide variety of use cases. The thing is, right now you can actually buy a Tesla relatively affordably, with all of the hardware and software required for full self-driving. So if Tesla does solve this problem first, you can have full access to the benefits of a car with full self-driving. At the moment, you can't go to the market and buy a Waymo or an Uber or a Cruise.

I just created a video about Tesla's full self-driving chip; if you're interested in this space, I'd encourage you to take a few minutes to check it out. It was a very controversial decision, and we're about to find out in the coming months if Tesla made the right decision. As always, thank you for watching.
