Reality Lab Lectures: Gordon Wetzstein – Computational Near-eye Displays


– Welcome everybody. Thank you for coming. This is our first presentation
of the Reality Lab lectures. Many of you may know that UW Reality Lab launched earlier this year. We're very excited about that. We rolled out a ton of exciting research projects and a lot of initiatives for educating the next generation of researchers and developers and creators in virtual and augmented reality. And one of the things
that we wanted to do was bring to you some of the
greatest minds in the field of augmented and virtual reality. And so here is one of those great minds. (laughs) Our inaugural presenter for
the series is Gordon Wetzstein. So Gordon did his PhD with
Wolfgang Heidrich at UBC. Went on to do a post-doc
with Ramesh Raskar at MIT and since then has been a
professor at Stanford University in the electrical engineering department. Has been doing amazing work
there for several years now with his focus on computational
imaging and displays. And I’m very excited to
hear what he has to say about some of that great work. – Great. Thank you so much for the introduction and thanks for coming out on
this last day of the quarter. What I’d like to talk
to you about today is some of the tech that
goes on behind the scenes of making AR and VR happen. So wearable computing like AR and VR is widely believed to be the
next major computing platform. One of the big benefits that it offers is the seamless interface between the digital world and the physical world. For that to happen, we kind
of think about optical see-through AR. I know that there's a lot
of work on video-based AR going on here but we
think about optics a lot, headsets, how to build
the seamless interfaces. And so the near-eye display is
really the primary interface between the user and the digital world. It’s supposed to be wide field of view, high resolution, high dynamic
range, high peak brightness, low power, wearable, and so on and so forth. And so there are a lot of challenges and some of these we will
be talking about today. So AR and VR has, of course, applications in many different domains. VR mostly in gaming right
now but potentially also education, traveling to distant places that you couldn’t reach otherwise. Some of my favorite applications include robotic surgery so eventually
a robotic surgical system is basically an AR system if you want. It's a robotic arm or five robotic arms that are remotely operated by a surgeon. The surgeon is looking into this
box on the left there and you know, visual
comfort, image quality are really important
if you want the surgeon to perform a procedure on
you for hours at a time. Also, remotely operating
vehicles like these drones is a really big thing so I
wasn’t really aware of that until a couple of months ago but these drone racing world championships are done by people flying these drones at incredibly high speeds
through crazy mazes and remotely operating them
with low latency headsets. So in that case, the
requirements on the tech are more on the latency
than on the field of view and things like that. But AR and VR also enable completely new and unprecedented applications so I left a placeholder for
Alice in Wonderland over here and on the lower right. So basically I think about
AR and VR as a new medium that allows us to generate experiences that are very different from what we know from television, film, gaming,
and so on and so forth. But people have to learn
how to use this medium to tell these stories and
create compelling applications and hopefully you will be
among the bright people who come up with these
types of experiences. So at Stanford, there’s a
lot of interest in AR and VR, especially actually in the medical school. So in the hospital, people
use VR in particular to alleviate anxiety and
pain for pediatric patients so you can think about a
child wearing a headset and seeing some kind
of immersive experience to help overcome fear of
getting a procedure done or things like that. There’s actually a whole
VR technology clinic also for training, of
course, for mental health, for people with phantom
pain, and so on and so forth. So there’s a lot of activity
going on in that space and that’s just something
that's already happening today, so it's not just a technology of the future, it's something that's happening today. I'm an engineer. I trained in computer science, now I'm in electrical engineering,
and as an engineer, I’m really excited about AR and VR because there are so many different challenges that we can work on. And these challenges include
pretty much everything in this ecosystem that includes cameras to record immersive content in stereo. It includes all the compute
that goes on in the Cloud for things like that. It includes the chips that do low latency, low power processing for
things like the CPU, GPU, or image processing unit, but also chips that are dedicated to driving
the pixels on the displays. So there’s a lot of room for
innovation in that space. And then of course on the sensors, the computer vision, the imaging, low level vision, scene understanding, a lot of the things
that are going on here, lighting estimation, occlusions, animating photographs, things like that. These are all big challenges
on the computer vision imaging processing side. Then on the optics side
you have things like wave guides and the photonics
that are very important to build small form factor glasses and deliver these highly
compelling experiences. And then we talk about the displays a lot but there are many other
sensors that humans use such as the audio system,
the vestibular system, our sense of gravity, haptics and so on so there’s a lot of research going on in all these different domains and that’s what makes it so
exciting from my perspective. So today we’ll talk mostly about some aspects of human perception, especially on the visual side and some of the display
technology that we’ve been developing and others also in that side. So just keep in mind that
this is a really big area and probably over the
course of these lectures, you'll get insights into many
of these different areas. So the big vision on the optics
side and the devices side is to get these devices into a form factor that we already know. So if we look around in the room, more than probably 50% of the people are wearing glasses already so if we could integrate the technology into a device form factor like this with very low power requirements, that will be ideal because it doesn’t require anything extra. But it does seem a little
bit like science fiction because we need to integrate a big computer into the form factor of the
frames of these glasses. We need to turn the glasses into a display and so on and so forth. So it sounds a little
bit like science fiction but if we think about what
we’ve already achieved in terms of technology developments starting in the 80s with
the desktop computers or room-sized computers and
the types of experiences that they would deliver at the time, they were limited, right? But then over the course
of only maybe 20, 30 years, we’ve shrunk down this compute power into cell phones that fit in our pockets and just to give you a sense
of how far we’ve come along, just on the cell phone that
you have in your pocket today, that's probably about 1,000 times more compute power than the computer that
was on the Apollo mission that propelled man to the moon. So within a few decades, we shrunk compute into wearable form factor
with massive amounts of processing power. And so thinking about the
next generation as wearable computing is just a natural step forward. So we can think about this
as the future of mobile compute technology but
then people have really been working on it for a
very long time already. Here's my version of a brief history of VR. So starting in the 1830s,
we’ve had things like the stereoscopes where people would gather in these Victorian times and look at these stereoscopic photographs. You can actually find a large collection of stereoscopic photographs
from the American Civil War in the Library of Congress
which is all available online. So it’s really nothing new in that sense. In the 60s, people like
Ivan Sutherland and others worked on making this
technology electronic so we could do computer
graphics, compute images that would be overlaid with
the physical environment, working on the tracking and other aspects. I will talk more in detail
about this in a second. It’s not the first time
now that we've seen consumer electronics type devices and Nintendo actually had the Virtual Boy as a consumer electronics product on the market in the mid-90s already. We'll see in a second why
that may not have been quite successful at the time. And then over the last six years or so, we've seen a big explosion in VR starting with Oculus obviously but now there's a lot of
stuff going on in this domain and let’s say in the academic world and in the research labs of the world, we’re thinking about okay, what’s really the next
generation of these displays and that’s what we’d
like to focus on today. But just to give you a sense of what people have done already, this is Ivan Sutherland's HMD that was called the Sword of Damocles. It was suspended from the ceiling. It had two CRT displays that would be feeding images in from the side so there are already beam splitters that allow you to get optical
see-through capabilities. It was a true optical
see-through AR display. It had computer graphics which was just at its infancy at the time. Had tracking using ultrasound
and mechanical tracking. Interaction model generation
and so on and so forth. So the group at that
time worked on all these different aspects like
human-computer interaction, computer graphics, robotics,
electronics, and so on all at the same time to make
something like this happen and I think that’s a lesson
we should learn today is that big innovations really
come at this intersection of multiple different fields
and if we stay in our own convenient communities, that's great, but we're probably not
going to be able to make a big jump there. So the Nintendo Virtual Boy was a stereoscopic display
was a stereoscopic display that was available. You can buy it on eBay even
today for maybe 100 bucks or so but the computer graphics
hardware wasn't quite ready at the time yet. So what GPUs could do in this context were very simple, low
resolution line renderings and those are just not very immersive or really interactive or really realistic. And so today we have GPUs
that can really generate photorealistic content in real time, at high resolution, stereoscopically, and that's a really big difference. So I would go so far as to say
that the cell phone industry has really enabled this
latest breakthrough in VR technology because if
you look at what’s inside a headset today, it’s actually
pretty standard components, lenses that we know pretty well, injection molded plastic,
but then we have these really small high resolution screens and those really enable high resolution immersive experiences and
then also the tracking system. So IMUs, inertial measurement units, they’re in your phone
also that are low cost, low latency, and very, very precise, enable orientation tracking for example and I think these two components with a clever hardware architecture and graphics pipeline
were really what enabled the DK1 and the recent
success of VR again. But so how far have we really
come from these stereoscopes? Well the basic principles of operations remain the same today. We have interactive experiences. We can share them with our
friends over the internet and we can dynamically
update that so that’s great. But the optical principles haven’t really changed all that much. So just to give you a
sense at a very high level of what’s happening here
is we have two lenses in front of the cellphone or any other kind of microdisplay. The lenses basically act as magnifiers that create what we call a virtual image, a magnified version of that image that's close to your eye, that just floats in space. So you're basically looking at a 2D plane that is magnified at some distance away, maybe two meters or so, and using simple equations, we can actually predict exactly where that object is going to be floating. Based on the focal length and the physical distance between the microdisplay and the lens, we know exactly where this virtual image is going to be.
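As a minimal sketch of that calculation (standard thin-lens optics, not a formula from the slides): with the microdisplay a distance d_o behind a lens of focal length f, and d_o < f, the virtual image distance and magnification follow from the thin-lens equation:

```latex
\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}
\quad\Rightarrow\quad
|d_i| = \frac{f\,d_o}{f - d_o},
\qquad
M = \frac{|d_i|}{d_o} = \frac{f}{f - d_o},
```

so a display a few centimeters behind the lens shows up as a large virtual image a meter or two away, and pushing d_o toward f moves that image out toward optical infinity.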
It's not really a natural viewing experience because you basically look
at two floating planes, one for each eye. So there's a couple of problems with that. It's a fixed focal plane display. It doesn't support focus cues that we use in a natural environment. I'll get back to that in a second, and it doesn't really drive accommodation, so we can't really focus our eyes to it and we can't change that in software. So focus cues are a really
important part of human vision. They give you a sense
of the spatial layout of the scene that you’re looking at. The defocused cues help you understand which object are in front
of which other ones. They give you a sense of scale also and they’re very important
for visual comfort also which we’ll talk about in a second. But you need to understand
that focus cues are differently important for
different types of people. So this is a plot from a paper from 1912 where you have age from eight to 72 plotted against the closest distance that a person can focus
assuming they can also see sharp at infinity. So when we’re very young,
the crystalline lenses in our eyes can deform in
a way that allows us to refocus to arbitrary distances. But as we get older, we
get this condition called presbyopia which simply means
that the lenses in our eyes get stiff and we can’t
deform them anymore. So at the end of the day, they’re gonna be fixed focal power lenses and we need reading glasses, bifocals, or other types of vision
correcting glasses to be able to correct that. So this condition is really
important to understand how we should drive focus in VR as well. So taking a step back again, I talked about these
focus cues a little bit but how do they compare
to other types of cues? So the human visual system
uses many different cues to see 3D or depth so the idea of 3D doesn't really mean anything by itself. There are many different cues that feed independent signals to the brain and I'm just gonna classify them here as binocular cues and monocular cues. So those that are using both eyes and those that are just
using a single eye. Then we’re gonna have ocular
motor cues and visual cues. So vergence is an ocular motor cue that is sent from the
muscles that are rotating the eyeballs in their
sockets to the brain directly and if you fixate on
an object that's far away, your eyeballs are gonna rotate out and they're almost going to be parallel. You look at something close,
they’re gonna rotate inwards and this is simply to keep the object that you’re fixating directly on the fovea because the fovea is
the area on the retina that has the highest spatial acuity. So this ocular motor cue vergence is driven by binocular disparity and that's simply the fact that we see two different images with the two eyes. So two slightly different perspectives. Most people consider this
as stereoscopic 3D basically but if you can render in these
cues in computer graphics, you can automatically
drive the vergence cue. So now accommodation is a monocular cue. It’s the deformation of the lens, the crystalline lens in your eye, and these are driven
by the ciliary muscles. The focus cue of accommodation is driven by retinal blur
and that’s the defocus. So you can think about
the focus cues really as an autofocus mechanism of your eye. Whatever object you want to look at, there’s gonna be some kind
of an autofocus mechanism that brings that object in focus. And in the real world,
these cues are coupled. That just makes our brain
work more efficiently because we can use this cross-correlation between these different cues. So I’m going to talk a
little bit more about these focus cues also because
they’re very important for VR and not supported today. So if you look at a display
that is at some fixed distance or the virtual image of the VR display, you never look at the microdisplay itself. You look at the magnified
virtual image of it and it appears that some distance if you’re going to be
accommodated somewhere else, the image is going to be blurry obviously. It’s very simple and
intuitive to understand that if you’re not accommodated
it’s going to be blurry, but many people don’t get that right away and it helps us understand
later the problems of not supporting focus cues. So as we accommodate
to different distances, the image gets sharper up until the point where the image is in focus
and then it’s sharp. So if you’re working
in the world of optics, you usually characterize things like that using the point spread function of the display. So if you turned on a single pixel on the display and everything else is black, then depending on the accommodation state of the lens, you're going to get something like this. What the autofocus mechanism of the eye does is try to find the sharpest image, so you get kind of a gradient descent on the retinal blur. We'll get back to that concept also later.
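As a rough sketch of how large that retinal blur is (standard small-angle defocus geometry with illustrative numbers, not anything from the talk), the angular blur of a point is approximately the pupil diameter times the defocus error in diopters:

```python
import math

# Approximate angular blur of a point shown on a fixed image plane,
# as a function of where the eye is accommodated (small-angle defocus model).
def retinal_blur_arcmin(eye_diopters, image_plane_diopters, pupil_diameter_mm=4.0):
    """Angular blur (arcminutes) ~ pupil diameter [m] * |defocus| [diopters]."""
    defocus = abs(eye_diopters - image_plane_diopters)
    blur_radians = (pupil_diameter_mm * 1e-3) * defocus
    return blur_radians * (180.0 / math.pi) * 60.0

# Virtual image fixed at 2 m (0.5 D); sweep the eye's accommodation.
for d in [0.5, 1.0, 2.0, 4.0]:
    print(f"eye accommodated at {d:.1f} D -> ~{retinal_blur_arcmin(d, 0.5):.0f} arcmin of blur")
```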
So one of the things that emerges from the lack of focus cues, from basically presenting these two flat images to each eye, is this vergence-accommodation conflict. And the vergence-accommodation conflict comes from the mismatch
of vergence and focus information in VR or AR displays. So again in the real world, as
you look at different objects our eyes are gonna be verging to keep the fixated object on the fovea and they’re going to be accommodating to keep them in sharp
focus at the same time. So these are linked together and vergence and accommodation match. In VR and most AR systems,
these are decoupled so you’re gonna render
in these stereo images and drive the vergence
to arbitrary distances but the focus of the
eye, the accommodation, is always linked to the
virtual image of the screen simply because if it wasn’t,
we wouldn’t see a sharp image as I was just showing you. So that means by default, if
you want to see a sharp image in a stereoscopic display, your eyes have to be
accommodated on the actual screen which means that they’re by
default not where they should be based on the vergence so this mismatch between vergence and accommodation is known as the
vergence-accommodation conflict. And it’s been shown in many
studies that for long-term use, let’s say longer than 20 minutes or so, it creates discomfort,
eye strain, tiredness, and so on and so forth and it degrades performance in specific tasks. So think about the DaVinci
surgical system again. If your surgeon is
looking into this box hours at a time, performing a surgery on you, you really want to make sure
that they don’t have eye strain and that their performance is up to speed. So for short-term effects,
you also see double vision. So the inability to fuse
stereoscopic image pairs. You’re going to lose visual clarity because the vergence is
going to try to drive your accommodation away from that screen and so if it is successful, you’re going to lose visual clarity so the images look blurry. And people also get nausea to some extent. So some of the research questions
that are interesting here are how can we address this
vergence-accommodation conflict? What are good technologies for that? How do we address it for
people of different ages? And then a lot of different
proposals have been made for technologies in this space, and which ones of these are actually effective? So I want to talk about a few of these. The first one is varifocal displays, or gaze-contingent focus. That's probably one of
the most intuitive ones. So going back to this
fixed focal plane display, we have fixed focal length
and a fixed distance between the microdisplay and the lens. That creates the virtual
image that’s floating somewhere in space outside
of the physical device. But to dynamically change that, we could use something like an actuator. So the actuator would physically move the microdisplay and thereby change this distance here. If we change the distance, we're gonna change the distance of the magnified image, and with a very small amount of motion inside the headset, we can actually create a very large motion of the virtual image. So a motion of only about a centimeter or so will actually allow you to drive the virtual image over the entire accommodation range. The other way of doing it is using what's known as a focus tunable lens element, so in this case, we're gonna change the focal power of the lens, which changes this parameter, but equivalently, it changes the distance of the virtual image without any mechanical motions.
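To get a feel for those numbers (a minimal thin-lens sketch with an assumed 40 mm focal length, not the actual headset optics), roughly a centimeter of display travel sweeps the virtual image across the whole useful accommodation range:

```python
# Virtual image distance, in diopters, for a simple magnifier (thin-lens model).
# The 40 mm focal length is an assumption for illustration only.
def virtual_image_diopters(display_to_lens_mm, focal_length_mm=40.0):
    d_o = display_to_lens_mm * 1e-3    # display-to-lens distance in meters
    f = focal_length_mm * 1e-3         # lens focal length in meters
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)  # thin lens: negative => virtual image
    return -1.0 / d_i                  # distance of the virtual image, in diopters

for mm in [30.0, 34.0, 36.0, 38.0, 39.0]:
    print(f"display at {mm:.0f} mm -> virtual image at {virtual_image_diopters(mm):.2f} D")
# ~30 mm puts the image at roughly 8 D (about 12 cm away), ~39 mm near 0.6 D,
# so about a centimeter of travel covers the useful accommodation range.
```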
So this idea isn't really new. Even the first types of immersive experiences, like these arcade-type experiences, had mechanical or manually adjustable focus just to dial in your prescription basically. The first paper treating
this subject that I found was from 1984 from Henry
Fuchs's group at UNC and they basically took a loudspeaker, coated it with a reflective material, and then reflected a CRT monitor in it to create computer graphics images at multiple different distances. And so over the last 10 years or so, we've seen better versions of
focus tunable optical elements emerging and that’s something that a lot of people have started to use. So I started with a vision
of making VR small, or AR. We've probably built the world's biggest VR display in this case. This is a benchtop setup that is really good for user studies and what's in there is two high resolution displays on the side. We have these relay optic systems. We just use camera lenses because they're really well corrected
for chromatic aberrations and just optical image quality. We have these focus
tunable lenses in there that are really fast also, about 15 millisecond settling time which means that you can
change the focus at 60 hertz. We have these beam splitters which are great because we can now see the images from the side but we can also see through them so we could use it as an AR display also. A translation stage can be used to adjust the interpupillary distance
which is very important. We actually used the other optical path for this autorefractor
so this is a machine that you find in an
optometry office usually and it will measure the
accommodation state of the user. So this device allows us to
present stereoscopic images and measure the response of the user as a really unbiased way of
measuring the effectiveness of different types of technologies so different ways of delivering
the images to the user. And we wanted to do a
couple of very simple tests to actually verify this hypothesis: if you have a fixed focal plane in your display, the hypothesis was that if you render a target that's away from that plane, your vergence would be driven to that distance but the accommodation would be linked to the focal plane of the display. Similar to here: vergence is here at the rendered target, whereas the accommodation would be linked to the actual focal plane of the display. Whereas if you actually link the two together, dynamically driving the focal power of the lens, you could attach it to the rendered target and thereby drive vergence and accommodation to the same distance. So at least that's the hypothesis. So the test that we did was very simple. We just rendered a cross and moved it in and out of the screen in a sinusoidal fashion, from two meters, which is 0.5 diopters, to about 25 centimeters, which is four diopters. So people in the vision science community usually measure distances in diopters so that's pretty much the
entire accommodation range and this is the kind of data that we got. We measured it actually from a
large number of participants so here we have time, 25 seconds, and then this is the diopters
of the rendered target so it goes in, away from
you, and then towards you, away from you, towards you,
and so on and so forth. So it’s a really boring scene. You just see this cross moving in and out and we ask the user to look at it and then assuming that
they will actually fixate at that particular target. We're gonna measure the response and what we got was a little bit surprising, but not very surprising. These are the individual responses, the gray ones, and this is the average response. I mean, we do get a little bit of response, but again with a gain factor of about 0.3, which means that the vergence will actually drive accommodation to some extent, and if it were to drive accommodation more, then it would pull the focus away from the plane and you'd get problems with visual acuity and it gets blurry.
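Just to spell out what the gain number means (a toy sketch with made-up data, not the study's analysis code): diopters are reciprocal meters, and the gain is roughly the amplitude of the measured accommodation response divided by the amplitude of the rendered stimulus, so 1.0 would be perfect following.

```python
import numpy as np

# Hypothetical sinusoidal stimulus sweeping 0.5 D (2 m) to 4 D (25 cm).
t = np.linspace(0.0, 25.0, 126)                        # seconds, ~5 Hz samples
stimulus = 2.25 + 1.75 * np.sin(2 * np.pi * t / 10.0)  # diopters (D = 1 / meters)

# Fake measured accommodation: attenuated response plus measurement noise.
response = 2.25 + 0.3 * 1.75 * np.sin(2 * np.pi * t / 10.0) + 0.1 * np.random.randn(t.size)

# Gain ~ response amplitude / stimulus amplitude.
gain = np.std(response - response.mean()) / np.std(stimulus - stimulus.mean())
print(f"estimated gain: {gain:.2f}")
```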
with the physical object or is this with the system trying to? – This is with the system
rendering in the target that doesn’t change it’s size so the size will remain the same. All visual cues are exactly the same independent of the depth. The only thing that
changes is the disparity. So then we know that okay,
with a rendered disparity, we’re gonna likely drive
vergence to whatever distance the target has and now we’re gonna see where’s accommodation at the same time. – [Audience Member] How do
you measure accommodation? – With an autorefractor. So the autorefractor is the big machine. Yeah, so it operates in near infrared. It has a ring of LEDs that it will cycle through and actually measure the eye's defocus and also astigmatism at any point in time at about five hertz. So the response time of accommodation is about 300 to 500 milliseconds. So with a five hertz
measurement apparatus, you should be able to get pretty good data on where’s the user accommodated. Okay. So it’s not flat. We would have assumed
it to be flat, right, to always be fixed on the
actual physical screen. But it does move a little bit. Okay so now the dynamic response. So in this case we would assume that well, we’re gonna link the
focal plane of the display to the rendered target and ideally it should match exactly what we see here. So what we found is that
it doesn't quite match it. Disregard the phase. We haven't synchronized them exactly. We're just looking at the amplitude here and we get a gain of about 77% which isn't as great as we had hoped but the problem here is
that we actually just tested a whole bunch of people
and disregarded their age. So as I was saying earlier, with age the crystalline lens stiffens
and some people just, you lose the ability to actually focus. So it doesn’t matter
what you show the person. They simply won't be able to focus. So if we break up this
data into age versus gain, where a gain of 1.0 would be perfectly following the stimulus, then we get something like
this for the conventional mode with a fixed focal plane so you do actually see a
little bit of response here and a couple of outliers
but it reduces over age. And then with this dynamically
adjustable focus mode, it actually looks pretty good
for these people over here but then again, as you age, you’re not going to be
driving anybody’s focus. So what’s a little bit strange is that there are these data points
over here that are larger than one which means that
people actually over-accommodate and we try to explain
that by looking into the vision science literature and
actually it turns out that people have found that
with physical stimuli, too. So here's a paper from 2004 from Berkeley where younger people actually over-accommodate when looking at these sorts of varying stimuli. So this is interesting and
basically just shows that we see the same transport,
this very focal technique, but so far we’ve only looked
at a target as baseball. Just a single rendered object and we asked people to look at this. – [Audience Member] So since
people with actual objects show all these differences, what's the kind of tolerance that we should really be targeting
with these kinds of displays? – That's a very good question. An error tolerance of plus or minus
may be a quarter diopter would probably be fine because
the eye actually oscillates at low frequency within that range anyway and you don't see a lot of
visual degradation in that space. So this is not something that
needs to operate super fast or super accurate. But one thing that it does need to do is it actually needs to work
in a gaze-contingent mode because so far we’ve
only looked at one object and if you have an immersive display with large field of view, you don’t know what the user’s looking
at at any point in time. So you actually really
need the eye tracking for this to work and then
based on what you’re looking at and how far away that object is, you need to adjust the
focal plane on the display dynamically to that. Here’s our prototype that
we built in basically 2016. We have a stereoscopic eye tracker in here that again operates in near infrared so you don't actually see it. It shines a light into the eye, measures in this case the gaze direction. We have a big motor that
is not very comfortable but it allows us to quickly
change the focus of this Gear VR, so we just had a ring that was coupled to the manual focus adjustment
so it's really just a hack really, but it worked pretty well. You can see it in real-time here. We can actually change the focal plane over a range of three to four diopters within 300 milliseconds so it's
pretty heavy but it is fast and that worked pretty well for the first, I would say this is almost
the first gaze-contingent varifocal display that
was really demonstrated. And then, so kind of what it does is this. So this was the data that
we captured in this case, the red circle corresponds
to what the user’s looking at and stay with me for a second. The user looks at an
object that’s close by, farther away, and
immediately the display plane would adjust to that focal distance and then we filled it
with a camera that had manual focus adjustment so we just turn the focus wheel of the camera
to follow where the display goes which the eye would
normally do automatically within these 300 milliseconds. – [Audience Member] The manual focus is slower than the actual? – Yeah, in this case we’re
just going to try to adjust it which is what the eye
would do automatically. So all you’re seeing is
that as soon as the user looks at a different object, immediately the display adjusts its focal point. So we actually showed
this in 2016 at SIGGRAPH. Some of you may have tried it there. It was a user study in disguise which was one of my
best ideas of that year because we got access to like 175 people and measuring them within a couple of days and they saw a really cool demo also. So that was great and you know, the technology’s pretty straightforward but it’s actually not that straightforward to actually build it. If you’ve seen the F8
developer conference this year, that was just in May, you may have seen the Half Dome prototype from Oculus. Here's a video that is courtesy of Oculus that actually has this
technology integrated. So in this case, I can only speculate but it looks like it’s
mounted on a couple of rails. We have these mechanical deformations, probably because of
some kind of an actuator that moves these displays
independently along these rails. And so the way they advertised it was actually with this visual clarity. So varifocal off, you’d
see a blurry image. Varifocal on, you see a sharp image. I mean it’s a little bit complicated to explain it to a
non-expert in two minutes but so basically they motivated
with this visual clarity. So this is great and what
we basically learned is that adaptively driving this
accommodation works really well in a natural way but we
do need eye tracking. In addition to driving the
focus cues for young people, we can also correct refractive errors. So if you have myopia or hyperopia, we can at least correct for defocus. We can't correct for astigmatism in this case, but that's a big thing as
these devices get smaller, you may not be able to
wear eyeglasses underneath. What we also learned
is that for presbyopes, you actually make the experience worse if you try to drive their accommodation. So presbyopes, again somebody
who cannot focus their eyes, you just need a fixed focal
plane at exactly that distance. That’s the best you can do, right? So if you drive an old
person’s accommodation, they’re just gonna see a blurry image. Alright so let’s talk
about light field displays a little bit because that has been a very hot topic for the
last couple of years also and I just want to give you an intuition of what a light field really is in the context of a near-eye display. So let’s think about our eye
and especially the pupil. So the pupil has a diameter
of about three millimeters to eight millimeters depending
on lighting conditions. If it’s very dark, it’s gonna get bigger. If it’s brighter, it’s gonna get smaller. But it always has a finite size. And so you can think about it as a camera that has a finite sized aperture that creates this retinal blur, this depth of field cue on the retina. So the light field is
basically the wave front that enters the pupil and you
can think about it as this. If you take a pinhole and
you look through the pinhole, if you put it at any point in the pupil, you’re going to see an image that has extended depth of field so that has everything in focus. And as you slightly change
the location of this pinhole over your pupil, you’re
going to see slightly different perspectives of the real world. It’s very similar to stereo images where you see one
perspective from one eye, other perspective from the other eye, and in this case you see
many different images entering the pupil at slightly
different perspectives and then will integrate
again on the retina to create this blur. And so the light field
that enters the pupil is basically this
collection of perspectives showing the same 3D scene from ever so slightly different perspectives. And so a light field display will try to create that light field in the headset and then project it into these different parts of the pupil. So our approach to this was what we called the Light Field Stereoscope. We also showed that at SIGGRAPH 2015, and the light field stereoscope was basically a hacked version of
the head mounted display where we have a backlight and an LCD panel and then we have a little bit of a spacer just about six millimeters wide and then we have a second LCD panel. So a light ray that is
emitted by the backlight would go through the
LCD panel in the back, propagate a little bit more, then go through the LCD panel here again, and then go through the lenses. So the form factor of the
device looks the same. And the goal now is to actually render a light field for each eye. In this case we have this
collection of pinhole images, in this case seven by seven, that for you all look the same because they’re so small
but there is a little bit of parallax in there that you don't see at this resolution really and we want to synthesize this light
field with the device. So what this means is that
whatever we do with the display at this part of the pupil, we need to project this image into there. We need to project this image into here and we’re gonna have to
project that image into here. So you can imagine that that's a pretty challenging engineering task, and the way we did that with our dual-layer display, without going too much into the mathematical details, is we rendered the whole light field, where each pixel on this light field corresponds to one ray. We computed exactly the point of intersection on the front and the rear panel for each ray. All the pixels are shared between all the rays, but we can take the light field and then factor it into the set of pixel states that in the best sense approximates this light field, and we used basically non-negative matrix and tensor factorization for that.
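To make the factorization idea concrete (a toy rank-1 sketch; the actual system uses non-negative tensor factorization with time multiplexing and runs on the GPU), you can stack the target light field into a matrix whose entry (i, j) is the ray passing through front-panel pixel i and rear-panel pixel j, and look for non-negative panel patterns whose product best approximates it:

```python
import numpy as np

def factor_light_field(L, num_iters=200, eps=1e-9):
    """Toy rank-1 non-negative factorization: L[i, j] ~ front[i] * rear[j],
    where L[i, j] is the target intensity of the ray through front pixel i
    and rear pixel j (1D panels for simplicity)."""
    front = np.random.rand(L.shape[0])
    rear = np.random.rand(L.shape[1])
    for _ in range(num_iters):
        # Multiplicative (Lee-Seung style) updates keep both patterns non-negative.
        front *= (L @ rear) / (front * (rear @ rear) + eps)
        rear *= (L.T @ front) / (rear * (front @ front) + eps)
    return front, rear

L = np.random.rand(8, 8)                     # tiny stand-in light field
front, rear = factor_light_field(L)
err = np.linalg.norm(np.outer(front, rear) - L) / np.linalg.norm(L)
print(f"relative approximation error: {err:.2f}")
```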
So without going too much into detail here, I just want to show you
what that looks like. So on the left, let’s
just look at the left as a head mounted display
without any focus cues, so just an image rendered at one plane. If we defocus the camera
and go back and forth, I mean it’s either fully
in focus or out of focus. With this light field
display on the right, without changing any optics, this is a static pattern on the pixels and without any eye tracking, we can basically create
this depth of field effect where you can accommodate an
object at different depths and create the retinal blur cues which will then drive your accommodation. So again, this is really
important for objects close by, anything you can touch, this is where the focus cues
are most important and yeah. It kind of did the thing
that it was supposed to do. So this is all based on earlier technology that we worked on at MIT
on multilayer LCD displays. We called this the tensor display, for example, where we actually developed a lot of the algorithms for factoring a light field into this set of pixel states on a multilayer display, potentially even with time multiplexing, captured here with a high-speed camera. Also related to an idea
that we worked on for this vision correcting aspect so in this case, we actually built a light field display by just taking a parallax barrier which is one of the easiest
ways of creating a light field. You just lose a lot of light and resolution. You put it on an iPod Touch and you can correct for the aberrations in the eye digitally with this display, so here you see an image
captured with a camera focused far away from the physical screen so everything is out of focus and this simulates a
hyperopic viewer basically. So you can’t read the font,
you can’t read the buttons, but if we precompute this pattern and observe it through this
parallax barrier display, we can create a light field that encodes a 2D image that floats
outside the physical device. So we call it a vision correcting display and the motivation was that
we get so many pixels now. I mean, in 2014, 300 dpi was a lot. Now, 3,000 dpi maybe is a lot. But sometimes these pixels,
you don’t even see them anymore because they’re beyond
what we can actually resolve so how do you use these
pixels in a meaningful way? Well, we can do things
like vision correction, driving focus cues, and just thinking about light fields as well. So then, I’ll talk about
one more technology before highlighting some more
open challenges basically. So this is a slightly different approach to creating focus cues
which is not really creating focus cues but trying to
circumvent the problem that the lack of focus
cues really creates. And in this case, I showed you
this already earlier, right. If your eye accommodates
at many different distances and does only one image
at some fixed plane, you’re going to get this
point spread function. So what people in optics
and imaging have been doing is they have been actually
working on techniques that are known as point
spread function engineering. So in point spread function engineering, you can change the optics of the system to design the point spread function and people have been using
that for microscopy for example for super resolution
localization microscopy or for creating all in focus
images over a wider range and we can use some of
these techniques also for near-eye display design. So in this case, for example,
looking at a point spread function that is the same irrespective of the depth at which
the eye is accommodated. So in imaging, this is called
extended depth of field and we call it an accommodation-invariant display. So it doesn't matter where
your eye accommodates, you should always see the
same point spread function which means that the image
should always be in focus. So then the question is if we
can drive the accommodation without this retinal blur cue by simply using these stereoscopic cues. So just to make that a
little bit more clear. So our goal was to do optical engineering to remove this retinal blur cue and then the hypothesis was
that due to the cross-coupling, we could potentially drive
the accommodation also with a binocular disparity. But you can’t do that in rendering because the point spread
function of the eye comes from the accommodation state. So you have to change the optics for that. Okay, so how do we remove the blur cue? Well, the easiest way is
to stop down the aperture. So you may know this from photography. If you have a large aperture, or a small F-number, you're going to get a very shallow depth of field. If you stop down the aperture, for example to a large F-number, then you're going to get a large depth of field. You lose a lot of light in this case and it may not be quite practical, but you could try to achieve
the same effect in the eye. So let’s say if the pupil is large, then you’re going to get
shallow depth of field but if you can stop
down the pupil somehow, then that would achieve that effect. So I haven’t figured out
a way of stopping down your pupil yet, if anybody’s up for some psychophysical experiments
potentially with electrical stimulation,
please let me know. (laughs) But you know, people who
are actually designing these displays are thinking a
lot about the exit pupil and that’s basically the same. So in this case, we want
to design the optics and the illumination in
a way that it creates a virtual small pupil. So for example, these are known
as Maxwellian-type displays or pinhole displays so you can have a small point light
that is focused exactly on one point of the pupil
so that creates a very small exit pupil and then you could put your spatial light modulator
in the optical path that is then conjugate with the retina so you can actually scan
out an image on the retina, ideally over a large field of view but with a small exit pupil. So there’s actually been
a lot of work here at UW on this back in the 90s already, I think, from you guys with these
scanned retinal displays, laser-based systems
and so on and so forth. So this is certainly a
very interesting method. I mean one of the challenges
with these types of pinhole displays is that the eye
actually moves quite a bit, the interpupillary distances are different for different people. Even if I just look over there, my pupil moves quite
a lot so you’d have to dynamically steer the
exit pupil for the person and for that, you again need eye tracking. There are a couple other
problems, like seeing floaters in your eyes: if you focus it through a very small part of your pupil, you can actually see the aberrations in your own eye. So what we were thinking is, is there a way of getting a large exit pupil and an extended depth of field, and we used this technique called extended depth of field basically. So the idea is to use a focus tunable lens and to just change its
focal power very rapidly, oscillating it over the whole depth without changing the image. So we just use a regular
display that runs at 60 hertz but we can oscillate this
lens at a very fast rate that’s much faster and
then we’ll basically blur out the image over the
entire accommodation range and it creates a point
spread function that has a finite size so what that means is if we show an image on the screen, it’s going to be convolved
with this point spread function and we see a slightly
blurrier image here, but it will be independent of the depth where the object is. And so then what we can do is we can deconvolve the image that we present on the screen to try to regain some of that sharpness, to try to approximate the target image.
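As a very rough sketch of that pre-correction step (a toy example with an assumed Gaussian depth-invariant PSF and a Wiener filter, not the actual pipeline), you pre-deconvolve the target image with the known PSF before putting it on the screen:

```python
import numpy as np

def gaussian_psf(size=15, sigma=2.0):
    """Toy stand-in for the (approximately) depth-invariant PSF of the focal sweep."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def wiener_precorrect(target, psf, snr=100.0):
    """Pre-deconvolve the target so that, once the display's PSF blurs it again,
    the perceived image approximates the target (this toy ignores the half-kernel
    shift of the uncentered PSF; values are clipped to the displayable range)."""
    H = np.fft.fft2(psf, s=target.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)    # Wiener filter
    precorrected = np.real(np.fft.ifft2(np.fft.fft2(target) * W))
    return np.clip(precorrected, 0.0, 1.0)

target = np.random.rand(64, 64)                      # placeholder target image
shown = wiener_precorrect(target, gaussian_psf())
```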
So here's the image that we want. If we show that on a conventional display, this is a photograph. When it's in focus, it looks great and the point spread function is very small. If the conventional image is out of focus, obviously it's going to get blurry and the point spread function is large. With this accommodation
invariant approach, we get slightly blurrier
in the focal plane but we get a lot sharper
outside of the focal plane and we have this depth invariant property. So now the hypothesis was does it actually drive accommodation? And so again we have a
stimulus that changes over time and if we do the conventional
fixed focal plane design, again in this case we
just used young users, a different set of users than before, and we get a gain of around 35%. If we use the dynamic mode, in this case again only young people, we get a higher gain actually in the dynamic mode, about 85%, and if we use this technique, the accommodation invariant
display, we get about 60% so it’s somewhere in the middle but it’s statistically significant compared to the conventional mode. Again, this only works
with stereoscopic displays because the stereo cues would
now drive the accommodation. This wouldn’t work with
a monocular display. And we implemented that
on our benchtop setup but you can think about this also as a contact lens or some kind of optics that would create multiple
focal planes at the same time. Alright, so let’s get back to
this vision that we started out with which is AR
display, wide field of view, multiple focal planes, in the
form factor of glasses, and that's very challenging. And if we think about AR, that's way more challenging than VR and there are a couple of things that we haven't really talked
about that are specific to AR. So in AR, the idea is that
we’re going to combine light that comes from physical objects with that of digital objects which wasn’t the case
for any of the VR stuff. And in this case, we need
a beam combiner also. So the easiest way of doing that is using a beam splitter cube such as this one where we're going to have a digital display here that's going to be reflected into the eye. We have the real world that's going to get transmitted through there, too. But that's something
that Google Glass used and the form factor isn't very good. So a lot of engineering challenges
go into these wave guides where we have very thin optical elements. We have a small projector,
projects an image in here, and some kind of a coupler
that deflects the light into the wave guide so we
use total internal reflection so the light bounces within the wave guide and then comes out here again. So we can do the same thing
in a very small form factor. The idea of this beam combiner
is the same as Pepper's Ghost, which has basically been around
for a long time also. For example, in theater,
we have the audience here. You have the actor on
stage and then somebody dressed up as a ghost that’s
reflected towards the audience and the person will appear as
semi-transparent on the stage. Very similar to, I think, Tupac Shakur, who had a live show after he was actually dead already. So this is all using digital technology that's very similar to this. So let's look at a couple of case studies. For example, Google Glass
was using this very simple beam combiner design but I
think what they really did well was shrinking the electronics
you could actually wear. So if you look at these patents, these patent drawings,
it’s really very simple. There’s an LED here, a
reflective microdisplay, the image gets reflected
from this curve reflector which acts as a lens basically, back to the eye so you
get a virtual image again floating somewhere in space and the virtual image
distance is defined by the curvature of this reflector. Same as the magnification
but then you would also see the real world through this beam splitter. Meta is another technology
that’s out there right now. You can think about the Meta display as an analog of this
simple magnifier design that we use in VR displays. In this case, we have a
see-through beam splitter here. It’s got a curved surface and so whatever you show on the display
up here is gonna get magnified and reflected into the eye. So the field of view is very large but the form factor’s also pretty big. Microsoft HoloLens is actually
a really well-designed and engineered piece of technology. It uses a wave guide already. It has a small field of
view but there are simply physical trade-offs that you have to make. It has light engines embedded in here where the images are
projected into the wave guide. The light bounces around inside the wave guide, comes out somewhere else. So it's really something
that works quite well and you can actually read up
the details on this wave guide in the patent that’s
called wave guide I think. So you can see the idea here again. You have a little projector,
projects an image in here, it bounces around, actually comes out at multiple different positions. So a couple of the challenges
that are there is that this eye box versus
field of view trade-off. What does that mean? We want to have a very small projector that projects an image into the wave guide that comes out here. Our eye moves quite a bit
so we want to make sure that we can magnify the pupil. This is the entrance pupil, and this is the exit pupil, or the eye box. In order for us to see an image, we need to be able to move our pupil in that large area. So we can magnify that, but we also want to get a large field of view, and the field of view is basically the magnification of the display. This is the microdisplay. It gets projected on the retina and we want to stretch that out as much as we can, so we want to magnify that, too. So now we want to magnify this image on the retina and we want to magnify the pupil into the eye box, and we can't do both at the same time. This is this concept known as étendue. We can only magnify one and thereby minify the other. We can't get both at the same time, so that's a big challenge.
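As a back-of-the-envelope statement of that conservation law (my paraphrase, not a derivation from the talk), the étendue, roughly area times solid angle, is preserved through lossless optics, so

```latex
A_{\text{eye box}} \cdot \Omega_{\text{FOV}}
\;\lesssim\;
A_{\text{source}} \cdot \Omega_{\text{source}}
\;=\; \text{const for a given light engine},
```

which is why enlarging the eye box necessarily shrinks the achievable field of view, and vice versa, unless you start with a larger, brighter light engine.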
Then chromatic aberrations are usually a challenge, right. You can have a micro projector, you have a diffractive optical element that is just etched on
a piece of glass here or on some other material. I mean, you’re going to
get chromatic aberrations no matter what you do on the display side and it’s something that’s very difficult to correct optically so people have come up with
all kinds of ways of correcting for chromatic aberrations in these things, such as using stacked wave guides
for different wavelengths. So for example, if you have
a different wave guide for the green wavelength that would be good because then blue and red
are not quite as important and you can differently
engineer all of these. And you could potentially put
these at different depths, too to get a multifocal plane
display or something like that or you can use a volume hologram where you can actually correct for these chromatic aberrations. I’d say one of the big challenges is mutually consistent occlusions. In optical see-through AR, just to give you a sense
of what that means, is like if I have a real object
here and a digital object, the beam combiner simply
additively combines them. So that means the objects
always look semi-transparent, the digital objects. So there’s an easy case which
is the Pokemon is behind the actual real object
in which case if I have a depth camera, I can simply
not render the Pokemon wherever the physical
object is and it appears as if it's occluded by the tree. But the other case, where the Pokemon is in front of the tree, that's really hard because now I would actually have to block real light coming from the tree, with physically sharp boundaries in the scene, and that's very difficult because I need a different type of modulation technology that actually selectively blocks the real light.
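The easy case he describes, digital content behind a real object, is essentially a per-pixel depth test against the depth camera (a hypothetical sketch with NumPy arrays standing in for the rendered and sensed images):

```python
import numpy as np

def mask_virtual_content(virtual_rgb, virtual_depth, real_depth):
    """Easy occlusion case: don't render virtual pixels that lie behind real geometry.
    virtual_rgb:   H x W x 3 rendered digital content
    virtual_depth: H x W depth of that content, in meters
    real_depth:    H x W depth from the depth camera, in meters"""
    visible = virtual_depth < real_depth        # virtual object is in front
    return virtual_rgb * visible[..., None]     # black = transparent in additive AR

# The hard case, a virtual object in FRONT of a real one, cannot be solved this way:
# the beam combiner only adds light, so blocking the real background needs extra optics.
```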
And so this is a big challenge that nobody has a great answer for. So I'm also teaching a course, EE 267, at Stanford, where the goal is to build a headset. So it's not so much application focused. You really build the headset. Everybody gets this kit with LCD displays and we implement stereo rendering, just rendering in general. We have a little board that has an IMU on it, which you don't see here, but so we do orientation tracking with sensor fusion. We have photodiodes on there now so we do pose tracking as well. If you just want to read up
more on the technology itself, I think this is a good resource because all of the
material is on the website. My group actually works on a
couple of different things, not just VR and AR. We work on displays in general. We work on light field cameras. We work on image processing, deconvolution for fluorescent microscopy, work on optimizing optical
elements in general and just low level computer vision and we actually work a
lot on time-of-flight and non-line-of-sight imaging, so looking around corners with pulsed lasers
is a big topic for us and what I shared with you
today is just a small part. Some of the things that will be coming up is at SIGGRAPH this
year, we're showing this non-line-of-sight imaging live, and then we've also been working on actually getting some of
these technological components like these focus tunable lenses into real-world glasses
so I showed you that people as they age get presbyopia so they can’t correct
for the focus anymore and the current way of correcting that is to use reading glasses or
bifocals or progressive glasses but in all of these cases, you actually reduce the field
of view for any focal distance so the natural way of focus
that we have when we’re young is we just look at an object and our focus is automatically driven there
and you can recreate that but in order to do that,
we actually have these eye-tracking cameras that
track the vergence distance which gives us a sense of
how far away the object is, and we also have a depth camera, a RealSense, here that we use for a little bit of sensor fusion. Sorry, a little bit of sensor fusion, so we get an estimate of how far away the object is that a person looks at at any given point in time, and then we can dynamically adjust that focal power.
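The control logic behind that is conceptually simple (a hypothetical sketch; the simple add-power model and the names are my assumptions, not the actual autofocals firmware): estimate the fixation distance from vergence plus the depth sensor, then set the tunable lens to the corresponding power on top of the wearer's distance prescription.

```python
def autofocal_lens_power(fixation_distance_m, distance_prescription_d=0.0, max_add_d=4.0):
    """Lens power (diopters) for a presbyope who can no longer accommodate:
    distance prescription plus the add power needed to focus at the fixated distance."""
    add_power = 1.0 / max(fixation_distance_m, 1.0 / max_add_d)  # clamp at ~25 cm
    return distance_prescription_d + add_power

# Example: reading at 40 cm with a -1.5 D distance prescription.
print(autofocal_lens_power(0.4, distance_prescription_d=-1.5))   # -1.5 + 2.5 = 1.0 D
```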
So we call it autofocals because it's an automatically focusing pair of eyeglasses. So with this, I think that's it for today. I'd like to thank my
students, Robert and Nitish. Long-time collaborator
Fu-Chung was in NVIDIA Research but just moved to Apple and then Emily Cooper is also a
long-time collaborator. This is the rest of my group. Thanks for your attention. I’d be happy to answer any of
your questions that you have. (audience applauds) – Time for questions from the audience. – [Audience Member] With these displays, what do we have to give up, and what can they do? – Great question. So the question is what
are the trade-offs, right? So you can gain some of these capabilities but what do you lose and I
think one thing I learned is that there’s no free lunch. You always have to give up something and in the academic world, we have the freedom of
building big benchtop setups but if you actually
want to make a product, it’ll have other requirements on the size, weight, power, and so on and so forth. So the light field display in particular is actually fundamentally
limited by diffraction. So the resolution is
something that is challenging to scale to very high
resolution simply because you’re looking through an
array of very small apertures in the first panel onto the second panel so as you make that
smaller and smaller to get higher resolution, the diffraction blur actually gets worse. Also you lose a lot of light. LCDs are only about 6% efficient or so, so you lose 94% of the light every time you go through an LCD. On the computational side, the light field is very expensive. A regular display only renders a stereoscopic image pair; in this case, we render 50
images in total per frame. So that’s 25 images for the right eye, 25 images for the left eye. There’s a significantly
higher rendering cost and then we solve this
nonlinear optimization problem based on that light field to compute the patterns that go on the pixels. I mean, we were able to implement it in real-time on the GPU for moderately high resolution, about 1280 by 800 for both eyes, which is not very high, and that worked in real-time, but there are significant
computational challenges and with that also power requirements. So I think the varifocal
approach that I showed where you actuate the display
or change the focal power, that’s something that would
require a little bit more power actually but that may be feasible. So if you want to build a
really high quality AR display that actually has calibration between what you see and the real world, you need eye-tracking anyway, because you may know what the relationship between the world and the display is, but unless you also know where the eye is,
you can’t really show the object at a precise point, right. If you have eye-tracking anyways, then it makes sense to
build on top of that and have some kind of a
gaze-contingent technique. But there are other challenges with gaze-contingent techniques
especially in AR, so that may not apply to all different configurations. So there are always different
trade-offs and I think depending on what your constraints are and what your applications are, different of these technologies may apply. Yeah? – In addition to gaze tracking, is it feasible to measure
a person’s accommodation based on the curvature of
their eyeball or something and just solve the
problem of accommodation entirely in software? – Yeah, that’s a great idea. Actually, it’s one of my midterm
questions in the VR class. Can you solve this
vergence-accommodation conflict in software, right? So the answer is no, you can't. Leaving aside accommodation tracking, which I'll get back to in a second: if you have a fixed
focal plane display, it doesn’t matter what image
you show on that plane: it will always be sharpest when you accommodate to that physical plane, and if the eye accommodates at any other distance, the image only gets blurrier. So if I change the image on this screen, it doesn't matter for the
accommodation state of my eye. My eye will still be accommodating
on the physical screen because if it accommodates anywhere else, it’s going to be more blurry
than it is on the screen. Does that make sense? Okay, so any software approach would only change the pixel values on the screen; it doesn't optically move the screen, so it doesn't change your accommodation, because your accommodation will be driven to that plane.
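One way to put a number on that: the angular blur on the retina grows roughly with the focus error in diopters times the pupil diameter, and it is zero only when you accommodate to the screen distance, no matter what image is shown. The sketch below uses assumed values (a 4 mm pupil, a screen at 1 m); it's an approximation for illustration, not a measurement.

```python
import math

def defocus_blur_arcmin(accommodation_m, screen_m, pupil_mm=4.0):
    """Approximate angular blur (arcminutes) when the eye accommodates to
    `accommodation_m` while the image actually sits at `screen_m`.
    Small-angle model: blur angle ~ pupil diameter x defocus error in diopters."""
    defocus_diopters = abs(1.0 / accommodation_m - 1.0 / screen_m)
    blur_rad = (pupil_mm * 1e-3) * defocus_diopters
    return math.degrees(blur_rad) * 60.0

screen = 1.0  # assumed fixed focal plane at 1 m
for d in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"accommodate to {d:4.2f} m -> ~{defocus_blur_arcmin(d, screen):4.1f} arcmin of blur")
```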
there’s still a problem with the varifocal
approach, the moving plane, because you can only change
the focus of the entire plane and so you do have the ability
to tell that the focus is outside of our fovea as well. – That’s an excellent
point and that’s a big limitation for varifocal displays. You can only globally change it. The thing is, accommodation
is only driven by the fovea, which is only about one degree of the visual field, so only about this small, like two degrees, about your thumb at arm's length. So only this part of your visual field is actually important
for driving accommodation and after that, the visual
acuity drops off so dramatically that the blur doesn’t really
make a difference anymore. So if you have precise eye tracking which has to be much more
precise than one degree and very fast, then a
globally adjusting focal plane is actually perfectly fine and it’s very difficult to see the difference. So it’ll be fine. What you can do is you can
actually render in the retinal blur to make the image more plausible. You don't need to leave it completely sharp. You can render in the right blur, but you still need to globally adjust the focal plane to whatever your gaze position is. – So to render the right blur, you do need to record
the focus of the eye. – You don’t because you assume that it’s wherever the optical distance
is of the focal plane. So your actuator of the varifocal display has to have some kind
of a feedback mechanism to tell you where that plane is. You can think about the
plane as guiding your focus, so as long as you know where the plane is, and you know the prescription, and you assume that they can actually accommodate in that range, then they will always be accommodated at that plane. So tracking accommodation in a small form factor is actually very hard, and also I don't know if it's really useful, but you could use light field cameras or other types of sensors to measure the accommodative state of the eye. But I think the best scenario
would be globally adjusting it depending on your gaze
and rendering out the blur everywhere else to make it most plausible and actually drive accommodation. Hopefully that answered your question. Oh, sorry, I saw you had a question. – Are there physical constraints or limits to getting retinal resolution for such a display? There are kind of two long-term approaches I would see: one is a see-through display for AR, the other one is to just completely recreate the real-life view with some kind of display technology. So I was curious whether you see any constraints that would make one or the other a viable option over the long term. – Well you can create retinal
resolution-type displays, but only over a small field of view. I mean, it's really the
pixel count that matters and I think industry has been pushing for more and more pixel resolution. It doesn’t really make
sense to go beyond 4K or 8K on a big television. It really only makes sense
if you stretch that display over the entire visual
field. Because if you do the math, we have a visual field of about maybe 160 degrees horizontally and maybe 135 degrees vertically per eye. If you assume that retinal resolution is about 58 or 60 points per visual degree, then that's about 9,000 by 8,000 pixels that you would need per eye to get retinal resolution. So in that case you either need to have a panel that has that resolution, or you have to dynamically steer a smaller, foveated kind of screen. You could optically combine a large field of view display with a small field of view display, where only the small field of view has high pixel density, and if you steer that with your gaze, that would probably be enough. But then the bandwidth is a challenge: we don't have that number of pixels, we can't really drive it, and we can't really render at that high a resolution in real time.
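The back-of-the-envelope version of that math, using the same rough numbers (the 90 Hz refresh and 24 bits per pixel in the bandwidth line are assumptions added here, just to give a sense of scale):

```python
# Rough retinal-resolution pixel budget per eye, with the approximate numbers
# from above: ~160 x 135 degrees of visual field and ~60 pixels per degree.
h_fov_deg, v_fov_deg = 160, 135
pixels_per_degree = 60

width = h_fov_deg * pixels_per_degree    # ~9,600
height = v_fov_deg * pixels_per_degree   # ~8,100
print(f"per eye: {width} x {height} pixels = {width * height / 1e6:.0f} MP")

# why bandwidth becomes the problem: both eyes, assumed 90 Hz, 24 bits/pixel, uncompressed
bits_per_second = width * height * 2 * 90 * 24
print(f"raw video rate: ~{bits_per_second / 1e9:.0f} Gbit/s")
```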
– [Brian] I think we have time for maybe one more question. We're running behind. – You had a question, sorry. – [Audience Member] Just a very quick one: in your research, what was the frame rate or speed used to do this eye tracking? It seems very challenging mechanically to keep up with a person's eye. – So eye tracking, it seems simple but it's actually very difficult
to get it to work right, so we actually used a stereoscopic eye tracker from a company called Pupil Labs in Berlin. They have this open-source eye tracker. It runs at 120 hertz. It worked okay. I mean, they specify the accuracy as below two degrees. That doesn't sound like much, but two degrees, that's the size of your fovea, right? (laughs) So I think eye tracking sounds simple, but it's actually very hard to do right, and we don't claim that– – The tracker you used for your autofocals, did you– – Oh no, the autofocals use the Pupil Labs tracker, yeah. So we didn't custom develop this. Like we changed the software,
the control mechanism by incorporating the
information from the RealSense, which is actually much slower than the eye tracker. But we know what the distance of the scene is everywhere, we kind of know the vergence, which is noisy, and we kind of developed a cute version of a sensor fusion algorithm, yeah.
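The talk doesn't spell out that fusion, but the flavor is a precision-weighted combination of the two distance estimates, done in diopters where the vergence noise is roughly uniform. The sketch below is a generic illustration with invented noise values and function names, not the algorithm that was actually used.

```python
def fuse_fixation_distance(vergence_m, depth_m, sigma_vergence_d=0.5, sigma_depth_d=0.1):
    """Toy precision-weighted fusion of two fixation-distance estimates: a noisy
    one from binocular vergence and a more reliable one from a depth camera
    sampled at the gaze point. Works in diopters (1/m); the sigmas are made-up
    noise levels in diopters."""
    z = (1.0 / vergence_m, 1.0 / depth_m)                  # measurements in diopters
    w = (1.0 / sigma_vergence_d**2, 1.0 / sigma_depth_d**2)
    fused_diopters = (w[0] * z[0] + w[1] * z[1]) / (w[0] + w[1])
    return 1.0 / fused_diopters                            # back to meters

# e.g. vergence says ~0.8 m, the depth camera says ~0.6 m at the gaze point
print(f"fused fixation distance: {fuse_fixation_distance(0.8, 0.6):.2f} m")
```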
Sorry, I'll also be around for a bit if you want to talk more. (audience member mumbles) – [Brian] Okay, sure. – [Audience Member] Oh yeah, I was just gonna ask: are there any major differences in solving this vergence-accommodation issue for AR versus VR? Because I know AR has both the virtual and the real. – Yeah, I think that's an
excellent question actually. So most people talk about it
in the context of VR, actually, and so I've only been talking about VR. In VR, the problem is visual clarity, potentially discomfort, eyestrain, things like that. Right, in AR you have the real object also, so you don't actually
know where the person is actually going to accommodate. If there are conflicting stimuli between the physical object and the digital object and it’s supposed to be
at the same distance, I mean I don’t know exactly where the user is going to accommodate. It’s going to be potentially driven by the stereoscopic cues, by
the relative brightness of these objects, and so on and so forth. Just creating a consistent representation is actually much more challenging. Also, it's then not just about driving accommodation, which is the big goal for VR. For VR you just want
to drive accommodation somewhere close to where
it’s supposed to be and that’s good. For AR, you need to create
consistent retinal blur cues between the two also and
so I think the precision has to be a lot better in that case for the focus cues, and I think it's a lot more important to get them right for AR. (audience member murmurs) Because fundamentally, these
two cues mismatch in VR. So vergence I can measure: if I know the IPD and the gaze angles for both eyes, I can roughly estimate the vergence angle, which is the angle between these two lines of sight. But the accommodation will probably be somewhere else.
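For reference, the geometry behind that estimate is just triangulation from the IPD, and it also shows why small tracker errors hurt so much at larger distances. The numbers below (a 63 mm IPD, a symmetric straight-ahead fixation, a 0.5-degree error) are assumptions for illustration only.

```python
import math

def vergence_angle_deg(ipd_m, fixation_distance_m):
    """Vergence angle between the two lines of sight for a fixation point
    straight ahead at the given distance (simple symmetric geometry)."""
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / fixation_distance_m))

def distance_from_vergence_m(ipd_m, vergence_deg):
    """Invert the same geometry: fixation distance recovered from a measured
    vergence angle."""
    return (ipd_m / 2.0) / math.tan(math.radians(vergence_deg) / 2.0)

ipd = 0.063  # assumed 63 mm interpupillary distance
for d in (0.5, 1.0, 2.0, 4.0):
    angle = vergence_angle_deg(ipd, d)
    noisy = distance_from_vergence_m(ipd, angle + 0.5)  # add a 0.5-degree tracker error
    print(f"{d:3.1f} m -> {angle:4.2f} deg; with +0.5 deg error the estimate becomes {noisy:4.2f} m")
```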
doesn’t necessarily happen. (audience member murmurs) Oh yeah, yeah, yeah. Yeah, yeah. So that’s what you’d like to. So you can measure vergence and then try to drive accommodation there. But it’s very difficult to
measure vergence precisely, because these are such small angles, and such small changes in this angle get very noisy. I mean, that's what the eye tracking does. So the eye tracking either measures vergence, or, if you have computer-generated content, you can take the gaze angle and do a lookup in the depth of the rendered scene, or you can fuse both, but usually you would measure the vergence angle, and that tells you where you want to drive the accommodation. That's what the eye tracking does. (audience member murmurs) Oh, for the presbyopic glasses? – [Audience Member] Yeah. – I mean, vergence tracking
is just very noisy. So that’s why we also
want the depth information just to have more robust
information, basically. But if you had a clean vergence tracker, that's all you'd need, yeah. – [Brian] Alright, we do
have one more question. – In an AR situation,
like the eye tracking doesn't really necessarily give you the depth information, so yeah, but I guess if you use a depth map, you could have a depth camera and you could do it. – Yeah, I mean some of the headsets, the AR headsets like the Meta, they use time-of-flight cameras. I think the HoloLens 2 does too, right, and mostly for hand tracking, but you could use it
for other things, too. I mean, at the end of the day, you want as much information as you can get about the real world and the digital world, and then to optically combine them in a seamless way: getting
the lighting effects to be consistent, getting
the shadows and shading to be consistent, getting
these occlusions right, that’s really hard, there’s
a lot of excellent work here going on, I mean some of you
are probably working on this, like this lighting estimation problem, right. It's one thing to do that on a cell phone, but on an optical device it's even harder, doing
relighting, things like that. There’s just so many different challenges which makes it such an exciting topic. – [Brian] Okay, let’s thank Gordon. (audience applauds)
