Perception and Psychophysics; Myopia Programs – NEI Council 10/04/2019


Alright, so as foreshadowed, I’m going to be covering the last 2 sections of SAVP: the Perception and Psychophysics program is really about vision functioning as it should, and Myopia and Refractive Error is about vision that’s a bit disordered.

Alright, so the field of psychophysics has a pretty long history; you can go back to the mid-1800s with the German scientist and philosopher Gustav Fechner. He asserted that the mental processes we use to interpret information from the physical world are susceptible to measurement. Now, that was actually a pretty bold idea at the time, and he put his career on the line to show that it could be done; as a result, he founded the field of psychophysics. There are fundamental quantitative methods you can use to titrate the relationship between physical stimuli and the perceptions they evoke. And so, just as an illustration, I’m bringing up a well-known psychophysical function that Fechner worked on: the absolute detection of light. You can see on the graph that, along the X-axis, you incrementally increase the intensity of a stimulus, and then simultaneously plot on the Y-axis the perception that resulted.
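Just to make that concrete, a minimal sketch (illustrative Python, not from the talk) of the kind of psychometric function these methods yield – detection probability rising smoothly with stimulus intensity, with threshold and slope as the measurable quantities:

```python
import numpy as np

def psychometric(intensity, threshold, slope, guess=0.0, lapse=0.02):
    """Logistic psychometric function: P(detect) as a function of
    stimulus intensity. `threshold` is the intensity at the halfway
    point; `slope` sets how steeply detection rises."""
    p = 1.0 / (1.0 + np.exp(-slope * (intensity - threshold)))
    return guess + (1.0 - guess - lapse) * p

# Sweep intensity along the "X-axis" and print the resulting
# detection probability (the "Y-axis" of Fechner's function).
for i in np.linspace(0.0, 10.0, 11):
    print(f"intensity {i:4.1f} -> P(detect) {psychometric(i, 5.0, 1.5):.2f}")
```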
And so this strategy of tight experimental control of the physical stimuli, connected to the functional experience of perception, continues to this day, and stimuli such as these sinusoidal gratings, or Gabor patches, are in widespread use. They’re really appealing for a number of reasons. One is that, as you can see, they’re relatively simple stimuli, but there are a number of ways they can be varied experimentally, such as the frequency or the orientation of the gratings. They can be characterized mathematically pretty easily, which is actually very useful for computational modeling. And we know that cells in the visual cortex just eat them up, so that’s really useful for neuroscience programs.
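And since Gabor patches come up so often, here is a minimal sketch of that mathematical characterization (illustrative parameters, not from any particular study): a sinusoidal grating windowed by a Gaussian envelope, with frequency and orientation as the experimental knobs.

```python
import numpy as np

def gabor(size=128, freq=0.05, orientation_deg=30.0, sigma=20.0, phase=0.0):
    """A Gabor patch: a sinusoidal grating (freq in cycles/pixel,
    at orientation_deg) windowed by a circular Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    theta = np.deg2rad(orientation_deg)
    x_rot = x * np.cos(theta) + y * np.sin(theta)   # rotate the grating axis
    grating = np.cos(2.0 * np.pi * freq * x_rot + phase)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return grating * envelope   # values roughly in [-1, 1]

patch = gabor()
print(patch.shape)   # (128, 128); vary freq/orientation_deg experimentally
```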
But the fact of the matter is, the world looks more like this: it’s cluttered, it’s complex, it’s colorful, it’s meaningful, it’s dynamic. It’s all of those things. And people have been really interested to see if it is possible to study these without sacrificing the experimental and computational rigor that psychophysics brings. And so, I’m going to highlight
3 different labs that are doing just that. And the first one is Johannes
Burge, at the University of Pennsylvania. He’s an early-stage investigator that we just recently funded. And he
combines computational modeling and psychophysics to understand
how it is that we estimate depth from natural scenes. And so
in developing his algorithms, he recognized it would actually
be really useful to find out if the estimates are accurate.
So he developed what he refers to as a “Frankensteinian” device to collect these images with the distance data. It has
a digital SLR camera set atop a laser-range scanner, set atop a robotic gantry, and the gantry continuously aligns the scanner and the camera so that they are getting their data – that is, the images and the distance data – simultaneously in the right places. And so, just to give you an example of the templates that result: he can get these low-noise, high-resolution stereo images from the camera, and those are then templated and aligned with the co-registered, laser-measured distance data.
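Just to illustrate what “co-registered” means here – a generic toy sketch, not Burge’s actual pipeline – each laser-measured 3D point can be projected into the camera’s pixel grid with a pinhole model, pairing every image location with a distance:

```python
import numpy as np

def pair_pixels_with_distance(points_xyz, focal_px, cx, cy):
    """Pinhole projection: map laser-measured 3D points (already in the
    camera's coordinate frame, z > 0) to pixel coordinates, keeping each
    point's distance alongside -- one (u, v, distance) row per point."""
    x, y, z = points_xyz.T
    u = focal_px * x / z + cx          # image column
    v = focal_px * y / z + cy          # image row
    dist = np.linalg.norm(points_xyz, axis=1)
    return np.stack([u, v, dist], axis=1)

# Two hypothetical laser returns, in meters:
pts = np.array([[0.2, -0.1, 2.0], [1.0, 0.5, 5.0]])
print(pair_pixels_with_distance(pts, focal_px=1000.0, cx=640.0, cy=360.0))
```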
So he’s actually collected the full dataset, and what I want to highlight here is that, yes, he’s starting to use it in his own studies on depth estimation, but, importantly, he has put this dataset out; it’s available to the scientific community.
And it’s not just psychophysics labs that contacted him to use
these data, he’s also heard from neuroscience labs and machine
learning researchers. The NEI funds that went towards
this are actually being distributed not just in his lab
but beyond. Another example of how we
process natural scenes is – I can show you a real world
image like this, and if I had an eye tracker – I’m getting
data from all of you – I’d get something that looks like this. So you can actually empirically obtain where your fixations
landed, and then I can take those data – well, I can’t, but
people are much more mathematically inclined can take
those data and actually transform them into an attention
heat-map. In other words, those hot-spots
indicate where fixations landed more frequently, or stayed on
the image more. And that gives you a sense of
how people, when looking at a scene, prioritized which regions
they wanted to look at. But these are just the empirical
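The transformation itself is conceptually simple; a minimal sketch (illustrative Python, not any lab’s code): bin the fixation locations into an image-sized 2D histogram, then smooth it into a heat map.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def attention_heatmap(fix_x, fix_y, width, height, sigma_px=30.0):
    """Count fixations in an image-sized grid, then blur: hot spots
    mark regions fixated more often or for longer."""
    counts, _, _ = np.histogram2d(fix_y, fix_x,
                                  bins=[height, width],
                                  range=[[0, height], [0, width]])
    heat = gaussian_filter(counts, sigma=sigma_px)
    return heat / heat.max()          # normalize to [0, 1]

# Hypothetical fixation coordinates on an 800x600 image:
rng = np.random.default_rng(0)
hm = attention_heatmap(rng.uniform(0, 800, 200), rng.uniform(0, 600, 200),
                       width=800, height=600)
print(hm.shape)                        # (600, 800), same layout as the image
```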
But these are just the empirical data of where your eyes landed. The question is: why? Why are they going to certain places? So, one theory is that it’s all driven by low-level features, such as color, orientation, and brightness. And Laurent Itti developed a model to test this. He could take these visual images, compute the mathematical properties of where most of those low-level features were spatially, and then create a salience heat-map that he could compare against the empirical attention map data to see how predictive it was.
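Itti’s actual model combines several feature channels across multiple scales; as a loose, single-channel sketch of the idea (not his implementation), center-surround contrast on image intensity makes regions that differ from their surroundings light up:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def intensity_salience(gray, center_sigma=2.0, surround_sigma=16.0):
    """One channel, one scale of an Itti-style salience map:
    |fine blur - coarse blur| of image intensity, so patches that
    differ from their surroundings stand out. The full model repeats
    this over color, orientation, and intensity at multiple scales."""
    center = gaussian_filter(gray.astype(float), center_sigma)
    surround = gaussian_filter(gray.astype(float), surround_sigma)
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-9)

gray = np.zeros((100, 100))
gray[40:60, 40:60] = 1.0               # one bright patch on a dark field
print(intensity_salience(gray).max())  # the patch region lights up
```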
Alright. But John Henderson at UC Davis said, “Well, I disagree. I don’t think it’s actually low-level features that are directing our visual guidance; it’s actually the high-level semantic content that’s in the scenes.” And so he used a procedure very similar to Itti’s in terms of developing the salience maps for low-level features, but did it with semantics. And so now we have 2 models here: the low-level, feature-driven visual attention model, and the meaning-map model for high-level semantics. And I know that I’ve glossed
over the methods, so it’s almost like I’m saying “and some magic occurred here” to make these maps mathematically viable – I’m happy to answer questions about that later – but in the interest of time, I’m going to say that the take-home point, the important one, is that the salience map and the meaning map can then be tested against each other to see which one is more predictive of the actual empirical fixation data.
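The simplest version of that test is to correlate each model’s map with the empirical fixation map; a hedged sketch (the published comparisons use more careful statistics, such as partialling out the variance the two maps share):

```python
import numpy as np

def map_correlation(model_map, fixation_map):
    """Pearson correlation between a model's map and the empirical
    attention map, computed over all pixels."""
    return np.corrcoef(model_map.ravel(), fixation_map.ravel())[0, 1]

def more_predictive(salience_map, meaning_map, fixation_map):
    r_sal = map_correlation(salience_map, fixation_map)
    r_meaning = map_correlation(meaning_map, fixation_map)
    winner = "meaning" if r_meaning > r_sal else "salience"
    return winner, r_sal, r_meaning

# Toy maps, just to exercise the comparison:
rng = np.random.default_rng(1)
fix = rng.random((60, 80))
sal = rng.random((60, 80))
meaning = fix + 0.1 * rng.random((60, 80))   # built to track fixations
print(more_predictive(sal, meaning, fix))
```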
And if you have just a free-viewing episode like this, where you’re just shown the image and you look, the meaning map is actually more predictive. But one could say, well, maybe we’re biased to do that, and so in a recent study that Henderson’s lab published just this year, they pushed the subjects to treat it like a low-level feature task: they instructed them simply to search for how many bright patches they could see in the image. And meaning still won out. The meaning maps accounted for the overall variance better than the low-level feature ones, and the individual correlations
also mapped out better. I will say again, in terms of a
really innovative, unique method that was developed with this NEI funding: it’s not just staying in John’s lab. His colleague Lisa Oakes, also at UC Davis, who studies how visual processes develop, is using this same method in 4-, 8-, and 12-month-olds – what she calls the period when children transition from learning to look to looking to learn. She’s just starting this data collection, so it’s going to be really interesting to see if these semantics come online at a certain point in development. Ok, so the last lab that I want
to share is not just natural scenes, but putting you into
nature for the visual experience. So Mary Hayhoe at the University of Texas at Austin and her postdoc John Matthis – who was a K99 recipient (yay, Neeraj!) and just transitioned to his R00 at Northeastern – developed this remarkable
data collection system to see how it is that walkers
gather visual information that they need to stay upright
as they locomote through various terrains, including pretty rough
terrain. And it combines a mobile eye
tracker and a motion capture system, so you can record the
person’s gaze, and their full-body kinematics as they are
walking over these terrains. And it’s really a great
opportunity to look at unique datasets, such as how we tune
our gaze to different environmental demands, or how motion patterns are experienced in the real world, as opposed to in the lab. So I want to give you a sense
of what this data collection looks like. In real time, it’s
multiple streams of data as the person is walking. On the
left, you can see a reconstructed skeleton, with the
footholds, past, present and future shown in the dots. The
pink line is the gaze vector, where they are looking as
they’re walking. And then on the right, you can
see a frame from the eye-tracker It’s showing where the eye is
in the head, and then the blue cross-hairs are showing the
person’s 2D point of regard, as they’re moving over rough
terrain. And I don’t think anybody fell
And I don’t think anybody fell during the experiment. Although I would have!

[Laughter]

The next one is really interesting. There are a number of ways they can pull these data, and what they did here is look at retina-centered
vision. In other words, how the motion patterns present
on the retina as you’re walking. And what you will probably
notice is that it is not a smooth, non-stop velocity, but
there’s a lot of acceleration and deceleration. And this goes
counter to what the field has supposed for the last 30 years.
So studies on optic flow, which is the perceived motion
that an observer has as they are moving relative to that object
or surface, has been studied in electrophysiological
laboratories looking at area MT. And this actually suggests that
maybe those studies should be revisited using more appropriate
stimuli with ecological validity such as this. Ok, so I’ve given you a bit of a
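For reference, the smooth-flow intuition comes from the classic translational optic-flow equations; here is a minimal sketch of that idealized case (illustrative only, not the Matthis/Hayhoe analysis), which real gait complicates with step-by-step acceleration:

```python
import numpy as np

def translational_flow(x, y, depth, t):
    """Idealized optic flow for pure translation t = (tx, ty, tz),
    focal length 1, no eye rotation: the image velocity of a point at
    normalized image coordinates (x, y) lying at distance `depth`.
    Real walking adds gait-driven head motion, so retinal speeds
    pulse with every step instead of staying smooth like this."""
    tx, ty, tz = t
    u = (-tx + x * tz) / depth
    v = (-ty + y * tz) / depth
    return u, v

# A point slightly below the horizon while walking forward at 1.4 m/s:
print(translational_flow(x=0.0, y=-0.3, depth=3.0, t=(0.0, 0.0, 1.4)))
```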
Ok, so I’ve given you a bit of a taste of some research we’ve funded in the Perception and Psychophysics program, just from real-world and natural scenes. But that doesn’t really reflect the breadth of the program. There are a number of different areas that people are pursuing, using a number of different methods. Although I will highlight: as you probably noticed, even though these researchers were captivated by applying psychophysics to the study of real-world natural scenes, they also embedded a few other areas of interest – looking at depth, looking at motion, looking at visual attention and search. And so more and more
we’re seeing projects coming in that combine these different
areas, looking at them as more complex scenarios, but also
combining different methods. So, I promised I would cover normal visual function and how it’s being studied by researchers in the program; now I’m moving to perceptual disorder and, in particular, refractive error. To understand what refractive
error is, I’m actually going to put you in the point of view
of a 6-year-old in the first grade, maybe the first week of
class, and if you are a student who has myopia, and this is the
age when generally it’s initially diagnosed, the objects
that are close-by like that annoying student in front of you
raising their hand, that’s nice and crisp and clear, but the
lesson on the board is not; it appears blurry. And the converse is true for hyperopia: as a student, you may have difficulty seeing
objects that are close, like the words in the book that you’re
trying to learn how to read. And so, as I’ve insinuated with
this example, there’s a cascade of problems that can come from
uncorrected refractive error. The educational prospects can be
severely impacted. Fortunately, it’s relatively straightforward to correct with spectacles, contact lenses, or surgery; but, despite that, uncorrected refractive error is the leading cause of visual impairment worldwide. Closer to home,
roughly 40% of Americans have myopia. 4% have high myopia,
myself included, and this latter group actually is particularly
vulnerable to pathological ocular conditions, such as
retinal detachments, macular degeneration, glaucoma, that are
blinding. So it’s really a serious issue. On top of that, the incidence of
myopia has been becoming more and more common over time, and
so currently, next year, the global number of people with
myopia are – it’s going to be about 2.5 billion, but
projecting to the year 2050, will almost double. And just to, again, look at how
that prevalence maps out closer to home, shown in red
here you can see the 2020 prevalence level, about 42% for North America, including the United States, of course, but that’s expected to be upwards of 58% by 2050. I will say that this table is showing the estimated growth of myopia in different regions, and you will probably notice that for regions such as Asia – different parts of Asia – the numbers are even higher. So it’s reaching epidemic proportions, and it’s an important health issue. So it’s really important for NEI to do what NEI does best,
and that’s to support work to understand the mechanisms. So
part of that is actually understanding normal eye
development. We’re all born with hyperopia,
and during infancy, our eyes experience very rapid growth.
It starts to slow down but continues during school age. By
late teens, your eye should be emmetropic – that is, it’s at
the ideal refractive state, so that light focuses to a point landing right on the retina, perfectly. And this growth involves a regulated coordination of the lens
curvature, and the axial length. That’s when it goes right. But for some, like me, and a few other folks in the audience
wearing glasses, there’s a myopic shift, so that growth
becomes accelerated, and you get an excessive axial elongation.
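To put rough numbers on that myopic shift (a back-of-envelope sketch; the diopters-per-millimeter figure is a commonly cited rule of thumb and varies by eye), the needed spectacle correction is just the reciprocal of the far-point distance:

```python
def correction_diopters(far_point_m):
    """Spectacle power for a myope whose farthest clear distance is
    far_point_m meters: P = -1 / far point (in diopters)."""
    return -1.0 / far_point_m

# A student who only sees clearly out to 0.5 m needs about -2 D:
print(correction_diopters(0.5))       # -> -2.0

# Hedged rule of thumb (it varies by eye): each extra millimeter of
# axial elongation costs roughly 2.5-3 diopters of myopia.
excess_axial_mm = 1.0
print(f"~{-2.7 * excess_axial_mm:.1f} D shift for {excess_axial_mm} mm elongation")
```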
And there are a number of risk factors that have been noted,
that the trait runs in families, so there does seem to be a
genetic component. There’s been an association for the time
spent on near work and a higher incidence of myopia, and it does
appear that time outdoors may have a protective factor for
developing myopia. But these are relatively broad
descriptions, each one containing a number of
confounds. So the best way, of course, to really understand is
to look at the mechanisms of onset and progression.

And so, it’s been very well established, by lots of studies, that we do require visual experience for refractive development. Experience with highly patterned, high-contrast stimuli such as this gives
meaningful information to the eye about its refractive state. But if you lose that detail or
you lose that contrast, then you can develop a form
deprivation myopia. The retina loses that
information. And this finding has actually been used to
develop a very reliable and widely used animal model of
myopia. We can have one eye receiving normal vision, but the
other one is fitted with a translucent diffuser goggle, so
that they lose the detail in that eye. And then you can see,
in this example, with form-deprived chicks, the normal eye
has an intact refractive state, but the one with the diffuser
goggle develops myopia. And this model cuts across many
species, from zebrafish to non-human primates and actually
to humans. We all show this form deprivation myopia. And that’s
really useful, because you can develop these animal models.
Each has unique experimental advantages for studying myopia. And of course, each one has positives and negatives, ranging from how appropriate the anatomy is for what you’re looking at, to practical considerations like how they breed or how difficult they are to handle. These all need to be weighed by researchers. But I want to emphasize that
all of these models are represented in the myopia
program. And a number of researchers have
actually used them, inspired by some of those risk factors
that I mentioned earlier, to try to get at what it is about those risk factors that may affect whether or not you develop myopia. And so Terri Young at the University of Wisconsin is harnessing the advantage of the zebrafish model so she can ask how genetics might play a role. She’s combining genome-wide linkage studies on large, multi-generational families who have high myopia with mutant zebrafish functional studies: when she gets candidate genes, she can knock them out to see how that affects myopia development. Deb Nickla, at the New England
College of Optometry, is harnessing the advantage of a
chick model, to address that risk factor of how time outdoors
might play a role in myopia development. The advantage of the chick model is that it responds very quickly to the light environment, and it’s also diurnal. And so what Deb’s interested in is not just light but light cycles, and how
they can affect retinal rhythms, particularly related to dopamine
release by the retina, and how that affects axial
growth in the sclera. Chris Wildsoet at UC Berkeley
has a long history of working with the chick model. But she’s
now harnessing the advantage of the guinea pig model, to ask how
therapies might work. And so the nice thing about
using the guinea pig, a mammal, is that she can
actually mirror the same type of administration protocol and dosing for atropine interventions, and look to see how that mechanism ends up working. We can also harness the
advantage of the primate model and human data, and rather than
trying to give any sort of overview on that, I’m going to
say that we can ask Earl Smith, because we’re very very lucky to
have him here. I’m going to say in terms of
question and answer, because Earl really follows up
nicely, diving right into the myopia research, we’re going to
have him come up, give his presentation, and then at the
end, if people want to have questions, the three of us will
be available.
