BIO 347 / NEU 547 - Midterm 1 review (part 1) | Braden Brinkman | YouTubeToText
Summary
Core Theme
This content explains the Poisson distribution, its relationship to the Fano factor and signal-to-noise ratio (SNR), and how these concepts are applied in a linear-nonlinear Poisson (LNP) model for neural responses.
The first one I'll start with is the Fano factor, because I know a lot of you say you understand it very well, and it is a very easy and natural segue for us to move to the Poisson probability distribution.
So the Fano factor is the ratio of the variance over the mean. It just says how spread out a probability distribution is compared to where its center is. A lot of you were confused about the Fano factor, and I know a lot of you were confused about the signal-to-noise ratio. So now the question is: can anyone type in the chat what's the difference between the SNR and the Fano factor? What are the differences?
Yep.
The denominator and the numerator are different. So the signal-to-noise ratio is actually mu over sigma; there's no square. A lot of you who had issues on your worksheet, it was exactly because you didn't take the square root. So one more time, remember that.
Okay. And again, for a Poisson neuron, or Poisson trial-to-trial variability, it is very important to remember the Fano factor.
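As a concrete check, both quantities can be estimated from simulated Poisson spike counts (a minimal sketch; the mean count of 4 and the sample size are made-up illustrative numbers, not from the lecture). For Poisson counts the Fano factor comes out near 1 and the SNR near sqrt(mu):

```python
import math
import random
import statistics

def poisson_sample(lam, rng):
    # Draw one Poisson-distributed spike count (Knuth's algorithm).
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(0)
lam = 4.0  # assumed mean spike count, purely illustrative
counts = [poisson_sample(lam, rng) for _ in range(20000)]

mu = statistics.mean(counts)
var = statistics.pvariance(counts)
sigma = math.sqrt(var)

fano = var / mu    # Fano factor: variance over mean
snr = mu / sigma   # SNR: mean over standard deviation (no square!)

print(round(fano, 2))  # close to 1 for Poisson counts
print(round(snr, 2))   # close to sqrt(lam) = 2
```

Note the difference in the two ratios: the Fano factor divides by the mean and keeps the variance, while the SNR divides the mean by the standard deviation, i.e. the square root of the variance.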
What is the signal-to-noise ratio for a Poisson neuron, if I tell you that mu = sigma squared = 1? Why would you take the square root of mu?
Well, that's right. But really,
Yes, Babina had it right.
Yeah. So now you get it. Remember: take the square root of your variance and you'll be fine. (For a Poisson neuron, SNR = mu / sigma = mu / sqrt(mu) = sqrt(mu).) And Poisson is exactly how we build a tuning curve: first we convert the external variable into a mean firing rate. Lambda just tells us what the mean firing rate looks like. The number lambda here defines mu, and the Poisson randomness defines the spread of the spike count. That actually tells you the Poisson distribution only needs mu to define the whole distribution, because its randomness comes from the Poisson form,
and this comes back to the mathematical definition: the Poisson probability of observing a spike count k, with mean spike count lambda, is given by

P(k) = lambda^k * e^(-lambda) / k!

You're probably going to use this equation on your midterm, so keep a note or have it handy; that's exactly something we're going to ask you to refer back to.
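The formula translates directly into code (a minimal sketch; the example values of k and lambda are illustrative):

```python
import math

def poisson_pmf(k, lam):
    # P(k) = lam**k * e**(-lam) / k!
    return lam ** k * math.exp(-lam) / math.factorial(k)

# With mean spike count lam = 1:
print(round(poisson_pmf(0, 1.0), 4))  # e**-1 ≈ 0.3679
print(round(poisson_pmf(1, 1.0), 4))  # also e**-1 ≈ 0.3679
print(round(poisson_pmf(2, 1.0), 4))  # e**-1 / 2 ≈ 0.1839
```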
So for this particular distribution, if we give you some questions providing this form, what we want you to look for first is: what is the mean firing rate in this Poisson distribution? For example, from the lecture I had earlier, we had a mean firing rate of 10 spikes per second, but we are only observing a 100-millisecond bin, so the mean spike count for that particular Poisson distribution within the 100-millisecond bin equals one, because it's 10 times 0.1. So it's very important that, first, you know what the mean spike count for the Poisson distribution is, and second, what the sample value is that we want you to compute the probability for.
So remember to do this: step one, what's the mean? Step two, what's the sample value?
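Those two steps, with the numbers from the example above (10 spikes/s observed in a 100 ms bin), can be sketched like this; the sample value k = 2 is an illustrative choice:

```python
import math

rate_hz = 10.0         # mean firing rate from the example, spikes per second
bin_s = 0.1            # 100 ms observation bin
lam = rate_hz * bin_s  # step 1: mean spike count, 10 * 0.1 = 1.0

def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

# Step 2: probability of the sample value, e.g. k = 2 spikes in the bin.
print(lam)                            # 1.0
print(round(poisson_pmf(2, lam), 4))  # 0.1839
```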
Not the leaky integrate-and-fire neuron, sorry: the linear-nonlinear-Poisson model. We have the Poisson part; that's what we just reviewed. The linear part is just the way we convert the stimulus by passing it through the receptive field; that's the linear stage. The nonlinearity acts on this response: we want to convert it into something we can easily add some variability to. The nonlinearity kills everything that could possibly be negative. So that's why we have both, the linear and the nonlinear, in combination.
So, excuse me, the sum of a_i x_i is the linear part, and g is the nonlinear kernel. Combining them we have the linear-nonlinear stage, and with the added Poisson variability we convert the stimulus into a spike count.
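A minimal sketch of the three LNP stages just described. The stimulus, the receptive-field weights, and the choice of an exponential as the rectifying nonlinearity g are all illustrative assumptions here, not the exact ones from the lecture:

```python
import math
import random

rng = random.Random(0)

def lnp_spike_count(stimulus, weights, dt=0.1):
    # Linear stage: project the stimulus onto the receptive field.
    linear = sum(a * x for a, x in zip(weights, stimulus))
    # Nonlinear stage g(.): an exponential keeps the rate non-negative
    # ("killing everything that could possibly be negative").
    rate = math.exp(linear)   # firing rate in spikes/s (assumed form of g)
    lam = rate * dt           # mean spike count in the bin
    # Poisson stage: draw a spike count with mean lam (Knuth's algorithm).
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

weights = [0.5, -0.2, 0.1]   # hypothetical receptive field
stimulus = [1.0, 0.3, -0.5]  # hypothetical stimulus frame
print(lnp_spike_count(stimulus, weights))
```

Averaged over many draws, the spike counts come out near lam, while individual draws scatter around it with Poisson (Fano factor 1) variability.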
Okay, if you guys don't have any questions, the last part of my review is the signal-to-noise ratio.
And indeed, as we already saw in the previous review, the SNR is different from the Fano factor, because now we are dividing the signal by the noise instead of the variance by the mean. And another key difference is that we have the standard deviation instead of the variance, so there's no square.
So a weak signal, as we showed earlier, is a signal that spreads very wide. Spreading very wide means that even if we place the decision variable at the black threshold, there is a large area under the curve lying on the opposite side of the threshold. A strong signal means we are very far away from the threshold. So either we move the mean, or we make the distribution narrower, so that there is very little area under the curve on the wrong side of the decision threshold.
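For a Gaussian decision variable, that "area on the wrong side of the threshold" is just the normal CDF evaluated at the threshold (a minimal sketch; the means, widths, and threshold value are made-up numbers for illustration):

```python
import math

def wrong_side_area(mu, sigma, threshold):
    # Mass of N(mu, sigma) falling below the threshold, i.e. the area
    # under the curve on the wrong side for a signal-present trial.
    z = (threshold - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

threshold = 1.0
# Weak signal: wide spread, so a lot of mass crosses the threshold.
print(round(wrong_side_area(2.0, 2.0, threshold), 3))  # 0.309
# Stronger signal, option 1: move the mean farther from the threshold.
print(round(wrong_side_area(4.0, 2.0, threshold), 3))  # 0.067
# Stronger signal, option 2: make the distribution narrower.
print(round(wrong_side_area(2.0, 0.5, threshold), 3))  # 0.023
```

Both fixes shrink the error area, which matches the two options above: increase the mean (the signal) or decrease the spread (the noise).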