0:01 The first one I start with is the Fano factor, because I know a lot of you say you understand it very well, but it is also a very easy and natural segue for us to move to the Poisson probability distribution.
0:18 So the Fano factor is a ratio of the variance over the mean. It just says how spread out a probability distribution is compared to where its center is. And um, a lot of you are confused about the Fano factor... no, a lot of you are confused about the signal-to-noise ratio. Yes. So now the question is: can anyone type in the chat what the difference is between the SNR and the Fano factor? What are the differences?
0:53 Yep.
0:55 The denominator and the numerator are different.
1:21 So uh, the signal-to-noise ratio is actually mu over sigma. There's no square. And a lot of you who had issues in your worksheet, it was exactly because you didn't take the square root. So one more time, remember.
1:55 Okay. And again, for a Poisson neuron, or Poisson trial-to-trial variability, it is very important to remember that the Fano factor is one.
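As a quick numerical check of that point (my own sketch, not from the lecture; numpy assumed), the variance of Poisson spike counts tracks the mean, so the Fano factor sits near one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated spike counts from a Poisson neuron with mean rate lambda = 5
counts = rng.poisson(lam=5.0, size=100_000)

# Fano factor: variance over mean (exactly 1 for an ideal Poisson process)
fano = counts.var() / counts.mean()
print(f"Fano factor ~ {fano:.3f}")  # close to 1
```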
2:12 What is the signal-to-noise ratio for a Poisson neuron, if I provide you that mu = sigma squared = 1?
2:42 Why would you take the square root of mu?
2:48 Well, that's right. But really,
2:57 Yes, Babina had it right.
3:28 Yeah. So now you get it. Remember, take the square root too: sigma is the square root of the variance, and for a Poisson neuron the variance equals mu, so the SNR is mu over the square root of mu, which is the square root of mu. Take the square root of your variance and you'll be fine.
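A sketch of that arithmetic (my illustration, assuming numpy): the SNR of simulated Poisson counts lands on the square root of mu.

```python
import numpy as np

rng = np.random.default_rng(1)

for mu in (1.0, 4.0, 25.0):
    counts = rng.poisson(lam=mu, size=100_000)
    # SNR = mean / standard deviation (note the square root, not the variance)
    snr = counts.mean() / counts.std()
    print(f"mu={mu:5.1f}  SNR ~ {snr:.2f}  sqrt(mu) = {np.sqrt(mu):.2f}")
```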
3:39 And the Poisson is exactly how we put a tuning curve in first. That is, we convert the external variable to a mean firing rate; lambda just gives us what the mean firing rate looks like. The number lambda here defines the mu, and the Poisson randomness defines the variance, and then we have the spread of the spike count. That actually tells you the Poisson distribution only needs mu to define the whole distribution, because its randomness comes from the Poisson form.
4:14 And this comes back to the mathematical definition: the Poisson probability, that is, the probability mass of a Poisson variable with mean spike count lambda, is given by P(k) = lambda^k * e^(-lambda) / k!. You're probably going to use this equation in your midterm, so keep a note or have it handy; that's exactly something we're going to ask you to refer back to.
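For concreteness, here is that mass function written out in code (a sketch of my own, using only the Python standard library):

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """P(K = k) for a Poisson distribution with mean spike count lam."""
    return lam ** k * exp(-lam) / factorial(k)

# Mean spike count of 1 (e.g., the 10 Hz rate observed in a 100 ms bin below)
for k in range(4):
    print(k, round(poisson_pmf(k, 1.0), 4))
```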
4:55 And this is the distribution. So for this particular distribution, if we give you some questions providing this form, what we want you to look for first is: what is the mean firing rate in this Poisson distribution? So for example, from the lecture I had earlier, we had a mean firing rate of 10 Hz, but we are only observing a 100-millisecond bin. The mean firing rate, or rather the mean spike count, for that particular Poisson distribution within the 100-millisecond bin equals one, because it's 10 times 0.1. So it's very important that, first, you know what the mean spike count for the Poisson distribution is, and second, what the sample value is for which we want you to compute the probability.
5:48 So remember to do this: step one, what's the mean? Step two, what's the sample value?
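That two-step recipe, as a self-contained sketch (the helper names are mine, not the lecture's):

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    return lam ** k * exp(-lam) / factorial(k)

def spike_count_probability(rate_hz: float, bin_s: float, k: int) -> float:
    # Step 1: the mean spike count in this bin is firing rate * bin width
    lam = rate_hz * bin_s
    # Step 2: evaluate the Poisson mass at the sample value k
    return poisson_pmf(k, lam)

# Lecture example: 10 Hz rate observed in a 100 ms bin -> lambda = 1
print(spike_count_probability(10.0, 0.1, k=1))  # ~0.368
```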
5:57 And the leaky integrate-and-fire neuron... no, sorry, the linear-nonlinear-Poisson model. So we have the Poisson part; that's what we just reviewed. And the linear part is just the way we convert the stimulus by passing it through the receptive field. That's the linear part, and the nonlinearity shapes the response: we want to convert it to something we can easily add some variability to, and the nonlinearity kills everything that could possibly be negative. So that's the linear and the nonlinear, and why we have both in combination.
6:41 So, excuse me, the sum over a_i x_i is the linear part, and g is the nonlinear kernel. Combining them with the add-on Poisson variability, we convert the stimulus into the spike count.
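A minimal sketch of that pipeline (my own toy example, assuming numpy; the filter weights a_i and the choice of a rectifier as g are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(2)

def lnp_spike_count(stimulus: np.ndarray, weights: np.ndarray) -> int:
    # Linear stage: project the stimulus through the receptive field (sum of a_i * x_i)
    drive = weights @ stimulus
    # Nonlinear stage g: rectify so the mean rate can never be negative
    rate = max(drive, 0.0)
    # Poisson stage: draw a spike count with that mean rate
    return rng.poisson(rate)

weights = np.array([0.5, -0.2, 1.0])   # illustrative receptive field a_i
stimulus = np.array([1.0, 2.0, 0.5])   # illustrative stimulus x_i
print(lnp_spike_count(stimulus, weights))
```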
7:13 Okay, if you guys don't have any questions, the last part of my review is the signal-to-noise ratio.
7:24 And indeed, as we already saw in the earlier review of the Fano factor, the SNR is different from the Fano factor, because now we are dividing the signal by the noise instead of having the variance over the mean.
7:43 And another key difference is that we have the standard deviation instead of the variance. So there's no square.
7:58 So a weak signal, as we showed earlier, is a signal whose distribution spreads very wide. Spreading very wide means that even if we set the decision threshold at the black line, there is a large area under the curve that sits on the opposite side of the threshold. A strong signal means we are very far away from the threshold: either we move the mean, or we make the distribution narrower, so there is very little area under the curve on the wrong side of the um, decision threshold.
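As a closing sketch (my illustration, assuming scipy; the specific means, sigmas, and threshold are made up): the area on the wrong side of the threshold is a normal tail probability, so it shrinks when you move the mean away or narrow the spread.

```python
from scipy.stats import norm

def wrong_side_area(mu: float, sigma: float, threshold: float) -> float:
    # Area under the signal distribution that falls below the threshold,
    # i.e., the fraction of trials that land on the wrong side
    return norm.cdf(threshold, loc=mu, scale=sigma)

threshold = 0.0
print(wrong_side_area(mu=1.0, sigma=2.0, threshold=threshold))  # weak: wide spread, large error area
print(wrong_side_area(mu=3.0, sigma=1.0, threshold=threshold))  # strong: far mean, narrow spread
```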