Quantum Sensor Networks (Lecture 1) by Saikat Guha | International Centre for Theoretical Sciences | YouTubeToText
Video Transcript
...after lunch, trying to keep awake, and to help you continue to do so, we have got a sort of change of gears: something quite exciting coming up, with our speaker from the University of Arizona, Tucson, who is going to talk to us about quantum sensor networks. He is sort of the key man behind building large quantum networks in the US. So let's listen.

Thank you very much, Daniel; thanks, Kasturi, for inviting me here, and thanks to all the other organizers. I'll try my best to do my part in keeping you all awake.
What I have decided to do: I made these slides after I heard both Sai's lectures and Carl's lectures, so you will hear a little bit of repetition in terms of the fundamental tools behind sensing and metrology, but I'll take a more practical bent, where we will be doing some math and actually applying the tools to some problems. My first lecture today will be focused on developing some of those tools and applying them to some problems in photonic sensing, and then we will look at what you can do when you have multiple sensors working together to accomplish a task. In my second lecture we will talk about a very specific application of quantum estimation theory tools: determining the fundamental limits of passive imaging, where you are resolving stars, for example, in astronomy, or trying to resolve emitters in a fluorescence microscopy setting.
My name is Saikat; I'm at the University of Arizona's College of Optical Sciences, and we are located in a city called Tucson, in Arizona. Let's see if... all right, yeah, it works. Tucson is a city about 100 miles south of Phoenix, very close to the Mexico border. The College of Optical Sciences at the University of Arizona, along with the college focused on optics and photonics in Rochester, are the two big institutions in the U.S. where pretty much every single area of optics and photonics is represented, all the way from, say, high-power lasers to nonlinear optics, quantum optics, and Bose-Einstein condensates: every area of optics.
At the University of Arizona, I'll say a few words about the Center for Quantum Networks that Anil pointed out. I will not be talking about networking, but that's a huge focus right now within our quantum information science and engineering effort. These are the faculty members; we are a growing team, and this last year we launched a master's program in quantum information science and engineering. We expect to offer that online, or at least parts of it online, at some point, and I'll keep the organizers posted if there is interest in this community. So let me say a few words about the Center for Quantum Networks, and then I'll dive into the topic for today: quantum metrology and sensing.
This center is funded as part of the National Science Foundation's Engineering Research Center program. The ERC program started in the 1980s, and its objective was to take a technology area that is right at the point where a 10-year-long, highly transdisciplinary push could bring it to a point where the technology is ready for further acceleration by industry and by society. This is the first quantum program funded as part of the ERC; that was about three years ago. It's a full-stack quantum networking effort with the objective of delivering fault-tolerant entanglement among distant locations while serving multiple applications simultaneously. Just like the internet supports many applications, like the Zoom call happening right now, VoIP, and emails, all simultaneously, the quantum network should be supporting entanglement delivery for, say, entangled telescopes, an E91 quantum key distribution session, and maybe a blind quantum computation session, all of those simultaneously.
We have people working all the way from quantum memory physics, to spin-photon interfaces that let the memories transfer their qubits into the photonic domain and vice versa, to quantum error correction to distill entanglement across a single hop, and then doing quantum logic at the memory nodes using nuclear-spin-assisted hyperfine logic on color-center-based quantum memories. Many of these terms may not mean a lot to you, but I'll talk about some of these technologies a little later. Ultimately, once you have long-distance entanglement, we are looking at developing routing algorithms, scheduling algorithms, and network tomography algorithms, and those are people who work on computer network theory; so it's all the way from physics to computer science. If you're interested in learning more about our work in quantum networking, definitely feel free to reach out to me. But with that, I will switch to our topic of discussion today.
To me, this whole field of quantum-enhanced, or quantum-inspired, or quantum-assisted sensing boils down to the fact that light and matter are fundamentally quantum mechanical objects. Even the classical light that we see around us has a quantum state description: it's a zero-mean, phase-insensitive, multimode Gaussian state. That's a fancy way of saying thermal light. But once you cast light in the quantum mechanical language, you certainly have access to some very powerful tools that allow you to give a quantum treatment of the information embedded in that light, and help you understand the best possible precision with which you can measure certain quantities.
Now, what does "best" mean? That word has to be qualified. What is your measure of precision? Are you trying to tell apart a few different states of light? Are you trying to estimate a parameter embedded in a pulse of light? Are you trying to track how a parameter changes in time? Are you trying to do a hypothesis test between one known thing and absolutely anything else? For all of those things I just told you about, and more in the context of communications, there are different quantum information and quantum estimation theoretic tools that tell you, for the parameter you care about, how you should measure that information-bearing light in order to get the best possible precision for that definition of "best". So that's the whole field of quantum information theory applied to photons.
Another thing that I think is very important to understand, and we have heard this in all the talks in the last couple of days, but I still want to stress it: there is no way to evade measurement noise. No matter how you choose to measure your light, there will be a fundamental noise. I'm not talking about excess noise; there is something called shot noise, a quantum-limited noise, even when you are measuring squeezed light: a minimum amount of noise that quantum mechanics says you have to add. To me, the entire field of quantum metrology is the field that looks at how you can engineer that noise to your advantage, perhaps by doing something to the optical-domain information before you subject it to that inevitable noise, positioning the information in the light in a way more favorable to the inevitable noise that is yet to come. That is how you design optimal receivers, or optimal measurement strategies.
So these quantum metrology tools give us the fundamental limits of precision, but as Carl also pointed out in his talk, even though these tools often give you a hint at how you would design a scheme that actually reaches those limits, going from understanding the fundamental limits to finding that measurement strategy is often very difficult. That itself is a science and an art in its own right, and we will see some examples of that today.
Okay, so with that prelude about my philosophy of the field, let me give you an outline of what I'm going to do. You will hear me talk about a whole bunch of bounds. You heard from both Sai and Carl about the Cramér-Rao bound; we'll talk about other Bayesian bounds in estimation theory, the Helstrom bound, the Chernoff bound, the Holevo bound, and all of these different bounds are about understanding the best possible precision for a particular definition of the word "best": you define what you care about. In today's lecture I will start with a primer on estimation theory; I will take a more practical bent, where we will actually be doing some math and some calculations. I'll spend a little bit of time talking about the quantum description of light, and then I will launch into applications: first on individual sensors, and then on networks of sensors. Depending on where we land at the one-and-a-half-hour mark, I might not cross the line right here, I might spill over, or I might leave something for the next lecture; we'll see how far we go.
Okay, so let's go ahead and get started. I'll first start with estimation theory, and I'll put up some pictures of various hikes my group loves to do in the Tucson area. People who have visited my group know that we take them out on hikes too, so if you wish to come to Tucson, we would love to take you out. You can see these cactuses; these are called the saguaro cactus. If you haven't seen them, they grow up to 20 feet or so high, and they live for more than 200 years. In fact, people who are experts on them can look at the different widths of these cactuses along their height and tell which year Tucson got how much rainfall. Absolutely gorgeous scenery.
So let's talk about estimation theory, classical estimation theory. In this very basic, simple construct, X is my parameter of interest, but what you actually see is Y, which is a noisy version of X, and you know p(y|x), perhaps because you know where X comes from and what measurement you have chosen; Y is the output of your measurement. Your job is to estimate X. Let's take what I call the Fisherian point of view, where X is a fixed parameter: it is what it is, but you don't know what it is, and there is no prior probability distribution on it. This is called the frequentist approach. Now, if you really wanted to estimate X, what's the best Y you could hope for? Y equal to X: it would be wonderful if Y were actually X and you could just measure it. But Y is a noisy version of X, so the next best thing you can do is this: for the Y that you just got, you pick that value of x for which the probability distribution p(y|x) is the highest. It seems like the most obvious, natural thing to do.
This estimator of X (I'm using an inverted hat instead of the usual hat, but it's the same thing) is called the maximum likelihood estimator. It's the most common-sense estimator, but it also happens to be the estimator that minimizes the variance of the estimate, so it gives you the best quality estimate of that parameter. Now consider the variance of absolutely any estimator you could build from Y; an estimator is simply a function that turns this Y into your belief and estimate about X. You might have n trials: the same parameter X going through multiple instances of this probabilistic channel to give you the different outcomes y1 through yn. This Y in general could be a vector, while the estimate, in this case, is a scalar parameter that you are estimating.
Absolutely any estimator satisfies this Fisher-information-driven inequality, the Cramér-Rao bound: Var(x_hat) >= [m'(x)]^2 / (n J(x)). You saw some versions of this bound where there was nothing in the numerator, or a one in the numerator, and I believe Sai had covered this bias term. This m(x) is the expectation of the estimator; if m(x) happens to be equal to the value of the parameter, then you have an unbiased estimator, in which case the derivative of m(x) is one, so you get a one in the numerator. n is the number of trials, and J(x) is the Fisher information. Now let's stare at this Fisher information for a moment; you have seen it written in multiple different ways in the last few days. Look at the quantity inside the square bracket: it's the derivative of the log of the probability distribution.
Now, this log of p(y|x): its derivative is called the score function by statisticians. If you get a y, and you change your x, what kind of y is a good y for estimating x? A y whose probability actually changes as you change x, because then you will be able to estimate x. If you have a value of y such that the probability distribution p(y|x) doesn't change too much, it is not giving you much information about x. That's why, if this partial derivative magnitude squared is small, then that value of y is not a good value of y in terms of its ability to give you a good estimate of x. So it seems reasonable that the expected value of that squared quantity, which quantifies the goodness of a particular y for giving you an estimate, with the expectation taken over the probability distribution of Y given the value of x, should be a quantity that tells you, on average, the amount of information you are going to get from one sample of Y about X. Now, obviously, I just asserted it, whereas Carl proved it, and Sai also showed a proof for the quantum case, but that's the intuitive understanding of why the Fisher information does what it does: if it's large, it means your variance shrinks more rapidly as n increases. Another thing that's important here: if you were to plug in the maximum likelihood estimator, then as n increases to infinity this bound is saturated, and I'll talk more about that.
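As an aside (my own illustration, not from the lecture slides), this saturation is easy to see numerically. Here is a minimal sketch, assuming the standard textbook example where Y is Gaussian with unknown mean x and known variance sigma^2: then J = 1/sigma^2, the Cramér-Rao bound for an unbiased estimator from n samples is sigma^2/n, and the maximum likelihood estimator is the sample mean.

```python
import random
import statistics

# For a Gaussian with unknown mean x and known variance sigma^2, the Fisher
# information is J = 1/sigma^2, so the Cramer-Rao bound for an unbiased
# estimator built from n samples is sigma^2/n.  The maximum likelihood
# estimator here is the sample mean.
def cramer_rao_bound(sigma, n):
    return sigma**2 / n          # 1 / (n * J), with J = 1 / sigma^2

def mle_variance(x, sigma, n, trials=20000, seed=1):
    # Monte Carlo estimate of the variance of the sample-mean MLE
    rng = random.Random(seed)
    estimates = [
        statistics.fmean(rng.gauss(x, sigma) for _ in range(n))
        for _ in range(trials)
    ]
    return statistics.pvariance(estimates)

sigma, n = 2.0, 25
print(cramer_rao_bound(sigma, n))     # 0.16
print(mle_variance(1.0, sigma, n))    # close to 0.16
```

The Monte Carlo variance of the MLE lands essentially on top of the bound even at modest n here, which is what the saturation statement promises in the large-n limit.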
Okay, so now there is another view of the world, called Bayesian estimation theory, where you have a prior probability distribution available on X: maybe because you have knowledge of the problem itself, maybe you have made prior measurements that you have converted into a prior probability distribution you can leverage, or maybe the problem setting has been solved in the past and you have some reason to believe you know something about X before you even start to measure it. Now, if you have a prior probability distribution, the quantity you care about minimizing is called the mean squared error (I should have just written MSE; the first M in MMSE stands for minimum). What I'm writing here uses the joint distribution of X and Y: you have p(y|x) already, and p(x) is also given, that's your prior; this quantity is your squared error, and you are just taking its average over the joint distribution. That's called the mean squared error.
Your job is to find the estimator of X that minimizes that mean squared error. That estimator is given by the expected value of X given Y. What do you do to calculate this estimator? If I give you a p(y|x) and a p(x), you apply Bayes' rule to take these two and get p(x|y); I believe you all here know Bayes' rule, so you can go from p(y|x) to p(x|y). If you think about this p(x|y), it is the probability that the parameter's value is x given that you have observed y, so it's a probabilistic description of X given y. If I integrate x times p(x|y), that's the average value of x conditioned on the measurement outcome that you have gotten, and that is the MMSE estimator. You can prove that if you use this as your estimator and plug it in here, the value of the mean squared error you get is the minimum possible value you could have gotten.
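To make the recipe concrete, here is a minimal sketch with assumed toy numbers (a three-point prior on x and a single Bernoulli observation; none of these numbers are from the lecture): apply Bayes' rule to get p(x|y), then take the posterior mean, which is the MMSE estimate.

```python
# Toy Bayesian MMSE estimation: X takes one of three assumed values with a
# prior p(x); a single observation Y is Bernoulli(x).  The MMSE estimator
# is the posterior mean E[X | Y = y], obtained via Bayes' rule.
xs = [0.2, 0.5, 0.8]          # assumed support of the prior
prior = [0.25, 0.5, 0.25]     # assumed prior p(x)

def likelihood(y, x):
    # p(y | x) for one Bernoulli trial
    return x if y == 1 else 1.0 - x

def mmse_estimate(y):
    joint = [likelihood(y, x) * p for x, p in zip(xs, prior)]
    norm = sum(joint)                      # p(y)
    posterior = [j / norm for j in joint]  # p(x | y) by Bayes' rule
    return sum(x * p for x, p in zip(xs, posterior))  # E[X | y]

print(mmse_estimate(1))   # 0.59: observing y = 1 pulls the estimate up
print(mmse_estimate(0))   # 0.41: observing y = 0 pulls it down
```

The prior mean here is 0.5, and each observation nudges the estimate symmetrically toward the outcome it makes more likely.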
Now, even in classical estimation theory, this is actually not easy to calculate for all p(y|x); the estimator can get very complicated. So people came up with bounds on this MMSE, and there are many: the Van Trees bound, the Ziv-Zakai bound, and many others. The one you learn in textbook estimation theory is called the Bayesian Cramér-Rao bound, which is kind of a weird phrase (I don't like it at all: the Cramér-Rao bound is a Fisherian bound, and yet we say "Bayesian Cramér-Rao bound"), but that's what it is called, the BCRB. And why is it called that? If you look at the expression for this J_B, it looks identical to the Fisher information, except that you replace the conditional distribution by the joint distribution everywhere, and you can prove that 1/J_B is a lower bound on the mean squared error. The thing is that this BCRB is not always saturable, meaning there are problems, and I'll show an example, for which you calculate this bound and it's not tight.
Okay, that's all I'll say about classical estimation theory. Now, why care about quantum estimation? As I told you in the beginning, in my view quantum estimation theory, or quantum information theory, is just peeking a little bit inside where this p(y|x) comes from. This p(y|x), the probabilistic description of a noisy channel, actually comes from a combination of two things: a quantum description of the information-bearing light (it doesn't have to be light, it could be matter), that is, a quantum state rho(x) carrying the parameter you care about, and a POVM description of the measurement you are using to generate an estimate of that parameter. So p(y|x) is the trace of rho(x) times Pi_y, where Pi_y is the POVM operator corresponding to the outcome y. All of classical estimation theory, from, say, Van Trees' book, deals with p(y|x). In quantum estimation theory we take a peek inside, and we say: well, rho(x) is something we can write down, because we understand how to write down the quantum state of information-bearing light, but oftentimes we don't want to make a guess about the measurement. Quantum estimation theory tools like the quantum Fisher information can give you a Fisher-information-like bound without even having to talk about that measurement, and it automatically optimizes over all possible measurement choices. This is a common feature across all of these quantum information and quantum estimation bounds: the Holevo bound, the quantum Cramér-Rao bound, the Helstrom bound for minimum-probability-of-error discrimination, and so forth.
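The Born-rule decomposition p(y|x) = Tr[rho(x) Pi_y] is worth seeing once in code. A minimal sketch, with an assumed qubit family and a fixed computational-basis projective measurement (my own example, not from the slides):

```python
import numpy as np

# p(y|x) = Tr[rho(x) Pi_y] for an assumed qubit family: Bloch vector of
# length r rotating in the x-z plane, measured in the computational basis.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho(x, r=0.8):
    return 0.5 * (np.eye(2) + r * np.sin(x) * sx + r * np.cos(x) * sz)

pi0 = np.diag([1.0, 0.0]).astype(complex)   # POVM element for outcome 0
pi1 = np.diag([0.0, 1.0]).astype(complex)   # POVM element for outcome 1

x = 0.3
p0 = float(np.real(np.trace(rho(x) @ pi0)))
p1 = float(np.real(np.trace(rho(x) @ pi1)))
print(p0, p1)   # a valid distribution: p0 + p1 = 1
```

Once you have this p(y|x), everything from the classical half of the lecture (score function, Fisher information, Cramér-Rao bound) applies to it unchanged; the quantum question is which choice of POVM makes that classical Fisher information largest.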
So, quantum estimation theory, again from the Fisherian point of view where you don't have any priors: the Fisher information is replaced by the quantum Fisher information, this K(x). I'm not putting the bias term here, but there is a corresponding bias term you can put in. You want to write something like the classical Fisher information; you have seen derivations, Carl showed you one, for example. If you look at that quantity, the d/dx of log p(y|x), you want something that replaces this log of p. Classically, L' is p'/p; but I cannot divide by a density operator, so instead I take d rho(x)/dx on one side, and on the left-hand side, instead of just a multiplication by the probability, for technical reasons you have to symmetrize, and write the symmetric logarithmic derivative equation: d rho(x)/dx = (L(x) rho(x) + rho(x) L(x)) / 2. You can actually solve for the SLD, the symmetric logarithmic derivative operator, which takes the role of the score function, and write it in this very interesting way (I think Sai had derived this): L(x) = 2 times the integral from 0 to infinity of e^(-rho(x) z) (d rho(x)/dx) e^(-rho(x) z) dz. So if you look at the quantum Fisher information, K(x) = Tr[rho(x) L(x)^2], notice that I never even specified the measurement choice.
But the point of the QFI is that this K(x) will always be greater than or equal to J(x), no matter what measurement you pick; it is an upper bound on the classical Fisher information. And not just that: if you pick your measurement, this Pi_y, to be a projective measurement defined by the eigenbasis of this SLD operator, which is implicitly defined by the symmetrized equation, or explicitly by that integral formula, and plug it in, you will get a p(y|x) for which, if you were to evaluate the classical Fisher information, you would get K(x). Meaning: you can achieve the quantum Fisher information with a measurement choice given by the eigenbasis of the SLD operator. You have heard this previously, but I just wanted to instill this important point. Another thing to know is that this measurement choice is not the only measurement that will achieve the QFI; there are other measurements that will also achieve it, as you will see in some examples I'll talk about in passive imaging contexts.
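Here is a small numerical sketch of that whole pipeline (my own illustration, not from the slides): solve the SLD equation d(rho)/dx = (L rho + rho L)/2 in the eigenbasis of rho, then compute K(x) = Tr[rho L^2]. The assumed family is a qubit whose Bloch vector of length r = 0.8 rotates in the x-z plane, for which the known answer is K = r^2, independent of x, so the printed number serves as a check on the construction.

```python
import numpy as np

def qfi_via_sld(rho, drho):
    # Solve d(rho)/dx = (L rho + rho L)/2 for the SLD L in rho's eigenbasis:
    # there, L_ij = 2 (drho)_ij / (lam_i + lam_j).  Then K = Tr[rho L^2].
    lam, v = np.linalg.eigh(rho)
    d = v.conj().T @ drho @ v
    L = np.zeros_like(d)
    for i in range(len(lam)):
        for j in range(len(lam)):
            s = lam[i] + lam[j]
            if s > 1e-12:                 # skip the kernel of rho
                L[i, j] = 2.0 * d[i, j] / s
    L = v @ L @ v.conj().T
    return float(np.real(np.trace(rho @ L @ L)))

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

r, x = 0.8, 0.3                           # assumed Bloch radius and parameter
rho  = 0.5 * (np.eye(2) + r * np.sin(x) * sx + r * np.cos(x) * sz)
drho = 0.5 * (r * np.cos(x) * sx - r * np.sin(x) * sz)

print(qfi_via_sld(rho, drho))             # ~0.64 = r**2
```

Diagonalizing rho turns the operator equation into a simple elementwise division, which is why this recipe is the standard numerical route to the SLD.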
Now, just like the classical case, there is also a quantum version of Bayesian estimation theory. In Bayesian estimation theory, you want to minimize the mean squared error, just like classically, and in this case you can write down an explicit expression for the MMSE. You define an operator B that looks like the SLD operator, except that instead of rho I have replaced it by the average of rho with respect to the prior probability distribution p(x). There are three operators here: Gamma_0 is the average of rho(x) over p(x), while Gamma_1 and Gamma_2 are moment operators: Gamma_1 is the integral of x p(x) rho(x) dx, and Gamma_2 is the integral of x^2 p(x) rho(x) dx. If you choose the eigenvectors of this operator B as the measurement, you will get a p(y|x) which, if you go back, put it into the MSE expression, evaluate the MMSE estimator, and put it back in, gives you the minimum possible mean squared error that quantum mechanics allows for the given rho(x) and p(x). This expression was derived by Personick in a 1971 paper in the IEEE Transactions on Information Theory; it's a beautiful result. Again, people have started looking at multi-parameter versions of this, and a multi-parameter Bayesian bound was developed recently, but for a single scalar parameter, in principle, if you solve this operator equation to get these Gammas, then find B from that equation, you can put it into this formula and derive the MMSE. We have used that in the context of the passive imaging work that I'll talk about in my next lecture.
Okay, that's all I'm going to say about this; this slide is the overview of all the estimation theory tools, but let's remember this expression for the Fisher information, because we are going to use it a little bit. So let's play with this estimation a little more. As I said, we define this log-likelihood function L(y), and statisticians like to call its derivative the score function. This score function really tells you: if it is small, that value of y is not very sensitive to changes in x. Let's massage this equation a little more. I defined the Fisher information as the expected value of the square of this L' on my previous slide; that is my definition of the Fisher information. So I'll take that definition, and now let's observe, looking at the P' in the numerator here, that L' is just P'/P. If you take the integral of P' with respect to y (and I'm being sloppy here, assuming I can move the derivative past the integral, which you can't always do, but let's say I can), the derivative of the integral of p(y|x) is obviously zero, because the probability distribution integrates to one; similarly, the double derivative is also zero, because that integral is one.
So if I write the expected value of this L' function, treating y as a random variable, and substitute the value of L' from here, P'/P, the P cancels out, and the P' integral is zero from before. So the expected value of L' is 0, and hence the expected value of the square of L' is nothing but the variance of L': the variance of a random variable is the expected value of its square minus the square of its mean, and the mean is zero in this case. So this is another formula that will sometimes come in handy; let's remember that as well. That's the second way to write the Fisher information, and I'm going to derive a third way, and after that we'll do two examples.
So let's do a little more math here, very simple math. Take that L'(y|x) function I defined earlier; remember, L' is P'/P. Let's take one more derivative of it with respect to x. What do you get? Apply the quotient rule: (P'' P - P'^2) / P^2. So you get P''/P from the first term, minus P'^2/P^2, which is just L'^2. Now take the expected value of both sides, with respect to this p(y|x) distribution. In the first term, one power of P cancels out and you get a P'' integral, and I just showed on the previous slide that this is zero. So you are just left with E[L''] = -E[L'^2]. What we just derived is that the Fisher information J is the negative expectation of the second derivative of L(y|x), the log-likelihood function. This gives us the third way of writing the Fisher information, and depending on the problem context, you can use the first, the second, or the third way; they're all the same thing. Okay, just as a summary: if I give you a p(y|x) and tell you to derive the Fisher information, you can either take the expectation of the square of this score function, take the variance of L', or take the negative expectation of L''; all of them are the same thing.
All right. What if X is not the quantity you are interested in? We will see some examples in quantum metrology where this happens: X is the parameter that gives rise to Y, but what you really care about is a new parameter Z such that X is a function of Z, X = f(Z); let's assume f is a differentiable function. It is Z that you actually care about estimating, so you want to estimate Z (that's why I'm calling it z-hat of y), but you know the functional relationship between X and Z, and you can invert it to write Z = f^(-1)(X). Have you heard of Jacobians in calculus? You have probably used them in a different context, but this is basically the idea of a Jacobian: you can write down the Fisher information for Z from the Fisher information of X, with X substituted as f(z), but you need an additional correction term, the square of f'(z): J_Z(z) = J_X(f(z)) [f'(z)]^2. We will actually use this in some examples that we will derive together. Okay, so let's remember this: that is how you estimate a function of the parameter.
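A quick numeric check of that Jacobian rule (my own illustration, with an assumed map): take Y as Bernoulli(x) with x = f(z) = sin^2(z), a probability-versus-phase map of the kind that shows up in interferometric examples. The Bernoulli Fisher information J_X(x) = 1/(x(1-x)) is derived in the worked example that follows in the lecture, and f'(z) = 2 sin(z) cos(z), so J_Z(z) = J_X(f(z)) f'(z)^2 collapses to the constant 4.

```python
import math

def j_x(x):
    # Fisher information of a Bernoulli(x) observation
    return 1.0 / (x * (1.0 - x))

def j_z(z):
    # Reparameterized Fisher information: J_Z(z) = J_X(f(z)) * f'(z)^2
    x = math.sin(z) ** 2                    # x = f(z) = sin^2(z)
    fprime = 2.0 * math.sin(z) * math.cos(z)
    return j_x(x) * fprime ** 2

# The denominator x(1-x) = sin^2(z) cos^2(z) cancels f'(z)^2 / 4,
# so J_Z = 4 for every z strictly between 0 and pi/2.
for z in (0.3, 0.7, 1.2):
    print(j_z(z))                           # ~4.0 each time
```

That the information about the phase-like parameter z is flat, even though the information about x blows up near the endpoints, is exactly the kind of effect the correction factor is there to capture.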
Okay, so let's do a couple of examples. The first example I'll do I think you heard in both Carl's talk and, I believe, in Sai's talk as well. Say you have a box where X gives rise to Y, and Y only takes two values: Y is either one or zero, and it is 1 with probability x, and 0 with probability 1 - x. So X, the parameter that you care about, is nothing but the probability of Y taking the value 1 or 0. I want to find the Fisher information; do you want to try this calculation? Okay, let's start with writing a p(y|x). Any ideas on how we might compactly write this probability distribution of Y given X? You can do it in many different ways, but any ideas of how you might try to do it, such that you can take the derivatives and so on to calculate the Fisher information? I have to first have a p(y|x) in order to go from there to L' and all that.
[Audience: a cosine function?] What cosine do you have in mind? It might work, but how would you fold x into it? You need something that is going to give you x or 1 - x, depending on the value of y you substitute, with the right probability. Let me give you one; actually there are many functions that work, but let me pick a simple one that is easy to work with: x^y (1 - x)^(1-y). Why does this work? Just stare at it: if I put in y = 1, it's very trivial, the second factor becomes 1 and I get x; if I put y = 0, I get 1 - x. Very simple. Okay, so in this case we have to pick one way or the other to derive it, so we'll take the log-likelihood function: log p(y|x) = y log x + (1 - y) log(1 - x). Then L' you can get by just taking a derivative, very easy, and L'' you can get by differentiating one more time, and here I'm just using the third way of calculating the Fisher information: the negative expectation of the second derivative of L.
And if you take the expectation of this, how do you take the expectation? You have to take it over that probability distribution, right? So what you want to do is this: with probability x you put in y equal to 1, meaning you put y equal to 1 here and multiply by x; then put y equal to 0 and multiply by 1 minus x. Solve a couple of lines and you will get this answer: 1 over x times (1 minus x). Okay.
So yes, you should try this. This is a very simple example, and it is what we are going to use to derive the Heisenberg limit and some very interesting quantum metrology calculations; we will do probably the simplest one next. Again, I would like you to remember this, so let me write the formula down so that we can use it in a future calculation: the J of x here is 1 over x times (1 minus x).
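The derivation he asks you to try can be checked numerically. A minimal sketch in Python (the helper name is my own, not from the lecture): it takes the analytic second derivative of the log-likelihood for p(y|x) = x^y (1-x)^(1-y), averages it over y, and compares the result against 1/(x(1-x)):

```python
import math

def bernoulli_fisher_info(x):
    """J(x) = -E[l''(Y; x)] for p(y|x) = x**y * (1 - x)**(1 - y)."""
    # l(y; x)   = y*log(x) + (1 - y)*log(1 - x)
    # l''(y; x) = -y/x**2 - (1 - y)/(1 - x)**2
    l2_y1 = -1.0 / x**2           # second derivative when y = 1 (probability x)
    l2_y0 = -1.0 / (1.0 - x)**2   # second derivative when y = 0 (probability 1 - x)
    return -(x * l2_y1 + (1.0 - x) * l2_y0)

for x in (0.1, 0.25, 0.5, 0.9):
    assert math.isclose(bernoulli_fisher_info(x), 1.0 / (x * (1.0 - x)))
```

The expectation collapses to x*(-1/x^2) + (1-x)*(-1/(1-x)^2) = -1/(x(1-x)), exactly the two-line calculation mentioned in the lecture.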
The other one is a Gaussian distribution. Let's say Y comes from a Gaussian distribution whose mean is unknown and whose variance is sigma squared. If you calculate the J of x from this, taking the likelihood function and using the same formula, minus the expectation of L double prime, you get 1 over sigma squared. Now let's look at it and see that it makes sense: if the Gaussian distribution has an unknown mean and you are getting a random sample from it, then when the variance of the distribution is very small, that y tells you about x, the mean value, much better than a fat distribution with a big variance would. So it makes sense that if you have a small variance, your Fisher information about X will be high. You can calculate this also very easily.
The third one, which we will not use but is interesting to note: what if the mean is known and the variance is unknown? Here the Fisher information comes out to 1 over 2 x squared; again it goes as one over the square of the variance in this case. But the main difference you see between these two is that the Fisher information in this case is a function of the parameter itself. This is another annoying thing about Fisherian estimation theory which we unfortunately have to live with: the estimation precision for a parameter oftentimes depends on the parameter itself, so the true value of the parameter also determines how well you can estimate it. But not always; there are situations where it does not appear in J of x.
This one I am going to use, so I am going to write it down: the Poisson distribution. Have you heard of the Poisson distribution? Here Y takes values over the integers, and X is a scalar real number between 0 and infinity. I get a random sample y and I want to estimate the mean of the distribution, X. My p of Y given X is a Poisson distribution, e to the minus x times x to the y over y factorial, and the J of X for this is given by 1 over X. We will be needing this in some examples.
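All three of these results can also be verified by brute force. A hedged sketch (standard library only; `fisher_info_mc` is my own name): it estimates J(x) as the second moment of the score, E[(d/dx log p(Y|x))^2], by Monte Carlo, for the Gaussian-mean and Poisson-mean models:

```python
import math
import random

random.seed(0)

def fisher_info_mc(score, sampler, x, n=100_000):
    """Monte-Carlo estimate of J(x) = E[(d/dx log p(Y|x))^2]."""
    return sum(score(sampler(x), x) ** 2 for _ in range(n)) / n

# Gaussian with unknown mean x, known variance sigma^2: score = (y - x)/sigma^2
sigma = 2.0
j_gauss = fisher_info_mc(lambda y, x: (y - x) / sigma**2,
                         lambda x: random.gauss(x, sigma), x=1.0)

# Poisson with unknown mean x: score = y/x - 1
def poisson_sample(mean):
    # Knuth's multiplication method; fine for small means
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

j_pois = fisher_info_mc(lambda y, x: y / x - 1.0, poisson_sample, x=4.0)
print(j_gauss, j_pois)   # both close to 0.25 (1/sigma^2 and 1/x respectively)
```

The parameter values are chosen so that both answers are 0.25, which makes the two analytic formulas easy to eyeball against the simulation.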
I suggest you try out all these examples yourself; they are very easy. You can take a picture, or, since these lectures are recorded, you can download the slides and work the examples out.
Okay, so a couple of important points about estimators. I talked about Fisher information; you know how to calculate Fisher information from p of Y given X, the whole mechanism is there. I told you about the maximum-likelihood estimator, and I told you about the MMSE estimator, the minimum mean-squared-error estimator, which is the expected value of X given Y: if I give you p of Y given X and also give you p of X, you calculate p of X given Y, and its expectation is your optimal estimator in the Bayesian setting. In the Fisherian setting there is no p of X; that is the main difference between Fisherian and Bayesian. And there the maximum-likelihood estimator is simply picking the value of x for which the likelihood function is the highest.
For Bayesian estimation theory there is also a much easier estimator that people often use, because the integral in the MMSE estimator is sometimes hard to calculate. In the Bayesian setting you have to somehow use the information in the prior, right? So what you can do is, just like the maximum-likelihood estimator, pick the value of x for which a probability distribution is highest; but instead of picking the value of x for which p of Y given X is the highest, you pick the value of x for which the joint distribution is the highest. This is called the MAP estimator, the maximum a posteriori probability estimator.
Now, by the way, MMSE is obviously the best, because it achieves the minimum mean squared error, but the MAP estimator is much easier to write down: if I get a value of y, I can go ahead and look up a probability distribution; I do not have to apply Bayes' rule or do any integrals. But the very interesting thing is that when n becomes large, the mean squared errors of all three of these estimators approach the expected value of one over the Fisher information, taken under the prior probability distribution of X (this is all the Bayesian case). So even in the Bayesian case, whether you use ML, meaning you completely ignore the prior, or you use MAP or MMSE, it does not matter which one you choose: all of their mean squared errors will saturate to the expected value of one over the Fisher information.
And why is this true? The reason is that as n goes to infinity, your conditional distribution of the true value of the parameter, given the measurements you are observing, converges to a Gaussian distribution whose mean is the true unknown value x0 of the parameter and whose variance is 1 over n times J of x0. So what is happening is that the posterior distribution of X, as you get more and more samples, behaves like a Gaussian whose mean moves toward x0 and whose variance shrinks as 1 over n times J of x0, the Fisher information evaluated at the true value x0. That is why, when n becomes large, you can completely ignore the prior and it will still work. By the way, this expectation of one over J of x is what people often call the expected Cramér-Rao bound, or ECRB. I talked about the BCRB; remember, that was the potentially loose lower bound on the minimum mean squared error. This is the expected Cramér-Rao bound, which is tracked by the maximum-likelihood estimator. So let's see one example.
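This saturation is easy to see in a small simulation. A toy sketch (my own setup and parameter choices, not the figure from the book): draw x from an exponential prior with mean mu, draw n Poisson(x) samples, take the ML estimate (the sample mean), and compare its mean squared error against the ECRB, which here is E[x]/n since J(x) = 1/x per sample:

```python
import math
import random

random.seed(1)
mu = 2.0          # mean of the exponential prior p(x)
trials = 5_000

def poisson_sample(mean):
    # Knuth's multiplication method; fine for small means
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

for n in (1, 10, 100):
    sq = 0.0
    for _ in range(trials):
        x = random.expovariate(1.0 / mu)                      # x ~ Exp, mean mu
        x_ml = sum(poisson_sample(x) for _ in range(n)) / n   # ML = sample mean
        sq += (x_ml - x) ** 2
    mse = sq / trials
    ecrb = mu / n    # E[1/(n J(x))] = E[x]/n, since J(x) = 1/x per sample
    print(n, mse, ecrb)
```

In this particular model the ML estimator is unbiased with conditional variance exactly x/n, so its Bayesian MSE matches the ECRB at every n; the content of the theorem is that MAP and MMSE also converge to that same value as n grows.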
So this is one example I stole from Van Trees' book, and by the way, if you are looking for a textbook on classical estimation theory, this is my favorite. There are many, many good books, but I learned all of my estimation theory as a graduate and undergraduate student from Van Trees' book. In this example he takes a Poisson distribution, just like I wrote over here, and he takes this prior distribution, a generalization of the exponential distribution with parameters a and b: when you set a to 1 and b to 1 over mu, you get the exponential distribution.
All right, so what he plots here: on the x-axis is the number of samples, and on the y-axis the root mean squared error, the mean squared error with a square root. See what is happening: the stars are the simulated values for maximum likelihood, where you are not even using the prior information, and as expected the ECRB tracks the ML estimator's performance. When n becomes high, the ECRB and the performance of the MAP and MMSE estimators, the circles and the pluses, all converge to the same thing, which is given by the ECRB we saw on the previous slide. Then if you look at this black line, this is the actual minimum mean squared error, numerically computed, and the pluses are the simulations. See that the MAP estimator, even though it is much easier to calculate than MMSE, is only a little bit worse than MMSE. So the MAP estimator is pretty good: it is much easier to calculate, since you do not have to do the integral and the inversion to get p of x given y. And this is that BCRB, the Bayesian Cramér-Rao bound I talked about. See that it is a loose bound: it is a bound on the MMSE, but not a good bound in this case, because there is no estimator that will attain it; the minimum mean squared error sits above it.
All right. For Fisherian quantum estimation theory I am only going to make a few notes, because you have heard a lot of it in Carl's talk and Sai's lecture. A couple of notes. The SLD eigenvectors attain the QFI, and whenever I say "attain the QFI", what I actually mean is this: take that SLD operator, the L operator I showed, calculate its eigenvectors, and use them as a projective measurement on rho of x; you get a p of Y given X; calculate the classical Fisher information J of x from it, and it will equal the quantum Fisher information K of x. That is what I mean by attaining the quantum Fisher information: the CFI of this measurement equals the QFI.
And the SLD eigenvector measurement can depend on the parameter itself. This is again a funny thing that happens with the SLD measurement: its eigenvectors themselves can depend on the parameter. So how do you deal with that? There is a fairly recent paper that addresses this issue. Say I have n copies of my density operator, the state I want to measure. What you do is calculate the SLD operator, and when you start making measurements, you take the first square root of n copies of the density operator and measure them with absolutely any measurement you want that has a positive classical Fisher information. You use that measurement's output to get a pre-estimate of the parameter X. Then you take that pre-estimate and plug it into your SLD eigenvector expression, which can depend on X, and use those SLD eigenvectors to measure the remaining n minus square root of n copies of rho. You can prove that, as n goes to infinity, even though your SLD projective eigenvectors depend on X, you will end up achieving the QFI at the a priori unknown value of x. This is the two-stage measurement that was developed a few years back; a very, very interesting result.
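The same two-stage logic has a purely classical analog, which may make the quantum version feel less mysterious. A hedged sketch (entirely my own toy model, not the paper's construction): each "measurement" thresholds a Gaussian sample Z ~ N(x, 1) at a settable level c, and the Fisher information of the binary outcome is maximized when c equals the unknown mean x. A crude fixed threshold on the first sqrt(n) samples gives a pre-estimate, which then sets the threshold for the rest:

```python
import math
import random
import statistics

random.seed(2)
N01 = statistics.NormalDist()   # standard normal, used to invert click fractions

def two_stage_estimate(samples, c0=0.0):
    """Threshold-detector analog of the SLD trick: a parameter-independent
    setting c0 on the first sqrt(n) samples gives a pre-estimate, which then
    sets the (parameter-dependent) near-optimal threshold for the rest."""
    n = len(samples)
    m = max(2, int(math.sqrt(n)))
    # Stage 1: any fixed setting with positive Fisher information.
    frac = sum(z > c0 for z in samples[:m]) / m
    frac = min(max(frac, 1.0 / (m + 1)), m / (m + 1))   # keep inv_cdf finite
    x_pre = c0 - N01.inv_cdf(1.0 - frac)   # invert P(Z > c0) = 1 - Phi(c0 - x)
    # Stage 2: threshold the remaining samples at the pre-estimate.
    rest = samples[m:]
    frac2 = sum(z > x_pre for z in rest) / len(rest)
    frac2 = min(max(frac2, 1e-4), 1.0 - 1e-4)
    return x_pre - N01.inv_cdf(1.0 - frac2)

x_true, n, trials = 1.3, 4000, 300
sq = 0.0
for _ in range(trials):
    s = [random.gauss(x_true, 1.0) for _ in range(n)]
    sq += (two_stage_estimate(s) - x_true) ** 2
var = sq / trials
print(var, math.pi / (2 * n))  # empirical variance vs. the c = x optimum pi/(2n)
```

The variance approaches pi/(2n), the value attained when the threshold sits exactly at the true mean, even though the opening threshold c0 = 0 knew nothing about x; the sqrt(n) samples "wasted" in stage 1 cost nothing asymptotically, which is precisely the structure of the quantum two-stage argument.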
Okay, another thing I mentioned: QFI-attaining measurements are not unique. And this expression is something I am going to use a couple of times, so I will not derive it, but I urge you to try to derive it: if your rho of X is a pure state, meaning rho of X is psi of X ket, psi of X bra, you can write a very nice compact expression for the quantum Fisher information in terms of the inner product of the derivative of psi of X with respect to X with itself, and the magnitude squared of the inner product of psi and psi-dot. We will be using it in some calculations you will do.
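For reference, the pure-state formula he is pointing at is usually written K(x) = 4(⟨∂ₓψ|∂ₓψ⟩ − |⟨ψ|∂ₓψ⟩|²). A quick numerical sketch (the example is my own choice: a phase x imprinted on a coherent state, truncated in the Fock basis, with the derivative taken by central finite differences; the known answer is K = 4|α|²):

```python
import cmath
import math

def coherent(alpha, x, dim=60):
    """Fock-basis amplitudes of |alpha e^{ix}>, truncated at dim photons."""
    vec, amp = [], cmath.exp(-abs(alpha) ** 2 / 2)   # n = 0 amplitude
    for n in range(dim):
        vec.append(amp)
        amp *= alpha * cmath.exp(1j * x) / math.sqrt(n + 1)
    return vec

def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

def pure_state_qfi(alpha, x=0.3, eps=1e-5, dim=60):
    """K(x) = 4(<dpsi|dpsi> - |<psi|dpsi>|^2), derivative by central differences."""
    psi = coherent(alpha, x, dim)
    plus, minus = coherent(alpha, x + eps, dim), coherent(alpha, x - eps, dim)
    dpsi = [(a - b) / (2 * eps) for a, b in zip(plus, minus)]
    return 4 * (inner(dpsi, dpsi).real - abs(inner(psi, dpsi)) ** 2)

print(pure_state_qfi(2.0))   # close to 4 * |alpha|^2 = 16
```

Analytically, ⟨∂ψ|∂ψ⟩ = ⟨n²⟩ = N² + N for a Poisson photon-number distribution and |⟨ψ|∂ψ⟩|² = N², so K = 4N, which the finite-difference computation reproduces.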
Okay, so let's now take a break from estimation theory; we will start looking at the quantum description of light, and then we will go from there. Now, in my research group, when my students go on a hike they have this obsession, I don't know why they do it: they look for the perfect psi-shaped cactus and take a picture of it. We have a repository in our group Google Drive of all these psi-shaped cacti. But I don't know where this one ranks in that list.
All right, let's start with a pulse of laser light. I am going to assume no quantum mechanics here; I know you all have various levels of quantum mechanics, and this is the simplest picture of a laser light pulse. First, let's define a mode. A mode is just a shape, a shape of a field; it's like a bucket in which you put some water. So I define a mode, in this case a flat-top mode, and the mode functions will always be normalized to one: the integral of the magnitude squared of phi of t from 0 to T is one. Once I define a mode, the only things you need to fully specify the quantum state of an ideal laser light pulse are two: an amplitude and a phase theta, the phase being measured with respect to some chosen phase reference. Together they form one complex number alpha: square root of N times e to the j phi. Here I am showing a coherent state schematically, and what is buried, what I am not showing, is the oscillatory portion: the mode shape over here has to be multiplied by some e to the j omega-naught t. If that does not change, I do not have to worry about writing it. In this blue blob I am showing just the noise ball around the oscillatory component: if I try to measure a quadrature, I will get some noise; I will come to that in a second. But keep this picture in mind: once I specify a mode, the only thing you need to specify a laser light pulse is one complex number alpha, and in quantum optics we call that a coherent state of that mode.
In quantum optics, the most general way to talk about light is to first identify what set of orthogonal modes you want to talk about, across the three degrees of freedom: space, time, and polarization (or space, frequency, and polarization; time and frequency are just conjugates of each other). So one defines orthogonal buckets in these three variables, and then one has to talk about the quantum state of this collection of modes. If you are dealing only with laser light, all you have to do is specify one complex number for each of those buckets. If you are talking about statistical light, thermal light for example, then you have to describe not just a complex number for each bucket but a probability distribution over those complex numbers. There is this whole field of statistical optics that deals with coherence theory, and all they are doing is this problem: playing with probability distributions over complex numbers on orthogonal modes.
Okay, so that's the coherent state. If I were to detect a coherent state with an ideal photon-counting detector, with infinite bandwidth, no dark clicks, and unity quantum efficiency, I am going to get a Poisson point process. What is a Poisson point process? How many of you have heard of a point process before, perhaps from a course on queueing theory? Probably not. Okay, so let me describe what a Poisson point process is; it is a very simple concept. You start observing; your light pulse starts here.
If you were to feed this into an ideal detector, you would see arrivals, photon clicks if you want to call them that, arriving at a rate lambda given by the magnitude squared of the field, which in this case I am just taking to be a flat top, so it is E squared, where E is the value of the field. A constant rate. But it also has this property: if I slice the time from 0 to T into tiny segments, say of length tau, then for every segment the probability of there being a click, an arrival, in that segment is lambda times tau, and the probability of no click is one minus lambda times tau. And the probability of a click appearing in the next time bin is independent of whether a click occurred in the previous time bin. That's all: every little time bin has a probability of a click or no click, independent from bin to bin.
That is all you need to derive the fact that if you count the number of clicks over the entire duration of the pulse, the count has a Poisson distribution with mean given by the mean photon number, which is the integral of the rate (clicks per second) over the time, so the unit is photons. So N is E squared times T, and in this notation N is magnitude alpha squared. Okay, so that is the Poisson point process. As you can see, the rate of arrival is driven by the intensity, in photon units, of the coherent state, but it is a random process: if I prepare identical laser light pulses and do this experiment repeatedly, each time I get a different click pattern, but each time the click pattern comes from that same Poisson point process. So oftentimes I am going to use this notation: a coherent state alpha going into a detector generates a random variable that has a Poisson distribution with mean magnitude alpha squared.
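The bin-slicing argument above can be simulated directly. A toy sketch (the parameter values are my own choices): flip an independent coin with success probability lambda*tau in each tiny bin and check that the total click count behaves like a Poisson random variable with mean N = lambda*T, in particular that its variance equals its mean:

```python
import random

random.seed(3)
lam, T, tau = 5.0, 1.0, 1e-3     # click rate, pulse duration, bin width
bins = int(T / tau)
trials = 2_000

counts = []
for _ in range(trials):
    # each tiny bin clicks with probability lam * tau, independently of the rest
    counts.append(sum(random.random() < lam * tau for _ in range(bins)))

mean = sum(counts) / trials
var = sum((k - mean) ** 2 for k in counts) / trials
print(mean, var)   # both close to N = lam * T = 5, as for a Poisson distribution
```

Strictly the per-trial count is binomial(bins, lam*tau), which converges to Poisson(lam*T) as the bins shrink; that limit is exactly the derivation sketched in the lecture.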
Now, with this measurement there is no way for me to measure the phase phi, right? Because no matter what phi is, I am always going to get the Poisson point process with the same exact rate. So direct detection cannot measure the phase of alpha. To make the phase of alpha measurable, there are two things I must introduce. One is interference: you take a beam splitter. A bulk beam splitter looks like a piece of glass, a cube; you can also buy fiber beam splitters that look like couplers, where two fibers come together, meet each other, and go out. A beam splitter has two inputs and two outputs and is defined by two quantities: one is the transmissivity, the other is a phase. I am going to take eta as the transmissivity, which is the fraction of the energy at this input that goes out at this output; meaning, if the input has some energy, an eta fraction of it goes this way and a one-minus-eta fraction goes this way. The phase is a number between 0 and 2 pi.
A beautiful property of coherent states is that if I interfere two coherent states on a beam splitter, at the output I get two coherent states that are completely independent of each other, in a tensor product, and their amplitudes are given by this equation: a unitary matrix multiplication, with beta 1, beta 2 as a column vector equal to this unitary matrix times alpha 1, alpha 2. The convention I am going to stick to is this one, for the later calculations.
So for example, take alpha 1 equal to alpha, alpha 2 equal to minus alpha, phi equal to 0, and a 50/50 beam splitter. Then I get alpha plus minus-alpha over root 2 over here, which is 0, and alpha minus minus-alpha over root 2, which is square root of 2 times alpha. This just shows how you can get fully constructive versus fully destructive interference; it is well known how you can interfere two laser light pulses.
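With the convention above (transmissivity eta, phase phi), the output amplitudes follow from a 2x2 unitary. A small sketch; the exact matrix convention varies by author, so take this as one consistent choice that reproduces the 50/50 example from the lecture:

```python
import cmath
import math

def beam_splitter(a1, a2, eta, phi=0.0):
    """Output coherent-state amplitudes for one common unitary convention
    (chosen to reproduce the lecture's 50/50 example):
        b1 = sqrt(eta) a1 + sqrt(1 - eta) e^{ i phi} a2
        b2 = sqrt(1 - eta) e^{-i phi} a1 - sqrt(eta) a2
    """
    t, r = math.sqrt(eta), math.sqrt(1.0 - eta)
    b1 = t * a1 + r * cmath.exp(1j * phi) * a2
    b2 = r * cmath.exp(-1j * phi) * a1 - t * a2
    return b1, b2

alpha = 1.5
b1, b2 = beam_splitter(alpha, -alpha, eta=0.5)   # 50/50, inputs alpha and -alpha
print(abs(b1), abs(b2))   # 0 (destructive port) and sqrt(2)*alpha (constructive)
```

Note that energy is conserved: |b1|^2 + |b2|^2 equals |a1|^2 + |a2|^2, as it must for any unitary choice.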
You can also do fancier things. For example, what if I choose alpha 2 to be some complex number beta divided by square root of 1 minus eta, the phase to be zero, and the transmissivity to be, say, 0.999, something very close to one? Substitute this into the first equation and let's see what we get. Put alpha 2 over here: the square root of 1 minus eta cancels, phi is already 0, so this whole term becomes beta. In the first term, alpha 1 is alpha and eta is very close to 1, so I get something that is close enough to alpha plus beta by choosing eta very close to 1. So this setup can be used to add two complex numbers: I can add beta to alpha and get alpha plus beta. In tomorrow's lecture you will see how I use this to design better receivers for state discrimination. This was first studied in the context of optical state discrimination by Kennedy many years ago, in the 1970s, and then his former student Sam Dolinar did some very interesting work using just this complex-number addition and photon detection to come up with the quantum-optimal receiver for telling apart two states of laser light pulses.
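The displacement trick is two lines of arithmetic, so here is a minimal sketch (real amplitudes for brevity; complex ones work the same way) showing the first output converging to alpha + beta as eta approaches one:

```python
import math

def displaced_output(alpha, beta, eta):
    """First output amplitude when the second input carries beta / sqrt(1 - eta)
    and the beam splitter phase is zero, per the lecture's convention."""
    a2 = beta / math.sqrt(1.0 - eta)
    return math.sqrt(eta) * alpha + math.sqrt(1.0 - eta) * a2

alpha, beta = 0.8, -0.3
for eta in (0.9, 0.99, 0.999):
    print(eta, displaced_output(alpha, beta, eta))
# as eta -> 1 the output approaches alpha + beta = 0.5
```

The sqrt(1 - eta) factors cancel exactly in the beta term, so the only approximation is sqrt(eta) * alpha versus alpha, which vanishes as eta goes to one.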
Okay, so with that, I am just going to make the point that if you do this displacement, this addition of a complex number, prior to detection, you still get a Poisson outcome, except that its mean has now changed to magnitude of alpha plus beta squared. So by doing something before you detect the information-bearing light, you can change the detection basis, and such changes to the detection basis are a tool you can use to maximize the information-extraction efficiency. Again, we will see examples of that in tomorrow's lecture.
All right, so what if you were interested in estimating the phase of the field? Then you can do homodyne detection. The way homodyne detection works is that you take that coherent state alpha and interfere it with another coherent state that is locally prepared, a very strong coherent state alpha-LO; and the beam splitter here is a 50/50 beam splitter. So what you get on the two outputs are two coherent states. Can you tell me which coherent states you are going to get at the two outputs? You have alpha and alpha-LO and a 50/50 beam splitter, so if you remember the convention I wrote down, the first output will have alpha plus alpha-LO over root 2, and the second one alpha minus alpha-LO over root 2. Now if I do photon detection, the normal photon detection with its Poisson outcome, on both of those outputs, I will get two random variables: K1 from the first detector, a Poisson random variable whose mean is the magnitude squared of alpha plus alpha-LO over root 2, and K2, a Poisson random variable whose mean is the magnitude squared of alpha minus alpha-LO over root 2.
Then I will use the fact that if you take a Poisson distribution and make the mean value very, very large, it approaches the Gaussian, or normal, distribution: a normal distribution with that same mean, and, because a Poisson's variance equals its mean, the variance of the Gaussian is also lambda. So you can take K1 and K2 and pretend they are Gaussian. Why can you pretend that? Because alpha-LO is chosen to be very large, so its mod squared, which is lambda, is very large for both of those detectors.
Now write the difference as K and scale it appropriately. You can prove (a Gaussian minus an independent Gaussian is again a Gaussian) that its mean value comes out to be, and this is a two-line calculation you should try yourself, the real part of alpha times e to the j theta-LO, where theta-LO is the phase of the local oscillator with respect to the input alpha; and the variance comes out to be one fourth. (I and j are the same thing here; I am an electrical engineer, and I still oftentimes write j for i, because in electrical engineering the letter i is reserved for current. It is the same square root of minus one.) Now this is a very interesting thing: with homodyne detection I can get a measurement whose mean value is any chosen quadrature of that complex number. If I choose theta-LO appropriately, I can measure the real part of alpha, the imaginary part of alpha, or any quadrature in between. But the price I pay is that no matter which quadrature I am trying to measure, I will always have this one-fourth variance. In the classical world people call this the shot noise of the local oscillator, but in the quantum language it is nothing but the quantum-limited noise of a single quadrature operator. So this homodyne detection measurement on a coherent state is something people often use to measure phase information, both in communications and sensing.
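The two-line calculation can also be checked by brute force. A sketch with my own scaling choice K = (K1 - K2) / (2 |alpha_LO|), and one sign convention for theta-LO (conventions differ, so treat the e^{-i theta} below as an assumption): the scaled photocount difference has mean Re(alpha e^{-i theta_LO}) and variance 1/4 when the local oscillator is strong:

```python
import cmath
import math
import random

random.seed(4)

def photocount(mean):
    # the means here are huge, so the Poisson is well approximated by a Gaussian
    return random.gauss(mean, math.sqrt(mean))

def homodyne(alpha, theta_lo, lo_mag=1e4):
    """Scaled photocount difference K = (K1 - K2) / (2 |alpha_LO|)."""
    a_lo = lo_mag * cmath.exp(1j * theta_lo)
    k1 = photocount(abs((alpha + a_lo) / math.sqrt(2)) ** 2)
    k2 = photocount(abs((alpha - a_lo) / math.sqrt(2)) ** 2)
    return (k1 - k2) / (2 * lo_mag)

alpha, theta = 0.7 + 0.4j, 0.0
trials = 50_000
ks = [homodyne(alpha, theta) for _ in range(trials)]
mean = sum(ks) / trials
var = sum((k - mean) ** 2 for k in ks) / trials
print(mean, var)   # mean near Re(alpha e^{-i theta}) = 0.7, variance near 1/4
```

Sweeping theta from 0 to pi/2 moves the mean from the real part to the imaginary part of alpha, while the variance stays pinned at 1/4, which is exactly the trade described in the lecture.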
All right, so now let's go a little bit more quantum. I talked to you about the mode and the coherent state of a mode. Something else you can talk about is the number state of a mode, or Fock state of a mode. Remember, when I said a coherent state of a mode, I said that if you were to detect photons in it, you would get a Poisson point process: a random number of photons, arriving at a rate given by the intensity of the pulse. But with a number state of a mode, if I give you that same flat-top mode but say that I am exciting it in the Fock state one, what does that mean? If I prepare that and put it onto a detector, I will get exactly one click. Where the click comes during that 0 to T will be probabilistic; the probability distribution is given by the mode shape, magnitude of phi of t squared, which integrates to one. But there will be exactly one photon. Now if I give you a two-photon Fock state, prepare it, and detect photons, you will get exactly two clicks every time you do the experiment. This time the two clicks do not come from a Poisson point process; they do not come from a memoryless process. In fact there is a process underlying the random arrival of these photons, which I will not have time to discuss today; Horace Yuen and Jeff Shapiro, and I believe Carl, did some work on writing down this Markov process description of the arrival of photons for squeezed light and number states. But the main thing to remember is that there is exactly that number of photons at the end; that is the definition of a Fock state.
Okay, so Fock states form a complete orthonormal basis for any state of a bosonic mode, of an optical mode. A simple conceptual way to think about it is this: if I give you two Fock states, you can tell them apart with probability of error zero by simply doing a photon-number measurement, because the number of clicks will tell you which Fock state you had. And the only way you can have two states that you can discriminate with probability of error zero is if they are orthogonal. So all the Fock states are orthogonal, and they span all the photon numbers, so they must span all the states of a bosonic mode. This is being sloppy, but it is a good way to understand why Fock states form a basis for any state. A coherent state can also be written in the Fock basis in the following way, and now you can interpret the photon detection I talked about as a measurement of the coherent state in the Fock basis. As you know from quantum mechanics, the probability of the outcome n is the magnitude squared of the inner product of the Fock state n with the coherent state, which is this Poisson term: e to the minus N times N to the n over n factorial.
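That claim can be read off from the standard Fock expansion of a coherent state, |alpha> = e^{-|alpha|^2/2} sum over n of alpha^n / sqrt(n!) |n>. A quick check (my own variable names) that the Born-rule probability |<n|alpha>|^2 equals the Poisson pmf with mean N = |alpha|^2:

```python
import math

alpha = 1.7
N = alpha ** 2                     # mean photon number |alpha|^2

for n in range(12):
    # Fock-basis amplitude <n|alpha> of a coherent state (real alpha here)
    amp = math.exp(-N / 2) * alpha ** n / math.sqrt(math.factorial(n))
    born = amp ** 2                                    # |<n|alpha>|^2
    poisson = math.exp(-N) * N ** n / math.factorial(n)
    assert math.isclose(born, poisson)
```

So the Poisson click statistics of direct detection on a coherent state are just the Born rule in the Fock basis, which is the point being made in the lecture.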
Now, if you were to apply a small phase to a coherent state: a phase can be applied by a small time delay, for example in an optical fiber or in a free-space setting. If you apply a phase theta to a coherent state, you get alpha times e to the i theta; you saw that in the description of the beam splitter I showed you. A phase is simply the special case of the beam splitter when eta equals one: if the transmissivity is one, all that remains is the phase. This is what you get if you apply a phase to a Fock state: you still get a Fock state, with the phase outside as a term that is irrelevant if you just have a single Fock state. But if you have a state with superpositions of Fock states, then this phase plays an important role. You do not have to worry about the a and a-dagger operators; I do not know how many of you are familiar with quantum optics, and we will not need the annihilation and creation operators in at least these two lectures.
Okay, so what is a squeezed state? You have again seen these pictures before, but now you can understand a squeezed state in terms of what it does in a homodyne receiver. Take that same homodyne receiver that I showed you, but instead of a coherent state |α⟩, I put in something that is called a squeezed state. If you remember, for the coherent state I showed this noise ball with one-fourth and one-fourth: no matter which quadrature you measure, you get exactly the same noise variance. For a squeezed state, that noise variance is different depending on which quadrature you are measuring. Your output is still a Gaussian distribution with mean equal to the center, that complex number α, but your variance sweeps anywhere between (1/4)e^(−2r) and (1/4)e^(2r), where r is often termed the squeezing parameter. This ψ is the complex squeezing parameter, whose magnitude is the amount of squeezing, r, and whose phase is the direction of squeezing: if I take this ellipse and turn it like that, that will be changing θ. So now you have two complex numbers: α, being the center of the ball, and this ψ, which tells you how much to squeeze and in which direction the squeezing is. Okay, so that's a squeezed state.
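The variance sweep between (1/4)e^(−2r) and (1/4)e^(2r) can be written down directly. A small sketch of my own (a toy formula for the quadrature variance at angle φ of a state squeezed along φ = 0, in the lecture's convention where the vacuum variance is 1/4):

```python
import math

def homodyne_variance(r, phi):
    """Variance of the quadrature measured at angle phi, for a state
    squeezed along phi = 0, with vacuum variance 1/4."""
    return 0.25 * (math.exp(-2 * r) * math.cos(phi)**2
                   + math.exp(2 * r) * math.sin(phi)**2)

r = 1.0
print(homodyne_variance(r, 0.0))          # (1/4) e^{-2r}: the squeezed quadrature
print(homodyne_variance(r, math.pi / 2))  # (1/4) e^{+2r}: the anti-squeezed quadrature
print(homodyne_variance(0.0, 0.3))        # r = 0 recovers the vacuum value 0.25
```

Sweeping φ from 0 to π/2 interpolates between the two extremes, which is exactly the "noise ball turning into an ellipse" picture.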
And if you look at it in the time domain, what the squeezing means is that depending upon which quadrature you are measuring, the size of the noise ball can change. And you can squeeze it in different directions: you can squeeze it this way, or you can squeeze it this way. One is called phase squeezing, the other is called amplitude squeezing, and where the noise ball amplifies and where it shrinks changes accordingly.
Okay, so another important thing to remember about the squeezed state: for a coherent state, the mean photon number was just the magnitude squared of the center of the ball. For a squeezed state, there are two contributions to the mean photon number. One comes from |α|², which is still the magnitude squared of that complex number at the center of the ball, the ellipse; but there is an additional term, which is sinh² of this r parameter. So there are two contributors. Now even if α is zero, even if my ellipse is centered over here at the origin, that squeezed state, which is known as the squeezed vacuum, still has photons in it: sinh²(r). Whereas a coherent state centered at the origin has no photons, even though it still has the one-over-four variance in both quadratures. So even for a vacuum state, if you were to do homodyne detection you will get a zero-mean Gaussian variable with variance 1/4, but it has no photons in it.
Okay, so for a squeezed vacuum state, if you were to write this down in the Fock basis, the interesting thing you see is that you only get contributions in the even photon-number terms. I will not be deriving this expression, but the main thing I want you to take home is, first, that squeezed vacuum has photons in it, and second, that it has this non-Poissonian statistics, very weird statistics: you only ever have zero, two, four, six photons, and so forth.
In the time domain, if you want to visualize vacuum, it is nothing, right? There is no oscillation in vacuum; it is just flat. The one-fourth noise ball is always there, but there is no notion of an oscillatory component; vacuum is vacuum. But notice that squeezed vacuum is still vacuum: there is no mean oscillatory component the way a coherent state has one, yet there is still an underlying carrier with a given center frequency. So you will have to say squeezed vacuum at 1550 nanometers, or squeezed vacuum at 780 nanometers, and that determines the frequency with which you see the quadrature variances go up and come back down, between (1/4)e^(−2r) and (1/4)e^(2r). And here this is a coherent state, this is a phase-squeezed state, and this is an amplitude-squeezed state.
All right, so how do you generate squeezed vacuum? What you do in the lab is squeeze the vacuum: you take the vacuum state and pass it through a χ⁽²⁾ nonlinear optical process, for example; there are other ways to do it as well. This thing is called a squeezer. Again, I will not go into the derivation of a squeezer in this talk, but I'm happy to describe it to those of you who are inclined to understand how this process works. This is just a picture from my lab: my graduate student Alex has built a squeezer, and this is how it looks in the laboratory. He has a periodically poled KTP crystal sitting in a bow-tie cavity, this is an oven to fine-tune the phase-matching conditions, and this thing generates squeezed vacuum at a few dB of squeezing at 1550 nanometers.
All right, so why care about squeezed vacuum? Let's look at perhaps one of the earliest examples of the use of squeezed vacuum. If I take homodyne detection on a coherent state |α⟩, what did I say? Let's say that α is a real number: I will get a Gaussian distribution with a mean of α and a variance of one-fourth; I told you about that. So there was a very, very simple observation, and I don't even remember who made it. Carl, you may remember, was this yours, or one we wrote a paper on? That by injecting squeezing into the vacuum port of a beam splitter, you can recover the signal-to-noise ratio of a homodyne receiver as if the beam splitter was never there. What I mean by that is: if I take homodyne detection on a coherent state, the signal-to-noise ratio is the square of the mean divided by the variance, which is 4α². But if I had a coherent state |β⟩ and passed it through a beam splitter of transmissivity κ with vacuum in the other port, I would get a signal-to-noise ratio given by 4κβ², because the beam splitter will change the amplitude of the coherent state from β to √κ β, so the mean will be √κ β. But if I inject squeezed vacuum into this port, then with enough squeezing this SNR goes to 4β², as if that loss never happened. It's a very, very interesting observation about squeezed vacuum: you're just injecting squeezing into the quadrature that you're homodyning.
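This SNR recovery is just Gaussian mean-and-variance bookkeeping, so it can be sketched in a few lines. This is my own toy model, assuming the squeezed vacuum is squeezed in the quadrature being homodyned (vacuum variance 1/4 convention):

```python
import math

def homodyne_snr(beta, kappa, r):
    """SNR of homodyne detection on |beta> after a transmissivity-kappa beam
    splitter whose other input port carries vacuum squeezed by r."""
    mean = math.sqrt(kappa) * beta   # amplitude shrinks to sqrt(kappa) * beta
    var = kappa * 0.25 + (1 - kappa) * 0.25 * math.exp(-2 * r)
    return mean**2 / var

beta, kappa = 2.0, 0.5
print(homodyne_snr(beta, kappa, 0.0))    # plain vacuum port: 4*kappa*beta^2 = 8.0
print(homodyne_snr(beta, kappa, 20.0))   # strong squeezing: -> 4*beta^2 = 16.0
```

As r grows, the (1 − κ) vacuum contribution to the noise is squeezed away and the SNR climbs back to the lossless value 4β², matching the claim above.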
Another thing, you know, people have used this to propose multiple applications: inline squeezing. This is the same operation I showed on the previous slide, where you go from vacuum to squeezed vacuum. So if I had, say, a coherent state |β⟩, some loss of transmissivity κ, and I have a homodyne detector here, but this time the homodyne detector has some inefficiency, meaning it is as if I had a beam splitter of transmissivity η followed by an ideal homodyne receiver. If I were to put a squeezer with enough gain before that non-ideal, inefficient homodyne detection, which has a sub-unity mixing-efficiency times detection-efficiency product, I can recover the effect of the photons lost before the homodyne by having enough gain in that phase-sensitive amplifier, or squeezer. These two effects, squeezed vacuum injection and phase-sensitive amplification, are something that we explored many years ago: my former PhD advisor Jeff Shapiro, his colleague Prem Kumar, myself, and a few others were involved in a DARPA program back in 2007.
And in this program we were looking at classical laser radar, using laser light to interrogate a scene. The pupil through which your light was coming back was a soft-aperture pupil, so it was not a hard circular aperture; it had attenuation. And by injecting squeezed vacuum from behind the aperture, you could make the light coming from the scene effectively see a nice circular aperture, as if there was no attenuation; that was the effect of squeezed vacuum injection. And we had an imperfect homodyne detection receiver, which we preceded by a phase-sensitive amplifier before the detector, and we were able to get much better images of the targets we were looking at with this homodyne-detection radar. So anyway, there are many other applications of these concepts that people have looked at.
Okay, so let's now do some examples of quantum metrology with the tools we have learned. We'll be using the tools that I talked about and doing some examples. You have seen some Heisenberg-sensitivity examples in the previous lectures, but we'll now do some derivations. This is a hike we did with Kasturi when she visited my group for a couple of weeks a few months back, and there was this beautiful waterfall. I remember going back to that same location a couple of months later, and there was no waterfall and no water over here. Tucson goes through these cycles of rainfall, and sometimes you can have completely different scenery when you visit at different times.
So let's look at this canonical problem, which Carl had stated in his lecture: the conjugate-phase interferometer. I have a phase of θ/2 on one arm and −θ/2 on the other, and I will give you a photon budget in terms of a mean photon number n that you can use. You can use it to generate a coherent state, or some other state, or a squeezed state, whatever you want, and you want to estimate the phase. With a coherent state, let's say I use it in this way: I'll split the coherent state on a 50-50 beam splitter, so this α/√2 gets one phase and that α/√2 gets the other phase, and now this is your quantum state that encodes the phase.
encodes the phase what would you do to find the Fischer
what would you do to find the Fischer information for that Quantum State
information for that Quantum State meaning at this point
meaning at this point I'm tasked with designing a receiver
I'm tasked with designing a receiver that has the highest sensitivity of
that has the highest sensitivity of estimating that parameter Theta so we've
estimating that parameter Theta so we've already learned the tools so what will I
already learned the tools so what will I do I'll calculate the the quantum
do I'll calculate the the quantum efficient information
efficient information and recall this formula that I wrote
and recall this formula that I wrote down when your Quantum state that
down when your Quantum state that carries Theta Theta is same as x x is
carries Theta Theta is same as x x is that unknown parameter so that's Theta
that unknown parameter so that's Theta here when rho of X is a pure State I can
here when rho of X is a pure State I can evaluate this this simple expression for
evaluate this this simple expression for the quantum facial information
the quantum facial information how would you go about doing this
How would you go about doing this calculation? Well, as you see here, you are going to need |ψ̇⟩: you have to differentiate your state with respect to x. What you would do, if you remember the formula for a coherent state, is this. The coherent state I wrote down as |α⟩ = e^(−|α|²/2) Σₙ (αⁿ/√(n!)) |n⟩. Now instead of α I have α/√2 here, so that e^(−|α|²/2) becomes e^(−|α|²/4), and this term is (α/√2)ⁿ. And then there's a phase, and that phase will come in here as e^(inθ/2). So take that expression; this arm is not α, it is (α/√2)e^(iθ/2). And then you take the other coherent state, because I want the inner product: one is |(α/√2)e^(iθ/2)⟩, and the other one is |(α/√2)e^(−iθ/2)⟩, and for that you're going to have another summation, over m, where you write the same coefficients except that now you'll have an e^(−imθ/2). And that's your |ψ(θ)⟩. Now you can take this expression, take a derivative with respect to θ to calculate |ψ̇(θ)⟩, write down those inner products and magnitudes squared, and evaluate the expression. I leave it to you as homework; you should try this out, it's a very simple calculation, and what you should get is K(θ) = n for this setting.
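If you want to check the homework answer K(θ) = n without grinding through the Fock sums, you can use the standard pure-state relation between QFI and fidelity, |⟨ψ(θ)|ψ(θ+δ)⟩| ≈ 1 − Kδ²/8, together with the textbook coherent-state overlap ⟨β|γ⟩ = exp(−|β|²/2 − |γ|²/2 + β*γ). A numerical sketch of my own:

```python
import cmath, math

def coherent_overlap(b, g):
    """<beta|gamma> = exp(-|b|^2/2 - |g|^2/2 + conj(b)*g) for coherent states."""
    return cmath.exp(-abs(b)**2 / 2 - abs(g)**2 / 2 + b.conjugate() * g)

def two_mode_overlap(alpha, t1, t2):
    """Overlap <psi(t1)|psi(t2)> of |a e^{+it/2}>|a e^{-it/2}>, a = alpha/sqrt(2)."""
    a = alpha / math.sqrt(2)
    return (coherent_overlap(a * cmath.exp(1j * t1 / 2), a * cmath.exp(1j * t2 / 2))
            * coherent_overlap(a * cmath.exp(-1j * t1 / 2), a * cmath.exp(-1j * t2 / 2)))

# Finite-difference estimate of the QFI from the fidelity drop:
alpha, theta, d = 2.0, 0.3, 1e-4
F = abs(two_mode_overlap(alpha, theta, theta + d))
qfi = 8 * (1 - F) / d**2
print(qfi)   # ≈ |alpha|^2 = 4.0, i.e. K(theta) = n
```

Analytically the overlap is exp(−n(1 − cos(δ/2))) ≈ 1 − nδ²/8, which is why the finite difference lands on n = |α|².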
of theta equals to n for this setting so what what it means is that your there
so what what it means is that your there exists some measurement
exists some measurement such that the the variance of the
such that the the variance of the estimator that results from that
estimator that results from that measurement goes as one over
measurement goes as one over n times capital N where n is the number
n times capital N where n is the number of number of modes number of Trials of
of number of modes number of Trials of this measurement that you made with that
this measurement that you made with that same mean photon number Alpha coherent
same mean photon number Alpha coherent State prepared n times is that clear so
State prepared n times is that clear so this is I'm just Quantum official
this is I'm just Quantum official information remember it just tells you
information remember it just tells you that there exists some measurement that
that there exists some measurement that will do the job that will have that
will do the job that will have that sensitivity
sensitivity but now we are tasked with finding a
but now we are tasked with finding a measurement that achieves that
measurement that achieves that sensitivity so I told you that you can
sensitivity so I told you that you can calculate the sld and its eigenvectors
calculate the sld and its eigenvectors that's very hard for you to describe to
that's very hard for you to describe to an experimentalist like what does it
an experimentalist like what does it mean to calculate to do that experiment
mean to calculate to do that experiment the projective measurement defined by
the projective measurement defined by the sld eigenvectors but in this case
the sld eigenvectors but in this case it's actually not a hard to design
it's actually not a hard to design measurement so what we will do is that
measurement so what we will do is that we will just to recombine the two
we will just to recombine the two coherences another 50 50 beams later
coherences another 50 50 beams later so now you can do this math in your head
so now you can do this math in your head I don't need to write it down so this is
I don't need to write it down so this is Alpha
Alpha that coherent State here add these two
that coherent State here add these two and subtract these two what will happen
and subtract these two what will happen you will get a cosine and a sign right
you will get a cosine and a sign right you just take these two coefficients so
you just take these two coefficients so this plus this over square root of 2
this plus this over square root of 2 This plus this this minus this over
This plus this this minus this over square root of 2. now I have two
square root of 2. now I have two detectors
detectors they will both generate a poisson random
they will both generate a poisson random variable
variable whose mean is given by n cosine squared
whose mean is given by n cosine squared theta by 2 and N sine Square Theta by 2
theta by 2 and N sine Square Theta by 2 N is mod squared of alpha
N is mod squared of alpha okay so now how I need to calculate the
okay so now how I need to calculate the classical facial information of this
classical facial information of this measure because I have not specified the
measure because I have not specified the measurement you have a random variable
measurement you have a random variable so you have B of Y given X already and I
so you have B of Y given X already and I only need to calculate what is the
only need to calculate what is the official information so what's the first
official information so what's the first step well
step well take a look here we wrote down that the
take a look here we wrote down that the poisson distribution with an unknown
poisson distribution with an unknown mean the fission information is given by
mean the fission information is given by 1 by X right so the estimating Lambda 1
1 by X right so the estimating Lambda 1 from Z1 Lambda 1 being Let's uh it's a
from Z1 Lambda 1 being Let's uh it's a parameter which is the mean here this is
parameter which is the mean here this is official information
official information and then I also told you this Jacobian
and then I also told you this Jacobian rule
rule what if I don't care about Lambda 1 I
what if I don't care about Lambda 1 I actually care about Theta
actually care about Theta so I want to calculate the facial
so I want to calculate the facial information of theta it will be equal to
information of theta it will be equal to the fission information of Lambda 1
the fission information of Lambda 1 times the the squared of the derivative
times the the squared of the derivative of uh of this function f f one of theta
of uh of this function f f one of theta with respect to Theta squared
with respect to Theta squared so if you do this math again just a
so if you do this math again just a couple of lines you will get J1 equal to
couple of lines you will get J1 equal to n times sine Square Theta by 2.
n times sine Square Theta by 2. is that clear so I'm just calculating
is that clear so I'm just calculating the fission information for estimating
the fission information for estimating Theta from differential information for
Theta from differential information for estimating Lambda 1 using that Jacobian
estimating Lambda 1 using that Jacobian Rule now similarly the facial
Rule now similarly the facial information for estimating Theta from Z2
information for estimating Theta from Z2 is given by n times cosine squared Lam
is given by n times cosine squared Lam Theta by 2 you can calculate that using
Theta by 2 you can calculate that using the same method
the same method and because these two measurements are
and because these two measurements are statistically independent random
statistically independent random variables their Fischer informations add
variables their Fischer informations add and if you don't find the total official
and if you don't find the total official information for estimating Theta from
information for estimating Theta from both of these you can add these two fish
both of these you can add these two fish informations you get n again
informations you get n again which is really cool which means that
which is really cool which means that now if I were to write down the minimum
now if I were to write down the minimum maximum likelihood estimator on Z1 and
maximum likelihood estimator on Z1 and Z2 my variance of the estimator is going
Z2 my variance of the estimator is going to go as 1 over n n which is exactly
to go as 1 over n n which is exactly what the quantum pressure information
what the quantum pressure information did
did which means in this particular case is
which means in this particular case is that this measurement is a Quantum
that this measurement is a Quantum optimal measurement is this the
optimal measurement is this the projection onto the sld eigenvectors it
projection onto the sld eigenvectors it is not
is not okay this is a different way I can write
okay this is a different way I can write down the quantum description of this
down the quantum description of this measurement
measurement how I can take the fog basis go
how I can take the fog basis go backwards so this beam splitter it will
backwards so this beam splitter it will be a crazy looking measurement basis
be a crazy looking measurement basis it's not the sld eigen measurement basis
it's not the sld eigen measurement basis necessarily so but it achieves the qfi
necessarily so but it achieves the qfi nevertheless
All right, let's take another example: something called a NOON state in the literature. It's a very well known state in the context of quantum metrology; in fact one of the earliest examples of quantum-limited sensitivity for phase estimation was described with respect to a NOON state. In the NOON state, N is an integer, and |N⟩ is a Fock state with photon number N, so this state has a mean photon number of N, right.
Then I will have the state at the output. Remember I said that if a phase acts on a Fock state, it just picks up a phase in front of the Fock state; but now I will get two different phases, e^(iNθ/2) and e^(−iNθ/2), because one Fock state hits this mode and the other Fock state hits that mode. Now you can do the same thing: go ahead and calculate the quantum Fisher information. In fact you will find this calculation a lot easier than the coherent-state one. For the derivative state you just take this and differentiate with respect to θ: you will get an iN/2 times e^(iNθ/2) on one term and a −iN/2 times e^(−iNθ/2) on the other. Do the differentiation, do these inner products, and if you do the math you should see K(θ) come out as N².
out as N squared what does it mean well it means that
what does it mean well it means that there exists a measurement on this
there exists a measurement on this output such that the variance of the
output such that the variance of the estimator will go as one over n times N
estimator will go as one over n times N squared so little n is still a number of
squared so little n is still a number of copies of the state
copies of the state so this is what is you know called the
so this is what is you know called the Heisenberg limited sensitivity
Heisenberg limited sensitivity um in this case I will not show an
um in this case I will not show an actual example calculation of a receiver
actual example calculation of a receiver even though that same interferometric
even though that same interferometric receiver for the coherent State actually
receiver for the coherent State actually works in this case also but I just
works in this case also but I just wanted to show you something little
wanted to show you something little different
different I will Define a measurement by these two
I will Define a measurement by these two projectors
projectors where these two projectors oh I didn't
where these two projectors oh I didn't even write down these projectors so PSI
even write down these projectors so PSI one I intended it to be
i1 is n 0 plus 0 n over root 2
over root 2 and PSI 2 is n 0 minus 0 n over root 2.
Now, these two are mutually orthogonal states and hence describe a valid projective measurement. And if you were to work out the probability of getting the one outcome or the two outcome, you just take the magnitude-squared inner product with respect to the NOON state, and you get these two formulas: cos²(Nθ/2) and sin²(Nθ/2).
What do we have here? We have exactly the setting that we derived earlier: you have little n copies of a Bernoulli random variable with a parameter p. But not quite the parameter you care about: you don't care about p, you actually care about θ, so you still have to use that Jacobian, right?

So the first question is: how do I calculate the classical Fisher information of this measurement? Well, first recall from the formula over here on the board that the Fisher information for estimating p is 1/(p(1−p)); we just derived that for the Bernoulli random variable. But if you want to estimate θ, then J(θ) is given by this J(p) times the mod-squared Jacobian |f′(θ)|², where f′(θ) is just the derivative of the functional form p = f(θ). Take this, differentiate it, square it, multiply by that, do a couple of lines of math, and you will see J(θ) come out equal to N², which means that this particular projective measurement achieves the quantum Fisher information.

And again it shows that the classical Fisher information of a measurement, just like in the previous example, equals the quantum Fisher information, which means that this measurement is the optimum, or one of the optimal measurements, and hence the maximum-likelihood estimator at the output will achieve that same scaling.
output will achieve that same scaling all right so let's look at using a
all right so let's look at using a squeeze light probe with the same mean
squeeze light probe with the same mean photon number in this case I will not
photon number in this case I will not have the mathematical tools to show you
have the mathematical tools to show you the full calculation but this is the
the full calculation but this is the example which I want to use for some
example which I want to use for some photonic sensing applications that I
photonic sensing applications that I will talk about afterwards
will talk about afterwards okay so now if I take a squeezed vacuum
okay so now if I take a squeezed vacuum state with uh with the squeezing
state with uh with the squeezing parameter squeeze this way
parameter squeeze this way and I squeezed display squeeze state
and I squeezed display squeeze state with a mean of Alpha and a squeezing
with a mean of Alpha and a squeezing parameter of PSI the same squeezing
parameter of PSI the same squeezing parameter
parameter I'll get a state here I calculate the
I'll get a state here I calculate the official information this also goes as N
official information this also goes as N squared okay and in this case the
squared okay and in this case the measurement that works to achieve this
measurement that works to achieve this is a homonym detection measurement on
is a homonym detection measurement on only one of the quadratures after you
only one of the quadratures after you put it through a 50 50 games later so if
put it through a 50 50 games later so if you do a homodide measurement write down
you do a homodide measurement write down an estimator if you calculate the
an estimator if you calculate the variance of this estimator you will get
variance of this estimator you will get the variance of the estimator to be
the variance of the estimator to be equal as 1 over N squared and here we
equal as 1 over N squared and here we arrange the squeezing such that the
arrange the squeezing such that the total mean photon number of these two
total mean photon number of these two squeeze States is equal to n capital N
squeeze States is equal to n capital N just to be fair in my calculation
just to be fair in my calculation all right so this this was used by my
all right so this this was used by my former student Michael in designing a
former student Michael in designing a Quantum enhanced fiber optical gyroscope
Quantum enhanced fiber optical gyroscope so gyroscopes are a device that can be
so gyroscopes are a device that can be used to measure you know acceleration
used to measure you know acceleration for example so yeah in a normal
for example so yeah in a normal gyroscope if a laser gyroscope you have
gyroscope if a laser gyroscope you have a laser going into a long Loop and into
a laser going into a long Loop and into homodyne detection
homodyne detection and in this case he injected squeezed
and in this case he injected squeezed vacuum through a circulator and then
vacuum through a circulator and then looked at some applications of that not
looked at some applications of that not just with a single mode squeeze vacuum
just with a single mode squeeze vacuum but extensions of that where you split
but extensions of that where you split the squeeze back into multiple modes to
the squeeze back into multiple modes to get an entangle State and get further
get an entangle State and get further better performance but again I will not
better performance but again I will not go into the details of this calculation
go into the details of this calculation here
here now the moment you have loss in any one
now the moment you have loss in any one of these sensing examples what happens
of these sensing examples what happens is that this Heisenberg limited
is that this Heisenberg limited sensitivity goes away so see the
sensitivity goes away so see the variance of the noon State or the
variance of the noon State or the squeeze State transmitter that goes 1
squeeze State transmitter that goes 1 over N squared the variance for the
over N squared the variance for the coherency transmitter goes as 1 over n
coherency transmitter goes as 1 over n but if you have loss then what happens
but if you have loss then what happens is that it starts scaling like the
is that it starts scaling like the quantum case for low mean photon number
quantum case for low mean photon number and when you go to High n it becomes
and when you go to High n it becomes parallel to 1 by n
parallel to 1 by n so basically there is no Heisenberg
so basically there is no Heisenberg limited sensitivity as a scaling law for
limited sensitivity as a scaling law for large n you always go back to 1 over n
large n you always go back to 1 over n scaling for the variance but you can
scaling for the variance but you can still get a good constant Factor
still get a good constant Factor Improvement in the variance using
Improvement in the variance using Quantum Resources and how much
Quantum Resources and how much improvement you get depends upon how
improvement you get depends upon how lost your system is
lost your system is okay so the last example I want to do is
okay so the last example I want to do is the magnetic field sensing with two
the magnetic field sensing with two level atoms
level atoms and in this example it's a very very
and in this example it's a very very simple example you have a state of a
simple example you have a state of a qubit zero plus one over two root 2 and
qubit zero plus one over two root 2 and as a function of time you get um
as a function of time you get um uh you get a Time varying a Time varying
uh you get a Time varying a Time varying phase being applied to it and what you
phase being applied to it and what you want to estimate is the is Theta meaning
want to estimate is the is Theta meaning the rate at which this phase is
the rate at which this phase is oscillating okay so this Theta is what
oscillating okay so this Theta is what you care about
you care about and what you can do is that if you take
and what you can do is that if you take the state and make a measurement in this
the state and make a measurement in this 45 degree rotated basis you will get a
45 degree rotated basis you will get a Bernoulli outcome again P or 1 minus P
Bernoulli outcome again P or 1 minus P this these two probabilities will be the
this these two probabilities will be the plus outcome or the minus outcome it's
plus outcome or the minus outcome it's very easy to calculate from this right
very easy to calculate from this right how would you calculate from this you
how would you calculate from this you will take a plus inner product with this
will take a plus inner product with this I of T magnitude squared you will get
I of T magnitude squared you will get this formula minus the state inner
this formula minus the state inner product with PSI magnitude Square you
product with PSI magnitude Square you will get sine squared theta T by 2. so
will get sine squared theta T by 2. so we know that the fission information is
we know that the fission information is given by that formula again 1 over P
given by that formula again 1 over P times 1 minus p
times 1 minus p and apply the Jacobian again your J of
and apply the Jacobian again your J of theta will come to t squared
theta will come to t squared so what it means is that the variance of
so what it means is that the variance of your ml estimator will go as one over
your ml estimator will go as one over little n times t squared with little n
little n times t squared with little n here is the number of atoms that is
here is the number of atoms that is let's say sensing the same magnetic
let's say sensing the same magnetic field so each atom is being subject to
field so each atom is being subject to the same field
the same field and you want to write down the variance
and you want to write down the variance of the estimator for estimating that
of the estimator for estimating that that that parameter Theta which is the
that that parameter Theta which is the rate of oscillation
rate of oscillation now what if you had these n atoms that
now what if you had these n atoms that were
were initialize instead of all of them in the
initialize instead of all of them in the equal superposition of one and zero in
equal superposition of one and zero in this g8z state or what Carl called the
this g8z state or what Carl called the cat state zero zero zero zero zero plus
cat state zero zero zero zero zero plus one one one one one you will get that
one one one one one you will get that phase e to the i n times Theta times T
phase e to the i n times Theta times T in front of that term
in front of that term and again you can do a measurement on
and again you can do a measurement on this in The Logical plus minus basis
this in The Logical plus minus basis meaning these two vectors
meaning these two vectors and you will get again a Bernoulli
and you will get again a Bernoulli outcome but with a probability cosine
outcome but with a probability cosine squared theta and T over 2 and 1 minus q
squared theta and T over 2 and 1 minus q and again I'm losing the same formula my
and again I'm losing the same formula my J of theta will simply come to be NT
J of theta will simply come to be NT Bank total square it is like I am doing
Bank total square it is like I am doing the same calculation as before like J if
the same calculation as before like J if you look at this here
you look at this here uh this J of theta is t squared
uh this J of theta is t squared where T was this term over here
where T was this term over here um and uh and the T appeared in the in
um and uh and the T appeared in the in the Bernoulli probably distribution is
the Bernoulli probably distribution is cosine squared theta T by 2. so if I
cosine squared theta T by 2. so if I replace T by n t
replace T by n t the J the whole calculation is the same
the J the whole calculation is the same this becomes since the p square becomes
this becomes since the p square becomes NT times whole squared
NT times whole squared and because I just had one copy of this
and because I just had one copy of this state
state I don't need a little n multiplying
I don't need a little n multiplying outside my ml estimator will be just 1 1
outside my ml estimator will be just 1 1 over J of theta so it's n Square t
over J of theta so it's n Square t squared
squared so what you see here is that the main
so what you see here is that the main difference between the previous slide is
difference between the previous slide is that P Square is the same but that n has
that P Square is the same but that n has become N squared now so this is the
become N squared now so this is the Heisenberg limited sensitivity of field
Heisenberg limited sensitivity of field field sensing
field sensing okay so let's see I am now at 3 30.
okay so let's see I am now at 3 30. um
um and I was I'm going to talk about
and I was I'm going to talk about network of sensors and how do I create
network of sensors and how do I create these Atomic sensor networks and a
these Atomic sensor networks and a couple of examples of that so what do
couple of examples of that so what do you think should I stop here or go for
you think should I stop here or go for another 15 minutes this section
another 15 minutes this section um and uh what's your what's your
um and uh what's your what's your preference
but yeah that's that sounds good that sounds
yeah that's that sounds good that sounds good yeah that's perfectly fine so if
good yeah that's perfectly fine so if you want to take questions at this point
you want to take questions at this point so my plan for the next part here is to
so my plan for the next part here is to talk about uh not just a single sensor
talk about uh not just a single sensor if you have multiple sensors working
if you have multiple sensors working together towards one task
together towards one task um then uh how that how can entanglement
um then uh how that how can entanglement among Those sensors gives you a better
among Those sensors gives you a better sensitivity and then I'll show examples
sensitivity and then I'll show examples of such sensor networks for magnetic
of such sensor networks for magnetic field sensing for RF photonics sensors
field sensing for RF photonics sensors for things like long Baseline astronomy
for things like long Baseline astronomy and one example in multiple spatial
and one example in multiple spatial modes that are entangled for higher
modes that are entangled for higher Precision beam deflection measurements
Precision beam deflection measurements so my talk is going to get more and more
so my talk is going to get more and more and more applied as I go towards the
and more applied as I go towards the latter parts of my talk
latter parts of my talk but any questions yeah so
but any questions yeah so questions
questions sure
Sir, I have a question with regards to the estimator section. You talked about how the Fisher information may or may not depend on the variable. So is that dependent only on the type of distribution you have, or is there anything else that may matter?

There's nothing other than the type of the distribution. In general the Fisher information will be a function of the parameter; when it is not, that is what I consider the special case, like that Gaussian distribution with an unknown mean and a known variance, where the Fisher information is one over the variance. But I would say that is more of an exception than the rule: typically the Fisher information is a function of the parameter. You will see one more example in my talk tomorrow, on quantum-limited estimation of the separation between two stars; there also you'll see a Fisher information that does not depend on the parameter you care about. But mostly it does, mostly it does, yeah.
mostly it does mostly it does yeah and also the basic question regarding
and also the basic question regarding the squeezing of light so you talked
the squeezing of light so you talked about the noise ball right so I'm how I
about the noise ball right so I'm how I understand is that when you squeeze the
understand is that when you squeeze the light in towards one quadrature so the
light in towards one quadrature so the uncertainty increases in the other
uncertainty increases in the other orthogonal quadrature so the noise wall
orthogonal quadrature so the noise wall is is that the uncertainty is is the
is is that the uncertainty is is the uncertainty that noise ball is it's like
uncertainty that noise ball is it's like that natagram I had the coherent State
that natagram I had the coherent State oscillatory component and the noise
oscillatory component and the noise balls vertical length was the same
balls vertical length was the same which simply means that if I did the
which simply means that if I did the homodyne detection no matter which
homodyne detection no matter which quadrature of the complex number I am
quadrature of the complex number I am trying to measure real part imaginary
trying to measure real part imaginary but I'll still always get one fourth
but I'll still always get one fourth that's the variance in the photon number
that's the variance in the photon number units for a squeeze State as I'm going
units for a squeeze State as I'm going through that 0 to 2 pi oscillation some
through that 0 to 2 pi oscillation some part of the zero to two Pi oscillation I
part of the zero to two Pi oscillation I increase it it means that the other
increase it it means that the other orthogonal quantity drama you squeeze it
orthogonal quantity drama you squeeze it so it goes like this
Okay, so you have talked about how a squeezed state has an even number of photons, mathematically. Can you give a physical intuition for why we get only an even number of photons?

A physical intuition... Sai, do you have an answer to that? What is the physical intuition for why the squeezed vacuum, squeezed along the real quadrature, would have only even numbers of photons?
It is quadrature dependent... I'm trying. So his question is a physical reason for why. Okay, I can give you one, not for squeezed vacuum, but there is another state that I did not talk about. Have you heard of the term cat state? Yes? So if I take a coherent state |α⟩ that I wrote down earlier, e^(−|α|²/2) times the sum over n of α^n over √(n!) times |n⟩, and I define a state |α⟩ plus |−α⟩, with whatever normalization constant I need in order to make it a unit-norm state, what will happen if I write out the state? Can you see what happens here? When I take −α, the first part of this sum remains the same, e^(−|α|²/2), but here I get α^n plus (−α)^n.
This state is obviously not a Poissonian state anymore. But if you see these terms, they go between zero and non-zero for odd and even n, right? This second term is (−1)^n times α^n, so when n is odd it is negative and it cancels out; so the state only has even Fock components. Now, this did not give you an answer for the squeezed vacuum itself, but if you look at the Wigner function of this particular cat state, it has a very strong resemblance to the squeezed vacuum squeezed along that q quadrature that I had drawn. But the short answer is, I don't have a very good answer for the physical intuition.
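The odd-term cancellation can be made concrete by building the Fock amplitudes of the even cat state |α⟩ + |−α⟩. A small sketch, with a hypothetical amplitude α and Fock-space cutoff:

```python
import numpy as np
from math import factorial

alpha, nmax = 1.3, 20   # hypothetical amplitude and Fock-space cutoff

def coherent_amps(a, nmax):
    """Fock amplitudes of a coherent state |a>: e^{-|a|^2/2} a^n / sqrt(n!)."""
    return np.array([np.exp(-abs(a) ** 2 / 2) * a**n / np.sqrt(factorial(n))
                     for n in range(nmax + 1)])

cat = coherent_amps(alpha, nmax) + coherent_amps(-alpha, nmax)  # |a> + |-a>
cat = cat / np.linalg.norm(cat)                                 # normalize

probs = np.abs(cat) ** 2
print(probs[0::2].sum())   # ~ 1.0: all the weight sits on even photon numbers
print(probs[1::2].sum())   # the odd terms cancel
```

Every odd-n amplitude is α^n + (−α)^n = 0, so the probability on odd photon numbers vanishes identically.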
One other way to think about squeezing: the first papers that introduced squeezed light were a three-part series, around 1979 to 1981, by Jeff Shapiro and Horace Yuen. They did not call it squeezed light at the time; if you look at the title of the papers, they called it two-photon coherent states. Their reason for calling it a two-photon coherent state is the squeezing Hamiltonian: if you write down the unitary squeezing operator, it looks like this. These are the field operators, and in the exponent you have annihilation of photons twice and creation of photons twice. That was the reason why they called it a two-photon coherent state, and it was Carlton Caves who started calling it a squeezed state because of its picture in phase space. But that is another way to think about why the squeezed state has even numbers of photons, if you were to work out the Fock-basis elements, because a squeezed vacuum is simply this operator applied to vacuum.
Okay, and if you work out the application: if I have an exponential of an operator, I will have to write it as the usual Taylor series, 1 plus A plus A²/2! and so on. Take this, apply it to vacuum, and you will be able to see that you will get |0⟩, and then |2⟩ and |4⟩ and |6⟩ and so on; the odd terms cancel out. This is the closest I can come in terms of physical intuition.
physical intuition hope that's um
hope that's um thank you
thank you three four five okay uh yeah so you
three four five okay uh yeah so you mentioned about svi and PSA techniques
mentioned about svi and PSA techniques so I was wondering if there is a
so I was wondering if there is a threshold for the squeezing parameter
threshold for the squeezing parameter that you actually need to uh see uh the
that you actually need to uh see uh the independency from copper and if it's a
independency from copper and if it's a good question there is no threshold it's
good question there is no threshold it's a smooth thing right so for Kappa to
a smooth thing right so for Kappa to completely disappear your R will have to
completely disappear your R will have to be infinity or your ETA for the homody
be infinity or your ETA for the homody and fish inefficiency to completely go
and fish inefficiency to completely go to one your face sensitive amplifiers
to one your face sensitive amplifiers gain has to go to Infinity
gain has to go to Infinity but obviously that is not the case in an
but obviously that is not the case in an experiment so in that experiment we had
experiment so in that experiment we had six DBS squeezing for the squeeze vacuum
six DBS squeezing for the squeeze vacuum and something like 2 DB of squeezing for
and something like 2 DB of squeezing for the PSA so we just reduced the the the
the PSA so we just reduced the the the effect of that inefficiency in the
effect of that inefficiency in the homonym receiver and hence improve the
homonym receiver and hence improve the quality of our image in the experiment
quality of our image in the experiment so it is as if I have a homodyne
so it is as if I have a homodyne detector with a detection efficient 70
detector with a detection efficient 70 but as if my information bearing light
but as if my information bearing light is seeing a homodyne detector of 80
is seeing a homodyne detector of 80 efficiency
efficiency so that was the effect that we had seen
so that was the effect that we had seen But there is no sharp threshold it's a
But there is no sharp threshold it's a smooth effect okay
Hello sir. The coherent state of light follows Poissonian statistics, so it has a probability distribution function, and the squeezed state of light follows sub-Poissonian statistics, where the variance is less than the mean. Does it have an analytical distribution function just like the coherent state?

Oh yes, of course. The squeezed state's photon-number distribution in the Fock basis, I don't like writing it down, it's not a very pretty distribution to write, but you can find it in textbooks; it involves the Hermite polynomials, and there is definitely a nice closed-form expression for it. The thermal state also has that kind of distribution: the thermal state's photon distribution is what is called the Bose-Einstein distribution, or, in the language of statistics, a geometric distribution. The squeezed state's number distribution is a much more complicated-looking distribution in Hermite polynomials.
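The two classical-state distributions just named are simple enough to verify numerically: the coherent state's photon number is Poissonian (variance equals the mean), while the thermal state's Bose-Einstein/geometric distribution has variance n̄(n̄ + 1), always larger than the mean. A small sketch using the standard textbook formulas:

```python
import math

def coherent_pn(nbar, n):
    # Poisson: P(n) = e^{-nbar} * nbar^n / n!  (log form avoids overflow)
    return math.exp(n * math.log(nbar) - nbar - math.lgamma(n + 1))

def thermal_pn(nbar, n):
    # Bose-Einstein / geometric: P(n) = nbar^n / (nbar + 1)^(n + 1)
    return (nbar / (nbar + 1))**n / (nbar + 1)

def mean_var(pn, nbar, cutoff=500):
    m = sum(n * pn(nbar, n) for n in range(cutoff))
    v = sum(n * n * pn(nbar, n) for n in range(cutoff)) - m * m
    return m, v

print(mean_var(coherent_pn, 5.0))  # variance == mean: Poissonian
print(mean_var(thermal_pn, 5.0))   # variance = nbar*(nbar + 1) = 30: super-Poissonian
```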
distribution in termite polynomial so so if we converge the current state
so so if we converge the current state and the thermostats do we get the uh
and the thermostats do we get the uh suppose onion so coherence State and
suppose onion so coherence State and thermal State they are both classical
thermal State they are both classical States so if you mix them let's say on a
States so if you mix them let's say on a beam splitter what you will get is a
beam splitter what you will get is a displaced thermal state in fact on the
displaced thermal state in fact on the two outputs of the beam splitter you
two outputs of the beam splitter you will get a classically correlated
will get a classically correlated gaussian state with a non-zero mean it
gaussian state with a non-zero mean it will not be a squeeze State a squeeze
will not be a squeeze State a squeeze state is a non-classical state
state is a non-classical state so if I put a squeeze State through a
so if I put a squeeze State through a beam splitter you will get an entangle
beam splitter you will get an entangle set on the two outputs and that's what
set on the two outputs and that's what I'm going to go into the next part of my
I'm going to go into the next part of my talk on Quantum sensor networks but with
talk on Quantum sensor networks but with thermal State and a coherent State you
thermal State and a coherent State you can all kinds of things that you can
can all kinds of things that you can generate with data and linear Optics
generate with data and linear Optics beam Splitters are states that are only
beam Splitters are states that are only classically correlated not entangled
classically correlated not entangled okay
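Both claims (classical inputs give only classical correlations; a squeezed input gives entanglement) can be checked at the level of Gaussian covariance matrices. The sketch below is my own construction, with vacuum noise normalized to 1: it applies a 50:50 beam-splitter symplectic to a coherent-plus-thermal input and to a 6 dB squeezed-vacuum-plus-vacuum input, then evaluates the Gaussian PPT criterion, where a smallest partial-transpose symplectic eigenvalue below 1 signals entanglement.

```python
import numpy as np

# Quadratures ordered (x1, p1, x2, p2); vacuum noise normalized to 1, so a
# thermal state has quadrature variance 2*nbar + 1, and 6 dB of squeezing
# means a factor of 10**(6/10) in quadrature variance.

def beamsplitter(t=1 / np.sqrt(2)):
    # Symplectic matrix of a beam splitter with transmissivity t**2
    r = np.sqrt(1 - t**2)
    return np.array([[ t, 0, r, 0],
                     [ 0, t, 0, r],
                     [-r, 0, t, 0],
                     [ 0, -r, 0, t]])

Omega = np.array([[0, 1, 0, 0],
                  [-1, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, -1, 0]])   # symplectic form

def ppt_min(V):
    # Smallest symplectic eigenvalue of the partially transposed covariance
    # matrix; >= 1 means separable for two-mode Gaussian states (PPT test).
    P = np.diag([1.0, 1.0, 1.0, -1.0])     # partial transpose flips p2
    return np.abs(np.linalg.eigvals(1j * Omega @ (P @ V @ P))).min()

S = beamsplitter()
nbar = 5.0
V_classical = S @ np.diag([1.0, 1.0, 2*nbar + 1, 2*nbar + 1]) @ S.T
s = 10 ** (6 / 10)                          # 6 dB squeezing factor
V_squeezed = S @ np.diag([1/s, s, 1.0, 1.0]) @ S.T

print(ppt_min(V_classical))  # >= 1: classically correlated, not entangled
print(ppt_min(V_squeezed))   # < 1: the two outputs are entangled
```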
I think I saw a question there.

For the measurement, you have chosen the specific projectors; what is the reason behind choosing those specific projectors? And it is also an entangled basis, that |ψ1⟩ and |ψ2⟩.

For the NOON-state example I showed? Yes, I just picked one measurement basis, just to show that that measurement's CFI equals the QFI, just to show you an example. When you are finding optimal receivers, for many problems in communication and sensing that I work with, it's actually art slash intuition. The quantum tools give you a good measurable value: this is the best you can do. From there, finding how you design a measurement that achieves it, there is no one recipe. So I just wanted to show one example calculation. For the coherent-state one it was very obvious as to why we should have used that measurement. For the NOON-state example, that measurement was the obvious one, because "NOON or not NOON", meaning the orthogonal state |N,0⟩ minus |0,N⟩, was the obvious way to get at that φ, because that φ appears in a Bernoulli distribution probability, which is something that is easy to estimate. But other than that there's no real intuition behind it.

Thanks.

But an actual measurement that will achieve it: actually, a beam splitter followed by photon detection would have achieved it. I just did not want to do the calculation, because interfering Fock states through a beam splitter is much more difficult to do in this setting. I just wanted to say that there are many, many measurements for the same problem that will achieve the QFI, which are sometimes not even the SLD measurement.
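For the NOON-state example above, the Bernoulli structure makes the claim easy to check numerically. With the standard model |ψ(φ)⟩ = (|N,0⟩ + e^{iNφ}|0,N⟩)/√2 (assumed here, not copied from the slides), projecting onto (|N,0⟩ + |0,N⟩)/√2 succeeds with probability p(φ) = (1 + cos Nφ)/2, and the classical Fisher information of that single Bernoulli outcome works out to N², the NOON-state QFI:

```python
import numpy as np

# NOON-state interferometry sketch (assumed standard model):
# |psi(phi)> = (|N,0> + e^{i N phi} |0,N>) / sqrt(2); projecting onto
# (|N,0> + |0,N>) / sqrt(2) succeeds with probability p = (1 + cos(N*phi))/2.

def noon_cfi(N, phi, dphi=1e-6):
    p = lambda x: (1 + np.cos(N * x)) / 2
    dp = (p(phi + dphi) - p(phi - dphi)) / (2 * dphi)  # numerical derivative
    q = p(phi)
    return dp**2 / (q * (1 - q))   # Fisher information of the Bernoulli outcome

print(noon_cfi(4, 0.3))   # ~= N**2 = 16: matches the NOON-state QFI
```

The Heisenberg scaling N² appears for any φ where the derivative is nonzero, which is exactly the sense in which this particular projective measurement attains the QFI.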
Sir, my question is regarding the experimental schematic that you've shown, where you've placed two phase shifts after the beam splitter, φ/2 and minus φ/2. Is it always necessary to have φ/2 and minus φ/2?

No, that was just one example. The φ/2 and minus φ/2 arrangement is what people call a conjugate-phase interferometer; it comes about in a Sagnac-loop-based fiber-optic gyroscope. But if you have a phase on only one arm, that also is fine, like the LIGO example that Carl talked about.

But will the calculations change significantly?

Not significantly, no. In fact, you can try the calculation with just θ on one arm with a coherent state, and I think your Fisher information, instead of n, will come out to be 4n, if I remember correctly; but try it out. It will always go proportionally to n, and with the squeezed state or NOON state the constant will change, depending upon your exact setting.
And one more question. You've shown the Fisher information in terms of the derivative of |ψ⟩ and also |ψ⟩ itself, but in some places I have also seen it represented as the variance of the Hamiltonian. How are these two related?

How are they related: very good question. You must have noticed that in my talk I didn't have a Hamiltonian; I was directly writing ρ(θ), ρ(x). Where did ρ(x) come from? In a real sensing setting, well, I should not say "real" sensing, there are two types of sensors: passive sensors and active sensors. When I take this phone and take a picture with the camera, this is a passive sensor: I'm not sending any light, but I can still write down the quantum state of the photons that I'm collecting. I'll show some examples of such passive sensors tomorrow; there, you will oftentimes just write the quantum state based on the physics of your imaging system. But if I'm sending a laser-light probe or a squeezed-light probe, for a beam-reflection measurement or for the LIGO interferometer, there I can describe the action of that physical metrology tool on my probe, and physicists would often write that action as a Hamiltonian, that is, a unitary e^{iHt}, where t is the time duration over which you're acting upon the state with that Hamiltonian. Now that H will depend on θ, and e^{iHt} applied to the initial state |ψ0⟩ will give you your final state, which you can then think of as my |ψ(θ)⟩ or ρ(θ). If you calculate the variance of the Hamiltonian, it will match the formula I gave you, if you were to write it on |ψ(θ)⟩.
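This relation is easy to verify in the simplest case: for a pure probe |ψ(θ)⟩ = e^{−iHθ}|ψ0⟩, the QFI equals 4 Var(H) evaluated on |ψ0⟩. For phase sensing the generator is the number operator, so a coherent state gives QFI = 4 Var(n̂) = 4|α|², consistent with the 4n mentioned a moment ago. A sketch in a truncated Fock basis:

```python
import numpy as np
from math import exp, factorial, sqrt

# For a pure probe |psi(theta)> = exp(-i H theta) |psi_0>, the QFI equals
# 4 * Var(H) on |psi_0>.  Phase sensing: H is the number operator n-hat,
# so a coherent state |alpha> gives QFI = 4 * Var(n) = 4 * |alpha|^2.

def coherent_amps(alpha, cutoff=60):
    # Fock-basis amplitudes of |alpha> (truncated; fine for small alpha)
    return np.array([exp(-abs(alpha)**2 / 2) * alpha**n / sqrt(factorial(n))
                     for n in range(cutoff)])

alpha = 2.0                          # mean photon number nbar = 4
p = np.abs(coherent_amps(alpha))**2  # photon-number probabilities
n = np.arange(len(p))
var_n = (p * n**2).sum() - (p * n).sum()**2
qfi = 4 * var_n
print(qfi)                           # ~= 4 * nbar = 16
```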
Okay, got it. And just one last question. In the part where you showed that you were experimentally generating the squeezed state, you said that you actually squeezed the vacuum. So basically, according to me, you're not sending anything in through the crystal, so what exactly am I doing?

Okay, very good question. That could be the subject matter for a whole two-lecture series. In classical nonlinear optics there are many processes: down-conversion, sum-frequency generation, difference-frequency generation. The process that we are using in my lab is called spontaneous parametric down-conversion. What happens there is that there are three frequencies involved: let's say there is a pump frequency, and there are two more frequencies, which I call the signal frequency and the idler frequency. I put nothing in the signal and idler; I put vacuum in the signal and idler. In the pump I put a strong coherent state. Now if you do the classical nonlinear optics of the input-output theory and solve Maxwell's equations in a bulk χ(2) medium, what you will get is a coupled-mode equation in terms of the amplitudes of those three frequency modes. If you look at those coupled-mode equations, you will see that if a_s and a_i, the complex amplitudes corresponding to the signal and idler frequencies, were zero, the output a_s and a_i should also be zero; meaning, if I don't put light in those two frequencies, I should not see any light in those frequencies, and the pump stays the pump. But in the lab, when you do the experiment, you actually see light at those signal and idler frequencies. So something is happening that the classical nonlinear optics is not describing correctly. But then if you take those coupled-mode equations connecting a_s, a_i in and a_s, a_i out, and you put hats on those a's and make them annihilation operators, then the whole thing becomes a two-mode Bogoliubov transformation, or two-mode squeezing transformation. And we know that a two-mode squeezing operator acting on vacuum ⊗ vacuum will give you a two-mode squeezed state.
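The quantization step described above can be illustrated directly in a truncated Fock space: promoting the coupled-mode amplitudes to operators gives the two-mode Bogoliubov input-output relation a_s,out = cosh(r) a_s + sinh(r) a_i†, and on vacuum inputs the output signal mode already carries ⟨n⟩ = sinh²(r) photons, light from "nothing", exactly as described. A sketch (the squeezing parameter r = 0.5 is an arbitrary choice):

```python
import numpy as np

# Two-mode Bogoliubov (squeezing) input-output relation from the quantized
# coupled-mode equations:  a_s_out = cosh(r) a_s + sinh(r) a_i^dagger.
# On vacuum inputs, the signal output carries <n> = sinh(r)^2 photons.

def annihilation(dim):
    return np.diag(np.sqrt(np.arange(1, dim)), 1)

dim = 20
a = annihilation(dim)
I = np.eye(dim)
a_s = np.kron(a, I)                 # signal-mode annihilation operator
a_i = np.kron(I, a)                 # idler-mode annihilation operator

r = 0.5                             # squeezing parameter (arbitrary choice)
a_s_out = np.cosh(r) * a_s + np.sinh(r) * a_i.conj().T

vac = np.zeros(dim * dim)
vac[0] = 1.0                        # |0, 0>: vacuum in both modes
n_out = vac @ (a_s_out.conj().T @ a_s_out) @ vac
print(n_out)                        # ~= sinh(0.5)**2 ~ 0.2715
```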
But then you might still ask: where did the photons come from? You just told me that squeezed light has photons, but I did not put any photons in. That cannot happen; physics doesn't allow it. So the photons obviously came from somewhere: they came from the pump. But the pump is 10^6 times stronger than the squeezed light that comes out, so the pump depletion is very negligible, or we assume it to be very negligible: we say that we are operating in the non-depleted-pump regime. In that case you can think of it as squeezing the vacuum. But if your pump is not strong, then the depletion of the pump has to be properly taken into account, and then that three-mode Hamiltonian gives rise to very interesting non-Gaussian characteristics that Carmichael and many others in quantum optics have studied: you can generate interesting non-Gaussian states. But the regime that most people in the community use is the non-depleted-pump regime.
Okay, so the non-depleted-pump regime will give you the single photon? I mean, usually SPDC is also used for generating single photons.

Well, single photons, that's a good point: single photons are a byproduct that people make afterwards from all SPDC sources. In fact, I was visiting Professor Urbasi Sinha's lab this morning and we were discussing exactly this topic. From every SPDC and spontaneous four-wave-mixing source in the world, what comes out is a two-mode squeezed vacuum. A two-mode squeezed vacuum, if you write it down in the Fock basis, is a superposition of |0,0⟩, |1,1⟩, |2,2⟩, |3,3⟩ and so on. If you turn your pump power down, your |2,2⟩, |3,3⟩, |4,4⟩ components are very small, so what people do is stick a detector on one of the two modes; if the detector clicks, with high probability the other mode has one photon. So they use the SPDC as a means to generate a single photon. Other people use single-photon emitters, like color centers in diamond, to generate single photons, through totally different physics. But the raw state of SPDC is always squeezed light.
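The heralding argument above is quantitative: writing the two-mode squeezed vacuum with squeezing parameter r as |TMSV⟩ = √(1 − tanh²r) Σ tanhⁿr |n,n⟩, the probability that the signal mode holds exactly one photon, conditioned on an ideal idler click, is 1 − tanh²r, which approaches 1 as the pump power (hence r) is turned down. A sketch:

```python
import numpy as np

# |TMSV> = sqrt(1 - tanh(r)^2) * sum_n tanh(r)^n |n, n>
def tmsv_probs(r, cutoff=30):
    lam = np.tanh(r)
    return (1 - lam**2) * lam**(2 * np.arange(cutoff))   # P(|n, n>)

for r in (0.5, 0.1, 0.01):
    p = tmsv_probs(r)
    p_click = 1 - p[0]            # ideal idler detector: clicks for n >= 1
    p_single = p[1] / p_click     # heralded single-photon probability
    print(r, p_single)            # -> 1 as r (pump power) decreases
```

The trade-off this exposes is also why heralded SPDC sources are run at low pump power: the click rate p_click shrinks as r² while the heralded-state quality improves.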
Got it. Thank you.

Right, so I think we can take the rest of the questions out to coffee and interact with him there; we don't have too much time, we are running a little late. A round of applause for Saikat.

Thank you; we'll be back.