Quantum computing, particularly quantum machine learning, holds significant potential for revolutionizing financial analysis by offering exponential speedups for complex calculations and pattern recognition, although practical applications on current hardware are still under development.
Welcome to the webinar on quantum computing applications in the financial sector, which is jointly organized by the Kenan Institute at UNC Chapel Hill and the NC State IBM Q Hub. I'm your host, my name is Eric Ghysels. I'm a professor of economics at UNC Chapel Hill and a professor of finance at the Kenan-Flagler Business School.

This webinar series has actually been running for almost a year. We started out about a year ago with the first presentation by Stefan Woerner from IBM Research Zurich, one of the leading scholars in the field. It was followed by a number of webinars by Isaiah Hull from the Swedish central bank, Román Orús from the Donostia International Physics Center, Tomoki Tanaka from Keio University and Mitsubishi UFJ Financial Group, Daniel Egger from IBM Research, also in Zurich, and Shaohan Hu from JPMorgan.

We're ending the first year with a real treat. I have the privilege of hosting today Seth Lloyd from MIT. Seth Lloyd is one of the pioneers of quantum computing. He's done seminal work in this field, including proposing the first technologically feasible design for a quantum computer, and in recent years Seth has written quite a number of papers on quantum algorithms for procedures such as principal components, support vector machines, deep learning, and so on. And that's exactly the topic he's going to be talking about today: quantum machine learning algorithms for financial analysis.

Before we start, just one thing about housekeeping: at the bottom of your screen there is a Q&A button, so please feel free to write your questions during Seth's presentation. With that being said, welcome, Seth, and please share your screen; we're ready to go.
Okay, good. So I'm going to just say a little bit before I put the talk up. Can people see me? Yes, I can see you. Okay, great. Thank you, Eric, it's a great pleasure to be here. I only wish I were there, because I love Chapel Hill; it's one of my favorite places, and I'm very sorry I can't be there in person. You have good taste. Yeah, it's a pleasure to be here and to participate in this series, which looks like it's really been a very interesting one. I'm sorry I haven't caught the earlier sessions, and please put me on your list for the ones coming up.

Just as a logistical point, I really prefer it if people ask questions and put them in the Q&A while things are going on. If you can pop things into the Q&A, I'll see that the questions are there, and then at an appropriate point I'm going to stop along the way and look for questions there. Otherwise it feels too much like I'm just in my living room with all my books, talking to my screen, which I guess I am. Fortunately you are. [Laughter] Got to be somewhere during the pandemic.
All right, I'm going to share my screen to talk with you about quantum machine learning, and specifically quantum machine learning for financial systems. I'm going to first talk very generally about quantum machine learning and why it might be useful for analyzing the kinds of patterns that arise in financial systems. And I should say that we actually don't know right now whether the current generation of quantum computers could help with financial problems and financial analysis. I note that there are many seminars and conferences on the web about quantum computing for economics and finance and things like that. I want to make it clear that we don't know if it actually is going to give an advantage, but it might, and let me say why it might.

So now I'm going to share my screen; let's see if I can do this. Okay, all right, I hope you can see my beautiful PowerPoint slides here. Yes, we can. Good, great. I've actually never made a PowerPoint slide in my life, and I intend to go to my grave never having done so.
So the first part is about quantum machine learning, and then I'm going to talk about possible applications to finance. Along the way I have some more specific slides about technical aspects of the quantum algorithms, but I believe this is quite a general seminar series, and I will only go into those technical aspects if people make me. Of course it's very easy to make me do that, because I like talking about the technical aspects. But I wanted to make sure that we get the general point across, and I already gave the most important point, which is: we don't know if quantum computers are going to help with financial problems. But now let's look at why they might be able to do so.
So what are the motivations here? What do we know that quantum computers can do better or faster than classical computers? And by the way, they don't do it yet, because the current generation of quantum computers is still insufficiently powerful to take advantage of these theoretical enhancements. However, we are in a very exciting time for quantum computing, because people are trying to build quantum computers lickety-split, and Google and IBM and Microsoft and Amazon and Huawei and Tencent and all those folks are vying to be the first to corner the quantum market. As a consequence, the technologies for quantum computing are going ahead by leaps and bounds, so the kinds of things I'm talking about here might very well be available in the next few years.

Anyway, what do quantum computers do that classical computers don't? Well, quantum computers can perform linear algebra exponentially faster than classical computers. That's the basic reason, and I'll say why they can do it in just a second. It actually has to do with the fundamental nature of quantum mechanics: the mathematical formalism of quantum mechanics says the states of quantum systems are vectors in a very high dimensional space, and the dynamics of quantum systems is basically described by multiplying those vectors by gigantic matrices. So the underlying formalism of quantum mechanics is all about linear algebra, and if you're clever about it, you can actually convince quantum systems to do this linear algebra in ways that solve problems we might be interested in, such as financial problems. So that's the first aspect of quantum computers: they're really good at linear algebra, because linear algebra is what quantum mechanics is all about.

Now, it's also well known, and has been known for a long time, that quantum systems can generate patterns in data that classical systems can't. Quantum systems are famous for generating strange and counterintuitive patterns in data, for generating what Einstein called spooky action at a distance, which sounds great, and sounds even better in German, I think. They exhibit what's called entanglement. More recently people talk about phenomena such as quantum supremacy, in which quantum computers generate patterns that classical computers can't. So quantum systems have been known, for a hundred years or more, to generate patterns in data that classical systems can't.
This suggests using them for quantum machine learning, because in machine learning we have systems like deep neural networks that can recognize patterns in data, and they can also generate patterns in data. So we can use them in recognition mode, and many systems we can use in generative mode. For instance, there's a famous Google experiment where they taught a deep neural network, spontaneously, to figure out that there are lots of pictures of kittens on the internet. So it could recognize pictures of kittens, but it can also be operated in generative mode: if I gave it a picture of Eric here and said, what do you see, it would generate a kittenish picture of Eric looking very sweet and fluffy, maybe with some sharp claws there.

So in machine learning we have these systems that can generate patterns. Now, quantum systems can generate patterns in data that classical systems can't, so we can reasonably hope that quantum systems may be able to identify and classify patterns in data that are classically inaccessible. That's a reasonable hope, and this turns out to be true. So let me describe how this works.

That's the end of the introduction. I don't see any questions in the Q&A, so please pop some questions in there so that I know there are people out there. Meanwhile, while I'm waiting for the questions to show up, I'll tell you what's going on here.

So here, maybe I'll have a question. You're mostly talking about deep learning, I gather. The hidden layers of deep learning in a classical computer can reach patterns that are different, not as rich, I guess, is that sort of the argument, than the hidden-layer patterns that you could harvest from a quantum machine?

Yeah, though actually I'm not going to talk that much about deep learning. I'm going to talk about a variety of different methods, some of which can be performed by neural networks. You know, I started working on these quantum machine learning problems back in 2012, when my postdoc Patrick Rebentrost came into my office and said, hey Seth, we should look at quantum algorithms for machine learning. And I said, great, what's machine learning? Because I didn't really, I mean, I kind of knew. So he started telling me about support vector machines and kernel methods and clustering algorithms and embedding methods, which I'm going to tell you about, about how you quantize them. And then he said, let's look at deep learning. I got all excited, because I thought it meant that computers could learn stuff that was deep, like things about love and truth and happiness and the nature of the universe. And then he said, no, no, it's just these deep neural networks. But it is true that when you quantize deep neural networks, the quantum versions can provably generate patterns that you can't generate classically. What's not known is whether quantum deep neural networks can learn and classify patterns that can't be learned and classified classically. But I will give examples of cases where quantum computers can indeed do that.
Oh, and I see a question in the Q&A, and this is very exciting. Can present NISQ devices be useful for quantum machine learning? That's a very good question, and again, as I said, it's not clear; we don't know if quantum machine learning will work well on NISQ devices. NISQ stands for, I always forget what it stands for because I hate acronyms so much, but it stands for noisy intermediate-scale quantum devices, which have something like 100 qubits and can do something like 10,000 operations that aren't fully error corrected, or it refers to the special-purpose devices that people are making now. I think one can reasonably hope that one might be able to do this on NISQ devices. My guess is we're not going to have to wait around for fully fault-tolerant, scalable quantum computers to do quantum machine learning, for reasons that I'll talk about below. And that's good, because the expected time to get large-scale fault-tolerant, scalable quantum computing is something like 15 years, plus or minus never. Well, maybe not minus never, but plus never, right? Like minus seven, plus never; that's where the error bars stand. So it's good that we might be able to do this on near-term and special-purpose devices, and I'm going to argue that we ought to be able to do that.
So here, let's look at these linear-algebraic methods, and let me just mention why it is that quantum computers do better. I explained that quantum computers are good at linear algebra because quantum computers are all about linear algebra. That is, if you look right here, if you can see my pointer: if I have a vector of data x, I can map it to a quantum mechanical state. For those of you who are not quantum mechanical people, when I put something in these funny brackets right here, they're called Dirac brackets, it means whatever's in there is quantum mechanical. The logo of the Google quantum group is "Google" in these Dirac brackets. So I can take a vector with n components and map it to a quantum state over log n quantum bits. That's a tremendous enhancement in terms of compression, in terms of the representation of data. Just to give you an idea: 300 quantum bits can store a vector that has 2 to the 300 components, and I pick 2 to the 300 because 2 to the 300 happens to be about the number of elementary particles in the universe. So with 300 quantum bits I can have a vector in a vector space that's so large I would need a classical computer the size of the universe to actually register this data.
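To make the encoding concrete, here is a minimal Python sketch of amplitude encoding, under the standard convention he is describing: a length-n data vector is normalized and its entries become the amplitudes of a state on about log2(n) qubits. The vector x below is made up purely for illustration.

```python
import numpy as np

# Hypothetical data vector with n = 8 components (n should be a power of 2;
# pad with zeros otherwise).
x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])

# Amplitude encoding: normalize so the squared amplitudes sum to 1,
# as required for a valid quantum state |x> = sum_i (x_i / ||x||) |i>.
amplitudes = x / np.linalg.norm(x)

# The number of qubits needed is logarithmic in the vector length.
n_qubits = int(np.ceil(np.log2(len(x))))
print(n_qubits)                 # 3 qubits for an 8-dimensional vector
print(np.sum(amplitudes ** 2))  # 1.0, a properly normalized state

# The compression he describes: 300 qubits address a 2**300-dimensional space.
print(2 ** 300)
```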
that's potentially useful and indeed
when you find out that for these basic
linear algebra subroutines the blahs this
this
goes into matlab you look at the quantum
versions you find things like the fast
fourier transform for finding periods
and data
the classical version if my vectors are
n-dimensional they go as order
n log n that's great the quantum version
goes as order log n squared
so again it's like 2 to the 300
classical computer is never going to do
that that would take
you know the the fan is 2 to the 300 it
would take the age of the universe to do
a fourier transform on something that's
the size of the universe
quantum computer three that log n is 300
300 squared is uh is 10 to the 10 to the fifth
fifth
right so it would take ten of the fifth
steps on a quantum computer
So if you can phrase things in terms of these linear algebra subroutines, you're in good shape. Similarly, finding eigenvectors and eigenvalues of matrices is order n squared on a classical computer, or (log n) squared on a quantum computer; you detect a pattern here. Inverting matrices: if I have a big matrix A and I want to solve the equation Ax = b, so that x equals A-inverse times b, on a classical computer you can do this in time order n log n using conjugate gradient descent, which is already really amazing. On a quantum computer it's order (log n) squared again.
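As a point of reference, here is a minimal classical sketch of the two subroutines he names: eigendecomposition (the classical route to principal components) and conjugate gradient for Ax = b. These are the polynomial-in-n baselines that the polylog-in-n quantum routines are being compared against; the matrix below is random and illustrative.

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 256

# A random symmetric positive-definite matrix standing in for a covariance.
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)

# Eigenvalues/eigenvectors: dense classical methods scale polynomially in n,
# versus the (log n)^2 scaling claimed for the quantum routine.
eigenvalues, eigenvectors = np.linalg.eigh(A)

# Solving Ax = b by conjugate gradient, the classical workhorse he mentions;
# each iteration needs only matrix-vector products.
b = rng.standard_normal(n)
x, info = cg(A, b)
print(info, np.linalg.norm(A @ x - b))  # info == 0 means it converged
```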
So quantum computers are good, in principle and actually in practice, for doing these kinds of things; people have of course demonstrated experimentally all these basic linear algebra subroutines on small quantum computers. In practice we're not yet at the point where we can actually use these for machine learning, but this looks good.

And indeed, the real bottleneck, I claim, is the problem of data access. That is, if we have a humongous vector of data stored in random access memory, in RAM, the problem is to map it into this quantum state over many, many fewer qubits, and for this we need quantum random access memory. So the main technological issue in implementing the quantum machine learning algorithms I'm describing for you, for financial applications or otherwise, is in fact this data access: encoding the data in a quantum state using quantum random access memory or a variant thereof. That is the key technological issue that needs to be solved, along with, of course, building the quantum devices that Google and IBM and Amazon and all these folks are straining right now to build. But just looking at the progress they're making, I think they're doing a pretty darn good job, and I expect devices that suffice for machine learning to be around soon. Okay, good.
And I see more Q&A. What kind of machine learning, from Hema Churukoti: supervised machine learning, unsupervised machine learning, reinforcement learning, which is quantum computing mostly useful for? The answer is all of those, and I'll talk about that in just a second.
Oh, and here's one from Abhijit Rao: with a hybrid quantum-classical approach, can machine learning tasks be partitioned? Is it fair to assume that some tasks you identify are more efficient on today's quantum computers and being done in practice? That's a great question, and let me answer it right away because it's such a great question. If we look at the algorithms we've been developing for doing quantum machine learning, the ones where we simulate them and see how they work, the ones that work best are the ones where we combine quantum machine learning and classical machine learning in a hybrid fashion. You do classical pre-processing of the data in order to encode it in a form that's useful for putting into a quantum computer or a special-purpose quantum device, then you process that information on the quantum device, and then you measure at the end. So we not only anticipate, we actually find when we simulate these quantum algorithms, that a hybrid classical-quantum approach, combining for instance classical deep learning with a small quantum computer, is what works best.
Do you mind if I say something? I know you talk a lot about the linear algebra and the speedups implied by that. There is obviously also a big chunk that doesn't require all the input-output, which is the quadratic speedup on Monte Carlo simulations on quantum machines, to speed up things that we cannot compute analytically, particularly derivatives, financial derivatives. And there, I would say, progress is probably more promising in the short term.

I agree, Eric, that's very important, and thank you for bringing that up, because I'm not sure I was going to mention it; it comes at the end of this talk. Generically, for quantum computers, these linear algebra speedups are exponential, but for quantum devices in general you also get a generic quadratic speedup. Something that takes time n on a classical computer, like optimization or Monte Carlo: to simulate n steps of a classical Monte Carlo requires square root of n steps on a quantum computer, and that's potentially very useful. Of course, you're always there with a tiny quantum computer competing with a gigantic classical computer, so it's hard to know who's going to win. But in fact, one of the charges from people in the finance industry, when you talk with bankers for instance, is: we really want a speedup on Monte Carlo. I'll make sure I have enough time to talk briefly about this, but I'll say it right now in case I run out of time. It turns out that there's a generic square-root-of-n speedup for Monte Carlo, which is good, because the bankers say to me: our main task is to figure out how much supercomputer time we need between 5 p.m. and 9 a.m. to do the Monte Carlo we need to make the decisions we're going to make tomorrow. So if one could actually speed that up, that would be great.
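A minimal sketch of the scaling he is quoting: plain classical Monte Carlo needs on the order of 1/epsilon^2 samples to estimate an expectation value to accuracy epsilon, while quantum amplitude estimation brings that to roughly 1/epsilon calls, the square-root savings. The payoff function below is a made-up stand-in for a derivative.

```python
import numpy as np

rng = np.random.default_rng(42)

def payoff(z):
    # Hypothetical payoff of some derivative as a function of a risk factor.
    return np.maximum(np.exp(0.2 * z) - 1.0, 0.0)

# Classical Monte Carlo: the standard error shrinks like 1/sqrt(N),
# so accuracy epsilon costs N ~ 1/epsilon**2 samples.
for N in (10**2, 10**4, 10**6):
    samples = payoff(rng.standard_normal(N))
    print(N, samples.mean(), samples.std() / np.sqrt(N))

# Quantum amplitude estimation (Montanaro 2015) reaches accuracy epsilon
# with ~1/epsilon oracle calls, the quadratic speedup quoted in the talk.
epsilon = 1e-3
print("classical samples ~", int(1 / epsilon**2), "quantum calls ~", int(1 / epsilon))
```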
Now, recently my group came up with an exponential speedup for the analysis of Monte Carlo. We can show that you get a quadratic speedup for performing the Monte Carlo, but for actually analyzing the results of the Monte Carlo you can get an exponential advantage over classical computers. That's of course very important for people who are doing things like simulating derivatives and options on very large problems. So it's interesting. Let me move ahead then; I want to make sure we're on schedule, so let me just move to the end of the introduction here.
If you look at these basic linear algebra subroutines and their quantum versions, and you look at the kinds of machine learning and data analysis that get exponential speedups from linear algebra, you find the following. Cluster finding, like k-means for instance, but also other kinds of cluster-finding methods: these are purely linear-algebraic methods. Support vector machines, where you have clusters of data embedded in a high dimensional Hilbert space, the so-called kernel method, and your goal is to find the optimal separating plane between them: these get an exponential speedup. Just simple least-mean-squares data fitting: I have some data that lives in a high dimensional space and I want to fit some hypersurface that gives me the least mean squared error on this data. This is the original application of data-fitting methods, developed by people like Laplace back in the 1700s, and it gets an exponential speedup on quantum computers. Principal component analysis, which is also almost the first thing you do with data, gets an exponential speedup, as we'll see in a second, as do methods like gradient descent and Newton's method. All of these bread-and-butter classical data analysis methods get exponential speedups, and so does topological analysis of data,
which, I note, is something that people in finance apparently use, judging from the literature. I know it seems absurd to me now, and I wouldn't have believed this three years ago, that Morgan Stanley and Goldman Sachs would have their own dedicated quantum computing groups; if you had told me that five years ago I would have just laughed, but it is the case. I had a very nice long talk with the folks at Goldman Sachs a month or two ago about the use of quantum algorithms for topological analysis of data, applied to financial data. I gather that people basically take a time series, embed this time series in a very high dimensional space, and then analyze the topology of the vectors that make up your time series in this high dimensional space. A change in the topology of the data says bad stuff is going down, something's about to happen. I won't go into greater detail about that; I just want to note that we have papers on quantum algorithms for topological analysis of data, and they apply directly to this kind of topological analysis of financial time series. All right, let me move ahead here.
Let me actually give a very simple financial example. This was really the first financial algorithm we looked at: classical Markowitz portfolio management. And by the way, I'm aware that this is extremely simplistic and nobody does exactly this, but in fact there are many, many variants on it, and I'd just like to show you how this kind of thing works.

I'm sure people are familiar with this, but I'll go over it anyway. Suppose we have a set of vectors of historical returns. Let r_j be the returns of all the stocks on the stock market, where j indexes, I don't know, either a tick-by-tick or day-by-day record of the returns, so r_0 is the return on day zero, r_1 the return on day one, and so on. And we have an average return over the m days or m ticks that we have; of course we could weight this, giving more recent data more importance than older data, and so on. Anyway, this is our vector of historical returns, and then we have a covariance matrix C, which gives us the linear correlation coefficients between these historical returns. For instance, the (3, 5) entry of this covariance matrix is the linear correlation coefficient between stock 3 and stock 5.

Now, the kind of cool thing is that in the quantum case (whoops, what happened to my, lost my talk here, okay), in the quantum case these vectors correspond to quantum states, and again you have an exponential compression: if I have n stocks, the quantum states are over log n qubits. And then the covariance matrix just turns out to be what's called the density matrix for these vectors; it's just a statistical mixture of these quantum vectors. That's extremely useful, because then we can use these vectors of historical returns to sample from elements of the covariance matrix.
Anyway, the goal of classic Markowitz portfolio management, from the 1950s, is this: you have a vector of wealth w, whose entries tell you how much of your wealth you're going to invest in each stock, and you want to maximize your expected return for fixed risk, or, equivalently from the mathematical standpoint, if you've picked your expected return, you want to minimize your risk. The risk, via the covariance matrix, is just the variance of your return, and w dot r is your expected return. What you'd like to do, actually your goal, is to find the optimal risk-return curve. You can get higher returns for greater risk, and at a certain point, as the risk goes up a lot, you may get very high expected returns, but you probably don't want to do that. This blue shaded area is the set of obtainable predicted risk-return pairs, and you would prefer this orange point on the optimal risk-return frontier over this green point, which has lower return for the same fixed risk.

So the goal is to find this frontier, and you can see it's actually just a linear-algebraic problem. You set up the following Lagrangian: here's the thing you're trying to maximize, the expected return, and you set a Lagrange multiplier that says we want to fix the risk. Then we take a variational approach, and we just have to solve a linear equation, C times w proportional to r, where w is what we want to find, and we know r and we know C, the covariance matrix, so we just need to invert this equation. The quantum mechanical version is exactly the same, and you can use quantum matrix inversion to solve it exponentially faster on a quantum computer.
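Here is a minimal classical sketch of the calculation he describes, with made-up return data: build the mean return vector r and covariance matrix C from a return history, then solve the linear system C w = r that the variational condition from the Lagrangian produces. That solve is exactly the step the quantum matrix-inversion routine would replace.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical return history: m = 250 days of returns for n = 5 stocks.
m, n = 250, 5
returns = 0.01 * rng.standard_normal((m, n)) + 0.0004

r = returns.mean(axis=0)           # average historical return per stock
C = np.cov(returns, rowvar=False)  # covariance matrix of the returns

# Variational condition from the Lagrangian (maximize w.r with the risk
# w^T C w held fixed): C w is proportional to r, so solve C w = r and
# rescale. The quantum algorithm inverts C exponentially faster in n.
w = np.linalg.solve(C, r)
w = w / w.sum()                    # normalize so the weights sum to 1

print("weights:", np.round(w, 3))
print("expected return:", w @ r)
print("risk (variance):", w @ C @ w)
```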
Okay, so that's the kind of thing I mean. Now, we proposed this a long time ago, and it's not surprising that quantum computers are good at this. A surprising thing (excuse me, that's a spam call), a quite surprising thing that came up just in this last year, which I'd like to mention particularly to this audience, is that, inspired by these quantum methods that are exponentially better than the classical methods for this kind of problem, Ewin Tang, and then I and András Gilyén and some other folks, realized that you could actually make classical algorithms, inspired by these quantum methods, that might in fact allow you to analyze data classically and get the same kind of information: so-called quantum-inspired algorithms, if you want to look them up. Together with the folks at Xanadu, the quantum computing company, we actually simulated what happens if you run these quantum-inspired algorithms on actual return data from the stock exchange, and we found it actually worked. Again, this is a very simplistic picture right here, but there are quantum-inspired classical algorithms that might also work for this.
Keep those questions coming; I don't see any new ones, so when I see the number in the Q&A go from three to four, I'm going to answer your question. Aha, look at that, great, that's what we want to see. Okay: this portfolio optimization is limited to equality constraints; are there existing methods using inequality constraints? Ah, this is an awesome question, and the answer is: I wish. I wish there were. Thank you, Eric. For folks who are familiar with optimization, using inequality constraints is a harder problem than equality constraints. For example, the inequality version of this Lagrangian method would say: I want this risk to be less than or equal to a certain amount. And if the solution lies in the interior of the feasible space, this can be a very hard problem to solve. Here you would actually get a Lagrangian problem with the so-called Karush-Kuhn-Tucker conditions, and you have to solve the Lagrange equations, find feasible points that might be solutions, and then evaluate to see whether they are solutions. We don't know how to do that on a quantum computer. There are dynamic programming methods for quantum computers, but they don't give exponential speedups in general over classical computation. So if somebody out there in this audience, or anywhere else, could come up with a method for doing optimization with inequality constraints on a quantum computer, my hat would go off to you, and I think the world would beat a path to your door. So, great question.
Let's now turn to the question of generating data, and let's look at generative adversarial networks, or GANs, and the quantum version, quantum generative adversarial networks. I'm sure that folks know about generative adversarial networks; it's a very sweet way of training a generator to generate fake data that mimics the statistics of real data, which of course is very useful in financial markets. Let's just be realistic: you get a bunch of real data, from historical stock returns for instance, and if you can make a generator that generates data with the same statistics, then you can statistically predict what the markets are going to do, and if you're right, then in principle you make a lot of money. In practice, of course, you can just lose your shirt, but that's the way things are.
So let's look at this. Christian Weedbrook, the head of Xanadu, and I wrote the first paper on this back in 2018; now there are dozens of papers on quantum generative adversarial networks. The idea is simple. We have a discriminator, whom I'm going to call Alice, and she has to discriminate between real data and fake data, and Bob here is going to be generating the fake data. I've described this in quantum mechanical terms: rho is just a quantum state, sigma is a quantum state, and Alice wants to make a quantum measurement that will distinguish between these two. The way it works is: Alice is presented with either a piece of real data or a piece of fake data, and she makes a measurement that minimizes her probability of error; she wants to maximize her probability of successfully distinguishing the real data from the fake data, and she has an adaptive method, such as a deep neural network, that allows her to construct this measurement. So first we fix the statistics of the real data and the fake data, and she adapts her measurement to distinguish optimally between them. Then Alice fixes her measurement, and it's Bob's turn to try to fool her, by generating fake data that fools the measurement that was really good at distinguishing between the real and fake data. So he gets better and better at generating the fake data.

You see, it's an adversarial game, just phrased in quantum mechanical terms. In quantum mechanical terms everything is matrices, so the measurement corresponds to a matrix, and Alice is trying to maximize the expectation value of this matrix, which is just the probability of her distinguishing correctly between the real and fake data. She's searching over a set of positive matrices, and Bob is trying to minimize the norm between the real data and the fake data. It's an adversarial game, and you can apply game theory and Nash equilibrium theory, and you can show that it actually has a unique Nash equilibrium, in which, as you play this game many times, the fake data gets closer and closer to the real data. So this is very nice, and we have nice experimental demonstrations of it. It's actually rich: just as in the case of ordinary generative adversarial networks, even though you have a unique Nash equilibrium, you may also find instabilities in which the generator and the discriminator chase each other around the strategy space and never actually arrive at the Nash equilibrium. So interesting and entertaining things happen.
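A minimal classical toy of this adversarial loop, with made-up distributions: the real and fake data are distributions p and q over a few outcomes, Alice's best measurement is computed in closed form (guess "real" wherever p exceeds q, which succeeds with probability 1/2 plus half the total-variation distance), and Bob then nudges q toward whatever fooled her least. The update rule is an illustrative choice, not the quantum construction in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

p = np.array([0.5, 0.3, 0.15, 0.05])  # "real" distribution (fixed)
q = rng.dirichlet(np.ones(4))          # Bob's initial "fake" distribution

for step in range(200):
    # Alice's optimal discriminator for fixed p, q: answer "real" on the
    # outcomes where p > q. Her success probability is 1/2 + TV(p, q)/2.
    tv = 0.5 * np.abs(p - q).sum()
    alice_success = 0.5 + 0.5 * tv

    # Bob's turn: move the fake distribution toward the real one to fool
    # the measurement Alice just fixed (a simple illustrative update).
    q = q + 0.1 * (p - q)
    q = q / q.sum()

    if step % 50 == 0:
        print(step, round(alice_success, 4))

# At the Nash equilibrium q == p, and Alice can do no better than 1/2.
print("final q:", np.round(q, 3), "Alice success ->", 0.5 + 0.25 * np.abs(p - q).sum())
```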
But quantum mechanically this is great. Moreover, quantum mechanically, here's a case where we can actually prove that if the real data is being generated by a quantum mechanical system, then a classical system can't generate that same data. This is the phenomenon I was alluding to before, as in, for example, quantum supremacy, or this weird, funky, spooky action at a distance: these statistics can't be generated classically. So here's a case where, at least for a specific example, we can prove that if the real data is being generated quantum mechanically, you can't reproduce it classically, but you can reproduce it quantum mechanically.

I will skip over the technical version of this. For those who like quantum generative adversarial networks, we have a recent paper on a version based on the quantum version of the earth mover's distance, or Wasserstein-1 distance. The classical version of this was actually invented by Gaspard Monge in 1781; he was a buddy of Laplace's, and it's a metric between probability distributions. But I don't have time to go into the details, so let me go ahead.
So now let's look at deep quantum learning; Eric raised this question before. The quantum versions of deep neural networks are simply quantum circuits where the individual quantum logic gates have tunable parameters in them. This is like a classical neural network, where you typically have a fixed nonlinearity preceded by a linear transformation, and the weights of your deep neural network are the variables in that linear transformation. In the quantum approach to deep quantum learning, we have a network of quantum logic gates that operate on our quantum variables, and each quantum logic gate is parametrized by a bunch of different possible parameters, and these parameters are now the weights. So I get a transformation of my initial state, this U of theta, which is a function of these weights, and the goal is to adjust these thetas, the weights, to optimize the overall transformation, just as in a classical neural network, except now everything is quantum mechanical.
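A minimal NumPy sketch of such a parametrized circuit, under simple assumptions rather than any specific hardware layout: two qubits, single-qubit RY rotations whose angles theta play the role of the weights, and a CNOT as the entangling gate, so that U(theta) acting on |00> is built up by plain matrix multiplication.

```python
import numpy as np

def ry(theta):
    # Single-qubit rotation gate; the angle is a trainable weight.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def circuit(thetas):
    # One "layer": parametrized rotations on each qubit, then entanglement,
    # i.e. U(theta) = CNOT (RY(t2) x RY(t3)) CNOT (RY(t0) x RY(t1)).
    u = np.kron(ry(thetas[0]), ry(thetas[1]))
    u = CNOT @ u
    u = np.kron(ry(thetas[2]), ry(thetas[3])) @ u
    return CNOT @ u

state0 = np.array([1.0, 0.0, 0.0, 0.0])   # the initial state |00>
thetas = np.array([0.3, 1.2, -0.7, 0.5])   # the tunable weights
output_state = circuit(thetas) @ state0
print(np.abs(output_state) ** 2)           # measurement probabilities
```

Training then means adjusting thetas, by gradient descent or otherwise, to optimize some cost computed from those measurement probabilities.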
People build lots of systems to do this, and it doesn't have to be a fault-tolerant quantum computer: you can make special-purpose superconducting circuits for this, you can implement systems like this on ion traps, you can use linear and nonlinear optics. All of these have been experimentally demonstrated. Moreover, on all of these platforms there are quantum startups popping up like little quantum mushrooms out of the ground these days to try to do this kind of stuff. Let me give you an example of something where, again, we can prove that quantum computers can do things that are different from what classical computers do, and these are the embedding and kernel kinds of methods.
and these are these embedding and kernel
kinds of methods
so the idea is we've classical method
classical data
like say pictures of ants pictures of bees
bees
and pictures of cockroaches um why not
and uh
and the goal is the embedding means we
take these actual pictures
and we have an embedding map that maps
them into a very
high dimensional vector space and this
vector space is
a so-called hilbert space it has a
metric on it and that's what this kernel
refers to the kernel just gives us a metric
metric
on this high dimensional vector space
and then
what the goal of the embedding is to
embed these different
find and embedding that invents your
pictures of ants
get embedded in one corner of the
hilbert space pictures of bees get
embedded in another quarter of the
hilbert space
and pictures of cockroaches get embedded
in another corner of the hilbert space
so that these clusters are well
separated from each other
and then you can make a measurement that
will distinguish between them
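A minimal sketch of a quantum feature map and its kernel, under common assumptions rather than the specific construction in his papers: each data point x is encoded as a product state of qubits rotated by angles derived from its features, and the kernel between two points is the state overlap |<phi(x)|phi(y)>|^2, the metric on the Hilbert space that a classifier would then use.

```python
import numpy as np

def feature_state(x):
    # Encode a feature vector as a product state: qubit i is
    # cos(x_i/2)|0> + sin(x_i/2)|1>, so the whole state lives in a
    # 2**len(x)-dimensional Hilbert space.
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(xi / 2), np.sin(xi / 2)])
        state = np.kron(state, qubit)
    return state

def quantum_kernel(x, y):
    # Fidelity kernel: squared overlap of the two embedded states.
    return np.abs(feature_state(x) @ feature_state(y)) ** 2

# Made-up data points standing in for preprocessed ant/bee features.
ant1, ant2 = np.array([0.1, 0.2, 0.15]), np.array([0.12, 0.25, 0.1])
bee = np.array([2.9, 2.7, 3.0])

print(quantum_kernel(ant1, ant2))  # close to 1: same cluster
print(quantum_kernel(ant1, bee))   # close to 0: well separated
```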
Can I ask a quick question? Absolutely. We're talking about exponential speedups; you started out talking about that in the context of getting classical data into a qubit representation, with the log rule on the transformation. That's one thing, but here we're also talking about the parameter space, which is high dimensional. So we're basically talking about two things: the speedup in terms of the dimensionality of the data versus the dimensionality of the parameter space. Those are two different animals.

Absolutely, that's an excellent point. In this picture right here, given the kinds of quantum systems we have these days, we don't have very many parameters to vary. Exactly: it's as if we now have quantum computers with not many hidden layers. Exactly. That's a very good point, and thank you for raising it, Eric. So this is not a case where the number of parameters we can vary, the weight space, is large. It's in fact almost the opposite of the classical case, where you've got these humongous weight spaces with something like 10 to the 11th parameters, many more parameters than there are data points.
So when we actually simulate these embedding methods, what we do is classical pre-processing of the data. Over here, in this empty space right here, I have a deep neural network that takes our data, and the output of the deep neural network is the set of parameters that we're going to use for our quantum network. That is, the deep neural network compresses the data into a quantum state, and then we use these parameters to try to embed the data in a good way, and we co-train the hybrid classical-quantum system to perform this embedding. That's really necessary, for exactly the reason you mentioned: we just don't have quantum systems with a gazillion parameters. They're not deep quantum networks; they're shallow.
However, even for shallow quantum circuits, you can prove that the embeddings you obtain are not obtainable classically, in just the same way as for the quantum generative adversarial networks: you can't get out of a classical system the kind of statistics you can generate out of a quantum system. For embeddings in a Hilbert space, the vector space of quantum mechanical states, the statistics of the measurements you get from the embedding cannot be obtained by any kind of classical system. So at least we have a proof that the embeddings for these kernel methods are not obtainable classically.

And this is a classification method. For financial systems, I think you would probably want to apply it to situations like the topological data analysis I was mentioning: okay, this is some kind of normal market situation; we want to say, is the new situation a normal situation, or is it some kind of abnormal situation?
Let me actually finish up here. There's more on this variational quantum embedding, but let me finish with the promised exponential advantage of quantum computers for doing Monte Carlo, because this is a neat result and we're very happy with it. This addresses a generic problem in quantum computing, and this is the arXiv number for the paper if you want to look at it. There's great progress in making quantum hardware, but we don't yet have great applications for it, and as I mentioned, I've been casting around for years, because bankers keep coming and saying, we need better quantum algorithms for Monte Carlo. We've got these existing quantum algorithms which give us the generic square-root-of-n speedup, which is great, but given the competition between small quantum computers and big classical computers, it's not clear that's worth doing. Can we actually get some method that gives a kind of exponential advantage? The answer is yes, and I'm going to show that, just as we have an exponential advantage for inverting matrices, we can actually get an exponential advantage for analyzing the data in Monte Carlo.
I'm out of time here, so I'm just going to say this very briefly. Let's look at continuous-time Markov processes. About the ones we analyze in the paper: we were writing it during the pandemic, so we started off with epidemiology as our first continuous-time Markov process. Then we said, you know, we need another example, and it was before the election, so we decided to look at the spread of political opinion on complex social networks. We managed to get it posted before the election, and I'm not saying we had any influence on the outcome of the election, and I'm also not claiming we had influence on the spread of the pandemic. Our next paper on this is actually going to be on the analysis of Black-Scholes data; in case there's an upcoming financial crisis, we want to be topical about this.

So again, classical Monte Carlo gives efficient estimation of expectation values for stochastic processes, and we get a square-root-of-n speedup generically from the nice work of Ashley Montanaro in 2015.
I will just say what happens. The quantum version of this Monte Carlo operates by mapping the vector of probabilities to quantum states, and then you construct a quantum state which gives you the history of the vector of probabilities for your system. In classical Monte Carlo you just sample from this history; quantum mechanically you get a quantum superposition of all these histories. I'm sorry, time is going up here on the slide; this is how it is in quantum gravity, time goes up. And I'll simply mention that if you look at the methods I described way back at the beginning, these basic linear algebra subroutines and all the kinds of methods that give you these exponential advantages, and let me go back to this last slide right here,
you simply find that things like singular value transformations, or principal component analysis, or doing Fourier transforms to find the power spectrum of the Monte Carlo, or doing multi-scale analysis via wavelet transforms, are all things that you actually can't do with just Monte Carlo sampling. You can't do singular value transformations or principal component analysis, you can't do Fourier transforms, and if you have gigantic high dimensional spaces you can't do wavelet transforms, because the Monte Carlo just doesn't give you the data in the right form. It's fantastic for estimating expectation values of observables, as we know. But if you do the quantum version of this, to get the square-root-of-n speedup, you get your answer as a quantum state, and, as I'll simply note from the paper I mentioned, you can then do data analysis using the quantum data analysis methods described before, like these ones right here. You find that the analysis of the outcome of your quantum Monte Carlo simulation is exponentially faster than the classical analysis of the same things. So if you look at the analysis of the outcomes of Monte Carlo for this kind of thing, bread-and-butter principal component analysis, singular value transforms, power spectra, and so on, the quantum algorithms, analyzing the quantum state that is the solution of the Monte Carlo, are exponentially faster.
That's great, and this might be quite interesting once you start thinking about Markov chain Monte Carlo, where, as you said, you're not necessarily limited to thinking about expectations: you actually want to have a quantum representation of the entire set of Monte Carlo simulation outcomes, and then characterize that, look at it in a very intelligent way. I really like this.

In fact, you just summarized exactly what's going on here very well, better than I did myself; I'm probably too immersed in it. Yeah: the quantum algorithm works because you get the whole history of the classical Monte Carlo, the entire history of the probability distribution, instead of just getting samples from it. And if you have this entire history in quantum superposition, then you can do all this fancy-pants quantum analysis of it, and it gives you information that you couldn't have classically.
Very smart. And anybody who has suggestions about this: as I say, we're kind of single-mindedly moving to apply this to Black-Scholes as our next example, because it seems like the obvious simplified example to look at, since people burn a lot of classical Monte Carlo on Black-Scholes, as is my understanding. Anyway, if anybody has other applications, our minds are very, very open, because we're very, very ignorant, so we have nothing but open space in our minds.
Okay, there's another question in the Q&A, and I'm almost done. Right: what advantages or opportunities, if any, do you see in the usage of open quantum systems for Monte Carlo simulations? That is a fantastic question, from Travis Huron. Fantastic question, Travis; I don't know the answer to that one. Everybody keeps asking these great questions to which I don't know the answer. What we have here, of course, is a coherent quantum simulation of what's going on, but if I have an open quantum system, I'm going to get an incoherent, probabilistic dynamics for the system, and instead of having a pure quantum state to do this analysis on, I'm going to have a mixed quantum state. And I don't know, and I don't think people in my group know either, the extent to which we can apply these methods, or potentially other methods, to open quantum systems. I would definitely like to know the answer, so if you figure it out, let me know.
Okay, let me just stop by ending with this. Remember, I said data access is a big bottleneck. So the question is: how much information can you pop into one of these shallow quantum circuits that we have, during the coherence time? This was the question Eric was asking before, and you guys ask great questions; I've got to say, I don't know the answer to all of them. The question is, if we have these thetas, and these thetas are being determined by zapping our system with lasers or microwaves, just how many bits can we cram into a quantum system during the coherence time? The answer, of course, is not as many as into a classical computer, for sure, but I'd like to argue that in fact it's not that bad, and I'm going to end with that, because we're out of time, basically. You just do the Shannon theory: I ask what the bandwidth of my lasers or microwaves is, and how many bits per sample I have, and the answer is that the number of qubits, times the bandwidth, times the bits per sample, times the time, is the total number of bits you pop into the system. It's a very simple formula; thank you, Claude Shannon.
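A minimal sketch of that back-of-the-envelope formula; the control-line numbers below are illustrative assumptions, not the specific hardware figures behind his 10-to-the-10th estimate.

```python
# Shannon-style estimate of how many classical bits can be loaded into a
# quantum system during its coherence time:
#   bits = qubits * bandwidth * bits_per_sample * time

qubits = 33              # e.g. a small superconducting circuit
bandwidth_hz = 1e9       # assumed control bandwidth of the microwave lines
bits_per_sample = 4      # assumed resolution per control sample
coherence_time_s = 1e-4  # assumed usable coherence window

bits_loaded = qubits * bandwidth_hz * bits_per_sample * coherence_time_s
print(f"{bits_loaded:.2e} bits loadable during coherence")
```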
You just do this analysis, say for superconducting circuits, the circuits we have now, and it's going to get better in the future. You find that you can get around 10 to the 10th bits loaded into quantum states in superconducting circuits; this is for something like a 33-qubit superconducting circuit, and if you could have 100 qubits, that would be fine. For atoms or ions you can get a lot more: you can get 10 to the 12th bits, even more; you can get a terabyte into these systems right now. So that's a fair amount of information, and together with classical pre-processing of the data you make these hybrid systems: you pre-process the data classically, you pop it into your quantum system, and then you do quantum processing of the data. The quantum processing takes place on numbers like this, and the classical pre-processing reduces the large size of the classical data space into something that's palatable for a quantum computer. So I'd say that's doable.
And with that, I'm going to say thank you for your attention, and thank you for all your excellent questions. I'm happy to answer more; I think there's one.

Excellent, okay, great, go ahead. All right, one from Dylan Herman, and by the way, I'm really anticipating this, because I'm sure I'm not going to be able to answer it. When forming the Lagrangian for the portfolio optimization problem, the formulation is a quantum linear systems problem, Ax = b, where the matrix is typically not sparse. It was mentioned in a paper by Scott Aaronson that the speedup in this case is expected to be only quadratic, to get past the minimum search bound. So do we expect only a quadratic speedup for portfolio optimization, and what are your thoughts on this? Thanks.
Okay, that's an extremely good question. The point here is that if the A in Ax = b is the covariance matrix for the data, it is not sparse in general. That critique is from an older paper of Scott's. Scott Aaronson chases me around: my group says, hey, we can make these great quantum machine learning algorithms, and then Scott says, ah, but you've got to read the fine print. I think his article is actually called "Read the fine print." Yeah, it is, right. So he said, yes, but the matrix has got to be sparse, and then we found that we could actually do it if the covariance matrix is non-sparse but low rank.

Which is what people call a factor model in finance, basically. Right, exactly. Now, I should say that if it's low rank, then we can also do these quantum-inspired models as well, though they're much, much less efficient than the classical models. Moreover, in our actual analysis of financial data, we analyzed a gigantic financial data set of stock prices, and we found that the statement that the covariance matrix is low rank is definitely a marginal statement, which is kind of what you'd expect in an efficient market, right? If it were really low rank, then people would have figured this out; it's the ordinary argument that if the data really had all these patterns in it, people would have figured them out and made them go away.

This can be a long conversation, but there's something called the factor zoo in finance, where people have come up with 400-plus factors. Right, and in that case it would be fine, if you're going to apply these data analysis or embedding methods to something like 400 factors or more. So if the effective rank is just 405, we're cool with that. We have one more question. Yeah.
Okay, great. So the last question, okay, there are two more. Shaohan Hu: do you see potential advantage of quantum Monte Carlo for financial problems with highly complex underlying stochastic formulations? Yeah, for sure. We're applying it to Black-Scholes because we're applying it to something that's simple and we want to do a demo. But my understanding from talking with people who really do this, for something like option pricing for instance, and we have quantum models for generic option pricing also, is that people put a heck of a lot of bells and whistles into these models just to make them more realistic, because you want them to be as realistic as possible without overfitting the data, which is a tricky thing. But these quantum algorithms should be fine with that. You want a really complex model? That's fine; that's the whole point. The quantum models can operate on very high dimensional spaces, with all kinds of different features in them. For each one you kind of have to analyze it to see how it works, but yes, that's the whole point of doing this. So thank you, Shaohan.
And Peter Bordeaux asks: what is your opinion about using Kraus operators for the implementation of quantum HMMs? Hmm. What is HMM? Does anybody know? I don't know what HMM is. Oh, hidden Markov models; somebody just yelled out the answer here in my house. Hidden Markov models, yes. All right, good question: using Kraus operators for the implementation of quantum hidden Markov models, which are actually open quantum systems. This is a really interesting question, and again, I talked a little about open quantum systems before. This exponential advantage for analyzing Markov models, hidden or otherwise: I know we have it for the coherent case, but for the case where you have a stochastic open quantum system, I don't know the answer. I said that before, but since it's a very open question, I will say something more about it.
It is known that if you look at the quantum version of hidden Markov models, which go into things like quantum epsilon machines, where I have internal quantum states and I have Kraus operators, and for folks who don't know what Kraus operators are, they're just the mathematical mechanism for updating these quantum hidden Markov models, it is known that you can reproduce even just classical data sets with more compact quantum hidden Markov models than you can classically. There are examples where the classical hidden Markov model requires four states and the quantum one has two. What's not known, and it would be awesome, and you see I'm phrasing the question this way, is whether there are cases where the smallest classical hidden Markov model for the data has, say, 10 to the 12th states and the quantum one can be implemented on 42 qubits. In that case the quantum one would have a large number of internal states, 2 to the 42 different states, and 2 to the 42 is around 10 to the 12th, right, so it has 2 to the 42 internal states, but you can implement the quantum one much more efficiently than you can classically. That would be awesome, and if you find out that it's true, please let me know.
Yeah, on that note, since we're running out of time: we could continue this for a long time, but unfortunately we are running out of time. I said at the beginning that this was a real treat, to have you as the last speaker of the first year of our webinar series. We are taking a break over the summer and we will resume; stay tuned, all the information will be on our website. And really, thank you, Seth, it was great to have you, and we'll stay in touch.

Thank you, and my compliments to the audience for awesome questions; I mean, you know, I was unable to answer any of them, and that's great. That's why you have to come back. Good. If your questions weren't answered, just pop me an email and I'll try to answer it. I'd love to continue the conversation.

[Music]