Video Summary
Core Theme
This presentation explores the domain-driven application of Quantum Machine Learning (QML) across various fields, demonstrating its potential to solve complex problems more efficiently than classical methods. It highlights QML's core components, challenges, and its practical implementation in quantum state tomography, cybersecurity, climate modeling, and 5G network resource allocation.
Video Transcript
Hi everyone, welcome to another Friday webinar of our quantum research group at Stellenbosch University. Today's talk is about domain-driven applications of quantum machine learning, by Dr. Nouhaila.
Just a quick biography: Dr. Nouhaila is a research team lead at the eBRAIN Lab division and a postdoctoral associate at the Center for Quantum and Topological Systems (CQTS) at New York University Abu Dhabi. She earned her PhD in machine learning from Hassan University of Casablanca, Morocco, if I'm not mistaken, where she also completed her bachelor's and master's in physics and nanotechnologies, specializing in materials and nanomaterials.
At the moment her research focuses on quantum machine learning, quantum algorithms, and their applications in fields like finance and healthcare. She is passionate about mentoring and making quantum technologies accessible through global initiatives. We actually met two years ago at NYU Abu Dhabi and met again this year, so I thought it might be interesting to initiate a collaboration between our group and your group out there. Okay, please, the floor is yours.
Thank you. Thank you for the invitation.
Great. So today I will talk about quantum machine learning and some applications, starting with: what is quantum machine learning?
You are probably familiar with this, but quantum machine learning is a research area that explores the interplay of ideas, approaches, and techniques from quantum computing and machine learning. It starts, of course, with qubits, which are the fundamental units of quantum information. To manipulate these qubits we use quantum gates, which are operations that exploit quantum phenomena like superposition, interference, and entanglement. From these quantum gates we can build quantum circuits, which are networks of quantum gates designed to perform a specific task.

Then, what is the difference between a quantum circuit and a quantum machine learning model? When we apply a quantum circuit to perform a specific machine learning task, like classification, prediction, regression, or clustering, and we use trainable parameters in the circuit, that is when we talk about learning.

How does it work? It is just like classical machine learning. First we have classical data and we do the preprocessing classically. After cleaning and preparing the data, we start the first part of quantum machine learning, which is data encoding: we convert the classical data into a quantum state, and there are different techniques for that. Then we have the ansatz, or parameterized circuit, which is a sequence of gates; typically we use parameterized gates, with the parameters being rotation angles, because we want to train them. Then, of course, comes measurement: after processing the data through the quantum circuit, we measure the quantum system to extract the output. These measurements collapse the quantum states into classical values, which are then used for further analysis or decision making; sometimes we add classical layers to help with the classification or decision making. Finally, we have a loop of iterative refinement: the parameters of the quantum gates are adjusted to minimize a cost function, and this iterative process continues until the model converges to an optimal solution.
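To make this pipeline concrete, here is a minimal sketch (not code from the talk) of a two-qubit parameterized circuit with angle encoding, a trainable layer, and a measurement-style readout. It assumes Qiskit is installed; the feature values, parameter values, and readout choice are illustrative placeholders.

```python
# Minimal sketch of the QML pipeline described above (not the speaker's code).
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector
from qiskit.quantum_info import Statevector

n_qubits = 2
x = ParameterVector("x", n_qubits)          # placeholders for classical features
theta = ParameterVector("theta", n_qubits)  # trainable parameters

qc = QuantumCircuit(n_qubits)
# 1) Data encoding: angle encoding, one feature per qubit.
for i in range(n_qubits):
    qc.ry(x[i], i)
# 2) Ansatz: trainable rotations plus an entangling gate.
for i in range(n_qubits):
    qc.rz(theta[i], i)
qc.cx(0, 1)

# 3) Measurement: here we read out the expectation of Z on qubit 0
#    from the statevector (a simulator shortcut, no shots).
def model_output(features, params):
    bound = qc.assign_parameters(dict(zip(list(x), features)) |
                                 dict(zip(list(theta), params)))
    sv = Statevector.from_instruction(bound)
    probs = sv.probabilities([0])
    return probs[0] - probs[1]   # <Z> on qubit 0

# 4) Iterative refinement would adjust `params` with a classical optimizer
#    (e.g. gradient descent or COBYLA) to minimize a cost function.
print(model_output([0.3, 1.2], [0.1, 0.5]))
```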
And this is the key point: scalability, speed, accuracy, and versatility. But the thing about these metrics is that they do not always hold; we cannot generalize. It depends: maybe in a very specific scenario, for a very specific task, with a specific data set, we can achieve them.
This slide summarizes the core components that form the foundation of QML. On the left side we begin with data encoding, as I mentioned. The most important, or most used, methods are basis encoding, angle encoding, and amplitude encoding, and the choice depends on the problem. Basis encoding is, most of the time, not really used for application-level tasks, so we usually use either angle encoding or amplitude encoding. Amplitude encoding is useful when we are dealing with a data set with a high number of features, because we can encode 2^n features into n qubits; so if we have, say, 16 features, we can use only four qubits.
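As a rough illustration of the qubit counts involved (assuming Qiskit; the feature values below are placeholders, not the data sets from the talk):

```python
# Angle vs. amplitude encoding qubit counts (illustrative sketch).
import numpy as np
from qiskit import QuantumCircuit

features = np.random.rand(16)          # e.g. 16 classical features

# Angle encoding: one qubit per feature, each feature becomes a rotation angle.
angle_qc = QuantumCircuit(len(features))
for i, value in enumerate(features):
    angle_qc.ry(value, i)              # 16 qubits for 16 features

# Amplitude encoding: 2^n amplitudes fit into n qubits, so 16 features -> 4 qubits.
n_qubits = int(np.log2(len(features)))
amplitudes = features / np.linalg.norm(features)   # the state must be normalized
amp_qc = QuantumCircuit(n_qubits)
amp_qc.initialize(amplitudes, range(n_qubits))

print(angle_qc.num_qubits, amp_qc.num_qubits)      # 16 vs. 4
```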
Then we have the ansätze. Common choices include pre-built parameterized circuits such as RealAmplitudes and sometimes more expressive ones like quantum tensor networks, for example MPS and mirror circuits. These circuits define the variational space we explore during optimization. Based on these two components we build the models: there are QNNs, quantum graph neural networks, quantum support vector machines, quantum LSTMs, physics-informed neural networks, and quantum convolutional neural networks. These models are tailored to different types of input data and learning tasks, whether regression, classification, or prediction.

On the right side we highlight the applications where quantum machine learning shows strong promise: climate modeling; materials science, for example predicting or calculating the ground-state energy of molecules; cybersecurity; transportation, where we optimize, for example, problems such as the traveling salesman problem; finance, with market prediction or fraud detection; and healthcare. These domains can benefit from a potential quantum advantage.

Finally, at the bottom, we acknowledge the current challenges. These include algorithmic barriers like barren plateaus, issues with robustness and noise, the lack of standard benchmarks (especially when it comes to comparing classical to quantum), limitations due to current hardware, and data privacy constraints.
During this presentation I will walk you through a few of these applications in more detail, starting with quantum state tomography. Quantum state tomography can be intuitively understood as a way of reconstructing a hidden 3D object from its 2D projections, as you see on the left side. This is just an analogy: the observed objects, comprising various shapes, represent complex quantum states, and through tomography we collect projections from different angles and combine them to recover the internal structure. For quantum states or systems, we perform a series of measurements, similar to these projections, to infer the unknown state.

On the right side, the idea is illustrated with real quantum data. The first plot shows the reference, or true, quantum state. The initial guess is what we start with, often just noise or a flat assumption. Finally, through iterative reconstruction, we arrive at the final result, which closely approximates the reference. This highlights the power of reconstruction but also underscores the challenge: achieving such accuracy traditionally requires a vast number of measurements. Our motivation was to explore quantum machine learning methods that can reconstruct the state more efficiently. Of course, there are many classical techniques for this, but we focused on the quantum side, since it was unexplored at that stage.
First, we started with the variational quantum circuit (VQC) approach. As I explained, we encode the data, which here was essentially a representation of quantum states available in classical form, into quantum states. The variational circuit is then applied to generate a candidate quantum state, and the parameters of the circuit are iteratively optimized to minimize a loss function defined as one minus the fidelity between the predicted and the actual state. Importantly, half of the qubits are used for representing the mixed target state and the other half serve as auxiliary qubits. The output is then evaluated, and the fidelity is computed using a trace-based expression.
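For reference, here is a simple NumPy/SciPy sketch of such a fidelity-based loss, using the standard Uhlmann fidelity rather than the speaker's exact trace expression; the example states are toy values.

```python
# Fidelity-based loss for state reconstruction (illustrative only).
# Uhlmann fidelity: F(rho, sigma) = (Tr sqrt( sqrt(rho) sigma sqrt(rho) ))^2.
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    sqrt_rho = sqrtm(rho)
    inner = sqrtm(sqrt_rho @ sigma @ sqrt_rho)
    return np.real(np.trace(inner)) ** 2

def loss(rho_target, rho_candidate):
    # The VQC parameters are tuned to drive this toward zero.
    return 1.0 - fidelity(rho_target, rho_candidate)

# Toy example: two similar single-qubit mixed states.
rho_target = np.array([[0.98, 0.0], [0.0, 0.02]])
rho_candidate = np.array([[0.95, 0.0], [0.0, 0.05]])
print(loss(rho_target, rho_candidate))   # small but nonzero
```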
One thing to note about this model, or this approach, is the computational complexity. Here, computational complexity refers to how the required computational resources, such as time or memory, scale with the size of the input. In the context of quantum state tomography, the input size is typically characterized by the dimension of the quantum state. The VQC approach scales efficiently with the number of qubits, which is reflected in the dimension of the density matrix.
matrix. Then moving to an a second
technique that we use in this work. So
it's parameterized circuit for classical
statistics which is somehow very similar
to the first one. So here the aim is the
same is just like construct a pure
quantum state by learning both its
amplitude and the phase from the
classical measurement data which is
different from the previous one. So
specifically the frequencies of the
outcomes of our multiple poly
bases this is was our
data to do this we use like two
parameterized quantum circuits for each
construction. So one circuit is trained
to outu to output the implicit and the
other to output the phase of the quantum
state and together they define the full
complex coefficient in polar form. So
the phase and the completed then the
training objective is to minimize like
here we use the log likelihood loss
between the observed frequencies and the
model prediction using the born rule. So
we use it here to convert the quantum
circuits output into
probabilities and it tell us like the
likelihood of getting a certain
measurement outcome by squaring like the
implicit of the corresponding quantum state
state
components. So for each measurements
basis we compare like the predicted
probabilities like the one that are
calculated using the bone rule with the
observed frequencies and update the
circuit parameters
accordingly and in this study we used
two different circuits A and
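A small NumPy sketch of the Born-rule likelihood idea follows (illustrative only; the actual circuits, Pauli bases, and optimizer are not shown, and the amplitudes, phases, and counts are made-up values).

```python
# Born-rule likelihood used to fit circuit outputs to measured frequencies.
import numpy as np

def born_probabilities(amplitudes):
    # Born rule: outcome probability is the squared magnitude of its amplitude.
    probs = np.abs(amplitudes) ** 2
    return probs / probs.sum()          # guard against numerical drift

def negative_log_likelihood(amplitudes, observed_counts):
    probs = born_probabilities(amplitudes)
    counts = np.asarray(observed_counts, dtype=float)
    return -np.sum(counts * np.log(probs + 1e-12))

# Toy example: amplitude and phase outputs combined in polar form for 2 outcomes.
amp = np.array([0.8, 0.6])
phase = np.array([0.0, np.pi / 4])
psi = amp * np.exp(1j * phase)
print(negative_log_likelihood(psi, observed_counts=[640, 360]))
```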
The two circuits differ in structure and expressibility: one has more entanglement, and the other has more parameters. The computational complexity remains the same. The thing to highlight here is the difference between this technique and the previous one, since they are somewhat similar: the variational quantum circuit learns directly from the quantum state and tries to approximate the full quantum state, while this approach uses classical data, namely the measurement frequencies, and fits it to quantum behavior.
Moving to the third approach, which is a Bayesian inference approach. Here we view quantum state tomography as a problem of statistical inference. We begin with a prior distribution over the parameters that define the quantum state. Then measurements are performed on the unknown state, and the outcomes are used to update our beliefs, forming a posterior distribution. Sampling from the posterior allows us to estimate the most probable states as well as quantify the uncertainty.
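A toy grid-based sketch of this Bayesian update for a single-qubit rotation angle is shown below (illustrative; the study works with multi-qubit states and more elaborate sampling, and the counts here are made up).

```python
# Grid posterior over one rotation angle theta for the state
# cos(theta/2)|0> + sin(theta/2)|1>, updated from simulated measurement counts.
import numpy as np

thetas = np.linspace(0.0, np.pi, 500)       # parameter grid
prior = np.ones_like(thetas) / len(thetas)  # flat prior

# Observed data: counts of outcome 1 in n_shots computational-basis measurements.
n_shots, ones = 200, 58

# Born-rule likelihood of the observed counts for each candidate theta.
p1 = np.sin(thetas / 2) ** 2
log_like = ones * np.log(p1 + 1e-12) + (n_shots - ones) * np.log(1 - p1 + 1e-12)
posterior = prior * np.exp(log_like - log_like.max())
posterior /= posterior.sum()

# Point estimate and uncertainty from the posterior.
theta_mean = np.sum(thetas * posterior)
theta_std = np.sqrt(np.sum((thetas - theta_mean) ** 2 * posterior))
print(theta_mean, theta_std)
```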
This probabilistic view enhances robustness to noise and incomplete data, which is especially relevant for our case, offering both fidelity and confidence in the reconstructed states. The computational complexity is also similar, but it additionally depends on the sampling strategy and the prior model.
The last technique is quantum principal component analysis (QPCA). This is a powerful method for estimating the spectral properties of a quantum state, because it treats the density matrix not as a static object but as a generator of unitary evolution, enabling us to apply that transformation to another state. Using multiple copies of the density matrix and the SWAP operation, we simulate this evolution and apply quantum phase estimation to extract the eigenvectors and eigenvalues efficiently. This technique excels when the density matrix is low rank, as in many practical quantum tomography tasks, and allows us to reconstruct the state from its principal components; that is why it is called quantum principal component analysis. To assess how well we approximate the original state, we compute the fidelity using the Uhlmann formula. However, the advantage of this technique is most evident for quantum-native data; when applied to classical data sets, the speed-up becomes less significant due to input-model limitations. As for the computational complexity, this method offers one of the most favorable complexities, enabling efficient state reconstruction with reduced measurement overhead.
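A classical NumPy analogue of the low-rank structure QPCA exploits is sketched below (illustrative only; QPCA obtains these spectral components via phase estimation rather than by diagonalizing a classically stored matrix, and the random state here is a placeholder).

```python
# If a density matrix is low rank, a few eigenvectors reconstruct it well.
import numpy as np

rng = np.random.default_rng(0)

def random_low_rank_density_matrix(dim, rank):
    # Mix `rank` random pure states into a valid density matrix.
    weights = rng.dirichlet(np.ones(rank))
    rho = np.zeros((dim, dim), dtype=complex)
    for w in weights:
        v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        v /= np.linalg.norm(v)
        rho += w * np.outer(v, v.conj())
    return rho

rho = random_low_rank_density_matrix(dim=16, rank=2)
eigvals, eigvecs = np.linalg.eigh(rho)

# Keep only the two largest eigenpairs (the principal components).
top = np.argsort(eigvals)[-2:]
rho_approx = sum(eigvals[i] * np.outer(eigvecs[:, i], eigvecs[:, i].conj())
                 for i in top)

print(np.linalg.norm(rho - rho_approx))   # ~0, because the true rank is 2
```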
Moving to the results, starting with the first technique. Our investigation focused on the relationship between the loss value and two critical parameters, the circuit depth and the number of iterations, as shown in these two figures. The results demonstrate that the lowest loss, and hence the highest fidelity, was achieved here, and a clear trend emerged: increasing the circuit depth correlates with improved performance, suggesting that deeper circuits offer greater expressive capability for approximating quantum states, such as those of a given Hamiltonian. The algorithm achieves strong performance using a relatively small number of variational parameters and iterations; as you can see, convergence occurs from around iteration 40.
Moving to the second technique, the parameterized circuits for classical statistics, applied to three-qubit systems using the two different circuits A and B that I explained. Starting with the left side: here we observe how the average fidelity evolves with increasing circuit depth. Fidelity here quantifies how accurately the reconstructed quantum state matches the original target state; a higher fidelity implies a more precise state estimation. We see that both circuits benefit from deeper architectures, but circuit B consistently achieves slightly higher fidelity, especially at mid-range depths, indicating it might be better suited for capturing the underlying state. The plot on the right provides further insight by showing the standard deviation of the fidelity across repeated runs. This metric tells us about the consistency and stability of the circuit's performance. Here again, circuit B demonstrates lower variance across most depths, suggesting not only higher accuracy but also more reliable behavior in practice. These results highlight that circuit B is not only more expressive, because it has more parameters, but also achieves better fidelity and is more robust across trials.
Moving to the Bayesian inference, here we evaluated the performance of this approach. The posterior distribution was derived using likelihood and evidence functions, and for the parameter optimization we used, I think, the COBYLA optimizer. The left side shows the posterior distribution over the parameters theta for a four-qubit system with a depth of eight layers. These posterior beliefs are formed after the sampling process and are subsequently used to generate the new prior distribution, reflecting the adaptive nature of Bayesian methods. We then evaluated the model fidelities over 25 randomly selected quantum states. Each dot corresponds to a single state's fidelity, with the green dashed line marking the average. These results demonstrate reasonable performance, though with variation across samples, suggesting that fine-tuning the parameters and possibly reducing the circuit depth might enhance the stability and the accuracy, which means the fidelity in this case.
Moving to the final method, we applied quantum principal component analysis. Here, using a two-qubit setup, we begin with a known, randomly generated state. Then we apply quantum phase estimation, using two auxiliary qubits, to reconstruct it. For each parameter t we ran around 50 independent quantum phase estimation iterations and calculated the fidelity between the original and the reconstructed states. You can see that the average fidelity across those 50 runs exhibits periodic behavior and reaches its peak around 20, achieving the maximum average fidelity. On the right side, the plot shows the corresponding standard deviation of these fidelities for each t, reflecting how consistently the model performs. At the same optimal t we observe not only high fidelity but also reasonably low variation, yielding a fidelity of around 95%. These results confirm that this algorithm, when paired with phase estimation, is capable of producing highly accurate reconstructions if the parameters are properly selected.
Moving to another application, which is cybersecurity. Cybersecurity has become one of the most pressing issues in our digital age, and to illustrate the gravity of the situation, some facts: there were more than 2,365 reported cyberattacks impacting more than 400 million individuals, along with data breaches and different types of attacks. These numbers highlight the severity of the problem we are dealing with. Cybersecurity threats are not only increasing in frequency but also evolving, making them harder to detect.

Given these pressing issues, working on cybersecurity offers significant benefits; effective cybersecurity strategies enable precise risk management. There are different ways to approach the problem, of course, either by providing more robust quantum algorithms or quantum machine learning models, or by adding more defense mechanisms. But here it is slightly different, because we are exploring whether quantum machine learning can address, or at least mitigate, this issue in a different way: we use clustering to group the vulnerabilities in this data set by severity.
This is the methodology that we used, focusing on two quantum-enhanced k-means clustering algorithms: Q-kernel and QC-swap. First of all, we start with the data set, which is quite well known; it contains different attacks from different vendors, and what we are trying to do is cluster these attacks. Then we perform label encoding, sample imputation, and scaling using StandardScaler and MinMaxScaler techniques. This ensures that our data is properly normalized and ready for clustering. After preprocessing, we initialize the centroids for the clustering algorithms, and we explore different methods, starting with spectral clustering. This is a classical method: spectral clustering involves computing the similarity and Laplacian matrices followed by an eigendecomposition, which leads to k-means clustering on the eigenvectors, with cluster assignments based on the Euclidean distance. The focus of our methodology, of course, is on the quantum approaches. We have two, and both methods involve encoding the data into quantum states.
These quantum states are then processed to assign the clusters. In QC-swap k-means, the cluster assignments are based on the C-SWAP test, a quantum-specific method, while in Q-kernel the assignments are determined using a quantum kernel, which measures the similarity between the quantum states. Both algorithms iteratively update the centroids and the cluster assignments until they converge, similar to classical k-means.
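A simplified sketch of such an overlap-based cluster assignment follows, computed from statevectors with NumPy (on hardware the overlap would be estimated with a SWAP/C-SWAP test; the encoding and centroid values here are placeholders).

```python
# Assign a point to the nearest centroid by quantum-state overlap (illustrative).
import numpy as np

def angle_encode(features):
    # One-qubit-per-feature angle encoding: tensor product of (cos(x/2), sin(x/2)).
    state = np.array([1.0])
    for x in features:
        state = np.kron(state, np.array([np.cos(x / 2), np.sin(x / 2)]))
    return state

def overlap(a, b):
    # Fidelity |<a|b>|^2 between two pure states; higher means more similar.
    return abs(np.vdot(a, b)) ** 2

def assign(point, centroids):
    psi = angle_encode(point)
    scores = [overlap(psi, angle_encode(c)) for c in centroids]
    return int(np.argmax(scores))       # most similar centroid wins

centroids = [np.array([0.1, 0.2]), np.array([2.5, 2.8])]
print(assign(np.array([2.3, 3.0]), centroids))   # -> 1
```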
This quantum approach offers significant performance improvements, and we evaluate these improvements using performance metrics such as the silhouette score and the Davies-Bouldin index.
Then, in this step, which is very important, we need to decide how many clusters we want, using what is called the elbow method. It determines the optimal number of clusters by plotting the WCSS (within-cluster sum of squares) against the number of clusters and seeing where adding clusters stops helping much. Since the curve flattens here, we focused on four clusters, and after running with four clusters we needed to define how to explain them; that is what we did here.
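A short scikit-learn sketch of the elbow computation is shown below (illustrative; the features here are random placeholders rather than the vulnerability data set from the talk).

```python
# Elbow method with classical k-means; the same idea precedes the quantum clustering.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))            # placeholder for preprocessed features

wcss = []
for k in range(1, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    wcss.append(km.inertia_)             # inertia_ = within-cluster sum of squares

# Plot k vs. WCSS and look for the "elbow" where the curve flattens;
# in the talk this pointed to four clusters.
for k, w in enumerate(wcss, start=1):
    print(k, round(w, 1))
```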
This figure presents the clustering outcomes for the four techniques. The clusters are visualized in a 2D space, with the vendor on the x-axis and the product on the y-axis, and the X markers indicating the cluster centers. Starting with k-means and spectral clustering, we observe a clear but relatively broad separation of the clusters; you can still understand the clusters, but they are less compact and more spread out compared to the quantum ones. The quantum methods take advantage of the quantum approach we are using, which leads to an enhanced separation between the clusters, indicating better-defined groups of vulnerabilities. The most important part is how we can explain these clusters.
For example, the first cluster concentrates on critical vulnerabilities related to Microsoft products, highlighting areas of high risk; anything related to Microsoft falls into this first cluster. The second cluster groups medium-severity vulnerabilities from a diverse range of enterprise software vendors and network solutions; it comes from vendors of different software and has a medium severity compared to the others. The third cluster focuses on high-severity vulnerabilities affecting products from Adobe, Cisco, and Google, indicating significant risk associated with these products. The last cluster comprises vulnerabilities with high to medium severity, particularly from Microsoft and Oracle products. These interpretations are based on analyzing the results and trying to figure out what the clusters mean, and they make sense compared to the existing literature.

The strategic implications of these findings are significant, because quantum clustering provides better-defined clusters and also reveals strategic areas of concern. By identifying and grouping these vulnerabilities more precisely, organizations can prioritize their cybersecurity efforts effectively, particularly in the areas where the risks are concentrated.
Moving to the results: these are the performance metrics for the different clustering algorithms. We used three different metrics. The silhouette score measures how similar an object is to its own cluster compared to other clusters, that is, whether a data point fits better with its own cluster or with another one; a higher silhouette score indicates better-defined and well-separated clusters. The Davies-Bouldin (DB) index evaluates the average similarity ratio of each cluster with its most similar cluster; a lower DB index is better, indicating more distinct clusters. Then we have the Calinski-Harabasz (CH) index, which assesses the ratio of between-cluster dispersion to within-cluster dispersion; a higher CH index suggests more compact and well-separated clusters.
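All three metrics are available in scikit-learn; here is a minimal sketch with synthetic data and plain k-means standing in for the quantum variants.

```python
# Computing the three clustering metrics described above (illustrative).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                             calinski_harabasz_score)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, size=(100, 2)) for c in (0.0, 4.0, 8.0, 12.0)])

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

print("silhouette (higher is better):", silhouette_score(X, labels))
print("Davies-Bouldin (lower is better):", davies_bouldin_score(X, labels))
print("Calinski-Harabasz (higher is better):", calinski_harabasz_score(X, labels))
```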
From these results: starting with QC-swap k-means, it achieved the highest silhouette score and the best DB index, reflecting well-defined and separated clusters. It also performed strongly on the CH index, indicating robust clustering performance. Quantum kernel k-means also delivered competitive results, matching QC-swap on the silhouette score and slightly outperforming it on the CH index. Classical k-means showed a lower silhouette score and a higher DB index, indicating less optimal cluster formation compared to the quantum approaches, and the same holds for spectral clustering. In summary, the quantum-enhanced methods, particularly QC-swap k-means, outperform the classical approaches across all evaluated metrics, but this holds only for this specific data set.
Moving to another application, which is climate modeling. As artificial intelligence continues to evolve rapidly, one of the major concerns we face is the rising energy demand of AI. Since 2012, the energy consumption associated with AI technologies has been increasing exponentially, doubling approximately every 3.4 months. This creates a significant challenge, not just from a technological standpoint but also with profound environmental impacts.

To counter this issue, the concept of green AI has emerged. It emphasizes the development of energy-efficient algorithms and sustainable models that can reduce energy consumption. However, achieving true efficiency is more complex: the Jevons paradox highlights a crucial challenge, namely that making systems more efficient can sometimes lead to even greater energy use because of the expanded range of applications. And this is what happens even in quantum computing sometimes: to get higher or better performance, we use more depth and more resources. So it is not always straightforward to design an algorithm that achieves efficient performance with lower energy or lower computational resources.

Here we explore how quantum machine learning might provide a promising solution to balance AI advancement with sustainability, using climate modeling as an example.
We propose the AQ-PINN model, which stands for adaptive quantum physics-informed neural networks. This model uniquely integrates quantum computing techniques with physics-informed neural networks, targeting applications in climate modeling. The primary goal is to improve the predictive accuracy in fluid dynamics while, of course, significantly reducing the computational cost and the associated carbon footprint, so that we can say we are using less energy.
We use a data set derived from numerical solutions of the incompressible Navier-Stokes equations. These solutions provide us with the essential data, such as spatial coordinates, velocity fields, and time. We optimize data efficiency by flattening and reorganizing the data, selecting 30,000 key data points for training while using the entire spatial grid for testing. Some key features of the method we are proposing: we use a quantum multi-head self-attention mechanism, which enhances model performance, and we also employ quantum tensor networks to increase the representational capacity. Our model achieves a substantial reduction in parameters, which translates into a lower carbon footprint.
This is an overview of our architecture. At its core, our model uses physics-informed neural networks, with the physical laws, expressed as partial differential equations (PDEs), embedded directly into the learning framework. This integration allows us to address both the forward and the inverse problems effectively. As I mentioned, one of the key innovations is the quantum multi-head self-attention: this module encodes the classical data into quantum states, computes the attention scores, and aggregates them through tensor networks, enhancing the model's representational capabilities. Our loss function is a hybrid one, because it combines a data-driven loss, specifically the mean squared error, with a physics-driven component ensuring that the output adheres to the governing physical equations. Finally, for optimization we use a quasi-Newton approach, further refined using a super-convergence technique to ensure efficient and rapid training convergence.
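A plain PyTorch sketch of such a hybrid data-plus-physics loss is given below (illustrative only; it omits the quantum attention layers and uses a 1D Burgers-type residual as a stand-in for the Navier-Stokes terms, with a hypothetical small network).

```python
# Hybrid PINN loss: data MSE + PDE-residual penalty (classical sketch).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
nu = 0.01  # viscosity in the stand-in PDE u_t + u u_x - nu u_xx = 0

def pde_residual(x, t):
    x.requires_grad_(True)
    t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

def hybrid_loss(x_data, t_data, u_data, x_col, t_col):
    # Data-driven term: MSE on the measured points.
    u_pred = net(torch.cat([x_data, t_data], dim=1))
    data_loss = torch.mean((u_pred - u_data) ** 2)
    # Physics-driven term: PDE residual at collocation points.
    physics_loss = torch.mean(pde_residual(x_col, t_col) ** 2)
    return data_loss + physics_loss
```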
These are the results we got. Our model achieved a significant parameter reduction compared to classical models that were otherwise very similar; instead of the quantum layers, they have classical ones. Specifically, the different quantum variants, since we used different quantum tensor network ansätze, demonstrated parameter reductions as high as 63.29%. For the test-loss comparison, our model still delivered comparable or even superior test loss relative to the classically enhanced PINNs.

The impact on climate modeling, which is the application, is noteworthy: by reducing the computational demands, we minimize carbon emissions, making the model highly sustainable for environmental applications. On the right side you can see visual predictions for the current flow and the corresponding residuals. These plots highlight the precision and effectiveness of the model, capturing complex flow dynamics accurately while reducing the computational overhead.
Moving to the last application, which is resource allocation for 5G networks. In 5G networks, one of the core challenges is efficient resource allocation, specifically how to dynamically assign limited resources, like bandwidth, transmission power, and frequency spectrum, to a large number of users and services that have different and rapidly changing quality-of-service requirements. These resources must be distributed in a way that satisfies constraints such as low latency while balancing energy efficiency and fairness among users. Of course, there are many classical methods for solving this resource allocation problem, but they incur a lot of computational complexity, especially in large-scale scenarios, and they also have difficulties modeling the nonlinear and high-dimensional interactions between the network parameters.

So our motivation is to explore quantum machine learning, specifically variational quantum regression (VQR), to see how it can be used to optimize the resource allocation strategies.
This is our method. We use variational quantum regression, which maps classical 5G network parameters to optimal resource allocation values. The task is regression, which is somewhat different from the other tasks: it refers to learning a continuous relationship between the input features, here user density, signal strength, and bandwidth demand, and a target output, the optimal resource allocation. Traditional regression models estimate a coefficient for each feature; in our case, these coefficients are learned as parameters of a quantum circuit, and the workflow is like any quantum machine learning model, as I explained in the beginning. It starts with data encoding: here we use the ZFeatureMap, a pre-built function from Qiskit that uses Hadamard and phase gates and belongs to the angle-encoding family; this allows the circuit to represent nonlinear patterns by modulating the quantum phase. Then we have the ansatz, the RealAmplitudes ansatz, also a pre-built function from Qiskit; it consists of parameterized rotations and entangling CNOT gates that explore the parameter space.
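A minimal Qiskit sketch of composing these two pre-built blocks follows (the depths, training loop, and data are not shown and are assumptions).

```python
# Composing the described VQR circuit from Qiskit's pre-built blocks (illustrative).
from qiskit.circuit.library import ZFeatureMap, RealAmplitudes

n_features = 3                      # user density, signal strength, bandwidth demand

# Data encoding: ZFeatureMap applies Hadamards plus data-dependent phase rotations.
feature_map = ZFeatureMap(feature_dimension=n_features, reps=1)

# Ansatz: RealAmplitudes alternates trainable RY rotations with CNOT entanglers.
ansatz = RealAmplitudes(num_qubits=n_features, reps=2)

# Full parameterized circuit: encode the features, then apply the trainable layers.
circuit = feature_map.compose(ansatz)
print(circuit.num_parameters)       # input parameters plus trainable weights
```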
This hybrid quantum-classical structure enables us to perform regression in a high-dimensional space, capturing complex relationships between the network load and traffic conditions and, of course, the optimal resource allocation strategies.

This is our methodology, or the algorithm. We begin by selecting the most relevant features, followed by a standard train/test split. The core idea is to identify the optimal circuit depth: we try different numbers of layers, as I will show in the results, and loop through multiple depths, training the VQR model and computing the mean squared error after each run; the configuration with the lowest MSE is retained. Once the best depth is determined, we retrain the model and evaluate it using classical regression metrics, specifically MSE, RMSE, and mean absolute error (MAE). These metrics quantify how accurately the model predicts the optimal allocation.
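A rough sketch of that depth-search loop is shown below (illustrative; `train_vqr` is a hypothetical helper standing in for building and fitting the VQR model at a given ansatz depth, which is not shown here).

```python
# Depth search: pick the ansatz depth with the lowest test MSE (illustrative).
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def select_best_depth(X, y, depths=range(1, 9)):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    best_depth, best_mse = None, float("inf")
    for depth in depths:
        model = train_vqr(X_train, y_train, depth)   # hypothetical: fit VQR at this depth
        mse = mean_squared_error(y_test, model.predict(X_test))
        if mse < best_mse:
            best_depth, best_mse = depth, mse
    return best_depth, best_mse
```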
And these are the results. On the left side we analyze eight different circuit configurations, each with increasing depth. It shows that depth seven achieves the lowest objective function value, suggesting it provides the best balance between model expressivity and trainability. On the right is the convergence behavior using the optimal depth: the objective function, as you see, rapidly decreases and stabilizes around the 125th iteration, indicating that the optimizer effectively located a well-performing parameter region for the 5G resource allocation.

And this is a scatter plot comparing the predicted values with the actual allocations. The data points closely follow the red diagonal of correct predictions, and this alignment confirms that the model is learning a meaningful relationship between the network features and the allocation outcomes.
However, there are minor deviations, and we still need to improve the algorithm to remove them; they might reflect under-represented samples or local nonlinearities not fully captured by the model. On the right side we analyze the residuals, the differences between the actual and predicted values. The histogram, overlaid with a smooth density curve, shows that most prediction errors are small and cluster around zero, which is desirable. However, the leftward skew indicates a slight underestimation bias. In practical terms, this means the model tends to allocate slightly fewer resources than required in some scenarios. This is a useful diagnostic, as it can guide further tuning, rebalancing the training data, or adjusting the circuit depth. Together, these plots validate the model's overall strength while pointing to areas for refinement.
To conclude: during this presentation I explored a set of quantum machine learning techniques tailored to different real-world challenges. We started with quantum state tomography, where methods like VQC, Bayesian inference, and QPCA demonstrated varying trade-offs in fidelity and efficiency, showing that quantum machine learning based strategies can reconstruct quantum states with high accuracy. In the context of climate modeling, we introduced our model, a hybrid quantum-classical model integrating quantum attention into physics-informed neural networks; it achieved a significant reduction in parameter count without sacrificing performance, pointing towards a more sustainable approach for complex physical simulation. Turning to cybersecurity, we proposed a quantum-enhanced clustering framework that somewhat surpassed the classical methods, although we cannot generalize that, enabling more strategic threat prioritization using the earlier data set. We also demonstrated how variational quantum regression can be applied to optimize resource allocation in 5G networks, and the results showed low prediction errors and smooth convergence behavior across different circuit depths.

Overall, these contributions underline the versatility of quantum machine learning and its promise across applications ranging from environmental modeling to network optimization and digital security. Looking ahead, future research should continue to address scalability, quantum circuit optimization, and the development of meaningful metrics to assess robustness and interpretability in applied quantum machine learning. And lastly, yes, comparing classical to quantum is important, but for that we need to define the metrics first.

Before I conclude, I'm excited to share that we are organizing a workshop as part of the Deep Learning Indaba: the QML for Africa workshop, which will take place in Kigali, Rwanda on August 22. This workshop will bring together researchers, students, and anyone interested in or working on quantum machine learning and quantum computing across the African continent. Everyone is welcome to participate; scan the QR code to learn more. And thank you for your attention.
Thank you so much for your interesting talk. Now we have time for questions, in person and online. There was a question in the chat.
it basically differ from classical model
like artificial neuron as well? I'm not
sure like which model we are talking
about but in general quantum machine
learning it's something very similar to
what we are doing in classical just
example the quantum instead of the
neurons and the classical liars. So we
have that can be used
to yeah so instead of bits we are using
cubit. This is not a proper
uh comparison but somehow this cubit has
like more uh or like they have some
quantum physics properties that they are
Yeah, I think all the phen quantum
phenomena that's the difference and of
course like from the
performance that's what like everyone
quantum machine learning is trying to
explore we call like potential advantage
how is it better than the classical or
even the question is like can we compare
but somehow it's very similar to the
classical because we are using to
evaluate the models we are using the
classical metrics that comes from the machine.
Hello, thanks for the talk. Can you comment on how you find the centroids when doing spectral clustering? How does that work? Because you plotted them in those figures of yours; how do you actually calculate the centroids for the spectral plots?
Yes. So regarding that, this is what we call quantum architecture search. It is very hard, because we are essentially dealing with a black box, and even if you start from the equations to represent exactly what you want to do, once you start performing the experiments it often does not lead to what you want. So at some point you keep trying and narrowing down the search. That is why right now a lot of people are focusing on building automatic tools that design or search for the best architecture automatically instead of doing it manually. For example, you specify the metric you want, say a circuit with 90% accuracy; then either you provide all the pre-built ansätze or quantum circuits that are already available, or you try to construct your own: you specify a set of gates, generate random circuits from those gates, and keep testing until you find the best one. But of course there are the notions we call trainability and expressibility. Making your circuit more expressible means adding more layers and more entanglement, and there are ongoing studies on what actually contributes most to expressibility. More expressible means the solution space is more fully covered and it is easier to find your solution, but how that affects the performance is also an open question. So technically there is no single answer: you know a couple of techniques, you have the literature and the different existing works, and based on that you customize, either starting from a pre-built circuit and customizing it based on the performance you are getting.
Okay. And I can add one more thing: right now there are some groups working on building data sets of circuits. So, for example, we do not have to worry about designing the circuits, because we have a data set of different circuits and we just need to search for the best circuit among whatever is available in that data set. This is also one of the techniques.
Okay. Any other questions here, or online? I don't see any other questions, and it's 3 p.m., so we should end the session. Thank you so much again for your interesting talk, and hopefully this talk will soon be available on YouTube.
I hope so. Okay, thank you so much and have a great day. Okay. Thanks. Thank you.