Quantum computing is rapidly advancing, moving beyond theoretical concepts to practical applications and industrial relevance, with significant global investment and a growing demand for a skilled workforce to drive its development and adoption.
Hello everyone, welcome to the fifth rerun of our course, Introduction to Quantum Computing: Quantum Algorithms and Qiskit. I'm Dr. Anupama Ray, a senior research scientist at IBM Research India, and the technical lead for AI for Quantum at IBM Quantum. Welcome to this course. In this ungraded video, we want to discuss what it is that we are trying to do in quantum computing and why we are learning quantum today. What is the potential impact? The potential impact is quite near-term, in terms of making quantum compute industrially relevant. So what do we want to do in quantum computing? We want to move towards quantum advantage.
In the center of your screen you see a white circle. These are classically easy problems: for example, sending an email or holding a video conference, and many other problems that are easily solved by classical computing or by AI. But there exists a wide set of problems, shown in this gray ellipse, that cannot be solved by classical or even AI supercomputing, and never will be. A part of those problems are quantum-easy and can be solved by quantum computing. What is most interesting to me is the new section of problems that we were earlier not able to touch, or even to think about: quantum-easy problems that we could not address using classical computing. So across all the data-driven sciences there is a rich seam of problems, intractable to classical computing, that we will be able to attempt, design, and solve via quantum computing.
A lot of people ask when this will happen. I would say it has actually started. Organizations across industries and fields are accelerating their investment in quantum computing at an unprecedented pace. If you look at this graph, you'll see the number of active PoCs (proofs of concept) by industry, with the industry sectors color-coded. That number has been increasing: in the last three years, enterprise use-case activity in quantum PoCs has grown by over 50%.
Investment in quantum computing is accelerating worldwide. For example, in 2018 the National Quantum Initiative Act in the US authorized $1.2 billion in funding over five years. In 2019, France formalized its national quantum strategy with a budget of €1.8 billion over four years. In 2023, Germany released its quantum technology plan, investing €3.3 billion towards the development of a universal quantum computer by 2026, in an effort to build a quantum ecosystem and a quantum industry. And the list goes on with Australia, Japan, and many other countries, each creating its own quantum plan.
And so does India. India first announced its National Quantum Mission in the 2020 finance budget. A budget of ₹60 billion was then sanctioned, an investment over the course of eight years, from 2023 to 2031, with multiple goals. India should definitely have its own quantum computer, so the first goal is building one: a 50-to-1,000-qubit machine, built on any technology, but it needs to be built. In quantum communications, the goal is inter-city QKD over 2,000 km. In quantum sensing, the goal is to develop high-sensitivity sensors and materials. For this effort, hubs have been created at four different places in India and are working actively.
The next goal is workforce development, and that is part of what we are doing here. India aims to produce 100,000 trained quantum developers by 2030. In the last four runs of this course alone, we have skilled 37,000 people, and this year more than 50,000 people have already registered. In terms of startups, India currently has 60-plus quantum startups, a large number of them here in Bangalore, and the states have been very supportive. For example, the Andhra Pradesh government has announced that it will support 100-plus startups, and so on in other states. The target is 200 startups by 2030.
A large number of research papers are using the hardware and software produced by different quantum vendors. Research in quantum computing is proliferating, with IBM leading in quantum hardware and software. Since 2016, when we first made our quantum computer accessible via the cloud, the scientific community has leveraged our quantum technology, making groundbreaking advances in their fields. Over 5,000 papers have been published using our hardware and our software.
When we think about what constitutes useful quantum computing, there are three key milestones for us. The first is quantum utility, which we were able to establish in 2023. What do we mean by quantum utility? We were able to run quantum experiments reliably and repeatedly, getting consistent results across different hardware at different noise levels, results better than classical brute-force methods. In 2026, we look forward to demonstrating quantum advantage along with our partners. And in 2029, as we have already put on our roadmap, we will deliver the first large-scale fault-tolerant quantum computer.
But why do we say we shouldn't wait for fault tolerance? Understanding noise is critically important: it helps us build better error mitigation algorithms and better error correction algorithms. And fault tolerance doesn't mean there will be noiseless quantum computers; there will still be some noise. It's just that there is a fixed error budget, the noise thresholds are low, and errors are corrected in real time. To build those error-correcting algorithms, we need to start now. If we start now, we'll be able to expand our error-handling capabilities and create better fault-tolerant quantum algorithms once fault-tolerant quantum computers are built.
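As a loose classical warm-up for the error-correction ideas we revisit in week four: the core trick is encoding one logical bit redundantly, detecting errors with parity checks, and correcting by majority vote. The snippet below is a hypothetical classical sketch of that idea, not Qiskit code; quantum codes must do this without directly reading out the data.

```python
# Classical sketch of redundancy-based error handling:
# encode one logical bit into three copies, inject a single
# bit-flip error, detect it via parity checks, and correct
# it by majority vote.

def encode(bit):
    """Encode a single logical bit into three physical copies."""
    return [bit, bit, bit]

def syndrome(codeword):
    """Two parity checks comparing neighbouring copies.
    (0, 0) means no error was detected."""
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

def correct(codeword):
    """Majority vote recovers the logical bit if at most one copy flipped."""
    return 1 if sum(codeword) >= 2 else 0

word = encode(1)
word[0] ^= 1            # inject a single bit-flip error
print(syndrome(word))   # (1, 0): the first parity check fires
print(correct(word))    # 1: majority vote recovers the logical bit
```

The quantum versions measure analogous parity-check operators on ancilla qubits, which is why one logical qubit costs many physical qubits.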
Let me show you two graphs on the usage of these devices. A couple of years ago, the largest experiments you would typically see used fewer than 100 qubits. Now people are routinely using 150-plus qubits, building utility-scale experiments with different algorithmic approaches. The graph on the right shows circuit sizes, that is, the largest circuits people are able to run, which reflects both the number of gates and the circuit depth. And if you look, there are very deep circuits, with more than 8,000 two-qubit gates, being run reliably on this hardware.
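To make "circuit depth" concrete: depth counts the layers of gates that must run one after another, given that gates acting on disjoint qubits can run in parallel. A minimal sketch of that bookkeeping (the gate list is made up for illustration, not from a real device):

```python
def circuit_depth(num_qubits, gates):
    """Depth of a circuit given as a list of (qubit_a, qubit_b)
    two-qubit gates: each gate starts one layer after the latest
    layer already touching either of its qubits."""
    layer = [0] * num_qubits  # last occupied layer per qubit
    for a, b in gates:
        nxt = max(layer[a], layer[b]) + 1
        layer[a] = layer[b] = nxt
    return max(layer)

# Hypothetical 4-qubit circuit: the first two gates act on
# disjoint qubit pairs, so they share layer 1.
gates = [(0, 1), (2, 3), (1, 2), (0, 1)]
print(circuit_depth(4, gates))  # 3
```

So 8,000 two-qubit gates spread over 150 qubits can still mean a depth in the hundreds, and each extra layer gives noise more time to accumulate, which is why deep reliable circuits are notable.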
We are already seeing the first glimpses of quantum advantage as researchers and developers run circuits that test the limits of classical computing. We see these as hypotheses of quantum advantage, and this work is setting the stage for the back-and-forth from which true quantum advantage will emerge.
We see two parallel paths. The first is empirical tests of quantum usefulness, a top-down approach where quantum scientists and developers apply heuristic methods that work well for real-world problems; these explorations will help us realize quantum advantage for practical problems sooner. For example, there is the HSBC work, showing a 34% improvement in bond-price prediction with a quantum algorithm. This has been rigorously tested over a couple of months: across different hardware, on the same hardware at different times of day, and on multiple days, consistently averaging a 34% improvement over state-of-the-art classical algorithms. So that shows, empirically, that this particular approach has an advantage over the classical algorithms. Similarly, we have seen very good results from work with the Cleveland Clinic, with Moderna on an mRNA problem, with Vanguard, and so on.
The second path is this: while we are empirically arriving at these advantages, it is important to build trust that quantum computers and quantum algorithms are truly doing something beyond classical. We need to identify the kinds of quantum circuits that can offer a verifiable advantage. There is a lot of work on rigorous proofs of advantage; I've listed a bunch of papers here, and a lot more is coming. To track all of this, we have created a community-run tool alongside the Flatiron Institute, BlueQubit, and Algorithmiq for the moment, and we welcome more partners and will soon have many more. We want this tool to track progress towards quantum advantage, so that users can systematically monitor and evaluate verified demonstrations of quantum advantage. You can submit your results: this is my circuit, this is the best result I got, this was the classical resource, this was the quantum resource, and let people see how they can try to improve on it. The tracker shows how these candidates stack up against the leading classical methods.
Now, because people talk a lot about fault tolerance and all this buzz, it is hard to know what these words really mean. So let's break it down. First, what is large scale? Large scale means hundreds of qubits capable of running hundreds of millions of gates. That is the scale beyond which you cannot do classical simulation, where you want to run circuits a thousand times deeper than what is possible on today's devices.
The way we are going to get there is quantum error correction. In the fourth week of this course, we will delve a bit into error-correcting codes and error detection. These are a set of techniques that use error-correcting codes to encode quantum information not into one physical qubit but into a number of physical qubits, and there are several ways to do it. A lot of people simply take two or three physical qubits and do a parity check between them to detect errors; that won't get you to hundreds of millions of gates. If you're only detecting errors, the overhead is exponential. The same holds for error mitigation: to run that many gates, you would still need ridiculously good physical qubits, and you would need to run the circuit billions of times to get the correct answer. In fault-tolerant computation, you correct the errors in real time as they occur. So fault tolerance is all about computing capability. And when we talk about computing capability, I really want to bring this to your attention. If you are running a quantum experiment with 10 qubits, you can simulate it on your classical computer, and you will get no potential advantage, because for 10 qubits you only need about 16 KB of RAM. For 30 qubits, a modern laptop with 16 GB of RAM is enough: using your entire RAM, you can still do experiments at around 20 to 30 qubits. But the cost is exponential, so at 31 qubits you would already need two such laptops, and very quickly things get crazy. This is why using classical computers to run quantum problems is ultimately a dead end.
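Those numbers follow directly from the size of the state vector: n qubits require 2^n complex amplitudes, typically 16 bytes each in double precision. A quick back-of-the-envelope calculator:

```python
def statevector_bytes(num_qubits):
    """Memory for a full state vector: 2**n amplitudes,
    16 bytes each (double-precision complex)."""
    return (2 ** num_qubits) * 16

def human(nbytes):
    """Render a byte count in binary units for readability."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB"):
        if nbytes < 1024:
            return f"{nbytes:.0f} {unit}"
        nbytes /= 1024
    return f"{nbytes:.0f} EiB"

for n in (10, 30, 31, 49):
    print(n, "qubits ->", human(statevector_bytes(n)))
# 10 qubits -> 16 KiB
# 30 qubits -> 16 GiB
# 31 qubits -> 32 GiB   (one more qubit doubles the memory)
# 49 qubits -> 8 PiB    (supercomputer territory)
```

Each added qubit doubles the requirement, which is exactly why the laptop runs out around 30 qubits.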
Summit, built by IBM, is one of the world's most powerful supercomputers, and to simulate something around 48 or 49 qubits you would need something like Summit. If you want to simulate 60 qubits on classical devices, you would need every classical computer on Earth connected into one big device. The standard example a lot of people give is caffeine: simulating the caffeine molecule needs around 64 qubits, and classically you would need on the order of all the atoms in the world. And at 100 qubits, you would need a supercomputer about 10 trillion times larger than Frontier just to represent the state. So this is a dead end, and that is why we say you cannot use classical computers to run quantum problems, quantum algorithms, or quantum circuits.
Now, this particular chart is very interesting to me. On the y-axis is the circuit width, the number of qubits, and on the x-axis is the circuit size, the number of gates. You can see Starling here: the fault-tolerant quantum computer coming in 2029. And Blue Jay, of course, is a much larger one, coming in 2033. Blue Jay is the one in the realm of actual chemistry advantage, of Shor's algorithm, of the best quantum optimizations, and so on. Going by this chart, you would expect to see that advantage in 2033, but that assumes your algorithms will not get any better and only your hardware will improve. That's not true: algorithms will definitely get better, and hardware will also get better. So this advantage point is not going to arrive in 2033 but much earlier; we had predicted somewhere around 2026, and we are already seeing a lot of these advantage use cases. And in 2029 we will deliver the world's first large-scale fault-tolerant quantum computer, which we call Starling.
Today, what you see here is the IBM Quantum System Two; this is the one coming to India in the next six months. And this is Starling. What you will finally see is Blue Jay, which will look something like this: a system on which 2,000 logical qubits and 1 billion quantum gates can be run successfully.
We have been very clear about our roadmap, and I'll present it briefly: we have a development roadmap and an innovation roadmap, and this is our development roadmap. On the bottom layer, in black, is the hardware we have been putting out. We made our first quantum computer available for anyone in the world to access in 2016; it was a very small 5-qubit machine. From that we went to 20 qubits, then 53 qubits, and a bunch of devices since. Recently we released Nighthawk, a 120-qubit square lattice, a beautiful device able to do things far better than the 156-qubit devices can. On top of the hardware layer is Qiskit Runtime, which is about running quantum circuits on this hardware accurately and efficiently; it is very closely tied to the hardware. Above that is a set of orchestration tools that help with resource management, for example Qiskit Serverless and plugins for HPC, so that we can plan our work around quantum-centric supercomputing. On top of that sits the actual algorithm discovery: a bunch of algorithms built on the stack. Finally, all of this reaches our algorithmic researchers and application people, who do not need to understand everything in the stack below in order to use these libraries.
So this is Nighthawk. Nighthawk is a square lattice; earlier we had a heavy-hex lattice. The square lattice supports more efficient circuits with fewer gates, and it has much better information routing. We are planning to scale Nighthawk significantly by connecting chips: each chip has 120 qubits, but chips will be linked by means of classical communication and quantum communication, and then we can scale them to 1,000 qubits.
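One way to see why a square lattice helps routing: each interior qubit has four neighbours, versus two or three on a heavy-hex lattice, so fewer SWAP gates are needed to bring distant qubits together. A toy sketch of square-lattice connectivity (the 10x12 grid shape is an assumption for illustration, not Nighthawk's actual layout):

```python
def grid_edges(rows, cols):
    """Couplers of a rows x cols square-lattice device: each
    qubit connects to its right and lower neighbour."""
    edges = []
    for r in range(rows):
        for c in range(cols):
            q = r * cols + c
            if c + 1 < cols:
                edges.append((q, q + 1))     # horizontal coupler
            if r + 1 < rows:
                edges.append((q, q + cols))  # vertical coupler
    return edges

edges = grid_edges(10, 12)   # hypothetical 120-qubit layout
print(len(edges))            # 218 couplers
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
print(max(degree.values()))  # 4: interior qubits have four neighbours
```

Higher degree means more two-qubit interactions are directly available, which is the source of the "fewer gates" claim above.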
And then, finally, we will debut Starling, which will be able to run circuits with 100 million gates on 200 logical qubits, and then Blue Jay, which can run circuits on 2,000 logical qubits very reliably. In the next layers you see a lot of focus on orchestration of workloads and quantum-centric supercomputing, then resource management, and of course the discovery of new algorithms that will lead to quantum advantage, and finally applying those algorithms to applications.