The Case Against Quantum Computing

Thread Starter

nsaspook

Joined Aug 27, 2009
13,086
https://spectrum.ieee.org/tech-talk/computing/networks/the-practical-limits-of-quantum-cryptography

https://blog.trailofbits.com/2018/10/22/a-guide-to-post-quantum-cryptography/
Post-quantum cryptography is the study of cryptosystems which can be run on a classical computer, but are secure even if an adversary possesses a quantum computer. Recently, NIST initiated a process for standardizing post-quantum cryptography and is currently reviewing first-round submissions. The most promising of these submissions included cryptosystems based on lattices, isogenies, hash functions, and codes.
 

WBahn

Joined Mar 31, 2012
29,978
My understanding is that for many categories of classical cryptography, the effect of full-blown quantum-computing-enabled attacks will be to effectively cut the key length in half. Thus, for those algorithms, all that needs to be done is to double the key length, and the algorithms will have the same computational protection against quantum computers as they currently have against classical attack computers.

Many (most?) of the main public-key algorithms will be rendered ineffective by quantum computing (of sufficient capacity), but there are other public-key algorithms that fall into the same category as most of the symmetric algorithms, requiring a doubling or so of the key lengths.
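For a rough sense of the symmetric-key numbers: the usual reference here is Grover's search algorithm, which finds one item among N candidates in roughly sqrt(N) quantum queries. A back-of-the-envelope sketch in Python (the function name is my own, purely illustrative):

def grover_equivalent_bits(key_bits):
    # searching 2**n keys takes ~2**(n/2) Grover iterations, so an n-bit
    # key offers only ~n/2 bits of security against quantum search
    return key_bits / 2

print(grover_equivalent_bits(128))  # 64.0 -- why a 128-bit key looks weakened
print(grover_equivalent_bits(256))  # 128.0 -- why doubling the key length restores the margin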
 

MrAl

Joined Jun 17, 2014
11,389
To bring QC down to earth, I think an analogy would be calculating in the time domain vs. the frequency domain.
In the time domain we solve for one value at one point in time, but in the frequency domain, in theory, we can solve for all time.
Thus we get many results in the frequency domain at the same time but only one in the time domain, so the advantage is obvious.

I had also thought about the applicability of QC, and it seems there is another parallel analogy: the difference between parallel computing and non-parallel computing. To put it another way, some problems lend themselves to parallel computing and others simply do not.

A real test would be a chess program in QC that could beat all other high-rated chess programs running on reasonably fast equipment.
It would be interesting, very interesting, to see how deep the QC program could go in ply depth. It would be really amazing if it could see right through to the endgame, or to the end itself, although reading out the results could probably take longer than anything we could really do, I bet.
Chess has been a subject of study for expert systems, so if it does well, that means chess is dead and nobody has a job anymore in any field, because every field would be solved :)
 

bogosort

Joined Sep 24, 2011
696
To bring QC down to earth, I think an analogy would be calculating in the time domain vs. the frequency domain.
In the time domain we solve for one value at one point in time, but in the frequency domain, in theory, we can solve for all time.
Thus we get many results in the frequency domain at the same time but only one in the time domain, so the advantage is obvious.
Can you give an example of such a calculation? As time and frequency are Fourier duals, one side doesn't have any more "information" than the other. While it's certainly true that certain operations are easier to perform after a Fourier transform -- e.g., convolution in one domain is equivalent to multiplication in the other -- this fact applies to both domains equally.
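For concreteness, a minimal numpy sketch (variable names are mine): circular convolution computed directly in the time domain agrees with pointwise multiplication carried out in the frequency domain.

import numpy as np

rng = np.random.default_rng(0)  # fixed seed for repeatability
x, h = rng.random(64), rng.random(64)
# circular convolution, computed directly in the time domain
direct = np.array([sum(x[k] * h[(n - k) % 64] for k in range(64)) for n in range(64)])
# the same operation as pointwise multiplication in the frequency domain
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real
print(np.allclose(direct, via_fft))  # True: same information, different basis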

I had also thought about the applicability of QC, and it seems there is another parallel analogy: the difference between parallel computing and non-parallel computing. To put it another way, some problems lend themselves to parallel computing and others simply do not.
If you mean that QC is like parallel computing strictly in the sense that only a small subset of problems are capable of quantum speedup -- just as only a small subset of problems are parallelizable -- then you're 100% correct. To be clear, however, we should note that QC is not in any other way like parallel computing. A common misconception is that quantum speedup happens because quantum computers try every possible solution in parallel, which is definitely not the case.
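To illustrate the misconception with a toy numpy sketch (a bare-bones simulation, not real hardware): prepare a uniform superposition over four basis states and "measure" it via the Born rule. You get one random outcome, not all four answers at once.

import numpy as np

amplitudes = np.full(4, 0.5)                 # uniform superposition of |00>,|01>,|10>,|11>
probabilities = np.abs(amplitudes) ** 2      # Born rule: each outcome has probability 0.25
print(np.random.choice(4, p=probabilities))  # measurement yields ONE random result

The actual speedups come from interference: algorithms like Shor's arrange the amplitudes so that wrong answers cancel and the right one is likely to be measured, and only certain problem structures permit that.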

A real test would be a chess program in QC that could beat all other high-rated chess programs running on reasonably fast equipment.
It would be interesting, very interesting, to see how deep the QC program could go in ply depth. It would be really amazing if it could see right through to the endgame, or to the end itself, although reading out the results could probably take longer than anything we could really do, I bet.
Chess has been a subject of study for expert systems, so if it does well, that means chess is dead and nobody has a job anymore in any field, because every field would be solved :)
It's an interesting idea, and if possible would reveal that chess positions have some deep structure beyond the obvious stuff. The challenge would be figuring out how to prepare the system to represent chess positions. I suspect that's a harder problem than figuring out optimal chess play using classical computing.
 

MrAl

Joined Jun 17, 2014
11,389
Can you give an example of such a calculation? As time and frequency are Fourier duals, one side doesn't have any more "information" than the other. While it's certainly true that certain operations are easier to perform after a Fourier transform -- e.g., convolution in one domain is equivalent to multiplication in the other -- this fact applies to both domains equally.


If you mean that QC is like parallel computing strictly in the sense that only a small subset of problems are capable of quantum speedup -- just as only a small subset of problems are parallelizable -- then you're 100% correct. To be clear, however, we should note that QC is not in any other way like parallel computing. A common misconception is that quantum speedup happens because quantum computers try every possible solution in parallel, which is definitely not the case.


It's an interesting idea, and if possible would reveal that chess positions have some deep structure beyond the obvious stuff. The challenge would be figuring out how to prepare the system to represent chess positions. I suspect that's a harder problem than figuring out optimal chess play using classical computing.
As to the first paragraph, if we look at a simple time function:
y=A*sin(w*t)

when we input a value for 't' we get only one solution; at t = 0, for instance:
y=A*sin(w*0)=0

but in the frequency domain it is taken to be a continuous wave that goes on forever and is given simply by:
y=A

or in the Laplace domain:
Y(s)=(w*A)/(w^2+s^2)

So it is like we are getting the entire solution in one shot rather than having to go through every single time value. That's because we can look at it as being the entire solution for all time (y=A): wherever we look, we see the same thing, A.
It's like all the solutions are computed already, because A is a constant.
I could explain more if needed.

As to the last paragraph, yes, it would be interesting to see any structure that turned up.
Yes, I agree, setting up the solution even in a regular language like C is not easy, because we want it to be fast yet still cover every possible combination. This leads to a brute force along these lines:
for (k = 1; k <= Kn; k++)        /* king's candidate moves */
  for (q = 1; q <= Qn; q++)      /* queen's candidate moves */
    for (r = 1; r <= Rn; r++)    /* rook's candidate moves */
      /* ...and so on, one nested loop per piece */
which would require one loop for every piece, leading to a ton of combinations after just a few moves.
Add to that the complexity that not every piece can move into every empty board position, and that's what really makes it hard.
That's where stored opening books and stored endgame solutions and heuristics come into play.
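To put rough numbers on that explosion, here is a quick Python sketch, assuming the commonly quoted average of about 35 legal moves per chess position:

positions = 1
for ply in range(1, 9):
    positions *= 35  # assumed average branching factor
    print(f"ply {ply}: ~{positions:.2e} positions")  # ~2.25e12 positions by ply 8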
 

bogosort

Joined Sep 24, 2011
696
As to the first paragraph, if we look at a simple time function:
y=A*sin(w*t)

when we input a value for 't' we get only one solution; at t = 0, for instance:
y=A*sin(w*0)=0

but in the frequency domain it is taken to be a continuous wave that goes on forever and is given simply by:
y=A
You have it backwards. As a function of time, sin(ωt) has infinite support -- it is, as you say, a continuous wave that goes on forever. But in the frequency domain, it has only two nonzero values, precisely at ±ω. Regardless, both representations contain exactly the same amount of information.
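A quick numerical check with numpy (a sketch; the window length is chosen so the sinusoid completes an integer number of cycles):

import numpy as np

N, f = 256, 8
y = np.sin(2 * np.pi * f * np.arange(N) / N)  # 8 full cycles across the window
spectrum = np.abs(np.fft.fft(y))
print(np.nonzero(spectrum > 1e-9)[0])  # [8 248]: bins +8 and N-8, i.e. at +/-ω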

or in the Laplace domain:
Y(s)=(w*A)/(w^2+s^2)
As in the time domain representation, y(t) = sin(ωt), the ω in the Laplace representation is a constant. And just as plugging in a value of t to y(t) gives you a single result, plugging in a value of s to Y(s) also gives you a single result. Neither representation contains more or less information than the other.
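As a trivial check (A and w chosen arbitrarily):

A, w = 1.0, 2.0
Y = lambda s: A * w / (s**2 + w**2)  # Laplace transform of A*sin(w*t)
print(Y(1.0))  # 0.4 -- one input s, one output, just like y(t)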

In linear algebraic terms, the mathematical object described by a sinusoid is independent of the choice of basis, just as the mathematical object described by a number is independent of the chosen base.

I could explain more if needed.
Please do, as I don't see the connection to QC.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,086
https://fortune.com/2019/09/20/google-claims-quantum-supremacy/
Dario Gil, head of IBM Research, advises against using quantum supremacy as a metric with which to measure progress in the field. "The experiment and the 'supremacy' term will be misunderstood by nearly all," he told Fortune.

Gil described the experiment as a highly special case "laboratory experiment" that has "no practical applications." He added, "Quantum computers will never reign 'supreme' over classical computers, but will rather work in concert with them, since each have their unique strengths."
 

bogosort

Joined Sep 24, 2011
696
We'll have to wait for the official announcement and release of the paper, but apparently this is the real deal: Google has demonstrated quantum supremacy. This is monumental.

It's a bit disheartening that such news needs to be served with anti-hype disclaimers about the practicality of such a find, as if people were expecting the latest and greatest refrigerators and stereos to be equipped with quantum computers. QC applications are highly esoteric and not of general interest. The important aspect, the thing that should be blowing everyone's mind, is that this proves that there are at least two distinct and separate models of computation.

From an information theoretic perspective, this is flabbergasting. Of all the myriad ways humans have figured out how to count and calculate things, they have all been reducible to a single model of computation: a Turing machine. Whether we use our fingers, an abacus, or a supercomputer -- whether analog or digital, in whatever base -- the problems that were hard to solve on one were guaranteed to be hard on the other because they are all reducible to the same single model of computation. We now (presumably) have proof that there is another, fundamentally different way.

This isn't like finding out that quantum physics or general relativity are different from Newtonian physics, as both reduce to Newtonian physics in their respective limits. This is more like finding out that there are two distinct types of physics going on at the same time. What's perhaps even more strange is that the two models, though fundamentally different at their core, seem to be only slightly different in their capabilities, as most types of problems are handled equally well by either. It's curious, to say the least, that the universe allows for at least two types of computational capabilities, whose differences spring up in only a few types of problems.

Kudos to the Google team if they've actually achieved this.
 

cmartinez

Joined Jan 17, 2007
8,220
We'll have to wait for the official announcement and release of the paper, but apparently this is the real deal: Google has demonstrated quantum supremacy. This is monumental.

It's a bit disheartening that such news needs to be served with anti-hype disclaimers about the practicality of such a find, as if people were expecting the latest and greatest refrigerators and stereos to be equipped with quantum computers. QC applications are highly esoteric and not of general interest. The important aspect, the thing that should be blowing everyone's mind, is that this proves that there are at least two distinct and separate models of computation.

From an information theoretic perspective, this is flabbergasting. Of all the myriad ways humans have figured out how to count and calculate things, they have all been reducible to a single model of computation: a Turing machine. Whether we use our fingers, an abacus, or a supercomputer -- whether analog or digital, in whatever base -- the problems that were hard to solve on one were guaranteed to be hard on the other because they are all reducible to the same single model of computation. We now (presumably) have proof that there is another, fundamentally different way.

This isn't like finding out that quantum physics or general relativity are different from Newtonian physics, as both reduce to Newtonian physics in their respective limits. This is more like finding out that there are two distinct types of physics going on at the same time. What's perhaps even more strange is that the two models, though fundamentally different at their core, seem to be only slightly different in their capabilities, as most types of problems are handled equally well by either. It's curious, to say the least, that the universe allows for at least two types of computational capabilities, whose differences spring up in only a few types of problems.

Kudos to the Google team if they've actually achieved this.
"While our processor takes about 200 seconds to sample one instance of the quantum circuit 1 million times, a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task," the researchers said.
The researchers estimate that performing the same experiment on a Google Cloud server would take 50 trillion hours—too long to be feasible. On the quantum processor, it took only 30 seconds, they said.
:eek::eek::eek::eek:

just... wow...
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,086
We'll have to wait for the official announcement and release of the paper, but apparently this is the real deal: Google has demonstrated quantum supremacy. This is monumental.
How do you know that the quantum computer got the answer right in the first place? Do we just trust the answer based on empirical evidence if computing the problem classically is impossible?
 

MrAl

Joined Jun 17, 2014
11,389
You have it backwards. As a function of time, sin(ωt) has infinite support -- it is, as you say, a continuous wave that goes on forever. But in the frequency domain, it has only two nonzero values, precisely at ±ω. Regardless, both representations contain exactly the same amount of information.


As in the time domain representation, y(t) = sin(ωt), the ω in the Laplace representation is a constant. And just as plugging in a value of t to y(t) gives you a single result, plugging in a value of s to Y(s) also gives you a single result. Neither representation contains more or less information than the other.

In linear algebraic terms, the mathematical object described by a sinusoid is independent of the choice of basis, just as the mathematical object described by a number is independent of the chosen base.


Please do, as I don't see the connection to QC.
Not sure what you are talking about when it comes to the sinusoid. You don't seem to have ever analysed a circuit for a constant-frequency sinusoid, because then you would have understood the point of view.
 

MrAl

Joined Jun 17, 2014
11,389
How do you know that the quantum computer got the answer right in the first place? Do we just trust the answer based on empirical evidence if computing the problem classically is impossible?
Hi,

That's an interesting question, and that is probably why they need more peer review.
However, sometimes a sanity check helps to convince even if it does not prove. This is probably what they did.
For example, if we had a new prime number generator, we could generate a set of primes, choose one HUGE number that a classical method would have a hard time with, and test just that one using classical methods. If that huge number failed, it would mean the new method failed; but if it was large enough and proved to be a real prime, then the new method would have merit. Not a proof of course, but some indication that the method might work. By contrast, in the 1980s I knew of one prime number generator that could generate a ton of primes but failed because it produced just ONE number that was not a true prime.
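A sketch of that spot check in Python, assuming sympy is available and using the known Mersenne prime 2^521 - 1 as a stand-in for the generator's huge output:

from sympy import isprime

candidate = 2**521 - 1      # a known Mersenne prime, standing in for the HUGE candidate
print(isprime(candidate))   # True: one classical confirmation builds confidence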
 