The Case Against Quantum Computing

Ya’akov

Joined Jan 27, 2019
6,035
How are you determining sufficient power consumption? If we take the upper bound of computational power usage as the black hole threshold (Bekenstein bound), and the lower bound as the Landauer limit, then I think we can all agree that human brains are nowhere near either limit.

In terms of time, I've never heard of a lower bound or minimum clock speed condition. (I suppose the potential heat death of the universe is an effective lower bound on clock speed.) But given that human brains can indeed compute, it would seem that our clock speeds satisfy any possible minimum requirements.
Because when we use conventional computational methods to even approximate the behavior that seems to give rise to consciousness, the power consumption is orders of magnitude larger. Additionally, the apparent clock rate of the brain is insufficient in the same comparison.
 

bogosort

Joined Sep 24, 2011
694
Because when we use conventional computational methods to even approximate the behavior that seems to give rise to consciousness, the power consumption is orders of magnitude larger.
That discrepancy points more directly to an algorithmic issue. To use my name as an example, if the only algorithm we knew of to sort a list of elements was bogosort (i.e., shuffle the list, pick an element and if it's not in sorted order, start over), then sorting would seem to require orders of magnitude more energy than it actually does.
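To make that concrete, here's a minimal Python sketch (my own illustration, not from the thread) of bogosort next to Python's built-in sort: both produce the identical answer, but bogosort's expected cost grows roughly factorially with the list length, so its energy appetite tells us nothing about the intrinsic cost of sorting.

```python
import random

def bogosort(xs):
    """Shuffle until sorted; expected shuffle count grows roughly like n!."""
    xs = list(xs)
    shuffles = 0
    while any(a > b for a, b in zip(xs, xs[1:])):
        random.shuffle(xs)
        shuffles += 1
    return xs, shuffles

data = [3, 1, 4, 1, 5]
result, shuffles = bogosort(data)
assert result == sorted(data)  # same output as an O(n log n) algorithm
```

Same problem, same answer; the orders-of-magnitude gap in work done is entirely the algorithm's fault.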

Additionally, the apparent clock rate of the brain is insufficient in the same comparison.
I'm not sure what you mean here, insufficient for what?
 

xox

Joined Sep 8, 2017
697
I agree with you that there is very little (if any) intelligence in AI, but I disagree that accessibility to QC would somehow change that. Perhaps Penrose is correct that consciousness is a quantum phenomenon, but even if that were true, it seems orthogonal to QC.

The single, special ability -- if indeed it is special -- of QC is to leverage interference between the superpositions of n qubits to solve a problem that would otherwise require 2^n classical bits. In other words, the great promise of QC is to provide exponential speed-up to the very specific class of problems whose solutions can be found through quantum interference. Personally, this doesn't seem relevant to the challenge of creating artificial intelligence.

I think this becomes clearer with a slightly different framing: suppose we were able to create intelligent machines with classical computing, but they were exceedingly slow thinkers. Then, QC might be able to help.

It's important to remember that quantum computing does not promise magical abilities; everything that a QC can do, a classical computer can also do (just slower).
Quantum computing has been doomed since its very inception, resting on the presumption that probability density functions can somehow be used to store, process, and output information. The only catch is that it depends on this so-called "wave-function collapse" mechanism to work. Remember Schrödinger's cat? Multiple quantum systems are proposed to exist in an "entangled" state. I don't know about you, but I have a really hard time believing that.

Regardless, the din of all the fluctuations within our universe makes it unlikely that we will ever be able to reliably perform computations, much less run any sort of useful program, on such a machine. The future IMO is a hybrid of analog and digital systems.
 

bogosort

Joined Sep 24, 2011
694
Quantum computing has been doomed since its very inception, resting on the presumption that probability density functions can somehow be used to store, process, and output information. The only catch is that it depends on this so-called "wave-function collapse" mechanism to work. Remember Schrödinger's cat? Multiple quantum systems are proposed to exist in an "entangled" state. I don't know about you, but I have a really hard time believing that.
We can make classical computers that use probability densities to store, process, and output information -- the most general form of this is an analog computer -- so I don't believe that's the show-stopper. I do, however, share your skepticism that we'll be able to engineer a general-purpose QC of any significant size. Barring some revolutionary leap in quantum error correction, I wouldn't be surprised if the best we can do turns out to be the computational equivalent of dumping a peg board into soapy water to "solve" for local minima.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
9,991
What about The Traveling Salesman? ... wouldn't QC be a hell of a lot faster at solving such a problem?
https://arxiv.org/abs/1805.10928
The famous Travelling Salesman Problem (TSP) is an important category of optimization problems that is mostly encountered in various areas of science and engineering. Studying optimization problems motivates to develop advanced techniques more suited to contemporary practical problems. Among those, especially the NP hard problems provide an apt platform to demonstrate supremacy of quantum over classical technologies in terms of resources and time. TSP is one such NP hard problem in combinatorial optimization which takes exponential time order for solving by brute force method. Here we propose a quantum algorithm to solve the travelling salesman problem using phase estimation technique. We approach the problem by encoding the given distances between the cities as phases. We construct unitary operators whose eigenvectors are the computational basis states and eigenvalues are various combinations of these phases. Then we apply phase estimation algorithm to certain eigenstates which give us all the total distances possible for all the routes. After obtaining the distances we can search through this information using the quantum search algorithm for finding the minimum to find the least possible distance as well the route taken. This provides us a quadratic speedup over the classical brute force method for a large number of cities. In this paper, we illustrate an example of the travelling salesman problem by taking four cities and present the results by simulating the codes in the IBM's quantum simulator.
"This provides us a quadratic speedup over the classical brute force method for a large number of cities"

 

cmartinez

Joined Jan 17, 2007
7,749

Using a photonic quantum computer chip, researchers from Xanadu, Toronto, have absolutely obliterated the current fastest computers and algorithms in completing a tricky sampling problem. According to their paper, published in Nature, it would take supercomputers and algorithms around 9,000 years to compute – their quantum chip Borealis smashed through it in just 36 microseconds.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
9,991
In certain machine learning tasks, scientists attempt to glean information about a quantum system — say a molecule or a group of particles — by performing repeated experiments, and analyzing data from those experiments to learn about the system.

Huang and colleagues studied several such tasks. In one, scientists aim to discern properties of the quantum system, such as the position and momentum of particles within. Quantum data from multiple experiments could be input into a quantum computer’s memory, and the computer would process the data jointly to learn the quantum system’s characteristics.
I would hope so, as probing chemistry is a proposed prime use case for possible future QC machines. The current proof-of-concept experiments are limited to very specific tests for now.

https://arxiv.org/pdf/2112.00778.pdf
We envision that future quantum sensing systems will be able to transduce detected quantum data to a quantum memory and then process the stored data using a quantum computer. Lacking for now suitably advanced sensors and transducers, we have conducted proof-of concept experiments in which quantum data is directly planted in our quantum processor. Nevertheless, the robust quantum advantage we have validated highlights the potential for advancing quantum platforms to unlock facets of nature that would otherwise remain concealed.
 

bogosort

Joined Sep 24, 2011
694
What about The Traveling Salesman? ... wouldn't QC be a hell of a lot faster at solving such a problem?
You've probably heard of the P = NP question, which basically asks whether the set of easily solvable problems (P) equals the set of problems whose solutions are hard to find but easy to verify (NP). The common-sense answer, of course, is no, because some types of problems are just fundamentally more difficult than others. From everything we know about how the universe works, it makes sense that an N-body problem is much harder to compute, as N gets bigger, than the 2-body problem.

A proof that P = NP would arguably be the most important (and surprising) result in the history of science, as it essentially would mean that writing a symphony requires about the same amount of work as listening to a symphony.

Now, the traveling salesman problem (TSP), as usually formulated (find the shortest route that visits each city once), belongs to the computational class of problems called NP-hard. These are at least as difficult as any problem in NP, and they have a special property: an efficient solution to any single NP-hard problem automatically yields efficient solutions to every NP problem. In other words, a polynomial-time solution to the TSP would be world-changing proof that P = NP. Calculating the orbits of every celestial body in the galaxy would be as easy as calculating the orbit of the Moon around Earth.

Needless to say, most experts don't believe that QCs will help in finding an efficient solution to the TSP. On the other hand, we've never been able to prove that P ≠ NP, so there's still a sliver of hope. But I wouldn't hold my breath.
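To see why brute force is hopeless, here's a minimal exact TSP solver: it enumerates every tour starting from city 0, so the work grows like (N−1)! and becomes infeasible somewhere around a dozen cities. The 4-city distance matrix is a made-up example for illustration.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact TSP by enumerating all tours from city 0: O((n-1)!) tours."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Hypothetical symmetric 4-city distance matrix
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]

best_len, best_tour = tsp_brute_force(D)  # best_len == 18 for this matrix
```

Four cities means only 6 tours, but 20 cities already means ~1.2 × 10^17 of them; no known quantum algorithm removes that exponential wall.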
 

bogosort

Joined Sep 24, 2011
694
The QC hype (and the hot ball of money that follows it) has led to lots of snake oil. Unfortunately, very few media outlets are equipped to tell the difference between legitimate research and a scam. So we get ridiculous headlines and articles like this.

Here's a helpful question to hold when you come across such claims: if the answer would take 9,000 years to calculate on a supercomputer, then how do they know the quantum computer got the right answer?

As for Xanadu's claims, the guy who invented the technique they use (boson sampling) thinks the company is a scam.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
9,991
https://www.science.org/content/art...-can-beat-google-s-quantum-computer-after-all

Ordinary computers can beat Google’s quantum computer after all
Superfast algorithm put crimp in 2019 claim that Google’s machine had achieved “quantum supremacy”

The researchers calculated the output pattern for 1 million of the 9 quadrillion possible number strings, relying on an innovation of their own to obtain a truly random, representative set. The computation took 15 hours on 512 GPUs and yielded the telltale spiky output. “It’s fair to say that the Google experiment has been simulated on a conventional computer,” says Dominik Hangleiter, a quantum computer scientist at the University of Maryland, College Park. On a supercomputer, the computation would take a few dozen seconds, Zhang says—10 billion times faster than the Google team estimated.

The advance underscores the pitfalls of racing a quantum computer against a conventional one, researchers say. “There’s an urgent need for better quantum supremacy experiments,” Aaronson says. Zhang suggests a more practical approach: “We should find some real-world applications to demonstrate the quantum advantage.”
https://forum.allaboutcircuits.com/threads/the-case-against-quantum-computing.154610/post-1433947

 

Thread Starter

nsaspook

Joined Aug 27, 2009
9,991
A repeat post but it looks like IBM was correct.
https://www.ibm.com/blogs/research/2019/10/on-quantum-supremacy/
Quantum computers are starting to approach the limit of classical simulation and it is important that we continue to benchmark progress and to ask how difficult they are to simulate. This is a fascinating scientific question.

Recent advances in quantum computing have resulted in two 53-qubit processors: one from our group in IBM and a device described by Google in a paper published in the journal Nature. In the paper, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.

Because the original meaning of the term “quantum supremacy,” as proposed by John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t, this threshold has not been met.

This particular notion of “quantum supremacy” is based on executing a random quantum circuit of a size infeasible for simulation with any available classical computer. Specifically, the paper shows a computational experiment over a 53-qubit quantum processor that implements an impressively large two-qubit gate quantum circuit of depth 20, with 430 two-qubit and 1,113 single-qubit gates, and with predicted total fidelity of 0.2%. Their classical simulation estimate of 10,000 years is based on the observation that the RAM memory requirement to store the full state vector in a Schrödinger-type simulation would be prohibitive, and thus one needs to resort to a Schrödinger-Feynman simulation that trades off space for time.
 