Researchers from Google, NASA, UCSB, and the University of Technology Sydney (UTS) have proposed a potential boundary between quantum and classical computations in a paper published in Nature Physics.
This is the first research to clearly identify benchmarks for experimental teams hoping to build near-term quantum computers that might be capable of surpassing the power of classical computers.
“The advantage offered by quantum computers is subtle. Some applications can have an exponential quantum speed-up over classical computers, while others receive no benefit at all,” said Professor Michael Bremner, Chief Investigator at the UTS node of the ARC Centre for Quantum Computation and Communication Technology.
“Understanding when quantum computers become useful is essential, especially when we are limited to using the noisy intermediate-scale devices that currently exist. Many things can affect the utility of a near-term quantum computer for a given application: the number of qubits, the required circuit depth, and the level of noise in the system are just a few. In attempting to find the frontier between classical and quantum computing, we wanted to find the smallest quantum circuits that can do something that cannot be done at all on a classical computer.”
The team extended recent breakthroughs in quantum computational complexity theory to identify potential benchmarks and then probed these with extensive classical computational modelling.
“We focussed our attention on chaotic, or random, quantum circuits because as they get bigger the amount of entanglement grows rapidly. Such circuits produce subtle non-local correlations that make them particularly difficult to model on a classical computer,” said Professor Bremner, who is also a founding member of the UTS Centre for Quantum Software and Information (UTS:QSI).
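To make the difficulty concrete, the sketch below (illustrative only, not the paper's code) shows the most direct classical attack on such circuits: a brute-force state-vector simulation of a small random circuit built from random single-qubit rotations and CZ entangling gates. The approach tracks all 2^n complex amplitudes, so its cost doubles with every qubit added.

```python
# Illustrative sketch only: brute-force state-vector simulation of a small
# random circuit (random single-qubit rotations + alternating CZ gates).
# The state vector holds 2**n complex amplitudes, so this strategy
# doubles in cost with every additional qubit.
import numpy as np

rng = np.random.default_rng(0)

def random_single_qubit_gate():
    """Return a random 2x2 unitary via QR decomposition of a complex Gaussian."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

def apply_one_qubit(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, qubit, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, qubit).reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z between two qubits (phase -1 on the |11> branch)."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

n, depth = 10, 20                       # 10 qubits is easy; 49 is not
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                          # start in |00...0>

for layer in range(depth):
    for q in range(n):                  # a layer of random single-qubit rotations
        state = apply_one_qubit(state, random_single_qubit_gate(), q, n)
    for q in range(layer % 2, n - 1, 2):  # an alternating layer of CZ gates
        state = apply_cz(state, q, q + 1, n)

probs = np.abs(state)**2                # output distribution over 2**n bitstrings
probs /= probs.sum()                    # guard against floating-point drift
print("number of amplitudes tracked:", state.size)
print("five sampled bitstrings:", rng.choice(2**n, size=5, p=probs))
```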
The UTS:QSI was established in late 2016 to develop software for quantum computing and to discover new information processing applications of quantum technologies. The UTS:QSI’s software efforts complement the other predominantly hardware-focussed programs within Australia, providing potential applications and benchmarks through the ARC Centre.
This latest research argued that chaotic circuits of 49 qubits become difficult to simulate classically at around depth 40, given realistically achievable levels of noise. By contrast, Shor’s famous algorithm for finding the prime factors of a number offers another suspected exponential quantum advantage; however, thousands of logical qubits (error-corrected qubits that behave as if noise-free) would be required to outperform the best classical algorithms for that task.
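A back-of-the-envelope calculation shows why roughly 49 qubits marks the edge for the most direct classical attack: a full state-vector simulation must store 2^n complex amplitudes. (Cleverer simulators trade memory for time, which is why circuit depth also enters the benchmark.) The figures below assume 16 bytes per amplitude (complex128).

```python
# Back-of-the-envelope memory cost of storing a full n-qubit state vector,
# assuming 16 bytes per complex128 amplitude.
for n in (30, 40, 49):
    gib = 2**n * 16 / 2**30
    print(f"{n} qubits -> {gib:,.0f} GiB for the state vector")
# 30 qubits -> 16 GiB; 40 qubits -> 16,384 GiB; 49 qubits -> 8,388,608 GiB (~9 PB)
```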
“What distinguishes this current work is its focus on a realistic benchmark tailored to near-term hardware. This will be an exciting challenge both for experimentalists and those attempting to find better classical simulations of quantum algorithms,” said Dr Ashley Montanaro of the University of Bristol Quantum Information Institute.
This theoretical research closely aligns with Google’s hardware program to develop near-term quantum processors.
“The important question for quantum computing is: What is the smallest computational task that is prohibitively hard for today’s classical computers? We propose the task of sampling from the output of random quantum circuits, which can be thought of as the ‘hello world’ program for quantum computers,” said Dr Sergio Boixo of Google’s Quantum AI team, writing on the Google Research Blog.
“We also introduce the technique of cross-entropy benchmarking, and a successful experiment at scale would demonstrate the basic building blocks for a large-scale fault-tolerant quantum computer.”
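For readers who want the flavour of cross-entropy benchmarking, the sketch below is a simplified, illustrative reading of the idea (the function and variable names are ours, not the authors’): bitstrings measured on the device are scored against the ideal output probabilities computed by a classical simulator, yielding an estimate close to 1 for a perfect device and close to 0 for an uncorrelated, noise-dominated one.

```python
# Illustrative sketch only: the idea behind cross-entropy benchmarking.
# Measured bitstrings are scored against the ideal output probabilities of
# the same circuit, computed classically while that is still feasible.
import numpy as np

def cross_entropy_fidelity(samples, ideal_probs, n_qubits):
    """Estimate fidelity from measured bitstrings (as integers) and ideal probabilities.

    Evaluates to ~1 for samples drawn from the ideal distribution and
    ~0 for samples from an uncorrelated (e.g. uniform) sampler.
    """
    log_probs = np.log(ideal_probs[samples])        # log p_ideal(x_j) for each sample
    h0 = n_qubits * np.log(2) + np.euler_gamma      # reference value for an uncorrelated sampler
    return h0 + np.mean(log_probs)

# Toy check with a synthetic Porter-Thomas-like distribution on 12 qubits.
rng = np.random.default_rng(1)
n = 12
amps = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
p_ideal = np.abs(amps)**2
p_ideal /= p_ideal.sum()

good = rng.choice(2**n, size=50_000, p=p_ideal)     # stand-in for a perfect device
noise = rng.integers(0, 2**n, size=50_000)          # uncorrelated (uniform) sampler
print(cross_entropy_fidelity(good, p_ideal, n))     # close to 1
print(cross_entropy_fidelity(noise, p_ideal, n))    # close to 0
```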
“Characterizing quantum supremacy in near-term devices” was published in Nature Physics.
The work is supported with funding from the Australian Research Council through the Centre of Excellence program.