A new measurement method indicates whether the concept for a quantum computer can be extended to a large number of quantum bits.

Benchmarking scalability and performance of quantum computers

Researchers at the University of Innsbruck and the Institute for Quantum Computing (IQC) in Waterloo, Canada, have demonstrated a new method for benchmarking different quantum computer platforms.

Quantum computers offer a fundamentally more powerful way of computing, thanks to quantum mechanics. Compared to a conventional digital computer, a quantum computer can solve certain types of problems far more efficiently. However, qubits, the basic processing units of a quantum computer, are fragile; any imperfection or source of noise in the system can cause errors that lead a quantum computation to an incorrect result.

Gaining control over a small-scale quantum computer with just one or two qubits is the first step in a larger, more ambitious endeavour. A larger quantum computer may be able to perform increasingly complex tasks, like machine learning or simulating complex systems to discover new pharmaceutical drugs. Engineering a larger quantum computer is challenging; the spectrum of error pathways becomes more complicated as qubits are added and the quantum system scales.

To scale or not to scale

Characterizing a quantum system produces a profile of its noise and errors, indicating whether the processor is actually performing the tasks or calculations it is being asked to do. To understand the performance of an existing quantum computer on a complex problem, or to scale a quantum computer up by reducing its errors, it is first necessary to characterize all significant errors affecting the system.

Researchers at the University of Innsbruck, led by Prof. Rainer Blatt and Dr. Thomas Monz, are developing a prototype quantum computer in which information is stored in single trapped atoms and manipulated with laser pulses. Their Canadian collaborators, Prof. Joseph Emerson and Dr. Joel Wallman at the Institute for Quantum Computing, specialize in rigorous mathematical methods for quantifying and verifying errors in quantum computers.

The researchers developed a method, called cycle benchmarking, to assess all relevant error rates affecting a quantum computer. They implemented the new technique on the ion-trap quantum computer at the University of Innsbruck and found that the error rates do not increase as the quantum computer is scaled up.
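The decay-fit idea behind benchmarking protocols of this kind can be illustrated with a short sketch: the cycle of gates under test is repeated m times inside randomized sequences, a measured expectation value (here, a survival probability) is recorded as a function of m, and an exponential is fitted whose decay parameter yields an error rate per cycle. The sketch below is not the team's analysis code; the sequence lengths, the probabilities, and the randomized-benchmarking-style conversion to an error rate are placeholder assumptions for illustration only.

# Illustrative sketch of the exponential decay fit used in benchmarking
# protocols of this kind. All numbers are made-up placeholders, not data
# from the Innsbruck experiment.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical sequence lengths (number of repeated cycles) and the
# average survival probabilities measured for each length.
lengths = np.array([2, 4, 8, 16, 32])
survival = np.array([0.98, 0.96, 0.92, 0.85, 0.73])

def decay(m, A, p):
    # Standard exponential decay model F(m) = A * p**m.
    return A * p**m

(A, p), _ = curve_fit(decay, lengths, survival, p0=(1.0, 0.99))

# Convert the decay parameter p to an average error rate per cycle for a
# 2-qubit register (d = 2**n); this is the usual randomized-benchmarking-style
# conversion, used here purely as an illustration.
n_qubits = 2
d = 2 ** n_qubits
error_per_cycle = (d - 1) / d * (1 - p)

print(f"decay parameter p = {p:.4f}, estimated error per cycle = {error_per_cycle:.4f}")

In an actual experiment such decays are estimated for many randomized sequences and measurement settings, and the fitted decay parameters are combined into an overall figure of merit for the cycle of interest.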

“Cycle benchmarking is the first method for reliably checking if you are on the right track for scaling up the overall design of your quantum computer,” said Joel Wallman. “These results are significant because they provide a comprehensive way of characterizing errors across all quantum computing platforms.” 

“We were particularly happy to see that the error rate does not increase with the system size,” said Alexander Erhard from the University of Innsbruck. “This result gives us confidence that we are not missing a major issue that prevents us from scaling up to larger systems.”
