October 8, 2013
Researchers from the Japanese National Institute of Informatics (NII, Director General: Masaru Kitsuregawa) and Nippon Telegraph and Telephone (NTT, Tokyo; President and CEO: Hiroo Unoura) have, for the first time, undertaken a detailed study of the requirements of a large-scale quantum computer architecture. The results of the study, published today in the journal Nature Communications, reveal the number of physical devices and the amount of time required to execute Shor’s algorithm for factoring numbers on a quantum computer built from realistic hardware.
“Until now, it has proven difficult to make an accurate estimate of how big and how fast a quantum computer will be in practice”, says Simon Devitt, member of the Quantum Information Science Theory (QIST) group at NII and lead author of the study. “Quantum computers will require error correction to protect quantum information from decoherence. For this reason, it has been a challenge to design a quantum computer architecture that can be analyzed in sufficient detail”.
The architecture is based on a promising form of quantum error correction known as topological error correction. “In this model, a quantum algorithm is like an intricate puzzle”, says Devitt. “The size of the puzzle indicates how many qubits and how much time is required to run the algorithm”.
The results of the study indicate that while quantum computers can, in theory, solve problems that would be practically impossible for even the fastest conventional computers, they will not solve these problems in a matter of seconds. Even with a large quantum computer, it may take a year or longer to factor a number large enough to threaten the public-key cryptosystems in widespread use.
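Shor’s algorithm attacks such cryptosystems by reducing factoring to period finding: given a number N and a guess a, the algorithm finds the period r of the sequence a^x mod N, from which two factors of N can usually be extracted. Only the period-finding step requires a quantum computer. The following Python sketch (an illustration of the reduction only, not code from the study; the function name is hypothetical, and the period is found by classical brute force rather than quantumly) shows how a known period yields the factors:

    from math import gcd

    def shor_classical(N, a=2):
        # Classical stand-in for Shor's algorithm. A quantum computer
        # finds the period r of a^x mod N exponentially faster; here
        # the period is found by brute-force search.
        g = gcd(a, N)
        if g != 1:
            return g, N // g          # lucky guess: a already shares a factor with N
        r, x = 1, a % N
        while x != 1:                 # smallest r > 0 with a^r = 1 (mod N)
            x = (x * a) % N
            r += 1
        if r % 2 == 1:
            return None               # odd period: retry with another a
        y = pow(a, r // 2, N)
        if y == N - 1:
            return None               # trivial square root: retry with another a
        return gcd(y - 1, N), gcd(y + 1, N)   # two non-trivial factors of N

    print(shor_classical(15))         # (3, 5) with the default guess a = 2

For the 2,048-bit numbers used in modern public-key cryptography, the brute-force search above is hopeless; the quantum speed-up applies precisely to that step, which is why the study estimates the devices and time needed to run it fault-tolerantly.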
“While quantum computers appear to be more powerful than classical computers, the requirements of scalable quantum computing are daunting”, says Ashley Stephens, one of the authors. “Our work is an important step in understanding these requirements, but we have also identified several areas for improvement”.
The study highlights the important role of programming and optimization in the field of quantum computing. “As quantum computers become a reality, the problem of efficiently programming these computers shouldn’t be overlooked”, says Stephens.
Earlier this year, Simon Devitt and Kae Nemoto (the head of the QIST group at NII) released meQuanics, a game designed to allow players to optimize the very circuits used in this study. “Our results highlight the need for continued refinement of how quantum algorithms will be programmed in the computers of the future, and this is one of the primary reasons why we began work on the meQuanics project”, says Nemoto.
One of the key conclusions of this work is that the most effective way to speed up quantum computers may have little to do with the actual quantum hardware. “We found that the way in which we implement Shor’s algorithm is critical”, says Bill Munro from NTT Basic Research Laboratories. “By optimizing quantum circuits and compiling the final program more carefully, we achieved the same speed-up as we would have by increasing the accuracy of the hardware by a factor of 10 or even 100”.
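A simple example conveys the flavor of such circuit optimization. One generic technique (a peephole pass; this is an illustration, not the specific compilation strategy used in the study) exploits the fact that many quantum gates are their own inverses, so two identical gates in a row multiply to the identity and can be deleted:

    # gate = (name, qubits); H, X, Z, and CNOT are each their own inverse
    SELF_INVERSE = {"H", "X", "Z", "CNOT"}

    def cancel_adjacent_gates(circuit):
        # Stack-based peephole pass: drop any pair of identical,
        # adjacent self-inverse gates acting on the same qubits.
        optimized = []
        for gate in circuit:
            if optimized and optimized[-1] == gate and gate[0] in SELF_INVERSE:
                optimized.pop()       # the pair multiplies to the identity
            else:
                optimized.append(gate)
        return optimized

    circuit = [("H", (0,)), ("H", (0,)), ("CNOT", (0, 1)), ("X", (1,)), ("X", (1,))]
    print(cancel_adjacent_gates(circuit))  # [('CNOT', (0, 1))]

Production compilers for fault-tolerant hardware apply far more aggressive transformations than this, but the principle is the same: every gate removed at compile time is error-corrected machine time saved at run time.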
The authors plan to undertake similar studies of other algorithms and architectures in the future.
For more details, see the paper “Requirements for fault-tolerant factoring on an atom-optics quantum computer”, Nature Communications 4, 2524 (2013).