A new quantum computing benchmark has revealed the strengths and weaknesses of several quantum processing units (QPUs).
The benchmarking tests, led by a team at the Jülich Research Centre in Germany, compared 19 different QPUs from five providers – including IBM, Quantinuum, IonQ, Rigetti and IQM – to determine which chips were more stable and reliable for high-performance computing (HPC).
These quantum systems were tested both at different "widths" (the total number of qubits) and at different "depths" of two-qubit gates. These gates are operations that act on two entangled qubits simultaneously, and depth measures the length of a circuit – in other words, its complexity and execution time.
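To make the two terms concrete, here is a minimal, illustrative Python sketch (using Qiskit; this is not code from the study) in which `width` is the number of qubits and `depth` is the number of layers of two-qubit gates applied to them.

```python
from qiskit import QuantumCircuit

def layered_circuit(width: int, depth: int) -> QuantumCircuit:
    """Toy circuit with `width` qubits and `depth` layers of two-qubit gates."""
    qc = QuantumCircuit(width)
    for _ in range(depth):
        # one "layer": entangle neighbouring qubit pairs with CZ gates
        for q in range(0, width - 1, 2):
            qc.cz(q, q + 1)
        for q in range(1, width - 1, 2):
            qc.cz(q, q + 1)
    qc.measure_all()
    return qc

print(layered_circuit(width=5, depth=3))
```

Widening the circuit adds qubits; deepening it adds more layers of gates, which takes longer to run and gives errors more time to accumulate.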
IBM's QPUs showed the greatest strength in terms of depth, while Quantinuum performed best in the width category (where larger numbers of qubits were tested). IBM's QPUs also showed significant improvement in performance across iterations, particularly between the earlier Eagle and newer Heron chip generations.
These results, outlined in a study uploaded Feb. 10 to the preprint database arXiv, suggest that the performance improvements can be attributed not only to better and more efficient hardware, but also to improvements in firmware and the integration of fractional gates (custom gates available on Heron that can reduce the complexity of a circuit).
However, the latest version of the Heron chip, dubbed IBM Marrakesh, did not demonstrate the expected performance improvements, despite having half the error per layered gate (EPLG) of the computing giant's previous QPU, IBM Fez.
Beyond classical computing
Smaller companies have made relatively big gains, too. Importantly, one Quantinuum chip passed the benchmark at a width of 56 qubits. This is significant because it represents the ability of a quantum computing system to surpass existing classical computers in specific contexts.
"In the case of Quantinuum H2-1, the experiments of 50 and 56 qubits are already above the capabilities of exact simulation in HPC systems and the results are still meaningful," the researchers wrote in their preprint study.
Specifically, the Quantinuum H2-1 chip produced results at 56 qubits, running three layers of the Linear Ramp Quantum Approximate Optimization Algorithm (LR-QAOA), a benchmarking algorithm, involving 4,620 two-qubit gates.
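As a rough illustration of what such a run looks like, the sketch below builds an LR-QAOA circuit for a MaxCut instance in Python with Qiskit and networkx; the schedule constant (`delta`) and the exact ramp normalisation are illustrative assumptions, not the study's settings.

```python
# Hedged sketch of a linear-ramp QAOA (LR-QAOA) circuit for MaxCut.
# The delta value and ramp normalisation are assumptions for illustration.
import networkx as nx
from qiskit import QuantumCircuit

def lr_qaoa_circuit(graph: nx.Graph, p: int, delta: float = 0.6) -> QuantumCircuit:
    n = graph.number_of_nodes()
    qc = QuantumCircuit(n)
    qc.h(range(n))                      # uniform superposition over all bitstrings
    for k in range(p):
        gamma = delta * (k + 1) / p     # cost angle ramps up linearly with the layer
        beta = delta * (1 - k / p)      # mixer angle ramps down linearly
        for i, j in graph.edges():      # cost layer: one ZZ rotation per graph edge
            qc.rzz(2 * gamma, i, j)
        qc.rx(2 * beta, range(n))       # mixer layer on every qubit
    qc.measure_all()
    return qc

# Three layers on a fully connected 56-node graph, as in the H2-1 experiment.
circuit = lr_qaoa_circuit(nx.complete_graph(56), p=3)
```

Three layers on a fully connected 56-node graph contain 3 × (56 × 55 / 2) = 4,620 ZZ interactions, consistent with the two-qubit-gate count reported above.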
"To the best of our knowledge, this is the largest implementation of QAOA to solve an FC [fully connected] combinatorial optimization problem on real quantum hardware that is certified to give a better result over random guessing," the scientists said in the study.
IBM's Fez handled problems at the greatest depth of the systems tested. In a test involving a 100-qubit problem with up to 10,000 layers of LR-QAOA (nearly 1 million two-qubit gates), Fez retained some coherent information until nearly the 300-layer mark. The worst-performing QPU in the tests was Rigetti's Ankaa-2.
The team developed the benchmark to measure a QPU's ability to perform practical applications. With that in mind, they sought to devise a test with a clear, consistent set of rules. The test had to be easy to run, platform-agnostic (so that it would work on the widest possible range of quantum systems) and provide meaningful metrics related to performance.
Their benchmark is built around a test known as the MaxCut problem. It presents a graph with a number of vertices (nodes) and edges (connections), then asks the system to divide the nodes into two sets so that the number of edges running between the two subsets is as large as possible.
This is useful as a benchmark because it is computationally very difficult, and the problem can be scaled up by increasing the size of the graph, the scientists said in the paper.
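To illustrate why the problem scales so badly, the following sketch (plain Python with networkx, not code from the study) solves MaxCut by brute force, checking every possible partition of the nodes; the number of partitions doubles with each added node.

```python
# Illustrative brute-force MaxCut: enumerate every partition of the nodes
# into two sets and count the edges that cross between them.
from itertools import product
import networkx as nx

def max_cut_brute_force(graph: nx.Graph) -> tuple[int, dict]:
    nodes = list(graph.nodes())
    best_cut, best_split = -1, {}
    for bits in product([0, 1], repeat=len(nodes)):   # 2^n candidate partitions
        split = dict(zip(nodes, bits))
        cut = sum(1 for u, v in graph.edges() if split[u] != split[v])
        if cut > best_cut:
            best_cut, best_split = cut, split
    return best_cut, best_split

# A 6-node cycle: alternating the two sides cuts all 6 edges.
print(max_cut_brute_force(nx.cycle_graph(6)))
```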
A system was considered to have failed the test when its results reached a fully mixed state, that is, when they were indistinguishable from those of a random sampler.
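The idea behind that cutoff can be sketched in a few lines of Python (an illustrative comparison, not the study's exact statistical test): compare the average cut value obtained from a device's samples with the average cut of uniformly random bitstrings, which is what a fully mixed state produces.

```python
# Compare sampled cut values against a uniform random-sampler baseline.
import numpy as np
import networkx as nx

def mean_cut(graph: nx.Graph, samples: np.ndarray) -> float:
    """Average number of cut edges over an array of 0/1 bitstrings (one per row)."""
    edges = np.array(graph.edges())
    u, v = edges[:, 0], edges[:, 1]
    return float(np.mean(np.sum(samples[:, u] != samples[:, v], axis=1)))

graph = nx.erdos_renyi_graph(20, 0.5, seed=1)
random_bits = np.random.randint(0, 2, size=(10_000, 20))
baseline = mean_cut(graph, random_bits)   # a random sampler cuts ~half the edges
print(f"random baseline: {baseline:.1f} of {graph.number_of_edges()} edges")
# A QPU is judged to have failed once its samples are indistinguishable
# from this baseline, i.e. once its output is effectively a fully mixed state.
```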
Because the benchmark relies on a testing protocol that is relatively simple and scalable, and can produce meaningful results with a small sample set, it is fairly cheap to run, the computer scientists added.
The new benchmark is not without its flaws. Performance depends, for instance, on fixed schedule parameters, meaning that parameters are set beforehand and not dynamically adjusted during the computation, so they cannot be optimised. The scientists suggested that alongside their own test, "different candidate benchmarks to capture essential aspects of performance should be proposed, and the best of them with the most explicit set of rules and applications will remain."