IBM releases 1,000+ qubit processor, roadmap to error correction
    Company now expects useful error-corrected qubits by the end of the decade.


    The family portrait of IBM's quantum processors, with the two new arrivals (Heron and Condor) at right.
    IBM

     

    On Monday, IBM announced that it has produced the two quantum systems that its roadmap had slated for release in 2023. One of these is based on a chip named Condor, which is the largest transmon-based quantum processor yet released, with 1,121 functioning qubits. The second is based on a combination of three Heron chips, each of which has 133 qubits. Smaller chips like Heron and its successor, Flamingo, will play a critical role in IBM's quantum roadmap—which also got a major update today.

     

    Based on the update, IBM expects to have error-corrected qubits working by the end of the decade, enabled by improvements to individual qubits made over several iterations of the Flamingo chip. While these systems probably won't place things like existing encryption schemes at risk, they should be able to reliably execute quantum algorithms that are far more complex than anything we can do today.

     

    We talked with IBM's Jay Gambetta about everything the company is announcing today, including existing processors, future roadmaps, what the machines might be used for over the next few years, and the software that makes it all possible. But to understand what the company is doing, we have to back up a bit to look at where the field as a whole is moving.

    Qubits and logical qubits

    Nearly every aspect of working with a qubit is prone to errors. Setting its initial state, maintaining that state, performing operations, and reading out the state can all introduce errors that will keep quantum algorithms from producing useful results. So a major focus of every company producing quantum hardware has been to limit these errors, and great strides have been made in that regard.
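    To make that concrete, here is a minimal sketch of how errors at each of those stages show up in results, using the open source Qiskit SDK and its Aer simulator rather than IBM's hardware stack; the noise figures are invented for illustration.

```python
# A toy noise model (illustrative numbers, not real hardware figures) showing
# where errors enter: gate operations and readout both corrupt the final counts.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel, ReadoutError, depolarizing_error

# Ideal circuit: prepare a Bell state and measure both qubits.
circuit = QuantumCircuit(2, 2)
circuit.h(0)                      # single-qubit gate errors can creep in here...
circuit.cx(0, 1)                  # ...and here (two-qubit gates are usually worse)
circuit.measure([0, 1], [0, 1])   # readout errors flip the recorded bits

# Toy noise: 1% depolarizing error on two-qubit gates, 2% readout error.
noise = NoiseModel()
noise.add_all_qubit_quantum_error(depolarizing_error(0.01, 2), ["cx"])
noise.add_all_qubit_readout_error(ReadoutError([[0.98, 0.02], [0.02, 0.98]]))

# Ideally only '00' and '11' appear; with noise, '01' and '10' leak in.
counts = AerSimulator(noise_model=noise).run(circuit, shots=4000).result().get_counts()
print(counts)
```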

     

    There's some indication that those strides have now gotten us to the point where it's possible to execute some simpler quantum algorithms on existing hardware. And that capability will likely extend to further algorithms thanks to the improvements we can reasonably expect over the next few years.

     

    In the long term, though, we're unlikely to ever get the qubit hardware to the point where the error rate is low enough that a processor could successfully complete a complex algorithm that might require billions of operations over hours of computation. For that, it's generally acknowledged that we'll need error-corrected qubits. These involve spreading the quantum information held by a qubit—termed a "logical qubit"—across multiple hardware qubits. Additional qubits are used to monitor the logical qubit for errors and allow for their correction.
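    The simplest illustration of the idea is a three-qubit repetition code with majority voting; it is a toy model rather than the scheme IBM plans to use, but it shows why redundancy helps: the logical error rate falls below the physical rate whenever the physical error probability is small enough.

```python
# Toy repetition-code arithmetic (not IBM's error-correction scheme): copy one
# bit onto three physical bits and correct by majority vote. The vote fails only
# when two or more copies flip, so p_logical = 3*p^2*(1-p) + p^3.
def logical_error_rate(p: float) -> float:
    """Probability that majority voting over three copies gets the bit wrong."""
    return 3 * p**2 * (1 - p) + p**3

for p in (0.1, 0.01, 0.001):
    print(f"physical error {p:.3f} -> logical error {logical_error_rate(p):.2e}")
```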

     

    Computing using logical qubits requires two things. One is that the error rates of the individual hardware qubits have to be low enough that individual errors can be identified and corrected before new ones take place. (There's some indication that the hardware is good enough for this to work with partial efficiency.) The second thing you need is lots of hardware qubits, since each logical qubit requires multiple hardware qubits to function. Some estimates suggest we'll need a million hardware qubits to create a machine capable of hosting a useful number of logical qubits.
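    A rough back-of-the-envelope calculation shows where estimates on the order of a million hardware qubits come from. It assumes the textbook surface-code overhead of roughly 2d^2 physical qubits per logical qubit at code distance d; the distance and logical-qubit count below are illustrative assumptions, not IBM figures.

```python
# Back-of-the-envelope overhead estimate using standard surface-code scaling
# (illustrative assumptions, not IBM's numbers).
def surface_code_overhead(distance: int) -> int:
    """Approximate physical qubits per logical qubit for a distance-d surface code."""
    return 2 * distance**2 - 1

distance = 25            # a plausible code distance for deep algorithms (assumption)
logical_qubits = 1_000   # an illustrative target for a "useful" machine (assumption)
physical = surface_code_overhead(distance) * logical_qubits
print(f"~{physical:,} hardware qubits")   # on the order of a million, as cited above
```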

     

    IBM is now saying that it expects to have a useful number of logical qubits by the end of the decade, and Gambetta explained how today's announcements fit into that roadmap.

    Qubits and gates

    Gambetta said that the company has been taking a two-track approach to getting its hardware ready. One aspect of this has been developing the ability to consistently fabricate high-quality qubits in large numbers. And he said that the 1,000+ qubit Condor is an indication that the company is in good shape in that regard. "It's about 50 percent smaller qubits," Gambetta told Ars. "The yield is right up there—we got the yield close to 100 percent."

     

    The second aspect IBM has been working on is limiting errors that occur when operations are done on individual or pairs of qubits. These operations, termed gates, can be error-prone themselves. And changing the state of a qubit can produce subtle signals that can bleed into neighboring qubits, a phenomenon called crosstalk. Heron, the smaller of the new processors, represents a four-year effort to improve gate performance. "It's a beautiful device," Gambetta said. "It's five times better than the previous devices, the errors are way less, [and] crosstalk can't really be measured."

     

    IBM's new roadmap, which charts the improvements in performance and connectivity expected to deliver useful error correction before 2030.
    IBM

    Many of the improvements come down to introducing tunable couplers to the qubits, a change from the fixed-frequency hardware the company had used previously. This has sped up all gate operations, with some seeing a 10-fold boost. The less time you spend doing anything with a qubit, the less of an opportunity there is for errors to crop up.
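    A simple decoherence model makes the point: if the chance of an error during a gate grows roughly with the gate time divided by the qubit's coherence time, a tenfold speedup cuts the per-gate error by about the same factor. The model and numbers below are illustrative assumptions, not measurements from Heron.

```python
# Illustrative model: probability of a decoherence event during one gate,
# assuming errors accumulate exponentially with gate time over coherence time T.
import math

def error_during_gate(gate_time_ns: float, coherence_time_us: float) -> float:
    """Approximate chance of losing the qubit's state during a single gate."""
    return 1.0 - math.exp(-gate_time_ns / (coherence_time_us * 1_000))

T = 200.0  # coherence time in microseconds (assumed, for illustration)
for gate_time in (500.0, 50.0):  # a slow gate vs. one sped up roughly 10-fold
    print(f"{gate_time:>5.0f} ns gate -> ~{error_during_gate(gate_time, T):.3%} error per gate")
```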

     

    Many of these improvements were tested over multiple iterations of the company's Eagle chip, which was first introduced in 2021. The new roadmap calls for an improved iteration of the 133-qubit Heron next year that will enable 5,000 gate operations. That will be followed by multiple iterations of the 156-qubit Flamingo processor, also debuting next year, that will take gate operations up to 15,000 by 2028.

     

    These chips will also be linked together into the larger processors, like Crossbill and Kookaburra, that appear on IBM's roadmap (for example, seven Flamingos could be linked to create a processor with a qubit count similar to the current Condor's). The focus here will be on testing different means of connecting qubits, both within and between chips.

    Putting them to use

    Gambetta said the steadily improving error rate will be useful for people trying to get actual work done using existing quantum hardware. He highlighted the paper on error mitigation that IBM published earlier this year, which showed that a detailed understanding of the errors that occur during calculations can be used to squeeze useful results out of existing hardware. "Since that paper went out, there's like six or so papers on the arXiv" using a similar approach, Gambetta said. "But more importantly, we're going to showcase—I think the number now is like 13—demonstrations from our users, clients, and partners that have their own demonstrations of using a quantum computer as a tool."
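    One widely used error-mitigation idea, zero-noise extrapolation, gives a flavor of how such results are squeezed out of noisy hardware: run the same circuit at deliberately amplified noise levels and extrapolate the measured expectation value back to the zero-noise limit. The sketch below uses made-up numbers and only illustrates the general technique; it is not a reproduction of the paper's method or results.

```python
# Zero-noise extrapolation sketch with invented data: measure an observable at
# several amplified noise levels, fit a curve, and read off its value at zero noise.
import numpy as np

noise_scales = np.array([1.0, 2.0, 3.0])   # how much the hardware noise was amplified
measured = np.array([0.81, 0.66, 0.54])    # hypothetical noisy expectation values

coeffs = np.polyfit(noise_scales, measured, deg=2)   # fit a low-order polynomial
mitigated = np.polyval(coeffs, 0.0)                  # evaluate it at zero noise
print(f"estimated zero-noise value: {mitigated:.3f}")
```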

     

    While this approach scales better than classical computing for some problems (including simulating generic quantum systems), it will ultimately run up against the hardware's limits—Gambetta suggested problems that need about 100 qubits and 10,000 gates. Going beyond that will require error-corrected logical qubits.

     

    But as noted above, those now appear on the roadmap for the first time. And they do so incrementally. Logical memory qubits will be demonstrated in 2026, followed by logical communication among qubits the following year. Logical gates show up with the Starling processor in 2028, and IBM expects to have the full package working on an iteration of Starling in 2029. That will allow a huge jump in gate operations, from Flamingo's 15,000 in 2028 to 100 million gates on Starling in 2029.

     

    But Starling will likely fall well short of the millions of hardware qubits that have been estimated to be needed to create a useful processor. Gambetta said the company will be focusing on reducing the number of hardware qubits needed to host a robust logical qubit, pointing to a manuscript the company placed on the arXiv earlier this year that discusses an alternative scheme for enabling error correction. The method most commonly tested today (called a "surface code") can require up to 4,000 hardware qubits to host 12 logical qubits; the scheme described in the manuscript can do so using only 288 hardware qubits.
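    Worked out explicitly, the overhead comparison quoted above (for the 12 logical qubits in both cases) looks like this:

```python
# The qubit-overhead comparison from the paragraph above, using the article's figures.
surface_code_qubits = 4_000   # hardware qubits a surface code can require for 12 logical qubits
new_scheme_qubits = 288       # hardware qubits the alternative scheme needs for the same 12
logical_qubits = 12

print(f"surface code: ~{surface_code_qubits / logical_qubits:.0f} hardware qubits per logical qubit")
print(f"new scheme:    {new_scheme_qubits / logical_qubits:.0f} hardware qubits per logical qubit")
print(f"reduction:    ~{surface_code_qubits / new_scheme_qubits:.0f}x fewer hardware qubits")
```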

     

    The challenge is that this approach requires connections among qubits that may be physically distant on a chip, or potentially on separate chips entirely. Transmon chips to date have restricted their connections to nearest neighbors, which is all a surface code needs, although Gambetta said that IBM has shown its packaging technology is capable of supporting longer connections. In any case, a key focus of the next few generations of quantum processors will be enabling these longer-range connections. Different types of couplers appear on the roadmap in 2024 and 2025, and it will be essential for them to operate with high fidelity if IBM is to reach its end-of-decade goals.

     

    Given that these couplers are the one feature that isn't an evolution of something IBM is announcing today, they probably rate as the highest-risk item on the roadmap.

    Software matters, too

    Even if all of that is successful, Gambetta said the hardware released at the end of the decade will still be too small for complex algorithms like the one needed to break today's encryption. So error correction will usher in a period where we'll have hardware that can perform calculations that are impossible on classical machines but can't yet do everything we might ultimately want. That means software development will be critical to determining what we can accomplish for a number of years.

     

    On that side of the equation, IBM has helped develop an open source quantum SDK called Qiskit, which puts a layer of abstraction between what a programmer wants done and the commands issued directly to the hardware controlling the quantum computer. Today's announcements include Qiskit reaching its 1.0 release, an indication that the API has stabilized; future work will shift toward building useful libraries on top of the existing technology.
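    As a minimal example of that abstraction, a Qiskit user describes a circuit in terms of abstract gates and lets the transpiler rewrite it for a device's native gate set and connectivity. The basis gates and coupling map below are illustrative stand-ins, not a specific IBM backend.

```python
# Minimal Qiskit sketch: build an abstract circuit, then transpile it for a
# hypothetical device (the basis gates and linear coupling map are assumptions).
from qiskit import QuantumCircuit, transpile

# Abstract description: a 3-qubit GHZ state.
circuit = QuantumCircuit(3)
circuit.h(0)
circuit.cx(0, 1)
circuit.cx(1, 2)
circuit.measure_all()

# The transpiler maps it onto hardware that only speaks {rz, sx, x, cz}
# and only couples neighboring qubits in a line.
native = transpile(
    circuit,
    basis_gates=["rz", "sx", "x", "cz"],
    coupling_map=[[0, 1], [1, 2]],
    optimization_level=2,
)
print(native.count_ops())
```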

     

    Separately, the company has modified its generative AI coding tool to allow it to produce code that works on Qiskit. (We regret to inform you that we will almost certainly be incapable of evaluating how well this tool works.)

     

    Developing and testing software will be increasingly important over the next several years, as it's not completely clear what useful things can be done either with error mitigation on noisy systems or with a constrained number of error-corrected qubits. And while it's nice to see IBM hitting its 2023 processor goals before the year is out, the bigger news is probably its roadmap. People have been talking about developing error correction and showing small demonstrations of it for the entire time I've been covering the field. This is the first time I've seen a date put on its arrival in a useful form.

     

    Listing image by IBM

     
