A startup company has upped its qubit count by an order of magnitude in two years.
The qubits of the new hardware: an array of individual atoms.
Today, a startup called Atom Computing announced that it has been doing internal testing of a 1,180-qubit quantum computer and will make it available to customers next year. The system represents a major step forward for the company, which had built only one prior system based on neutral-atom qubits, a machine that operated with just 100 qubits.
The error rate for individual qubit operations is high enough that any algorithm relying on the full qubit count would inevitably fail due to an error. But the new machine does back up the company's claims that its technology can scale rapidly, and it provides a testbed for work on quantum error correction. And for smaller algorithms, the company says it'll simply run multiple instances in parallel to boost the chance of returning the right answer.
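The scaling problem described above is easy to make concrete. Under a simple independent-error model (an assumption for illustration; the fidelity figure below is hypothetical, not Atom Computing's), the probability that a circuit finishes error-free shrinks exponentially with the number of gate operations:

```python
# Back-of-envelope: probability that a circuit finishes without a single
# error, assuming an independent per-gate fidelity f and g gate operations.
def circuit_success_probability(gate_fidelity: float, num_gates: int) -> float:
    """P(no error) = f^g under an independent-error model."""
    return gate_fidelity ** num_gates

# With an assumed 99% gate fidelity, even a shallow circuit that touches
# each of 1,180 qubits just once is almost certain to contain an error:
p = circuit_success_probability(0.99, 1180)
print(f"{p:.2e}")  # ~7e-06
```

This is why the full qubit count is useful for scaling experiments and error-correction research long before it is useful for a single monolithic algorithm.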
Computing with atoms
Atom Computing, as its name implies, has built its hardware around neutral atoms as qubits (other companies work with ions instead). These systems rely on a set of lasers that create a series of locations that are energetically favorable for atoms. Left on their own, atoms tend to fall into these locations and stay there until a stray gas atom bumps into them and knocks them out.
Because the locations of atoms are set by the configuration of the lasers, it's possible to address each individually. Quantum information is stored in the nuclear spin, which is relatively impervious to the environment. While other types of qubits have coherence lifetimes that are just a fraction of a second, neutral atoms will often hold their state for tens of seconds. Because the nuclear spin doesn't readily interact with the environment, it's possible to pack the atoms closely together, allowing a relatively dense system.
It is, however, possible to manipulate atoms so that they interact and become entangled. This works through what's called a Rydberg blockade. Atoms can be excited (again using lasers) into a Rydberg state, in which the outermost electron is only loosely bound and orbits at a large distance from the nucleus. Two Rydberg atoms interact strongly, but only when they sit within a set distance of each other, so placing the right pair of atoms in the Rydberg state entangles them. And since the lasers allow control over the location of individual atoms, it's possible to entangle any two.
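The "set distance" at which the blockade operates can be estimated from a standard textbook relation, not anything specific to Atom Computing's machine: for van der Waals interactions, the blockade radius scales as R_b = (C6/Ω)^(1/6), where C6 is the interaction coefficient and Ω the laser's Rabi frequency. The numbers below are assumed, illustrative values:

```python
# Illustrative blockade-radius estimate. Within R_b = (C6/Omega)**(1/6),
# only one of two atoms can be driven into the Rydberg state at a time,
# which is the mechanism used to generate entanglement.
def blockade_radius_um(c6_ghz_um6: float, rabi_mhz: float) -> float:
    """Blockade radius in microns, with C6 in GHz*um^6 and Omega in MHz."""
    return (c6_ghz_um6 * 1e3 / rabi_mhz) ** (1 / 6)  # 1e3 converts GHz -> MHz

# An assumed C6 of ~870 GHz*um^6 (typical of a high-lying rubidium Rydberg
# state) and a 1 MHz Rabi drive give a radius of roughly 10 microns:
print(f"{blockade_radius_um(870.0, 1.0):.1f} um")
```

The weak 1/6 power is why the blockade radius is fairly insensitive to the exact drive strength, which helps when addressing many pairs across an array.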
Because this system allows atoms to be packed relatively tightly together, Atom Computing argues that it is well-positioned to scale rapidly. Unlike systems built on transmons, where small differences in device fabrication leave each qubit with slightly different performance, every trapped atom is guaranteed to behave identically. And since atoms don't engage in crosstalk unless deliberately manipulated, it's possible to pack a lot of them into a relatively small space.
These two factors, the company's executives argue, mean that neutral atoms are well positioned to scale up to large numbers of qubits. Its original system, which went online in 2021, was a 10×10 grid of atoms (though three-dimensional arrangements are also possible). And, when they talked to Ars a year ago, they mentioned that they hoped to scale their next-generation system by an order of magnitude—although they wouldn't say when they expected it to be ready.
It’s almost ready
Atom Computing is now using the system internally and plans to open it up for public use next year. The system has moved from a 10×10 grid to a 35×35 grid, bringing the potential sites for atoms up to 1,225. So far, testing has taken place with up to 1,180 atoms present, making it the largest machine that anyone has publicly acknowledged (at least in terms of qubit count).
The qubits are housed in a 12×5-foot box that contains the lasers and optics, along with the vacuum system and a fair bit of unused space—Atom CEO Rob Hayes quipped that "there's a lot of air inside that box." It does not, however, contain the computer hardware that controls the system and its operations. The grid of atoms the hardware creates, by contrast, is only about 100 microns per side, so continuing to increase the qubit count won't strain the hardware.
Some of the changes relative to Atom's first system were focused on managing the transition from a research machine, most useful for people learning to manage atom-based quantum computing, to one with the stability needed by customers who care mainly about the algorithms they can run on it. "We've also added technology around uptime and availability to make this like a real product, a real cloud service," Hayes said.
That's an added challenge with atom-based systems because of the inevitability of collisions between the trapped atoms and stray gas molecules in the vacuum chamber. Ben Bloom, Atom's founder and CTO, said that an array of atoms can typically be maintained for over 100 seconds. That's enough for a lot of calculations but still means the system as a whole needs to be reset regularly.
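The need for regular resets follows directly from the statistics of atom loss. If each trapped atom is independently knocked out by background-gas collisions with some mean lifetime (the lifetimes below are assumed for illustration, not measured values from Atom's system), the chance that an entire array stays intact falls off exponentially with the atom count:

```python
import math

# P(all n atoms still trapped after hold_s seconds), assuming independent
# loss with a per-atom mean lifetime of atom_lifetime_s seconds.
def array_survival(n_atoms: int, hold_s: float, atom_lifetime_s: float) -> float:
    return math.exp(-n_atoms * hold_s / atom_lifetime_s)

# With a hypothetical 10-minute per-atom lifetime, one atom survives a
# 100-second hold easily, but an intact 1,180-atom array does not:
print(f"{array_survival(1, 100, 600):.2f}")     # ~0.85
print(f"{array_survival(1180, 100, 600):.1e}")  # effectively zero
```

This is why large neutral-atom arrays rely on periodic resets (and, in some designs, continuous reloading) rather than on every atom surviving a long run.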
As mentioned earlier, however, customers of this system won't be able to use all of these qubits for a single calculation, since an error would be all but inevitable. So for now, the emphasis is on algorithms that require fewer qubits and operations. That keeps things under the error threshold while letting companies develop algorithms that will become useful as quantum computers improve, or possibly find individual cases where existing hardware is already sufficient to provide useful results.
These sorts of calculations are often run multiple times in order to provide confidence in the results and get a sense of the error rate. And here, the high qubit count can also be useful. "We're actually just going to use all these qubits, because they're all identical to actually parallelize the computation," Bloom said. "So if someone hands us a 50-qubit algorithm, we will do that 50 qubit algorithm on all of our qubits, and we will then give you the results faster."
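The payoff Bloom describes is straightforward to quantify. Tiling a small job across the full array multiplies the number of shots per run, and the chance that at least one copy succeeds grows quickly with the number of copies (the per-run success probability below is an assumed, illustrative figure):

```python
# Sketch of parallel execution across a large qubit array: a job using
# job_qubits is replicated as many times as the array allows, and we ask
# for the probability that at least one copy succeeds in a single shot.
def parallel_success(total_qubits: int, job_qubits: int, p_single: float) -> float:
    copies = total_qubits // job_qubits  # e.g. 1180 // 50 = 23 copies
    return 1 - (1 - p_single) ** copies

# An assumed 10% per-run success rate becomes better than 90% across
# the 23 parallel copies of a 50-qubit job:
print(f"{parallel_success(1180, 50, 0.10):.2f}")
```

The same arithmetic explains the "faster" claim: one hardware cycle delivers as many samples as 23 sequential runs of the 50-qubit job would.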
A matter of scale
But the biggest focus was simply on scaling the qubit count so that quantum error correction becomes possible. Error correction schemes typically involve spreading a single logical qubit across multiple hardware qubits, and so require a lot more of that hardware. "Our goal is to make a single system have a useful number of qubits," Bloom told Ars. "And to us that probably means hundreds of thousands to millions of qubits in a single system."
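The "hundreds of thousands to millions" target follows from the overhead of spreading logical qubits across hardware qubits. As a rough model (illustrative only; the 2·d² cost is a common surface-code-style approximation, and the numbers are not Atom Computing's roadmap), a physical-qubit budget converts to logical qubits like this:

```python
# Rough error-correction overhead model: one logical qubit at code
# distance d costs on the order of 2*d**2 physical qubits. The exact
# constant varies by scheme; this is an illustrative approximation.
def logical_qubits(physical: int, distance: int) -> int:
    """Logical qubits supported by a given physical-qubit budget."""
    return physical // (2 * distance ** 2)

# At distance 25, a million physical qubits supports only ~800 logical
# qubits, which is why "useful" targets land in the millions:
print(logical_qubits(1_000_000, 25))  # → 800
```

By the same model, today's 1,180 qubits support essentially no fully error-corrected logical qubits at useful code distances, which is why the near-term value of the machine is as an error-correction testbed.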
One of the features needed for error correction has already been demonstrated on Atom Computing hardware: non-disruptive measurements of atoms in the midst of a computation, something that's necessary to recognize and correct errors.
But other things are still in flux. On the previous version of the system, connections among qubits were handled by moving individual atoms next to each other in order to entangle them. But the process of moving them may prove to be a bottleneck as qubit counts continue to ramp up. "Moving right now is slower than our [qubit operations]," Bloom told Ars. "And so if you go into a world where you're doing error correction, I think you're going to have to have a huge benefit to make up for the cost of moving."
An error-corrected qubit can also take various forms, based on different configurations of the underlying hardware qubits. Early efforts have generally been tested on hardware that has two-dimensional arrays of qubits. But it's also possible to use three-dimensional schemes, and, with the right configuration of lasers, Atom's hardware could support 3D arrays. "In general, 3D has lots and lots of benefits," Bloom said. "It's, again, all a question of carefully mapping out time-to-solution for fault tolerant algorithms, and understanding whether the trade-offs are worth the complexity."
One of the goals of the new system is to start understanding these issues. Meanwhile, the company is also working on making sure the architecture can continue to scale to ever-higher qubit counts. In that regard, the company got a bit of good news in the form of three papers published in Nature last week. All of them showed similar systems operating with high fidelity. And Bloom said that, for the first time, the remaining noise wasn't due to the lasers that make the system work.
"The thing that has held back neutral atoms, until those papers have been published, have just been all the classical stuff we use to control the neutral atoms," Bloom said. "And what that has essentially shown is that if you can work on the classical stuff—work with engineering firms, work with laser manufacturers (which is something we're doing)—you can actually push down all that noise. And now all of a sudden, you're left with this incredibly, incredibly pure quantum system."
For Atom itself, the step up from 100 to 1,000 qubits was done without significantly increasing the laser power required. That will make it easier to keep boosting the qubit count. And, Bloom adds, "We think that the amount of challenge we had to face to go from 100 to 1,000 is probably significantly higher than the amount of challenges we're gonna face when going to whatever we want to go to next—10,000, 100,000."
Correction: The original article misspelled Atom's founder's last name, and had the timing of public availability wrong.