steven36 posted a topic in Security & Privacy News

Google announced “quantum supremacy” last week, a technological achievement with huge repercussions, not only for the company and its role in the world but for all of us who want to maintain a semblance of the right to privacy. Google researchers have developed a processor called Sycamore, which is exponentially more powerful than a “standard” supercomputer: it completed in 200 seconds a calculation that would take an estimated 10,000 years on a classical machine.

We should all be very concerned that an industry with a questionable track record on data protection, privacy and political neutrality now has access to the world’s most powerful computer.

The charge sheet against Facebook, rather than Google, is the longest – and is still growing. There have been long-standing concerns about the amount of data Facebook is harvesting from its users and what it would (or could) be used for. However, these issues are systemic and industry-wide, and in my opinion the scandals involving Facebook in recent months and years could just as easily have affected Google, or perhaps even Microsoft.

These issues came to a head with the Cambridge Analytica scandal, in which Facebook was implicated in allowing a Russian-linked firm to harvest a huge amount of personal data, including political preferences, and allowing that knowledge to be used to meddle in the 2016 US presidential election. Now that the processing power available to manipulate and use large amounts of data has increased, the stakes are raised in what big data can be used for.
The industry, however, doesn’t seem to accept these dangers. The implicit aim of tech companies is to acquire more users, more data, and ultimately more advertisers. The symbiotic relationship between these three factors underpins most tech companies’ business models, including the current wave of startups in Silicon Valley and elsewhere. This will not change. But what must change is the regulation around data security, and its implementation and enforcement.

Regulation is, by and large, already present: in almost every developed country it is illegal to hold data without a range of rigorous checks and balances on how it is sourced, held and transferred between parties. International frameworks such as the European Union’s General Data Protection Regulation (GDPR) and the EU-US Privacy Shield mean that data can only achieve “freedom of movement” by fulfilling strict criteria. The largest data owners, like Facebook and Google, tend to follow these rules closely, meaning that the main concern is not control of data but the data’s actual power.

Big data can already predict an individual’s consumer habits and personal desires to a somewhat eerie extent. As processing power grows exponentially, will we have Facebook ads that penetrate deeper and deeper into our lives and consciousness? What will be the effect on our mental health? Our family relationships? And, at the macro level, our economies? None of these deeper questions appears to be asked by either the industry or the regulators. Inevitably, they will become relevant as processing power increases; it is a matter of when, not if.

There is still time for our governments to play catch-up and protect consumers. Although Google’s Sycamore is advanced, it is still not capable of fulfilling every data scientist’s deepest desires. The Sycamore chip is a 54-qubit processor. That is relatively limited, and is one of the many reasons that the achievement is not yet practically useful.
Researchers want a 100-qubit – or even 200-qubit – system before they can really put it to the test and see whether the dreams of quantum computing are realised.

Rather than just controlling data transfer, it is time for a wider conversation about data usage. Which uses of data – regardless of who owns it and how it has been sourced – are ethical and safe? And which are unethical and dangerous? As lawmakers like US congresswoman Alexandria Ocasio-Cortez seem to enjoy grilling tech executives like Mark Zuckerberg on the minutiae of data usage, I hope we do not lose sight of the bigger picture. The stakes are too high, and the processing power is now too great, for us to be complacent.

By Jamal Ahmed, the founder of Kazient Privacy Experts

Source
Karlston posted a topic in Technology News

Here’s what the people who claimed Google’s quantum supremacy have to say about it

"If you were invited to come here [Ars was], you probably read the leaked paper [we did]."

Hartmut Neven, the head of Google's Quantum AI lab, walked Ars and others through an overview of the company's quantum computing efforts this week. (Image: John Timmer)

SANTA BARBARA, California—Early this autumn, a paper leaked on a NASA site indicating Google engineers had built and tested hardware that achieved what's termed "quantum supremacy," completing calculations that would be impossible on a traditional computer. The paper was quickly pulled offline, and Google remained silent, leaving the rest of us to speculate about the company's plans for this device and any follow-ons it might be preparing.

That speculation ended today, as Google released the final version of the paper that had leaked. But perhaps more significantly, the company invited the press to its quantum computing lab, talked about its plans, and gave us time to chat with the researchers behind the work.

The supremacy result

"I'm not going to bother explaining the quantum supremacy paper—if you were invited to come here, you probably all read the leaked paper," quipped Hartmut Neven, the head of Google's Quantum AI lab. But he found it hard to resist the topic entirely, and the other people who talked with reporters were more than happy to expand on Neven's discussion.

Google's Sergio Boixo explained the experiment in detail, describing how a random source was used to configure the gates among the qubits, after which a measurement of the system's output was made. The process was then repeated a few million times in succession. While on a normal computer the output would be the same given the same starting configuration, qubits can have values that make their measured output probabilistic, meaning that the result of any one measurement can't be predicted.
With enough measurements, however, it's possible to get the probability distribution. Calculating that distribution is possible on a classical computer for a small number of qubits. But as the total number of qubits goes up, it becomes impossible to do so within the lifetime of existing supercomputing hardware. In essence, Google was asking a quantum computer to tell it what a quantum computer would do in a situation that's difficult for a traditional computer to predict. (And doing so with a computer that has a high error rate. When asked, however, Google engineers indicated that errors would alter the probability distribution in a way they could detect when run with a moderate number of qubits.)

Google staff admitted that it was a problem specifically chosen because quantum computers can produce results even if they have a high error rate. But, as researcher Julian Kelly put it, "if you can't beat the world's best classical computer on a contrived problem, you're never going to beat it on something useful."

Boixo highlighted that this problem provided a useful test, showing that the error rate remained a simple linear extrapolation of the errors involved in setting and reading pairs of qubits. This seemingly indicates that there's no additional fragility caused by the increasing complexity of the system. While this had been shown before for smaller collections of qubits, Google's hardware increases the limits on earlier measurements by a factor of 10^13.

Google and its hardware

None of that, however, explains how Google ended up with a quantum computing research project to begin with. According to various people, the work was an outgrowth of academic research going on at the nearby University of California, Santa Barbara. A number of the Google staff retain academic positions there and have grad students who work on the projects at Google.
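The two ideas above — recovering a probability distribution from many one-off probabilistic measurements, and total fidelity degrading as a simple product of per-gate fidelities — can be illustrated with a toy classical sketch. The four-outcome distribution and the error rate below are made-up numbers for illustration, not figures from the paper.

```python
import random
from collections import Counter

# Hypothetical output probabilities for a tiny two-qubit "circuit":
# each run yields one of four bitstrings with these assumed probabilities.
true_dist = {"00": 0.40, "01": 0.25, "10": 0.20, "11": 0.15}

def measure(dist, rng):
    """One probabilistic measurement: draw a single bitstring from the distribution."""
    r = rng.random()
    cumulative = 0.0
    for bitstring, p in dist.items():
        cumulative += p
        if r < cumulative:
            return bitstring
    return bitstring  # guard against floating-point round-off

def estimate_distribution(dist, shots, seed=0):
    """Repeat the measurement many times and tally the empirical frequencies."""
    rng = random.Random(seed)
    counts = Counter(measure(dist, rng) for _ in range(shots))
    return {b: counts[b] / shots for b in dist}

def circuit_fidelity(gate_error, n_gates):
    """Linear error model: overall fidelity is the product of per-gate fidelities."""
    return (1.0 - gate_error) ** n_gates

# No single shot is predictable, but 200,000 shots pin down the distribution.
est = estimate_distribution(true_dist, shots=200_000)

# With a 0.3% error per operation, a 100-operation circuit retains roughly
# 74% fidelity, which is why low gate error rates matter so much at scale.
f = circuit_fidelity(0.003, 100)
```

The `circuit_fidelity` helper is the "simple linear extrapolation" idea in miniature: no extra fragility from complexity means errors just compound multiplicatively, gate by gate.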
This relationship was initiated by Google, which started looking into the prospect of doing its own work on quantum computing at about the same time the academics were looking for ways to expand beyond the work that traditionally took place at universities. Google's interest was spurred by its AI efforts. There are a number of potential applications of quantum computing in AI, and the company had already experimented a bit on a D-Wave quantum annealer. But gate-based quantum computers hadn't matured enough to run much more than demonstrations. So, the company decided to build its own.

To do so, it turned to superconducting qubits called transmons—the same choice that others in the field, like IBM, have made. The hardware itself is a capacitor linked to a superconducting Josephson junction, in which a bunch of electrons behaves as if it were a single quantum object. Each qubit behaves like an oscillator, with its two possible output values corresponding to still or in motion. The hardware is quite large, which makes it relatively easy to control—you can bring wires right up next to it, which is something you can't do with individual electrons.

Google has its own fabs, and the company makes the wiring and qubits on separate chips before combining them. But the challenges don't end there. The chip's packaging plays a role in shielding it from the environment, and it brings the control and readout signals in from external hardware—Google's Jimmy Chen noted that the packaging is so important that a member of that team was given the honor of being first author on the supremacy paper.

The control and readout wires consist of a superconducting niobium-titanium alloy, which constitutes one of the most expensive individual parts of the whole assembly, according to Pedram Roushan. And that connects the chip to external control hardware, with five wires required for every two qubits. (That wiring requirement is starting to create problems, as we'll get to later.)
The external control hardware for quantum computers is rather extensive. As Google's Evan Jeffrey described it, traditional processors contain circuitry that helps control the processor's behavior in response to external inputs that are relatively sparse. That's not true for quantum processors—every aspect of their control has to be provided from external sources. Currently, Google's setup loads all the control instructions into external hardware that's extremely low latency and then executes them multiple times. Even so, Jeffrey told Ars, as the complexity of the instructions has risen with the number of qubits, the amount of time the qubits spend idle has climbed from 1% to 5%.

Chen also described how simply putting the hardware together isn't the end of the challenge. While the individual qubits are designed to be identical, small flaws or impurities and the local environment can all alter the behavior of individual qubits. As a result, each qubit has its own frequency and error rate, and those have to be determined before a given chip can be used. Chen is working on automating this calibration process, which currently takes a day or so.

What's coming, hardware-wise

The processor that handled the quantum supremacy experiment is based on a hardware design called Sycamore, and it has 53 qubits (due to one non-functional device in a planned array of 54). That's actually a step down from the company's earlier Bristlecone design, which had 72 qubits. But Sycamore has more connections among its qubits, and that better fits with Google's long-term design goals.

Google refers to the design goal as "surface code," and its focus is on enabling fault-tolerant, error-correcting quantum computing. Surface code, as Google's Marissa Giustina described it, requires nearest-neighbor coupling, and the Sycamore design lays out its qubits in a square grid.
All but the edge qubits have connections to their four neighbors.

The layout of Google's qubits provides each internal qubit with connections to four of its neighbors. (Image: Google)

But the layout isn't the only issue that stands between Google and error-correcting qubits. Google Hardware Lead John Martinis said that you also need two-qubit operations to have an error rate of about 0.1% before error correction is realistically possible. Right now, that figure stands at roughly 0.3%. The team is confident it can be brought down, but it's not there yet.

Another issue is wiring. Error correction requires multiple physical qubits to act as a single logical qubit, which means a lot more control wires for each logical qubit in use. And, right now, that wiring is physically large compared to the chip itself. That will absolutely have to change to add significant numbers of additional qubits to the chips, and Google knows it. The wiring problem "is boring—it's not a very exciting thing," quipped Martinis. "But it's so important that I've been working on it."

The chip's packaging is dominated by the wiring needed to feed signals in and out of the chip. (Image: John Timmer)

Error correction also requires a fundamental change in the control hardware and software. At the moment, controlling the chip generally involves sending a series of operations, then reading out the results. But error correction requires more of a conversation, with constant sampling of the qubit state and corrective commands issued when needed. For this to work, Jeffrey noted, you're going to really need to bring latency down.

Overall, the future of Google's hardware was best summed up by Kelly, who said, "lots of things will have to change, and we're aware of that." Martinis said that, as they did when moving away from the Bristlecone design, they're not afraid to scrap something that's currently successful: "We go to conferences and pay attention, and we're willing to pivot if we find we need to."
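The nearest-neighbor coupling that surface code requires is easy to picture as a square grid: interior qubits couple to four neighbors, edge qubits to three, corners to two. A minimal sketch (the 6x9 grid size is an assumption chosen only to give 54 sites, matching a Sycamore-style planned array):

```python
def neighbors(row, col, rows, cols):
    """Nearest-neighbor couplings for a qubit at (row, col) in a square grid.

    Returns the grid positions directly above, below, left, and right,
    dropping any that fall outside the array.
    """
    candidates = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    return [(r, c) for r, c in candidates if 0 <= r < rows and 0 <= c < cols]

# Hypothetical 6x9 array: 54 sites in total.
ROWS, COLS = 6, 9

interior = neighbors(2, 4, ROWS, COLS)  # internal qubit: 4 couplings
edge = neighbors(0, 4, ROWS, COLS)      # edge qubit: 3 couplings
corner = neighbors(0, 0, ROWS, COLS)    # corner qubit: 2 couplings
```

This is why the article says "all but the edge qubits" get four connections: the coupling count is purely a function of position in the grid.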
Software, too

While the quantum supremacy result was a bit of a contrived problem, it actually has a potentially useful application, in that the processor will produce a set of truly random digits, one that could be audited and verified if needed. But while potentially valuable, that's unlikely to provide a large enough market to justify Google's investment here. So developing additional software applications is going to be essential for the success of this project in the long term.

A number of other companies have approached this issue by providing a cloud interface to their quantum hardware, even if said hardware had too few qubits to do anything useful. The goal was to allow people to gain experience working with a particular form of quantum processor and to encourage the development of the libraries and toolkits that will make future software development easier. Martinis was frank about this, saying, "resources dictated we do [the] powerful processor first and open it up to services later."

Here, Google researcher John Martinis stands in front of some of the hardware he has helped create. Google's version, with its bare metal and wooden platform, looks like it was made by graduate students, though the LEDs are a nice touch. (Image: John Timmer)

A Google quantum cloud service, however, is likely to be launched relatively soon. Nobody would give a deadline, but talk seemed to indicate next year is a likely target. Google has already put together an open source development framework for its hardware called Cirq, and the company built a quantum chemistry simulation toolkit called OpenFermion on top of it. Yet without any hardware to run the resulting software on, they're not going to pull researchers away from other platforms.

Dave Bacon, who leads the development of these toolkits, said that one of the challenges is that nobody's sure of the right abstractions to make at this point. With classical computers, he said, you don't have to care about transistor physics.
At the current stage—Google's calling it NISQ, for noisy intermediate-scale quantum—you need to squeeze the most out of limited hardware, and that level of abstraction may not be appropriate.

Bacon said that Google expected the same three uses that other quantum computing companies are focusing on: simulation of quantum systems like complex chemicals and biomolecules; machine learning; and optimization problems. Google has been hosting annual symposia with researchers in the field to find out what they'd be interested in doing, and Bacon said that these experiences emphasized that scaling up the number of qubits would be critical to bringing additional algorithms into play. But he echoed people outside of Google, acknowledging that nobody is really certain how useful these algorithms will be when the processors are still error-prone.

Still, Bacon highlighted one major change that the hardware has already enabled. A decade ago, he used to have to generate mathematical proofs to show a given algorithm would work as intended. "Now, I don't have to be smart," Bacon says. "I can just run it and see what happens."

The quantum computing landscape

Google's quantum supremacy announcement came at a time when other companies have already had cloud-based quantum processors available for well over a year (and D-Wave's quantum annealer is substantially older). Google's Neven dismissed this as a bit irrelevant, saying, "quantum computing is a marathon—we've tried to avoid petty competitiveness—it's not company vs. company, it's humankind vs. nature." But not all his competitors would agree.

IBM, for example, having heard that this announcement was forthcoming, took issue with the declaration of quantum supremacy. Its issue was two-fold. First, IBM has decided that any declaration of quantum supremacy is inappropriate in an era of error-prone computations. But its second issue was less semantic and more technical.
Google's argument for quantum supremacy focused on the claim that a simulation of its processor's behavior would take 10,000 years on a state-of-the-art supercomputer. But IBM noted that Google's argument was based in part on memory starvation, and supercomputers have hard disks that can hold temporary data during the computations. If that disk space is factored in, IBM argues, the calculation could take as little as 2.5 days. At a couple of minutes, the quantum processor beats that handily, but there's still the chance that algorithm optimizations will cut the margin considerably. Enlarge / The Santa Barbara crowd has given the Google logo a distinctly quantum twist. John Timmer This is somewhat similar to the experience D-Wave had, where every indication of a quantum advantage was quickly matched by computer scientists returning to the classical computing algorithms and finding ways of extracting speedups. And Google, to an extent, has expected this. Neven told us that the company had already funded "red team" researchers in academia to try to do similar optimizations. This is sort of a misdirection. Google will undoubtedly be able to add additional qubits, which will cause the classical simulation to slow down further. At the same time, this reaction does suggest that the existing players in the field are starting to get sensitive to competition—even if said competition doesn't even have a marketable product yet. There is the possibility that this competition will end up focusing on developer mindshare. Each company's processor will undoubtedly have distinctive hardware characteristics. As Bacon said, right now, software development involves talking fairly directly to the hardware. But he also told Ars that the differences aren't so substantial that we're likely to see any sort of vendor lock-in to a given processor unless some radically different technology begins to dominate, such as photon- or cold-atom-based computations. 
All of which suggests that, quantum supremacy or not, Google's entry into this market won't provide an immediate shake-up. Instead, it's likely to push everyone to accelerate their work on increasing the number of qubits and managing the error rate. In that sense, Neven's characterization of this being "humankind vs. nature" may not be so far off.

Source: Here’s what the people who claimed Google’s quantum supremacy have to say about it (Ars Technica)

(To view the article's image gallery, please visit the above link)