
Intel offers AI breakthrough in quantum computing


The AchieVer


Intel's senior vice president and head of Mobileye, Amnon Shashua, on Wednesday unveiled new research, done with colleagues at Hebrew University, that both establishes an important proof of the capabilities of deep learning and offers a way forward for computing some commonly intractable problems in quantum physics.

 
 

We don't know why deep learning forms of neural networks achieve great success on many tasks; the discipline has a paucity of theory to explain its empirical successes. As Facebook's Yann LeCun has said, deep learning is like the steam engine, which preceded the underlying theory of thermodynamics by many years.  

But some deep thinkers have been plugging away at the matter of theory for several years now.  

On Wednesday, the group presented a proof of deep learning's superior ability to simulate the computations involved in quantum computing. According to these thinkers, the redundancy of information that happens in two of the most successful neural network types, convolutional neural nets, or CNNs, and recurrent neural networks, or RNNs, makes all the difference.  

Amnon Shashua, who is the president and chief executive of Mobileye, the autonomous driving technology company bought by chip giant Intel last year for $14.1 billion, presented the findings on Wednesday at a conference in Washington, D.C. hosted by the National Academy of Sciences, called the Science of Deep Learning.

In addition to being a senior vice president at Intel, Shashua is a professor of computer science at the Hebrew University in Jerusalem, and the paper is co-authored with colleagues from there: Yoav Levine, the lead author, and Or Sharir, along with Nadav Cohen of the Institute for Advanced Study in Princeton.

Also: Facebook's Yann LeCun reflects on the enduring appeal of convolutions

The report, "Quantum Entanglement in Deep Learning Architectures," was published this week in the prestigious journal Physical Review Letters.  

 

The work amounts to both a proof of certain problems deep learning can excel at and a proposal for a promising way forward in quantum computing.

[Image] The team of Amnon Shashua and colleagues created a "CAC," or "convolutional arithmetic circuit," which replicates the re-use of information in a traditional CNN while making it work with the "Tensor Network" models commonly used in physics. (Image: Mobileye)

In quantum computing, the problem is somewhat the reverse of deep learning: lots of compelling theory, but as yet few working examples of the real thing. For many years, Shashua and his colleagues, and others, have pondered how to simulate quantum computing of the so-called many-body problem.  

Physicist Richard Mattuck has defined the many-body problem as "the study of the effects of interaction between bodies on the behaviour of a many-body system," where the bodies can be electrons, atoms, molecules, or various other entities.

What Shashua and team found, and what they say they've proven, is that CNNs and RNNs are better than traditional machine learning approaches such as the "Restricted Boltzmann Machine," a neural network approach developed in the 1980s that has been a mainstay of physics research, especially quantum theory simulation. 
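For context, the sketch below shows how an RBM is typically used as a wave-function ansatz in this line of physics research, in the spirit of neural-network quantum states. It is a minimal illustration, not the authors' code, and all sizes and parameter values here are made up.

```python
# Minimal RBM wave-function ansatz for a chain of spins, in the style of
# neural-network quantum states: the (unnormalized) amplitude is
# psi(s) = exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ij s_i).
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 12   # 6 spins, 12 hidden units (illustrative sizes)
a = rng.normal(scale=0.1, size=n_visible)              # visible biases
b = rng.normal(scale=0.1, size=n_hidden)               # hidden biases
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # spin-hidden couplings

def rbm_amplitude(s):
    """Unnormalized amplitude for a spin configuration s in {-1, +1}^n."""
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(b + s @ W))

s = rng.choice([-1.0, 1.0], size=n_visible)  # one random spin configuration
print(rbm_amplitude(s))
```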

Also: Google explores AI's mysterious polytope

 

"Deep learning architectures," they write, "in the form of deep convolutional and recurrent networks, can efficiently represent highly entangled quantum systems." 

Entanglement refers to correlations between bodies that arise in quantum systems. Actual quantum computing has the great advantage of being able to compute entanglements with terrific efficiency. Simulating that on conventional electronic computers can be extremely difficult, even intractable.
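To make "entanglement" concrete, the following sketch computes the von Neumann entanglement entropy of a two-qubit Bell state with plain NumPy. It illustrates the concept only; nothing here is from the paper.

```python
# Entanglement entropy of the Bell state |Phi+> = (|00> + |11>) / sqrt(2).
# The state is written as a 2x2 amplitude matrix: rows index qubit A,
# columns index qubit B.
import numpy as np

psi = np.array([[1.0, 0.0],
                [0.0, 1.0]]) / np.sqrt(2.0)

# The singular values of the amplitude matrix give the Schmidt spectrum
# between the two subsystems.
schmidt = np.linalg.svd(psi, compute_uv=False)
probs = schmidt ** 2

# Von Neumann entropy S = -sum p log2 p; 1.0 bit means maximal entanglement.
entropy = -np.sum(probs * np.log2(probs))
print(f"entanglement entropy: {entropy:.3f} bits")  # -> 1.000
```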

"Our work quantifies the power of deep learning for highly entangled wave function representations," they write, "theoretically motivating a shift towards the employment of state-of-the-art deep learning architectures in many-body physics research." 

[Image] The authors took a version of the recurrent neural net, or RNN, and modified it by adding data reuse, creating a "recurrent arithmetic circuit," or RAC. (Image: Mobileye)

The authors pursued the matter by taking CNNs and RNNs and applying to them "extensions" they have devised. They refer to this as a "simple 'trick'," one that involves the redundancy mentioned earlier. It turns out, they write, that the structure of CNNs and RNNs involves an essential "reuse" of information.

In the case of CNNs, the "kernel," the sliding window that is run across an image, overlaps at each step, so that parts of the image are ingested by the CNN multiple times. In the case of RNNs, the recurrent use of information at each layer of the network is a similar kind of reuse, in that case for sequential data points.
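A toy illustration may help make that reuse concrete. The sketch below (my own, not from the paper) slides a stride-1 window across a 1D signal and counts how many overlapping windows read each element; the names and sizes are illustrative.

```python
# Count how many overlapping stride-1 convolution windows read each input
# element of a 1D signal; interior elements are reused kernel_size times.
import numpy as np

signal = np.arange(10)        # toy 1D input
kernel_size = 3               # width of the sliding window
reads = np.zeros_like(signal)

# Slide the window across the signal and tally every read of each element.
for start in range(len(signal) - kernel_size + 1):
    reads[start:start + kernel_size] += 1

print(reads)  # -> [1 2 3 3 3 3 3 3 2 1]
```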

Also: Google says 'exponential' growth of AI is changing nature of compute

 

In both cases, "this architectural trait […] was shown to yield an exponential enhancement in network expressivity despite admitting a mere linear growth in the amount of parameters and in computational cost." In other words, CNNs and RNNs, by virtue of the redundancy achieved via stacking many layers, have a more efficient "representation" of things in computing terms.

For example, a traditional "fully-connected" neural network, what the authors term a "veteran" neural network, requires computing time that scales as the square of the number of bodies being represented. An RBM, they write, is better, with compute time that scales linearly in the number of bodies. But CNNs and RNNs can be better still, with their required compute time scaling as the square root of the number of bodies.
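A back-of-the-envelope comparison shows how quickly those scalings diverge. The numbers below are illustrative only, since the article quotes asymptotic rates rather than constant factors.

```python
# Compare the compute-time scalings quoted in the article, up to constant
# factors: N^2 for a fully connected ("veteran") network, N for an RBM,
# and sqrt(N) for deep convolutional/recurrent networks.
import math

for n_bodies in (100, 10_000, 1_000_000):
    fully_connected = n_bodies ** 2
    rbm = n_bodies
    cnn_rnn = math.isqrt(n_bodies)
    print(f"N={n_bodies:>9,}: FC~{fully_connected:>13,}  "
          f"RBM~{rbm:>9,}  CNN/RNN~{cnn_rnn:>5,}")
```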

Those properties "indicate a significant advantage in modeling volume-law entanglement scaling of deep-convolutional networks relative to competing veteran neural-network based approaches," they write. "Practically, overlapping-convolutional networks […] can support the entanglement of any 2D system of interest up to sizes 100 × 100, which are unattainable by competing intractable approaches." 

Source

I read the above post....and I don't have a Scooby Doo what any of it meant!!!🤣🤣🤣 I need to go lie down in a dark room.
