The “Google Silicon” team gives us a tour of the Pixel 6’s Tensor SoC

    Karlston


    Learn more about the Google Tensor from the people who designed it.

    A promo image for the Google Tensor SoC.
    Google

    The Pixel 6 is official, with a wild new camera design, incredible pricing, and the new Android 12 OS. The headline component of the device has to be the Google Tensor "system on chip" (SoC), however. This is Google's first main SoC in a smartphone, and the chip has a unique CPU core configuration and a strong focus on AI capabilities.

     

    Since when is Google a chip manufacturer, though? What are the goals of the Tensor SoC? Why does it have such an unusual design? To get some answers, we sat down with members of the "Google Silicon" team—a name I don't think we've heard before.

     

    Google Silicon is the group responsible for Google's mobile chips. That means the team designed the Titan M security chips in the Pixel 3 and later, along with the Pixel Visual Core in the Pixel 2 and 3. The group has been working on main SoC development for three or four years, but it remains separate from the Cloud team's silicon work on things like YouTube transcoding chips and Cloud TPUs.

     

    Phil Carmack is the vice president and general manager of Google Silicon, and Monika Gupta is the senior director on the team. Both were nice enough to tell us a bit more about Google's secretive chip.

     

    Most mobile SoC vendors license their CPU core designs from Arm, which also offers some (optional) guidelines on how to design a chip using those cores. And apart from Apple, which designs its own cores, most vendors stick pretty closely to these guidelines. This year, the most common design is a chip with one big Arm Cortex-X1 core, three medium A78 cores, and four slower, lower-power A55 cores for background processing.

     

    Now wrap your mind around what Google is doing with the Google Tensor: the chip still has four A55s for the small cores, but it has two Arm Cortex-X1 CPUs at 2.8 GHz to handle foreground processing duties.

     

    For "medium" cores, we get two 2.25 GHz A76 CPUs. (That's A76, not the A78 everyone else is using—these A76s are the "big" CPU cores from last year.) When Arm introduced the A78 design, it said that the core—on a 5nm process—offered 20 percent more sustained performance in the same thermal envelope compared to the 7nm A76. Google is now using the A76 design but on a 5nm chip, so, going by ARM's description, Google's A76 should put out less heat than an A78 chip. Google is basically spending more thermal budget on having two big cores and less on the medium cores.

     

    So the first question for the Google Silicon team is: what's up with this core layout?

     

    Carmack's explanation is that the dual-X1 architecture is a play for efficiency at "medium" workloads. "We focused a lot of our design effort on how the workload is allocated, how the energy is distributed across the chip, and how the processors come into play at various points in time," Carmack said. "When a heavy workload comes in, Android tends to hit it hard, and that's how we get responsiveness."

     

    This is referring to the "rush to sleep" behavior most mobile chipsets exhibit, where something like loading a webpage gets everything thrown at it so the task finishes quickly and the device can drop back to a lower-power state.
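
    In pseudo-Python, the idea looks something like this (the frequencies and the governor calls are hypothetical stand-ins; on a real device this lives in the kernel's cpufreq and scheduler code, not in app code):

```python
# Purely illustrative "rush to sleep" sketch: ramp up, finish the bursty task,
# then drop back to idle. The CPU-governor API here is a made-up stand-in.
def handle_burst(task, cpu_governor):
    cpu_governor.set_frequency_ghz(2.8)      # hypothetical: boost the big cores
    try:
        task.run()                           # e.g., finish loading the webpage quickly
    finally:
        cpu_governor.set_frequency_ghz(0.6)  # hypothetical: return to a low-power state
```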

     

    "When it's a steady-state problem where, say, the CPU has a lighter load but it's still modestly significant, you'll have the dual X1s running, and at that performance level, that will be the most efficient," Carmack said.

     

    He gave a camera view as an example of a "medium" workload, saying that you "open up your camera and you have a live view and a lot of really interesting things are happening all at once. You've got imaging calculations. You've got rendering calculations. You've got ML [machine learning] calculations, because maybe Lens is on detecting images or whatever. During situations like that, you have a lot of computation, but it's heterogeneous."

     

    A quick aside: "heterogeneous" here means using more bits of the SoC for compute than just the CPU, so in the case of Lens, that means CPU, GPU, ISP (the camera co-processor), and Google's ML co-processor.
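
    As a purely conceptual sketch, a single live-view frame might fan out across those blocks like this (the `isp_process`, `gpu_render_preview`, and `tpu_run_lens` functions are hypothetical stand-ins, not a real Android or Tensor API):

```python
# Illustrative only: one camera frame dispatched to several SoC blocks at once.
from concurrent.futures import ThreadPoolExecutor

def process_live_view_frame(frame):
    with ThreadPoolExecutor() as pool:
        imaging   = pool.submit(isp_process, frame)         # ISP: demosaic, denoise
        rendering = pool.submit(gpu_render_preview, frame)  # GPU: draw the viewfinder
        lens      = pool.submit(tpu_run_lens, frame)        # ML block: Lens detection
    return imaging.result(), rendering.result(), lens.result()
```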

    Carmack continued, "You might use the two X1s dialed down in frequency so they're ultra-efficient, but they're still at a workload that's pretty heavy. A workload that you normally would have done with dual A76s, maxed out, is now barely tapping the gas with dual X1s."
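
    Here's a hedged back-of-the-envelope illustration of why that can work (the ratios are my assumptions, not Google's numbers): dynamic CPU power scales roughly with frequency times voltage squared, so a wider core like the X1 can match a maxed-out A76's throughput while running at a lower clock and voltage.

```python
# Dynamic power ~ C * f * V^2. All ratios below are illustrative assumptions.
def relative_power(freq_ratio, volt_ratio, capacitance_ratio=1.0):
    return capacitance_ratio * freq_ratio * volt_ratio ** 2

a76_maxed_out  = relative_power(1.0, 1.0, 1.0)    # baseline: A76 at full tilt
x1_dialed_down = relative_power(0.7, 0.85, 1.5)   # assume X1: ~1.5x switching capacitance,
                                                  # ~70% clock and ~85% voltage for the
                                                  # same work per second (higher IPC)
print(a76_maxed_out, round(x1_dialed_down, 2))    # 1.0 vs ~0.76 -> less power, same work
```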

     

    The camera is a great case study, since previous Pixel phones have failed at exactly this kind of task. The Pixel 5 and 5a both regularly overheat after three minutes of 4K recording. I'm not allowed to talk too much about this right now, but I did record a 20-minute, 4K, 60 fps video on a Pixel 6 with no overheating issues. (I got bored after 20 minutes.)

     


    This is what the phone looks like, if you're wondering.
    Google

    So, is Google pushing back on the idea that one big core is a good design? The idea of using one big core has only recently popped up in Arm chips, after all. We used to have four "big" cores and four "little" cores without any of this super-sized, single-core "prime" stuff.

     

    "It all comes down to what you're trying to accomplish," Carmack said. "I'll tell you where one big core versus two wins: when your goal is to win a single-threaded benchmark. You throw as many gates as possible at the one big core to win a single-threaded benchmark... If you want responsiveness, the quickest way to get that, and the most efficient way to get high-performance, is probably two big cores."

     

    Carmack warned that this "could evolve depending on how efficiency is mapped from one generation to the next," but for the X1, Google claims that this design is better.

     

    "The single-core performance is 80 percent faster than our previous generation; the GPU performance is 370 percent faster than our previous generation. I say that because people are going to ask that question, but to me, that's not really the story," Carmack explained. "I think the one thing you can take away from this part of the story is that although we're a brand-new entry into the SoC space, we know how to make high-frequency, high-performance circuits that are dense, fast, and capable... Our implementation is rock solid in terms of frequencies, in terms of frequency per watt, all of that stuff. That's not a reason to build an all-new Tensor SoC."

    You knew this was coming: Google wants to talk about AI

    No, the traditional parts of a smartphone SoC are not why Google built a smartphone SoC. It wants to push the envelope, of course, in the onboard processing of artificial intelligence and machine learning. This is Google being its Googliest when it comes to chip design.

     

    "For Google as a company, we just apply AI to everything we do," Google Silicon Senior Director Monika Gupta said. "Even our cafeteria menus probably are analyzed by AI and designed based on our patterns and usage."

     

    I would love to give you some kind of stat about how powerful the AI processing in the Google Tensor SoC is, but Google isn't interested in talking about AI numbers like TeraOPS.

     

    "I think we don't really have a very good modern way of comparing ML processors simply because most of the benchmarks you see are very backward-looking," Carmack said.

     

    Plus—returning to that "heterogeneous" computing comment—the whole chip is called "Google Tensor," borrowing the brand name of Google's AI efforts, which you see in the "TensorFlow" machine-learning library and in the "Tensor Processing Units" from Google Cloud. Every component of the SoC is involved in running Google's AI algorithms, which is why the whole chip, and not just the ML co-processor, gets the "Tensor" name.

     

    It's easy for my eyes to glaze over when companies talk about AI processing. Android OEMs like Huawei have been hyping AI co-processors for years, but when asked to show tangible user benefits that haven't been possible on previous smartphones, you mostly hear crickets. Even Google is guilty of this: the Pixel Visual Core in the Pixel 2 was never used by the actual Google camera app; it was only for third parties.

     

    The proof of Tensor's worth will be in what new capabilities it brings to the table, and Google appears to be delivering. It's taking a vertical approach to AI with Tensor, designing both the AI hardware and the AI software, and Google says that the Pixel 6's ML code can't run fast enough or efficiently enough on older devices. Gupta said that Google is "taking the latest and greatest coming from Google research, and we've put it onto Tensor and Pixel 6, and we do it power-efficiently."

     

    So what are those new capabilities? Here's one: previous Pixel phones' incredible camera performance despite their ancient sensors is thanks to Google's HDR+ algorithm, which, with a single button press, does exposure stacking across 10 photos taken in half a second and merges them into a single photo using machine learning. Thanks to that ancient camera hardware, though, the video quality on Pixel phones has been pretty bad, because there's just no way you can run something like image stacking on a video... until now! The Pixel 6 can run Google's HDR algorithm on 4K video for every single frame at 60 fps. Welcome to the world of video image stacking, brought to you by Google Tensor.
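
    Some rough arithmetic (my numbers, not Google's, assuming roughly a 12 MP still from the older sensor) shows how much heavier the video case is:

```python
# Back-of-the-envelope pixel throughput: HDR+ photo burst vs. per-frame 4K60 video.
PHOTO_MP = 12        # assumption: ~12 MP still image
BURST_FRAMES = 10    # HDR+ stacks ~10 frames...
BURST_SECONDS = 0.5  # ...captured in about half a second (per the article)
photo_rate = PHOTO_MP * BURST_FRAMES / BURST_SECONDS          # MP/s, but only briefly
print(f"HDR+ photo burst: ~{photo_rate:.0f} MP/s, briefly")   # ~240 MP/s

VIDEO_MP = 3840 * 2160 / 1e6   # ~8.3 MP per 4K frame
FPS = 60
video_rate = VIDEO_MP * FPS                                        # MP/s, sustained
print(f"4K 60 fps video: ~{video_rate:.0f} MP/s, sustained")       # ~500 MP/s
```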

     

    The video version of the HDR+ algorithm is called "HDRnet," and Google actually built a specific accelerator for this algorithm into the Tensor Image Signal Processor (ISP). Gupta said that this should "bring the signature Pixel look to videos."

     

    Did I mention my review unit recorded a 4K, 60 fps video for 20 minutes straight, while running HDRnet on every single frame, without any overheating issues? We'll have to see how good the footage actually looks when we review the phone, but all that talk about sustained "medium" performance and the weird CPU layout is starting to come together here. This sounds like a real, tangible 10x improvement over previous Pixel devices, which, again, lasted around three minutes in 4K video mode with no fancy image stacking.

     

    Being able to do an incredible number of machine-learning tasks in a split second can also lead to some fun new image techniques. There's "face unblur" functionality in the camera, which is a new application of Google's image-stacking techniques. If the Pixel 6 camera viewfinder detects a face, and that face is blurry from movement, the Pixel 6 will actually fire up a second camera and take two photos at once. The main camera will do the usual exposure for a low-noise shot, while the ultra-wide camera will take a faster exposure that freezes any movement. Then Google will do an "align and merge," and you'll get a single, good-looking picture with a clear face. Again, it's something to test, but imagine not having any more blurry photos of your fidgety kids.
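
    For a rough idea of what an "align and merge" can look like in generic terms, here's a sketch built from off-the-shelf OpenCV under a pile of assumptions—this is not Google's actual pipeline, and the `face_mask` input is hypothetical:

```python
# Illustrative align-and-merge: warp the sharp short-exposure frame onto the
# low-noise long-exposure frame, then blend the face region from the sharp one.
import cv2
import numpy as np

def align_and_merge(long_exposure, short_exposure, face_mask):
    # Match features between the two frames (grayscale for detection).
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(short_exposure, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = orb.detectAndCompute(cv2.cvtColor(long_exposure, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    # Estimate a homography and warp the short exposure into alignment.
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
    h, w = long_exposure.shape[:2]
    aligned = cv2.warpPerspective(short_exposure, H, (w, h))

    # Sharp face from the fast exposure, low-noise background from the slow one.
    mask = (face_mask.astype(np.float32) / 255.0)[..., None]
    return (aligned * mask + long_exposure * (1.0 - mask)).astype(np.uint8)
```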


    One of the many, many AI features of the Pixel 6. This one makes phone menus easier.
    Google

    Google says Tensor has also led to big strides in Google's voice recognition, with the Pixel 6 featuring what Gupta called "the most advanced speech recognition model ever released by Google." Voice recognition will now automatically attempt punctuation like commas and periods based on context and pauses in speech. It will try to pull in proper spellings of names via your contact list and previous usage. Gupta said that Tensor does this all while using half as much power as previously possible and that, "because Tensor allows us to run our models so efficiently, we essentially open up a power budget or a thermal headroom so we can layer on more and more advanced technology to make our features even better. We're able to keep up with more nuances of speech because we're running it so efficiently."
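
    Just to illustrate the pause-based half of that claim, here's a toy heuristic—nothing like Google's actual model, with arbitrary thresholds of my own:

```python
# Toy punctuation-from-pauses heuristic: long silences become periods, shorter
# ones become commas. Thresholds are arbitrary illustrative values.
def punctuate(words, pause_after_s, comma_at=0.35, period_at=0.8):
    out = []
    for word, pause in zip(words, pause_after_s):
        if pause >= period_at:
            out.append(word + ".")
        elif pause >= comma_at:
            out.append(word + ",")
        else:
            out.append(word)
    return " ".join(out)

print(punctuate(["okay", "send", "it", "tomorrow"], [0.5, 0.1, 0.1, 1.0]))
# -> "okay, send it tomorrow."
```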

     

    This is on-device, offline voice recognition, and it applies everywhere you see a microphone button, like the Google Assistant, Gboard, and Google Translate. Translate has a whole extra batch of Tensor-powered features and can now do live translations. Google Assistant hotword detection is supposed to be improved, too, thanks to more sophisticated ML that should make it work better in noisy environments.

     

    There are a lot of claims to go over once we finally get a second to breathe and take a closer look at a working Pixel 6 unit. We'll have a full review sometime soon.

     

     
