Everything Google Announced Today: Android, AI, Holograms
Tuesday marked the return of Google’s annual developer conference. The 2020 edition of the event was canceled because of the pandemic, but today Google I/O returned as a virtual event. The three-day conference began with an opening keynote address, where Google executives and product managers took turns showing off new software features, new AI-powered tools, and a zany prototype video booth made for hyperrealistic teleconferencing.
Here’s everything Google announced.
Android 12 brings many visual changes that make the next version of the mobile operating system a little more personal and playful. Pick up your phone and the lock screen will light up from the bottom, but tap the power button instead and the pixels will illuminate from the side of the phone. If there are no notifications on the lock screen, the clock will take up more space. Small touches like these run throughout the system’s design: the color tones of widgets and the notification drop-down menu can adjust to match your wallpaper.
Many of these changes fall under a new design language Google calls Material You. It’s coming first to Google hardware and software this fall, and it lets you change the color palette of all your apps, though you’ll be confined to the colors Google has chosen for its “Material palette.”
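Material You’s theming engine isn’t fully documented yet, but Android 12 already exposes the wallpaper-derived tonal palettes as system color resources that any app can read. Here’s a minimal Kotlin sketch, assuming an Android 12 (API 31) device; the three resources shown are just a small sample of the full palette:

```kotlin
import android.content.Context
import android.os.Build

// Reads a few of the wallpaper-derived "Material You" tonal palette colors
// that Android 12 exposes as system color resources.
fun wallpaperAccentColors(context: Context): List<Int> {
    // These resources only exist on Android 12 (API 31, "S") and later.
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.S) return emptyList()
    return listOf(
        android.R.color.system_accent1_500, // primary accent, mid tone
        android.R.color.system_accent2_500, // secondary accent, mid tone
        android.R.color.system_neutral1_500 // neutral surface tone
    ).map { context.getColor(it) } // resolved as packed ARGB ints
}
```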
Android’s interface has also been given an overall redesign, with new widgets, a fresh look built around larger and bolder quick settings tiles, and a simpler settings menu. You’ll find new types of tiles in the quick settings menu too, such as Google Pay and smart-home controls. Thanks to under-the-hood improvements, the OS is smoother, animations are more responsive, and everything about the interface is a little faster and more efficient. The first beta version is available now, and the official release will likely roll out in August or September.
Perhaps in response to Apple’s recent announcement that it would disable ad tracking between apps by default, Google has emphasized newfangled privacy features of its own.
You can read our own Lily Hay Newman’s detailed rundown of Android’s new privacy features. There’s a new privacy dashboard that lets users view app permission settings, see which data is being accessed by which apps, and revoke app tracking privileges, all from one screen. An indicator will now also pop up in the top corner to let you know when an app is using your mic or camera. More nuanced “approximate location” controls let you give an app a general sense of where you are, rather than the ability to pinpoint exactly which bathroom stall you’re in.
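On the developer side, the approximate-location choice rides on Android’s existing runtime permission flow: when an app requests both fine and coarse location on Android 12, the system dialog lets the user grant only the approximate variant. A minimal Kotlin sketch using the AndroidX Activity Result API:

```kotlin
import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class LocationActivity : AppCompatActivity() {
    // Requesting BOTH permissions lets Android 12's dialog offer a
    // "precise" vs. "approximate" choice; the user may grant only
    // coarse location even though the app asked for fine.
    private val locationRequest = registerForActivityResult(
        ActivityResultContracts.RequestMultiplePermissions()
    ) { grants ->
        val precise = grants[Manifest.permission.ACCESS_FINE_LOCATION] == true
        val approximate = grants[Manifest.permission.ACCESS_COARSE_LOCATION] == true
        when {
            precise -> { /* exact fixes available */ }
            approximate -> { /* neighborhood-level location only */ }
            else -> { /* user declined location access */ }
        }
    }

    fun askForLocation() {
        locationRequest.launch(
            arrayOf(
                Manifest.permission.ACCESS_FINE_LOCATION,
                Manifest.permission.ACCESS_COARSE_LOCATION
            )
        )
    }
}
```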
It’s the Zoom of the future! Kind of. Maybe the Google Meet of the future. While still a prototype, Google’s Project Starline is a virtual meeting booth with holograms. (Don’t miss our exclusive first look at the tech.) Two people sit in their respective booths in different locations, and each one sees the other beamed in with tech that makes them look like they’re sitting right across the table. Thanks to depth sensors, multiple cameras, and spatial audio, Starline makes you feel like you’re really there with the other person, as opposed to staring at yet another talking head on a video screen. It’s currently just a proof of concept, and we might see it in the real world within five years, according to Google.
Google is revamping its smartwatch operating system, with some help from Samsung. You can read our exclusive deep dive on the changes coming to Wear this year, but here are some highlights.
The next version of Wear OS—for now, just called Wear—will include some features pulled right from Samsung’s current wearable OS, Tizen. (Samsung’s forthcoming wearables will also use the Wear operating system.) Google says this and other optimizations will offer better battery life and up to 30 percent faster performance. Some Google apps will work directly on the Wear platform without requiring a constant phone connection, including turn-by-turn directions on Google Maps and offline music listening on streaming services like YouTube Music and (eventually) Spotify. Google is also putting its acquisition of Fitbit to use, imbuing the tech with standard Fitbit features like health tracking and workout progress.
Google gives its users a free place to upload all of their pictures, and that policy affords the company a huge benefit: a massive dataset it can use to hone its computer vision prowess. Today, we saw some enhancements coming to Google Photos that are powered by these machine intelligence experiments. First is a feature that automatically collects photos into albums using visual patterns in the images to identify photos that probably belong together. The AI engine looks at all your photos to find similar shapes and colors, and it can spot patterns the human eye might miss. As an example, Google showed pictures from one of its engineers. The Photos AI was able to assemble a gallery from a specific backpacking trip he took by pulling in all the pictures where his orange backpack appears. Another example: the AI can spot all of your shots with a menorah in them and put together a collection of Hanukkah memories.
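Google doesn’t say exactly how Photos finds these patterns, but the standard computer-vision recipe is to reduce each image to a feature vector and group images whose vectors land close together. The Kotlin toy below sketches that idea with cosine similarity and a greedy grouping pass; the Photo type, the embeddings, and the 0.85 threshold are all invented for illustration:

```kotlin
import kotlin.math.sqrt

// Toy photo grouping. Assumes each photo has already been reduced to a
// feature vector; a real system would use a learned image embedding.
data class Photo(val id: String, val embedding: FloatArray)

// Cosine similarity between two vectors of equal length.
fun cosine(a: FloatArray, b: FloatArray): Float {
    var dot = 0f; var normA = 0f; var normB = 0f
    for (i in a.indices) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}

// Greedy single-pass grouping: each photo joins the first group whose
// founding photo is similar enough, otherwise it starts a new group.
fun groupBySimilarity(photos: List<Photo>, threshold: Float = 0.85f): List<List<Photo>> {
    val groups = mutableListOf<MutableList<Photo>>()
    for (photo in photos) {
        val home = groups.firstOrNull {
            cosine(it.first().embedding, photo.embedding) >= threshold
        }
        if (home != null) home.add(photo) else groups.add(mutableListOf(photo))
    }
    return groups
}
```

A production system would cluster far more carefully, and at vastly larger scale, but the shape of the problem is the same: similarity in vector space standing in for “these photos belong together.”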
Importantly, Photos users can control which photos show up in these collections. You can remove specific photos from memories, rename the memories, or prevent specific photos from ever showing up. This is a boon for anyone who’s lived through a heavily photographed life experience they’d rather forget.
On the creepier end of things, the company showed a new tool that can turn two static images into one animated image. It looks at the objects in the two images, then inserts interpolated frames to make animations that were never actually captured by the camera. Yes, it makes two still photos come to life. The effect is very unsettling.
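Google hasn’t detailed the model behind the effect, which synthesizes in-between frames the camera never captured. As a deliberately crude stand-in for the idea, here’s the simplest possible “interpolation” in Kotlin, a per-pixel cross-fade between two same-sized frames; a learned interpolator would instead estimate motion and render genuinely new content:

```kotlin
// Crude stand-in for learned frame interpolation: a straight per-pixel
// cross-fade between two same-sized frames of packed ARGB pixels.
fun crossFade(frameA: IntArray, frameB: IntArray, steps: Int): List<IntArray> =
    (1..steps).map { step ->
        val t = step / (steps + 1f) // blend weight, strictly between 0 and 1
        IntArray(frameA.size) { i -> blendArgb(frameA[i], frameB[i], t) }
    }

// Blends two packed ARGB pixels channel by channel.
fun blendArgb(a: Int, b: Int, t: Float): Int {
    fun channel(shift: Int): Int {
        val ca = (a shr shift) and 0xFF
        val cb = (b shr shift) and 0xFF
        return ((ca + (cb - ca) * t).toInt() and 0xFF) shl shift
    }
    return channel(24) or channel(16) or channel(8) or channel(0)
}
```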
Google is enhancing Chrome’s built-in password manager to help users keep better track of their various account credentials across desktop and mobile. First, there’s a new password import tool that helps new users aggregate their many passwords into Google’s manager. Once the passwords are stored there, users will have an easier time deploying them outside of Chrome; tighter integration between Chrome and Android will store passwords and auto-fill information for apps as well as websites in a way that feels more seamless. Google’s password manager already alerts you to security breaches on the web that may have compromised your passwords. Now a new feature adds one helpful step to that alert: a quick-fix tool that guides you through the process of changing any passwords that have been compromised.
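Google hasn’t published the protocol behind these breach alerts, and its Password Checkup system is more elaborate than anything that fits here. But the general shape of a privacy-preserving breach lookup is easy to show. The Kotlin sketch below uses the k-anonymity range query of the separate, public Pwned Passwords API: only the first five characters of the password’s SHA-1 hash ever leave the device, and the full comparison happens locally.

```kotlin
import java.net.URL
import java.security.MessageDigest

// Checks a password against the public Pwned Passwords range API using
// k-anonymity: only a 5-character hash prefix is sent over the network.
// (Blocking I/O; run this off the main thread in a real app.)
fun looksBreached(password: String): Boolean {
    val sha1 = MessageDigest.getInstance("SHA-1")
        .digest(password.toByteArray())
        .joinToString("") { "%02X".format(it) }
    val prefix = sha1.take(5) // the only data that leaves the device
    val suffix = sha1.drop(5) // compared locally against the response
    val response = URL("https://api.pwnedpasswords.com/range/$prefix").readText()
    // Each response line is "HASH_SUFFIX:COUNT"; a match means the
    // password has appeared in a known breach.
    return response.lineSequence().any { it.substringBefore(':') == suffix }
}
```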
Of course, Google isn’t the only company that wants to manage your passwords for you. We have a list of excellent options in our password manager guide—including some advice about why in-browser options like Google’s are more limited.
If you’ve been lucky enough to have a job that’s allowed you to work from home for the past 14 months, you’re probably used to living your work life in the cloud. Google’s new remote working tools aim to make that a little easier. Smart Canvas is a project management tool that lets multiple users work together across different document types. They can keep track of progress with checklist items tagged to specific dates and people, and brainstorm ideas live in one place.
Google Meet, the video chat platform, will soon be integrated directly into Google Docs, Sheets, and Slides. You’ll be able to click the little Meet button in the top corner, and collaborators can pop up on video in a column alongside the doc to argue about what gets edited. A new Companion Mode in Meet is meant to display members of a team in more equally sized tiles, with better noise cancellation and automatic tweaks to zoom and lighting that make all participant videos more visually consistent. Anyone watching who needs captions can turn them on using live transcription, or even have them translated into one of Google’s supported languages.
Google showed off some new AI-powered conversational capabilities that will eventually turn up in products that use Google Assistant. First, it’s developed a new conversational model called LaMDA that can hold a conversation with you, either typed or spoken, on any topic you’re curious about. The AI looks up information on the topic while you’re talking, then enhances the conversation in a natural way by weaving facts and contextual info into its answers. What we saw on Tuesday was just a controlled demo, but the LaMDA model really does look like it could make conversations with a computer feel even more human.
There’s another natural-language processing model headed to Google’s Search tools. Dubbed the Multitask Unified Model, or MUM, it’s intended to make sense of longer, multi-pronged questions submitted by users, Google says. In theory, you could ask it to compare different vacation destinations, or tell you what kind of gear you’ll need to bring on a hike. It can gather information from websites in other languages, then use what it finds to uncover even more relevant information published in your native language. That way, what may be the most pertinent info on the web isn’t locked behind a language barrier.
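MUM is reportedly trained multilingually, so it doesn’t literally translate a query, search, and translate back. Still, that pipeline is a reasonable mental model for the behavior described above. In the hypothetical Kotlin sketch below, translate and search are stand-ins for models and services that aren’t public:

```kotlin
// Hypothetical translate-search-translate pipeline. Both function
// parameters stand in for non-public models/services; this is a mental
// model of MUM's cross-language behavior, not Google's implementation.
fun crossLanguageResults(
    query: String,
    userLang: String,
    sourceLangs: List<String>,
    translate: (text: String, from: String, to: String) -> String,
    search: (query: String, lang: String) -> List<String>
): List<String> =
    sourceLangs.flatMap { lang ->
        val localizedQuery = translate(query, userLang, lang)
        search(localizedQuery, lang).map { result ->
            translate(result, lang, userLang) // surface in the user's language
        }
    }
```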
These enhancements are part of Google’s larger effort to understand the meaning and context of questions the way a human might. That said, Google says the features are still in the experimental phase, so it’ll be a while before the Assistant starts making decisions about any pod bay doors.
Google is tweaking bits of its Maps app in an effort to offer users more real-time information. When you’re asking for directions, Google will present an option for “eco-friendly routes” that factor in distance and road or traffic conditions to find a more fuel-efficient way to get where you’re going. A “safer routing” feature in Maps can analyze road lanes and traffic patterns to help you avoid what it calls “hard braking moments,” when traffic slows down unexpectedly.
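Google hasn’t published its eco-routing cost model, but conceptually this is ordinary shortest-path routing where each road segment’s cost reflects estimated fuel use instead of just distance or time. Here’s a hedged Kotlin sketch using Dijkstra’s algorithm; the congestion penalty in ecoCost is invented for illustration:

```kotlin
import java.util.PriorityQueue

// Road segment: destination node, length, and a 0..1 congestion score.
data class Edge(val to: Int, val meters: Double, val congestion: Double)

// Invented cost model: congestion inflates a segment's effective "fuel
// cost" by up to 60 percent. Google's real signals are not public.
fun ecoCost(e: Edge): Double = e.meters * (1.0 + 0.6 * e.congestion)

// Plain Dijkstra over an adjacency list, minimizing total eco cost.
fun cheapestEcoRoute(graph: List<List<Edge>>, start: Int, goal: Int): Double {
    val best = DoubleArray(graph.size) { Double.POSITIVE_INFINITY }
    best[start] = 0.0
    val queue = PriorityQueue<Pair<Int, Double>>(compareBy<Pair<Int, Double>> { it.second })
    queue.add(start to 0.0)
    while (queue.isNotEmpty()) {
        val (node, cost) = queue.poll()
        if (node == goal) return cost
        if (cost > best[node]) continue // stale queue entry, already improved
        for (edge in graph[node]) {
            val next = cost + ecoCost(edge)
            if (next < best[edge.to]) {
                best[edge.to] = next
                queue.add(edge.to to next)
            }
        }
    }
    return Double.POSITIVE_INFINITY // goal unreachable
}
```

Swap the cost function and the same search produces the fastest or shortest route instead, which is presumably why Maps can offer eco-friendly routing as just another option.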
If you’re walking around, there are also improvements to Google’s AR mode, Live View, that help contextualize where you are by analyzing street signs and providing information like “busyness” levels for whole neighborhoods instead of just specific restaurants and shops. Live View also now works indoors, so you can see that contextual info inside a train station or a mall. The main Maps tool will also tailor what it shows you to the time of day and your location. Open Maps in the morning and you’ll see pins for breakfast options. Open Maps in a city you’ve never visited and you’ll see tourist spots and popular attractions.
In an effort to make you even more likely to buy stuff on the internet, Google has tweaked some of its shopping tools. Users can now search the images in screenshots on their phone with Google Lens, and link third-party memberships directly to their Google account. Also, the days when you could idly add a 5-pound bag of gummy bears to your shopping cart and then forget about it are gone. Now, whenever you open a new tab in Chrome, Google will show you all of the pending purchases you have sitting in shopping carts around the web.
Google also announced a Shopify integration feature, which will let sellers who use Shopify make their products appear across search, Maps, images, Google Lens, and YouTube.
Update, Tuesday May 18 at 6:20 pm: This story was updated to further clarify the way the Multitask Unified Model gathers information across websites published in different languages.