
Google’s ambient computing vision is changing how the company works

source link: https://www.theverge.com/23065820/google-io-ambient-computing-pixel-android-phones-watches-software

Photo Illustration by Grayson Blackmon / The Verge

Pixel by Pixel: how Google is trying to focus and ship the future

Making ambient computing happen is forcing Google to change how it does everything

By David Pierce | May 11, 2022, 3:01pm EDT

The story of this year’s Google I/O actually started three years ago.

At I/O 2019, onstage at the Shoreline Amphitheatre in Mountain View, California, Rick Osterloh, Google’s SVP of devices and services, laid out a new vision for the future of computing. “In the mobile era, smartphones changed the world,” he said. “It’s super useful to have a powerful computer wherever you are.” But he described an even more ambitious world beyond that, where your computer wasn’t a thing in your pocket at all. It was all around you. It was everything. “Your devices work together with services and AI, so help is anywhere you want it, and it’s fluid. The technology just fades into the background when you don’t need it. So the devices aren’t the center of the system — you are.” He called the idea “ambient computing,” nodding to a concept that has floated around Amazon, Apple, and other companies over the last few years.

One easy way to interpret ambient computing is around voice assistants and robots. Put Google Assistant in everything, yell at your appliances, done and done. But that’s only the very beginning of the idea. The ambient computer Google imagines is more like a guardian angel or a super-sentient Star Wars robot. It’s an engine that understands you completely and follows you around, churning through and solving for all the stuff in your life. The small (when’s my next appointment?) and the big (help me plan my wedding) and the mundane (turn the lights off) and the life-changing (am I having a heart attack?). The wheres and whens and hows don’t matter, only the whats and whys. The ambient computer isn’t a gadget — it’s almost a being; it’s the greater-than-the-sum-of-its-parts whole that comes out of a million perfectly connected things.

Which is a problem for Google. The company is famously decentralized and non-hierarchical, and it can sometimes seem like every single engineer on staff is given the green light to ship whatever they made that week. And so, since that day in 2019, Google has mostly continued to do what Google always does, which is build an unbelievable amount of stuff, often without any discernible strategy or plan behind it. It’s not that Google didn’t have a bigger vision; it’s just that no one seemed to be interested in doing the connective work required for the all-encompassing, perfectly connected future Osterloh had imagined. Google was becoming a warehouse full of cool stuff rather than an ecosystem.

But over the last couple of years, Google has begun to change in order to meet this challenge. Osterloh’s devices team, for instance, has completely reset its relationship with the Android team. For a long time, the company proudly maintained a wall between Pixel and Android, treating its internal hardware team like any other manufacturer. Now, Google treats Pixel like the tip of the spear: it’s meant to be both a flagship device and a development platform through which Google can build features it then shares with the rest of the ecosystem.

“We really sort of co-design where things are headed,” Osterloh said. “I think that’s just sort of the nature of how computers have changed, and computing models have changed.” Both teams share visions of an ambient future, he said. “And we’re working on it together.”

Around the company, these related teams and products are starting to come closer together. They’re building on unified tech, like Google’s custom Tensor processors, and on common concepts like conversational AI.

As a result, Google I/O feels unusually… coherent this year. Google is trying — harder than I can remember — to build products that not only work well but work well together. Search is becoming a multisensory, multi-device proposition that understands both who’s searching and what they’re really looking for. It’s also extending the search experience far beyond just questions and answers. It’s making Android more context- and content-aware so that your phone changes to match the things you do on it. It’s emphasizing natural interactions so that you can get information without memorizing a rigid set of commands. It’s building the hardware ecosystem it needs to make all that work everywhere and the software to match.

Now, let’s be very clear: Google’s work is only just beginning. It has to win market share in device categories it has failed for years to capture. It has to build new experiences inside new and old devices. It has to figure out how to solve Android fragmentation between its devices and the market-leading devices from companies like Samsung, which might be the hardest part of all. And it has to become more present in users’ lives and extract more information from them, all without upsetting regulators, screwing up the search-ads business, or violating users’ privacy. The ambient computer was never going to come easily, and Google has made its own efforts harder in countless ways over the years.

But at the very least, the company seems to finally understand what an ambient computer requires — and why “it has Assistant!” is not a sufficient answer — and is beginning the work to get it done.

One computer, many screens

When I sat down with Osterloh over a video call a few days before I/O, he began to wax poetic about Google’s hardware division, every so often glancing down just out of frame. I asked what he was looking at, suddenly suspicious it was a bunch of unreleased devices. I was right. “This is Pixel 6,” he said, holding up his current device. “And then, I have a Pixel 6A, in a very disguised case,” this time holding up a black brick of rubber surrounding the unreleased device. “And I have a Pixel 7, also in disguise.” He held up his wrist, too, with a Pixel Watch strapped to it.

As we talked, Osterloh reiterated the usual Google pitch for ambient computing, but this time, it came with a twist. The long-term vision is still an always-there version of Google that works everywhere with everything all the time, but right now? Right now, it’s still all about the ultra-fast processor in your pocket. “Certainly for the foreseeable future, we feel like the most crucial part of that is the pocketable computer, the mobile phone,” he said. The smartphone is the center of the computing universe for billions of users around the globe, and so the first version of Osterloh’s ambient computer will be built around a smartphone, too.


The Pixel 7 and Pixel 7 Pro are Google’s next flagship phones. Image: Google

That’s why, when Google set out to make the Pixel 6A — which is largely a cost-cutting exercise, trying to turn a $600 phone into a still-credible $449 one — one expensive part survived the cut. “The target of what Pixel is, is about having an awesome user experience that keeps getting better over time,” Osterloh said. “And with that as the core, what you realize is like the thing that is essential to have across these devices is Tensor.”

The Google-designed Tensor processor was the key feature introduced alongside the Pixel 6, largely as a way to improve its on-device AI capabilities for speech recognition and more. And now, it seems, it’s going to be a staple of the line: Osterloh said all the upcoming Pixel phones — and even the Android-powered tablet the team is working on for release next year — will run on its Tensor processor.
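That speech workload is one place where the payoff is already visible in Android’s public APIs: since Android 12, the platform’s SpeechRecognizer can run entirely on the device, which is exactly the kind of job Tensor is designed to accelerate. A minimal Kotlin sketch of on-device dictation (standard Android APIs, nothing Pixel-specific; the app still needs the RECORD_AUDIO permission and must call the recognizer from the main thread):

    import android.content.Context
    import android.content.Intent
    import android.os.Bundle
    import android.speech.RecognitionListener
    import android.speech.RecognizerIntent
    import android.speech.SpeechRecognizer

    // On-device speech recognition, no network round trip (API 31+).
    fun startOnDeviceDictation(context: Context) {
        if (!SpeechRecognizer.isOnDeviceRecognitionAvailable(context)) return

        val recognizer = SpeechRecognizer.createOnDeviceSpeechRecognizer(context)
        recognizer.setRecognitionListener(object : RecognitionListener {
            override fun onResults(results: Bundle) {
                val texts = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                println("Heard: ${texts?.firstOrNull()}")
            }
            override fun onError(error: Int) { println("Recognition error: $error") }
            // The remaining callbacks are no-ops for this sketch.
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() {}
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEndOfSpeech() {}
            override fun onPartialResults(partialResults: Bundle?) {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })

        recognizer.startListening(Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        })
    }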

The smartphone is the center of the universe for now, but you can already start to see how that might change. The new Pixel Buds Pro are a powerful set of noise-canceling headphones, for instance, but also a hands-free interface into a wirelessly connected computing device.

“Devices that can be close to your ear and enable you to have real-time communication with the computer are an absolutely essential part of ambient computing,” Osterloh said, noting that he now does most of his emailing via voice. Similarly, the new Pixel Watch is, in some ways, a phone accessory, delivering notifications and the like to your wrist and offering another interface to the same power in your pocket. But Google’s also selling an LTE version, so you’ll be able to access Assistant or pay with Google Wallet without needing your phone nearby. And that tablet, whenever it comes, will have all the same Pixel capabilities in a bigger shell.

The point is that it doesn’t matter, in the long run, which device you use. “Where the computing capability is, and how powerful the devices themselves are, shouldn’t matter to the user,” Osterloh said. “I think what they should see is increasing capabilities.”

The Pixel and Android teams have recently adopted a sort of mantra: Better Together. Much of what’s new this year in Android 13 is not whizbang new features but small tweaks meant to make the ecosystem a little more seamless. Through an update to the Chrome OS Phone Hub feature, you’ll be able to use all your messaging apps on your Chromebook just as you would on your phone. Support for the Matter smart home standard now comes built into Android, which should make setting up and controlling new devices much easier. Google’s extending support for its Cast protocols for sending audio and video to other devices and improving its Fast Pair services to make it easy to connect Bluetooth devices. It has been talking about these features since CES in January and has signed up an impressive list of partners.

Fast Pair and Matter support are coming to Android 13. Image: Google
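To make “built into Android” concrete: Matter device setup is surfaced to apps through Google Play services, so any app can hand a new device off to the system commissioning flow. A hedged Kotlin sketch, assuming the Play services Home Matter module (play-services-home); the class and method names below reflect my reading of that SDK and may have shifted since:

    import android.content.IntentSender
    import androidx.activity.ComponentActivity
    import androidx.activity.result.IntentSenderRequest
    import androidx.activity.result.contract.ActivityResultContracts
    import com.google.android.gms.home.matter.Matter
    import com.google.android.gms.home.matter.commissioning.CommissioningRequest

    class CommissionActivity : ComponentActivity() {
        // Play services returns an IntentSender that drives the system setup UI.
        private val launcher = registerForActivityResult(
            ActivityResultContracts.StartIntentSenderForResult()
        ) { result -> println("Commissioning finished: ${result.resultCode}") }

        fun commissionNewDevice() {
            val request = CommissioningRequest.builder().build()
            Matter.getCommissioningClient(this)
                .commissionDevice(request)
                .addOnSuccessListener { sender: IntentSender ->
                    launcher.launch(IntentSenderRequest.Builder(sender).build())
                }
                .addOnFailureListener { e -> println("Matter commissioning failed: $e") }
        }
    }

The design point is that the heavy lifting (fabric credentials, device attestation, network handoff) lives in Play services, not in each app, which is how Google keeps setup consistent across the whole Android ecosystem.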

It sounds a bit like Google finally watched an Apple ad and discovered that making hardware and software together really does help. Who knew! But Google’s position is genuinely tricky here. Google’s ad business relies on a mind-bendingly huge scale, which it gets mostly thanks to other companies building Android products. That means Google has to keep all those partners happy and feeling like they’re on a level playing field with the Pixel team. And it simply can’t control its ecosystem like Apple can. It is forever worrying about backward compatibility and how things will work on devices of all sizes, prices, and power. It has to engender support to make big changes, whereas Apple just brute-forces the future.

But Google has become increasingly bold in pushing ahead with the Pixel brand. It can afford to because Pixel is hardly a real sales threat to Samsung and others. (Besides, where are Android manufacturers going to go? Windows Mobile?) But it also has to because Google only wins if the ecosystem buys in, and Pixel is Google’s best chance to model what the entire Android ecosystem should look like. That’s what Osterloh sees as his job and, in large part, his team’s reason for being.

If Pixel’s never going to be a smash-hit bestseller (and it looks like it won’t be), the only way Google can win in the long run is to use it as a way to pressure Samsung and others to keep up with Google’s features and ideas. Google has a chance to lead even more in tablets and smartwatches, two Android markets in desperate need of a path forward. “Phone is certainly super important,” said Sameer Samat, a VP of product management on the Android team. “But it’s also becoming very clear that there are other device form factors which are complementary and also critical to a consumer deciding which ecosystem to buy into, and which ecosystem to live in.”

That’s another way of saying the only way Google can get to its ambient computing dreams is to make sure Google is everywhere. Like, literally everywhere. That’s why Google continues to invest in products in seemingly every square inch of your life, from your TV to your thermostat to your car to your wrist to your ears. The ambient-computing future may be one computer to rule them all, but that computer needs a near-infinite set of user interfaces.

Outside the text box

The second step to making ambient computing work is to make it really, really easy to use. Google is relentlessly trying to whittle away every bit of friction involved in accessing its services, particularly the Assistant. For instance, if you own a Nest Hub Max, you’ll soon be able to talk to it just by looking into its camera, and you’ll be able to set timers or turn off the lights without issuing a command at all.

“It’s kind of like you and I having a conversation,” said Nino Tasca, a director of product management on Google’s speech team. “Sometimes, I’ll use your name to start a conversation. But if I’m already staring at you and ask you directly, you know I’m talking to you.” Google is obsessed with making everything natural and conversational because it’s convinced that making it easy is actually more important than making it fast.
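Google hasn’t published how the Nest Hub Max gates that camera-based activation, so the sketch below is purely illustrative; every name in it is hypothetical. But it captures the shape of the idea: the device opens the microphone without a hotword only when several independent cues agree.

    // Hypothetical sketch; none of these names are Google APIs.
    data class FrameSignals(
        val faceMatchScore: Float,  // does this look like an enrolled user?
        val gazeAtDevice: Float,    // is the user looking at the display?
        val distanceMeters: Float,  // close enough to be addressing it?
        val speechDirected: Float   // does the audio sound device-directed?
    )

    fun shouldActivateWithoutHotword(s: FrameSignals): Boolean {
        // Require every cue to agree, not just one, so that a glance
        // across the room doesn't accidentally open the microphone.
        return s.distanceMeters < 1.5f &&
            s.faceMatchScore > 0.9f &&
            s.gazeAtDevice > 0.8f &&
            s.speechDirected > 0.7f
    }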

The same logic applies to search, which is quickly becoming a multi-sensory, multi-modal thing. “The way you search for information shouldn’t be constrained to typing keywords into a search box,” said Prabhakar Raghavan, Google’s SVP for knowledge and information products. “Our vision is to make the whole world around you searchable, so you can find helpful information about whatever you see, hear and experience, in whichever ways are most natural to you.”

That has forced Google to reinvent both the input of search, leaning on voice and images just as much as the text box, as well as the output. “Some people really find it easy to process video,” said Liz Reid, a VP of engineering on the search team, “and other people will find it distracting. On the other hand, some people’s literacy is not as good, and so a long web page that you have to read through not only takes time, but they’re gonna get lost, and a video that’s spoken in their language is really intuitive.” Google built one hell of a text box, but it’s not enough anymore.


Multisearch is Google’s way of combining tools into a single search. Image: Google

The most obvious outpouring of that work is multisearch. Using the Google app, you can take a photo of a dress — in Google’s examples, it’s always a dress — and then type “green” to search for that dress but in green. That’s the kind of thing you just couldn’t do in a text box.

And at I/O, Google also showed off a tool for running multisearch on an image with multiple things in it: take a picture of the peanut butter aisle, type “nut-free,” and Google will tell you which one to buy. “We find when we unlock new capabilities that people had information needs that were just too hard to express,” Reid said. “And then they start expressing them on there.” Search used to be one thing, Lens was another, voice was a third, but when you combine them, new things become possible.
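Google exposes no public multisearch API, but the flow Reid describes is simple to model: the image proposes candidate results, and the typed refinement filters them. A purely illustrative Kotlin sketch (all names hypothetical):

    // Hypothetical sketch of the multisearch flow; not a Google API.
    data class Candidate(val label: String, val attributes: Set<String>)

    fun multisearch(
        recognize: (ByteArray) -> List<Candidate>,  // a Lens-style image model
        image: ByteArray,
        refinement: String                          // "green", "nut-free", ...
    ): List<Candidate> {
        val wanted = refinement.trim().lowercase()
        // Keep only the recognized items whose attributes match the text.
        return recognize(image).filter { wanted in it.attributes }
    }

Feed it a photo of the peanut butter aisle and the word “nut-free,” and the text does the last mile of work the image alone can’t.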

But the real challenge for Google is that it’s much more than a question-and-answer engine now. “What’s best isn’t really, in many of these cases, a strict stack rank, right?” Reid said. “A lot of these newer use cases, there’s a style or a taste component.” Shopping has become important to Google, for instance, but there’s no single correct answer for “best t-shirt.” Plus, Google is using search more and more as a way to keep you inside Google’s ecosystem; the search box is increasingly just a launcher to various Google things.

So rather than just aiming to understand the internet, Google has to learn to understand its users better than ever. Does it help that Google has a massive store of first-party data that it has collected over the last couple of decades on billions of people around the world? Of course it does! But even that isn’t enough to get Google where it’s going.

About the ads: don’t forget that even in a world outside the search box, Google is still an advertising business. Just as Amazon’s ambient computing vision seems to always come back to selling you things, Google’s will always come back to showing you ads. And the thing about Google’s whole vision is that it means a company that knows a lot about you and seems to follow you everywhere… will know even more about you and follow you even more places.

Google seems to be going out of its way to try to make users feel comfortable with its presence: it’s moving more AI to devices themselves instead of processing and storing everything in the cloud, it’s pushing toward new systems of data collection that don’t so cleanly identify an individual, and it’s offering users more ways to control their own privacy and security settings. But the ambient-computer life requires a privacy tradeoff all the same, and Google is desperate to make it good enough that it’s worth it. That’s a high bar and getting higher all the time.

Actually, this whole process is full of high bars for Google. If it wants to build an ambient computer that can truly be all things to all people, it’s going to need to build a sweeping ecosystem of hugely popular devices that all run compatible software and services while also seamlessly integrating with a massive global ecosystem of other devices, including those made by its direct competitors. And that’s just to build the interface. Past that, Google has to turn the Assistant into something genuinely pleasurable to interact with all day and make its services flex to every need and workflow of users across the globe. Nothing about that will be easy.

But if you squint a little, you can see what it would look like. And that’s what has been so frustrating about Google’s approach in recent years: it feels like all the puzzle pieces to the future are sitting there in Mountain View, strewn around campus with no one paying attention. But now, as a company, Google appears to be starting to assemble them.

