Google I/O 2016: Live blog


Google’s biggest event of the year included the latest on its plans for virtual reality, artificial intelligence and messaging – key battlegrounds in the future of computing. Live coverage here of the opening keynote at the annual developer conference as Google CEO Sundar Pichai takes the stage.

A perfect California morning at the Shoreline Amphitheatre in Mountain View. Google’s moved its developer conference down here from San Francisco this year.

Virtual reality is expected to be big news at Google I/O today. Our colleague Tim Bradshaw was the first to break the news that Google was developing a new virtual reality headset for smartphones, a successor to the much-loved but a tad basic Cardboard. Writing way back in February, he revealed that the new headset will be similar to the Gear VR, a collaboration between Samsung and Facebook’s Oculus.

Both Cardboard and Gear VR work by slotting a smartphone into a device that you then hold up to your eyes. The new device will work with a wider range of Android phones than the Gear VR as Google tries to expand access to virtual reality.

Not sure exactly why you’d want virtual reality? Tim also wrote a great magazine story on four ways mass market VR could change the world: from how we learn to how we play.

Want the lowdown on the different devices already on the market? Here’s Tim on video testing out a range of headsets.

This was the scene inside the Shoreline earlier today

Interesting trivia: Steve Wozniak of Apple fame was a backer of the Shoreline, along with music impresario Bill Graham. He’s here for almost every concert. Not in evidence today, though.

You can also watch the keynote in virtual reality on (Google-owned) YouTube, which has been expanding its 360-degree video.

If you want to watch it the regular way – click here

It’s starting – with a horrible dirge-like drone, played on a giant airborne harp, its wires stretched from the back of the auditorium all the way to the stage. It’s a spectacle, but music? Not so much.

And now, a video. Life in Google colours: everything bright and candy-clear.

Sundar Pichai takes the stage, here is what he tweeted before it started:

He’s justifying why they’ve bussed all these people the 45 miles down from San Francisco today. It’s a turning point for Google, he says – and they can squeeze more people in here: 7,000 of them.

Also 1m people watching the live stream in China, he says. It’s 1am there. Guess that shows how quickly new tech ideas move around the world these days.

Risky move: Sundar shows live search queries coming in from mobile – with voice queries done in colour. Oddly, looks very safe for work…

I bet that’s filtered – Google used to have large screens in the entrance to its offices showing live search queries, but after a while the results got filtered.

We’re getting a quick tour of things Google has been doing on the smartphone recently, like extensions of translation – they all show advances in machine learning, says Pichai. First announcement of the day: Google Assistant.

Google has had voice-activated search (just say OK Google) but it looks like they are trying to take it to the next step, making it more conversational. So instead of talking in keywords, you can ask in different ways – and the Assistant is going to try to understand the context better.

Pichai gives an example: it’s Friday night and you’re thinking of going to a movie. You should be able to just say to your phone, “What’s playing tonight?” Google will then list some movies playing nearby. Say “We want to bring the kids”, and it narrows the list to suitable films.
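To make the idea concrete, here’s a minimal sketch of how a conversational assistant can carry context between turns – all data and names are hypothetical, and this is nothing like Google’s actual implementation:

```python
# Toy sketch of conversational context (hypothetical data, not Google's API).
# The first query sets the topic; follow-up utterances refine the previous
# answer instead of starting a fresh keyword search.

MOVIES = [
    {"title": "Space Battle", "rating": "PG"},
    {"title": "Midnight Noir", "rating": "R"},
    {"title": "Jungle Friends", "rating": "G"},
]

class Assistant:
    def __init__(self):
        self.context = {}  # remembers the current topic between turns

    def ask(self, utterance):
        if "playing tonight" in utterance:
            self.context["results"] = list(MOVIES)
        elif "bring the kids" in utterance:
            # Refine the previous answer rather than issuing a new search
            kid_friendly = {"G", "PG"}
            self.context["results"] = [
                m for m in self.context.get("results", [])
                if m["rating"] in kid_friendly
            ]
        return [m["title"] for m in self.context.get("results", [])]

a = Assistant()
print(a.ask("What's playing tonight?"))   # all nearby movies
print(a.ask("We want to bring the kids")) # narrowed to family-friendly films
```

The point of the demo is exactly this second turn: the assistant treats “we want to bring the kids” as a filter on the earlier result, not as a new query.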

This sounds similar to what Facebook Messenger is trying to do with its ‘M’ virtual assistant. The difference? Facebook Messenger is primarily text based, backed up by real humans, and has so far only been trialled with a few hundred people in the Bay Area.

This stuff is really hard – if Google thinks it can really do it effectively, it will be a big advance. As Pichai says, every case is different. It takes three things: context, personalisation (Google needs a lot of data about you to do this) and advances in natural language. It’s a huge step beyond Siri.

Pichai gives credit to Amazon for putting a voice assistant into the Echo – so, yes, Google is going to do it too, with Google Home.

The Verge already has a piece on new Google Home

Google is now previewing Google Home, which brings the Google Assistant into the house and will be available later this year. You will be able to ask Google what you want to know, play music and entertainment, and manage everyday tasks. It is apparently unmatched in its ability to recognise voices, thanks to Google’s ten years of work in natural language processing.

Google Home looks like a clean-lined, modern vase – and the bottom part will be customisable so it goes with your decor.

Google Home will work with Chromecast, so you can stream video and music to your TV. It will also connect to smart home systems for those who already use the internet to control their thermostats and the like – including Google’s own Nest system but also other smart home networks.

In the future, Google Home should be able to actually complete tasks for you – giving the examples of booking a car, ordering dinner or sending flowers.

Interesting that a device this important to Google’s place in the home is getting announced with barely a mention of Nest, Google’s smart home play. Pichai credited Amazon’s breakthrough with the Echo when he announced it.

Already the internet is obsessed with what the Google Home looks like:

Google shows a video with a family only talking to Google Home, not to each other. They reschedule dinner, text friends, do their Spanish homework and turn on the lights in the kid’s room.

Built-in Chromecast could make Home a pretty versatile hub for your home media. Use it to stream music – then tell the box to play music in all rooms. Or tell it what to stream onto the TV set. That’s something Amazon can’t do.

Pichai is now giving an update of Google Photos – one of the big hits from last year’s I/O, with computer vision being used to label and sort pictures automatically. There are now 200m monthly active users of Photos, he says.

(Here are the developers already working on Google Home)

Next up: a new messaging app, called Allo. Guess what: it is “smart” and learns.

Interesting that Google mentions security of this app straight up. After WhatsApp and iMessage have rolled out end-to-end encryption to billions, is security finally becoming a selling point? Still haven’t heard if it is end-to-end encrypted though.

Facebook is way ahead with Messenger and WhatsApp, Google badly needs a response here. First: a way to make your messages bigger or smaller (as in, shouting or whispering). Really? I suppose you never know what’s going to catch on, worth a try.

You can draw and write across your photo like Snapchat. And of course, there are many, many emojis.

Another Allo feature: it will try to guess what you want to say next. And if someone sends you a picture of a dog, it will suggest responses like “cute dog!” and “aww!”
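As a rough illustration of the suggested-replies idea, here’s a toy lookup-based sketch – the labels and canned replies are made up, and Google’s real system uses machine-learned models, not a table:

```python
# Toy sketch of Allo-style suggested replies (hypothetical rules; the real
# product learns suggestions with machine learning, not a lookup table).

SUGGESTIONS = {
    "dog": ["cute dog!", "aww!"],
    "sunset": ["beautiful!", "wow"],
}

def suggest_replies(image_label):
    # In the real product a computer-vision model would produce the label;
    # here we assume it is already known.
    return SUGGESTIONS.get(image_label, ["nice!"])

print(suggest_replies("dog"))  # one-tap replies offered to the user
```

The interesting engineering step Google claims is the one this sketch skips: recognising what is in the photo in the first place.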

Yes, it could take us all down to the level of canned responses. Is there still room for humans out there??

I worry no one is going to have a real conversation again. This does strike me as made by engineers, for engineers.

I guess you can now turn yourself into a chatbot – that must be some kind of advance.

Now, how the Google Assistant will work in Allo. If you’re chatting with a friend about a restaurant, it will bring up details of the restaurant, a way to book on Open Table, and so on.

This looks almost identical to what Google showcased last year with Now on Tap, which was a way to surface suggestions inside apps. But Google seems to have realised, belatedly, that it’s all about messaging.

Another example: you’re talking to a friend about a soccer team and the assistant starts throwing up information you might want, like the latest scores and video clips.

Some jokes about the Allo name are making their way around Twitter – to me it sounds like a Cockney messaging app.

One important question: if Allo works as well as advertised, will it leave much room for third-party chatbots, of the kind Facebook showed off recently? Google says it’s opening this up to developers, but it could satisfy many information needs itself – and most of those Facebook chatbots didn’t seem so smart.

Google has created an incognito mode for Allo, as it has for Chrome. Incognito is end-to-end encrypted and allows you to expire chats – so they don’t sit on the recipient’s device forever.
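Here’s a minimal sketch of the expiring-chats idea – purely illustrative, since Allo’s actual mechanism wasn’t detailed on stage: each message carries a send time, and anything older than the chat’s time-to-live is dropped from view.

```python
# Toy sketch of expiring chats (illustrative only; Allo's real mechanism
# was not described). Messages older than the chat's time-to-live are
# removed, so they don't sit on the recipient's device forever.

class ExpiringChat:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.messages = []  # list of (sent_at, text)

    def send(self, text, sent_at):
        self.messages.append((sent_at, text))

    def visible(self, now):
        # Keep only messages younger than the TTL; expired ones are deleted
        self.messages = [(ts, t) for ts, t in self.messages
                         if now - ts < self.ttl]
        return [t for _, t in self.messages]

chat = ExpiringChat(ttl_seconds=60)
chat.send("hello", sent_at=0)
chat.send("still here?", sent_at=50)
print(chat.visible(now=70))  # "hello" has expired; "still here?" remains
```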

Even after the FBI’s battle with Apple, Google is pressing ahead with launching a product with a mode that means they can’t see the content of the messages and therefore can never give them up to law enforcement, even if they have a warrant.

What’s also interesting is Google will not be able to mine messages in incognito mode for data for advertisers.

Talking of privacy: with a video feature called Duo, when someone tries to start a video call with you, you get to see them BEFORE the call starts – what they’re calling a “preview”. I hope the person making the call realises they’re visible.

Duo is another example of Google re-imagining its existing services, now through the lens of messaging.

It’s Hangouts for Allo. Is this just flavour of the month, or a real turning point? Guess we’ll get to answer that question this time next year.

Duo’s video calling will apparently work on 2G networks, which could speed its adoption in markets with poor connectivity.

Moving on: it’s time for Android. We’re not expecting this year’s update to the operating system – the “N” release – to contain anything big, and Google has already been showing previews to developers.

You can suggest a name for the new Android OS (whoop). But apparently Namey McNameface is not allowed.

Performance improvements in N include a new graphics API called Vulkan, aimed at game developers.

For developers, this kind of thing is important, but it all seems a bit remote to the average customer. Last year’s Android, called Marshmallow, is only in 7.5 per cent of devices in use now – and more than half of Androids in use are pre-Lollipop, which dates from 2014. Google’s real problem is how to get all this cool new technology out into people’s hands much faster.

Google is driving home its security message today, announcing security enhancements in the latest Android OS.

Importantly, it will include automatic software updates. Android has often been criticised by cyber security experts, who find far more vulnerabilities in phones running the OS than in Apple’s iOS.

One sign of how messaging is taking over the world (well, the smartphone, anyway): more than half the notifications sent to Android handsets are now from messaging apps.

So Android N will have some new ways to respond more easily to notifications from messaging apps.

Messaging-as-the-new-platform is definitely the theme of the day.

Though now – virtual reality…

First, an update on Cardboard. 50m Cardboard apps have been downloaded.

“We knew it was just the start.” It’s time to make something more immersive. It’s called Daydream.

So many new brand names, after only an hour.

Someone needs to exert some real discipline over the Google brand creators – they’re running amok.

Daydream is the VR technology that will be pushed out into different platforms. So, it will be on smartphones: several “Daydream-ready” phones will be on sale later this year.

It’ll be in headsets and controllers: Google has produced a reference design for a headset, with several promised for later this year from different companies. No names of makers or pictures of the thing, though.

Instead, we’re getting a video demonstration of a new controller, to be used with the headset. This is obviously where Google thinks it has the chance to get an edge over Oculus, which won’t be bringing out its own controller until later this year.

Google is making its other products VR-ready, from Street View to Google Photos to YouTube.

YouTube has been rebuilt from the ground up so it can handle VR.

Moving on to Wearables.

Android Wear smart watches must be one of the biggest let-downs of the last year. So now we’re getting… Android Wear 2.0.

Really. There was a rather half-hearted round of applause to that.

One of the main features of 2.0 is… better messaging.

Yes, it will have “smart replies” and machine learning. Buzzwords of the day.

There are also some additions to the fitness features and easier ways to listen to music.

But the real step forward: you won’t need a smartphone with you to use apps on Wear 2.0 devices.

Breaking the tie to the phone is going to be key if smart wearables are to have a chance.

Now it’s the turn of Chrome. That’s the trouble if you’re a company with so many platforms with massive global reach – they all have to get their moment in the sun at I/O.

Not heard about the Google self-driving car yet – but apparently they are at I/O.

That’s because it’s now become the Alphabet self-driving car. Officially nothing to do with Google any more, so no more of that driverless car magic to rub off onto the brand.

Interlude: We’ve been hearing about changes to tools for developers to build apps, track usage, etc. This is a developer conference, after all. Time for the rest of us to make a cup of tea.

A “sneak peek” at “a new way to experience Android apps”: Android Instant Apps. Tap on an app and it starts instantly, no need to install. It’s Google once again trying to make the app world work more like the world of the web.

How do they do it? The app has been split up into modules. Tap on an app icon and it installs just the first bit, to display the part of the app you’re interested in. So this is like deep-linking – another attempt by Google to make apps more web-y.

Google shows an example: if you want to pay at a parking meter, you can interact with the meter – and pay – without needing to install the parking app. Just the relevant parts of the app load, depending on your need at the time.
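The routing idea behind that demo can be sketched in a few lines – all the module and path names below are hypothetical, not Google’s actual API:

```python
# Toy sketch of the Instant Apps idea (all names hypothetical). A deep link
# maps to one module of the app, and only that module is fetched, rather
# than installing the whole package.

APP_MODULES = {
    "pay": "parking-pay-module",
    "map": "parking-map-module",
    "history": "parking-history-module",
}

def handle_deep_link(url_path):
    # e.g. tapping the meter opens something like /pay on the parking app
    feature = url_path.strip("/").split("/")[0]
    module = APP_MODULES.get(feature)
    if module is None:
        return "fall back to the website"
    return f"fetch and run {module} only"

print(handle_deep_link("/pay"))  # only the payment module is downloaded
```

This is why the comparison to deep linking fits: the URL decides which slice of the app the user actually needs.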

Sundar Pichai is back on stage, after all the demos. He’s back onto the subject of machine learning, which has been the subtext of much of what’s been talked about today.

He’s making a pitch for Google’s “AI in the Cloud” – TensorFlow, which was open-sourced last year to get more developers to adopt Google’s approach to machine learning, and the new cloud platform that Google showed off a couple of months ago.

Talking about how DeepMind’s AlphaGo computer beat the world Go champion, Pichai calls Move 37, in one of the games, “one of the most beautiful moves ever seen” in a game of Go.

This piece by Cade Metz explains what Move 37 was all about.

Sundar is now talking about challenges in healthcare in emerging markets. A small team of engineers and doctors used deep learning to teach computer vision systems to recognise diabetic retinopathy, a cause of vision loss among diabetics.

“You can see the promise again, of using machine learning,” he said.

Sundar finishes by saying humans can achieve more by working with machines and artificial intelligence, subtly assuring us that the robots aren’t coming for us.

Yes, I was listening carefully, didn’t hear anything about killer robots in that bit.

Less than two hours for an I/O keynote, that must be a record. There was a lot to digest today. Two things stood out for me. AI and machine learning, obviously.

But more importantly, how Google is struggling to find better interaction models. There’s a belated attempt to put messaging at the centre. Plus VR, obviously. But on smartphones, it’s still trying to reach an accommodation with the world of apps.

Here’s a recap of what was announced at today’s event:

Google Assistant:

Trying to make voice-activated search more conversational and personal, based on context, data and advances in natural language processing.

Google Home:

The voice assistant will work in Google’s answer to Amazon’s Echo: a vase-shaped device for the home that you can ask questions, get to play music and entertainment, and soon use to manage tasks like calling an Uber.

Google messaging:

A new app called Allo includes Snapchat-like features such as drawing on photos and emojis, plus an end-to-end encryption option like WhatsApp and iMessage. Google Assistant will work inside Allo, which also suggests responses to your friends’ messages. Another new app, Duo, offers video calling that apparently works on 2G.


Android N:

The latest Android operating system includes new graphics capabilities aimed at game developers, plus security updates. But it doesn’t have a name yet – Google would like the internet to help pick a dessert-related name beginning with N.

Virtual reality:

Cardboard’s successor will be called Daydream, and several Daydream-ready phones will be on sale later this year. It will have a headset and a controller that looks like a streamlined remote control, and you will be able to use Google products from Street View to YouTube on it.

Android Wear 2.0:

The update to Android watches breaks the tie with the smartphone, so you can go running with a watch and not bring your phone. Some extra messaging, fitness and music features.

Instant apps:

Google makes apps more like the web by enabling you to use them without downloading them. The apps are split into modules so only the relevant parts load.

Artificial Intelligence in the cloud:

Sundar finished by talking about Google’s TensorFlow and the new cloud platform it launched a couple of months ago. He used examples of DeepMind’s AlphaGo computer and a project to help recognise diabetes-related vision loss to show the power of machine learning.