Day 1 of Microsoft Build 2016
We'd been warned by Microsoft Build aficionados to turn up early for the Day 1 keynote, and they weren't wrong. We arrived 45 minutes early and the gigantic conference room at the Moscone Center in downtown San Francisco was already half full; within minutes it was packed, forcing late arrivals into a series of overflow rooms scattered throughout the conference center. It wasn't long before Microsoft's CEO Satya Nadella walked lightly onto the stage and introduced the theme of this year's conference: how Microsoft's technology strategy aligns with its mission statement, and the role developers play in delivering on that mission. Microsoft's mission is to 'Empower every person and every organization on the planet to achieve more', and the company believes this is achieved through a three-fold strategy: creating more personal computing, reinventing productivity and business processes, and building the intelligent cloud platform.
So far, so ethereal - especially when it was followed by the presentation of an emerging concept, 'Conversations as a Platform'. But the strategy became crystal clear over the course of the keynote.
After a series of impressive demos showcasing Windows 10 and HoloLens, 'Conversations as a Platform' started to reveal itself with a great demo of Cortana features coming in a Windows 'Anniversary' update later in the year. If you opt-in, Cortana will start managing your calendar and tasks much more proactively. She can identify promises you made in outbound emails and automatically generate tasks for you, and if you let her, she can automatically manage schedule conflicts by asking which meeting you want to keep and then automatically rescheduling the conflicted meetings. It's clear that Microsoft is investing a lot in the concept of 'digital assistants' and sees this as a significant pillar of productivity, enabled by their investment in machine learning, Cloud and mobility.
This was followed by an excellent demo of a new framework Microsoft unveiled today: the Microsoft Bot Framework. It allows devs to quickly create bots (we used to call them chatbots in the old days) - autonomous agents running in chat programs such as Skype, Slack, and even SMS, which respond to requests users make in plain text.
The idea is that developers can use the framework to create their own bots that the general public can interact with. Imagine a pizza delivery bot that can take orders for pizza over Slack, or a travel bot that can search for flights for you from Skype, all just processing plain English questions like 'What flights go to San Francisco next Tuesday?'. Given the work we do with various McDonald's markets around the world, I immediately thought of a McDonald's bot which you can ask to place an order for a Big Mac combo once you get close to a store, or maybe a 7-Eleven bot that can tell you what the price of fuel is at your local petrol station. We could use these conversations to gain more insight on what customers want, and feed that into our personalization engine.
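To make the idea concrete, here is a minimal, framework-free sketch of the core pattern such a bot implements: take a plain-text message, match it to an intent, and return a reply. The intents and replies below (pizza ordering, flight search) are purely illustrative, and this doesn't use the actual Bot Framework SDK - in practice the framework supplies the connectors to channels like Skype, Slack, and SMS around routing logic like this.

```python
def route_message(text):
    """Return a reply for a plain-text message using naive keyword matching.

    Real bots would pair this with a language understanding service so
    users aren't limited to exact keywords, but the shape is the same:
    text in, intent matched, reply out.
    """
    lowered = text.lower()
    if "order" in lowered and "pizza" in lowered:
        return "Sure - what size pizza would you like?"
    if "flight" in lowered:
        return "Which city are you flying to, and on what date?"
    return "Sorry, I didn't understand that."


print(route_message("I'd like to order a pizza"))
```

A McDonald's or 7-Eleven bot would follow the same loop, just with intents for placing an order or checking fuel prices.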
The next part of the demonstration gave me a distinct sense of future shock: with Cortana as your digital assistant, she can act as an agent and talk to bots on your behalf. In the demo, Cortana became aware that you had a trip coming up based on your chat with a colleague on Skype (all opt-in, of course!), worked out the dates, then liaised with a hotel bot to organize a booking. Your interaction is limited to approving the actions Cortana suggests, removing a whole lot of clicking and web-form filling from your life - a great productivity boost. She then also figured out that you have a contact in the area and composed a short Skype message to let them know you'll be in town, which, upon approval, she sent on your behalf.
It was a fascinating look at the future of digital assistants and AI, and it's just around the corner.
The Bot Framework falls under the Cortana Intelligence suite, and one of the strong themes of the rest of the keynote and the day's talks was the 'democratization' of machine learning. Microsoft has branded this 'Cognitive Services': a collection of easy-to-implement machine learning APIs you can call from any app you're building, on any platform. They provide an easy way for developers to implement things like speech recognition, computer vision, free-text sentiment analysis, and emotion recognition from video and images.
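The appeal is that each capability sits behind a simple HTTP call. The sketch below shows the general shape of such a request - POST an image URL with a subscription key in a header, get JSON describing the image back. The endpoint URL and header name here are placeholders, not the documented API surface; check the official reference before using the real service.

```python
import json

# Placeholder endpoint - not the real Cognitive Services URL.
API_ENDPOINT = "https://example.invalid/vision/describe"


def build_describe_request(image_url, subscription_key):
    """Assemble the pieces of an image-description request without sending it.

    Returns (endpoint, headers, body) so the request can be inspected or
    handed to any HTTP client. The header name is illustrative only.
    """
    headers = {
        "Content-Type": "application/json",
        "Subscription-Key": subscription_key,  # placeholder header name
    }
    body = json.dumps({"url": image_url})
    return API_ENDPOINT, headers, body


url, headers, body = build_describe_request(
    "https://example.com/photo.jpg", "my-key")
print(json.loads(body)["url"])
```

The same call shape - one authenticated POST, one JSON response - applies whether the payload is an image, an audio clip, or a block of free text.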
It was the first time we've seen all of this crystallize and become genuinely accessible to the majority of developers. Combined with other technologies such as HoloLens, the investment in digital assistants seen in Cortana, and the Bot Framework, it lets us glimpse the future of natural language and gesture-based user experiences.
All this alone was an inspiring vision, but the highlight of the day was seeing this technology put to practical use in a truly life-enriching application. Saqib Shaikh is a blind software engineer who uses Microsoft's Cognitive Services to be told what he's looking at, simply by having his PivotHead Smart Glasses take a photo. The photo is sent to Microsoft's image recognition APIs, which describe what he's just photographed. It's an amazing video, and it reminds us of the power of technology to work for the good of society - not a bad way to kick off a three-day developer conference.
Watch the video by heading to Channel 9 and fast-forwarding to 3.58.10.