Amazon Echo and the new runtime
Amazon Echo and smartphones in the home
Last month Amazon announced that owners of its voice-controlled speaker, Amazon Echo, could now use it to control Nest thermostats. This is a big win for Amazon, which gains a sizable user base through Nest and cachet as a hub in the integrated-home ecosystem alongside other manufacturers. (Honeywell is also included in the announcement.)
The Amazon Echo has been a surprise hit with consumers. Upon its initial release it earned phenomenal customer reviews and was heralded as a success by a tech cognoscenti that was, until recently, largely convinced that consumers would just leverage smartphones for the use cases Echo addresses (e.g., adding items to grocery lists, asking for the weather).
Instead, Echo users are finding that, when they’re home, they use Echo more than they reach for their smartphones.
This is the biggest surprise of the Echo’s success. Not only is it an alternative to the smartphone, it is a superior one when a user is stationary in their kitchen or home. Ben Thompson articulated this well in his post covering the Amazon-Nest announcement.
"What makes smartphones the center of our lives is the convenience of them being mobile and always-connected; at home, though, we are rather stationary yet still connected. In other words, while we know that nothing matters more than convenience, what makes a smartphone so convenient most of the time — the fact it is pocketable and always with us — actually makes it less convenient at home, where getting something from your pocket is a pain, presuming it’s not plugged-in in the other room."
This is a bigger development than it might appear on the surface. In the mobile era, in which popular opinion holds that “mobile is the sun” and that all content discovery is driven through it, the success of Amazon Echo signals a broadening of the runtime clients available to consumers for discovery and general interaction with the web.
“We are looking for a new discovery runtime.”
To understand the significance of Echo, it’s important to define what analysts mean when they talk about ‘discovery runtime’. In short, discovery runtime is the client (or interface) a consumer uses when they are looking for content on the internet.
In the web era (pre-smartphone, post-general adoption of the internet), discovery runtime was synonymous with Google. If you needed to find x on the internet, you opened up your web browser and searched for x on Google, regardless of what x was. This is why Google owned the last decade: it controlled the gateway to content discovery. Because users could only access the web through the browser, and because Google owned browser search, every content search came through Google. The browser-based discovery runtime was content-independent.
With the rise of the smartphone and its fractured app ecosystem, discovery runtime changed. Users started leveraging specialized apps to find things instead of typing them into Google. This meant that the mobile discovery runtime was content-dependent, or contextual. That is, if you’re looking for a restaurant, you use Yelp. You don’t open up your Chrome app and search for the restaurant using Google.
As this shift from web to mobile has taken place, companies (including Google) and investors have been looking for the one mobile discovery runtime that might consolidate the fractured app ecosystem users interact with today -- Google Now? Siri? notifications? messaging apps? As Ben Evans says below, they are looking for the next Google: a mobile model that directs users to a single, content-independent discovery runtime.
“Really, we’re looking for a new run-time - a new way, after the web and native apps, to build services. That might be Siri or Now or messaging or maps or notifications or something else again. But the underlying aim is to construct a new search and discovery model - a new way, different to the web or app stores, to get users.” -- Ben Evans
Others say there won’t be a clear victor in mobile runtime, that mobile search will remain contextual and we’ll use different applications depending on the situation we’re in. Here’s Fred Wilson in response to Ben Evans’s take on mobile runtime:
“I agree with Ben but I think there won’t be one runtime in the mobile era. I think what is emerging is multiple runtimes depending on the context – “contextual runtimes.” If I’m building a lunchtime meal delivery service for tech startups, that’s a Slack bot. If I’m building a ridesharing service, that’s going to run in Google Maps and Apple Maps. If I’m building a “how do I look” fashion advisor service, that’s going to run in Siri or Google Now. If I’m building an “NBA dashboard app”, that is mostly going to run on the mobile notifications rails.”
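To make the idea of a contextual runtime concrete, here is a minimal sketch of the first case Wilson describes: a lunch-ordering service that lives inside Slack rather than in its own app. The slash command (/lunch), the menu, and the endpoint path are illustrative assumptions, not a real product.

```python
# Minimal sketch of a "contextual runtime" integration: a hypothetical
# Slack slash command (/lunch) served by a small Flask app. The command
# name, menu items, and route are illustrative assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)

LUNCH_MENU = ["chicken banh mi", "poke bowl", "margherita pizza"]  # placeholder data


@app.route("/slack/lunch", methods=["POST"])
def lunch_command():
    # Slack posts slash-command payloads as form-encoded fields;
    # "text" holds whatever the user typed after the command.
    query = request.form.get("text", "").strip().lower()
    if query == "menu":
        reply = "Today's options: " + ", ".join(LUNCH_MENU)
    else:
        reply = "Try `/lunch menu` to see today's options."
    # Returning JSON puts the answer straight into the channel, so the
    # service "runs" inside Slack rather than in a standalone app.
    return jsonify({"response_type": "in_channel", "text": reply})


if __name__ == "__main__":
    app.run(port=3000)
```

The service has no interface of its own; Slack is the runtime, and the "app" is just an endpoint that answers its messages.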
What's interesting about both of these takes on the future discovery runtime is that each assumes it will take place on a smartphone. That assumption has pervaded the tech community since the smartphone gained general adoption. It’s why every integrated-home solution of the last five years -- Nest, Wink, et al. -- was built on the assumption that its user would operate their home devices through a smartphone interface.
The new runtime: content-dependent, location-dependent
Now, with Amazon Echo, we’re starting to get visibility into a future in which discovery runtime isn’t just content-dependent but location-dependent as well. In this future there will be a number of devices, each with its own discovery runtime, that we use to find services and products. The runtimes we use will be both content- and location-dependent.
An example of a location-dependent runtime: if I’m at home, cooking with raw chicken germs on my hands (a thing), it is not convenient to thumbprint-unlock my smartphone and find the next step in my Google-searched recipe. I’m going to holler at Amazon Echo ("Alexa") instead. That’s a location-dependent runtime.
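For a sense of what building for that voice runtime looks like, here is a minimal sketch of the backend of a recipe skill: a bare AWS Lambda handler that reads out the next step when asked. The intent name (NextStepIntent), the slot, and the recipe steps are assumptions for illustration; a real skill would also need an interaction model defined in the Alexa developer console.

```python
# Sketch of the voice side of a location-dependent runtime: a bare AWS
# Lambda handler for a hypothetical Alexa recipe skill. The intent name
# ("NextStepIntent"), the slot, and the steps are illustrative assumptions.
RECIPE_STEPS = [
    "Season the chicken with salt and pepper.",
    "Sear it for four minutes per side.",
    "Finish in the oven at 400 degrees for fifteen minutes.",
]


def build_response(speech, end_session=False):
    # Alexa expects a JSON response carrying an outputSpeech object.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }


def lambda_handler(event, context):
    req = event["request"]
    if req["type"] == "LaunchRequest":
        return build_response("Which step did you just finish?")
    if req["type"] == "IntentRequest" and req["intent"]["name"] == "NextStepIntent":
        # A hypothetical slot carrying the step the cook just finished;
        # Alexa passes slot values as strings.
        step = int(req["intent"]["slots"]["stepNumber"]["value"])
        if step < len(RECIPE_STEPS):
            return build_response(RECIPE_STEPS[step])
        return build_response("That was the last step. Enjoy!", end_session=True)
    return build_response("Sorry, I didn't catch that.")
```

No screen, no unlocking, no app switching: the microphone and speaker are the entire interface, which is exactly why this runtime wins in the kitchen.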
An example of a content-dependent runtime: if I’m at my computer at work and I need the address of the restaurant I’m headed to that night, I’ll just open a Chrome browser tab and find the restaurant. If I need a list of restaurants nearby, I’ll pull out my smartphone so I can filter the search by proximity. That’s a content-dependent runtime driving the device I use.
Expand this logic to include other use cases, and suddenly there are a lot of potential devices available to users, depending on both the content of the query and the location in which the user is making it. Mobile will likely win the majority of these use cases, but the point is there’s a fringe available to new (and old) runtimes, whether you’re driving in your car (smart-car operating systems), sitting at your desk at work (browser), or looking for 360-degree content (VR).
In the meantime, Amazon will look to strengthen and expand its position as the runtime of the home. Last week, in addition to the Nest partnership, Amazon announced two additions to the Echo product line, Amazon Tap and Echo Dot, signaling continued investment in the space. I’d expect more home-product manufacturers to announce partnerships with Amazon Echo soon.