Samsung VP of research and development on voice as the next UI

Voice assistants as the next omnichannel experience

Kindra Cooper

Adam Cheyer

Adam Cheyer, the visionary behind the original Siri for the iPhone 4S, believes voice assistants will be the next frontier of user interfaces – and not just because he invented the first voice command tool for the smartphone. It comes down to history’s habit of repeating itself, he says.

In the 1980s, the Windows PC launched as the first user interface that enabled people other than software developers to use a computer; the average Joe had to learn how to use a mouse and interact with desktop icons.

Ten years later came the web browser, and a scramble to understand URLs, hyperlinks, and bookmarks. Fast-forward nearly another decade, and the iPhone launched at Macworld 2007, with the App Store following shortly after.

“The question is, if this trend continues, is there a new interaction paradigm that will dominate the way every consumer and business interacts?” Cheyer, VP of research and development at Samsung, said last week at the O’Reilly AI Conference in New York City.


A customer pain point as a business opportunity

Even when customers say they don’t need a new device or user interface, they have unarticulated pain points in their lives, which creates context for a new product.

For instance, smartwatches and other wearables started as a stand-in for using the smartphone hands-free while cooking, jogging or driving. Today, however, specialized wearables offer distinct customer experiences of their own, such as the ability to track diabetes risk, fertility and even sun exposure.  


Before voice assistants can truly complement or replace the smartphone, they need to provide a better customer experience. Unfortunately, most of them are still limited to being a hands-free go-between for the smartphone or web browser.

“Today’s assistants are really dispatchers to tens of thousands of other assistants,” said Cheyer. “What we’re really saying is ‘ask app seven to do command five.’”

This creates a fragmented customer experience where the user still has to tell the assistant which app to use, rather than being able to simply ask, “What Broadway plays are on this weekend?” and have the assistant pull information from a range of vendors.

Voice as an opportunity for an omnichannel customer experience

“I want one assistant that if I tell it something it will know it everywhere I need it,” said Cheyer, a serial entrepreneur who sold his latest company, Viv Labs, to Samsung in 2016 for around $215 million.

According to Cheyer, the one thing that’s keeping voice assistants from becoming as mainstream as the web browser is the lack of an omnichannel customer experience.

“I don’t want to have to think about what does my car assistant know about me? What did I tell my TV assistant? What preferences do I have loaded into my refrigerator? I want to conceptually think of having one assistant in the cloud and the device becomes just a context.”

This is not the case for most voice assistants today. For instance, your digital health coach cannot tell your Amazon Alexa to order more protein powder, because each voice assistant is tied to its own proprietary device.


This is where third-party apps come in, because they have the potential to bridge the omnichannel divide. That’s why manufacturers of voice assistants provide online marketplaces where developers can distribute third-party apps offering broader types of customer experiences than the manufacturer could deliver alone.

“When Apple came out with the iPhone it had some great apps that came with it,” said Cheyer, “but really if we were limited to just those apps we would not have had the mobile paradigm shift.”

Personalization as a conversation

While machine learning allows a voice assistant to learn your preferences the more you use it, the experience is still not fully personalized. Rather, personalization today is really repeatability based on past habits – for instance, once you’ve used the Google Assistant on your Google Home to order Jif peanut butter, the next time you order peanut butter you won’t have to specify a brand name.

“Today, every assistant on the market is the exact same assistant for every user and it’s not really defined,” said Cheyer. “I see a world where you define the assistant you want to have by shaping it with the brands and capabilities you want.”

The Samsung Bixby 2.0 conversational assistant is a step toward that future, where the assistant can see the email you’re reading or the photo you just took and use that context to personalize the next interaction. It can also tap into apps like Yelp, Uber, Ticketmaster, Fandango and Google Maps, and answer questions without opening the apps directly.

At its press conference last year, the most lauded feature was the assistant’s ability to provide concierge-like interactions, letting users ask follow-up questions without restating the original question or the hotword – something Amazon’s Alexa still cannot do.

For instance, if you ask about concerts in Brooklyn on Labor Day weekend, you can follow up by asking about another date without having to reiterate what you’re looking for.  

“You don’t have to say, ‘ask app seven to do command five.’ You just ask ‘what are the top events here in Los Angeles?’”

You can also make payments to these third-party apps through the voice assistant, once you’ve preloaded your preferred billing details.