Notes From Betaworks Voice Summit

Voice-based computing may be the next buzzword you hate to love. Betaworks Voice Camp, an accelerator program for voice-centered products, introduced eight young companies at last week’s demo day. I entered the event with limited context on the subject and walked out convinced that sooner or later, I’ll be managing many details in my life with voice tools.

Products that stood out

  • John Done is a conversational bot that makes phone calls for you and reports back with the information you need. Want to pick up peonies after work? The bot will call every flower shop in your area to find them for you. What’s awkward: Humans who pick up the phone are confused when they hear a robot asking them detailed questions about peonies.
  • AgVoice is designing voice tools to assist with agricultural production. Think farmers in the field talking to their earpieces to record notes. What’s fascinating: Documentation accuracy is incredibly important in farming, where busy hands and unpredictable weather make note-taking hard.
  • NeuroLex Laboratories uses speech samples to assess physical and mental illnesses before they become severe. It can break down elements of speech into symptoms for diagnosis. What’s surprising: NeuroLex CEO Jim Schwoebel had to collect tremendous medical datasets to make this technology work. The data he needed already existed but hadn’t yet been used for audio diagnosis experiments.
  • SpokenLayer is a service that records and distributes audio versions of written content, matching the (human) voice to the publication. What’s cutting-edge: They know listeners expect a different human voice when listening to Reuters than when listening to Playboy. Their words, not mine.

Panel points to keep in mind

  • VoiceLabs reports 7 million Amazon Echo devices existed in living spaces in 2016. Betaworks CEO John Borthwick and hashtag-inventor/bot enthusiast Chris Messina predicted during their fireside chat that the number of voice devices in households will continue to grow.
  • People are beginning to leave Apple earbuds in their ears even after they get off the phone. If it’s socially acceptable to have a device in your ear all day, Messina predicts new opportunities for verbal computing will emerge.
  • To design well for voice, we need to better understand a user’s emotional rhythms. Shine is trying to do this by developing a voice journal app that helps you process your feelings and follows up on your wellness.

Questions the experts had difficulty answering

  • What does it mean that children are talking to speakers expecting a bot to respond to them?
  • Are we headed toward having different voice assistants for distinct tasks, or an all-encompassing assistant that knows us well and follows us around like the duo?
  • Where do we find large enough voice datasets to start building?
  • How do we design for the emotions of voice?
