I talked to lots of bots. Here’s what made some better than others.

In the spirit of confessionals, I’ll offer this one: In any given week, I probably spend more time chatting with bots than I do humans. (At least on Facebook Messenger.)

As we consider where to take our bots, we here in the Bot Studio are working to better understand the bounds and opportunities of the major platforms, as well as who uses them, in what contexts, and why they’d prefer chatting with a bot over some other activity.

And so I’ve spent weeks searching for bots, chatting them up, testing them, and probably annoying them and their kindly developers. Here are some early observations and notes on what we’d like to do (and not do). Note that this post looks only at Messenger bots.

Some chatbots are all bot and no chat

Many bots we’ve been talking to have more in common with websites than they do with conversations. Beyond a few buttons, they’re dispatch bots—delivering content via cards designed to drive users to a website.

They’re essentially nav bars in chat-bubble form:

[GIF: a bot menu that works like a nav bar]

One of the promising opportunities for chat, though, is its potential to cut through the architecture required by a graphical user interface. To find the Cubs score on a news website, for example, you’d need to search for it or make a series of selections, like “Sports > Cubs > Article,” and then scan for the score in the first couple of paragraphs. In chat or voice interfaces, you could cut through that by asking something like, “What was the final score in the Cubs game last night?” Few of the bots we’ve chatted with took advantage of this, instead opting for navigation by category. “What kind of stories would you like? Select trending topics or latest news.”
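
To make the contrast concrete, here is a rough sketch of the two approaches (not how any of these bots actually work): a handler that routes a free-text question straight to an answer, falling back to navigation by category. The intent patterns and the hard-coded score are invented for illustration.

```python
import re

# Hypothetical intent patterns: a direct question skips the "Sports > Cubs > Article" path.
INTENTS = [
    (re.compile(r"\b(score|final)\b.*\bcubs\b", re.IGNORECASE), "cubs_score"),
    (re.compile(r"\b(trending|latest)\b", re.IGNORECASE), "browse_news"),
]

def route(message: str) -> str:
    """Map a free-text message to an intent, falling back to category navigation."""
    for pattern, intent in INTENTS:
        if pattern.search(message):
            return intent
    return "show_categories"

def respond(message: str) -> str:
    intent = route(message)
    if intent == "cubs_score":
        # A real bot would call a scores API here; this value is made up.
        return "Cubs 5, Cardinals 3 (final)."
    if intent == "browse_news":
        return "Here are today's trending stories..."
    # The navigation-by-category fallback described above.
    return "What kind of stories would you like? Select trending topics or latest news."

print(respond("What was the final score in the Cubs game last night?"))
```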

Most of my conversations with news bots in Messenger didn’t last; there was nothing that made the interaction worth the time. The same content appears in the same form elsewhere; these bots end up being just another place to get news notifications that push us to the websites. No different from an app, a newsletter, or social cards.

So what makes a bot good?

Conversation provides a good guide here, though I immediately forgot how to be a human and broke all of the rules when building my first Messenger bot—because I was thinking about information design instead of conversation design. (More on that in a later post.) 

Here’s what I wrote to share with the Studio—some loose rules to check ourselves against when building our bots.

Clear introductions and instructions

It’s surprisingly hard to get some conversations off the ground—I often can’t figure out how to interact with a bot. I try a few “hello?” messages or commands, then give up and move on. “Get Started” is the default way to start a conversation on Messenger. When that fails, I’m left guessing:

[screenshot: a bot that gives no hints after “Get Started”]

The first message sets the tone for the interaction—best to say hi and give the user a few hints about what directions the conversation might take:

[screenshot: a welcome message that suggests what to ask]
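
For our own bots, that first message is simple enough to sketch: when the “Get Started” postback arrives, reply with a greeting plus a few concrete prompts. This is only a sketch against the Messenger Send API; the GET_STARTED payload, the token placeholder, the API version, and the wording are all assumptions, not the setup of any bot shown here.

```python
import requests

PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # placeholder
SEND_API_URL = "https://graph.facebook.com/v2.6/me/messages"  # version may differ

WELCOME_TEXT = (
    "Hi! I'm a news bot. A few things you can try:\n"
    '- "What\'s the top story right now?"\n'
    '- "Send me the morning briefing"\n'
    '- Type "help" any time to see these hints again.'
)

def send_text(recipient_id: str, text: str) -> None:
    """Send a plain text message through the Messenger Send API."""
    payload = {"recipient": {"id": recipient_id}, "message": {"text": text}}
    requests.post(SEND_API_URL, params={"access_token": PAGE_ACCESS_TOKEN}, json=payload)

def handle_postback(sender_id: str, payload: str) -> None:
    # "GET_STARTED" is whatever payload was registered for the Get Started button.
    if payload == "GET_STARTED":
        send_text(sender_id, WELCOME_TEXT)
```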

Building momentum: investing users in the destination

Bots often need information from users to be able to operate properly—a weather service can’t deliver the forecast without the user’s location, and a food recommendation bot needs to know your mood or preferences before it can offer choices.

But demands out of the gate without additional context feel abrupt:
[screenshot: a bot demanding location and time with no context]

Here, though, the request for location and time comes after I see what the bot can do for me. I’m more likely to give information because I understand what I’m getting in return, and I’m trusting that the information I give will help make those results better. (Pardon the French; I’m asking for restaurant recommendations, now, in Paris’ 18th arrondissement.)

 

What’s nice about these interactions, too, is that it’s a quick back and forth. I can give my responses as easy, incremental choices—as opposed to having to piece together a request like this:

[screenshot: a request the user has to piece together in a single message]
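
That quick back-and-forth maps neatly onto Messenger’s quick replies: each step asks one small question and offers a handful of tappable answers, instead of expecting one long, perfectly formed request. Another sketch, with the same placeholder token and Send API URL as above; the choices and payloads are invented.

```python
import requests

PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # placeholder, as above
SEND_API_URL = "https://graph.facebook.com/v2.6/me/messages"  # version may differ

def send_quick_replies(recipient_id: str, text: str, choices: list) -> None:
    """Ask one small question with tappable answers (Messenger quick replies)."""
    payload = {
        "recipient": {"id": recipient_id},
        "message": {
            "text": text,
            "quick_replies": [
                {"content_type": "text", "title": title, "payload": value}
                for title, value in choices
            ],
        },
    }
    requests.post(SEND_API_URL, params={"access_token": PAGE_ACCESS_TOKEN}, json=payload)

# One incremental choice at a time: mood first, then time, then location.
send_quick_replies(
    "USER_PSID",
    "What are you in the mood for?",
    [("Casual", "MOOD_CASUAL"), ("Fancy", "MOOD_FANCY"), ("Surprise me", "MOOD_ANY")],
)
```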

Reactions: acknowledging the user

Imagine this interaction:

Vihaan: “I’m having a hard time getting over my dog’s death.”
Emily: [hands him a book on grief and walks away]

This isn’t a content mismatch; it’s an emotional or reactive mismatch. It might be perfectly appropriate to share a great book on grieving with a grieving friend, but if I were a decent human being, I would react to what Vihaan’s saying before contributing something of my own to the conversation. 

Some of the best bots I chatted with did this well—they acknowledged the user’s input before delivering information or asking for additional information. (And of course that acknowledgement often came in GIF form.)

If I were filling out a form, I wouldn’t be acknowledged until I hit “submit.” Here I get nice little prizes along the way, which makes me feel like an active participant in the process. Like we’re working toward this together, me and my bot friend.

Building in even a minimal amount of playfulness makes this navigation style fun. You’re building toward a destination; you invest because you expect a return. Each Q&A follows the last and gets you closer to the answer.

[screenshot: a bot acknowledging each answer along the way]
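
In the abstract, the pattern is just “react first, then deliver.” A minimal sketch, with the reactions and helper name made up for illustration (in practice the acknowledgement is often a GIF rather than text):

```python
import random

# A small pool of acknowledgements so the reaction doesn't repeat verbatim every time.
REACTIONS = ["Got it 👍", "Nice choice.", "On it!"]

def reply_with_acknowledgement(answer: str) -> list:
    """React to the user's input before delivering the actual content."""
    return [random.choice(REACTIONS), answer]

# A real bot would send each message via the Send API; here we just print them.
for line in reply_with_acknowledgement("Here are three casual spots nearby..."):
    print(line)
```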

Transparency: offering a path for skeptics

One of my favorite onboarding experiences comes from Epytom. Most bots ask for something up front—a location, a subscription, etc. They need that information to work. But Epytom offers transparency alongside the ask. Users might be more willing to provide a location if they understand what they’ll get in return. When we build, we hope to provide not just the “how” but the “why.”

[screenshot: Epytom explaining why it asks for information]

Saying what you can’t do—and then what you can

Bots tend to get stuck in two places—when you hit the end of a decision tree and when you hit an error.

When I reached the end of a decision tree, the most seamless bots offered other options. Do you want to find another restaurant? Would you like to hear a joke? When I said no thanks, the bot politely waved goodbye.

[screenshot: a bot offering other options at the end of a conversation]

Error-handling is a little trickier, because by definition the bot is not sure what it’s up against, conversation-wise. When bots change up the error message (selecting one from an array), the conversation feels less robotic. But bots shouldn’t just say what they can’t do—they should tell the human what they can do. Give options. Point the person in the right direction.
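
A minimal fallback handler along those lines, with the wording and the list of capabilities invented for the example:

```python
import random

FALLBACK_MESSAGES = [
    "Hmm, I didn't catch that.",
    "Sorry, that one's beyond me.",
    "I'm not sure what you mean.",
]

# Say what the bot *can* do, not just what it can't.
SUGGESTIONS = "I can find you a restaurant, check the weather, or tell a joke. Which sounds good?"

def handle_unrecognized(user_message: str) -> str:
    """Vary the error message and always point toward something the bot can do."""
    return f"{random.choice(FALLBACK_MESSAGES)} {SUGGESTIONS}"

print(handle_unrecognized("asdf qwerty"))
```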

Easier said than done

One of the other projects we have going is to test many of the tools for bot-building, from simple mockup makers to libraries or frameworks for natural language processing or artificial intelligence. No sooner did I pick up the first tool than I broke all of these rules. Developer Me managed to confuse and irritate User Me, often within a span of 5 minutes.

So it’s clearly not simple.

But we’re hoping to build some bots that work well—and also consider issues like notification strategies, conversational flow, contextual content, surprise and delight. And visiting bots in the wild has been wildly instructive.

We’ll post more examples, tools, and even recommendations right here, keeping our insights frequent and informal. Please feel free to weigh in, and let us know what bots you love (or hate), at bots@qz.com.
