✍️ new trump, old trump

@TrumpOfYore: Using AI and math to find similar tweets from the president’s past

It’s become sport in some circles to juxtapose old Donald Trump tweets with his latest statements or positions, calling out tweets that haven’t aged well. Quartz’s Annalisa Merelli wondered if we could build a bot to do something similar.

So we gave it a shot.

Detecting irony and contradiction is probably best left to human tweeters for now, but we did build a bot that looks at every new Trump tweet and surfaces a similar one from before he became president. It’s the Trump Of Yore bot.

[Screenshot: a Trump Of Yore reply juxtaposing a new Trump tweet with an old one]

How does it work?

In short, we started with 12,647 tweets from the Trump Twitter Archive and turned each one into a set of numbers. When Trump tweets anew, we turn that tweet into a set of numbers, too, and ask a computer to pick an old tweet that’s mathematically closest to the new one.

If the similarity passes a (somewhat arbitrary) threshold, the computer automatically replies to the presidential tweet with the one from pre-president Trump. It also posts an image of the juxtaposed tweets in the @TrumpOfYore timeline.
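In code, that gating step might look something like this minimal sketch. The cutoff value here is purely illustrative, since the bot’s real threshold is only described as “somewhat arbitrary”:

```python
# Illustrative threshold gate. The 0.9 cutoff is an assumption --
# the bot's actual (and "somewhat arbitrary") value isn't given.
SIMILARITY_THRESHOLD = 0.9

def should_reply(similarity: float) -> bool:
    """Reply with the archived tweet only when the match clears the cutoff."""
    return similarity >= SIMILARITY_THRESHOLD
```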

For more detail about the bot’s inner workings, read on.

Chopping tweets into tiny bits

It takes a bit of translation to turn an image or a tweet into something a computer can play with. That usually means breaking the human creation into tiny parts and assigning numbers to each unique part.

For a picture, those parts could be pixels or groups of neighboring pixels. For a long document, they could be words, phrases or sentences.

But tweets are short and full of slang, links, and the like. So how do you decide what to group?

Fortunately, some smart folks at Carnegie Mellon University wrote a paper¹ solving this problem, and then some.

They gave two million tweets to a computer and programmed it to chop up each tweet and learn, on its own, which tweet-parts appeared in tweets with similar content, as judged by their hashtags. The assumption was that tweets with matching hashtags contained similar content, and that the tweet-parts those tweets shared would be useful for comparisons.

When they were done, the researchers had a neural-network “model” for the best way to chop up tweets to make comparisons. And, fortunately for us, they’ve posted the model and the code to use it online.

Making the comparison

Using their posted code, I processed all of Trump’s tweets from before his inauguration into chains of numbered tweet-parts. Math and computer folks call these chains vectors, and in this case, each tweet’s vector is 500 numbers long.

Whenever Trump tweets, I use the same system to make a vector for the new tweet.

Computers are pretty great at comparing vectors quickly. My code finds the tweet in the archive with the largest cosine similarity to the new tweet, something outlined in this helpful post.
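As a sketch of that step (the bot’s actual search code isn’t shown here), the cosine-similarity search can be done with numpy by normalizing each vector and taking dot products:

```python
# A minimal cosine-similarity nearest-neighbor search with numpy.
# This illustrates the comparison described above, not the bot's exact code.
import numpy as np

def most_similar(query, archive):
    """Return (index, score) of the archive row with the largest
    cosine similarity to the query vector."""
    # Normalize every vector so a plain dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    a = archive / np.linalg.norm(archive, axis=1, keepdims=True)
    scores = a @ q                     # one similarity score per archived tweet
    best = int(np.argmax(scores))      # position of the closest match
    return best, float(scores[best])
```

With 12,647 archived tweets encoded as 500-number vectors, `archive` would be a 12,647 × 500 array, and the whole search is a single matrix-vector product.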

The most similar tweet gets tweeted out as a reply to the original tweet, using tweeting code for the Python computer language, and then both tweets are posted in the bot’s timeline as an image, using a nifty tool made by Quartz’s Chris Zarate.
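The post doesn’t name the tweeting library, so here’s a hedged sketch of the reply step assuming something tweepy-like; the handle and wording are placeholders:

```python
# Sketch of the reply step. Assumes an object behaving like tweepy's API;
# the actual library the bot uses isn't specified in the post.

def build_reply(archive_tweet_text: str, author_handle: str = "realDonaldTrump") -> str:
    # A reply threads correctly only if the status mentions the original author.
    return f"@{author_handle} {archive_tweet_text}"

def send_reply(api, new_tweet_id: int, archive_tweet_text: str) -> None:
    """Post the archived tweet as a reply to the new presidential tweet.

    `api` is assumed to work like a tweepy.API instance; update_status with
    in_reply_to_status_id is how tweepy threads a reply under a tweet.
    """
    api.update_status(
        status=build_reply(archive_tweet_text),
        in_reply_to_status_id=new_tweet_id,
    )
```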

Is the bot biased?

Most certainly. But not in the way you’re probably thinking.

Browsing through the bot’s choices over the last week or so, some picks seem spot-on:

[Screenshot: an example of a spot-on match]

And others seem way off:

[Screenshot: an example of a way-off match]

Exactly why isn’t clear.

And that raises an issue discussed just this week at a conference about artificial intelligence and journalism: What responsibilities do I, as a journalist, have using a computer model I didn’t build myself? The model I used was trained on two million tweets I haven’t seen. So the model could be — and probably is — biased in ways I don’t know about. Maybe most of those tweets were posted by men. Or young people. Or bots!

Thanks to the research paper, I do know those training tweets were in English and were posted from June 1 to June 5, 2013. So it’s probably biased toward topics that people talked about on Twitter that summer. In fact, notice that my example of the bot’s “way off” match above was one posted by Trump during that exact date range. Also the phrase “fake news” wasn’t used in the same way, or as often, back in 2013.

Interestingly, had the model been built on tweets drawn from the following five days that year, it probably would have been colored by the bombshell reports about Edward Snowden’s leak of US national security documents, which became known to the world in a series of stories starting with this one on June 6.

Further exploration

And while two million tweets sounds like a lot of training data, Bhuwan Dhingra, who helped build the model and is the lead author on the paper describing it, told me that’s actually a modest size.

“Performance could be improved by training on a larger set, with higher model capacity,” he said. “And if you will be using this for an application I would suggest doing that.”

I don’t have ready access to millions of tweets — yet. But I hope to give that a try soon. For now, this experiment will help build some other Quartz Bot Studio projects that require text comparisons. And it helped get my hands into some machine-learning code.

Speaking of which, I cobbled together this bot in just a few days, and that is clearly reflected in my code and its lack of documentation. I’ll try to clean things up, but in the meantime, you’re welcome to explore it and read the notes I took along the way.

If you have any questions or comments, reach out to the Bot Studio team at bots@qz.com.


¹ Here’s the full citation: Bhuwan Dhingra, Zhong Zhou, Dylan Fitzpatrick, Michael Muehl, and William W. Cohen. “Tweet2Vec: Character-Based Distributed Representations for Social Media.” ACL (2016).