The Current State of Bots on Twitter

Summary: I did some research into Twitter's policies toward bots, and how they are actually treating them. If you are planning on making a bot that does a keyword search and responds to the tweets it finds, you should probably look for something else to do with your time.

Over the past couple of months, it's become clear that Twitter has tightened their standards for bots on their system. This isn't really too shocking, given their migration from an open network with active community development to a walled garden full of celebrities and promoted tweets. As they have grown, Twitter has adopted a largely unfriendly attitude toward third-party developers.

I don't even intend for anything I write here to sound like a complaint. It's obviously Twitter's right to prevent or allow whatever content they like, and frankly I don't really care. I've really enjoyed writing bots for Twitter, and I love that many people have gotten laughs out of them. I've never made a dime off the work I've done and I never intended to. I've been contacted in the past about writing bots to promote an assortment of products, and I always said no, because that struck me as a form of spam.

For some reason, none of my bots have been disabled, which suggests either that Twitter isn't doing a good job of blocking bots (unlikely), or that I've done an okay job of not being too offensive or spammish. I hope that's the case.

Anyway, I spent a little time digging through their terms of service to see if I could get some specific details on what is and isn't allowed.

The classic structure for many of the funny bots on Twitter, including my own, is very straightforward: Search the public stream for a keyword or phrase, and when you find it, reply with whatever it is you want to say. It's basic, and it's been around for a long time.
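That search-and-reply loop can be sketched in Ruby (the language chatterbot is written in). This is a minimal, self-contained illustration: the `Tweet` struct and the stubbed `search` method stand in for real Twitter API calls, so the names here are assumptions for illustration, not actual chatterbot or Twitter APIs.

```ruby
# A stand-in for a tweet returned by the Twitter search API.
Tweet = Struct.new(:user, :text)

# Stand-in for a keyword search against the public stream; a real bot
# would hit the Twitter search endpoint here instead.
def search(keyword, tweets)
  tweets.select { |t| t.text.downcase.include?(keyword.downcase) }
end

# Build the reply text for a matching tweet.
def reply_for(tweet)
  "@#{tweet.user} I couldn't help noticing you mentioned sandwiches!"
end

tweets = [
  Tweet.new("alice", "Just ate a great sandwich"),
  Tweet.new("bob",   "Compilers are fun"),
]

replies = search("sandwich", tweets).map { |t| reply_for(t) }
```

In a real bot the final `map` would post each reply via the API; everything else about the structure is the same, which is exactly why this pattern is so easy to write and so easy for Twitter to flag.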

I've heard from a number of users of chatterbot, as well as people with other bots, that Twitter is almost immediately disabling bots that follow this pattern.

I did some research to determine if there is a specific policy change on the part of Twitter to block these bots. First, I reviewed their Automation Rules and Best Practices. They state:

The @reply and Mention functions are intended to make communication between users easier, and automating these processes in order to reach many users is considered an abuse of the feature. If you are automatically sending @reply messages or Mentions to many users, the recipients must request or approve this action in advance. For example, sending automated @replies based on keyword searches is not permitted.

I was a little surprised to read this, because I know that there have been bots replying to keyword searches for years. However, this rule has been in place at least since 2010, if not longer. According to this May 2011 Wired article:

Twitter relies primarily on human intuition—on people reporting spam via the user pages of the bogus accounts or by direct messages to @spam. Those actions alert Harvey’s nine-person antispam team to investigate the accounts and, if need be, retire the offending bot.

I strongly doubt that this is their current setup. There's no way that Twitter isn't employing some sort of automated system for shutting down spammers. Anyway, what I think this article suggests is that bots are okay, unless someone is annoyed by them. So, if I had to guess, I would say that Twitter drastically upped the sensitivity level in whatever metrics they are using to flag offensive users. And I would expect this to continue.

Twitter has always been a little arbitrary about this stuff. I'm sure they'll never shut down @BestAt, even though it's in violation of their policies as I read them. And there are certainly some bots that have been launched since the start of the year that are still in operation. I know that Stealth Mountain (code here) is popular and it is still running, even though I think a lot of people are clearly irritated to have their typos corrected. And there are other bots that are still running in clear violation of Twitter policies. I can only imagine that eventually those bots will be shut down as well.

If you really want to try to write a bot like these, I would suggest following these guidelines:

  • Don't abuse the system - don't run searches too often, and don't max out your API calls.
  • Don't be offensive - I strongly suspect that if your first or second tweet is marked as spam, you're going to be disabled.
  • Have a very clear opt-out policy on your profile page. I intend to add something to Chatterbot to automate this as soon as possible. Twitter technically requires an opt-in for bots as well (this can be as simple as tweeting at the bot, rather than being triggered by a keyword search), but I have a feeling that an opt-out at least counts for something.
  • Don't expect it to last forever. Because it won't.
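The opt-out guideline above can be sketched as a simple filter applied before replying. This is an illustration, not a chatterbot feature (yet): the opt-out phrase and the decision logic are assumptions, and a real bot would persist the opted-out list between runs.

```ruby
# Illustrative opt-out phrase; not an actual Twitter or chatterbot convention.
OPT_OUT_PHRASE = "leave me alone"

# Decide whether the bot should reply to this user. `opted_out` is an
# array of usernames who have asked the bot to stop; new opt-outs are
# recorded as a side effect instead of replying.
def should_reply?(user, text, opted_out)
  return false if opted_out.include?(user)
  if text.downcase.include?(OPT_OUT_PHRASE)
    opted_out << user
    return false
  end
  true
end
```

Combined with conservative search intervals (sleeping several minutes between searches rather than polling at the rate limit), a check like this is cheap insurance: a bot that never replies to someone who has complained is much less likely to rack up spam reports.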
Filed under: Twitter, chatterbot