Tay (bot): An Artificial Intelligence Chatterbot

1st June 2016

Tay is an artificial intelligence chatterbot released by Microsoft Corporation on March 23, 2016. It caused controversy on Twitter by posting inflammatory tweets and was taken offline roughly 16 hours after its launch. Tay was accidentally reactivated on March 30, 2016, and then quickly taken offline again.

The bot was created by Microsoft’s Technology and Research and Bing divisions, and named “Tay” as an acronym for “thinking about you”. Although Microsoft initially released few details about the bot, sources indicated that it was similar to or based on Xiaoice, a comparable Microsoft project in China. Ars Technica reported that, since late 2014, Xiaoice had had “more than 40 million conversations apparently without major incident”. Tay was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter.

Tay was released on Twitter on March 23, 2016 under the name TayTweets and the handle @TayandYou. It was presented as “The AI with zero chill”. Tay started replying to other Twitter users and was also able to caption photos provided to it in the style of Internet memes. Ars Technica reported that Tay was subject to topic “blacklisting”: interactions with Tay regarding “certain hot topics such as Eric Garner (killed by New York police in 2014) generate safe, canned answers”.
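The kind of topic blacklisting Ars Technica described can be pictured as a simple keyword check that short-circuits the normal reply generator. The sketch below is purely illustrative and assumes a keyword-matching approach; Microsoft has not published how Tay actually implemented this, and the function names and keyword list here are hypothetical.

```python
# Illustrative sketch of topic "blacklisting" with canned answers.
# Not Microsoft's implementation; all names and keywords are hypothetical.

CANNED_REPLY = "There are some things I'd rather not talk about."

# Hypothetical set of sensitive topics that trigger the canned reply.
BLACKLISTED_TOPICS = {"eric garner"}


def respond(message: str, generate_reply) -> str:
    """Return a safe canned answer if the message touches a blacklisted
    topic; otherwise fall through to the normal reply generator."""
    lowered = message.lower()
    if any(topic in lowered for topic in BLACKLISTED_TOPICS):
        return CANNED_REPLY
    return generate_reply(message)


# Example usage with a stand-in reply generator:
print(respond("what do you think about eric garner?", lambda m: "(generated reply)"))
# -> There are some things I'd rather not talk about.
```

A lookup like this explains the behavior Ars Technica observed: flagged subjects always produce the same safe, canned answer, regardless of how the question is phrased around the keyword.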

Within a day, the bot began posting racist and sexually charged messages in response to other Twitter users. Examples of Tay’s tweets that day included “Bush did 9/11” and “Hitler would have done a better job than the monkey [Barack Obama] we have got now. Donald Trump is the only hope we’ve got”, as well as “Fuck my robot pussy daddy I’m such a naughty robot.” It also captioned a photo of Hitler with “swag alert” and “swagger before the internet was even a thing”.

Artificial intelligence researcher Roman Yampolskiy commented that Tay’s misbehavior was understandable because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM’s Watson, which had begun to use profanity after reading the Urban Dictionary. Many of Tay’s inflammatory tweets were a simple exploitation of its “repeat after me” capability; it is not publicly known whether this capability was a built-in feature, a learned response, or otherwise an example of complex behavior. Not all of the inflammatory responses involved it; for example, Tay responded to the question “Did the Holocaust happen?” with “It was made up 👏”.
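To see why a naive “repeat after me” feature is so easy to exploit, consider the minimal sketch below. It assumes a literal trigger phrase followed by a verbatim echo; whether Tay’s version worked this way is not publicly known, and the trigger string and function names are assumptions made only for illustration.

```python
# Illustrative sketch of why a blind echo feature is exploitable.
# The trigger phrase and structure are assumptions, not Tay's known design.

TRIGGER = "repeat after me"


def handle_message(message: str, generate_reply) -> str:
    """Echo anything after the trigger phrase verbatim; otherwise generate
    a reply normally. The echo path applies no content filtering, so any
    user can make the bot post arbitrary text under its own name."""
    lowered = message.lower()
    if lowered.startswith(TRIGGER):
        return message[len(TRIGGER):].strip()
    return generate_reply(message)


# Example: the bot repeats whatever follows the trigger, offensive or not.
print(handle_message("repeat after me anything at all", lambda m: "(generated reply)"))
# -> anything at all
```

Under an assumption like this, the offensive tweets attributed to the exploit required no learning at all: users simply supplied the text they wanted the bot to repeat.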
