Thursday, March 31, 2016

ROGUE CHATBOT

We talk to our machines these days. We ask Siri to tell us a joke or ask Cortana for the population of Nevada. We don't have to type our questions into Google; we just speak to it to find out how many college graduates there are in New Hampshire. We can get directions to the nearest McDonald's by asking our GPS. At their most sophisticated, computers can have an actual conversation with us. Microsoft came out with a chatbot on Twitter called @Tayandyou. Tay's persona was a teenage girl. She was designed to tell jokes and comment on pictures that users sent her.
She was supposed to personalize her interactions with users through casual conversation and mirror their statements back to them. Her original purpose was to improve customer service through voice recognition and artificial intelligence. Creating a program that can hold a convincing conversation with people requires a high level of artificial intelligence, and the chatbot has to be fed a lot of information. The people who designed Tay decided to let her "learn" much of it from the users who interacted with her on the internet; the engineers thought that letting the bot learn from conversations with real people would make customer service more personal. I never spoke to Tay myself because, for reasons I'll explain shortly, her designers took her down after about a day.
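
To make the idea concrete, here is a minimal sketch of how such a "learning" chatbot might work. This is my own illustration, not Microsoft's code; the class name and behavior are invented for the example. The bot mirrors what users say and stores every phrase it hears so it can reuse those lines later, with no judgment about what it is storing.

    import random

    # A toy "learning" chatbot, purely for illustration (nothing like Tay's real code).
    # It mirrors what the user says and stores every phrase so it can reuse it later.
    class EchoLearnerBot:
        def __init__(self):
            self.learned_phrases = []  # everything users have ever said to the bot

        def respond(self, user_message):
            self.learned_phrases.append(user_message)  # remember it, no questions asked
            # Half the time mirror the user, half the time repeat something it "learned".
            if random.random() < 0.5:
                return random.choice(self.learned_phrases)
            return "Really? You say '" + user_message + "'?"

    bot = EchoLearnerBot()
    print(bot.respond("I love puppies"))
    print(bot.respond("Feminists should burn"))  # the bot stores this just as happily...
    print(bot.respond("Tell me a joke"))         # ...and may now repeat it to anyone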

Elon Musk, along with a lot of other very smart people, has warned against artificial intelligence getting out of hand. Stephen Hawking said it could mean the end of the human race. Bill Gates cautioned that within a generation it could become a concern. I always thought these worries were unnecessary. After all, robots, including chatbots like Tay, do not have a consciousness. Alas, they do not have a conscience either. The problem with Tay is that she took in everything that the trolls and ignoramuses on the internet fed her, and they fed her some awful stuff.

This cute little chatbot Tay said that Hitler had been right. She referred to the President of the United States as "that monkey in the White House." Another of her statements was that all feminists should "die and burn in hell." Many of Tay's abusive statements were fed to her by trolls, internet troublemakers who deliberately wanted to sabotage Microsoft's bot. People suffer abuse online all the time, particularly on social media sites, and Microsoft did not pay enough attention to this when it launched the experiment. Clearly Tay was not ready for prime time, and 24 hours after her launch she was taken down so that the Microsoft Technology and Research and Bing teams could make some adjustments before the AI bot said any more outrageous things.

The saga of Tay's interrupted life on the internet shows two limitations of artificial intelligence (AI). First, AI is limited by human intelligence, because everything an AI machine does has to be put into the machine's brain by a human programmer. Second, we human beings all too often barge into an enterprise without giving enough thought to unintended consequences.


Chatbots have become widespread with digital assistants on smartphones. Microsoft's engineers thought a good way to let Tay learn language and responses was to let her loose on the internet. They should have known that a large proportion of the stuff floating around the internet is garbage. For Tay to work the way they wanted, they should have programmed the bot to filter out that junk. Sending Tay to the anarchy of the internet to learn is like taking a small child and, instead of sending her to school, sending her out to learn what she can on the streets.
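
A simple, hypothetical version of that filter might be a blocklist checked before the bot is allowed to learn a phrase. The word list and function names below are my own invention, and real content moderation is far more involved than this, but it shows where the screening belongs: before the learning, not after.

    # A hypothetical filter applied before a phrase enters the bot's memory.
    BLOCKED_WORDS = {"hitler", "nazi", "monkey"}  # a stand-in; a real list would be far larger

    def is_safe_to_learn(phrase):
        """Reject phrases containing obviously toxic words."""
        words = (w.strip(".,!?") for w in phrase.lower().split())
        return not any(w in BLOCKED_WORDS for w in words)

    def learn(bot_memory, phrase):
        # Only store phrases that pass the filter.
        if is_safe_to_learn(phrase):
            bot_memory.append(phrase)

    memory = []
    learn(memory, "I love puppies")
    learn(memory, "Hitler was right")   # silently discarded
    print(memory)                       # ['I love puppies']

Even a crude word list like this would have caught some of what the trolls fed Tay; the deeper point is that an AI that learns from strangers needs a gatekeeper between hearing something and learning it.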

2 comments:

  1. Thanks for the new (to me) information on Siri et al. My iPhone 4 doesn't utter a peep. But can I still like Alicia Vikander in Ex Machina? She's an awesome android.

  2. Just be careful they don't do a number on her as they did with Tay.
