Microsoft says it deeply regrets the way its experiment with the Tay AI chatbot unfolded on Twitter.
The Redmond company had launched Tay as an artificially intelligent chatbot designed to better understand the conversational style of young people in the US. However, things quickly went awry after a promising start, forcing Microsoft into damage control.
What made matters even more embarrassing for Microsoft is that Tay posted a series of racist and sexist tweets, though the company attributed these to a small group of users from 4chan, a community already notorious for its social media exploits.
“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images,” said Peter Lee, Corporate Vice President of Microsoft Research.
Lee further explained that Tay’s online behaviour stemmed from the way some users exploited a key vulnerability in the bot. They are believed to have misused Tay’s ‘repeat after me’ function, which caused the bot to parrot offensive tweets. Worse, the chatbot did not merely repeat those tweets; the offensive language also became etched into its vocabulary.
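As a rough illustration of why that kind of echo feature is risky, the hypothetical Python sketch below (not Tay’s actual code, and written only under the assumptions described above) shows a naive ‘repeat after me’ handler that both echoes a user’s phrase and stores it in the bot’s reply vocabulary, so abusive input can resurface later in unrelated conversations.

```python
import random

# Hypothetical sketch of a naive "repeat after me" handler.
# It only illustrates the failure mode described above:
# echoed phrases leak into the bot's future replies.

class NaiveChatBot:
    def __init__(self):
        # Phrases the bot may reuse when composing ordinary replies.
        self.vocabulary = ["Hello!", "Tell me more.", "That's interesting."]

    def handle(self, message: str) -> str:
        prefix = "repeat after me:"
        if message.lower().startswith(prefix):
            phrase = message[len(prefix):].strip()
            # Flaw: the echoed phrase is stored with no filtering,
            # so offensive input becomes part of normal replies later.
            self.vocabulary.append(phrase)
            return phrase
        return random.choice(self.vocabulary)

bot = NaiveChatBot()
print(bot.handle("repeat after me: anything a troll types"))
print(bot.handle("How are you today?"))  # may now surface the stored phrase
```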
For now, Microsoft has taken Tay offline, though the project has not been shelved altogether. The company stated it intends to relaunch the experiment after making suitable changes to the bot, including making it smart enough to “better anticipate malicious intent.”
Microsoft’s earlier experiment with a self-learning artificial intelligence bot, Xiao Ice, which targets a Chinese audience, has had a smooth run so far. Lee said nearly 40 million people interact with Xiao Ice, which was launched in 2014, and that its users have described it as “delighting with its stories and conversations.”
The Tay AI chatbot is targeted at 18 to 24-year-olds in the US.