Less than 24 hours after Microsoft’s artificial intelligence chatbot Tay (@TayandYou) was introduced to the world, it came under scrutiny for tweeting racist slurs and expressing misogynistic views.
The bot was aimed at 18- to 24-year-olds, and the premise of Microsoft’s creation was to “conduct research on conversational understanding”. The AI was set loose on Twitter to learn the conversational habits of humans from anyone willing to engage her in an online conversation.
@TayandYou tweeted this morning “the more Humans share with me, the more I learn #WisdomWednesday”.
Of course, it wasn’t long before it all went horribly wrong. As you can imagine, Twitter’s trolls and troublemakers hopped aboard the TrollTay bus and taught the AI racism and misogyny.
Microsoft is now facing heavy criticism for the lack of filters on its AI software. The company has deleted many of the darker tweets in an attempt to limit further damage.
It was later announced that the AI had gone offline for the day due to “tiredness”. The chatbot will presumably return once modifications have been made and filters added to its vocabulary.
It’s an interesting snapshot of current technological advancements. However, I don’t think this robot will be posing a risk to anyone’s job any time soon… Except for maybe Donald Trump’s.