"Microsoft is battling to control the public relations damage done by its “millennial” chatbot, which turned into a genocide-supporting Nazi less than 24 hours after it was let loose on the internet. The chatbot, named “Tay” (and, as is often the case, gendered female), was designed to have conversations with Twitter users, and learn how to mimic a human by copying their speech patterns. It was supposed to mimic people aged 18–24 but a brush with the dark side of the net, led by emigrants from the notorious 4chan forum, instead taught her to tweet phrases such as “I fucking hate feminists and they should all die and burn in hell” and “HITLER DID NOTHING WRONG”."
This blog (started in 2010) identifies management and leadership-related topics, like those explored in the Managing and Leading Information Services graduate course I have been teaching at the University of Pittsburgh since 2007. -- Kip Currier, PhD, JD
Friday, March 25, 2016
Microsoft scrambles to limit PR damage over abusive AI bot Tay; Guardian, 3/24/16
Alex Hern, Guardian; Microsoft scrambles to limit PR damage over abusive AI bot Tay: