play with them. Even very young kids immediately get it, and will often engage deeply, which speaks to the appeal and ease of voice-controlled computing.
But when kids talk to Alexa and its competitors, do they realize they’re talking to a computer and not a person? Do adults?
Amazon has gone to great pains to present Alexa as a persona, if not an actual person. The company employs a team of people whose job is to build Alexa’s personality and make “her” more engaging. It has also challenged university teams from around the world to create Alexa chatbots that could hold humans in conversation for as long as possible. One tactic of the winning team, from the University of Washington, was to imbue its chatbot with inflection to suggest emotion. (Other companies, such as Jibo, whose “social robot” recently made the cover of Time magazine, are on a similar track.)
It’s something to think about the next time you yell at “her” in frustration because “she” didn’t understand your command, tell “her” to shut up, or casually call “her” stupid. Young kids in particular might mistake Alexa for just another human you’re talking to on the phone. Barking commands at your female-named, female-voiced assistant—rudely instructing “her” to carry out this or that menial task—reinforces outdated, misogynist stereotypes about the roles of women and men.
The misogyny inherent in the default femaleness of these devices has been discussed almost since they arrived on the market, though it’s getting renewed attention in the #MeToo context. (The issue is yet another example of why the tech industry should endeavor to welcome more women to its ranks.)
There are signs of improvement. For example, after Quartz tested the responses of each leading digital assistant to sexual harassment early last year, Amazon, for one, quietly changed the way Alexa responded. More can be done here, including allowing people to select whatever voice or wake word they would like. Even though research suggests people are more responsive to female speech, why not just give them a choice?
But then what if we over-correct, and start treating the software inside our smart microphones with the politeness and respect we would offer another human, and insisting that our kids do, too? What patterns are we establishing for their future interactions with A.I.? And what are we hiding from our kids and ourselves about the true nature of these devices, and their associated systems, when we extend human decency to something that is not human, but attempts to act human?
We don’t have good answers to these questions.
A compelling new draft report entitled “Ethically Aligned Design,” from the Institute of Electrical and Electronics Engineers, discusses, among other things, affective computing, “an area that studies how computers can detect, express and even ‘feel’ emotions,” as two of the report’s authors, Rafael Calvo and Dorian Peters of the University of Sydney, describe it. In their summary post at The Conversation, Calvo and Peters note the dearth of research in this area, and suggest “we need to learn much more before these systems become widely used.”
But these systems are clearly already becoming widely used. It’s up to each user to decide how to treat them, and what to teach their kids. In any case, we’re all part of a grand experiment about privacy, technology, and humanity.
* * *
Here’s an experiment you can try, too. Change the wake word on your Alexa-enabled device to “computer,” or better yet, “Amazon,” for a week. (Amazon gives you those two choices, along with the default, “Alexa,” and “Echo.”) See if you think of the device differently, or use it differently, when you speak the megacorporation’s name out loud in your home each time you want something.