When Identity is a Design Choice

The more we interact with tech in a human-like way, the more the devices can bring out the best—or worst—in us.

Published December 7, 2017 · Reading time: 5 minutes

One of the biggest mistakes designers can make is failing to consider the full spectrum of users of their product. When it comes to designing lifelike interfaces such as Siri, Alexa, and the Kuri robot, we must think beyond functionality and ask ourselves two questions: What kind of persona should we design for our interfaces, and how will our users relate to it?

It’s a critical design consideration, and one that a lot of companies are grappling with, as detailed in this New York Times piece by AI Now’s Kate Crawford (and one we’re seriously thinking about in our new Augmented Intelligence offering). These interfaces represent an emerging human-computer interaction layer that is increasingly able to see the world around it, talk to its users, and even infer our emotions by reading our facial expressions or detecting changes in our behavior. I call this layer an Affective User Interface (AUI), a term borrowed from the field of Affective Computing.

An emotionally aware app using real-time machine vision analysis of a user’s face, from Rana el Kaliouby’s TED Talk. She is the co-founder of Affectiva along with Dr. Rosalind Picard of the MIT Media Lab, who is largely credited with spearheading the field of Affective Computing.

The Microsoft Azure Emotion API gives developers the power to infer emotion from faces in near- or real-time.
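To make that concrete, here is a minimal sketch of how a developer might call an emotion-inference endpoint like the one described above. The region, endpoint path, placeholder key, and response shape are assumptions for illustration (Microsoft has since folded emotion detection into its Face API), so treat it as a hedged example rather than a reference implementation.

```python
# Minimal sketch: calling a cloud emotion-recognition REST endpoint.
# The endpoint URL, key, and response fields below are illustrative
# assumptions and may not match the current Azure service.
import requests

SUBSCRIPTION_KEY = "your-azure-key"  # placeholder: supply your own key
ENDPOINT = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"

def infer_emotions(image_url: str) -> list:
    """Send a publicly reachable image URL, return per-face emotion scores."""
    response = requests.post(
        ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        json={"url": image_url},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed shape: [{"faceRectangle": {...}, "scores": {"happiness": 0.92, ...}}]
    return response.json()

if __name__ == "__main__":
    for face in infer_emotions("https://example.com/portrait.jpg"):
        scores = face.get("scores", {})
        # Report the most likely emotion for each detected face.
        top_emotion = max(scores, key=scores.get) if scores else "unknown"
        print(top_emotion, scores)
```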

Affective Computing is a multidisciplinary field of computer science that develops technology we might consider intelligent and emotionally aware. These interfaces present a slew of new opportunities and challenges, because as Dr. Cynthia Breazeal of the MIT Media Lab points out in her book Designing Sociable Robots, when a technology “behaves in a socially competent manner,” our evolutionary hardwiring causes us to interact with it as if it were human-like. And the personas we design for these bots can affect our behavior as well.

Consider an observation I’ve heard repeatedly both in my IDEO project research and in my personal life: Unassertive bots often bring out the worst in kids. When there are no social checks on rudeness—bots don’t tattle, after all—children can become blunt and abusive.

A lack of assertiveness also has an effect on adults. Reporter Leah Fessler researched how female-gendered bots respond to sexual harassment. It turns out that the assistants’ permissive personas can reinforce some disturbing stereotypes about women, and allow the user free rein to degrade them at will.

If bots can have a gender, I started to wonder whether they might also have a cultural or even racial identity. Given that these products allow us to choose between various accents, it would seem that designers have created nationalities for our bots.

To explore this further, I decided to conduct an experiment to surface the personality of these bots by interviewing them. The key was to ask soft questions: Where are you from? How old are you? Do you have any kids? What do you look like? Even, “Tell me a story.” These were not actionable queries, so they triggered purely conversational responses that revealed the “personality” the bots’ creators had designed. I recorded a video interview with each voice assistant and handed the recordings over to a number of illustrators commissioned via the social gig site Fiverr. While a few of the artists had previous exposure to the bots, I was able to source English-speaking international artists in markets where Alexa and/or Siri was not available; countries of origin included Indonesia, Pakistan, Ukraine, Venezuela, the Philippines, and the U.S. For most of the artists, my videos were their first meaningful exposure to the interfaces.

Here are the images that came back:

Gender came through clearly, but I would also argue that none of the illustrations depicts a person of color. Whether the bots were intentionally designed that way is not the issue: if users perceive them this way, what kind of cultural pressures are they carrying into the home?

It’s something I think about a lot as a first-generation American who has worked in and around tech throughout my career. On my father’s side, I’m the second U.S.-born family member, and on my mom’s side, I’m the first. I grew up in a diverse community, and the tapestry of cultures I experienced was simply normal, everyday life.

I was aware of code-switching pretty early on; many of my friends and family had their normal voice and their “professional” voice, and they chose which to use in different environments. Looking back, I first learned about it when my mom gave a wedding speech. My brother was shocked; he didn’t recognize the voice he heard, and I had to explain to him that this was my mother’s work voice. Code-switching isn’t inherently bad—we all do it to some degree—but if we’re to design Affective User Interfaces that are truly human-centered, then we must consider that for some, the switch requires more effort.

As the video above shows, inclusivity is not just an issue at the bleeding edge of tech. We’re seeing a growing body of evidence that people who speak outside of mainstream vernaculars have a harder time being properly recognized by affective systems. This is an accessibility challenge, but one rooted in cultural rather than physical barriers. In a tech industry known to be monocultural, we must take steps to ensure our affective systems are not just designed to be useful to their creators, but inclusive of all the people their products will serve. Ultimately, what’s at stake is trust, and studies have shown that when brands lose trust, they lose customers.

As designers, it’s our job to expand the usability of our affective devices to ensure they serve the rich tapestry of people, cultures, and vernaculars they will encounter. Beyond being good design, it’s good for business, because in our affective future the products that win will be the ones that users trust.