Changing the Conversation on AI: Promoting Equity

One of the arguments for the use of artificial intelligence (AI) is that the technology removes human bias. Unfortunately, research shows the opposite. A 2019 Harvard Business Review article detailed how AI systems can perpetuate bias. Technologies like facial or voice recognition software are often engineered with data that is not representative of the entire population and may not account for distinctions in race, gender or language. Last year, the Association for Computing Machinery recommended that both the public and private sectors cease using facial recognition technologies until the systems can be evaluated for bias and overhauled if necessary, citing evidence of “clear bias based on ethnic, racial, gender and other human characteristics recognizable by computer systems.”

Consider Individual Experience

“It’s easy for us to look at these devices on the surface and think they are benign. We assume everyone has the same experience with them, and that is not true,” said Halcyon Lawrence, an assistant professor of technical communication at Towson University whose research focuses on speech intelligibility and the design of speech interactions for voice technologies. A native of Trinidad and Tobago, Lawrence often encounters systems that do not recognize her non-standard accented English, which leads to the frustration of having to repeat herself. She noted that others may feel pressured to change how they speak to suit the device.

“But speech is part of identity,” she noted. “Devices are more frequently disciplining those who do not speak standard English.” This creates an uneven playing field: standard English speakers typically have no problem, while those with other accents or dialects run into difficulty and often feel they must change who they are to interact successfully with the technology.

Lawrence emphasized that AI technologies need to be designed to reflect the full range of the population. Representation works both ways: people not only want to be understood by the device, they also want to hear voices that represent them. This is critical for marketers who employ AI technologies to improve the customer experience.

What Can Marketers Do?

Given the limitations of the systems currently on the market, Lawrence recommended that marketers employ the technologies “judiciously” and “ensure that customers can communicate with you in the way they are comfortable.” She urged marketers to think about other ways customers can engage with companies and to let customers opt out of the AI tools and communicate via other channels.

Marketers can also push for inclusion with their engineers and vendors. “Marketers can be powerful advocates for change and advocate for a variety of voices,” said Lawrence. “Ask yourself, ‘Does this technology sound like my customer base?’ We want customers to have universal experiences,” she advised.

Lawrence also cautioned marketers to be mindful of the language used to describe new technology. The tendency is to use terms like “revolutionary” or “game-changing,” but a technology that perpetuates bias should not be labeled with those terms. “We have to start thinking about the social justice aspects of new technology. Think about who gets left out. Some may benefit but others may be harmed. Do technologies represent the communities they are designed for? How we talk and write about technology is as important as holding the technology itself accountable,” said Lawrence.

While AI has the potential to improve the customer experience and help marketers reach new groups of customers, it is clear that more work needs to be done to make these technologies truly bias-free. Tech giants like Google are working toward developing more inclusive technologies, and marketers can contribute to the conversation within their own organizations by knowing their target markets and advocating for representation and a more inclusive, equitable customer experience.
