
Why Artificial Intelligence Is Stuck in the 1950s

As technology evolves and more companies introduce artificial intelligence, will they learn from the mistakes of the past?

BY Eric Holtzclaw - 10 Nov 2017


I love my Alexa. When I moved into my new house, I made sure she was an integral part of it, from controlling the lights to playing music to providing news and updates.

But Siri and I don't have the same close relationship. I end up yelling at her in frustration more often than not. She just doesn't understand my Southern accent.

I was visiting with Kriti Sharma, the VP of Bots and Artificial Intelligence (AI) at Sage, and I shared my love for Alexa and my frustration with Siri while we discussed Sage's new artificial intelligence bot, Pegg.

Pegg, a smart agent recently introduced by Sage, can take care of simple accounting tasks, such as sending or following up on an invoice, coding expenses against account codes, and letting you know when your bank balance is low, all in response to simple natural-language questions. Good news for me: she is integrated with Alexa.

It was during my conversation with Kriti that I became aware of a gender bias that is slowly being introduced into artificial intelligence.

Let me explain:

Our first introductions to AI are often systems like Alexa and Siri; they are at our beck and call to do the tasks we ask of them. People in the field, Kriti included, share stories of how inappropriate, even abusive, users of artificial intelligence can be.

In fact, the team at Sage had to consider how Pegg would respond to statements like "Pegg, I love you" or "Pegg, I hope you die".

Contrast these helpers with the artificial intelligence created by other companies: IBM created Watson, and ROSS has been billed as the first AI lawyer. Both carry male personas in professions that have traditionally been male dominated.

See the problem?

The world of artificial intelligence continues to reinforce existing gender bias: the "assistants" meant to serve us are given female personas, while the "experts" are given male ones. Kriti shared:

"Gender bias is systematically built into the world we live in. It has long been suggested that the default voices are female because humans associate women's voices with helpfulness and support. These kinds of gendered associations and perceptions exemplify biases that we are perpetuating in the machines we are building."

While this may seem minor, research has shown that subtle cues like these shape the way we view the world.

When asked what could be done to avoid this future, Kriti offered the following advice:

"With AI, we as developers have an opportunity right now to stop bias of any kind from being built into our products. One of the perceived advantages of using AI is that it is impartial, logical and objective. However, in reality, AI will always be biased unless it's built with mechanisms to counter the biases left by its developers. To create truly inclusive AI, it's critical that we recognize how bias infects AI and can make the 'intelligence' generated less than objective."

Kriti continued:

"The first step is to level the playing field by diversifying and enhancing the developer talent pool. Employing a diverse team of developers to build emerging technology, that relies on learning from the humans that build it, will ensure we have machines that recognize all races, gender, voices and faces.

The next step is for businesses to establish ethical technical standards that identify and eliminate biases in AI, starting with the first line of code.

Providing clear expectations on how developers source and process increasingly diverse data sets for training AI, as well as on the specific values embedded in reinforcement learning, is crucial for success. We must also consider how the design and persona of anthropomorphic AI assistants or programs reinforce stereotypes about gender roles in the workplace.

By establishing a code of conduct for AI development and training, like the one I helped create at Sage, businesses can avoid perpetuating biases and stereotypes in their AI technologies."