Sarah Thomas, Women in Comms: A future overrun with sexist, racist machines is not hard to envision, unfortunately

We hear a lot about how artificial intelligence (AI) has the potential to displace jobs, especially those held by women in tech, but should we also worry about a future overrun with sexist, racist machines?

This article was written by Sarah Thomas, director, Women in Comms, and first published by Light Reading (Banking Technology’s sister publication).

It’s not hard to envision, unfortunately. If AI is not designed to reflect all types of individuals, but rather only the white men who are writing the algorithms, that might be the scenario we end up with.

First, the threat to jobs is real, and it is weighted more heavily towards women. Second, and arguably more concerning, is the damage that can happen when AI infiltrates every aspect of our lives and brings harmful stereotypes and biases with it.

First, on the jobs front: the World Economic Forum predicts that 5.1 million positions worldwide will be lost by 2020, hitting women the hardest. Men will face nearly four million job losses against 1.4 million gains, while women will have three million losses against 0.55 million gains. This is because AI will displace jobs that women hold at higher rates, such as administrative positions, and because it will affect the tech industry, where there is already a well-documented disparity.

Second, and less explored, is what these computers will look like. Like the tech industry at large, the field of AI is dominated by white males. AI learns from humans – these white, male humans. If human biases, whether unconscious or deliberate, make their way into algorithms, they get reflected in the robots and programs that result. The machines may be “intelligent,” but who cares if they are also racist, sexist and painfully stereotypical?

We’ve already seen some examples of this happening. Here are just a few:

  • In 2015, Google’s photo-recognition feature misidentified black faces as gorillas – not on purpose, but because it had largely been trained on white faces…

