In the first part of this post we looked at some misleading headlines about Artificial Intelligence, and how the articles behind them all reached the same conclusion: the quality of the code and the data used is critical. AI technology amplifies the biases of its programmers, so let's look at three possible ways to mitigate this.
Part 2 of a series - read Part 1 here.
If you haven't heard the term 'Brogrammer' before, that's okay, but I'd bet that if I asked you what one was, you'd have a pretty good shot at getting it right. The current male-dominated culture can have a negative impact on women in the industry, which is especially worrying considering that "under a fifth of tech and ICT workers in the UK are female." Reshma Saujani, founder and CEO of Girls Who Code, says: "If you were to look at the gender makeup and the racial makeup of technical interns versus who gets hired, there is a huge drop".
Those who believe that combating gender diversity issues in the workplace is somehow "virtue signaling" or a PR stunt either don't understand the problem or choose not to. The benefits of having more women in tech are well documented by data-driven research. Dr Joanna Bryson, from the University of Bath's department of computer science, said that the issue of sexist AI could be down to the fact that a lot of machines are programmed by "white, single guys from California", and can be addressed, at least partially, by diversifying the workforce.
Hiring should be based on merit, not on which top-branded school a candidate attended, their gender or their race. But assessing someone's skills as a programmer is trickier than it seems. A number of merit-based platforms, such as HackerRank and tara.ai, are attempting to remove biases from the hiring process and match the right skills with the right job. Gary Brady from HackerRank says: "By filtering out resumes using traditional credentials, like GPA or university, we missed out on a massive pool of qualified, committed and passionate candidates."
At Tara.ai, developers get paid the same wages for the same work, whether they're men, women, retirees or twenty-somethings with a fresh undergraduate degree.
Shaherose Charania, co-founder and board member of Women 2.0, says that companies lacking diversity are more prone to making mistakes that offend their users (see my discussion of misleading headlines in Part 1). There have been numerous cases of ethnic minorities being paid less than white workers, and companies have been sued for discriminating against Asian workers. Yet the science tells us that diversity makes us smarter, and that being around people who are different from us makes us more creative, diligent and harder working. At the risk of sounding flippant: if we don't consider other cultures when building AI products, we might lose our hair. Not everyone sleeps on a bed!
The perfect blend
There is no utopian workplace with the perfect blend of diversity, meritocracy and multiculturalism that will be immune to the issues of building AI; however, these issues can be minimised by developing businesses in these three areas. Rachel Thomas of Fast.ai believes that greater diversity in the AI world will also help with the problems of bias and fairness that currently bedevil the industry: more diverse teams can catch these problems before they reach users.
In the meantime, we will have to remind ourselves to filter out overly dramatic headlines, and to always read beyond the first few paragraphs.