What is the future of diversity and ethics in AI and the tech industry? Earlier this month, BCS hosted their 2019 Insight Event, and AVADO joined in the discussion. Here are our five key takeaways from the conversation.
What we’re about to tell you isn’t news. But it’s worrying, all the same.
There continues to be a persistent lack of diversity in the tech industry – particularly in AI. And it influences how our products are developed, and the safety of those products; what our AI can do, and who can use it; and who we create products for, resulting in large groups of consumers remaining untapped.
Here are our key takeaways from the event:
1. Tech is too important to be left to 50% of the population
The tech sector contributed close to £200bn to the economy last year, and it is growing 2.5 times faster than the economy as a whole. Yet, according to a recent report from Inclusive Boards, the sector lags far behind the wider economy on measures including gender, race and class representation.
Gender is where the sector performs worst. Almost two-thirds of boards, and more than 40% of senior leadership teams, have no female representation at all, while across the sector women make up just 12.6% of board members and 16.6% of senior executives on average. And according to the BCS Insight Report, women formed a meagre 16% of the total IT workforce in 2018.
In the same BCS report, the numbers for BAME (Black, Asian and Minority Ethnic) people in IT appear more favourable than those for gender and disability. However, just 8.5% of senior leaders in technology are from a minority background. This puts the tech industry far behind the UK as a whole, where one in seven people comes from a BAME background.
Nevertheless, there is hope. According to BCS, if the sector were to diversify, women alone represent an additional available workforce of 706,250 people – a potential asset that seems mad to miss out on.
In other words, tech is simply too important to be left to a fraction of the population.
2. Lack of diversity can create bias in data and algorithms
Our systems are only as good as the humans that create them. So what happens when the workforce developing our tech isn’t representative of the world we live in?
According to new findings published by an NYU research centre, the lack of diversity in AI has contributed to flawed systems that perpetuate gender and racial biases. Perhaps unsurprisingly, the study found that the biases discovered in these systems can be “largely attributed to the lack of diversity within the field itself”.
Crucially, we have to be aware of the risk that AI can replicate or perpetuate historical biases and power imbalances. There are several disturbing examples of this, such as image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognise users with darker skin tones.
In short, it’s quite clear that the lack of diversity in tech and AI can create a bias that is hardwired into our data and algorithms. And to stop it, we need to create a more diverse workforce.
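A toy sketch makes the mechanism concrete. The dataset, labels and "model" below are entirely hypothetical, purely for illustration: when one group dominates the training data, even a simple system learns that group's pattern and applies it to everyone – so the skew in the data becomes a skew in the system's behaviour.

```python
from collections import Counter

# Hypothetical training data: each sample is (group, correct_label).
# Group "A" is heavily over-represented, mirroring a non-diverse data source.
train = [("A", "match")] * 90 + [("B", "no_match")] * 10

# A naive "model" that simply learns the overall majority label.
majority_label = Counter(label for _, label in train).most_common(1)[0][0]

def predict(group):
    # The model gives everyone the same answer, regardless of group.
    return majority_label

def accuracy(samples):
    return sum(predict(g) == y for g, y in samples) / len(samples)

# Evaluate per group: the over-represented group gets perfect results,
# the under-represented group gets none.
test_a = [("A", "match")] * 50
test_b = [("B", "no_match")] * 50
print(accuracy(test_a))  # 1.0
print(accuracy(test_b))  # 0.0
```

Real AI systems are vastly more complex, but the underlying dynamic is the same: a model optimised on unrepresentative data performs worst for exactly the people that data leaves out.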
3. By ignoring disabled people, we’re ignoring a huge untapped market
According to Government data from 2016/17, there are 13.9 million disabled people in the UK. Though accounting for 18% of the working-age population, people with disabilities constituted only 12% of the total UK workforce in 2018 – and only 9% of IT specialists in the UK.
This lack of diversity isn’t just apparent in the world of work, but in the products and services we offer (or don’t offer) to people with disabilities. Access issues and outdated attitudes mean people with disabilities (and their families) are often shut out from activities many take for granted – such as visiting a restaurant, taking the tube, or attending a seminar.
However, people with disabilities represent a huge potential market for tech and IT products. And currently, businesses are missing out on the untapped £250 billion annual spending power of disabled people and their families.
If we change the way we work to accommodate people with disabilities, this could create an additional available workforce of 128,000 people – equating to an extra 45,000 IT specialists. A failure to make any changes may result not only in lost profits but also a “scandalous waste of talent”.
Fortunately, these problems are not difficult to fix. Staff can be retrained. Buildings can be adapted. We don’t need to live in a society where nearly a quarter of the population is shut out of everyday life.
As BCS rightly puts it in its latest Insight Report: “This is a question of fairness in society, doing the right thing and dignifying people with the flat playing field to gain employment that others already have.”
4. Empathy in product design supports diversity
It’s important to realise that the lack of diversity and inclusion in AI – and in overall product development – is not just cause for cultural concern. When applied to products where safety is a factor, it becomes a question of life and death.
One well-known example comes from automotive design and safety: historically, cars were largely designed by men, and when the first crash test dummies were created in the 1960s, they were modelled on the average male height, weight and stature. Consequently, the industry designed cars that were far less safe for women, with female drivers being 47% more likely to be seriously injured in a car crash. It wasn’t until 2011 that female crash test dummies were first required in safety testing.
According to TechCrunch, design bias is equally problematic when it comes to race, ethnicity, socioeconomic class, sexual orientation and more. For example, Google’s computer vision system labelled African-Americans as gorillas, while Microsoft’s vision system was reported to fail to recognise darker-skinned people. These examples of bias in AI could have serious implications for millions of people’s safety.
To build safer products, it’s clear that we must include diverse perspectives in our teams. But what happens if we hardwire empathy into the design process?
In a recent study, marketing professors Kelly Herd (University of Connecticut) and Ravi Mehta (University of Illinois) put a question to a group of 200 adults: “What kind of potato chip would you create, and what would you name it, if you wanted to sell the product exclusively to pregnant women?”
Half of the group was simply given the assignment in an objective way. The other participants were told to envision how the customer would feel while eating the snack.
Perhaps unsurprisingly, the most creative ideas (including “Pickles-and-Ice Cream” chips; “Sushi” chips; and “Margarita-for-Mom” chips) came from the group that had thought about how the consumer would feel before starting the task.
“We’ve shown that empathy can change the way in which you think,” Professor Herd commented. “It appears that subtle things, such as imagining how someone else would feel, can have a huge impact on creativity in general.”
5. Ethics in AI is fundamental, and cannot be ignored
The evidence is clear that diverse teams produce better results. According to McKinsey’s recent research on why diversity matters, companies in the top quartile for gender diversity on their executive teams are 21% more likely to outperform on profitability than those in the bottom quartile.
The same research found that companies in the top quartile for ethnic and cultural diversity are 35% more likely to outperform their counterparts.
Moreover, this is what the workforce of the future expects of corporations. According to PwC’s recent Workforce of the Future survey, 88% of Millennials want to work for a company whose values reflect their own. Millennials will comprise 75% of the global workforce by 2025.
In a 2018 Deloitte study, only 48% of Millennials said they believe businesses behave ethically, down from 65% in 2017. These factors have a huge impact on where Millennials want to work, and the roles they want to pursue.
According to Forbes, diversity and flexibility are clear factors in attracting and retaining millennial talent – particularly women.
It’s not too difficult to do the math: the companies that will stay ahead are those that prioritise ethics and diversity – in everything from their product to their workforce.
Ready to make a change in your organisation?
Bias can’t be solved by algorithms. However, by addressing bias early, we can avoid coding it into our tech and AI products, with an impact that could reverberate for years.
Although it is a daunting project to confront the lack of diversity and ethics in AI, it also affords us a unique opportunity to reflect on our own humanity, and challenge our human biases and assumptions – and create not only better AI, but a better world as well.
But this transformation won’t happen without a cultural shift. Unless D&I leaders fly the flag for diversity and ethics, employees will struggle to see it as a priority.
Here at Arch and AVADO, we recognise the growing market need. Our apprenticeship programmes can enable you to find, recruit and develop people from different backgrounds. Through our Creative Pioneers programmes, we have helped companies such as Channel 4 use apprenticeships to attract diverse talent.
And through initiatives such as AVADO’s Data Academy, we are working to promote equal access to data science and AI – and bring more women into tech roles. Many of our programmes can even be accessed for free, leveraging the apprenticeship levy.
In short, casting your net wider for talent is a no-brainer. And we want to help you get there.
Get in touch at email@example.com to find out more about how we can help transform your organisation.