The Risks of Unethical Artificial Intelligence

Toju Duke, Project Lead, Women in AI Ireland

"Let's build AI systems that will help protect and save lives and not drive further inequality, discrimination and segregation. Let's keep AI ethics top of mind when thinking about AI or developing AI systems."

TOJU DUKE, PROJECT LEAD, WOMEN IN AI IRELAND

I'd like you to meet Bertha. Bertha has always dreamed of becoming a software engineer who designs products for healthcare. She studied biomedical engineering and worked her way through the challenges of university, graduating with first-class honours. Bertha has been looking for a job for the past year and is growing increasingly frustrated. Why can't she land an entry-level or graduate engineering role, given her grades and all the hard work she has put into her CV?

You might have heard of Artificial Intelligence (AI) and the groundbreaking waves it's making in tech. Its seemingly unending possibilities for solving today's problems, from climate change to cancer diagnosis, natural disaster detection and prevention, and even COVID-19 vaccines, make it a no-brainer among emerging technologies. As the saying goes, AI is the best thing since sliced bread. Now you might be wondering what AI has to do with Bertha. AI is increasingly being adopted in recruitment, where screening tools use AI systems to save HR staff hundreds of hours sifting through CVs. Most of Bertha's applications, if not all, were assessed by an AI system. Because AI is trained on large amounts of data, such a system learns how to select candidates from the data it was trained on. On average, around 70% of workers in the tech industry are male, which means roughly 70% of the CVs the algorithm learned from came from men. Bertha is a victim of a biased AI system that learned to favour men's CVs over women's, and dropped any CV that contained the term "women" or the names of certain women's colleges. A couple of years ago, a large tech company recalled its AI recruitment software because of exactly this kind of inherent bias. What about Bertha, though? How is she going to understand that the problem was not her capabilities, knowledge or talent, but technology that was built in "error"?
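To make the mechanics concrete, here is a deliberately simplified Python sketch of how a screening model can absorb this kind of historical bias. The CVs, labels and behaviour below are invented for illustration and do not represent any real recruitment product.

```python
# A hypothetical, toy illustration of a CV-screening model learning bias.
# Every CV and label here is invented for demonstration purposes only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical screening decisions: because past hires skewed male, the CVs
# mentioning women's organisations were mostly rejected in the training data.
cvs = [
    "software engineer java cloud",              # shortlisted
    "biomedical engineer first class honours",   # shortlisted
    "software engineer women's coding society",  # rejected
    "engineer captain women's chess club",       # rejected
    "devops engineer kubernetes docker",         # shortlisted
    "data engineer women's college graduate",    # rejected
]
labels = [1, 1, 0, 0, 1, 0]  # 1 = shortlisted, 0 = rejected

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, labels)

# The model has learned that the token "women" predicts rejection, even
# though it says nothing about a candidate's ability.
weight = model.coef_[0][vectoriser.vocabulary_["women"]]
print(f"learned weight for the token 'women': {weight:.2f}")  # negative
```

Nothing in this code is malicious; the bias comes entirely from the historical decisions the model was trained to imitate, which is exactly why skewed data produces skewed systems.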

Let's move on to Joey. Joey is a 28-year-old man of African-Caribbean descent who works in an insurance company as a salesman. He is married with two kids and has never committed a crime. One Friday evening, Joey pulled into his driveway and noticed a police car parked on the corner. He stepped out of his car and was stopped by the police officers, who arrested him immediately. His wife and kids watched all of this from the confines of their home. Once he was detained and his interrogation commenced, Joey realised he was being accused of a crime he did not commit: a facial recognition algorithm employed by criminal justice organisations had identified him in error. Joey now has a criminal record, has lost his job, and must regain his composure and dignity while convincing his kids that he is innocent and that everything will be OK.

It doesn't stop there. There are many other gruesome stories of misdiagnosis in healthcare, from kidney transplants to skin and breast cancer, where algorithms drive further discrimination against BAME communities because the datasets behind them are not diverse, inclusive and fair. The same applies to other industries such as finance. Most of these biased systems have been recalled by their owners, often years after they were deployed, sold and implemented across various other businesses. Here's an example: Tiny Images, a dataset from MIT and NYU introduced in 2006, was recently found to contain a range of racist and sexist labels. Nearly 2,000 images were labelled with the N-word, and other labels included "rape suspect" and "child molester". The discovery was made by Abeba Birhane, a researcher at UCD in Dublin and the research centre Lero, and MIT removed the dataset immediately after the research was published.
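Audits like this one have pushed teams to screen dataset labels before release. Below is a rough, hypothetical Python sketch of such a check: the file name labels.csv, the two-column CSV layout and the blocklist contents are all assumptions for illustration, not the method used in the actual study.

```python
# A hypothetical sketch of a label audit for an image dataset. Real audits
# are far more thorough, but the core idea is the same: flag harmful labels
# before a dataset ships.
import csv

# Terms the dataset's maintainers have decided must never appear as labels
# (placeholder entries; a real blocklist would be curated carefully).
BLOCKLIST = {"slur_placeholder", "rape suspect", "child molester"}

def audit_labels(path: str) -> list[tuple[str, str]]:
    """Return (image_id, label) pairs whose label matches the blocklist."""
    flagged = []
    with open(path, newline="") as f:
        for image_id, label in csv.reader(f):  # assumes rows of (id, label)
            if label.strip().lower() in BLOCKLIST:
                flagged.append((image_id, label))
    return flagged

if __name__ == "__main__":
    for image_id, label in audit_labels("labels.csv"):
        print(f"flagged: image {image_id} carries label '{label}'")
```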

Many industries rely heavily on image datasets, which are used for computer vision, a branch of AI that helps computers understand and label images. Computer vision is now used in social media platforms, self-driving cars, healthcare, convenience stores, and in monitoring crops and livestock, among other applications. Any bias or incorrect data in these AI systems can have a serious impact on individuals across a wide range of industries.

At this stage you're probably thinking: given all this, why are we still using AI? Regardless of the bias and potential danger AI can bring to society when done wrong, it has so much potential for solving today's problems that we shouldn't give up on it. So how do we prevent further bias and damage to minority groups and marginalised communities? This is where Ethical AI, or Responsible AI, comes into play.

Ethical AI provides a framework in which AI systems are developed ethically, protecting human values and embedding ethics at the core of these algorithms. It looks at five main components: fairness, explainability, accountability, security and privacy. These five factors help protect members of society and ensure AI systems are fair to all, regardless of race, gender, ethnicity, culture or sexual orientation.

Diversity is key to the success of any organisation, and the same is true of technology. Responsible AI should be introduced at the three phases of product development: pre-development, training and testing, and post-launch. Regulations around the unethical use of AI should also be put in place across the globe.

Let's build AI systems that will help protect and save lives and not drive further inequality, discrimination and segregation. Let's keep AI ethics top of mind when thinking about AI or developing AI systems.
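For readers curious what the fairness component can look like in practice, here is a toy Python sketch that compares a model's selection rates across two groups, a check often called demographic parity. The decisions are invented, and the 80% threshold is the "four-fifths rule" heuristic from US hiring guidance, used here purely as an illustrative benchmark.

```python
# A toy demographic-parity check: does the model shortlist different groups
# at comparable rates? All decisions below are invented for illustration.
from collections import defaultdict

# (group, model_decision): 1 = shortlisted, 0 = rejected.
decisions = [
    ("men", 1), ("men", 1), ("men", 0), ("men", 1), ("men", 1),
    ("women", 0), ("women", 1), ("women", 0), ("women", 0), ("women", 0),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    selected[group] += decision

rates = {group: selected[group] / totals[group] for group in totals}
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.0%}")

# Four-fifths rule heuristic: the lowest group's selection rate should be
# at least 80% of the highest group's.
ratio = min(rates.values()) / max(rates.values())
verdict = "OK" if ratio >= 0.8 else "review needed"
print(f"disparate impact ratio: {ratio:.2f} -> {verdict}")
```

A check like this is only one slice of fairness, but running it at all three phases of product development is one way to catch a Bertha-style failure before it reaches real applicants.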


You can hear more from Toju at the Women in IT Virtual Summit Ireland on 17 November 2020. You can view the agenda and register your free place here.