Is It True that Technology Makes Inequality Worse?

January 14, 2021
Image source: Rawpixel

Many believe that technologies such as AI, machine learning, and chatbots will bring benefits to businesses. Yet leaders also need to be aware of the challenges and possible negative consequences these sophisticated tools might pose. Some data suggest that using technology in recruitment creates biased results, while other sources report that it helps streamline the recruitment process more effectively. The matter deserves deeper investigation, and in the meantime, leaders need to recognise the drawbacks technology can introduce during recruitment in order to avoid bias and losses. 

In this article, we will discuss how technology affects equality within the workforce. Note that this article may not cover every possibility, but it can offer some insight into how technology should be used in recruitment processes. 

AI helps remove biases  

In the quest for equity, artificial intelligence could be a powerful tool in the fight against inequality. French mathematician Cedric Villani found in his study that an AI inclusion policy must pursue a double objective: to increase visibility and to strengthen competition in domestic and international markets. These objectives help ensure that the development of such technologies does not widen social and economic inequalities. 

Villani’s report also mentioned the creation of an automated assistance system for managing administrative procedures to improve equal access to public services. AI-based technologies allow us to better take into account the needs of people with disabilities and improve their living conditions. Used this way, AI can effectively reduce bias against diverse groups of candidates. 

Another popular example of technology helping remove bias is the use of AI to strip gendered language from job descriptions. Textio, a smart text editor, can make a job description more inclusive. In an interview article, Textio reported that its customers saw an increased rate of women being recruited. 
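
To illustrate the general idea, here is a minimal sketch of gendered-language flagging in Python. The word lists and the simple word-matching approach are hypothetical assumptions for illustration only, not Textio's actual model.

```python
# A minimal sketch of flagging gendered wording in a job description.
# The word lists and matching logic are illustrative assumptions,
# not Textio's actual product or methodology.
import re

# Hypothetical examples of masculine- and feminine-coded terms.
MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "aggressive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def flag_gendered_terms(job_description: str) -> dict:
    """Return the masculine- and feminine-coded words found in the text."""
    words = set(re.findall(r"[a-z]+", job_description.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

if __name__ == "__main__":
    ad = "We need a competitive coding ninja to join our collaborative team."
    print(flag_gendered_terms(ad))
    # {'masculine_coded': ['competitive', 'ninja'], 'feminine_coded': ['collaborative']}
```

A real tool would go well beyond word lists, scoring phrasing patterns and suggesting neutral alternatives, but the sketch shows the kind of signal such editors surface to recruiters.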

Aubrey Blanche, Atlassian’s Global Head of Diversity and Inclusion, told Textio that the percentage of women hired rose from 10 to 57 percent. When she first came on board, she planned to hire more women for technical roles. The team was ready to get women into the pipeline, but after two weeks of posting, there were zero female applicants. So they used Textio to streamline their recruitment, and by learning from the data Textio provided, they finally got women on board. The idea of putting AI at the service of equal opportunity, of the fight against discrimination, and of diversity and inclusion in the workplace has thus proven its worth. 

See also: 4 Trends in HR Technology that Will Shine in 2021

The contrary evidence  

Although AI represents a fantastic opportunity for social innovation, a recent study from the Fabian Society revealed that automating technologies create heightened risks for historically disadvantaged groups. 

Among other evidence, the study cited machine learning algorithms that assist with recruitment strategy but base their decisions on outdated, discriminatory historical data. It went on to mention the well-known case in which Amazon abandoned its AI recruitment software because it had learned from past data to reject women coders. Even though information about sex, race, and other characteristics covered by equality laws was excluded during the selection process, the algorithm still absorbed the historical bias through other signals in the data. 
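
To illustrate the mechanism, here is a minimal Python sketch using synthetic data and a hypothetical proxy feature. It shows how a model trained on biased historical hiring decisions can still disadvantage women even when gender itself is never given to the model; it is not a reconstruction of Amazon's system.

```python
# A minimal sketch of how a model trained on biased historical hiring data can
# reproduce that bias even when the protected attribute is excluded from the
# features. The data and feature names are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

gender = rng.integers(0, 2, n)             # 0 = male, 1 = female (never shown to the model)
skill = rng.normal(0, 1, n)                # true ability, independent of gender
womens_club = (gender == 1) & (rng.random(n) < 0.6)  # proxy feature correlated with gender

# Historical hiring decisions were biased against women, regardless of skill.
hired = (skill + rng.normal(0, 0.5, n) - 1.0 * gender) > 0

# The model only sees skill and the proxy feature -- gender itself is excluded.
X = np.column_stack([skill, womens_club.astype(float)])
model = LogisticRegression().fit(X, hired)

# The proxy picks up the historical bias: its learned weight is strongly negative.
print("coefficients [skill, womens_club]:", model.coef_[0])

# Predicted hiring rates still differ by gender.
preds = model.predict(X)
print("predicted hire rate (men):  ", preds[gender == 0].mean())
print("predicted hire rate (women):", preds[gender == 1].mean())
```

The point of the sketch is that dropping the protected attribute is not enough: any feature correlated with it can carry the old bias into new decisions.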

Furthermore, a report covering the healthcare industry noted that healthcare providers have to make many decisions when it comes to providing the best patient care. In some cases, algorithms are used to support clinical decision-making, but the fairness of such tools is not guaranteed. 

Using 25 combinations of datasets, clinical outcomes, and demographic attributes, the report's researchers set up a series of predictive models. The models included specific fairness criteria that were adjusted to be more or less strict, and the researchers quantified the effect this had on model performance. 
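
As a rough illustration of that kind of experiment, the sketch below builds one synthetic predictive model, enforces a strict demographic-parity style constraint via per-group thresholds, and compares accuracy before and after. The data and the specific fairness criterion are assumptions for illustration, not the study's actual setup.

```python
# A minimal sketch of the trade-off described above: enforcing a fairness
# criterion (equal positive-prediction rates across two groups) and measuring
# how accuracy changes. Synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, n)                      # demographic attribute (0 or 1)
risk = rng.normal(0, 1, n) + 0.8 * group           # outcome base rates differ by group
outcome = (risk + rng.normal(0, 0.7, n)) > 0.4     # clinical outcome to predict

X = risk.reshape(-1, 1)
model = LogisticRegression().fit(X, outcome)
scores = model.predict_proba(X)[:, 1]

def accuracy(preds):
    return (preds == outcome).mean()

# Unconstrained predictions: one shared threshold.
unconstrained = scores > 0.5
print("unconstrained accuracy:", accuracy(unconstrained))
print("positive rates by group:",
      unconstrained[group == 0].mean(), unconstrained[group == 1].mean())

# "Fair" predictions: per-group thresholds chosen so both groups are flagged
# at the same overall rate (a strict demographic-parity style constraint).
target_rate = unconstrained.mean()
fair = np.zeros(n, dtype=bool)
for g in (0, 1):
    cutoff = np.quantile(scores[group == g], 1 - target_rate)
    fair[group == g] = scores[group == g] > cutoff
print("constrained accuracy:  ", accuracy(fair))
print("positive rates by group:",
      fair[group == 0].mean(), fair[group == 1].mean())
```

Because the groups have different underlying outcome rates in this toy example, forcing equal prediction rates costs accuracy, which mirrors the tension the researchers quantified.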

The researchers concluded that there were concerning limitations to algorithmic fairness in healthcare. They also noted that constraining a predictive model in order to achieve fairness was insufficient for, and might actively work against, the goal of promoting health equity. 

They therefore advised that researchers developing healthcare-related predictive tools actively engage in participatory design practices to address the biases currently present in healthcare. 

Education to build a fair algorithm  

Living in the age of data, we have to live side by side with technology. Needless to say, failing to make use of technology and automation could have a huge impact on business performance. But how should leaders minimise the weaknesses found in their technology? 

Mike Walsh, CEO of Tomorrow and author of “The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You”, wrote in HBR that the longer-term solution to algorithmic inequality lies in building an adequate education system for the 21st century, in which business leaders play a crucial role. 

Walsh said that business leaders should carve out channels of communication, feedback, and advancement for freelancers at the edge of their organisations. They also need to get serious about retraining and community engagement. Some companies have already applied good initiatives, such as offering internships to high school students and working with local schools to upgrade their teaching curricula. 

What are your initiatives to help improve the future workforce? You can share your thoughts with us (renny@hrinasia.com) and we will cover your story. 

Read also: Solving HR Technology Puzzles
