HR Tech: Reducing Bias through AI and Behavioural Nudges

November 16, 2020

Bias is inevitable, especially when it comes to work. Nearly two-thirds of respondents in The Bias Barrier report said they experienced bias in the workplace in 2018. And the sobering statistics continue from there: respondents reported that bias had negative impacts on productivity (68 percent), engagement (70 percent), and on happiness, confidence, and wellbeing (84 percent). 

As humans, we can hold a variety of unconscious biases. Many are useful in daily life, operating almost intuitively. Others are less productive holdovers from the past that are no longer relevant. Common examples include: 

  • Similarity bias – We tend to favour people most like ourselves. 
  • Confirmation bias – We often prefer information that confirms our beliefs and are prone to discount information that contradicts them. 
  • Recency bias – We tend to place greater weight on what has happened most recently. 

These and other types of biases can unconsciously influence our decision-making. Managers might inadvertently hire or promote those who are most like them, make talent selections that align with their preconceived notions, and base their performance evaluations on what they expect to see or have seen most recently.

See also: 7 Reasons HR Technology is Well-Represented

Organisations are increasingly recognising that humans are biologically hardwired to operate on instinct and habit, so nonhuman solutions are in high demand to help mitigate outmoded and problematic biases. The use of artificial intelligence (AI) in recruitment alone, for instance, is expected to increase threefold in 2021.

Using AI to help reduce bias across HR  

AI is not new, but it has been making interesting strides into talent acquisition, internal mobility, learning and development, and performance management. Some common use cases of AI include: 

  • Revising job postings to use gender-neutral language
  • Anonymising resume information (e.g., names, photos, gender, schools, ZIP codes, graduation dates) to reduce reviewer bias (a minimal version of this step is sketched after this list)
  • Using gamification to assess abilities beyond resume text and match applicants to their best-suited roles
  • Providing real-time performance metrics to nudge more frequent feedback, transparency, and learning recommendations
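
To make the anonymisation use case concrete, here is a minimal Python sketch of masking identifying fields in an already-parsed resume. The field names and the simple redaction approach are illustrative assumptions, not a description of any specific vendor's tool; real systems also need to catch identifying details buried in free text.

```python
# Minimal sketch: masking identifying resume fields before review.
# Field names (e.g., "name", "photo_url") are hypothetical examples.

FIELDS_TO_MASK = {"name", "photo_url", "gender", "school", "zip_code", "graduation_date"}

def anonymise_resume(resume: dict) -> dict:
    """Return a copy of a parsed resume with identifying fields redacted."""
    return {
        field: "[REDACTED]" if field in FIELDS_TO_MASK else value
        for field, value in resume.items()
    }

candidate = {
    "name": "Jane Doe",
    "school": "Example University",
    "zip_code": "12345",
    "skills": ["Python", "people management"],
    "experience_years": 7,
}

print(anonymise_resume(candidate))
# Identifying fields are replaced with "[REDACTED]"; skills and experience remain.
```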

However, AI is not without its own challenges. The algorithms that drive AI (including the parameters for machine-learning applications) are created by humans, and humans have unconscious biases. Until we reach the technological singularity, at which point AI will program itself, AI remains subject to bias as well.

Many organisations are aware of AI’s flaws and are taking steps to address them. For example, several leading technology companies have announced their use of open-source software tools that can examine bias and fairness in AI models. In addition, a growing number of AI auditing firms is emerging to help address these issues.
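
The specific toolkits differ, but the kind of check they perform can be illustrated simply. The Python sketch below computes selection rates per group and the gap between them (a demographic-parity-style check) on invented screening outcomes; it is not tied to any particular open-source tool, and the data is purely illustrative.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Share of positive decisions per group.

    `decisions` is a list of (group, selected) pairs, where `selected`
    is True if the model advanced the candidate.
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Invented screening outcomes for illustration only.
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False), ("B", False)]

print(selection_rates(outcomes))        # group A ≈ 0.67, group B = 0.25
print(demographic_parity_gap(outcomes)) # ≈ 0.42 -> a gap worth investigating
```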

Combining AI with behavioural science  

AI can provide humans with powerful tools to reduce unconscious bias, but in turn, humans need to design AI with fairness standards in mind and routinely monitor and test algorithms to ensure they do not favour or disadvantage any particular group. This way, we can use human judgment, aided by AI, to reduce both our unconscious biases and inadvertent machine-learning biases.

Steps to reduce bias within the organisation  
  • Examine the end-to-end talent life cycle to identify the areas most prone to bias (e.g., decisions on resume screening, interviewing, selection, performance management, or internal mobility).
  • Explore AI and data science solutions, designed with fairness in mind, to address the areas with the greatest potential for bias (e.g., by identifying processes or tasks that can be automated). 
  • Determine where behavioural science can nudge decision-makers at the right times with the right information: for example, examining a full review period rather than only recent actions when measuring performance (a simple version of this nudge is sketched after this list), evaluating ability test results alongside resumes when selecting candidates for interviews, or showing candidate details as a group rather than one by one to compare against the desired fit. 
  • Keep in mind that, for humans, a bias issue can be seen as a learning issue. 
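
As an example of the third step, here is a hypothetical recency-bias nudge in Python: before a review is submitted, it checks whether the cited examples cluster in the final weeks of the review period and, if so, prompts the manager to look at the full period. The function name, 30-day window, and dates are illustrative assumptions, not a reference to any particular HR system.

```python
from datetime import date, timedelta

def recency_nudge(example_dates, period_start, period_end, recent_window_days=30):
    """Return a nudge message if every cited example falls within the last
    `recent_window_days` of the review period, otherwise None."""
    recent_cutoff = period_end - timedelta(days=recent_window_days)
    if example_dates and all(d >= recent_cutoff for d in example_dates):
        return (f"All cited examples fall after {recent_cutoff}. "
                f"Consider evidence from the full period starting {period_start}.")
    return None

# Illustrative data: a manager cites only examples from the last few weeks.
cited = [date(2020, 10, 20), date(2020, 11, 2)]
message = recency_nudge(cited, period_start=date(2020, 1, 1), period_end=date(2020, 11, 16))
if message:
    print(message)  # the system nudges the manager before the review is submitted
```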

More AI tools will continue to emerge, and organisations will become increasingly familiar with behavioural science tools and nudges that help their people make better, more informed talent decisions. 

Read here: The New IT Epoch: Technology Redefining Workplace 
