With the increasing use of software for applicant tracking and candidate background checks, how do you verify whether candidate-screening software is programmed with prejudice?
To answer whether hiring decisions rest on biased software, a team of computer scientists from the University of Utah, the University of Arizona and Haverford College in Pennsylvania has discovered a way to find out whether an algorithm used for hiring decisions, loan approvals and comparably weighty tasks could be biased like a human being.
The researchers, led by Suresh Venkatasubramanian, an associate professor in the University of Utah’s School of Computing, have developed a technique to determine whether such software programs discriminate unintentionally and violate legal standards for fair access to employment, housing and other opportunities.
Whether screening software operates without bias can be determined by testing the algorithms it uses to weed out job applicants when hiring for a new position.
“There’s a growing industry around doing résumé filtering and résumé scanning to look for job applicants, so there is definitely interest in this. If there are structural aspects of the testing process that would discriminate against one community just because of the nature of that community, that is unfair,” Venkatasubramanian said in the press release.
Machine-learning algorithms
Many companies have been using algorithms in software programs to help filter out job applicants in the hiring process, typically because it can be overwhelming to sort through the applications manually if many apply for the same job. A program can do that instead by scanning résumés and searching for keywords or numbers (such as school grade point averages) and then assigning an overall score to the applicant.
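As an illustration, here is a minimal sketch of what such a keyword-and-score step might look like in Python. The keyword list, weights and GPA cutoff are assumptions made for the example, not details of any actual screening product:

```python
# A minimal sketch of the kind of automated résumé scoring described above.
# The keyword list, weights, and GPA threshold are illustrative assumptions,
# not details from any specific product mentioned in the article.

KEYWORDS = {"python": 2.0, "sql": 1.5, "project management": 1.0}

def score_resume(text: str, gpa: float) -> float:
    """Assign a rough score from keyword matches and GPA."""
    text = text.lower()
    score = sum(weight for kw, weight in KEYWORDS.items() if kw in text)
    if gpa >= 3.5:  # boost applicants above an assumed GPA cutoff
        score += 1.0
    return score

print(score_resume("Led a project management team using Python and SQL", 3.7))
# -> 5.5
```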
These programs also can learn as they analyze more data. Known as machine-learning algorithms, they can change and adapt like humans so they can better predict outcomes. But there has been a growing debate on whether machine-learning algorithms can introduce unintentional bias much like humans do.
“The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations,” Venkatasubramanian added.
Venkatasubramanian’s research showed that a test can determine whether the algorithm in question may be biased.
If the test — which ironically uses another machine-learning algorithm — can accurately predict a person’s race or gender from the data being analyzed, even though race or gender has been removed from that data, then there is potential for bias under the legal definition of disparate impact.
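A hedged sketch of that idea: train a second classifier to predict a protected attribute that has been removed from the data, using only the remaining features. The synthetic data, the proxy feature and the interpretation below are assumptions for illustration, not the researchers' exact procedure:

```python
# Sketch of the test described: train a second classifier to predict a hidden
# protected attribute (here, a synthetic "gender" column) from the remaining
# applicant features. If it predicts that attribute well above chance, the
# features still carry the information and disparate impact is possible.
# The data are synthetic and purely illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)                 # hidden protected attribute
# A feature correlated with gender (e.g., an assumed proxy like a hobby keyword count).
proxy = gender + rng.normal(0, 0.5, n)
gpa = rng.normal(3.0, 0.4, n)                  # an unrelated feature
X = np.column_stack([proxy, gpa])              # what the screening model sees

accuracy = cross_val_score(LogisticRegression(), X, gender, cv=5).mean()
print(f"protected attribute predictable with accuracy {accuracy:.2f}")
```

If the cross-validated accuracy sits well above 50 percent, the remaining features effectively encode the protected attribute, which is the warning sign this kind of test looks for.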
If the test reveals a possible problem, it can be fixed easily: redistribute the data being analyzed (say, the information about the job applicants) so the algorithm can no longer see the information that creates the bias.
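One simple way to carry out such a redistribution, assuming a single numeric feature and known group labels, is a rank-preserving quantile alignment that maps every group onto the same pooled distribution. This is only a sketch of the general idea, not necessarily the repair procedure used in the research:

```python
# Redistribute a numeric feature so it no longer reveals group membership:
# map each group's values onto the pooled distribution via rank-preserving
# quantile alignment. A sketch of the general idea only, not the exact
# repair method from the paper.

import numpy as np

def repair_feature(values: np.ndarray, groups: np.ndarray) -> np.ndarray:
    """Replace each value with the pooled quantile at its within-group rank."""
    pooled = np.sort(values)
    repaired = np.empty_like(values, dtype=float)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        # Rank of each member within its own group, as a fraction in [0, 1].
        ranks = values[idx].argsort().argsort() / max(len(idx) - 1, 1)
        # Look up the same quantile in the pooled distribution.
        repaired[idx] = np.quantile(pooled, ranks)
    return repaired

scores = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0])
groups = np.array([0, 0, 0, 1, 1, 1])
print(repair_feature(scores, groups))  # both groups now span the same range
```

After the repair, both groups share the same distribution of scores, so a downstream model can no longer recover group membership from this feature, while each applicant's rank within their own group is preserved.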
For now, however, this research is a proof of concept. Whether the test will soon be built directly into systems to improve hiring practices, only time will tell. Watch for our future updates on this story.
News source: utah.edu