The Next Big (Data) Thing: Awaiting the Robo-Revolution

As the quest for the shiniest silver bullets in recruitment continues, the fields of artificial intelligence, machine learning and “Big Data” are proving to be a great hunting ground for salesmen and sensationalists alike. To the trained eye these fields are distinct and separate, yet when a solution is sold that makes a claim in one of these areas there’s a wealth of content showing that our industry is capable of misunderstanding them all equally.  Whilst the misrepresentation of technology by those selling it is nothing new in the recruitment industry, the coverall terms of A.I. and “Big Data” have become imbued with a special power.  Part of this magical thinking has led to loftier and loftier claims for technologies that, though still in their infancy, can be skewed into ever more clickable headlines.

In journalism there’s an eponymous “Law” for this kind of headline. “Betteridge’s Law of Headlines” is an adage that states: “Any headline that ends in a question mark can be answered by the word no.”  It is intended to be humorous but seems to hold in the overwhelming majority of cases. “Is This the True Face of Britain’s Young?” Sensible reader: No. “Have We Found the Cure for AIDS?” No; or you wouldn’t have put the question mark in. “Does This Map Provide the Key for Peace?” Probably not.  I propose a similar adage for content covering technology and its application to recruitment and HR.  Betteridge’s Law still applies in this area, for example “Does AI Mean the End of Graduate Recruitment?” Nope, it doesn’t.  What about those headlines that aren’t posed as questions? For those I propose Ward’s Law: “The more often an article about recruitment uses terms associated with A.I., Machine Learning, and ‘Big Data’, the more likely the quoted study is to overstate the efficacy of the technology.” In other words, the more often recruitment is described as having been “solved”, the further that will be from the truth.

Let’s look at an example.  Recently I saw this headline: “Big Data research predicts which CV’s will be invited to interview by recruiters”, which sounds fantastic! It continues with “New research has discovered a way of telling which CV’s are most likely to be picked out from a large pile of job applications by recruiters.”  This type of press release has become formulaic: a claim about the potency of the algorithm; some slightly spurious statistics that don’t entirely hold up to further scrutiny, or that misrepresent the original intent of the research; further claims for areas where the algorithm hasn’t been tested but would be an amazing disruption (“…make it possible to predict a candidate’s future performance simply by scanning their uploaded CV…”); and a closing sooth-saying doomsday quote from someone in the industry – “In the future we’ll all be fed by tubes and robot overlords will tell us what jobs to do”.

In this example we are given the sample size, a “staggering 441,769 CV’s”, and a percentage accuracy of 70-80% when graded against human recruiters screening the same 441,769 CVs. That means that in somewhere between 88,354 and 132,531 cases the algorithm disagreed with the human recruiters. That’s quite a number of false positives/negatives, even more so for any company that values the candidate experience and cares how applicants are treated in its processes.  This element of humanity breaks down even further when the algorithm is given another source of data – a cover letter – at which point it performs worse, the strike rate falling to 69%.  How many of those 132,531 candidates the algorithm did not invite to interview went on to be hired? We’re not told.

The other aspect of this and many similar stories to consider is that humans aren’t great at dealing with large numbers. The reason for this is that our sense of number is based upon two innate systems, which essentially deal with small numbers accurately and large numbers only approximately.  We don’t often encounter large numbers, so when we do, we can struggle to know whether a given number is significant.  LinkedIn boasts 433 million members and Facebook has 1.65 billion monthly active users, but at this scale those numbers are almost meaningless when applied to the hiring goals of one company. Our inability to connect large datasets with real people is rampant. Big numbers dehumanise us, and the bigger the numbers, the worse the effect. If these raw numbers alone aren’t enough to cast a little doubt, we can look at the elements on which the decisions are being made.
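Before we do, it’s worth making the scale of that disagreement concrete. Here is a quick back-of-the-envelope calculation (in Python) using only the figures quoted in the press release; the split between false positives and false negatives isn’t reported, so it simply shows the total number of CVs on which the algorithm and the recruiters would reach different decisions.

```python
# Back-of-the-envelope check on the figures quoted in the press release.
sample_size = 441_769  # CVs screened by both the algorithm and the recruiters

for agreement in (0.70, 0.80):  # reported accuracy range vs. human screening
    disagreements = round(sample_size * (1 - agreement))
    print(f"At {agreement:.0%} agreement, algorithm and recruiters "
          f"differ on {disagreements:,} CVs")

# At 70% agreement, algorithm and recruiters differ on 132,531 CVs
# At 80% agreement, algorithm and recruiters differ on 88,354 CVs
```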

Whenever I read about a potentially revolutionary algorithm I’m always keen to understand how it arrives at its results. In particular, for these screening algorithms: what does the programmer choose to include, what do they exclude, and what weighting do they give to the elements on which the decisions are based? In this example experience, workplace and education are all measured. We’re also told that “Contextual factors were also taken into consideration, such as ‘did the candidate apply in time’ and ‘was the candidate already employed by the company?’”.  Then, potentially more problematically, as alluded to in the full version of the PhD thesis this press release is taken from, demographic factors like “age, gender, nationality, marital status, and distance from the hiring companies” are also included.
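To illustrate why those choices matter, here is a deliberately simplistic sketch of a weighted screening score. The feature names, weights and threshold are all invented for illustration and are not taken from the thesis; the point is that whoever writes this function decides which factors count, and by how much.

```python
# Illustrative only: a toy CV-screening score. The features, weights and
# threshold are invented; the point is that a programmer chooses all of them.
WEIGHTS = {
    "years_experience": 0.4,
    "education_level": 0.3,
    "previous_employer_rank": 0.2,
    "applied_in_time": 0.1,
    # Adding demographic fields here (age, gender, nationality, marital
    # status) would be a single extra line of code.
}

def screening_score(candidate: dict) -> float:
    """Weighted sum of whichever features the programmer decided to include."""
    return sum(weight * candidate.get(feature, 0.0)
               for feature, weight in WEIGHTS.items())

def invite_to_interview(candidate: dict, threshold: float = 0.6) -> bool:
    """The same candidate can pass or fail purely on the chosen weights/threshold."""
    return screening_score(candidate) >= threshold

candidate = {"years_experience": 0.8, "education_level": 0.5,
             "previous_employer_rank": 0.4, "applied_in_time": 1.0}
print(invite_to_interview(candidate))  # True with these particular weights
```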

This post is a comment on the representation of emerging technology and its application to recruitment, and it’s not my intention to speculate on a possible future of robots replacing humans, but there’s an algorithmic future that’s being neatly swept under the carpet by those who are “pro-robot”.  Research from Harvard University found that ads for arrest records were significantly more likely to show up on searches for distinctively black names or a historically black fraternity.  Research from the University of Washington found that a Google Images search for “C.E.O.” produced 11 percent women, even though 27 percent of United States chief executives are women. (On a recent search, the first picture of a woman to appear, on the second page, was the C.E.O. Barbie doll.)  Google’s AdWords system showed an ad for high-income jobs to men much more often than it showed the ad to women, a study by Carnegie Mellon University researchers found.  Those who advocate a perfect future will have to confront this research, and much more like it.  Whilst overzealous salespeople often claim that algorithms based on data are free from bias, software is not free of human influence: algorithms are written and maintained by people, and machine learning algorithms adjust what they do based on people’s behaviour.  All this is even before a well-meaning but industry-novice programmer opts to include factors like “age, gender, nationality and marital status”, which are explicitly protected in discrimination law. Would an organisation deploying such an algorithm to sift candidates have to expose how the selection was arrived at?  Would candidates still be afforded the same protections?
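To be concrete about that last point: keeping explicitly protected attributes out of a screening model is, mechanically, trivial. A minimal sketch with invented field names follows, building on the toy model above; the hard questions are the ones just posed, about transparency and candidates’ protections, not the filtering itself.

```python
# Illustrative only: stripping explicitly protected fields out of a record
# before it ever reaches a screening model like the toy one sketched above.
PROTECTED_FIELDS = {"age", "gender", "nationality", "marital_status"}

def strip_protected(candidate: dict) -> dict:
    """Return a copy of the candidate record without protected attributes."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

record = {"years_experience": 0.8, "education_level": 0.5,
          "age": 52, "marital_status": "married"}
print(strip_protected(record))
# {'years_experience': 0.8, 'education_level': 0.5}
```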

The problem here is that programmatically applying a simplistic model doesn’t allow for any degree of nuance, and when we’re seeking to measure humans, nuance is everything. Sorting and ranking algorithms for stock in a warehouse have a great advantage over those that seek to catalogue people: books on a shelf or a can of baked beans in a supermarket don’t have the free will to opt out of the process at any time, but humans do.  Historically, humans have opted out of over-automated processes. I remember a UK bank luring customers back to the fold with the promise of “no automated call centres”, and several websites offer the opportunity to “talk to a real person”.  For those companies unwilling to interact at these early stages there may come a time of reckoning, when candidates opt instead for a more human process rather than becoming a human to be processed.

So how did we get here?  Why is it that the future is either an electric nirvana or a desolate dystopia? Like a lot of science reporting in the media, the rise of technology is held up as a scare story, a robo-bogeyman to frighten HR.  Uniquely in the world of HR and recruitment, a wealth of the content on the rise of technology is written by those who are selling the products. We have a discourse owned by the vendors, and an audience that doesn’t want to, or hasn’t invested the time to, learn about the tech.  It’s no wonder that somewhere in the middle of all this there is misunderstanding, acceptance and scepticism, and, for some, money to be made in this new wild west frontier.  For the rest of us there is plenty of content filled with wild claims and spurious statistics; you might not agree with the findings of the studies or the claims of the vendors, but I’m sure someone somewhere is ready to tell you that “60% of the time, it works every time”.
