AI, psychometrics and people decisions: what is there to fear?

Rich Littledale
7 min read · Jan 11, 2019


If you enjoy this article, you can hear me talk on this topic with Bruce Daisley (tech supremo and bestselling author) on his great podcast “Eat, Sleep, Work, Repeat”.

https://eatsleepworkrepeat.com/appsandyournextjob/

Tech firms are using powerful predictive analytics — AI and machine learning — to draw conclusions about us, and try to predict our behaviour. This technology is increasingly being applied to hiring. Is this something to fear, or something to embrace?

A bit about me (why I feel qualified to comment)

My first contact with psychometrics was in the summer of 1996, when I spent a few months as an intern at OPP, the company that introduced the MBTI* to the UK and Europe. I didn’t start using them in earnest until five years later, in 2001, when I completed my MSc at the Institute of Work Psychology in Sheffield and began my career as an Occupational Psychologist. Since then I’ve regularly used, designed and evaluated assessment tools — including psychometrics — as part of my practice, and I am a Chartered Occupational Psychologist.

The problem I’m working on currently is how to support and develop startup founders to be better leaders with better relationships and better teams. In doing this work, psychometrics are part of my assessment toolkit, so I try to stay close to innovation in this area.

A change is happening

For a long time the world of psychometrics has been pretty stable. The personality test that I use most frequently in 2018 (the Hogan suite) is one that I was trained on in September 2001 (although it has been updated over time). There have been practical changes: psychometrics are now predominantly completed online rather than on paper (you’d be surprised what a big deal that seemed to the profession at the time). In the background, the theory and statistical techniques have also evolved, notably the shift from classical test theory to item response theory. But the things that matter to users, what they measure and how they measure it (questionnaires), have not really changed.

Until now.

As the technology to gather and process data improves, new ways to evaluate people are emerging. Cambridge Analytica’s mapping of Facebook activity to personality traits is the best known of these — even if there is reason to doubt whether it actually worked as promised to clients — but there are many more besides. Games; video of facial expressions; activity from email, calendar and other cloud apps: just three sources of data being analysed, with the promise of predicting performance or behaviour using techniques like machine learning and artificial intelligence.

Recently I attended a session on AI in Assessment and Selection run by Cognition X. Thanks to them for this great infographic.

Is this something to worry about?

“All publicity is good publicity”. I think that Cambridge Analytica might be the exception that proves the rule, both for Facebook and for psychometrics. If you also throw AI/machine learning into the mix, with the increasingly well understood risk of baking bias into your decision-making process, there is double the reason to be suspicious. Amazon’s ditching of its automated resume-screening tool may, for some, confirm that suspicion.

As a psychologist and expert practitioner, here’s my take:

The technology is over-hyped, and in danger of over-promising. Hype and hyperbole about AI** have reached fever pitch, and while that gets people interested, it obscures the fact that the techniques being used are simply tools. Tools that have benefits and drawbacks. Tools that are good for some jobs and not for others. The key risk is that those creating the new tools fail to invest time and energy in anticipating the unintended consequences of what they are building (if an investor has just given you a few million in Series A or B funding, your priority is to grow, not to doubt). For anyone pushing the boundaries of knowledge and what is possible, while at the same time having to meet the commercial needs of a business, good ethical governance is key. In this area, as in much of tech, I think there is ground to be made up.

It is a mistake to treat the new wave of assessment tools as a homogeneous group. As you can see from the Cognition X slide above, the new wave of assessments comes in a number of different types. Cognition X differentiate between question-based, video, games or gamified, and “other”. It is also useful to know the distinction between genuine game-based assessment — where behaviour in a game scenario is captured and rendered into discrete data points — and gamified assessment — a more traditional measurement approach delivered in a way that seeks to engage users with game-like elements, e.g. “high scores” or “winning points” for completion. I think there is another important distinction to be made among these new providers: a continuum between top-down, theory-driven approaches at one end and bottom-up, data-driven approaches at the other.

I worry when the methodology and theory are opaque. For some of the new wave of assessment tools, the method of measurement may have changed, but the traits being evaluated have not. These may be related to established models like the Big Five, or to narrower dispositions like risk orientation, but it is possible to see the link between what is being measured and how that affects behaviour and performance. For others, however, the link is far less clear. Things are measured, and performance is predicted, but the link between the two is opaque.

This means it is not possible to give those taking the test any meaningful or useful feedback on how they performed. Happily, some providers recognise this.

Alastair Frater from Arctic Shores told me:

“our experience shows us that tangible, timely and accessible feedback is key for candidates to accept any assessment process, particularly new, technical-based ones”.

Perhaps more worryingly though, a lack of a theoretical model to challenge also increases the chances that you replicate biases in the data.

(I’m not the only one thinking this. Here’s what developmental psychologist Uta Frith thinks about big data: https://www.wired.co.uk/article/uta-firth-facebook-google-data-garbage.)
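To make the bias-replication risk concrete, here is a toy sketch of my own (not any vendor’s method, and deliberately oversimplified): a trivial “model” trained on biased historical hiring decisions will faithfully reproduce that bias for equally able candidates, without any theoretical model to push back against it.

```python
# Toy illustration: a model trained on biased historical decisions
# replicates the bias, even when underlying ability is identical.

def learn_threshold(examples):
    """Learn the lowest ability score that was ever hired (a crude 1-D 'model')."""
    hired_scores = [score for score, hired in examples if hired]
    return min(hired_scores)

# Historical data: identical ability distributions, but group B was held
# to a stricter (biased) bar by past human decision-makers.
abilities = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
history_a = [(s, s >= 0.5) for s in abilities]   # group A: hired at 0.5+
history_b = [(s, s >= 0.8) for s in abilities]   # group B: hired at 0.8+

model_a = learn_threshold(history_a)  # learns 0.5
model_b = learn_threshold(history_b)  # learns 0.8

# A new candidate with ability 0.7 is accepted from group A, rejected from B:
candidate = 0.7
print(candidate >= model_a)  # True
print(candidate >= model_b)  # False
```

The data never told the model anything false; it simply learned the past, bias included. That is exactly why a purely bottom-up, data-driven approach needs a theory (or at least an auditing step) to challenge it.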

However, the new wave of assessment tools addresses a real flaw in current psychometric practice. The best predictor of how someone will behave is how they do behave. Anyone who has completed a personality questionnaire knows the drill: respond to loads of questions about yourself. Sometimes these will be about placing yourself on a scale (strongly agree to strongly disagree), and sometimes you will be forced to make choices (are you more like this, or more like that?). However the questions are framed, they get at how you are through the proxy of how you see yourself. Whether they ask you to play games, or record and analyse video of your face and voice, the new wave of techniques goes a step further, directly to behaviour (even if that behaviour is within a very narrow, controlled realm). As such, there is the potential to make more accurate behavioural predictions.

And most importantly, if these tools can improve the fit between person and role, it is good for everyone. Analysis of the threats presented by the new wave of assessment methods often omits one key fact: most current methods used to predict job performance are pretty bad. It causes a person harm if they are denied a role that they would have been successful in and were the best candidate for, but that harm happens thousands of times over every day due to unstructured interviewing. Effective measures, aligned to a clear understanding of what the job requires, are better for applicants as well as organisations. And this is something that Arctic Shores at least are aiming for.

Alastair Frater again:

“It is essential to align any psychometric, games-based, neuroscience-based or traditional, with the underlying predictors of success — that is why Arctic Shores runs pre-assessment validation on incumbent role-holders to establish what actually links to on-the-job performance. The results are often surprising. For instance, a leading professional services business found that high optimism was one of the key predictors of actuary hires being successful in role”.
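In principle, this kind of pre-assessment validation comes down to checking which assessment scores actually co-vary with on-the-job performance among incumbents. The sketch below is entirely illustrative (invented data, my own code, not Arctic Shores’ actual method), but it shows the basic statistic involved: a correlation between an assessment measure and a performance rating.

```python
# A minimal sketch of concurrent validation: correlate incumbents'
# assessment scores with their job-performance ratings.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical incumbent data: an optimism score vs a performance rating.
optimism    = [3.1, 4.2, 2.8, 4.8, 3.9, 4.5, 2.5, 3.6]
performance = [2.9, 4.0, 3.1, 4.7, 3.8, 4.4, 2.6, 3.5]

r = pearson_r(optimism, performance)
print(round(r, 2))  # strongly positive on this toy data
```

A measure that correlates with performance in the incumbent sample earns its place in the assessment; one that doesn’t gets dropped, however plausible it sounded beforehand. (Real validation studies also need adequate sample sizes and checks for range restriction, which a toy like this ignores.)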

If adding tools to the mix reduces that error rate, and reduces the proportional impact of that error on disadvantaged groups, I tend to be a utilitarian about it and call it good. Workplace stress is a blight on people’s lives, and a burden on our economy. If more people land in jobs that they will be good at, and that they will enjoy, that has to be a good thing too.

* My position on the MBTI has shifted over the years, and currently is set to pragmatic tolerance.

** There is a school of thought that says there actually isn’t any AI in assessment, or anywhere else really. I recently attended an event where Daniel Hulme (CEO of Satalia, and Director of Business Analytics at UCL) was talking about AI. He used a definition of intelligence borrowed from my profession, psychology and psychometrics, which runs as follows: goal-directed adaptive behaviour. Daniel argued that if AI is defined in that way, AI systems should be able to “adapt themselves, in production, without the aid of a human”, and that no current systems can do this. I heard nothing in the masterclass to suggest that any new candidate selection and assessment tools are doing this. It is more likely that new assessment approaches are actually underpinned by data science and machine learning techniques, most likely supervised machine learning, rather than AI. I could reasonably be accused of being nit-picky about definitions here. “AI” seems to be used as a catch-all for a cluster of powerful predictive analytics techniques, and whether or not they really are AI doesn’t affect their usefulness. However, I do think that the mystique of “AI” puts people off trying to understand what is going on, and makes them more credulous customers.
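For the curious, supervised machine learning is less mysterious than the “AI” label suggests: learn a mapping from assessment-derived features to known outcomes, then apply it to new candidates. Here is a minimal sketch using a nearest-neighbour rule; the features and labels are invented for illustration, and real systems use far richer data and models.

```python
# A minimal supervised-learning sketch: predict a label for a new candidate
# from labelled training examples, using a 1-nearest-neighbour rule.

def nearest_neighbour(train, features):
    """Return the label of the training example closest to `features`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: sq_dist(ex[0], features))[1]

# Training set: (game-derived features, known job-performance label).
train = [
    ((0.9, 0.2), "high"),   # e.g. quick reactions, low-risk play style
    ((0.8, 0.3), "high"),
    ((0.3, 0.8), "low"),    # slow reactions, high-risk play style
    ((0.2, 0.9), "low"),
]

print(nearest_neighbour(train, (0.85, 0.25)))  # "high"
print(nearest_neighbour(train, (0.25, 0.85)))  # "low"
```

Nothing here adapts itself in production without a human: the model is fixed once trained, which is precisely Daniel Hulme’s point about why “machine learning” is usually the more honest label.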



Written by Rich Littledale

Psychologist in startup land, exploring the people side of technology and technology businesses. Consulting at www.peopleuphq.com, co-founder at www.supc.co.uk
