One of the most interesting and controversial books in the HR section in a long while! Hilke brings the eye of an investigative journalist and industry outsider to her new book “The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted & Fired & Why We Need to Fight Back Now.”
- A fateful cab ride was the genesis of the book: when her Lyft driver was gobsmacked at being interviewed by a robot for a baggage handler job at the local airport, Hilke began her quest to learn more.
- The one-way video interview and its inevitable demise once HireVue’s claim that facial expressions could predict a candidate’s honesty was proven to be crap.
- Talent Intelligence platforms like Eightfold can prove to be a positive force in helping organizations know whom to promote and, like any double-edged sword, are cursed for their contribution to deciding who should be laid off.
- How do we fight back? Hilke gives some sound advice to HR leaders.
- Crystal ball predictions for 2030 that sound pretty accurate to us!
[00:00:00] Welcome to the Recruitment Flex with Serge and Shelley.
[00:00:09] I'm Serge.
[00:00:09] And I'm Shelley.
[00:00:10] And we talk all things recruitment starting right now.
[00:00:17] Bonjour and welcome to the recruitment flex, Shelley.
[00:00:21] We have one of the best books I've read in this industry
[00:00:25] for a really long time.
[00:00:26] We saw the author on the social media feed saying, hey, has anybody read this book? And Serge was like, oh yeah, I'm already on chapter three. We love the perspective here, but before we get too much further down the path, can you just share a little bit of who you are and a little bit about how you came to write a book
[00:01:43] that is so specific to our little niche in this world,
[00:01:46] recruiting and technology?
[00:02:41] like, oh, yeah, it was great. I mean, it was a little weird.
[00:02:42] And I was like, tell me more.
[00:02:44] He said he was interviewed by a robot.
[00:02:46] I was like, interviewed by a robot?
[00:02:48] What?
[00:02:49] I've never heard of this, a job interview by a robot.
[00:02:51] So it turns out that probably wasn't a robot,
[00:02:53] but probably a pre-recorded voice, right?
[00:02:56] Somebody called his number.
[00:02:57] And this pre-recorded voice asked him three questions.
[00:03:00] He had applied for a baggage handler position
[00:03:01] at a local airport.
[00:03:03] And then, after going to another conference
[00:03:05] and hearing about a couple of things there, I started looking into facial expression analysis. I had just assumed there was sound science underneath it. Right. If we check people's facial expressions for the job,
[00:04:21] I just had assumed that there was scientific evidence underneath that.
[00:04:24] And when I started talking to experts in the field of, like, emotion recognition... The predictive health analysis coming into the space, I think, will create a lot of tension around privacy, right? Like, say I have a mental health diagnosis, and a company is going to use a vocal biomarker to analyze my speech patterns, maybe to help me with services. But through that, they might find out that I'm depressed or anxious. Like, how do I feel about that? And how may this data be used? There's going to be a whole new world opening up for us.
[00:05:43] I think you mentioned something in there that surprises folks. They didn't know that. And I actually think the vendors themselves who built these tools so often assume that there is science underneath these things. So when I talk to people about how do they find out
[00:07:02] our personality traits in one-way video interviews, seeing them as helpful in the process. But I think a lot of job applicants don't perceive one-way video interviews as something good. I've had maybe a handful who said it was fine: I didn't care that much, I could do it on my own time and that was cool. But most people were just really upset that the companies wouldn't put their resources
[00:08:23] in, that they couldn't ask questions. ... Very clever. Because it talked a lot about what we, on the other side of the screen, that is, recruiters, experience. What is our biggest complaint about the tools we are given to do the job? It's the resume parsing, right? It's how I found out that skills are super important to highlight in your resume or your LinkedIn profile, right? And that's a really fair point. We should look more at skills versus the schools that you went to and all of that. Yeah, and that's the beautiful thing about LinkedIn, because they've standardized how you say
[00:11:01] this is the skill I have.
[00:11:03] They give you a list to choose from,
[00:11:05] which removes a massive challenge for the industry. And part of the challenge that we're all feeling is: do we want to implement more hard-skills challenges and exercises without causing too much friction in the process as well? But let's take a step way back.
[00:12:20] If we look at the title, it ends with why we need to fight back now. ... to know how technology is being used, to actually put in the appropriate guardrails. Like, it doesn't help to just have these big laws that then don't work for every individual industry. But I also wrote the book for HR leaders, to actually empower them and be like, don't. We know that personality tests have very little validity in terms of predicting how successful somebody will be in a role. I totally get that they have less bias. And I get that it's interesting to folks to find people
[00:15:00] who are maybe adaptable and you may be able to pull that out of some of these analyses
[00:15:04] if we want to use that. Okay, but maybe hire an employment lawyer who is really well-versed in that to actually check. And I think a lot of folks assume that, okay, this big company uses this tool.
[00:16:20] So of course they must have checked it.
[00:16:21] I wouldn't assume that at all.
[00:16:24] I think maybe running a small pilot study would help. The idea of one size fits all sounds great, and it takes out the human bias, you know, that if I'm hungry I see somebody differently than if I'm having a good day, but it may not actually work for everyone. One size may not work for everyone. And if people are not represented in the training data, or we ask them to play a game where you have to hit the space bar as fast as possible and they have a motor disability, it seems unfair.
[00:17:41] And I think you need to have everyone in mind,
[00:17:44] not just the middle, the big middle of quote unquote,
[00:17:47] regular people who don't have those 32 skills. And if your ATS (applicant tracking system) is calibrated to find the people with the most of these skills
[00:19:01] that they have on their resume,
[00:19:02] maybe there are five people that have the five skills
[00:19:05] that they actually need and are really good at this,
[00:19:07] you might throw them out. The claim is: it's really fair because we just look at the individual. But what about the team dynamics? We can hire the most qualified person and the best team worker, but if they have a toxic boss, how are they going to succeed? We don't look at the sort of circumstances around them. And some of this stuff we really cannot predict. Like if somebody suddenly has, you know, to take care of a kid, they have caregiving obligations.
[00:20:20] They need to move.
[00:20:22] Like, I mean, those things are just really unpredictable.
[00:20:24] Whether somebody's going to succeed or not, maybe we just have to live with that
[00:20:27] unpredictability.
[00:21:22] to harm the patient in health care settings. Who can be at fault? Is it the software maker?
[00:21:24] Is it the vendor?
[00:21:24] I mean, we can't really hold AI accountable
[00:21:26] because it can't pay damages.
[00:21:28] You know, it's a computer software.
[00:21:30] So there is really a question here.
[00:21:32] I think that's why vendors keep saying,
[00:21:33] oh, we don't make the decision.
[00:21:34] It's the company's.
[00:21:35] We just give them a suggestion.
[00:21:38] What I've also seen is that obviously,
[00:21:39] and I've done this myself, I did all these tests.
[00:21:42] And even though I knew maybe the science is a little
[00:21:45] iffy here, I'm not really sure if the results can be trusted. I understand why companies don't want to do it, because they fear liability, right? If you publicly come out and say, oh, we used this for hiring and we found out that it discriminated against the majority of women, and you have thousands, hundreds of thousands, of people applying, you may have a large class action lawsuit. But I think the problem is that we don't have a lot of opportunity then to right the
[00:23:00] market because we don't know what was wrong.
[00:23:03] So I was surprised when I started talking to employment lawyers who are there when companies ... algorithms, and we know about it, but we don't really know about it. Part of the challenge is that when it comes to social media, there's a lot of discussion about the algorithm because we use social media on a daily basis, whereas you're looking for a job every five years, and you do not understand ... a yes, a maybe, a no. Don't tell their managers and don't tell the people how they were ranked, but run them through the system for years to come and actually see, for the, like, 10,000 people we hired over the last three years, did this actually hold true, that they were more successful or less successful?
[00:25:42] I mean, it's put it to the test.
[00:25:44] I think my worry is, and I'm saddened to hear,
[00:25:47] that there are human hiring managers who did not put it in perspective. Mm-hmm. And when we think about the use of ChatGPT by the job seeker to take their resume, take the job description and say, write my resume ... to test themselves through different assessments, they could do that and then put it somewhere on the blockchain. So it's safe there, it can't be hacked into, and these are proven skills that Hilke has.
[00:28:20] And I can add to them and I can appeal the process
[00:28:22] if something is wrong.
[00:28:24] And so actually, I don't think that's true
[00:28:25] because that was a different ... Psychometric testing has been in talent acquisition since the 1950s. The science behind it is very old, and you can argue whether it's been validated or not. But what are your overall thoughts on psychometric testing, which has been around forever and is still used in the process right now? What are your thoughts there?
[00:30:41] That's right. There were probably a bunch of Thomases in the training data.
[00:30:43] Who knew?
[00:30:44] Maybe a popular name at that time.
[00:30:46] It admired lots of.
[00:30:47] Right.
[00:30:48] But also like locations and some other things that I think are problematic.
[00:30:51] Looking at testing AI tools versus traditional methods in some of these pilot phases could
[00:30:56] be really helpful.
[00:30:57] Yeah.
[00:30:58] Do you think through, okay, what are the measures?
[00:31:00] So there are lots of things we have to think about, like how valid are the differences
[00:31:04] between people? ... everyone for a month and then let them all go? That is not doable and would be horrible for job applicants as well. But can we use maybe VR or something like that to actually test people on the five most important skills that folks have identified, and maybe teamwork is one of them or some other soft skill, but throw them into different scenarios and play it out a little bit?
[00:32:20] And I think that would actually be helpful
[00:32:22] for job applicants as well to sort of learn
[00:32:24] what am I being asked to do in this job
[00:32:26] because in a job description, there are certain keywords. ... your preference, and maybe I'm not very conscientious. But if I work as an accountant, maybe I train myself to be very conscientious. So I think those are things to think about as well, that we have the ability to adapt. But how do we measure that? I do not know. I feel like measuring soft skills is the new frontier, and I don't have an answer to that either.
[00:33:40] Do you know, Hilke, a very wise person said to me
[00:33:44] in the very beginning of my recruitment career ... the same degree, and they don't have a lot of job experience, and figure out who is the best, who should come here? I know, I know. I do want to ask you this: in your research and in your study, as you ask around, is there anyone that you can point to, whether it's a technology, a vendor, or a company, I'm hoping, that is doing it well? Because I think it's really easy to say who's not doing it very well, and we can look at
[00:35:03] the failures.
[00:35:04] But where's the good news?
[00:35:05] Are there organizations that are pacesetters? ... that are more focused on skills and looking at the larger big data, like inferring skills. And I think that might actually be helpful for women, who overall tend to be a little bit more shy, right? It takes them a little bit more time to actually put a skill on the resume, where a lot of men are a little bit more confident. And so if a tool could infer that if I have these three
[00:36:20] basic skills, but like most other people who have applied for the job who also have these skills,
[00:36:24] have these 10 other skills, it might actually level the playing field. ... And science takes a lot of time. So I think there's a real pressure to monetize the tool. And so my hope is, okay, if we can build some of these tools in the public interest, can we put pressure on companies to use the right technologies that we know work? And maybe that's too naive a thought, but that is one way, because we're not going to see aggressive
[00:37:43] regulation in the space, and I doubt that would ... Through these tools, I do feel, actually, that knowing how prevalent and how flawed some of these tools are is helpful, to actually think through, okay, well, how can we build better tools? Like how
[00:39:00] can we ask the right questions to not buy the wrong tools?
[00:39:03] Because I assume that no one in HR and talent acquisition
[00:39:06] wants to buy the wrong tools; they just need an efficient ... So we're in 2024 now. Okay. Let's move forward to 2030, which is six years away. It's not too far out. It feels far, but it's really not far out given the pace of change and what we're seeing with AI. What does 2030 look like for hiring?
[00:40:20] I hope we have massively better tools.
[00:40:24] And I do think that gener... Yes. This was great. Hilke, we really appreciate this conversation. So first of all, for everyone listening, please do go get the book, The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted and Fired and Why We Need to Fight Back.
[00:41:45] It's available everywhere.
[00:41:46] Amazon is probably where most people go.
[00:42:43] look closer at that. I think that is probably one of my next projects. Is it going to be a book? Is it going to be a podcast? A film? I
[00:42:46] don't know yet. Maybe all of it.
[00:42:49] All of the above. Yeah.
[00:42:51] Thank you so much for coming on, it is greatly appreciated. And
[00:42:54] everyone go buy that book right now.
[00:42:57] Thank you so much. And thank you for really delving into the
[00:43:01] topic with me and having a really hopefully productive
[00:43:04] conversation for folks out there.
[00:43:06] Wonderful to meet you. Thank you so much.


