Reducing Bias w/Lindsey Zuloaga
The Recruitment Flex · June 04, 2024 · 00:31:37



This week we welcome Lindsey Zuloaga, Chief Data Scientist at HireVue.


  • Transitioned from academia to industry, driven by frustrations with the hiring process.


  • Developed and improved HireVue's AI-driven video interview analysis, moving from facial analysis to focusing on language and context.


  • Lindsey explains HireVue's processes to ensure fairness and reduce bias in AI tools.


  • Optimistic view of AI’s role in creating more job opportunities and better job matches by understanding skills comprehensively.


  • Emphasized the need for continuous adaptation and agility in the workforce as technology advances.


  • We discuss HireVue's mention in the book "The Algorithm".



[00:00:04] Welcome to The Recruitment Flex with Serge and Shelley. I'm Serge.

[00:00:09] And I'm Shelley. And we talk all things recruitment starting right now.

[00:00:17] Bonjour and welcome to The Recruitment Flex. Shelley,

[00:00:21] more interviews, more guests. We're learning every day.

[00:00:24] Yes. So Serge, I'm really excited because I think this would be the very first time

[00:00:30] we have a chief data scientist on the show.

[00:00:33] Yes, it is.

[00:00:34] I am absolutely thrilled. What an opportunity for us. Who we have joining us today is Lindsey

[00:00:42] Zuloaga, who is the chief data scientist with HireVue. Welcome to the show, Lindsey.

[00:00:48] Thank you both. Great to be here.

[00:00:50] As I said, this is a first. This is just an awesome opportunity. And I have to admit, Lindsey,

[00:00:57] I was checking you out on LinkedIn and I nearly fell out of my chair.

[00:01:03] Your background is so fascinating, which only a recruiter would say because that's what we do

[00:01:09] for a living, right? Like we study people's career tracks and work histories. So I would

[00:01:15] love it if you could share with the audience a little bit about you, your career path and your

[00:01:20] journey. How did you end up in HR tech? Yeah, definitely not planned that way.

[00:01:27] I'll start pretty far back. As a kid, I was a pretty scientific thinker, but I definitely

[00:01:34] didn't really see myself as a scientist. I was really good at math, but I didn't really

[00:01:40] understand what math was actually for. I remember being in eighth grade and having someone ask

[00:01:46] the teacher, when are we ever going to use the quadratic formula? And the teacher was like,

[00:01:49] you won't. And then when I got to high school, I had a really amazing physics teacher. I

[00:01:59] actually wrote a little essay that I have posted somewhere back years ago on LinkedIn about him,

[00:02:05] but an amazing physics teacher that really opened up my eyes to like,

[00:02:09] oh, this is what math is for. Actually math describes the world and there's quadratic

[00:02:15] equations used in the equations of motion of how things actually move. And that was huge for me

[00:02:22] to connect why math is important. So I was the first person in my family to go to college.

[00:02:28] I had very little confidence with kind of what I was going to do or even going at all. I

[00:02:34] didn't know the first thing about college, but I kept chipping away at it and I worked

[00:02:39] several jobs as I went. And I was very intimidated by studying physics or majoring in physics,

[00:02:45] but it was the only thing that really I was passionate about. So I went for it and actually

[00:02:50] started getting into research in my undergrad and then went into a PhD program as well.

[00:02:56] Never saw that coming, never excited to do that, but ended up getting a master's and

[00:03:00] a PhD in physics and doing a postdoc and surprising myself and my whole family and

[00:03:07] doing that. And when I decided to transition from academia to industry was when I learned how

[00:03:13] broken hiring is, and I was shocked. I was working in this space that's pretty competitive

[00:03:19] and I was doing pretty well in academia. And so I thought I should be fine getting a job,

[00:03:24] right? But I went out and applied for many jobs and it's just a black hole.

[00:03:30] And it's been a weird situation where I was overqualified for a lot of jobs,

[00:03:37] but also had never had a job before in the real world as I call it. So it was hard and that

[00:03:44] ties into my passion for working in this field was this, you go into an applicant tracking

[00:03:50] system, you upload your resume, then you have to manually re-enter everything that's in your

[00:03:55] resume because it didn't parse it very well. You make a cover letter and a specific resume

[00:04:00] with the keywords from that job posting, and you finally press submit and you never ever hear

[00:04:06] anything ever again. So that was really rough. I did end up getting a foot in the door in data

[00:04:11] science, which with the amount of math that I had had was pretty easy to pick up, right?

[00:04:17] This was before there were data science programs and data science majors. I taught myself

[00:04:22] this online very quickly, I liked understanding algorithms and machine

[00:04:26] and got into that in the healthcare space initially. And then found HireVue pretty

[00:04:31] soon after and was really interested in what they were doing looking at video interviews

[00:04:37] and analyzing video interviews to predict how well someone will fit in a certain job.

[00:04:43] And I'll admit I had my skepticism going into it, kind of understanding how they were

[00:04:47] doing that, what data they were using. And it was early days. HireVue had been around for a

[00:04:52] while, but they were just starting to use machine learning or AI in the process. So

[00:04:57] it was really exciting to get in early when they started doing that. And I feel a lot

[00:05:01] of ownership. It's been almost eight years that I've been at HireVue, seen that product

[00:05:06] go from its infancy to what it is today and we're adding more and more products and

[00:05:11] capabilities all the time. So it's been an exciting journey.

[00:05:15] I was trying to figure out the timelines exactly. So you would probably have started around 2016,

[00:05:21] correct?

[00:05:22] That's right.

[00:05:23] And HireVue came to fruition around 2014?

[00:05:27] We actually were founded in 2004, but it was very small time. So yeah, our founder

[00:05:34] was actually doing his MBA at a small university here in Utah and he could not get an

[00:05:40] interview at Goldman Sachs because they didn't recruit in his school. And Goldman Sachs was

[00:05:46] literally like, you could see it from his window, but he couldn't get an interview there. So his

[00:05:51] idea was we could open up the funnel more if we ship people webcams and they could record

[00:05:56] themselves. And so that was the original idea. And for a long time we did, we shipped

[00:06:01] people webcams. We still offered to do it not too many years ago and no one had asked

[00:06:05] in so long that we stopped offering. But that was the original idea and it grew pretty slowly

[00:06:11] for a while and then took off right around like 2012, 2014 more and more.

[00:06:16] Shipping webcams. That sounds like it would be really hard to scale, correct? If

[00:06:23] you think about how many interviews are running through the HireVue platform right now,

[00:06:28] it'd be almost millions of cameras you'd have to ship. Thank God for the advent of

[00:06:33] laptops and computers getting webcams. On that note though, tell us more about HireVue.

[00:06:38] For the audience that's never heard of it, can you give it a little bit of a breakdown

[00:06:41] what problem it's trying to solve? Yeah, originally we were this video

[00:06:45] interviewing or asynchronous video interviewing where candidates can record themselves answering

[00:06:50] questions on their own time and then recruiters and hiring managers can watch those videos.

[00:06:55] A big challenge that we still saw is there's still hundreds or sometimes thousands of applicants

[00:07:00] for every role and it's hard to watch all those videos depending on the role. That's where

[00:07:05] we started thinking more about assessing the videos through automatic means or through

[00:07:11] machine learning. We built a product, but it's not all of our interviews; a lot of people think

[00:07:15] that if they're taking a HireVue interview, AI is being used on them.

[00:07:19] It is disclosed if it's being used. It's not even the majority of the interviews actually.

[00:07:24] Some of our customers do use that AI scored interviewing where candidates have the same

[00:07:30] interview. We're measuring things that are related to that job. We put a lot of work

[00:07:34] into that side of it. We have a big team of PhDs in industrial organizational psychology.

[00:07:40] We're assessing this job to measure the things that matter. We've trained algorithms to score

[00:07:46] on a standardized rubric. We're predicting things like teamwork ability or communication

[00:07:51] skills or things like that. Those scores help guide human reviewers as they evaluate candidates.

[00:07:59] We've also added with time more products, things like games. We have acquired other companies

[00:08:05] that do things like coding challenges, virtual job tryouts, scheduling. Scheduling

[00:08:11] interviews with people is a big time sink. A lot of it's trying to automate the tasks that are

[00:08:17] really awful for humans to do so they can focus on the more rewarding things and the more human

[00:08:23] things in the process. We see a lot of our customers have automated a lot of those pieces

[00:08:29] and they see a lot of great results from that. I'm just wondering if you could go back just a

[00:08:34] smidge because you talked about the origins of the async video interview and the challenge being

[00:08:42] you've got just the inherent bias of somebody watching a video of someone, right? How could

[00:08:48] you possibly get through a thousand videos? Where is HireVue at today for clients that aren't

[00:08:54] using the AI scoring tool? How is that working? Is it still asynchronous? Is there some sort of

[00:09:02] filter up front and only a small percentage do an async video? It really depends on the company.

[00:09:09] Yes, that's true. Sometimes there's filtering that happens before someone's invited to an

[00:09:14] interview. We do have customers that watch every single video, right? They have teams of people

[00:09:19] that are evaluating. There are reasons for that, and sometimes it's concern over using AI,

[00:09:24] though we have a lot of very convincing data showing that our system is

[00:09:30] better than humans in every way we've measured: more consistency, less bias,

[00:09:36] and better at predicting good hires. We got to prove it out further down the road with our

[00:09:41] customers that you're seeing the results a year in or whatever it may be that these are actually

[00:09:48] better hires and they stayed longer on the job. Those people that we put in the top tier.

[00:09:55] Actually, Lindsey, I'm a little bit surprised. I thought the product was shelved because I

[00:09:59] know there's been some discussion brought up in the book The Algorithm as far as

[00:10:04] leveraging AI to visually assess. It's a live product right now, right? It's available to

[00:10:10] customers. It wasn't shelved for a moment in time or did I read that completely wrong?

[00:10:16] You mean using the actual visual component of the video.

[00:10:19] Exactly. Yes.

[00:10:21] Yeah. We did that for a time and we discontinued it

[00:10:24] four or five years ago. It's been a while.

[00:10:27] Yeah, exactly. That's what I thought.

[00:10:28] Yeah, it's absolutely gotten a lot of attention. The history there is early days we were

[00:10:35] interested in muscle movements in the face as you can imagine why. That's a big part of how

[00:10:40] people express themselves, particularly for certain roles where you're looking at someone's

[00:10:45] a flight attendant or a customer service role and we want to see if they smile or those kinds

[00:10:50] of things could be related to their performance in that role. What we found through our research

[00:10:55] is that we've always seen language has the most predictive power, and what someone's face does

[00:11:02] aligns very closely with their language. It didn't actually add more value beyond the language.

[00:11:09] Also, these large language models have just gotten so much more powerful. The value we

[00:11:14] get from language is just better and better. At the same time, we had concerns over

[00:11:21] the visual aspect of what are you looking at in the face? If I move my eyebrow in a funny way,

[00:11:27] am I not going to get the job? People are concerned that there's something they can't

[00:11:30] control there. Of course, throughout the whole process we're testing the algorithms

[00:11:35] for bias. We're looking into a lot of things. There are issues around lighting or skin color

[00:11:43] that are challenging to address. At the end of the day, we just found it was more

[00:11:48] of a concern than it was really adding value. We did away with any non-verbal communication

[00:11:55] back in 2020. Just to clarify, right now, when you're doing

[00:12:02] any type of AI assessment on a candidate, is it based on language? Is it the words used? Do

[00:12:09] you look at accents? No, it is the transcript. The transcripts are fairly robust now. These

[00:12:15] systems are really accurate at transcribing people. The only way accent would come into play is if

[00:12:20] someone had a very thick accent, their transcription accuracy could be less, which is true for humans

[00:12:26] as well. We have to have ways of looking at thresholding or flagging for human review if we

[00:12:31] feel like someone's very hard to understand. At the end of the day, it is looking at

[00:12:35] the content. These new large language models are really good at getting at context and nuance.

[00:12:41] It's less about the actual words you chose or where you're from in the country or the world and

[00:12:47] the meaning there. I want to touch on standardization in hiring. There was a recent

[00:12:54] article, I think in Safety Mag that talked about AI and automation bringing much needed

[00:13:00] standardization to the hiring process. Quite honestly, I couldn't agree more.

[00:13:04] But would you mind just elaborating a little bit on how this standardization

[00:13:09] is going to help reduce bias and improve DE&I in hiring?

[00:13:15] Yeah, we've seen it with a lot of our customers. It's amazing how having a process where you're

[00:13:21] looking at things that are relevant to the job and nothing more just naturally gives you a

[00:13:26] boost in diversity. AI or not, having something that is highly standardized, we see this.

[00:13:33] That means that we're looking at something that's very job-related. You're applying

[00:13:37] to be a teller. We have you do an exercise where you're counting change or something like that.

[00:13:42] This is highly related to the job. We don't have any gut feelings or any judgments based on

[00:13:49] your name or where you're from or what school you went to or your GPA.

[00:13:53] You immediately get a boost from that, in that you're giving everyone the same chance.

[00:13:57] Further, when you do use AI, we've all seen the headlines of AI can be biased and of

[00:14:03] course it can. You have to be careful that it doesn't repeat the mistakes of the past.

[00:14:09] Like any powerful tool, you can use it for good or you can use it for harm.

[00:14:13] You've got to be careful, but you can actually mathematically tune the algorithm to ignore

[00:14:20] certain factors. An example would be in a video interview, you might have even training data

[00:14:26] that has bias against women. Well, there are things that are hinting towards being a woman.

[00:14:32] You talk about child care or something like that. With an algorithm, you can automatically penalize

[00:14:38] that feature so that it effectively ignores that word. If there's a word like that or a subject

[00:14:44] matter that differentiates men and women, the algorithm should be blind to that word.

[00:14:49] That's something you cannot do with humans: just tune out certain aspects

[00:14:54] that can lead to bias. There's a lot of promise there.
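The word-blinding idea Lindsey describes can be sketched in a few lines. This is purely illustrative: the data, feature names, and threshold are hypothetical, not HireVue's actual model.

```python
# Hypothetical sketch of blinding a linear scoring model to words whose
# usage differs between two groups (e.g. mentions of "childcare").
# Names, data, and threshold are illustrative, not HireVue's implementation.

def group_mean(rows, word):
    """Average count of `word` across a group's interview transcripts."""
    return sum(r["words"].get(word, 0) for r in rows) / len(rows)

def blind_features(weights, rows_a, rows_b, threshold=0.5):
    """Zero the weight of any word whose average usage differs between
    group A and group B by more than `threshold`, making the scoring
    model blind to that word."""
    blinded = dict(weights)
    for word in weights:
        if abs(group_mean(rows_a, word) - group_mean(rows_b, word)) > threshold:
            blinded[word] = 0.0
    return blinded

def score(weights, words):
    """Linear score: weighted sum of word counts."""
    return sum(weights.get(w, 0.0) * c for w, c in words.items())
```

In a real system the penalty would typically be applied during training rather than as a hard zero, but the effect is the same: a word that differentiates groups stops contributing to the score.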

[00:14:58] So is that built right into the product regardless of the client or is it up to the

[00:15:03] client to say, okay, when I am configuring my system or configuring my version of higher view

[00:15:12] or is this right across the product line? It's across the board and we published a

[00:15:17] peer-reviewed paper on this last year in the Journal of Applied Psychology.

[00:15:21] We build it into the optimization. So we're not just trying to make the algorithm predict

[00:15:28] an outcome accurately like your team orientation ability or something like that, but we also are

[00:15:35] penalizing it for having group differences in the outcome. So it's built in that it's

[00:15:41] optimizing those things at the same time. There is follow-up for the individual customer.

[00:15:48] So when we release that algorithm into the wild and people are using it,

[00:15:52] is there anything unexpected? Like now we're using it on a different population.

[00:15:57] Is there any bias that creeps in because of the way you're sourcing? Like you're sourcing

[00:16:01] from this particular college that has more people of color but this other college doesn't

[00:16:07] and that's a different population than we trained on. So we would want to check that

[00:16:11] regularly and perhaps mitigate further on that particular population.
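The joint optimization Lindsey describes, predicting the outcome while penalizing group score differences, can be written as a single loss. This is a minimal sketch; the penalty weight `lam` and the data are hypothetical, not HireVue's published method.

```python
# Illustrative penalized loss: prediction error plus a penalty on the
# difference in mean scores between two groups. `lam` trades accuracy
# against group parity; all values here are hypothetical.

def penalized_loss(predictions, targets, groups, lam=1.0):
    n = len(predictions)
    # Ordinary mean squared error on the predicted outcome.
    mse = sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n
    # Gap between the average scores of the two groups.
    a = [p for p, g in zip(predictions, groups) if g == "A"]
    b = [p for p, g in zip(predictions, groups) if g == "B"]
    gap = sum(a) / len(a) - sum(b) / len(b)
    return mse + lam * gap ** 2
```

An optimizer minimizing this loss is pushed toward models that are both accurate and give the two groups similar average scores.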

[00:16:17] So is that higher view doing the check or the client has to put up their hand and say,

[00:16:22] I need you to double check?

[00:16:25] No. Yeah. It is part of our statement of work that we do that at least annually.

[00:16:30] That's best practice in assessment in IO psychology. In the US, we have the New York

[00:16:35] City law now, which is the first AI and hiring law that requires we do third party audit

[00:16:43] annually, which is pretty much that. It's an adverse impact check with a third party every year.
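The adverse impact check Lindsey mentions is commonly based on the four-fifths (80%) rule from the US Uniform Guidelines: each group's selection rate should be at least 80% of the highest group's rate. A minimal sketch, with hypothetical data; real audits involve more than this single ratio.

```python
# Minimal four-fifths (80%) rule check: flag any group whose selection
# rate falls below 80% of the highest group's rate. Real adverse-impact
# audits use more statistics than this one ratio.

def four_fifths_check(rates):
    """rates: dict mapping group name -> selection rate (selected / applied).
    Returns the groups flagged for potential adverse impact."""
    top = max(rates.values())
    return [g for g, r in rates.items() if r < 0.8 * top]
```

For example, if group A is selected at a 50% rate, the cutoff is 40%, so a group selected at 35% would be flagged while one at 45% would not.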

[00:16:49] We've been doing that internally for a long time and we've done several third

[00:16:52] party audits as well, but that's an annual thing that we're doing going forward to comply

[00:16:57] with that. Obviously, if I'm a TA practitioner or an HR practitioner, there are a lot of concerns when

[00:17:02] it comes to AI tools. What happens if something goes wrong? What happens if the

[00:17:08] tool is biased or flawed in any way? It looks like the liability is going to fall on the employer

[00:17:15] or the end user that's leveraging it. So I'm just curious, what advice would you give to

[00:17:20] employers that are looking at these types of tools? Like, what should they be assessing? Should

[00:17:25] they look for peer-reviewed studies, like you guys are doing, or third-party audits? What should they

[00:17:29] look at? So AI regulations are in their infancy and people are trying to wrap their

[00:17:36] head around this and it's hard. I think they're going to need to be specific to the industry.

[00:17:40] Like when you say regulate AI, that's so broad and no one really knows what it means.

[00:17:46] Everyone's saying here are the guidelines, the framework, and they're all saying similar

[00:17:50] things but they're not specific enough. So as a data scientist, you think about fairness or

[00:17:56] ethics and what that means, and that can be really hard to define. If you work

[00:18:02] in a social media company and you're trying to figure out what content to recommend to someone,

[00:18:07] no one has even defined what fairness means or what you should even think about. Really,

[00:18:13] that's like a completely new thing. Hiring is not a new thing. So I think we have to

[00:18:19] communicate that to people sometimes. Like this is not totally new. We're using new tools

[00:18:23] to do something that people have been doing for a long time and there's a lot of established

[00:18:29] stuff here. So we're building on decades of science and best practices, but we're using

[00:18:35] new tools. We still have to follow all the rules and that's something the federal government has

[00:18:40] come out and said, hey, if you're using AI in hiring in the US, you still have to follow the

[00:18:45] Americans with Disabilities Act. It's like, well, no shit. Of course you do. All these

[00:18:50] things are kind of obvious, but some people have reacted like these AI systems are just

[00:18:55] doing whatever they want, and of course that shouldn't be true.

[00:18:59] You should look for a vendor that is an expert in hiring and knows the space, knows HR,

[00:19:04] knows TA, and follows all the classic practices: we do our adverse impact checks at this frequency.

[00:19:11] This is what we look for. These are psychologists that are trained in the industry to know

[00:19:16] this is what it looks like when we have a problem. How do we build up the evidence

[00:19:21] that this is working? It's all this science that's been around for a really long time,

[00:19:26] but you've just brought in new tools of assessing people.

[00:19:30] There's a ton of noise when it comes to this space, AI and HR tech,

[00:19:36] and as a practitioner, you're getting hit from everywhere and you don't know what's real and

[00:19:40] what's not. Has that been a challenge for HireVue because you're competing with a lot

[00:19:46] of startups that might not have a team of data scientists. They might just be leveraging

[00:19:51] a ChatGPT plugin to do assessments on candidates, which is doable, though I don't recommend it.

[00:19:57] But how does HireVue counter that noise in the marketplace?

[00:20:01] Yeah, and it's going to get more interesting to your point. ChatGPT can

[00:20:05] do a lot of things. It's really cool. But how do you put guardrails on it? How do you

[00:20:11] make sure you have science behind it? ChatGPT wasn't trained to be accurate or truthful,

[00:20:17] necessarily. Yeah, I think we will see in every industry more startups come online that

[00:20:23] are doing cool things really quickly. We've been set up really nicely through

[00:20:27] being a pioneer of using AI in this space. We've had to get through a lot of scrutiny

[00:20:33] and it's made us stronger. We've done third-party audits. We have an explainability

[00:20:38] statement. We've been pushed to be more transparent and that has been really good

[00:20:42] for business. It's been good for us. I was at Unleash a couple of weeks ago or HR Tech in the

[00:20:48] fall. Everyone's talking about skills and they say we infer skills. And I'm like,

[00:20:52] how well do you infer skills? I mean, do we have an accuracy on that inference or is it

[00:20:57] just check the box? And ChatGPT is really good at inferring skills by the way. So

[00:21:02] can we all do that now or are we even going to discuss how accurate we are or how good we

[00:21:08] are at inferring skills? It comes back to these questions of scientific rigor. And so I think

[00:21:13] as you're looking at vendors asking for that, if they can't explain what they're doing,

[00:21:19] with any technical depth, then I would be concerned. And even bringing in your own

[00:21:23] people from your company that might be more science-y or math-y to ask questions,

[00:21:28] even if they're not in HR, dig at it a little bit. I think can be helpful as well.

[00:21:32] One of the things that we noticed going to HR Tech in 2022, everyone had D, E and I everywhere.

[00:21:39] Right? And then after ChatGPT came out at Unleash the year after, it's like they put

[00:21:44] a sticker of AI over the D and I, and that's what their tool is doing now. And we come

[00:21:50] across exactly that situation when you're asking how they're leveraging AI on the floor.

[00:21:56] The majority don't know. And when I say the majority, it's probably a salesperson they hired,

[00:22:00] and obviously not the founder. Maybe it's not fair, but you've got to explain exactly how the

[00:22:05] tool works. There was a lot of buzz earlier this year. There was a book that came out

[00:22:11] called The Algorithm. And boy, did it ever pick on HireVue. The author is Hilke Schellmann.

[00:22:19] She was very critical of HireVue in that book. So I would love to give you the opportunity to

[00:22:27] counter or maybe respond on what HireVue does to ensure that the data used to train its algorithms

[00:22:42] is diverse and unbiased. Yeah, I talked to Hilke; she quoted me in the book. I think

[00:22:42] there's definitely, as I mentioned on the visual aspect and the nonverbal,

[00:22:48] there's a lot of outdated information in the book. So there's a lot of focus on things that

[00:22:52] we haven't done for a long time, which I think is unfair given where we're at now.

[00:22:57] She's a very smart person and I respect her a lot as a journalist. I think that

[00:23:00] there's a lot of good questions that she brings up that we've grappled with for years.

[00:23:05] And having those conversations pushed us in the direction of being more transparent,

[00:23:09] having better practices. I think our practices are very good. We've set the bar in the industry

[00:23:16] for sure, but because we are a pioneer, we open ourselves up to be scrutinized. When we build our

[00:23:23] training sets, when we train our algorithms, we control a lot of that. And when we started

[00:23:29] doing this, we didn't necessarily, right? We would get our data from our customers and we

[00:23:33] realized pretty early on that we want to have a lot of control for this exact reason. The

[00:23:38] data that you use is so important. If we're trusting our customers, how are they measuring

[00:23:43] performance? Do we know that they are not biased versus, hey, we're going to have trained evaluators

[00:23:49] that evaluate interviews that we know have some background in this and we're giving them the

[00:23:54] exact rubric. We're having multiple reviewers evaluate every single answer. We are comparing,

[00:24:01] we're looking for discrepancies, we're discussing, we're looking at the diversity

[00:24:05] of that group and the diversity of the training data. And that is all published in our explainability

[00:24:11] statement. Our explainability statement, the short version is 30 pages long, but it's open to anyone

[00:24:16] who's interested to go through. We're just updating it right now, but it's a living document.

[00:24:20] We'll change it as we get more data, as we improve our practices or anything that we change

[00:24:26] in how we train algorithms, et cetera. But it goes through all of that. So I'm really proud

[00:24:30] of the work that we've done. Like I said, we've learned through time and dropping the video

[00:24:37] aspect of the evaluation was one of those ways where it's like, hey, we saw that this was not

[00:24:43] worth the concern that it was causing and we're willing to admit that we could do it a better

[00:24:47] way. And we did. We see it as a journey that we're on. And there was a lot of focus in the

[00:24:51] book on where we were four or five years ago, in my opinion. So. Thank you. Thank you, Lindsey.

[00:24:57] Because as I was reading it, I was thinking, wait a minute. I was pretty sure that you had sunset

[00:25:04] that part of the product. Thank you for clearing that up. Serge, over to you.

[00:25:08] Yeah, absolutely. And I'm glad it was brought up because we read the book,

[00:25:12] we had Hilke on the podcast as well. And I think she called out some really good things

[00:25:17] in the sense that, hey, we've got to start thinking about this, but I agree it was unfair

[00:25:21] to a lot of vendors that were named in the book as far as what their actual practices are.

[00:25:25] So thanks for clarifying that. Now let's look to the future. We're in 2024 and we're moving

[00:25:32] really quickly. It seems like in the last couple of years, technology has advanced at a pace

[00:25:35] we've never seen. And that's always been the case, but AI has just sped that up.

[00:25:41] So what is the world of work and recruitment going to look like in 2030? Are we going

[00:25:47] to have robots interviewing robots? Yeah. I always think about that. Are

[00:25:53] we just going to have robots sending our emails for us and other robots reading those emails?

[00:25:57] And then we're not even talking to each other anymore. No, I love this question. I think you're

[00:26:01] right. We're on this exponential curve. So we should expect the unexpected and we should expect

[00:26:06] to be blown away several times in our lifetime. These big things like the printing press or

[00:26:12] antibiotics or the internet. Like maybe we're in one of those moments right now. It's hard to

[00:26:17] say when you're in it, but it feels big. And there's a couple of things for the future of

[00:26:21] work. Generally, I'm pretty optimistic about us as humankind being able to adapt like we always

[00:26:29] have. There will be a lot of automation. There will be a lot of jobs that go away,

[00:26:34] but there will be more jobs that are created. I know a lot of people are worried that this

[00:26:38] will be the final time and that it won't happen again, but history has proven that

[00:26:42] typically we see things shift in a way we don't yet understand. And we can't really predict

[00:26:49] yet what that will look like. We're already seeing it. We're seeing layoffs, but we're seeing a lot

[00:26:53] of new job creation as well. Different types of jobs that we just have to be more agile. We

[00:26:59] have to understand skills and jobs and people better. And I think AI is going to really help

[00:27:05] us do that. So particularly in the world of hiring, I'm really excited about AI just helping

[00:27:12] people transfer their skills to different roles. And when you're hiring for a job that no

[00:27:18] one's ever had before, you can't really do it the old way of looking at the resume and saying,

[00:27:23] I'm just going to hire someone who's had this exact job before because no one's had that job

[00:27:27] before. So you have to say what are the jobs that are close to this job? And it might be

[00:27:32] something you didn't expect, but this job is similar to a mail carrier or something. Maybe

[00:27:38] you didn't see that, but AI can tell us this has a lot of the same skills

[00:27:42] and that's the people we're going to want to move over. And I'm excited for candidates

[00:27:46] because I think it means a lot more people are going to have options and they're going to be in

[00:27:51] jobs that they really like. And hopefully applying for a job doesn't mean that you're

[00:27:55] one by one entering different funnels where only one person comes out the bottom in this kind of

[00:28:01] requisition based model. And there's more of a multi-dimensional space where opportunities are

[00:28:07] available to you. And you own a lot of your own data, including your assessment data,

[00:28:12] more robust data than just a resume or a LinkedIn profile.

[00:28:15] I love it. One last question and I'm curious for HireVue itself,

[00:28:21] is there anything exciting coming up in the next year? Like what is in the roadmap for

[00:28:25] HireVue as far as new products, new innovations?

[00:28:30] Now I'm going to say something I'm not supposed to commit to yet. Let me think.

[00:28:35] That's fine. No, no, no. Don't worry about it.

[00:28:38] My product leaders would be like, wait. I can't put a date on that. No, kind of what I was talking about,

[00:28:44] we are thinking a lot about talent acquisition and talent management and how those things overlap.

[00:28:51] And as a person who's not from this space, I think a lot about why are those things separate

[00:28:55] or why do you lose a candidate? Like they apply for a job and then if they didn't get that

[00:29:01] job, they're just gone. A lot of companies will say, hey, we want to keep you in mind if

[00:29:05] something else comes up and they have no process to do that. So in our system,

[00:29:10] we're starting to think more about, hey, we have this candidate. They just applied for a job at

[00:29:15] your company. You know they're looking for a job. They want to work there. And here you are

[00:29:19] posting a really similar role or even basically the same role again two weeks later. They've

[00:29:26] already taken an assessment. Here they are and they come back in. So I think that's

[00:29:31] just a no brainer. We're thinking a lot about that and it ties into my future of hiring idea

[00:29:37] around being resurfaced for opportunities rather than applying for hundreds of jobs

[00:29:42] and getting denied every time. Perfect. We're going to get your chief product officer on the

[00:29:48] show really soon and we'll question him on that. We have a new one and she starts tomorrow.

[00:29:52] So we just throw her right in. Throw her right in. Perfect. Well, I really appreciate you coming

[00:29:58] on the show. If anyone wants to find out more about Lindsay, what's the best way to get a hold

[00:30:03] of you? Yeah, you can find me on LinkedIn. So Lindsey Zuloaga. And for HireVue, obviously

[00:30:11] hirevue.com is probably the best way to find out more about HireVue. Perfect. Thank you so

[00:30:17] much. We really appreciate this amazing information. Can't wait to have you on the show again.

[00:30:22] Nice to meet you, Lindsey. Thank you.

[00:30:34] Shelley, let's face it, texting candidates is the easiest way to hire quicker today,

[00:30:40] but your cell phone doesn't connect to your ATS. You're sharing your personal number with

[00:30:44] strangers. That's pretty scary, right, Shelley? And it's not even legally compliant.

[00:30:50] This is where our friends at Rectex come in. They've created simple yet powerful text recruiting

[00:30:55] software that works with your ATS. Plus, it's designed by recruiters for recruiters,

[00:31:01] so you know it works. To learn more and book a demo, visit www.rectxt.com,

[00:31:11] mention the recruitment flex, and get 10% off annual plans.

[00:31:14] Do you love news about LinkedIn, Indeed, Google, and just about every other recruitment tech

[00:31:20] company out there? Hell yeah. I'm Chad. I'm Cheese. We're the Chad and Cheese Podcast.

[00:31:26] All the latest recruiting news and insights are on our show. Dripping in snark and attitude.

[00:31:32] Subscribe today wherever you listen to your podcasts. We out.