One peak London summer, men who had been eating ice cream started drowning in large numbers.
Because so many of the men who drowned had also eaten ice cream, it was concluded that eating ice cream led to drowning 🤷🏼‍♀️
This sounded absurd to the researchers investigating the curious link between ice cream and drowning. On closer investigation, however, it became evident that this was a classic case of a confounding variable at play: the actual factor driving both the increased ice cream consumption and the rise in drowning incidents was the hot summer weather.
People were simply more likely to eat ice cream and to swim during the summer, hence the correlation. This illustrates how confounding variables can mislead us in everyday life.
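For the statistically inclined, here's a minimal simulation sketch of the effect. Everything in it is an assumption for illustration: the variable names, the coefficients, and the noise levels are all made up. It shows how a hidden common cause manufactures a correlation between two outcomes that never touch each other, and how controlling for that cause makes the 'link' vanish:

```python
# A minimal sketch with made-up numbers: temperature (the confounder)
# drives both outcomes; the outcomes never influence each other.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

# Hidden common cause: daily summer temperature (degrees Celsius).
temperature = rng.normal(loc=25, scale=5, size=n)

# Both outcomes depend on temperature plus independent noise.
ice_cream_sales = 10.0 * temperature + rng.normal(0, 20, size=n)
drownings = 0.5 * temperature + rng.normal(0, 2, size=n)

# The raw correlation looks strong even with no causal link between them.
print(np.corrcoef(ice_cream_sales, drownings)[0, 1])

def residuals(y: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Remove the linear effect of x from y."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Control for the confounder: correlate what's left after subtracting
# temperature's effect from each outcome. The 'link' all but disappears.
print(np.corrcoef(residuals(ice_cream_sales, temperature),
                  residuals(drownings, temperature))[0, 1])
```

Running this prints a raw correlation of roughly 0.7, and a residual correlation near zero once temperature is accounted for. Interview performance and job performance can be related in exactly this way, with practice at interviewing playing the role of the weather.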
A more relevant example shows up in hiring, where a candidate excels in interviews but performs poorly on the job. Here, the 'experience of attempting interviews' acts as the confounding variable: candidates who frequently attend interviews become adept at navigating them, which doesn't necessarily reflect their actual job performance.
These 'interview hackers' develop strong interviewing skills that overshadow their actual professional abilities.
Assessing quality candidates has become particularly hard in a post-GPT world, where LLMs are getting increasingly good at drafting product strategies, writing PRDs, and thinking critically through scenarios that are usually led end-to-end by product managers.
I recently read about applications where AI provides real-time answers to interviewers' questions, essentially rigging the system.
I faced a similar situation while drafting an assignment meant to assess a product manager's skills. It's highly likely that a candidate is using some version of ChatGPT or Claude to help draft better answers. How, then, do we cut through the noise and understand their actual thinking process?
To answer this question for myself, I've been documenting internal meta-notes to help me do a better job of distinguishing genuine candidates from interview hackers. These meta-notes are oriented towards product managers, though some of them also apply to other domains.
Breaking the fourth wall
I subtly probe candidates to go deeper. Occasionally, they break the 'fourth wall' and offer a spiky point of view.
Say, for example, you ask the candidate: 'Tell me how you prioritise your time, with an example.'
The candidate will usually start with a project they've taken up, the frameworks they used, and how they approached prioritisation with them. This is where most conversations go, and perhaps that's it.
But some candidates reflect more critically on their own process, and even talk about scenarios where that specific prioritisation framework wouldn't have worked. In reality, there are no blanket solutions.
Sometimes, through this exercise, a spiky point of view emerges: one rooted in their experience, yet one others can still disagree with. It captures attention because it stands out in a sea of sameness, and it provides a valuable signal that the candidate carries lessons rooted in practice.
Trees and Branches
I've been able to identify top-notch talent by asking this: 'What was the hardest problem you've encountered, and how did you approach it?' While the candidate narrates, I use the metaphor of a tree to weave questions around their story.
When a candidate goes deep into one particular topic (say, metrics), I zoom out a bit and ask about outcomes. I don't need to know what each leaf looks like; I want to see the overall outline of the tree, the branches, and the twigs.
Whenever the candidate goes too deep, I nudge them to go a bit broader; when they go too broad, I nudge them to go one level deeper. While doing this, I also check whether the candidate takes a holistic approach to problem solving. For example, if they're building an electronic health record system, how are they thinking about the legal, data privacy, and ethical implications of collecting patient phone numbers?
Listening to respond
To test listening skills, I give candidates constructive criticism at the end of the interview and watch how they respond. If they listen with an intention to learn, I take that as a good sign. If they listen in order to respond, or worse, to defend themselves, that's a red flag.
Narratives on lived experiences
I also frame questions slightly differently. Instead of asking 'How should a product manager involve stakeholders?', I ask 'Describe a challenging situation involving difficult stakeholders. How did you navigate it?'
I've seen the answers shift from theory to lived experience. Lived experience is very difficult to fake, and the more interviews you conduct, the better you get at spotting fabricated narratives; they rarely pass the smell test.
Stretching to extremes
Another interview technique I recently adopted involves stretching an idea to its extremes. When a candidate describes a decision they previously made, I extend the scenario to extreme conditions. For instance:
- What if the data is insufficient?
- What if there are no insights from interviews on this process?
- What if there is no clear roadmap?
By posing these hypothetical extremes, we can gain insight into the candidate's internal decision-making model. This method mirrors a Socratic dialogue, primarily driven by 'What if...' questions.
The effectiveness of this technique lies in its ability to move beyond conventional responses. Many candidates are familiar with standard practices and offer predictable answers when asked about them. Critical thinking, however, often emerges in response to extreme situations, outliers, and edge cases.
This approach helps identify candidates who can think critically and adaptively in unconventional scenarios, and it separates the 'interview hackers' from the mix.