Something’s been bugging me about how new devs work, and I need to talk about it. We’re at this weird inflection point in software development. Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They’re shipping code faster than ever. But when I dig deeper into their understanding of what they’re shipping? That’s where things get concerning. Sure, the code works, but ask why it works that way instead of another way? Crickets. Ask about edge cases? Blank stares. The foundational knowledge that used to come from struggling through problems is just… missing. We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for it later.
We do two “code challenges”:

1. Very simple; many are done in 5 minutes. This one just weeds out the incompetent applicants, and 90% of the code is already written (i.e. it simulates working in an existing codebase).
2. Ambiguous requirements; the point is to ask questions, and we actually have different branches depending on the assumptions they make (to challenge their assumptions), i.e. it simulates building a solution with the product team.
The first is in the first round, the second in the technical interview. Neither is difficult, and we provide any equations they’ll need.
It’s much more important that they can reason about requirements than that they can code something quickly: life won’t hand you firm requirements, and we don’t want a ton of back-and-forth with the product team if we can avoid it, so we need to catch most of that at the start.
In short, we’re looking for actual software engineers, not code monkeys.
Those are good approaches. I would note that the “90% is written” one is mostly about code comprehension, not writing (as in: actually architecting something), and reasoning about requirements is something you should, IMO, learn as a junior; it’s not a prerequisite. It takes a lot of experience, and often domain knowledge new candidates have no chance of having. But then, throwing such tasks at them and judging them by their approach, not the end result, should be fair.
The main question I ask myself, in general, is “can this person look at code from different angles?” Somewhat like rotating a cube in your mind’s eye, if you get what I mean. It might even be that they’re not good at it in code yet, but they demonstrate the ability when talking about coffee making. People who don’t get lost when you explain that cash registers sharing one common queue give better overall latency than cash registers with individual queues. Just as a carpenter would ask someone “do you like working with your hands?”, the question here is “do you like to rotate implication structures in your mind?”
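As an aside, the shared-queue claim is easy to check for yourself. Here’s a minimal, hypothetical sketch (not from this thread): it simulates both setups assuming Poisson arrivals and exponential service times, and every name and parameter value is made up purely for illustration.

```python
import random

def avg_wait(num_customers=50_000, num_registers=3,
             arrival_rate=2.5, service_rate=1.0, shared=True, seed=1):
    """Average customer wait time at a set of cash registers.

    shared=True  -> one common FIFO queue feeding every register
    shared=False -> each customer commits to a randomly chosen
                    register's own line on arrival
    Assumes Poisson arrivals and exponential service times, with
    arrival_rate < num_registers * service_rate (stable system).
    """
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * num_registers   # when each register next becomes free
    total_wait = 0.0

    for _ in range(num_customers):
        t += rng.expovariate(arrival_rate)       # next arrival time
        service = rng.expovariate(service_rate)  # this customer's service time
        if shared:
            # head of the common queue takes whichever register frees up first
            i = min(range(num_registers), key=free_at.__getitem__)
        else:
            # customer picks a line up front and stays in it
            i = rng.randrange(num_registers)
        start = max(t, free_at[i])
        total_wait += start - t
        free_at[i] = start + service

    return total_wait / num_customers

if __name__ == "__main__":
    print("common queue, avg wait:     ", round(avg_wait(shared=True), 2))
    print("individual queues, avg wait:", round(avg_wait(shared=False), 2))
```

With these made-up numbers the individual lines come out a few times worse on average wait, roughly what queueing theory (a shared M/M/c queue vs. separate M/M/1 queues) predicts, because a register can sit idle while customers are stuck behind a slow order in another line.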
“judging them by their approach, not the end result, should be fair”
Yup, that’s the approach. It’s okay if they don’t finish; I want to know how they approach the problem. We absolutely adjust our decision based on the role.
If they can extend existing code and design a new system (with minimal new code) and ask the right questions, we can work with them.