The Problem with Using HackerRank as a Programmer Screening Tool

I think using an online judge such as HackerRank to screen software engineering candidates inherently comes with significant drawbacks. Namely:

  • it expects input and output in a necessarily contrived format
  • it involves no communication and no creativity

Let’s look at the typical flow for this kind of technical screen, based on an actual example I went through on HackerRank (from memory). Note: this was part of an internal evaluation of HackerRank for potential inclusion in our hiring process, not part of an actual hiring test.

Example:

  1. you get a link to a timed challenge from the hiring company
  2. as soon as you open the challenge, a 90-minute countdown starts
  3. you are presented with a fairly detailed problem statement describing the input format, how you are supposed to read it, and the output the system expects:

    Log files contain one log entry per line, where each entry contains the following fields, separated by spaces:

     hostname http_status request_size response_size response_time_ms
    

    Your solution should read the filename from stdin and write its output to a file named like the input file, prefixed with “records_”. The output should contain each unique hostname encountered in the input, followed by a space and the number of times that hostname was encountered. For instance, if you read “log_2017_dec_19.txt” from stdin and that file has the following content:

     zeus 200 1532 501 31
     hermes 400 6072 - 25
     zeus 200 1550 32
    

    Then we expect output in a file named “records_log_2017_dec_19.txt” with the following content:

     zeus 2
     hermes 1
    
  4. you get to submit a solution in any language supported by HackerRank, using the online editor. In this case the selection seemed to cover the most common languages (Bash, Java, C, Python, Ruby, Go, JavaScript…).
  5. each time you submit a solution, it is automatically validated against a set of inputs and failures are reported to you. You get to resubmit until you’re happy with the result or the time is up.

Seeing this problem statement, I was initially quite stoked, because this is literally something I do every day. Fundamentally, this is just asking for:

cut -d' ' -f 1 | sort | uniq -c

… but the specification throws a few complications our way, so this won’t actually pass the test. As a result, something that should take 30 seconds ends up taking 30 minutes, just to meet the weird requirements of:

  • reading the filename on stdin (as opposed to say, receiving the filename as an argument or the file contents on stdin)
  • producing output in a file called records_$filename
  • having output in the form hostname number, which happens to be the reverse of what uniq produces. So I had to spend a chunk of time writing a sed expression to swap the two columns and strip the extra spaces, ugh. At this point, figuring out the proper sed incantation was starting to take too much time, but I was committed to doing this in as close to a one-liner as I could manage; it felt too late to rewrite it in something more straightforward like Python.

Not that it matters, but if you’re curious, here is what my submission ended up looking like (the sed expression is almost certainly wrong in some way):

read filename
cut -d' ' -f 1 $filename | sort | uniq -c | sed -E 's/([0-9]+) ([^ ]+)/\2 \1/; s/ *//' > records_$filename
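
Since I claimed Python would have been more straightforward, here is a rough sketch of what that might have looked like. I did not write or submit this during the test, so treat it as an untested illustration based on my recollection of the spec:

import sys
from collections import Counter

# The spec wants the filename on stdin, not the file contents.
filename = sys.stdin.readline().strip()

# Count occurrences of the first field (the hostname) on each line.
counts = Counter()
with open(filename) as logfile:
    for line in logfile:
        fields = line.split()
        if fields:
            counts[fields[0]] += 1

# Write "hostname count" lines to the output file prefixed with "records_".
with open("records_" + filename, "w") as out:
    for hostname, count in counts.items():
        out.write("{} {}\n".format(hostname, count))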

There are a few things that seem appealing about screening candidates this way:

  • candidates do get tested on coding skills
  • the system does a few things for the hiring manager, for instance reporting test failures and checking for plagiarism
  • the hard time limit means the candidate cannot sink too much time into it (which can be a problem with more free-form take-home problems)
  • the candidate gets to work in a more “natural environment” than a whiteboard, on her own laptop and with Google/Stack Overflow available

Unfortunately, I believe the pros are outweighed by the cons:

  • the “natural environment” argument is countered by the fact that an online editor is not quite your natural development environment. Specifically, HackerRank disables copy/paste on its page, which makes testing in a proper shell/editor needlessly hard. I presume they do this to keep people from scraping interview questions, but it just seems arbitrary to me.
  • iterating on a solution is slow: uploading it, running it, and getting the test results takes an order of magnitude longer in the online tool than it would locally.
  • the countdown timer was stressing me out the whole time
  • the candidate cannot ask questions at all. Asking questions is a really important skill on the job, and it is completely outside the scope of an online judge.
  • there is zero creativity or decision-making involved in producing the output. You quite literally code to a rigid, fully specified problem, which has never happened to me on the job. Ever.

You really have to wonder: what are you testing with this process? If you purely want to weed out candidates who can’t code, I guess this might work. But I fail to see how working on a generic problem in a contrived web interface, with zero communication and zero creativity, can be a good predictor of software engineering success.


Written on December 19, 2017