Picking programming interview questions

April 19, 2020. Filed under: python, staff-plus, interviewing

Someone recently sent me a note asking whether their internal process for interviewing Staff engineers was a good one. They were particularly concerned that they had run into some Staff-plus engineering candidates who struggled to write code to find palindromes or reverse an array.

There's an oral tradition about programmers who simply can't program, an idea Imran Ghory captured back in 2007 when he coined FizzBuzz problems, along with the most classic FizzBuzz question:

Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.

A particularly mundane solution in Python 3 might be:

```python
for i in range(1, 101):
    div_3 = i % 3 == 0
    div_5 = i % 5 == 0
    if div_3 and div_5:
        print("FizzBuzz")
    elif div_5:
        print("Buzz")
    elif div_3:
        print("Fizz")
    else:
        print(str(i))
```

To be honest, I've never been willing to give a candidate a FizzBuzz problem. At best it feels dismissive, and at worst it's genuinely disrespectful to the candidate. I've also never interviewed a software engineer who simply couldn't write a program like this, although over my career I've certainly interviewed candidates for adjacent roles, like systems administration, who couldn't.

When interviewing Staff-plus engineers, I believe it's important to verify that they can still thoughtfully write software, while trying your best not to get hung up on signals that don't mean much. I've rarely worked at a company whose constraint was folks with deep computer science or math skills. That isn't because those skills are worthless, but because you very likely already have them in spades. On the few occasions where those skills were conspicuously missing, what we needed was a world-class expert in a particular domain (distributed databases, compilers, and so on), rather than something general.

If you are hiring a world-class expert, then by all means work hard to evaluate them with hard problems customized to their specific specialty. If that specialty happens to be some aspect of computer science, then drill them on it! For most senior folks, though, I've found that direct exercises in computer science aren't particularly good evaluations, and that the better approach is to find realistic problems that touch on some aspect of the domain without centering on it.

When Digg was trying to get acquired a few years back, we did a bunch of phone screens, and I remember one of them was writing a function that would return the start and stop positions of each mention and hashtag, so that you could render them differently on a web page versus in a mobile app versus in a text message.

For example, you might have a tweet like:

Hey @lethain, did you know about a #typo on page 24 of your book?

Then the output would be something like:

[ {kind: "mention", name: "lethain", start: 5, end: 7}, ...]

I like this problem a lot, because it's a real, honest problem that you'd run into in real life. You have to understand the problem, understand the goal, and then solve it. There are also some interesting edge cases to consider, such as usernames not including punctuation, and so on.

At the time I didn't have much experience writing tokenizers, so I don't really remember what I did, but today I'd probably try to solve it along these lines, writing something like:

```python
TXT = "Hey @lethain, did you know about a #typo on page 24 of your book?"

WORD = 'word'
MENTION = 'mention'
HASH = 'hash'
HASH_START = '#'
MENTION_START = '@'
WHITESPACE_START = ' '
PUNCTUATION = '!?,;:'


def make_token(kind, txt, start, end):
    return (kind, txt, start, end)


def tokenize(txt):
    tokens = []
    acc = ""
    start = 0
    kind = WORD
    for i, ch in enumerate(txt):
        if ch in WHITESPACE_START or ch in PUNCTUATION:
            if acc:
                token = make_token(kind, acc, start, i)
                tokens.append(token)
                acc = ""
            start = i
            kind = WORD
        elif acc == "" and ch == HASH_START:
            kind = HASH
        elif acc == "" and ch == MENTION_START:
            kind = MENTION
        else:
            acc += ch
    if len(acc) > 0:
        token = make_token(kind, acc, start, len(txt))
        tokens.append(token)
    return tokens


print(tokenize(TXT))
```

There's a lot to like about this problem: it's real, it's easy to explain, it requires almost no scaffolding or boilerplate to get started, and it doesn't require knowing many libraries or built-in methods.

On the other hand, there are a few things that make this a hard problem to give candidates. First, tokenizing text is a fairly specific task, and if you've done it a handful of times in your career, you'll have an easier time than someone who never has. When I was first asked this question, I had never tokenized text; since then I've gotten a bit more experience from writing the systems library. Good interviews test problem solving, not specific knowledge, and in practice only the very strongest engineers I've interviewed would ever end up writing a tokenizer in their actual work. Even if they needed some complex tokenization, I'd expect them to probably use a tool like ANTLR instead of writing their own.
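For a problem scoped this narrowly, a regular expression is probably the shortcut most candidates would reach for before a hand-rolled tokenizer. As a minimal sketch (the pattern below is just one assumption about the rules, and it counts the leading @ or # as part of the span, which is itself a choice worth discussing):

```python
import re

# \w+ encodes the edge case that usernames and hashtags don't include punctuation.
ENTITY = re.compile(r'([@#])(\w+)')
KINDS = {'@': 'mention', '#': 'hashtag'}


def extract_entities(txt):
    # finditer gives us start/end offsets directly, so each match maps
    # straight onto the output shape the problem asks for.
    return [
        {
            'kind': KINDS[match.group(1)],
            'name': match.group(2),
            'start': match.start(),
            'end': match.end(),
        }
        for match in ENTITY.finditer(txt)
    ]


print(extract_entities("Hey @lethain, did you know about a #typo on page 24 of your book?"))
# => [{'kind': 'mention', 'name': 'lethain', 'start': 4, 'end': 12},
#     {'kind': 'hashtag', 'name': 'typo', 'start': 35, 'end': 40}]
```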

The other concern I have with this specific problem is that it is the sort of problem that can be fairly easy while all the pieces are fitting into your head, but once you make one mental slip then all the pieces come crumbling down around you. I confused myself a bit writing this code alone in a room, and could easily imagine getting stuck during a timed interview with someone watching you.

At Calm, we've historically relied on having a single staging environment where software was tested against a complete environment, but recently the team rolled out multiple staging environments. This is a major improvement, and opened up an interesting question: how should we assign staging environments to people?

The good enough solution is to assign one staging environment to each team, but the theoretically ideal solution might be having a queue of folks who want to use an environment, and then assigning them across the pool as environments become available.

I was thinking that might be an interesting interview problem. It's a fairly real problem, doesn't require much backstory to answer, and it avoids the mental load or context-specific experience of something like tokenizing text.

```python
POOL = ['a', 'b', 'c', 'd']
ASSIGNMENTS = {key: None for key in POOL}
WAITING = []


def status():
    txt = "Status\n"
    for key in POOL:
        val = ASSIGNMENTS[key]
        val_txt = val if val else ""
        txt += "\t" + key + ": " + val_txt + "\n"
    txt += "\nwaiting: %s" % (WAITING,)
    print(txt)


def queue(name):
    for key, val in ASSIGNMENTS.items():
        if val is None:
            ASSIGNMENTS[key] = name
            return
    WAITING.append(name)


def pop(name):
    for key, val in ASSIGNMENTS.items():
        if val == name:
            first = None
            if len(WAITING) > 0:
                first = WAITING.pop(0)
            ASSIGNMENTS[key] = first
            return


ops = [
    (queue, "will"),
    (pop, "will"),
    (queue, "will"),
    (queue, "jill"),
    (queue, "bill"),
    (queue, "phil"),
    (queue, "chill"),
    (queue, "quill"),
    (queue, "fill"),
    (pop, "chill"),
    (pop, "bill"),
    (pop, "chill"),
]

status()
for cmd, val in ops:
    cmd(val)
    status()
```

This is a problem where it's easy to write something that works, and to watch someone debug the issues they run into along the way. Then once someone writes an initial solution, you can imagine a bunch of ways to keep adding complexity that require them to refactor their code. For example, you could add a maximum allowed time to use an environment, after which you'd automatically get removed, and so on.
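As a rough sketch of how that lease extension might look (the MAX_AGE constant, the timestamps, and the expire helper are assumptions about one possible design, not part of the original exercise), you could store the assignment time alongside each name and recycle expired leases before handing out new ones:

```python
import time

POOL = ['a', 'b', 'c', 'd']
ASSIGNMENTS = {key: None for key in POOL}  # key -> (name, assigned_at) or None
WAITING = []
MAX_AGE = 60 * 60  # hypothetical one-hour lease on an environment


def expire(now):
    # Free any environment whose lease has run out, handing it to the next waiter.
    for key, val in ASSIGNMENTS.items():
        if val is not None and now - val[1] > MAX_AGE:
            ASSIGNMENTS[key] = (WAITING.pop(0), now) if WAITING else None


def queue(name):
    now = time.time()
    expire(now)
    for key, val in ASSIGNMENTS.items():
        if val is None:
            ASSIGNMENTS[key] = (name, now)
            return
    WAITING.append(name)
```

The interesting part is that the change ripples into status and pop as well, which is exactly the kind of evolution the problem is meant to force.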

Requirements for a good problem

Putting all of this together, I think the problems that work best for evaluating the programming ability of senior and Staff-plus candidates are ones that:

  1. Support simple initial solutions and compounding requirements
  2. Are solvable with a few dozen lines of code
  3. Require algorithmic thinking but not knowledge of a specific algorithm
  4. Don't require juggling numerous variables in your head
  5. Support debugging and evolving the code
  6. Aren't domain specific in a way that advantages arbitrary experience
  7. Require writing actual code in a language of their choice
  8. Are done in a real code editor

I'm sure there are other criteria that are important, but generally I think those are a good starting point.


As a final warning against mathematical problems, I still fondly remember when, in my younger days, I thought it would make sense to give one of the easier but not too easy Project Euler problems to a candidate with a math background, which backfired when they immediately produced the closed-form equation rather than writing code to compute it.
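To make the contrast concrete, here's the kind of shortcut I mean, using Project Euler's first problem purely as an illustration rather than the one I actually picked: summing the multiples of 3 or 5 below 1,000 collapses into the arithmetic-series formula, no loop required.

```python
def sum_of_multiples(d, limit):
    # Sum of d, 2d, 3d, ... strictly below limit, via d * m * (m + 1) / 2.
    m = (limit - 1) // d
    return d * m * (m + 1) // 2


# Multiples of 3 or 5 below 1,000; subtract multiples of 15 to undo the double count.
print(sum_of_multiples(3, 1000) + sum_of_multiples(5, 1000) - sum_of_multiples(15, 1000))
# => 233168
```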

Just as some folks will find the problems you select to be artificially hard if you pick the wrong sorts, you'll also find other folks who'll find them artificially easy. Picking general problems that start small and evolve into more complex problems throughout the interview allows you to mostly solve for both.