My Facebook news feed today points to an article in HBR Magazine about hiring practices at Automattic. The key point is that they use “tryouts, in which final candidates are paid to spend several weeks working on a project.” The idea is an old one, namely that traditional interviewing is only really good at assessing how well someone interviews. If you want to see how well someone works, and especially how well someone works in a given work setting, then the best thing to do is observe them doing actual work in the actual work setting.
Automattic is not alone, of course. Heroku, a cloud platform company in San Francisco, now conducts three-day job interviews. Candidates are also expected to supply evidence of their contributions to open source software projects. This provides further evidence of their ability to get things done with others, and also provides a code base that can be discussed in depth without raising intellectual property problems. Square has done pair programming in interviews for some time. The first two Vancouver startups I asked confirm that they use a similar approach over shorter time periods. At least in the software development world, this is an idea whose time has come.
At longer time scales, and in many more fields, co-op programs, in which students spend three months or more at a prospective employer, have long been seen as extended interviews.
What these interview approaches have in common is that they focus less on what a candidate knows and more on what they can do. These companies want to know whether candidates will be able to be productive in their environment. Do they get things done? Do they share our values? Will it be fun to work with them? Whether someone might be missing some particular piece of knowledge is less important to them. They believe that someone with the right work practices can always pick up more knowledge later; but the person who doesn’t fit in, or doesn’t have the right work ethic, can’t learn that, even if they have top grades in important subject matter.
This warrants an extension to the maxim that it’s not what you know, it’s who you know that matters. For these companies, it’s what you can do that matters.
Now consider university degrees.
The university degree as certification is a long-standing social contract. Students attend four (or more) years of higher education, work hard, and then the school issues a degree certifying the results of their efforts. This social contract is supported by government, in that only certain institutions are permitted to award degrees; that is intended to ensure consistent quality of the certification. But employers are very much key to the arrangement: their agreement to recognize degrees as certification of what matters is essential to the stability of the contract.
Where does a university degree fit into a “what can you do” world? If employers continue to find their own ways to assess what a candidate can do, and if in the extreme they are able to assess what a candidate can do in their own workplace, then the degree may become less important. We know that in many cases university exams are not like professional work; they assess a student’s ability to take university exams better than anything else. The problem for universities, of course, is that if degrees aren’t taken as certification of job readiness, that removes one barrier to entry for new entrants into higher education. If candidates can get a job without a degree, how many of them can train for that job without a university?
Universities need to understand whether the shift to “what can you do” job assessment will remain localized in one or two fields or spread more broadly. If it does spread, then we need to look at how we might better prepare students for this new form of assessment. Purists may cry that it isn’t our job to prepare people for work, that this is too applied a focus for the university. Personally, I’d rather not let someone else take our work.
Gregor Kiczales is a Professor of Computer Science and Provost’s Fellow for Flexible Learning Strategy.