Oxford admissions interviews are notoriously challenging, particularly in mathematics. A 25-minute interview is designed to tease out the strongest problem-solvers (the main criterion for selection in mathematics). The competition is stiff. Yet a good chunk of candidates who make it to the final interview stage unravel within moments, ruling themselves out of contention.
These hopeful candidates can execute basic warm-up tasks at will — say, sketching a logarithm — but stumble the moment they meet a problem they have not seen before — say, sketching the function y = log (log x).
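For readers curious why this particular example trips candidates up: the sketch follows from a short chain of observations, none requiring anything beyond school calculus. A sketch of the reasoning (assuming log denotes the natural logarithm; any base gives the same shape):

```latex
% Key features of y = \log(\log x):
\begin{align*}
\text{Domain: }    & \log x > 0 \iff x > 1 \\
\text{Asymptote: } & y \to -\infty \text{ as } x \to 1^{+} \\
\text{Root: }      & y = 0 \iff \log x = 1 \iff x = e \\
\text{Slope: }     & y' = \frac{1}{x \log x} > 0 \quad (\text{increasing on } x > 1) \\
\text{Concavity: } & y'' = -\frac{\log x + 1}{(x \log x)^2} < 0 \quad (\text{concave on } x > 1)
\end{align*}
```

Each step is routine; the difficulty lies in assembling them unprompted, which is precisely what the interview is probing.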
Nothing says awkward like an interview candidate out of their depth. While it is reasonable for students to ask for prompts (a deliberate part of the interview dynamic, in fact), a great many are stumped to the point of silence, even when prompts are offered to them.
This is not down to nerves (though candidates can be incredibly nervous) or the guile of the Oxford interview (there’s nothing cunning about asking for a sketch of log (log x)).
It is a recurring theme that leaves the admissions team perplexed every year. After all, these candidates apparently represent the brightest young minds in society.
So let’s ask the question: how do these hapless students ever make it to interview in the first place?
It’s that ugly five-letter word again: exams.
Admissions officers do not have the time to trawl through thousands of personal statements or references, so they rely on two performance measures as a filter for talent: academic grades and scores in the entrance exam.
That’s all exam grades are to Oxford admissions officers: a crude filter for student potential.
Exam grades are not trusted enough to differentiate talent at the top end. Their only purpose is to weed out students who do not achieve the highest grades. Even here they are failing.
Exam scores have, at best, a weak correlation with problem-solving: students quite often excel in one but not the other. The entrance exam may be a better approximation of problem-solving but we are kidding ourselves if we think a single test of any kind can reliably capture students’ talents.
Problem-solving is a holistic blend of complex skills and finely tuned attitudes. It requires a balance of intuition and formal rigour, of resilience and creativity. It depends too on collaboration and the ability to seek out feedback and act on it. And so much more: certainly too much for a single test to do justice.
Relying on exams to judge students’ talents results in deadweight at the interview stage of admissions. Perhaps worst of all, many candidates who make it to the interview are fooled into thinking they have the talent to go all the way. At the same time, many would-be Oxford undergraduates are not even in contention because their academic grades do not stack up.
The weakest justification for persisting with this broken admissions system is that there is no viable alternative. That may be true for now, but admissions is ripe for disruption.
Emerging EdTech solutions hint at alternative models of assessment that may inspire a new paradigm for admissions (and indeed employment).
The rise of continuous assessment models and the ability to track students’ every interaction with digital learning content may allow for broader, more holistic evaluations of student potential.
Instead of singular exam scores, admissions officers may enjoy access to students’ historical learning profiles.
They may analyse students’ learning patterns over time and pinpoint a range of cognitive and non-cognitive traits. They can ask targeted questions: how do students react to failure? How consistent is their effort? How do they deal with novel situations? These questions elude today’s blunt measuring tools of exams, yet they are the main driver of Oxford admissions interviews. They need to be brought into the mainstream of assessment so that every student has the opportunity to give a complete account of their learning habits and dispositions.
It is not yet clear just how broad or deep these digital measuring tools can go; less clear still how such tools would scale.
Admissions officers will always be constrained by time and there is a risk that well-intentioned tools designed to capture holistic learning traits will be reduced to simple, quantifiable measures for filtering purposes. Worse still, they may be ignored altogether.
The first step is acknowledging the current system is broken. Oxford tutors have long known it and EdTech innovators have an opportunity to give admissions the shake-up it so desperately needs.
Note: this piece is based on my past experiences as an admissions tutor at the University of Oxford and all views expressed are my own.