Walking home the other night, I came upon a man kneeling on the ground, searching for something under the streetlight. He told me he was looking for his wallet, and I, too, got on all fours to help.
After some time, I asked, “Are you sure you dropped it here?” The man laughed, “No, of course not! I lost it a few blocks over that way, but the lighting is much better here.”
As a child, I found this often-recited parable silly; what sort of nut does that? In these first months of medical school, I found my answer: we do.
Despite medicine’s obsession with data, evidence, and validity, when it comes to education and assessment, we search under the streetlights. Time and time again, we look not for the metrics that are important, or the outcomes that matter most, but those that are easiest to obtain.
Exhibit A: the way medical students are evaluated. Some time ago, Ashish Jha asked Twitter, “What makes a good doctor?” The results don’t have NEJM- or JAMA-caliber rigor, but they’re telling: ‘Competent/effective’ ranks fifth, after ‘empathetic,’ ‘good listener,’ ‘compassionate,’ and ‘humble’ … even ‘intelligence’ is eighth. And yet, I’d challenge any medical student to tell me, with confidence and candor, that their medical curriculum values those traits above clinical knowledge. I don’t blame my school; I blame the system. There’s a reason that, of the 759 pages in my First Aid for the USMLE Step 1 book, the social sciences occupy a succinct 13.
The conversation about post-Flexnerian medicine, competency-based assessment, and holistic evaluation is refreshing. But there are buzzwords thrown around at conferences, and then there are the day-to-day realities, where a clean divide exists between the things that really matter and the things that are easy to measure. In medical school, clinical knowledge comes before empathy, listening, or compassion, because clinical knowledge is a number: a discrete, objective data point that fits nicely on a bell curve.
Even as I complain about the system, I absolutely understand it. Last block, I scored a 91% in the Medical Knowledge competency. A good, clean, objectively quantifiable 91%. Meanwhile, my peer reviews ranged from ‘sub-optimal’ to ‘above average’ in Integration of Knowledge, and ‘entry-level’ to ‘aspirational’ in Professionalism. The result: I passed Microbes & Immunity, even though I might be terrible (or wonderful) at putting ideas together and working with others.
Perhaps the reality of medical education today is that we simply don’t yet have the tools and evidence to align what matters in learning with what matters in clinical practice. Maybe the informatics platforms aren’t refined enough to reliably identify the ‘well-rounded physician.’ But if that’s the case, then let’s take a moment to erase the buzzwords, look past the illusion, and admit to ourselves what’s really going on: we’re searching under the streetlights.