There is a particular kind of frustration that accumulates in people who spend time at the intersection of research and practice. You have evidence. Good evidence. Evidence that has survived peer review, replication attempts, the skepticism of colleagues. And then you watch an institution — a school district, a government agency, a large nonprofit — continue doing the thing that the evidence says not to do. Sometimes for years.

The instinct is to attribute this to ignorance, or resistance, or politics. And sometimes those are real factors. But I've become convinced that the more foundational explanation is structural: institutions were designed to resist change, and the mechanisms that make them stable also make them slow. The problem isn't that they don't want to update. It's that the update mechanism was never the point.

The Stability Bargain

To understand why institutions lag, you have to understand what they were optimized for. Democratic institutions, bureaucratic agencies, large organizations of any kind — these were built during an era when the speed of information was slow, consensus was hard-won, and the main threat was not sluggishness but volatility. The rules that feel obstructive today were designed to prevent the alternative: organizations that change rapidly based on whoever currently has power, or whatever evidence happened to land in front of the decision-maker last.

Stability is not a failure mode. It's a design choice. The question is whether the assumptions baked into that design still apply to the environment you're operating in.

The challenge is that those assumptions have shifted. Evidence production has accelerated enormously — not just in volume, but in the speed with which findings become accessible. Institutions haven't updated their operating assumptions to match.

What the Lag Looks Like in Practice

I've documented this in several contexts, but let me describe the pattern abstractly. You have a finding — call it F — that meets reasonable evidentiary standards. It takes, conservatively, 18 to 36 months for F to traverse the pipeline from research publication to professional knowledge to organizational awareness to internal deliberation to policy change. During that window, practices that F argues against continue unchanged.

This lag is not evenly distributed. Larger institutions lag more. Institutions with more regulatory oversight lag more. Institutions where the practice in question is embedded in professional identity lag significantly more. The lag is, in other words, a function of the institution's existing commitments — which is exactly what you'd expect if the real constraint is not information but structure.

Three Partial Responses

People who have thought carefully about this tend to land in one of three places. The first is accelerationist: make the pipeline faster, invest in knowledge translation, build better infrastructure for evidence dissemination. This is useful and I don't want to dismiss it. But it addresses the supply side of the problem when the constraint is on the demand side.

The second response is structural: change the governance mechanisms themselves. Build in mandatory review cycles, sunset provisions, evidence requirements for continuing practices. This is harder and slower to implement, but more likely to produce durable change. The challenge is that you're asking institutions to use their change mechanisms to become more changeable — a recursive problem.

The third response is what I'd call parallel institution building: don't change the slow institution; build something faster alongside it, prove out a different operating model, and let the demonstration do the work. This has a long history in education reform in particular, and a mixed record. The fast thing often lacks the scale and durability of the slow thing.

Where I've Landed

My working hypothesis is that the lag problem doesn't have a single solution and probably shouldn't. Different institutions are appropriately slow at different things. The real work is not eliminating lag universally but making institutions more legible about their update mechanisms — so researchers, advocates, and practitioners can understand where the friction is and intervene with more precision.

What does that look like? At minimum: documenting what evidence would be required to change a given practice, making that standard explicit before the evidence exists, and tracking the gap between evidence availability and practice change over time. This won't eliminate the lag. But it will make it harder to pretend the lag isn't there.
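That minimum — an explicit change standard stated in advance, plus a measured gap between evidence availability and practice change — is concrete enough to sketch. The following is a minimal illustration, not a real tool; the record schema and every field name are my own invention for the sake of the example, not anything specified above.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PracticeRecord:
    """One institutional practice and the evidence standard attached to it.

    Hypothetical schema: the essay says what to track, not how to store it.
    """
    practice: str
    # The evidentiary bar, made explicit *before* the evidence exists.
    change_standard: str
    evidence_available: Optional[date] = None  # when the standard was met
    practice_changed: Optional[date] = None    # when the institution updated

    def lag_months(self, as_of: date) -> Optional[int]:
        """Months between evidence availability and practice change.

        If the practice has not yet changed, measures the still-open gap
        up to `as_of`. Returns None if the standard is not yet met.
        """
        if self.evidence_available is None:
            return None
        end = self.practice_changed or as_of
        return (end.year - self.evidence_available.year) * 12 + (
            end.month - self.evidence_available.month
        )

# Usage: a lag inside the essay's 18-to-36-month window, made concrete.
record = PracticeRecord(
    practice="continue practice X",
    change_standard="two independent replications of finding F",
    evidence_available=date(2021, 6, 1),
    practice_changed=date(2023, 9, 1),
)
print(record.lag_months(as_of=date(2024, 1, 1)))  # 27
```

The point of the sketch is the ordering, not the arithmetic: `change_standard` is filled in before `evidence_available`, so the gap can't be explained away after the fact.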

— LB