Use of evidence in public health, volume 2

Volume 1 was my semi-rant against RCT obsession in inappropriate contexts. I'll pick it up a little more below.

I saw an exceptionally interesting thread about evidence in public health, specifically on the WAVES study. It got me thinking.

The thread from Harry was amazing.

The general consensus was:

  • “spot on – stop asking ‘does it work?’ and instead ask ‘how does it contribute?’”
  • Complex systems adapt in response to interventions so we shouldn’t necessarily expect changes to distal outcomes.
  • If you haven't, read Harry and colleagues' amazing article – The need for a complex systems model of evidence for public health. This has immense implications for how we develop and use evidence.

A practical "call to action" framework for this sort of thinking was developed by Miranda Wolpert in the context of mental health: Rethinking public mental health: learning from obesity.

Here are a few thoughts.

1. Upstream almost universally matters far more than downstream. The evidence paradigm has to be right

Downstream evidence paradigms don’t work very well in an upstream context.

We know the evidence base in public health is hugely skewed towards the individual-level, biomedical paradigm, focused on downstream interventions.

This leads to evidence availability being weighted towards:

  • Pedometers v national cycling infrastructure
  • Methadone treatment v poverty

See here from Marmot

See also this excellent picture from Boyd Swinburn, setting out the drivers.

2. We (wilfully?) perpetuate the notion that we can solve complex problems with downstream models of evidence-based intervention implementation

We live in an exceptionally resource-constrained environment, so it's easy to see a scenario where, if there's RCT evidence saying "it doesn't work", it won't happen. That matters if the RCT is the wrong methodology for the evidential question.

We often (wrongly) assume that decision making and decision makers will be rational, or indeed evidence based / informed. Sometimes they are; often they aren't.

3. In this we systemically ignore important stuff

Harry also tells us about the dangerous olive of evidence: specifically, our focus on cost effectiveness / ROI (and the lack-of-evidence, wrong-model problem) may systemically set us up to do the wrong thing, or at least to ignore a large body of evidence where "the answer" may actually lie.

  • The wrong evidence paradigm might lead us to do the wrong thing
  • The available evidence consciously and unconsciously influences what we choose to implement, intervention-wise. This needs very careful thought.

4. Dose response curve – consider physical activity.

The shift from couch to "a bit active" is critical from a population-gain point of view; context matters.

We tend to forget to read evidence in the context of diminishing marginal returns.
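As a toy illustration of the dose-response point, here is a sketch with an invented concave curve (not a fitted model from any real study) of why the first 30 minutes of weekly activity buys far more risk reduction than the same 30 minutes added to an already-active week:

```python
import math

def risk_reduction(minutes_per_week, k=0.005):
    """Stylised concave dose-response curve: relative risk reduction
    from weekly physical activity. Illustrative shape only - the
    constant k and the functional form are invented for this sketch."""
    return 1 - math.exp(-k * minutes_per_week)

# Marginal gain from an extra 30 minutes/week at two different baselines
gain_from_couch = risk_reduction(30) - risk_reduction(0)
gain_when_active = risk_reduction(330) - risk_reduction(300)

print(f"couch -> a bit active: +{gain_from_couch:.3f} risk reduction")
print(f"already very active:   +{gain_when_active:.3f} risk reduction")
```

On this curve the couch-to-a-bit-active step is worth several times the same step taken at the active end, which is the population-gain argument in miniature.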

5. We have a maybe-incurable RCT obsession.

I like RCTs

The world is also full of observational studies reporting big effect sizes for interventions that shrink to very small effect sizes when tested against a comparator, especially in an RCT context.
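A quick simulation can show why this happens. In the sketch below (every number is invented), a confounder – health-consciousness – drives both uptake of an intervention and better outcomes, so the naive observational comparison inflates a modest true effect, while random assignment recovers it:

```python
import random

random.seed(1)

N = 50_000
TRUE_EFFECT = 0.5  # modest real benefit of the intervention (invented)

def outcome(health_conscious, treated):
    # Outcome improves a lot with health-consciousness (the confounder),
    # and only modestly with the intervention itself, plus noise.
    return 2.0 * health_conscious + TRUE_EFFECT * treated + random.gauss(0, 1)

def mean_diff(data):
    # Crude treated-vs-untreated comparison of mean outcomes
    t = [y for took, y in data if took]
    c = [y for took, y in data if not took]
    return sum(t) / len(t) - sum(c) / len(c)

# Observational world: health-conscious people self-select into the intervention
obs = []
for _ in range(N):
    hc = random.random() < 0.5
    treated = random.random() < (0.8 if hc else 0.2)
    obs.append((treated, outcome(hc, treated)))

# RCT world: treatment assigned by coin flip, independent of the confounder
rct = []
for _ in range(N):
    hc = random.random() < 0.5
    treated = random.random() < 0.5
    rct.append((treated, outcome(hc, treated)))

obs_effect = mean_diff(obs)
rct_effect = mean_diff(rct)
print(f"observational effect: {obs_effect:.2f}")  # inflated by confounding
print(f"randomised effect:    {rct_effect:.2f}")  # close to the true 0.5
```

The observational estimate here carries the full weight of the confounder on top of the true effect; randomisation breaks that link, which is exactly why effect sizes so often deflate on the way from cohort study to trial.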

The RCT method is good and still has its place, and is no doubt the best way of controlling for bias. But it is not appropriate for many contexts and circumstances.

Read Test, Learn, Adapt, and my thoughts on it – applying RCTs to neat and simple versus complex and messy questions. RCTs don't work for many complex scenarios; it's the wrong paradigm.

RCT evidence and complex systems

We tend to like measuring sandbags with RCT-type methods: take one sandbag out of the wall to see if that sandbag stops the flood. Maybe instead we should evaluate the wall and check the building blocks are the right ones. Of course there is a legitimate question of whether the sandbags are the right building blocks at all – and if they are, what the interactions between blocks are in an adaptive system.

Thus RCTs are good for identifiable individuals and simple contexts.

But consider the Rose hypothesis on population shift vs high risk – if you reduce blood pressure across a whole population, even by less than the standard error of measurement, this will have a massive effect, far greater than focusing on identifiable high-risk individuals.

But because the effect is so small at the individual level, it will never be detected by RCT methods.
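A small simulation (with invented numbers and a toy risk curve, not real epidemiology) illustrates the Rose point: a tiny shift of the whole distribution can prevent more events than a much larger reduction confined to the identifiable high-risk tail:

```python
import random

random.seed(42)

# Stylised population of systolic blood pressures (mmHg) - illustrative numbers
population = [random.gauss(130, 15) for _ in range(100_000)]

def event_risk(bp):
    """Toy risk model: risk doubles for every 20 mmHg above 120.
    The shape and constants are invented for this sketch."""
    return 0.001 * 2 ** ((bp - 120) / 20)

def expected_events(bps):
    return sum(event_risk(bp) for bp in bps)

baseline = expected_events(population)

# Strategy A (Rose): shift the whole distribution down by a tiny 2 mmHg
after_shift = expected_events(bp - 2 for bp in population)

# Strategy B: treat only the identifiable high-risk group (>=160), by 10 mmHg
after_high_risk = expected_events(bp - 10 if bp >= 160 else bp for bp in population)

print(f"events prevented, population shift: {baseline - after_shift:.1f}")
print(f"events prevented, high-risk only:   {baseline - after_high_risk:.1f}")
```

Because most events arise from the many people at moderate risk rather than the few at high risk, the 2 mmHg whole-population shift outperforms the 10 mmHg reduction in the tail – and yet a 2 mmHg individual effect is essentially invisible to a trial.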

There are also other important issues in the trade-off between internal validity (an accurate, bias-free measure of whatever is being measured) and generalisability (to the messy real world).

6. The logic model around which the evidential question is framed

Complex adaptive systems. The hint is in "adaptive".

Linear logic models are a bit old hat in complex systems – proximal outcomes don't lead to distal outcomes in the way we'd like. Things go wrong; other things affect the chain. Some of these are predictable, some are not.

Thus there's a danger (a big danger) that we focus on RCT-level evidence and do the wrong thing.

I once had a go at RCT imperialism.

I was once an RCT imperialist …..

Maybe I was an imperialist because I'd been swung too far towards this way of thinking.

Maybe I was using "highbrow" evidence to beat off low-value dross from Pharma.

Maybe I was thinking in a medical model way in a complex social model world

Anyway I’ve seen the light.

7. Developing the theme… lack of evidence is frequently cited as a reason for not doing something

That must be considered in terms of the counterfactual: what's the evidence for the counterfactual, or the status quo? Is doing nothing an option?

The burden of evidential proof in a complex system is different.

We need to move to a decision-logic framework, not hypothesis testing – i.e. not a criminal burden of proof, but the balance of probabilities, taking account of what is happening in the background.
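The contrast can be sketched as a trivial expected-value calculation. Every number below is invented purely for illustration:

```python
# A minimal expected-value sketch of "decision logic, not hypothesis testing".
# All quantities are invented for illustration.

def expected_net_benefit(p_works, benefit_if_works, cost):
    """Expected net benefit of acting, on the balance of probabilities."""
    return p_works * benefit_if_works - cost

# Hypothesis-testing mindset: no RCT, therefore "no evidence", therefore do nothing.
# Decision-logic mindset: weigh the plausible upside against the counterfactual.
p_works = 0.6            # balance of probabilities, from mixed evidence (invented)
benefit_if_works = 10.0  # e.g. health gain per 1,000 people, arbitrary units
cost = 3.0               # cost of acting, same units
status_quo = 0.0         # the counterfactual is rarely "free", but call it 0 here

act = expected_net_benefit(p_works, benefit_if_works, cost)
print(f"expected net benefit of acting: {act:.1f} vs status quo: {status_quo:.1f}")
```

On these (made-up) numbers, acting beats the status quo even though no single study "proves" the intervention works – which is the point: the decision question is about expected consequences, not about clearing an evidential threshold.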

8. There are oddities and interesting nuances in evidence

Here's an interesting one that I like:


  • Many strongly support it (it's good for teeth, etc.).
  • Many strongly oppose it (unnecessary mass medication; it can harm).
  • On the line of argument that there are adverse consequences (my view is that argument doesn't stack up evidentially), I'm often told: "you can't claim it has no effect, as there's no RCT data (there never will be – a bit like parachutes for falling out of planes), only observational studies", whilst at the same time a claim is made about safety based on (small, pretty poor quality) observational studies and no RCT.

…a logic error.

And that's before we get into a conversation about the weight, validity, strength and direction of the observational evidence.

Second oddity

  • Imagine developing and marketing a cancer drug of marginal value using the evidential standard of observational data rather than expecting an RCT.

Read this by the former director of the CDC: Evidence for Health Decision Making — Beyond Randomized, Controlled Trials.

9. So… evidence is important, but you've got to get it right

Here are a few concluding thoughts.

And all of this evidence stuff is before we get into the way real people view and weight evidence, with different world views, ideologies and so on.

Multiple forms of evidence are relevant for multiple jobs.

Beliefs, especially prior beliefs, strongly influence how we interpret and contextualise evidence. Ditto ideology.

So be sensible

Get over the "we can't do it if there's no RCT" mindset.

Use multiple sources of evidence

Be ethical

Try to do the right thing.
