
UK Gov On Creating Randomised Controlled Trials

The UK Cabinet Office’s Behavioural Insights Team recently published a guidance document entitled ‘Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials’. The paper describes how the government should be more rigorous in assessing the impact of its policies to make sure they’re effective, deliver value for money and represent the best use of government resources.


The central thesis of the guidance is that ‘Randomised controlled trials (RCTs) are the best way of determining whether a policy is working’, and that they should be used more widely to test the effectiveness of public policy interventions. The report says randomised trials could be applied to “almost all aspects of public policy”. It recommends starting with trials in uncontroversial areas (see Reducing fraud, error and debt) – such as the wording on tax reminders (one trial suggested different wording could improve effectiveness by around 30%) – before working up to more contentious social issues.
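To make the arithmetic behind a result like the tax-reminder trial concrete, the sketch below (in Python) compares response rates between two letter wordings with a standard two-proportion z-test. All names and figures in it are illustrative assumptions, not data from the actual trial.

    # Illustrative sketch: comparing payment rates between two versions of a
    # tax-reminder letter with a two-proportion z-test. All figures are
    # made-up placeholders, NOT the trial's actual data.
    from math import sqrt

    def two_proportion_ztest(success_a, n_a, success_b, n_b):
        """Return the z statistic for H0: the two response rates are equal."""
        p_a, p_b = success_a / n_a, success_b / n_b
        pooled = (success_a + success_b) / (n_a + n_b)           # rate under H0
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error
        return (p_b - p_a) / se

    # Hypothetical outcome: 5,000 letters per arm; the reworded letter lifts
    # the payment rate from 20% to 26% (roughly a 30% relative improvement).
    z = two_proportion_ztest(success_a=1000, n_a=5000, success_b=1300, n_b=5000)
    print(f"z = {z:.2f}")   # |z| > 1.96 means significant at the 5% level

With numbers of this size the difference is unambiguous (z ≈ 7), which is exactly the kind of clear-cut evidence the paper argues well-designed trials can provide.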

9 Key Steps to Conducting a Randomised Controlled Trial

The paper then goes on to outline in detail the 9 separate steps required to set up any Randomised Controlled Trial. These steps represent the Behavioural Insights Team’s ‘test, learn, adapt’ methodology, and focus on better understanding what works and continually improving policy interventions to reflect learning from different trials (a minimal code sketch of these steps follows the list below).

Test – (ensuring you have put in place robust measures that enable you to evaluate the effectiveness or otherwise of the intervention)

1) Identify two or more policy interventions to compare (e.g. old vs new policy; different variations of a policy).
2) Determine the outcome that the policy is intended to influence and how it will be measured in the trial.
3) Decide on the randomisation unit: whether to randomise to intervention and control groups at the level of individuals, institutions (e.g. schools), or geographical areas (e.g. local authorities).
4) Determine how many units (people, institutions, or areas) are required for robust results.
5) Assign each unit to one of the policy interventions, using a robust randomisation method.
6) Introduce the policy interventions to the assigned groups.

Learn – (analysing the outcome of the intervention, so that you can identify ‘what works’ and whether or not the effect size is great enough to offer good value for money)

7) Measure the results and determine the impact of the policy interventions.

Adapt – (using what you have learned to modify the intervention, so that you are continually refining the way in which the policy is designed and implemented)

8) Adapt your policy intervention to reflect your findings.
9) Return to Step 1 to continually improve your understanding of what works.
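To make steps 4, 5 and 7 concrete, here is a minimal Python sketch of how such a trial might be sized, randomised and analysed. Everything in it – the function names, the 5% significance / 80% power conventions, the placeholder outcome data – is an assumption for illustration; the paper prescribes the methodology, not any particular code.

    # A minimal, illustrative sketch of steps 4, 5 and 7 above. All names
    # and numbers are assumptions for the sake of the example.
    import random
    from math import ceil
    from statistics import mean

    def sample_size_per_arm(effect, sd, z_alpha=1.96, z_beta=0.84):
        """Step 4 (rough rule of thumb): units needed per arm to detect a
        difference of `effect` in a continuous outcome with standard
        deviation `sd`, at the 5% significance level with 80% power."""
        return ceil(2 * ((z_alpha + z_beta) * sd / effect) ** 2)

    def randomise(units, arms=("control", "intervention"), seed=42):
        """Step 5: robust random assignment of units to policy arms.
        A fixed seed keeps the assignment reproducible and auditable."""
        rng = random.Random(seed)
        shuffled = list(units)
        rng.shuffle(shuffled)
        return {unit: arms[i % len(arms)] for i, unit in enumerate(shuffled)}

    def estimate_effect(outcomes, assignment):
        """Step 7: estimate impact as the difference in mean outcomes
        between the intervention arm and the control arm."""
        by_arm = {"control": [], "intervention": []}
        for unit, value in outcomes.items():
            by_arm[assignment[unit]].append(value)
        return mean(by_arm["intervention"]) - mean(by_arm["control"])

    # Hypothetical trial: individual-level randomisation (step 3), aiming
    # to detect a 0.2 standard-deviation effect.
    n = sample_size_per_arm(effect=0.2, sd=1.0)          # 392 per arm
    units = [f"person_{i}" for i in range(2 * n)]
    assignment = randomise(units)
    # ... deliver the interventions to the assigned groups (step 6), then
    # collect outcome data; random placeholder values stand in here ...
    rng = random.Random(0)
    outcomes = {u: rng.gauss(0.2 if assignment[u] == "intervention" else 0.0, 1.0)
                for u in units}
    print(f"estimated effect: {estimate_effect(outcomes, assignment):+.2f}")

In a real trial the analysis would also report a confidence interval and a significance test, and the randomisation would typically be stratified or clustered to match whichever randomisation unit was chosen in step 3.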

Examples

Along with providing detailed steps for how to create a Randomised Controlled Trial, the document outlines a series of examples to illustrate the benefits that can flow from such systematic analysis of public policy interventions.

One such example concerns a controlled experiment to study the impact of three schemes to support incapacity benefit claimants in England: support at work, support focused on their individual health needs, or both. The experiment found that while the extra support cost £1,400 on average, it delivered no benefit over the standard support that was already available. The RCT thus ultimately saved the taxpayer millions of pounds, as it provided unambiguous evidence that the additional support was not having the intended effect.

These kinds of findings provide important feedback for policy-makers, who can then search elsewhere for effective solutions to practical problems.

Debunking the myths

Randomised trials are common practice in many areas of development and science, and are used almost universally as a means of assessing which of two medical treatments works best. This was not always the case, however: when they were first introduced in medicine they were strongly resisted by some clinicians, who believed that personal expert judgement was sufficient to decide whether a particular treatment was effective.

The myths around RCTs primarily focus on four areas:

1) We don’t necessarily know ‘what works’ – countering the myth that expert judgement already tells us what works, the paper explains how RCTs can quantify the benefit of an intervention and identify which aspects of a programme have the greatest effect. We should be willing to recognise that confident predictions by policy experts often turn out to be incorrect, and RCTs demonstrate where interventions which were designed to be effective were in fact not.

2) RCTs don’t have to be expensive – the costs involved depend very much on how the RCT is designed, and with planning they can often be cheaper than other forms of evaluation. This is particularly true when a service is already being delivered and when outcome data is already being routinely collected. Rather than focusing on what an RCT costs to run, the paper suggests, it might be more appropriate to ask: what are the costs of not doing an RCT?

3) RCTs can be unethical – there are often objections to RCTs on the basis that it is unethical to withhold a new intervention from people who could benefit from it. This is particularly true of spending on programmes which might improve health, wealth or educational attainment for one group. The paper notes that we cannot be certain of the effectiveness of an intervention until it is tested robustly, and that interventions long believed to be effective have sometimes turned out to be ineffective or even harmful on further experimentation.

4) RCTs do not have to be complicated or difficult to run – the paper notes that ‘RCTs in their simplest form are very straightforward to run’, explains the common pitfalls in detail, and provides advice on the steps needed – as briefly outlined above – to create such a trial.

The gathering of evidence to support public policy is increasingly important when government resources are stretched. There is a huge responsibility on public policy leaders to show the effectiveness of their projects and ensure value for money. Only those services delivering proportionate value for money should be funded, while those programmes failing to deliver evidence-based results should face reform.

(more at Cabinet Office and guardian.co.uk)

 

