LOST IN TRANSLATION? The Impact of the FNP Evaluation

On 10th December we hosted a panel debate to discuss the findings of the Family Nurse Partnership RCT Evaluation and their implications for evidence-based practice in early life.

The event was very well attended, and the discussion was wide-ranging, thought-provoking and very constructive. We are very grateful to our panel – Ailsa Swarbrick, Chris Cuthbert, Cheryll Adams and Leon Feinstein – for their brilliant contributions and to the Institute of Child Health for providing the venue.

In this brief note we have pulled out the key themes and ‘take home’ messages from the event, which we believe will be useful to anyone working in evidence-based preventative work.

Evaluation should be seen as part of a journey, not just a final judgement.

It is important to see evaluation as part of a learning process to inform and improve interventions, rather than a single assessment of their value.

Commissioners must use evaluation as part of their decision-making process, but recognise that no single evaluation will provide a concrete ‘yes or no’ answer about whether an intervention should be commissioned.

There are different types of evaluation, and RCTs are not always the best choice. It is important to choose an evaluation design that is most appropriate to the intervention, its timing and its context. It may be important, for example, to understand which parts of an intervention work best in order to inform programme design.

We must balance fidelity with adaptation to context.

When implementing evidence-based programmes – particularly those developed in different places – we must get the balance right between fidelity to the tested model and flexibility to the context. It is right to ‘stand on the shoulders of giants’ and use the best evidence from overseas. However, programmes will need to be shaped to the national and local system and context, and practitioners also need some flexibility to respond to families’ needs.

We must have clarity about what interventions are trying to achieve and for whom, and ensure evaluations reflect this.

It is important to have a theory of change, and clarity about what an intervention intends to achieve, and for which people.

Evaluations should be carefully designed to measure whether an intervention is achieving its intended outcomes, and be based clearly on an agreed theory of change. Identification and correct categorisation of primary and secondary outcomes are critically important. Programme providers, funders and commissioners should scrutinise evaluation design before evaluation begins.

Effective evaluation requires researchers to have a good understanding of the programme goals and mechanisms, as well as the ability to conduct rigorous research.

It is important to see interventions in the context of systems.

Whilst it is important to evaluate single interventions, we must also understand and evaluate the wider system. Solving complex problems is not just about picking the best interventions; it’s about getting the whole system right and delivering the right combination of services to the right people at the right time.

Commissioning and funding organisations have an important leadership role.

There are economic costs, and wider risks, to programme providers in conducting an evaluation. Government, together with other large commissioning and funding organisations, can encourage and support organisations to carry out robust evaluations. They also have a leadership role in ensuring that evaluation results are communicated responsibly and used in a positive, constructive and reflective way.

In the case of FNP, continued support and further development are warranted: the international evidence remains strong, the secondary outcomes look promising, and the National Unit has an exciting programme of adaptations to test.